GB2358752A - Surface or volumetric data processing method and apparatus - Google Patents

Surface or volumetric data processing method and apparatus

Info

Publication number
GB2358752A
GB2358752A (application GB0002181A)
Authority
GB
United Kingdom
Prior art keywords
subject
data
model
volumetric
volumetric data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0002181A
Other versions
GB0002181D0 (en)
Inventor
Ivan Daniel Meir
Norman Ronald Smith
Guy Richard John Fowler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tricorder Technology PLC
Original Assignee
Tricorder Technology PLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tricorder Technology PLC filed Critical Tricorder Technology PLC
Priority to GB0002181A priority Critical patent/GB2358752A/en
Publication of GB0002181D0 publication Critical patent/GB0002181D0/en
Priority to AU2001230372A priority patent/AU2001230372A1/en
Priority to PCT/GB2001/000389 priority patent/WO2001057805A2/en
Publication of GB2358752A publication Critical patent/GB2358752A/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4061Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution by injecting details from different spectral ranges
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/245Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A volumetric scanner (10) such as a PET or MRI scanner for example is provided with an array of cameras (C1 to C6) which are arranged to acquire a 3D surface representation (20) of a patient (P) which is referenced to the same reference frame as the internal volumetric image (30) of the patient. The 3D surface data is used to enhance the volumetric data eg by deblurring the image by distorting the volumetric reference frame with the aid of the surface representation, optionally with the aid of a statistical model of the possible configurations of the patient. Volumetric data sets of different modalities (eg MRI and PET respectively) can be combined by registering the 3D surfaces captured by camera arrangements associated with the respective scanners.

Description

Method and Apparatus for Processing Configuration-sensitive Data
The present invention relates to a method and apparatus for processing configuration-sensitive data (eg volumetric data) wherein the configuration-sensitive data is enhanced with configuration information. The invention relates particularly but not exclusively to processing sets of surface or volumetric data of a common subject acquired by different modalities (eg an MRI, CT or PET scanner or X-ray apparatus on the one hand and a 2D or 3D camera arrangement on the other) or complementary non-overlapping or weakly overlapping sets of data acquired by the same modality, in order to enhance one or both sets of data or their integration. The invention is applicable particularly but not exclusively to the pre-diagnostic and post-diagnostic processing of such data in medical applications where the subject is a patient.
It would be convenient for a clinician to combine surface or volumetric image data sets pertaining to the same patient but of two or more different modalities, and/or obtained on different occasions (and possibly of the same modality). However it is not normally possible to do this with conventional apparatus because each apparatus will have its own reference frame, and furthermore in many cases there will be little or no overlap between the data sets (eg X-ray data and surface or sliced MRI data), which will make accurate registration difficult or impossible. Furthermore in many cases the configuration or shape of the patient will be somewhat different when the respective data sets are acquired, again tending to prevent accurate registration.
Additionally the patient may move during the acquisition of data - for example it may take 40 minutes to acquire an MRI scan. Even the fastest CT scan can take several seconds, which will result in imaging errors unless the patient is kept absolutely rigid.
An object of the present invention is to overcome or alleviate at least some of the above problems.
In one aspect the invention provides a method of processing configuration-sensitive data comprising acquiring configuration information in association with said configuration-sensitive data and enhancing the configuration-sensitive data using the configuration information.
Preferably the configuration-sensitive data comprises a volumetric data set relating to the interior of a subject.
For example in one embodiment the volumetric data set might define the shape of internal organs of a patient.
Preferably surface data is acquired by optical means and utilised to derive the configuration information. For example in one embodiment the configuration information might be a digitised surface representation of the patient's body acquired by a 2D or preferably a 3D camera arrangement. In another embodiment the surface data is acquired by optically tracking markers located on the surface of the subject. Preferably the markers are detachable and are located in predetermined relationships to permanent or semi-permanent markings on the surface of the subject.
Since such surface data or such a digitised surface representation can be acquired extremely quickly and accurately, this offers great potential for improving the quality and integration of volumetric data, particularly in medical applications. The configuration information could also be information derived from the digitised surface representation - for example it could comprise a set of normals to the surface or could comprise model data (derived from a physical or statistical model of the relevant part of the patient's body for example) which could be used to enhance the configuration-sensitive data, eg by means of statistical correlation between the configuration information and the configuration-sensitive data.
In preferred embodiments the object (or the surface or volumetric data thereof) is modelled (eg as a rigid model, an affine model or a spline model such as a NURBS model) in such a manner as to constrain its allowable movement, and the images or volumetric data sets are registered under the constraint that movement is limited to that allowed by the model. Useful medical information can then be derived, eg by carrying over information from the surface data to the volumetric data. In certain embodiments such information can be derived with the aid of a statistical or physical model of the object.
In a related aspect the invention provides medical imaging apparatus arranged to acquire configuration information in association with configuration-sensitive medical data.
Preferably the configuration information comprises surface data representative of a subject. For example the configuration information could be a surface representation of a patient's body, and the apparatus could include one or more cameras for acquiring such a representation. Preferably the apparatus is calibrated to output the surface representation and an internal representation (eg an X-ray image or a volumetric data set) referenced to a common reference frame.
In a preferred embodiment the apparatus includes display means arranged to display both a present surface representation and a stored previous surface representation of the same patient, and means for adjusting the position or orientation of the apparatus (optionally under the control of a user) in relation to the patient such that the two surface representations are registered. This ensures that the previous and present internal representations are also registered and aids comparison of such representations (eg X-ray images), and also helps to prevent mistakes in alignment of the apparatus relative to the patient which might result in further visits from the patient and unnecessary radiation dosage, for example.
In another aspect the invention provides a method of associating two sets of volumetric data (V1 and V2) comprising the step of associating sets ({S1, V1} and {S2, V2}) of surface data (S1 and S2) registered with the respective sets of volumetric data. This aspect is related to the apparatus of the above-mentioned aspect in that each set of surface data associated with a set of volumetric data can be acquired with that apparatus.
Optionally the registration is performed on a model of the subject which is constrained to allow movement only in accordance with a predetermined model - for example a rigid model, an affine model or a spline model.
In another aspect the invention provides a frame to fit the surface of a scanned body part, the frame carrying guide means for locating the site of a medical procedure at a defined position or orientation at or beneath the surface of the body part.
In one embodiment the frame is in the form of a mask shaped to fit the surface of a patient's face or head.
The invention also provides a method of making such a frame.
Further preferred features are defined in the dependent claims.
Preferred embodiments of the invention are described below by way of example only with reference to Figures 1 to 10 of the accompanying drawings, wherein:

Figure 1 is a diagrammatic representation of scanning apparatus in accordance with one aspect of the invention;

Figure 2 is a flow diagram illustrating a method of processing images or volumetric data sets in accordance with another aspect of the invention;

Figures 3A and 3B are diagrammatic representations showing the distortion of a coordinate system to correspond to the movement of a patient as detected optically;

Figures 3C to 3E are diagrammatic representations showing the corresponding distortion of a coordinate system relative to which a volumetric data set is obtained;

Figure 4 is a diagrammatic representation illustrating a method of calibration of the apparatus of Figure 1;

Figure 5 is a diagrammatic representation of scanning apparatus in accordance with an aspect of the invention and the registration of surface and volumetric representations obtained with the apparatus;

Figure 6 is a diagrammatic representation of X-ray and video camera apparatus in accordance with one aspect of the invention;

Figure 7 is a diagrammatic profile of two sets of superimposed surface and volumetric data showing the registration in accordance with an aspect of the invention of the two volumetric data sets from the registration of each volumetric data set with its associated surface and the registration of the surfaces;

Figure 8A is a diagrammatic representation of the combination of two volumetric data sets in accordance with an aspect of the invention by the registration of their associated surfaces, and Figure 8B is a similar diagrammatic representation utilising sets of sliced MRI data;

Figure 9 is a diagrammatic transverse cross-section of a PET scanner in accordance with an aspect of the invention showing the enhancement of the volumetric data with correction data derived from surface information; and

Figure 10 is a diagrammatic elevation of a mask fitted to a patient's head and incorporating a biopsy needle for taking a biopsy at a defined 3D position in the patient's brain.
Referring to Figure 1, a volumetric scanner 10, which is suitably an MRI, CT or PET scanner for example, is shown and is arranged to generate a 3D internal volumetric image 30 of a patient P. Such scanners are well known per se and accordingly no further details are given of the conventional features of such scanners. In accordance with a preferred feature of one aspect of the invention, three transversely disposed stereoscopic camera arrangements are provided, comprising digital cameras C1 and C2 which acquire a 3D image of the head and shoulders of the patient P, digital cameras C3 and C4 which acquire a 3D image of the torso of the patient, and digital cameras C5 and C6 which acquire a 3D image of the legs of the patient. These images can be still or moving images, and the left and right images of each pair can be correlated with the aid of a projected pattern in order to facilitate the derivation of a 3D surface representation. The images acquired by the three sets of cameras are displayed to the user as indicated at 20A, 20B and 20C.
Multiple camera arrangements for acquiring 3D surfaces are commercially available - eg the S4M apparatus available from Tricorder Technology plc - and the camera arrangement of Figure 1 can be based on such known arrangements. Accordingly the processing electronics used to process the 2D images to a 3D image is not described or shown in Figure 1. However it should be noted that the fields of view of cameras C1 and C2 overlap with those of cameras C3 and C4, and that the fields of view of cameras C3 and C4 overlap with those of C5 and C6, to enable an overall 3D surface 20 of the patient P to be obtained.
Figure 2 shows how the data acquired eg by the scanner of Figure 1 can be processed. In step 100, images of a surface and/or volumetric data sets are obtained from the same subject (eg patient P) at different times, eg before and after treatment.
It is assumed in this embodiment that little or no movement of the patient P occurs during data acquisition, which is a valid assumption if the data acquisition takes less than say 25 milliseconds or so.
It should be noted that other embodiments of the invention specifically address blurring of surface images or, more particularly, volumetric data sets caused by patient movement.
In step 500, a model of the object is provided and used to constrain allowable distortion of the images or volumetric data sets of step 100 when subsequently registering them (step 600). The model of step 500 could be a rigid model 200 (in which case no distortion is allowed between the respective images or volumetric data sets acquired at different times), an affine model 300 (which allows shearing or stretching as shown, and which term includes a piecewise affine model comprising a patchwork of affine transforms, allowing different shearing and stretching distortions in the different parts of the surface or volumetric data set as shown) or a spline model 400 (eg a cubic spline or a NURBS representation, which could in either case also be a piecewise representation). In each case the models can take into account the characteristics of the scanner used to acquire the volumetric data sets.
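By way of illustration only (this sketch is not part of the disclosure), the constraint imposed by an affine model amounts to restricting the registration to transforms of the form x → Ax + t, whose parameters can be recovered from corresponding surface points by linear least squares. The function name and synthetic point sets below are this editor's assumptions:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform (A, t) mapping src points onto dst.

    src, dst: (n, 3) arrays of corresponding 3D points.
    Solves dst ~= src @ A.T + t - the only distortions the affine
    model allows are linear shears/stretches plus a translation.
    """
    n = src.shape[0]
    # Homogeneous design matrix: each row is [x, y, z, 1]
    X = np.hstack([src, np.ones((n, 1))])
    # Solve X @ P = dst for the 4x3 parameter matrix P
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A, t = P[:3].T, P[3]
    return A, t

# Hypothetical example: recover a known shear + translation exactly
rng = np.random.default_rng(0)
src = rng.standard_normal((20, 3))
A_true = np.array([[1.0, 0.2, 0.0],
                   [0.0, 1.1, 0.0],
                   [0.0, 0.0, 0.9]])
t_true = np.array([5.0, -2.0, 1.0])
dst = src @ A_true.T + t_true
A, t = fit_affine(src, dst)
assert np.allclose(A, A_true) and np.allclose(t, t_true)
```

A rigid model 200 would correspond to restricting A to a rotation, and a piecewise affine model to fitting a separate (A, t) per patch.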
In step 600 the selected model is used to constrain relative movement or distortion of the images or volumetric data sets acquired at different times while they are registered as completely as possible within this constraint.
In step 700, the volumetric data is enhanced and output. More detailed information on this step is given in the description of the subsequent embodiments. This step is optionally performed with the aid of a physical model 800 and/or a statistical model 900 of the subject.
A physical model can incorporate a knowledge of the object's physical properties and can employ eg a finite element analysis technique to model the subject's deformations between the different occasions of data acquisition.
Alternatively or additionally a statistical model is employed. Statistical models of the human face and other parts of the human body (eg internal organs) are known, eg from A Hill, A Thornham and C J Taylor, 'Model-Based Interpretation of 3D Medical Images', Proc BMVC 1993, pp 339-348, which is incorporated herein by reference. This reference describes Point Distribution Models.
Briefly, a Point Distribution Model (PDM) comprises an envelope in m-dimensional space defined by eigenvectors representative of at least the principal modes of variation of the envelope, each point within the envelope representing a potential instance of the model and defining the positions (in 2D or 3D physical space) of n landmark points which are located on characteristic features of the model. The envelope is generated from a training set of examples of the subject being modelled (eg images of human faces if the human face is being modelled). The main usefulness of such models lies in the possibility of applying Principal Component Analysis (PCA) to find the eigenvectors corresponding to the main modes of variation of the envelope derived from the training set, which enables the envelope to be approximated by an envelope in fewer dimensions.
Point Distribution Models are described in more detail by A Hill et al, 'Model-Based Interpretation of 3D Medical Images', Procs 4th British Machine Vision Conference, pp 339-348, Sept 1993, which is incorporated herein by reference.
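The PCA step at the heart of a PDM can be sketched as follows (illustration only, not part of the disclosure): each training example is a flattened vector of landmark coordinates, and the principal modes are the leading eigenvectors of the covariance of those vectors. The function name and toy shapes are this editor's assumptions:

```python
import numpy as np

def principal_modes(shapes, k):
    """Principal modes of variation of a set of landmark shapes.

    shapes: (m, 2n) array - m training examples, each a flattened vector
    of n 2D landmark coordinates. Returns the mean shape, the k leading
    eigenvectors of the covariance matrix, and all mode variances.
    """
    mean = shapes.mean(axis=0)
    centred = shapes - mean
    # PCA via SVD of the centred data matrix
    _, s, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:k], (s ** 2) / (len(shapes) - 1)

# Toy training set: square outlines that vary along a single mode
rng = np.random.default_rng(1)
base = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0])  # unit square
mode = np.array([0.0, 0.0, 0.5, 0.0, 0.5, 0.0, 0.0, 0.0])  # stretch one edge
shapes = np.array([base + b * mode for b in rng.standard_normal(50)])
mean, modes, variances = principal_modes(shapes, k=1)

# The single recovered mode should be (anti)parallel to the true one
cos = abs(modes[0] @ mode) / np.linalg.norm(mode)
assert cos > 0.99
# New instances (as in an Active Shape Model) are mean + b * mode
instance = mean + 2.0 * np.sqrt(variances[0]) * modes[0]
```

Restricting shape parameters b to lie within a few standard deviations of each mode is what keeps generated instances inside the "envelope" of allowable shapes.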
Active Shape Models are derived from Point Distribution Models and are used to generate new instances of the model, ie new shapes represented by points within the envelope in the m-dimensional space.
Given a new shape (eg the shape of a new human face) known to nearly conform to the envelope, its set of shape parameters can be found and the shape can then be manipulated eg by rotation and scaling to conform better to the envelope, preferably in an iterative process. In order to improve the matching, the PDM preferably incorporates grey level or other image information (eg colour information) besides shape, and eg a grey scale profile perpendicular to the boundary at a landmark point is compared in order to move the landmark point to make the new image conform more closely to the set of allowable shapes represented by the PDM. Thus the new example is deformed in ways to better fit the data represented by the training set. Thus an ASM consists of a shape model controlling a set of landmark points, together with a statistical model of image information (eg grey levels) around each landmark. Active Shape Models and the associated Grey-Level Models are described in more detail by T F Cootes et al in 'Active Shape Models: Evaluation of a Multi-Resolution Method for Improving Image Search', Proc British Machine Vision Conference 1994, pp 327-336, which is incorporated herein by reference.
In the present application, the statistical model 900 is preferably trained on a variety of images or volumetric data sets derived from a single organ, patient or other subject, in order to model the possible variation of a given subject as opposed to the variance in the general population of such subjects.
The resulting analysis enables data (eg growth of a tumour) which is not a function of a variation in the subject to be highlighted and identified.
One method of performing the registration step 600 is illustrated in Figures 3A to 3D. It is assumed that a patient P is scanned optically (eg by the cameras C1 to C6 of scanner 10 of Figure 1) on two occasions (eg before and after surgery) and that there is relative movement of the patient; the resulting surface representations are shown in Figures 3A and 3B respectively, with the difference in configuration of the patient shown greatly exaggerated for the sake of clarity. The corresponding volumetric representations (eg 3D MRI images) are shown in Figures 3D and 3C respectively. In order to determine what the volumetric image (data set) would have been on the second occasion if the patient had not moved, and thereby enable an accurate comparison of the volumetric images obtained on the two occasions, the model 200, 300 or 400 (Figure 2) is utilised to distort the coordinate frame of Figure 3A in such a manner that the modelled patient coordinates in Figure 3B are unchanged relative to the coordinate frame. Thus a given point on the modelled patient (eg the nose tip) has the same coordinates in the distorted coordinate frame of Figure 3B as in the undistorted coordinate frame of Figure 3A.
A complementary distortion is then applied to the volumetric image (data set) acquired on the second occasion (as shown in Figure 3D) to transform this data set to a data set (shown in Figure 3C) which would have been obtained on the second occasion if the patient had not moved. The volumetric image of Figure 3D can then be compared with the volumetric data set (not shown) obtained on the first occasion.
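The complementary distortion amounts to resampling the voxel data through the estimated patient movement. A minimal sketch (illustration only: nearest-neighbour resampling, with a pure translation standing in for the general model-constrained distortion; all names and the toy volume are this editor's assumptions):

```python
import numpy as np

def warp_volume(vol, offset):
    """Resample a voxel volume through a (here purely translational)
    frame distortion: output voxel x takes its value from input voxel
    x + offset, ie values are pulled back through the transform
    (nearest-neighbour, zero outside the volume).
    """
    out = np.zeros_like(vol)
    idx = np.indices(vol.shape)                       # (3, X, Y, Z) coords
    src = idx + np.asarray(offset).reshape(3, 1, 1, 1)
    # Keep only output voxels whose source lies inside the input volume
    bounds = np.array(vol.shape).reshape(3, 1, 1, 1)
    ok = np.all((src >= 0) & (src < bounds), axis=0)
    out[ok] = vol[src[0][ok], src[1][ok], src[2][ok]]
    return out

# Toy volume: one bright voxel that "moved" by (1, 0, 0) between scans
vol = np.zeros((4, 4, 4))
vol[2, 1, 1] = 1.0
corrected = warp_volume(vol, (1, 0, 0))               # undo the movement
assert corrected[1, 1, 1] == 1.0 and corrected[2, 1, 1] == 0.0
```

A full implementation would use the model-constrained displacement field and trilinear interpolation rather than an integer translation.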
In the embodiment of Figure 3E the surface of the patient P is permanently or semi-permanently marked by small locating tattoos (not shown) which are visible to the human eye on close examination. Temporary markers M are applied to these tattoos and are sufficiently large to be tracked easily in real time by a stereoscopic camera arrangement. The markers M can for example be in the form of detachable stickers or can be drawn over the tattoos with a marker pen. It is not essential for the markers M to be precisely located over the tattoos (although this will usually be the most practical option), but the markers should each have a location which is precisely defined by the tattoos - for example the markers could each be equidistant from two, three or more tattoos. On each occasion the patient P is scanned in a volumetric scanner (eg scanner 10 of Figure 1), the markers M are tracked optically and the scanner's volumetric coordinate frame is distorted to correspond with that distortion of the scanner's optical coordinate frame which would leave the 3D positions of the markers M unchanged, either during the scan or relative to a previous scan during which the markers were also used. As a result, either blurring of the volumetric image due to patient movement during the scan is avoided, or proper registration of the present volumetric scan with a volumetric scan acquired on a previous occasion is enabled.
The use of the markers M for tracking patient movement is believed to be novel and inventive in its own right, and the above technique can be used in all the embodiments of the invention.
The above description assumes that the surface data and volumetric data are initially referenced to a common reference frame. This can be achieved in a variety of ways.
Examples of registration techniques are illustrated in Figures 4, 5 and 6. Referring to Figure 4, a calibration target T which is visible both to the cameras C1 and C2 and to the scanner 10 is imaged by both the scanner and the cameras, resulting in images I1 in the camera reference frame F1 and I2 in the scanner reference frame F2. A geometrical transformation TR can be found in a known manner which will map I1 onto I2, and the same transformation can then be applied eg in software or firmware to move, scale and (if necessary) distort the visual image of a patient's body to registration with the scanner reference frame F2.
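Where the calibration target supplies corresponding 3D points in the two reference frames, the rigid part of such a transformation TR can be estimated by the well-known Kabsch/SVD method. A sketch for illustration only (the target points and function name are this editor's assumptions):

```python
import numpy as np

def rigid_transform(p, q):
    """Best rigid transform (R, t) mapping point set p onto q (Kabsch).

    p, q: (n, 3) corresponding points of a calibration target as seen in
    the camera frame F1 and the scanner frame F2. Returns a rotation R
    and translation t minimising ||p @ R.T + t - q||.
    """
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    H = (p - pc).T @ (q - qc)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = qc - R @ pc
    return R, t

# Hypothetical target points and a known 90-degree rotation about z
p = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
q = p @ Rz.T + np.array([10.0, 0.0, 0.0])
R, t = rigid_transform(p, q)
assert np.allclose(R, Rz) and np.allclose(t, [10.0, 0.0, 0.0])
```

Scaling and non-rigid distortion, where needed, would be fitted on top of this rigid estimate.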
In some cases (eg if the scanner is an MRI or a CT scanner) there will be data common to the visual data acquired by the cameras and the volumetric data - typically the patient's skin surface. In such cases a calibration procedure such as that shown in Figure 5 can be used. A patient P is imaged by both the cameras C1 and C2 and the scanner 10, and the resulting images p1 and p2 of the patient's surface in the respective reference frames F1 and F2 of the camera system and scanner can be registered by a transformation TR which can be found in known manner by analysis. This transformation TR can be stored and used subsequently to register the digitised surfaces acquired by the cameras to the volumetric data set acquired by the scanner 10.
Figure 6 shows a slightly different embodiment which consists essentially of an X-ray camera Cx rigidly mounted with a digital video camera Cv on a common supporting frame FR. The X-ray image and visual image I1 acquired by the cameras Cx and Cv are processed by a computer PC and displayed on a display D (only I1 is shown). The computer PC is provided with video memory arranged to store a visual image I2 of the same patient previously acquired by the video camera Cv when taking an X-ray image. The cameras Cv and Cx are moved (under the control of the operator or possibly under control of a suitable image registration program) on their common mounting frame FR to superimpose image I2 on image I1 as shown by arrow A1, and the X-ray image is then captured. Consequently this X-ray image is correctly aligned with the X-ray image captured on the previous occasion. In a variant of this embodiment the X-ray camera is movable with respect to the video camera, and the movement of the X-ray camera required to register the new X-ray image with the previous X-ray image is derived from the movement needed to register the surface images. Rather than utilising complete visual images I1 and I2, the arrangement can instead image an array of markers M located by tattoos on the patient in a manner similar to that of Figure 3E. In either case, this results in a number of advantages:

a) mistakes in positioning of the apparatus (which might result in a further visit for an X-ray and hence a greater X-ray dose than is necessary, as well as inconvenience for the patient and clinical staff) are avoided, and

b) X-ray images are standardised, which aids comparison and also facilitates further analysis such as that illustrated by blocks 700, 800 and 900 of Figure 2.
Other embodiments could utilise a standard type of volumetric scanner, eg a CT scanner or an MRI scanner, rather than an X-ray camera Cx, with similar advantages. In particular, MRI scanning can be enhanced by scanning only the relevant volume, known from previously acquired surface data to contain the region of interest.
Furthermore stereoscopic video camera arrangements rather than a single video camera Cv could be employed, and more sophisticated registration techniques analogous to those of blocks 200 to 600 of Figure 2 could be employed.
Additionally, stereoscopic viewing arrangements rather than a screen could be used for the registration.
Figure 7 illustrates a further aspect of the invention involving using eg an MRI scanner arrangement of Figure 1 to capture a surface representation S1 of the patient's body and a volumetric data set I1 (including the surface of the patient's body), and using a further scanner of different modality, eg a CT scanner, to capture a different volumetric data set I2 in association with a surface representation S2. The volumetric data sets I1 and I2 are registered with their respective associated surface representations S1 and S2 by appropriate transformations r1 and r2 as shown (preferably utilising the results of an earlier calibration procedure as described above), and the surface representations are then registered with each other by a transformation R1. Since the volumetric data sets I1 and I2 are each referenced to the resulting common surface, they can be registered with each other by a transformation R2 which can be simply derived from R1, r1 and r2.
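Expressed as homogeneous 4x4 matrices, deriving R2 from R1, r1 and r2 is a matrix composition. The direction conventions below (r1: I1 → S1, r2: I2 → S2, R1: S1 → S2) are an assumption for illustration; the patent does not fix them:

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous matrix from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical transforms between the frames of Figure 7
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
r1 = hom(np.eye(3), [1.0, 0.0, 0.0])   # I1 -> S1
r2 = hom(Rz, [0.0, 2.0, 0.0])          # I2 -> S2
R1 = hom(np.eye(3), [0.0, 0.0, 3.0])   # S1 -> S2

# I1 -> I2 by composition: into S1, across to S2, back out through r2
R2 = np.linalg.inv(r2) @ R1 @ r1

# A point carried through the long way matches the composed transform
x = np.array([0.5, -1.0, 2.0, 1.0])    # homogeneous point in frame I1
long_way = np.linalg.inv(r2) @ (R1 @ (r1 @ x))
assert np.allclose(R2 @ x, long_way)
```

With the opposite conventions the composition would simply use the corresponding inverses; the point is that R2 never needs to be estimated independently once R1, r1 and r2 are known.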
Assuming that the volumetric data sets I1 and I2 show different features of the patient as a result of their differing modalities, then the resulting combined volumetric data set conveys more information than either individually. Such a combination of different features is shown in Figure 8A, wherein surface representations S registered with respective volumetric data sets Va and Vb of different modality are combined to generate a new volumetric data set Vc registered with a surface representation S' which is a composite of the surface representations S. Such a technique can be used to combine not only volumetric data sets of different modality but also volumetric data sets of different resolution or accuracy, or different size or shape. In a variant of this embodiment, the volumetric data sets acquired by different modalities have no overlap, ie no information in common, but are incorporated into a common reference frame by mutually registering surface representations which are acquired optically (eg by a camera arrangement similar to that of Figure 1) simultaneously with the respective volumetric data sets.
Figure 8B shows two composite images of an organ having cross-sections X1, X3 and X5 and X2 and X4 respectively, wherein the cross-sections are acquired by MRI and the surfaces S are acquired optically. By registering surfaces S to a composite surface Y, the MRI cross-sections are mutually aligned, as shown.
In a variant of the embodiments of Figures 7, 8A and 8B, more sophisticated registration techniques analogous to those of blocks 200 to 600 of Figure 2 can be used. The resulting incompletely registered data sets can then be processed using the techniques of blocks 700 to 900 of Figure 2.
A particularly useful application of such a variant lies in the correction of (particularly) volumetric data sets which are acquired whilst the patient is moving.
For example an MRI scan can take 40 minutes and if the patient moves during this period the resulting scan is degraded by blurring. The blurring could be alleviated by utilising the optically acquired surface data of the moving patient to distort the reference frame of the volumetric data set and reconstructing the volumetric data set with reference to the distorted reference frame. This deblurring technique is somewhat similar to the registration technique described above with reference to Figures 3A to 3D but, unlike that technique, can involve a progressive distortion of the reference frame to follow movement of the patient.
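The progressive correction described above can be sketched, for integer-voxel motion in one dimension, as undoing the optically tracked shift of each acquisition slab before accumulation; the slab/shift representation is an assumption made for brevity:

```python
import numpy as np

def deblur_by_tracking(slabs, shifts):
    """Accumulate time-stamped acquisition slabs after undoing the
    patient shift observed optically at each time step (integer-voxel
    shifts only, for simplicity)."""
    out = np.zeros_like(slabs[0], dtype=float)
    for slab, s in zip(slabs, shifts):
        out += np.roll(slab, -s)   # undo the tracked motion
    return out / len(slabs)

# A toy 1D "patient": a single bright feature that drifts during the scan.
base = np.zeros(16)
base[5] = 1.0
shifts = [0, 1, 2, 3]
slabs = [np.roll(base, s) for s in shifts]

blurred = np.mean(slabs, axis=0)             # naive reconstruction
corrected = deblur_by_tracking(slabs, shifts)

assert blurred[5] == 0.25    # energy smeared over four voxels
assert corrected[5] == 1.0   # restored to a single voxel
```

A real implementation would apply a continuous, possibly non-rigid, warp per time step, but the bookkeeping is the same: each slab is mapped back to the reference configuration before it contributes to the reconstruction.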
In another embodiment the blurring could be alleviated by utilising a statistical or physical model of the patient to define a range of possible configurations of the patient in a mathematical space, generating volumetric data sets (in this case, 'artificial' MRI scans) corresponding to the respective configurations allowed by the model, finding the generated volumetric data sets, corresponding to a path in the above mathematical space, whose appropriately weighted mean best matches (registers with) the actual blurred volumetric data set acquired by the scanner, and then processing the model and its associated volumetric data sets to find the volumetric data set which would have been obtained by the scanner if the patient had maintained a given configuration.
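The search for the weighted mean of model-generated volumes that best matches the blurred acquisition can be sketched as a least-squares fit over the candidate volumes; the clip-and-normalise step standing in for a proper simplex projection is an assumption made for brevity:

```python
import numpy as np

def best_weights(candidates, blurred):
    """Find non-negative weights summing to 1 such that the weighted
    mean of the candidate volumes best matches the blurred acquisition
    (least squares, then a crude projection onto the simplex)."""
    A = np.stack([c.ravel() for c in candidates], axis=1)
    w, *_ = np.linalg.lstsq(A, blurred.ravel(), rcond=None)
    w = np.clip(w, 0, None)
    return w / w.sum()

# Toy "configurations": the same feature in three allowed positions.
c0, c1, c2 = np.eye(3)
blurred = 0.5 * c0 + 0.5 * c1   # the patient spent equal time in two poses

w = best_weights([c0, c1, c2], blurred)
assert np.allclose(w, [0.5, 0.5, 0.0])
```

The recovered weights describe how long the model says the patient dwelt in each configuration; picking out any single configuration's candidate volume then yields the scan that would have been obtained had the patient held that pose.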
More generally, surface data acquired by one or more cameras can be used to aid directly the processing of volumetric scanning data to generate an accurate volumetric image of the interior of the patient. One example is shown in Figure 9, wherein a PET scanner 10' having an array of photon detectors D around its periphery and having a centrally located positron source is provided with at least one stereoscopic arrangement of digital cameras C which capture the surface S of the subject. Although only two cameras are shown for the sake of simplicity, in practice the camera array would be arranged to capture the entire 360 degree periphery of the subject and to this end could be arranged to rotate around the longitudinal axis of the scanner, for example.
True photon paths are shown by the full arrowed lines and include a path PT resulting from scattering at PS on the object surface. In the absence of any information about the surface S, the outputs of the relevant photon detectors would be interpreted to infer a photon path M, which is physically impossible because it does not pass through surface S. Accordingly such an erroneous interpretation can be avoided with the aid of the surface data acquired by the cameras C.
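The consistency test implied above - rejecting any inferred photon path that fails to pass through the surface S - can be sketched with a sphere standing in for the optically acquired surface (an assumption for illustration) as a segment-to-centre distance test:

```python
import numpy as np

def chord_intersects_sphere(d1, d2, centre, radius):
    """Does the line of response between detectors d1 and d2 pass
    through a spherical stand-in for the acquired surface S?
    Compares the minimum distance from the segment to the sphere
    centre against the radius."""
    d1, d2, centre = (np.asarray(x, float) for x in (d1, d2, centre))
    v = d2 - d1
    t = np.clip(np.dot(centre - d1, v) / np.dot(v, v), 0.0, 1.0)
    closest = d1 + t * v            # nearest point on the segment
    return np.linalg.norm(closest - centre) <= radius

# Detector ring of radius 10 around a subject of radius 2 at the origin.
assert chord_intersects_sphere([-10, 0, 0], [10, 0, 0], [0, 0, 0], 2.0)
# A chord passing 5 units from the centre never enters the subject:
# the corresponding detector coincidence would be rejected.
assert not chord_intersects_sphere([-10, 5, 0], [10, 5, 0], [0, 0, 0], 2.0)
```

In practice the surface would be the mesh captured by the cameras C rather than a sphere, but the test is the same: a candidate path whose chord never enters the surface is physically impossible and is discarded.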
Additionally, the surface data acquired by the cameras C can be used to derive volumetric information which can facilitate the processing of the output signals of the detectors D - in particular the absorption and scattering can be estimated from the amount of tissue as determined from the surface acquired by the cameras C.
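For instance, once the tissue path length is known from the camera-derived surface, a first-order absorption estimate follows from the exponential attenuation law; the soft-tissue coefficient used here is a nominal assumed value for 511 keV photons:

```python
import numpy as np

def attenuation_factor(path_length_cm, mu_per_cm=0.096):
    """Fraction of photons surviving a tissue path of the given length.
    mu_per_cm is a nominal soft-tissue linear attenuation coefficient
    at 511 keV, assumed here for illustration."""
    return np.exp(-mu_per_cm * path_length_cm)

# Doubling the tissue path squares the survival fraction.
f10 = attenuation_factor(10.0)
f20 = attenuation_factor(20.0)
assert np.isclose(f20, f10 ** 2)
```

Dividing each detector pair's count rate by the corresponding survival fraction is the standard way such an estimate would feed back into the reconstruction.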
Other applications utilising the surface information include X-ray imaging, which can be enhanced with a knowledge of the patient's surface - eg to locate soft tissue. Furthermore a statistical model of the X-ray image or volumetric data set, registered with the aid of the surface representation acquired by a stereoscopic camera arrangement, could be used to aid the derivation of an actual X-ray image or volumetric data set from a patient. This would result in higher accuracy and/or a lower required dosage. One reconstruction method which would be applicable is based on level sets as described by J A Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press 1999, which is hereby incorporated by reference. This method would involve computation of successive layers from the surface (acquired by the cameras) toward the centre of the subject.
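The layered computation from the surface inward can be sketched with a coarse integer "onion peeling" standing in for a true fast-marching solver (a simplification assumed here): each pass strips the current outermost shell of the subject mask and labels it with its layer number.

```python
import numpy as np

def layers_from_surface(mask):
    """Label each voxel of a binary subject mask with its layer number,
    peeling inward from the surface: layer 1 is the outermost shell."""
    layer = np.zeros(mask.shape, dtype=int)
    current = mask.astype(bool)
    n = 0
    while current.any():
        n += 1
        p = np.pad(current, 1)  # outside the grid counts as outside the subject
        # interior voxels are those whose full 4-neighbourhood stays inside
        interior = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
                    & p[1:-1, :-2] & p[1:-1, 2:])
        layer[current & ~interior] = n
        current = interior
    return layer

m = np.ones((5, 5), dtype=int)
L = layers_from_surface(m)
assert L[0, 0] == 1 and L[2, 2] == 3   # centre of a 5x5 block is layer 3
```

A fast-marching implementation would produce sub-voxel arrival times rather than integer shells, but the reconstruction order - surface first, centre last - is the same.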
A further application of the registration of surface and volumetric data in accordance with the present invention lies in the construction of a frame to fit the surface of a scanned body part, the frame carrying guide means for locating the site of a medical procedure at a defined position or orientation at or beneath the surface of the body part. The position or orientation can be set by utilising the volumetric data which has been registered with the surface data and by implication with the frame and its guide means.
For example the frame could be a mask that fits over the patient's face or head or it could be arranged to be fitted to a rigid part of the leg or abdomen. Such a mask is shown in Figure 10: mask M has an interior surface IS which matches a surface of the patient's head previously acquired by a stereoscopic camera arrangement and is provided with a guide canal G for guiding a biopsy needle N to a defined position P in the patient's brain. To this end, the orientation of the guide canal is predetermined with the aid of volumetric data registered with the surface data and acquired by a scanner in accordance with the invention (eg the scanner of Figure 1), and the guide canal is provided with a scale SC to enable the needle N to be advanced to a predetermined extent until a reference mark on the needle is aligned with a predetermined graduation of the scale (also chosen on the basis of the volumetric data). In a variant of this embodiment, a stop could be used instead of scale SC.

Although the described embodiments relate to the enhancement of volumetric data with the aid of surface data, in principle other medical data could be enhanced with the aid of such surface data. For example measurements of breathing could be combined with a moving 3D surface representation of the patient while such measurements are being made, and the resulting measurements could be registered with previous measurements by a method of the type illustrated in Figure 2.
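Predetermining the guide canal's orientation and insertion depth in an arrangement such as that of Figure 10 reduces, once the entry point on the registered surface and the target position P are known in a common frame, to simple vector geometry; the coordinates below are illustrative:

```python
import numpy as np

def guide_canal(entry, target):
    """Orientation (unit vector) and insertion depth for a guide canal
    from an entry point on the registered surface to the target, the
    depth being what would be read off against a scale such as SC."""
    entry = np.asarray(entry, float)
    target = np.asarray(target, float)
    v = target - entry
    depth = np.linalg.norm(v)
    return v / depth, depth

# Illustrative entry point at the surface and target 5 units away.
direction, depth = guide_canal([0, 0, 0], [3, 0, 4])
assert depth == 5.0
assert np.allclose(direction, [0.6, 0.0, 0.8])
```

Both quantities come straight from the volumetric data once it has been registered with the surface data and, by implication, with the frame.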
Furthermore in other embodiments volumetric data capture by a scanner can be gated on the basis of surface data acquired by a camera arrangement - eg to ensure that volumetric data is captured only when the patient is in a defined position or configuration.
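Such gating can be sketched as keeping only those volumetric samples acquired while the tracked surface lies within a tolerance of a reference configuration; the scalar "position" per time step and the names used are illustrative assumptions:

```python
import numpy as np

def gate(samples, surface_positions, reference, tol):
    """Keep only the volumetric samples acquired while the optically
    tracked surface position lies within `tol` of the reference
    configuration."""
    reference = np.asarray(reference, float)
    return [s for s, p in zip(samples, surface_positions)
            if np.linalg.norm(np.asarray(p, float) - reference) <= tol]

# Four acquisition frames; the patient drifted out of position for one.
positions = [[0.0], [0.1], [2.0], [0.05]]
samples = ["a", "b", "c", "d"]
assert gate(samples, positions, [0.0], 0.5) == ["a", "b", "d"]
```

In a live system the same predicate would instead trigger or inhibit the scanner's acquisition in real time rather than filter samples after the fact.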
In the present specification the term 'optical' is to be construed to cover infra-red as well as visible wavelengths.

Claims (1)

Claims
1. A method of processing configuration-sensitive data comprising acquiring configuration information in association with said configuration-sensitive data and enhancing the configuration-sensitive data using the configuration information.
2. A method according to claim 1 wherein the configuration-sensitive data comprises a volumetric data set relating to the interior of a subject.
3. A method according to claim 2 wherein surface data is acquired by optical means and utilised to derive the configuration information.
4. A method according to claim 3 wherein the surface data is acquired by optically tracking markers located on the surface of the subject.

5. A method according to claim 4 wherein the markers are detachable and are located in predetermined relationships to permanent or semi-permanent markings on the surface of the subject.
6. A method according to claim 3 wherein the surface data comprises a three-dimensional surface representation of at least part of the surface of the subject.

7. A method according to any of claims 4 to 6 wherein the surface data is referenced to the same reference frame as the volumetric data set.
8. A method according to any of claims 4 to 7 wherein the construction of the volumetric data set is constrained by the surface data to ensure consistency between the surface data and the volumetric data set.
9. A method according to any of claims 4 to 8 wherein the volumetric data set is acquired by a PET scanner and the surface data is utilised to constrain the reconstruction of photon paths in the acquisition of said volumetric data set.
10. A method according to any of claims 4 to 8 wherein the volumetric data set is acquired by an X-ray scanner and the surface data is processed to derive volumetric information which is utilised to constrain an X-ray derived volumetric data set.
11. A method according to any preceding claim wherein a first volumetric data set acquired from a subject is registered with a surface representation of the subject, a second volumetric data set acquired from the subject is registered with a surface representation of the subject, and these registrations are utilised to register the first and second volumetric data sets with each other.
12. A method according to claim 11 wherein the first and second volumetric data sets are of different modality.
13. A method according to any preceding claim wherein a set of configuration-sensitive data is enhanced by registration with a further set of such configuration-sensitive data associated with the same subject.

14. A method according to claim 13 wherein said sets of configuration-sensitive data are acquired by scanning the subject on different occasions.

15. A method according to claim 13 or claim 14 wherein the registration is performed under the constraint of a model defining allowable deformations of the subject.

16. A method according to claim 15 wherein the model is a rigid model, an affine model or a spline model.

17. A method according to claim 16 wherein the model is a piecewise affine model or a piecewise NURBS model.

18. A method according to any preceding claim wherein a physical or statistical model of the subject is utilised to enhance the configuration-sensitive data.

19. A method according to any of claims 15 to 18 wherein a coordinate frame is distorted in accordance with said model defining allowable deformations of the subject and the distortion of the coordinate frame is utilised to register two or more sets of configuration-sensitive data.

20. A method according to claim 19 comprising utilising a statistical or physical model of the subject to define a range of possible configurations of the subject in a mathematical space, generating volumetric data sets corresponding to the respective configurations allowed by the model, finding the generated volumetric data sets, corresponding to a path in the above mathematical space, whose appropriately weighted mean best matches the actual blurred volumetric data set acquired from a subject, and then processing the model and its associated volumetric data sets to find the volumetric data set which would have been obtained if the subject had maintained a given configuration.

21. A method according to any preceding claim wherein optically acquired surface data of a moving subject is utilised to distort the reference frame of a volumetric data set acquired from the subject and the volumetric data set is reconstructed with reference to the distorted reference frame.
22. A method according to claim 21 wherein the reference frame of the volumetric data set is progressively distorted to follow movement of the subject while the volumetric data set is acquired.

23. A method according to any preceding claim wherein optically acquired surface data of a moving subject is utilised as an input to a de-blurring algorithm which is used to de-blur a volumetric data set acquired from the subject.

24. A method according to any preceding claim wherein the timing of acquisition of a volumetric data set from a subject is gated in dependence upon said configuration information.

25. A method according to any preceding claim wherein the subject is a human being or an animal.

26. Medical imaging apparatus arranged to acquire configuration information in association with configuration-sensitive medical data.

27. Medical imaging apparatus as claimed in claim 26 wherein the configuration information comprises surface data representative of a subject.

28. Medical imaging apparatus as claimed in claim 27 which is calibrated to output a surface representation and an internal representation referenced to a common reference frame.

29. Medical imaging apparatus as claimed in claim 28 which includes display means arranged to display both a present surface representation and a stored previous surface representation of the same subject and includes means for adjusting the position or orientation of the apparatus in relation to the subject such that the two surface representations are registered.

30. Medical imaging apparatus as claimed in any of claims 26 to 29 which comprises one or more cameras for acquiring said surface data.
31. A method of associating two sets of volumetric data comprising the step of associating sets of surface data registered with the respective sets of volumetric data.
32. A method as claimed in claim 31 wherein the registration of the surface and volumetric data sets is performed on a model of the subject which is constrained to allow movement only in accordance with a predetermined model.
33. A method according to claim 32 wherein the model is a rigid model, an affine model or a spline model.
34. A method according to claim 33 wherein the model is a piecewise affine model or a piecewise NURBS model.
35. A frame to fit the surface of a scanned body part, the frame carrying guide means for locating the site of a medical procedure at a defined position or orientation at or beneath the surface of the body part.
36. A frame according to claim 35 which is in the form of a mask shaped to fit the surface of a patient's face or head.
37. A method of making a frame as claimed in claim 35 or claim 36 wherein surface data representative of the body part and a volumetric data set acquired from the body part are registered and the guide means is located relative to the frame by utilising the spatial relationships between the registered surface data and volumetric data set.
38. Medical apparatus substantially as described hereinabove with reference to any of Figures 1, 6, 9 and 10 of the accompanying drawings.
39. A method of processing medical data substantially as described hereinabove with reference to any of Figures 2 to 8B of the accompanying drawings.
GB0002181A 2000-01-31 2000-01-31 Surface or volumetric data processing method and apparatus Withdrawn GB2358752A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB0002181A GB2358752A (en) 2000-01-31 2000-01-31 Surface or volumetric data processing method and apparatus
AU2001230372A AU2001230372A1 (en) 2000-01-31 2001-01-31 Image data processing method and apparatus
PCT/GB2001/000389 WO2001057805A2 (en) 2000-01-31 2001-01-31 Image data processing method and apparatus


Publications (2)

Publication Number Publication Date
GB0002181D0 GB0002181D0 (en) 2000-03-22
GB2358752A true GB2358752A (en) 2001-08-01

Family

ID=9884669


Country Status (3)

Country Link
AU (1) AU2001230372A1 (en)
GB (1) GB2358752A (en)
WO (1) WO2001057805A2 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7912262B2 (en) 2005-03-10 2011-03-22 Koninklijke Philips Electronics N.V. Image processing system and method for registration of two-dimensional with three-dimensional volume data during interventional procedures
EP3712900A1 (en) * 2019-03-20 2020-09-23 Stryker European Holdings I, LLC Technique for processing patient-specific image data for computer-assisted surgical navigation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1996033472A1 (en) * 1995-04-18 1996-10-24 Fouilloux Jean Pierre Process for determining a dimensionally-stable solid-type movement from an assembly of markers disposed on parts of the anatomy, in particular of the human body
JPH1119080A (en) * 1997-07-08 1999-01-26 Shimadzu Corp X-ray ct device
GB2330913A (en) * 1996-07-09 1999-05-05 Secr Defence Method and apparatus for imaging artefact reduction

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5999840A (en) * 1994-09-01 1999-12-07 Massachusetts Institute Of Technology System and method of registration of three-dimensional data sets
US5902239A (en) * 1996-10-30 1999-05-11 U.S. Philips Corporation Image guided surgery system including a unit for transforming patient positions to image positions


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
IEEE Engineering in Medicine, v.2, 1997, Westermann B et al,"On-line patient tracking...",pp.495-498 *
Mag. Resonance in Medicine, V.42, no.1, 1999, Ernst T et al,"Simultaneous Correction...", pp 201-205 *
Proc 5th Intl Conf. Computer Vision, 20-23/6/95, Park J et al, "Volumetric deformable..." pp700-705 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10342431B2 (en) 2000-07-26 2019-07-09 Melanoscan Llc Method for total immersion photography
US8481812B2 (en) 2003-05-22 2013-07-09 Evogene Ltd. Methods of increasing abiotic stress tolerance and/or biomass in plants generated thereby
US8962915B2 (en) 2004-06-14 2015-02-24 Evogene Ltd. Isolated polypeptides, polynucleotides encoding same, transgenic plants expressing same and methods of using same
US9012728B2 (en) 2004-06-14 2015-04-21 Evogene Ltd. Polynucleotides and polypeptides involved in plant fiber development and methods of using same
US9487796B2 (en) 2005-08-15 2016-11-08 Evogene Ltd. Methods of increasing abiotic stress tolerance and/or biomass in plants and plants generated thereby
GB2455926B (en) * 2006-01-30 2010-09-01 Axellis Ltd Method of preparing a medical restraint
US8513488B2 (en) 2007-04-09 2013-08-20 Evogene Ltd. Polynucleotides, polypeptides and methods for increasing oil content, growth rate and biomass of plants
US8686227B2 (en) 2007-07-24 2014-04-01 Evogene Ltd. Polynucleotides, polypeptides encoded thereby, and methods of using same for increasing abiotic stress tolerance and/or biomass and/or yield in plants expressing same
US8620041B2 (en) 2007-12-31 2013-12-31 Real Imaging Ltd. Method apparatus and system for analyzing thermal images
US8670037B2 (en) 2007-12-31 2014-03-11 Real Imaging Ltd. System and method for registration of imaging data
US9710900B2 (en) 2007-12-31 2017-07-18 Real Imaging Ltd. Method apparatus and system for analyzing images
WO2009083973A1 (en) * 2007-12-31 2009-07-09 Real Imaging Ltd. System and method for registration of imaging data
US10299686B2 (en) 2008-03-28 2019-05-28 Real Imaging Ltd. Method apparatus and system for analyzing images
US8847008B2 (en) 2008-05-22 2014-09-30 Evogene Ltd. Isolated polynucleotides and polypeptides and methods of using same for increasing plant utility
US9018445B2 (en) 2008-08-18 2015-04-28 Evogene Ltd. Use of CAD genes to increase nitrogen use efficiency and low nitrogen tolerance to a plant
US8937220B2 (en) 2009-03-02 2015-01-20 Evogene Ltd. Isolated polynucleotides and polypeptides, and methods of using same for increasing plant yield, biomass, vigor and/or growth rate of a plant

Also Published As

Publication number Publication date
GB0002181D0 (en) 2000-03-22
WO2001057805A2 (en) 2001-08-09
AU2001230372A1 (en) 2001-08-14
WO2001057805A3 (en) 2002-03-21


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)