WO2014063840A1 - Matching patient images of different imaging modality using atlas information - Google Patents
Matching patient images of different imaging modality using atlas information
- Publication number
- WO2014063840A1 (PCT/EP2013/063640)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- modality
- image
- representation
- data
- atlas
- Prior art date
Classifications
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
- G06T11/008—Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
- G06T3/10—Selection of transformation methods according to the characteristics of the input images
- G06T7/0012—Biomedical image inspection
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/32—Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
- G06T7/38—Registration of image sequences
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
- G06T2207/10004—Still image; Photographic image
- G06T2207/10076—4D tomography; Time-sequential 3D tomography
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/10084—Hybrid tomography; Concurrent acquisition with multiple different tomographic modalities
- G06T2207/10088—Magnetic resonance imaging [MRI]
- G06T2207/10116—X-ray image
- G06T2207/20081—Training; Learning
- G06T2207/20104—Interactive definition of region of interest [ROI]
- G06T2207/20128—Atlas-based segmentation
- G06T2207/20221—Image fusion; Image merging
- G06T2207/30004—Biomedical image processing
- G06T2207/30196—Human being; Person
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
Definitions
- the present invention is directed to a method, in particular data processing method, of determining a matching transformation for matching medical images describing an anatomical structure of a patient, wherein images of different imaging modality are matched.
- the invention is further directed to a corresponding program, in particular computer program, a computer running a program and a signal wave carrying information which represents the program.
- some medical procedures carried out on the brain require determining the difference between the position which an anatomical structure in the brain has before and after performing a craniotomy.
- a magnetic resonance image is taken before the craniotomy, which allows the first position of the brain in the skull to be determined.
- a computer tomography image of the brain is acquired, which allows the second position of the brain in the skull to be determined. It would now be desirable to allow for a comparison between the two images.
- imaging modalities, in the above case magnetic resonance tomography and computer tomography (also called computed tomography), in general lead to different image colour contrast scales which are associated with the respective imaging modality.
- grey values used to describe the anatomical structure in the magnetic resonance image differ from the grey values used to describe the anatomical structure in the computer tomography image.
- comparison of the images is achieved by fusing the two images, which may however be hampered, for example by data processing instabilities due to the difference in the grey values used by the respective modalities.
- a problem to be solved by the invention therefore is to provide a stable and reliable way of comparing medical images which were generated based on applying different imaging modalities.
- the invention is directed to a medical data processing method of transforming a representation of an anatomical structure of a patient in a medical image of a first imaging modality (first modality medical image) into a representation of the anatomical structure in a second, other imaging modality (second modality image representation).
- the anatomical structure can be any anatomical structure which is known to be part of a patient's body, preferably it is the brain.
- a first matching transformation between the first modality medical image and the first modality atlas image is determined.
- the first modality atlas image is described by atlas data which also contains information about the representation of the general structure in the second imaging modality. This information may be embodied by or determined from a second modality atlas image contained in the atlas data.
- the first imaging modality preferably is MR
- the second imaging modality preferably is CT.
- a second modality image representation of the first modality medical image is determined.
- a second modality medical image is acquired. Then, a second matching transformation between the second modality image representation and the second modality medical image can be determined.
- the transformations which are determined or applied, respectively, preferably are elastic fusion transformations.
- the transformations are determined taking into account representation properties such as element representation information and representation classes of individual image elements in the (image) representations between which the transformations are determined.
- the representation classes can describe tissue classes.
- the invention preferably also comprises features which are directed to avoiding computational faults if there are major differences between the representation of the anatomical structure in the second modality image representation and the second modality medical image.
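- as an illustration only, the following sketch shows how the sequence of steps summarised above (first matching transformation, second modality image representation, second matching transformation) could be prototyped with the open-source SimpleITK library; the file names are hypothetical, the atlas MR and CT images are assumed to be co-registered, and a simple rigid initialisation is used for brevity, whereas the invention prefers elastic fusion transformations.

```python
import SimpleITK as sitk

def register(fixed, moving):
    # Generic intensity-based registration used as a stand-in for the
    # "matching transformations" described above (3D images assumed).
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetInitialTransform(
        sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY),
        inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)
    return reg.Execute(fixed, moving)

# 1) first matching transformation: patient MR image <-> first modality (MR) atlas image
mr_patient = sitk.ReadImage("patient_mr.nii.gz")
mr_atlas = sitk.ReadImage("atlas_mr.nii.gz")
ct_atlas = sitk.ReadImage("atlas_ct.nii.gz")  # second modality atlas image
t1 = register(mr_patient, mr_atlas)

# 2) second modality image representation ("pseudo-CT") of the patient MR:
#    resample the CT atlas onto the patient MR grid using the first matching transformation
pseudo_ct = sitk.Resample(ct_atlas, mr_patient, t1, sitk.sitkLinear, -1000.0)

# 3) second matching transformation: pseudo-CT <-> patient CT (now a mono-modal problem)
ct_patient = sitk.ReadImage("patient_ct.nii.gz")
t2 = register(ct_patient, pseudo_ct)
```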
- the present invention provides in particular a method (which is in particular a data processing method such as a medical data processing method) of transforming a representation of an anatomical structure of a patient in a first imaging modality to a representation of the anatomical structure in a second, other imaging modality.
- the anatomical structure may be any anatomical structure contained in the patient's body, for example the anatomical structure comprises (in particular consists of) at least one of bony tissue (for example a part of a bone - such as for example of the skull - or cartilage) and soft tissue (for example a part of the lung or the brain).
- the anatomical structure comprises at least part of the brain.
- the anatomical structure may also comprise at least part of the heart or an intestinal organ such as the stomach or the colon.
- the anatomical structure may comprise a bony structure such as at least part of the skull.
- the first modality medical image is generated (in particular has been generated before executing the disclosed method) based on first modality image data which has been generated based on applying the first imaging modality to the anatomical structure
- the second modality medical image is generated (in particular has been generated before executing the disclosed method) based on second modality medical image data which has been generated based on applying the second imaging modality to the anatomical structure.
- the second imaging modality preferably is different from the first imaging modality.
- the first imaging modality is magnetic resonance tomography
- the second imaging modality is computed tomography or x-ray.
- the type of imaging modality of the first imaging modality and the second imaging modality is preferably described by imaging modality data which is preferably also acquired during the inventive method.
- the first modality data may comprise imaging modality data describing the first imaging modality (in particular the type of the first medical imaging modality, for example by indicating that the first imaging modality is magnetic resonance tomography) and the second modality image data may comprise imaging modality data describing the second imaging modality (in particular the type of the second medical imaging modality, for example by indicating that the second imaging modality is computer tomography or x-ray).
- the term of imaging modality refers to a medical imaging technique and in particular refers to the type of energy which is applied to the anatomical structure in order to generate (medical) image data, in particular a medical image.
- the type of energy may for example be defined by the type of electromagnetic radiation applied to the anatomical structure.
- the respective type of energy is applied by an analytical device such as e.g. an x-ray tube, a computer tomograph, an ultrasound head or a magnetic resonance tomograph.
- An analytical device is in the framework of this disclosure also called analytical apparatus, imaging device, or imaging apparatus.
- imaging modalities include (but are not limited to) x-ray, computed x-ray tomography (also called computed tomography and abbreviated as CT), magnetic resonance tomography (abbreviated as MR or MRT), and ultrasound imaging.
- CT computed x-ray tomography
- MR magnetic resonance tomography
- ultrasound imaging ultrasound imaging.
- the term of imaging modality is also called medical imaging modality in order to underline the application of the imaging modality in the framework of a medical procedure.
- a synonymous term for medical imaging modality is medical imaging method which may also be used in this disclosure.
- the disclosed method preferably comprises steps of acquiring first modality image data describing the first modality medical image, wherein the first modality image data has been generated by applying the first imaging modality (to the anatomical structure).
- the disclosed method comprises a step of acquiring second modality image data describing the second modality medical image, wherein the second modality image data has been generated by applying the second imaging modality (to the anatomical structure).
- the first modality medical image contains the representation of the anatomical structure in the first imaging modality
- the second modality medical image contains the representation of the anatomical structure in the second imaging modality.
- the process of generating the first modality image data and the second modality image data is not necessarily part of the disclosed method. However, steps of applying the first imaging modality and/or the second imaging modality and corresponding generation of the first modality image data and/or the second modality image data, respectively, may according to a particular embodiment also be part of the disclosed method.
- the first modality medical image and the second modality medical image each contain a representation of the aforementioned anatomical structure.
- the representation of the anatomical structure may differ between the first modality medical image and the second modality medical image in particular with regard to at least one of the colour values and contrast values used to represent the anatomical structure and with regard to the spatial properties (in particular at least one of position and orientation) of the anatomical structure for example relative to image features surrounding the anatomical structure (which represent in particular other anatomical structures contained in the patient's body) or its geometric properties (in particular at least one of size - in particular volume - and shape).
- Atlas data is acquired which describes a first modality atlas image.
- the first modality atlas image describes in particular a general structure of the anatomical structure in the first imaging modality, in particular the atlas data was generated based on medical image data which was generated by applying the first imaging modality.
- the atlas data in particular contains information describing the general structure of the anatomical structure (in this disclosure also referred to as general anatomical structure) in particular in a medical image (i.e. in particular by way of a medical image).
- the atlas data preferably contains element representation information which describes the representation of physical structures, for example the anatomical elements (referred to as "atlas elements") of the general anatomical structure in atlas images described by the atlas data.
- This representation corresponds to the representation of the physical structures in an image which is generated by means of an analytical device from a patient having for example an anatomical structure which is identical to the general anatomical structure.
- the influence of the generating process on the representation of the one or more physical structures is represented by a parameter set (for example, scanning parameters such as the type of analytical device used to generate the medical image data and/or the measurement parameters which are set, in particular adjusted, on the analytical device and have an influence on the representation).
- the parameter set represents and in particular comprises one or in particular more parameters (also called representation parameters, such as the type of analytical device used for generating the medical image data, for example the magnetic field strength in an MRT device or the acceleration voltage in a CT device) which reflect and in particular have an influence on the representation of the image elements in the medical image which serves as a basis for generating the atlas image.
- This at least one parameter, in particular its value and/or values, is preferably described by the element representation information and therefore the atlas data.
- the atlas data is acquired in particular from an anatomical atlas which typically consists of a plurality of generic models of objects, wherein the generic models of the objects together form a complex structure.
- the atlas data may therefore also be called generic patient model data.
- the atlas image is generated based on in particular a statistical analysis of the anatomy of a plurality of human bodies, more particularly based on a statistical analysis of the anatomy of an anatomical body structure in a plurality of human bodies corresponding to the aforementioned anatomical structure of the patient.
- the atlas of a femur, for example, can comprise the head, the neck, the body, the greater trochanter, the lesser trochanter and the lower extremity as objects which together make up the complex structure.
- the atlas of a brain can comprise a telencephalon, the cerebellum, the diencephalon, the pons, the mesencephalon and the medulla oblongata as the objects making up the complex structure.
- influences on the representation include influences on the image values which represent the physical structures, such as for instance influences on a grey value (representing the image value) which represents the anatomical element, or influences on the position of an image value in a colour space which represents the anatomical element.
- Other examples include influences on contrast, image value range, gamut, etc.
- the atlas data describes in particular an atlas image of the general anatomical structure in the first imaging modality (also called first modality atlas image) and comprises information about the representation of the general structure in the second imaging modality.
- the information about the representation of the general structure in the second imaging modality is preferably included in a look-up table contained in the atlas data; further preferably, it is determined from an atlas image of the general anatomical structure in the second imaging modality (also called second modality atlas image).
- the second modality atlas image then is included in the atlas data.
- the information about the representation of the general structure is determined based on the second modality atlas image.
- the general structure (and therefore also the atlas data) is preferably generated outside of the disclosed method based on medical image information which is gathered from a plurality of human bodies.
- these bodies share a common characteristic such as for example at least one of gender, age and ethnicity.
- the atlas data may alternatively describe a general structure which was generated on the basis of medical image information gathered from human bodies which do not share a common characteristic such as for example ethnicity; for example, the atlas data may have been generated on the basis of human bodies of different ethnicity.
- the general structure represents in particular the geometry (in particular at least one of size - in particular volume - and shape) of the anatomy of at least part of the patient's body and was generated on the basis of the statistical analysis of the plurality of human bodies.
- the general structure represents the most probable geometry of a patient's body having a specific characteristic such as at least one of gender, age and ethnicity.
- the general structure may have been generated also on the basis of information about a specific pathologic state, for example the plurality of human bodies serving as a basis for the general structure may share a common pathologic state (such as a tumour disease or an anatomic anomaly).
- the representation of the general structure will vary between different imaging modalities.
- the representation is understood to encompass in particular the image appearance of the anatomical structure represented by the general structure in a specific imaging modality, which is governed by for example the colour values assigned to specific physical structures (for example tissue - in particular soft tissue and bony tissue - and fluids such as liquids or gases, in particular liquids such as liquor or a gas such as air) in the respective imaging modality.
- the representation is described by the aforementioned element representation information.
- the colour values generated by computed tomography lie on a grey value scale which is defined by the Hounsfield scale and represents the absorbance of the physical structures for the x-rays with which the physical structures are irradiated in order to generate the computed tomography image.
- the image appearance may also be governed by colour contrasts - this is the case in particular if the applied imaging modality is magnetic resonance tomography, where the colour contrasts depend in particular on the magnetic field strength, the type of magnetic pulse sequence, the considered type of magnetic relaxation (for example T1 or T2) and the magnetic behaviour of the physical structure in an external magnetic field.
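- purely as an illustration of the look-up-table idea mentioned above (and not as values taken from the patent), such element representation information could be held as a small table mapping each representation (tissue) class to its expected appearance per imaging modality; the Hounsfield values below are rough textbook approximations and the MR column is only qualitative.

```python
# approximate, illustrative element-representation look-up table:
# representation class -> expected image value per imaging modality
REPRESENTATION_LUT = {
    #  class            CT (Hounsfield units)    T1-weighted MR (qualitative)
    "air":             {"CT": -1000, "MR_T1": "black"},
    "fat":             {"CT": -90,   "MR_T1": "bright"},
    "liquor_csf":      {"CT": 15,    "MR_T1": "dark"},
    "grey_matter":     {"CT": 40,    "MR_T1": "intermediate"},
    "white_matter":    {"CT": 25,    "MR_T1": "brighter than grey matter"},
    "cortical_bone":   {"CT": 1000,  "MR_T1": "dark"},
}
```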
- the atlas data comprises in particular information about the representation of the general structure in the first imaging modality and in the second imaging modality. This information is contained in particular in the element representation information.
- the atlas data in particular comprises information which allows mapping between the representations of a specific general structure in different imaging modalities (in particular in the first imaging modality and the second imaging modality). This information in particular allows determining how a specific part of the general structure, the appearance of which is known in the first imaging modality, would appear in the second imaging modality.
- the atlas data was preferably generated based on a statistical analysis of a plurality of medical images of anatomical structures corresponding to the general structure which were taken with the first imaging modality in order to generate the first modality atlas image and also based on a statistical analysis of a plurality of medical images of the respective anatomical structures which were taken with the second imaging modality in particular in order to generate the second modality atlas image.
- the atlas data then includes for example correspondence data describing which image features in the first modality atlas image correspond to the respective image features in the second modality atlas image in particular by assigning them to the respective anatomical structure which they represent.
- the atlas data preferably comprises atlas geometry information which describes the geometric properties (in particular at least one of size - in particular volume - and shape) of the general anatomical structure.
- this atlas geometry information (spatial information) can comprise only one set of static spatial information, i.e. spatial information which does not change over time and only provides one set of spatial properties for the general anatomical structure, or can comprise a plurality of sets of static spatial information which respectively describe the spatial properties of the general anatomical structure in different states, for instance at different points in time during for example a vital movement such as a breathing cycle.
- the vital movement is a movement of parts of the body due to vital functions of the body, such as for example breathing and/or the heart beat and/or digestive movements.
- the term "vital movements” covers any kind of movement of the body which is performed unconsciously and in particular controlled by the brain stem.
- the atlas spatial information can also describe different movement or posture states of the patient, such as the patient running, walking, standing or lying down. It can also cover different pathological states of a patient, such as a patient with an infection or a tumour in a particular part of the body, or particular states of a patient during surgery, such as a patient with an exposed brain (i.e. in a state after craniotomy) resulting in a brain shift (which can in turn depend on the positioning of the head).
- the term "posture” as used here refers in particular to different positions of the extremities of the body, such as for example raised or lowered hands.
- representation class data is acquired which describes a representation class of the image elements describing the anatomical structure in the first modality medical image, the second modality medical image, the first modality atlas image and - as far as applicable - the second modality atlas image.
- the representation class describes in particular at least one of the aforementioned colour contrasts, colour values and the type of physical structure represented by the image elements.
- the representation class data comprises the element representation information, which then describes in particular the representation class of the respective image elements.
- the physical structure can be for example an anatomical structure (such as soft tissue or bony tissue) or a fluid as mentioned above.
- anatomical structures are assigned tissue classes which define the type of tissue contained in the anatomical structure represented by the respective image elements.
- the above-described mapping is established between image elements in the first modality atlas image and image elements in the second modality atlas image which represent the same anatomical structure. This mapping preferably is also contained in the atlas data.
- a first matching transformation is determined based on the first modality image data and the atlas data.
- the first matching transformation is in particular a matching transformation between the image information of the first modality image data and the first modality atlas image.
- a matching transformation is understood to be in particular a mapping function (more particularly, a linear mapping function) for mapping information (in particular positions) defined in a reference system (in particular positional reference system) used to define for example the positions in a first data set onto information (in particular positions) defined in a (in particular different) reference system (in particular positional reference system) used to define for example the positions in a second data set.
- a transformation can be determined based on for example executing at least one of an image segmentation algorithm and an image fusion algorithm (for example, an elastic image fusion algorithm as described below in the chapter "Definitions").
- the transformations therefore are coordinate transformations and/or mappings between coordinate systems, for example between the coordinate system used to define information contained in the first data set and the coordinate system used to define information contained in the second data set.
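- for illustration, a matching transformation in its simplest (linear/affine) form maps a position given in the reference system of the first data set into the reference system of the second data set; the matrix and translation values below are arbitrary example numbers.

```python
import numpy as np

A = np.eye(3)                    # rotation/scaling part (here: identity)
t = np.array([2.0, -1.5, 0.0])   # translation, e.g. in millimetres

def map_position(p):
    # map a position from the first data set's coordinate system
    # into the second data set's coordinate system
    return A @ np.asarray(p, dtype=float) + t

print(map_position([10.0, 20.0, 30.0]))  # -> [12.  18.5 30. ]
```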
- the first data set is embodied by the image information of the first modality image data and the second data set is embodied by the second modality atlas image.
- the first matching transformation and the second modality image representation are determined based on the imaging modality data in order to provide the disclosed method with information about which imaging modalities are to be considered during its execution.
- the first matching transformation is constituted to match the representation of the anatomical structure in the first modality medical image with the representation of the general structure of the anatomical structure in the first modality atlas image.
- a matching transformation contains at least one of a mapping and a matching function. Matching is understood to encompass in particular a spatial adaptation of (in particular positional) information contained in a first data set to (in particular positional) information contained in a second data set. This can happen for example by adaptation of the geometric properties (for example, at least one of size - in particular volume - and shape) of a structure described by both data sets.
- the representation of the general structure in the first modality atlas image is adapted to the spatial properties of the representation of the anatomical structure in the first modality medical image.
- the representation (in particular the geometric properties) of the general representation of the anatomical structure in the first modality atlas image is deformed to fit to the representation in the first modality medical image.
- the first matching transformation therefore is in particular an elastic transformation which can be implemented as an elastic fusion algorithm.
- An elastic transformation in the meaning of this disclosure is in particular a transformation which maps a first set of spatial information onto a second set of spatial information while adapting at least the second set to the first set in order to achieve congruence between the two sets of spatial information.
- the matching transformation is in the end determined to be unity (or at least optimized to be as close to unity as possible, considering in particular a predetermined maximum difference between the matching transformation and unity).
- a second modality image representation of the first modality medical image is then preferably determined based on the first matching transformation and the first modality atlas image and the information about the representation of the general structure of the anatomical structure in the second imaging modality (in particular based on the second modality atlas image).
- the second modality image representation describes what the first modality atlas image (in particular the matched first modality atlas image) would look like if the (matched) general structure had been generated on the basis of medical image data which had been acquired with the second imaging modality.
- the second modality image representation is preferably determined by replacing the element representation information of the image elements in the (matched) first modality atlas image with the corresponding element representation information for the second imaging modality.
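- a minimal sketch of this replacement step, assuming the matched first modality atlas image is available as a labelled (segmented) array in which each image element carries an integer representation class, and assuming a hypothetical mapping from representation class to a second modality (CT) grey value:

```python
import numpy as np

def second_modality_representation(matched_atlas_labels, class_to_ct_value):
    # Replace the element representation information of each image element in the
    # matched first modality atlas image with the corresponding value for the
    # second imaging modality (here: one CT grey value per representation class).
    pseudo_ct = np.full(matched_atlas_labels.shape, -1000.0, dtype=np.float32)  # background = air
    for label, ct_value in class_to_ct_value.items():
        pseudo_ct[matched_atlas_labels == label] = ct_value
    return pseudo_ct

# hypothetical label coding: 1 = grey matter, 2 = white matter, 3 = bone
example = second_modality_representation(
    np.array([[1, 2], [3, 0]]), {1: 40.0, 2: 25.0, 3: 1000.0})
```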
- the second modality image representation is determined by determining a modality transformation between the first modality atlas image and the second modality atlas image which again is preferably an elastic fusion transformation for matching the second modality atlas image to the first modality atlas image.
- the modality transformation is a transformation between the first modality atlas image which has been matched to the first modality image data (the matched first modality atlas image), and the second modality atlas image.
- the first matching transformation is applied to the first modality atlas image in order to determine a matched first modality atlas image containing a representation of the general structure of the anatomical structure which has been matched to the representation of the anatomical structure in the first modality medical image.
- the modality transformation then is preferably determined as a transformation between the matched first modality atlas image and the second modality atlas image.
- the representation of the anatomical structure in the second modality atlas image is matched (i.e. mapped and deformed) to fit the representation of the anatomical structure in the matched first modality atlas image.
- a second matching transformation is determined between the second modality image representation and the second modality medical image.
- the second matching transformation is determined in particular based on the second modality image representation and the second modality image data.
- the second modality image representation of the anatomical structure is matched (i.e. at least one of mapped and deformed) to fit the representation of the anatomical structure in the second modality medical image. This serves in particular to compare the representation of the anatomical structure in the second modality image representation and in the second modality medical image.
- at least one of the spatial properties (for example the positions) and the geometric properties (in particular the shape) of the two representations can be compared and differences can be determined in particular by determining parameters of the second matching transformation which are nonzero.
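- as an illustration of how such differences could be read out of the second matching transformation, the following sketch (assuming SimpleITK, and re-using the hypothetical names t2 and ct_patient from the sketch further above) converts the transformation into a displacement field and inspects its per-element magnitude; non-zero entries indicate where position or shape differ between the two representations.

```python
import SimpleITK as sitk

def displacement_magnitude(transform, reference_image):
    # Sample the matching transformation as a displacement field on the grid of
    # the reference image and return the per-element displacement magnitude
    # (in physical units, typically millimetres).
    to_field = sitk.TransformToDisplacementFieldFilter()
    to_field.SetReferenceImage(reference_image)
    field = to_field.Execute(transform)
    return sitk.GetArrayFromImage(sitk.VectorMagnitude(field))

# usage (names from the earlier sketch, assumed):
# shift = displacement_magnitude(t2, ct_patient)
# changed = shift > 1.0   # image elements displaced by more than 1 mm
```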
- An advantage of conducting the comparison based on the aforementioned method steps is that data processing instabilities which would occur when comparing image representations of different modality can be avoided, since the second modality image representation and the second modality medical image contain image information in particular about the anatomical structure which is defined in the same space of colour values, colour contrasts and types of anatomical structures and which therefore in particular use the same set of representation classes (in particular tissue classes) for describing the image information.
- the disclosed method can be carried out irrespective of the number of space dimensions to be considered, for example it can be executed in both a two-dimensional and a three-dimensional environment.
- this applies to any positions and image information in both the medical images and the atlas images.
- a specific preferred embodiment of the invention is directed to avoiding data processing instabilities in case at least one of a spatial (in particular a positional) and a geometric change (in particular a change in shape) has occurred to the anatomical structure between the point in time at which the first modality image data was generated and the point in time at which the second modality image data was generated.
- the second modality medical image may display a comparably large positional shift of the outer brain surface compared to the position of the outer brain surface in the first modality medical image and therefore in the second modality image representation.
- This positional shift may be due to for example a loss of surface tension on the outer surface of the brain below the position of a craniotomy, which may lead to a collapse of the brain structure in the gravitational field, in particular away from the position of the craniotomy.
- This positional shift may, however, lead to data processing instabilities when matching the second modality medical image to the second modality image representation.
- a large positional shift may hamper automatic detection of anatomical structures corresponding to each other in both representations.
- the present invention preferably comprises a step of defining a structural change region in the second modality image representation.
- the structural change region comprises in particular a placeholder for a data structure which represents a change of the anatomical structure.
- the placeholder is in particular a seed structure which can be used in particular for adapting the second matching transformation to the change of the anatomical structure, in particular to a difference to the representations of the anatomical structure in the second modality representation and the second modality medical image.
- the placeholder can be expanded in the second modality image representation by applying a field of shift vectors to the placeholder and correspondingly shifting and/or deforming the structures in the second modality image representation which surround the placeholder. The expansion is preferably carried out until the placeholder corresponds to the difference in representation to a predetermined degree of similarity.
- similarity data describing a measure of similarity between the second modality image representation and the second modality medical image (in particular between the respective representations of the anatomical structure) is determined.
- the measure of similarity is determined for example based on comparing the representation classes of the respective image elements describing the anatomical structure in the second modality image representation and the second modality medical image, respectively.
- the measure of similarity preferably includes (in particular is) a cross-correlation, for example a local cross-correlation between colour contrasts in the second modality image representation and colour contrasts in the second modality medical image.
- it includes (in particular is) a local cross-correlation between colour values in the second modality image representation and colour values in the second modality medical image.
- regions in the second modality image representation and the second modality medical image for which at least substantially no similarity (in particular no similarity and/or a low value of the measure of similarity) has been determined are excluded as a basis for determining the second matching transformation. This avoids hampered data processing when determining the second matching transformation.
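- a possible sketch of such a local cross-correlation and the resulting exclusion mask, assuming both images are available as equally sized numpy arrays (this is an illustrative implementation, not the one prescribed by the patent):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_cross_correlation(a, b, size=9, eps=1e-6):
    # normalised cross-correlation computed in a sliding window of edge length `size`
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    mean_a, mean_b = uniform_filter(a, size), uniform_filter(b, size)
    cov = uniform_filter(a * b, size) - mean_a * mean_b
    var_a = uniform_filter(a * a, size) - mean_a ** 2
    var_b = uniform_filter(b * b, size) - mean_b ** 2
    return cov / np.sqrt(np.maximum(var_a * var_b, eps))

def exclusion_mask(pseudo_ct, patient_ct, threshold=0.1):
    # image elements with (at least substantially) no local similarity are excluded
    # as a basis for determining the second matching transformation
    return np.abs(local_cross_correlation(pseudo_ct, patient_ct)) < threshold
```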
- a structural change region may be defined in a region of at least substantially no similarity and the placeholder may be expanded.
- the similarity data may then be re-determined, in particular until a predetermined value (in particular an acceptable level) of the measure of similarity is reached.
- the structural change region may alternatively or additionally be also in particular an anatomical feature which is represented in the first modality medical image and therefore the second modality image representation, but is not represented in the second modality medical image (for example, an implant or a tumour which has been implanted or grown, respectively, in the meantime).
- the change region is preferably defined in the second modality image representation at the position of the respective anatomical structure, which is then compressed, for example by applying a vector field pointing in an inward direction of the anatomical structure, in order to reduce it until it is no longer present and a predetermined level of similarity to the second modality medical image is reached.
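- the following sketch illustrates only the general idea of the placeholder/seed handling described above; instead of applying a field of shift vectors and deforming the surrounding structures, it simply grows a boolean seed mask by morphological dilation until the similarity computed outside the region reaches an acceptable level (all names and the similarity callable are hypothetical):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def adapt_change_region(seed_mask, pseudo_ct, patient_ct, similarity,
                        threshold=0.8, max_iterations=50):
    # Grow the structural change region (placeholder) until the representations
    # outside of it are sufficiently similar, then exclude it from matching.
    region = seed_mask.copy()
    for _ in range(max_iterations):
        outside = ~region
        if similarity(pseudo_ct[outside], patient_ct[outside]) >= threshold:
            break
        region = binary_dilation(region)  # crude stand-in for a field of shift vectors
    return region
```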
- the invention also relates to a program which, when running on a computer or when loaded onto a computer, causes the computer to perform one or more or all of the method steps described herein and/or to a program storage medium on which the program is stored (in particular in a non-transitory form) and/or to a computer on which the program is running or into the memory of which the program is loaded and/or to a signal wave, in particular a digital signal wave, carrying information which represents the program, in particular the aforementioned program, which in particular comprises code means which are adapted to perform any or all of the method steps described herein.
- computer program elements can be embodied by hardware and/or software (this includes firmware, resident software, micro-code, etc.).
- computer program elements can take the form of a computer program product which can be embodied by a computer-usable, in particular computer-readable data storage medium comprising computer-usable, in particular computer-readable program instructions, "code" or a "computer program" embodied in said data storage medium for use on or in connection with the instruction-executing system.
- Such a system can be a computer; a computer can be a data processing device comprising means for executing the computer program elements and/or the program in accordance with the invention, in particular a data processing device comprising a digital processor (central processing unit or CPU) which executes the computer program elements and optionally a volatile memory (in particular, a random access memory or RAM) for storing data used for and/or produced by executing the computer program elements.
- a computer-usable, in particular computer-readable data storage medium can be any data storage medium which can include, store, communicate, propagate or transport the program for use on or in connection with the instruction-executing system, apparatus or device.
- the computer-usable, in particular computer-readable data storage medium can for example be, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device or a medium of propagation such as for example the Internet.
- the computer-usable or computer-readable data storage medium could even for example be paper or another suitable medium onto which the program is printed, since the program could be electronically captured, for example by optically scanning the paper or other suitable medium, and then compiled, interpreted or otherwise processed in a suitable manner.
- the data storage medium is preferably a non-volatile data storage medium.
- the computer program product and any software and/or hardware described here form the various means for performing the functions of the invention in the example embodiments.
- the computer and/or data processing device can in particular include a guidance information device which includes means for outputting guidance information.
- the guidance information can be outputted, for example to a user, visually by a visual indicating means (for example, a monitor and/or a lamp) and/or acoustically by an acoustic indicating means (for example, a loudspeaker and/or a digital speech output device) and/or tactilely by a tactile indicating means (for example, a vibrating element or vibration element incorporated into an instrument).
- imaging methods are used to generate image data (for example, two- dimensional or three-dimensional image data) of anatomical structures (such as soft tissues, bones, organs, etc.) of the human body.
- Medical imaging methods are understood to mean advantageously apparatus-based imaging methods (so-called medical imaging modalities and/or radiological imaging methods), such as for instance computed tomography (CT) and cone beam computed tomography (CBCT; in particular volumetric CBCT), x-ray tomography, magnetic resonance tomography (MRT or MRI), conventional x-ray, sonography and/or ultrasound examinations, and positron emission tomography.
- Analytical devices are in particular used to generate the image data in apparatus-based imaging methods.
- the imaging methods are in particular used for medical diagnostics, to analyse the anatomical body in order to generate images which are described by the image data.
- the imaging methods are in particular used to detect pathological changes in the human body.
- some of the changes in the anatomical structure, in particular the pathological changes in the structures (tissue) may not be detectable and in particular may not be visible in the images generated by the imaging methods.
- a tumour for example represents an example of a change in an anatomical structure. If the tumour grows, it may then be said to represent an expanded anatomical structure. This expanded anatomical structure may not be detectable; in particular, only a part of the expanded anatomical structure may be detectable.
- Primary/high-grade brain tumours are for example usually visible on MRI scans when contrast agents are used to infiltrate the tumour.
- the MRI scans represent an example of an imaging method.
- the method in accordance with the invention is in particular a data processing method.
- the data processing method is preferably performed using technical means, in particular a computer.
- the data processing method is in particular executed by or on the computer.
- the computer in particular comprises a processor and a memory in order to process the data, in particular electronically and/or optically.
- the calculating steps described are in particular performed by a computer. Determining steps or calculating steps and acquiring steps are in particular steps of determining data within the framework of the technical data processing method, in particular within the framework of a program.
- a computer is in particular any kind of data processing device, in particular electronic data processing device.
- a computer can be a device which is generally thought of as such, for example desktop PCs, notebooks, netbooks, etc., but can also be any programmable apparatus, such as for example a mobile phone or an embedded processor.
- a computer can in particular comprise a system (network) of "sub- computers", wherein each sub-computer represents a computer in its own right.
- the term "computer” includes a cloud computer, in particular a cloud server.
- the term "cloud computer” includes a cloud computer system which in particular comprises a system of at least one cloud computer and in particular a plurality of operatively interconnected cloud computers such as a server farm.
- Such a cloud computer is preferably connected to a wide area network such as the world wide web (WWW) and located in a so-called cloud of computers which are all connected to the world wide web.
- Such an infrastructure is used for "cloud computing” which describes computation, software, data access and storage services which do not require the end user to know the physical location and/or configuration of the computer delivering a specific service.
- the term "cloud” is used as a metaphor for the internet (world wide web).
- the cloud provides computing infrastructure as a service (IaaS).
- the cloud computer can function as a virtual host for an operating system and/or data processing application which is used to execute the method of the invention.
- the cloud computer is for example an elastic compute cloud (EC2) as provided by Amazon Web Services™.
- a computer in particular comprises interfaces in order to receive or output data and/or perform an analogue-to-digital conversion.
- the data are in particular data which represent physical properties and/or are generated from technical signals.
- the technical signals are in particular generated by means of (technical) detection devices (such as for example devices for detecting marker devices) and/or (technical) analytical devices (such as for example devices for performing imaging methods), wherein the technical signals are in particular electrical or optical signals.
- the technical signals in particular represent the data received or outputted by the computer.
- the expression "acquiring data” encompasses in particular (within the framework of a data processing method) the scenario in which the data are determined by the data processing method or program.
- Determining data in particular encompasses measuring physical quantities and transforming the measured values into in particular digital data and/or computing the data by means of a computer, in particular computing the data within the method of the invention.
- the meaning of "acquiring data” in particular also encompasses the scenario in which the data are received or retrieved by the data processing method or program, for example from another program, a previous method step or a data storage medium, in particular for further processing by the data processing method or program.
- "acquiring data” can also for example mean waiting to receive data and/or receiving the data.
- the received data can for example be inputted via an interface.
- Acquiring data can also mean that the data processing method or program performs steps in order to (actively) receive or retrieve the data from a data source, for instance a data storage medium (such as for example a ROM, RAM, database, hard disc, etc.) or via the interface (for instance, from another computer or a network).
- the data can achieve the state of being "ready for use” by performing an additional step before the acquiring step.
- the data are generated in order to be acquired.
- the data are in particular detected or captured (for example, by an analytical device).
- the data are inputted in accordance with the additional step, for instance via interfaces.
- the data generated can in particular be inputted (for instance, into the computer).
- the data can also be provided by performing the additional step of storing the data in a data storage medium (such as for example a ROM, RAM, CD and/or hard drive), such that they are ready for use within the framework of the method or program in accordance with the invention.
- a data storage medium such as for example a ROM, RAM, CD and/or hard drive
- "acquiring data” can also involve commanding a device to obtain and/or provide the data to be acquired.
- the acquiring step in particular does not involve an invasive step which would represent a substantial physical interference with the body requiring professional medical expertise to be carried out and entailing a substantial health risk even when carried out with the required professional care and expertise.
- acquiring, in particular determining, data in particular does not involve a surgical step and in particular does not involve a step of treating a human or animal body using surgery or therapy. This also applies in particular to any steps directed to determining data.
- the data are denoted (i.e. referred to) as "XY data" and the like and are defined in particular by the information which they describe, which is preferably called "XY information".
- the first and second matching transformations are for example image fusion transformations, in particular elastic fusion transformations which are designed to enable a seamless transition from one data set (e.g. first data set, e.g. first image) to another data set (e.g. second data set, e.g. second image).
- the term "image morphing" is also used as an alternative to the term "image fusion", but with the same meaning.
- the transformations are in particular designed such that one of the aforementioned first and second data sets (images) is deformed, in particular in such a way that corresponding structures (in particular, corresponding image elements) are arranged at the same position as in the other of the first and second images.
- the deformed (transformed) image which is transformed from one of the first and second images is in particular as similar as possible to the other of the first and second images.
- (numerical) optimization algorithms are applied in order to find the transformation which results in an optimum degree of similarity.
- the degree of similarity is preferably measured by way of a measure of similarity (also referred to in the following as a "similarity measure").
- the parameters of the optimization algorithm are in particular vectors of a deformation field F. These vectors are determined by the optimization algorithm which results in an optimum degree of similarity.
- the optimum degree of similarity represents a condition, in particular a constraint, for the optimization algorithm.
- the bases of the vectors lie in particular at voxel positions of one of the first and second images which is to be transformed, and the tips of the vectors lie at the corresponding voxel positions in the transformed image.
- a plurality of these vectors are preferably provided, for instance more than twenty or a hundred or a thousand or ten thousand, etc.
- the constraints include in particular the constraint that the transformation is regular, which in particular means that a Jacobian determinant calculated from a matrix of the deformation field (in particular, the vector field) is larger than zero.
- the constraints include in particular the constraint that the transformed (deformed) image is not self-intersecting and in particular that the transformed (deformed) image does not comprise faults and/or ruptures.
- the constraints include in particular the constraint that if a regular grid is transformed simultaneously with the image and in a corresponding manner, the grid is not allowed to interfold at any of its locations.
- the optimizing problem is in particular solved iteratively, in particular by means of an optimization algorithm which is in particular a first-order optimization algorithm, in particular a gradient descent algorithm.
- Other examples of optimization algorithms include optimization algorithms which do not use derivatives, such as the downhill simplex algorithm, or algorithms which use higher-order derivatives such as Newton-like algorithms.
- the optimization algorithm preferably performs a local optimization. If there are a plurality of local optima, global algorithms such as simulated annealing or genetic algorithms can be used. In the case of linear optimization problems, the simplex method can for instance be used.
- the voxels are in particular shifted by a magnitude in a direction such that the degree of similarity is increased.
- This magnitude is preferably less than a predefined limit, for instance less than 1/10 or 1/100 or 1/1000 of the diameter of the image, and in particular about equal to or less than the distance between neighboring voxels. Due in particular to a high number of (iteration) steps, large deformations can be implemented.
- the determined elastic fusion transformation can in particular be used to determine a degree of similarity (similarity measure, also referred to as "measure of similarity") between the first and second data set (first and second image).
- the degree of deviation can for instance be calculated by determining the difference between the determinant of the elastic fusion transformation and the identity transformation. The higher the deviation, the lower the similarity. Thus, the degree of deviation can be used to determine a measure of similarity (a schematic sketch of these computations is given below).
- a measure of similarity can in particular be determined on the basis of a determined correlation between the first and second data set.
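Purely as an illustration of the quantities referred to in the above definitions (and not as part of the original disclosure), the following Python sketch shows the Jacobian-determinant regularity check for a two-dimensional deformation field, a gradient step whose per-voxel shift magnitude is capped, and two simple similarity measures; array shapes, function names and parameter values are assumptions of the sketch.

```python
import numpy as np

def jacobian_determinant_2d(field):
    """Jacobian determinant of x -> x + u(x) for a 2D deformation field.

    field: (H, W, 2) array holding the displacement u in voxel units;
    field[..., 0] is the row (y) component, field[..., 1] the column (x) component.
    """
    duy_dy, duy_dx = np.gradient(field[..., 0])
    dux_dy, dux_dx = np.gradient(field[..., 1])
    return (1.0 + duy_dy) * (1.0 + dux_dx) - duy_dx * dux_dy

def is_regular(field):
    """Regularity constraint: the Jacobian determinant must be larger than zero."""
    return bool(np.all(jacobian_determinant_2d(field) > 0.0))

def capped_update(field, similarity_gradient, step=0.1, max_shift=1.0):
    """One gradient-ascent update of the deformation field; each voxel is
    shifted by at most `max_shift` voxels per iteration (the gradient of the
    similarity measure is assumed to have been computed elsewhere)."""
    update = step * similarity_gradient
    magnitude = np.linalg.norm(update, axis=-1, keepdims=True)
    scale = np.minimum(1.0, max_shift / np.maximum(magnitude, 1e-12))
    return field + update * scale

def deviation_from_identity(field):
    """Mean absolute deviation of the Jacobian determinant from 1 (its value
    for the identity transformation); larger values indicate less similarity."""
    return float(np.mean(np.abs(jacobian_determinant_2d(field) - 1.0)))

def normalized_cross_correlation(a, b):
    """Global normalized cross-correlation between two images of equal shape."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))
```

In an actual elastic fusion, such a capped update would be iterated many times, so that large deformations can still be built up from many small, regular steps.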
- Fig. 1 describes a general algorithm used for determining the second modality image representation.
- Fig. 2 illustrates the meaning of a structural change region.
- the first modality image data and the atlas data are acquired in step S1.
- the first modality medical image is then registered with the first modality atlas image in step S2 in particular by determining the first matching transformation.
- the method carries on in step S3 with segmenting the representation classes in the first modality medical image, for example by applying an expectation maximization algorithm.
- the second modality image data is acquired.
- the first modality image data comprises imaging modality data describing the type of imaging modality which is the first imaging modality.
- the second modality image data comprises imaging modality data which describes the type of imaging modality which is the second imaging modality.
- in step S5, the first modality medical image is then simulated in the second imaging modality by determining the second modality image representation. This simulation is carried out in particular based on the results of the segmentation in step S3 and on information contained in the atlas data which describes how a given representation class (in particular tissue type) appears in the first imaging modality and in the second imaging modality.
- an elastic fusion is then performed between the second modality image representation and the second modality medical image.
- This elastic fusion is an example of the second matching transformation.
- a specific example of the workflow shown in Fig. 1 is the following: a patient usually undergoes several pre-operative MR scans, for example a T1 and a T2 scan. Different types of tissue can be determined from the images taken in T1 and T2, respectively, and a combined data set representing the first modality image data can be generated from the T1 and T2 images. The first imaging modality therefore is set to be MR. Later on, a CT image may be taken of the patient, and the second imaging modality therefore is CT. Based on information about the Hounsfield values for specific types of tissue in the CT image, a CT is simulated from the combined T1 and T2 data set (i.e. from the first modality image data). This results in one genuine and one simulated CT data set, and these two data sets can be fused with a higher stability than would be the case when using a multi-modal MR-CT fusion algorithm which uses for example mutual information. A schematic sketch of this simulation step is given below.
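Purely as an illustration (and not as part of the original disclosure), the simulation step of this workflow can be sketched as follows, assuming that the tissue-class segmentation of the matched MR image has already been computed elsewhere; the Hounsfield values in the look-up table are rough, generic figures chosen for the example.

```python
import numpy as np

# Illustrative representation-class look-up table: tissue class label ->
# approximate Hounsfield value in the second imaging modality (CT).
CLASS_TO_HOUNSFIELD = {
    0: -1000.0,  # air
    1: 5.0,      # cerebrospinal fluid
    2: 40.0,     # soft tissue (brain parenchyma)
    3: 700.0,    # bone
}

def simulate_second_modality(class_labels):
    """Build the second modality image representation (a simulated CT) from
    the per-voxel representation classes segmented in the first modality
    (MR) medical image, e.g. by an expectation maximization algorithm."""
    simulated = np.zeros(class_labels.shape, dtype=np.float32)
    for label, hounsfield in CLASS_TO_HOUNSFIELD.items():
        simulated[class_labels == label] = hounsfield
    return simulated

# Hypothetical usage within the Fig. 1 workflow (function names invented here):
#   labels       = segment_tissue_classes(mr_image)   # segmentation step
#   simulated_ct = simulate_second_modality(labels)   # simulation step
#   fuse(simulated_ct, genuine_ct)                    # mono-modal fusion
```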
- Fig. 2 shows how a structural change region is used to support determining the second matching transformation.
- Fig. 2(a) illustrates a second modality image representation of an anatomical structure embodied by a patient's brain 1 which, before growing a placeholder 3, extends along its outer boundary 2 almost up to the inner surface of the skull bones.
- a structural change region is defined comprising the placeholder 3.
- Fig. 2(b) illustrates a second modality medical image of the same patient's skull and brain 1' along with a craniotomy 5, due to which the brain 1' collapsed compared to its outer boundary 2 shown in Fig. 2(a).
- a placeholder 3 is inserted for example along the original boundary 2 of the brain 1 in the second modality image representation.
- the placeholder 3 is then grown by applying a vector field to the image elements representing the placeholder 3.
- a cavity is grown in the second modality image representation so that the appearance of the brain 1, 1' in the two images becomes similar.
- the cavity which was generated by the brain collapsing and which is depicted in the second modality medical image would otherwise lead to a fault in determining the second matching transformation T, since no corresponding cavity would have been found in the second modality image representation before growing the placeholder (a schematic sketch of such a placeholder growth is given below).
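A minimal, illustrative Python sketch of growing such a placeholder is given below; it assumes a two-dimensional image and a single seed point with a radially decaying shift-vector field, and the parameter names (centre, radius, strength) are assumptions of the sketch rather than terms of this disclosure.

```python
import numpy as np
from scipy import ndimage

def expand_placeholder(image, centre, radius, strength):
    """Warp a 2D second modality image representation so that image elements
    around `centre` are pushed outwards by a radial shift-vector field,
    opening up a cavity-like region for the growing placeholder.

    centre: (row, col) of the placeholder seed; radius: extent of influence
    in voxels; strength: maximum outward shift in voxels.
    """
    rows, cols = np.indices(image.shape, dtype=np.float64)
    d_row, d_col = rows - centre[0], cols - centre[1]
    dist = np.sqrt(d_row ** 2 + d_col ** 2) + 1e-12
    # Outward shift that decays linearly to zero at `radius`.
    shift = strength * np.clip(1.0 - dist / radius, 0.0, None)
    # Inverse warping: each output voxel samples from a position shifted
    # towards the centre, so that material appears pushed outwards.
    src_rows = rows - shift * d_row / dist
    src_cols = cols - shift * d_col / dist
    return ndimage.map_coordinates(image, [src_rows, src_cols],
                                   order=1, mode="nearest")
```

In practice such an expansion would be applied or repeated until the simulated cavity matches the collapsed brain to a predetermined degree of similarity, as described in the general description.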
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Primary Health Care (AREA)
- Epidemiology (AREA)
- Quality & Reliability (AREA)
- Biomedical Technology (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Pathology (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
Abstract
The present invention relates to a medical data processing method of transforming a representation of an anatomical structure (1) of a patient in a first imaging modality into a representation of the anatomical structure (1') in a second, other imaging modality, the method being constituted to be executed by a computer and comprising the following steps: a) acquiring (S1) first modality image data describing the first modality medical image containing the representation of the anatomical structure (1) in the first imaging modality; b) acquiring (S1) atlas data describing a first modality atlas image describing a general structure of the anatomical structure (1) in the first imaging modality, the atlas data containing information about the representation of the general structure in the second imaging modality; c) determining (S3), based on the first modality image data and the atlas data, a first matching transformation between the first modality medical image and the first modality atlas image; d) determining (S5), based on the first matching transformation and the first modality atlas image and the information about the representation of the general structure in the second imaging modality, a second modality image representation of the first modality medical
Description
Matching Patient Images of Different Imaging Modality Using Atlas Information
The present invention is directed to a method, in particular data processing method, of determining a matching transformation for matching medical images describing an anatomical structure of a patient, wherein images of different imaging modality are matched.
The invention is further directed to a corresponding program, in particular computer program, a computer running a program and a signal wave carrying information which represents the program.
In many applications, it is desirable to compare the position which the representation of a specific anatomical structure has in different medical images in order to determine, for example, a change in the position of the anatomical structure. For example, some medical procedures carried out on the brain require determining a difference between the position which an anatomical structure in the brain has before and after performing craniotomy. For example, a magnetic resonance image is taken before craniotomy which allows for determining the first position of the brain in the skull. After performing craniotomy, a computer tomography image of the brain is acquired which allows for determining the second position of the brain in the skull. It would now be desirable to allow for a comparison between the two images.
However, different imaging modalities, in the above case magnetic resonance tomography and computer tomography (also called computed tomography), in general lead to different image colour contrast scales (which are associated with the respective imaging modality). For example, the grey values used to describe the anatomical structure in the magnetic resonance image differ from the grey values used to describe the anatomical structure in the computer tomography image. Preferably, comparison of the images is achieved by fusing the two images, which may be hampered, though, for example by data processing instabilities due to the difference in the respectively used grey values.
A problem to be solved by the invention therefore is to provide a stable and reliable way of comparing medical images which were generated based on applying different imaging modalities.
This problem is solved by the subject-matter of any appended independent claim. Advantages, advantageous features, advantageous embodiments and advantageous aspects of the present invention are disclosed in the following and contained in the subject-matter of the dependent claims. Different advantageous features can be combined in accordance with the invention as long as technically sensible and feasible. In particular, a feature of one embodiment which has the same or a similar function as another feature of another embodiment can be exchanged. In particular, a feature of one embodiment which adds a further function to another embodiment can be added to the other embodiment.
Exemplary short description of the invention
In this chapter, a short description of an example of the invention is offered which shall not be construed as limiting the invention to this example.
The invention is directed to a medical data processing method of transforming a representation of an anatomical structure of a patient in a medical image of a first imaging modality (first modality medical image) into a representation of the anatomical structure in a second, other imaging modality (second modality image representation). The anatomical structure can be any anatomical structure which is known to be part of a patient's body; preferably it is the brain. Based on the first modality medical image describing the representation of the anatomical structure in the first imaging modality and based on a first modality atlas image describing a general structure of the anatomical structure in the first imaging modality, a first matching transformation between the first modality medical image and the first modality atlas image is determined. The first modality atlas image is described by atlas data which also contains information about the representation of the general structure in the second imaging modality. This information may be embodied by or determined from a second modality atlas image contained in the atlas data. The first imaging modality preferably is MR, the second imaging modality preferably is CT. Based on the first matching transformation and the first modality atlas image and the information about the representation of the general structure in the second im-
aging modality, a second modality image representation of the first modality medical image is determined. Preferably, a second modality medical image is also acquired. Then, a second matching transformation between the second modality image representation and the second modality medical image can be determined. Thereby, the appearance of the anatomical structure in the transformed first modality medical image (i.e. in the second modality image representation) can be compared with its appearance in the second modality medical image. The transformations determined or applied, respectively, preferably are elastic fusion transformations. Preferably, the transformations are determined taking into account representation properties such as element representation information and representation classes of individual image elements in the (image) representations between which the transformations are determined. The representation classes can describe tissue classes. The invention preferably also comprises features which are directed to avoiding computational faults if there are major differences between the representation of the anatomical structure in the second modality image representation and the second modality medical image.
General description of the invention
In order to solve the afore-mentioned problem, the present invention provides in particular a method (which is in particular a data processing method such as a medical data processing method) of transforming a representation of an anatomical structure of a patient in a first imaging modality to a representation of the anatomical structure in a second, other imaging modality. The anatomical structure may be any anatomical structure contained in the patient's body, for example the anatomical structure comprises (in particular consists of) at least one of bony tissue (for example a part of a bone - such as for example of the skull - or cartilage) and soft tissue (for example a part of the lung or the brain). In a particular embodiment, the anatomical structure comprises at least part of the brain. However, the anatomical structure may also comprise at least part of the heart or an intestinal organ such as the stomach or the colon. Alternatively or additionally, the anatomical structure may comprise a bony structure such as at least part of the skull.
The first modality medical image is generated (in particular has been generated before executing the disclosed method) based on first modality image data which has been generated based on applying the first imaging modality to the anatomical structure, and the second modality
medical image is generated (in particular has been generated before executing the disclosed method) based on second modality medical image data which has been generated based on applying the second imaging modality to the anatomical structure. The second imaging modality preferably is different from the first imaging modality. In particular, the first imaging modality is magnetic resonance tomography, and the second imaging modality is computed tomography or x-ray. The type of imaging modality of the first imaging modality and the second imaging modality is preferably described by imaging modality data which is preferably also acquired during the inventive method. For example, the first modality data may comprise imaging modality data describing the first imaging modality (in particular the type of the first medical imaging modality, for example by indicating that the first imaging modality is magnetic resonance tomography) and the second modality image data may comprise imaging modality data describing the second imaging modality (in particular the type of the second medical imaging modality, for example by indicating that the second imaging modality is computer tomography or x-ray).
Within the framework of this disclosure, the term of imaging modality refers to a medical imaging technique and in particular refers to the type of energy which is applied to the anatomical structure in order to generate (medical) image data, in particular a medical image. The type of energy may for example be defined by the type of electromagnetic radiation applied to the anatomical structure. The respective type of energy is applied by an analytical device such as e.g. an x-ray tube, a computer tomograph, an ultrasound head or a magnetic resonance tomograph. An analytical device is in the framework of this disclosure also called analytical apparatus, imaging device, or imaging apparatus. Examples of imaging modalities include (but are not limited to) x-ray, computed x-ray tomography (also called computed tomography and abbreviated as CT), magnetic resonance tomography (abbreviated as MR or MRT), and ultrasound imaging. Within the framework of this disclosure, the term of imaging modality is also called medical imaging modality in order to underline the application of the imaging modality in the framework of a medical procedure. A synonymous term for medical imaging modality is medical imaging method which may also be used in this disclosure.
The disclosed method preferably comprises steps of acquiring first modality image data describing the first modality medical image, wherein the first modality image data has been generated by applying the first imaging modality (to the anatomical structure). According to
a preferred embodiment, the disclosed method comprises a step of acquiring second modality image data describing the second modality medical image, wherein the second modality image data has been generated by applying the second imaging modality (to the anatomical structure). The first modality medical image contains the representation of the anatomical structure in the first imaging modality, the second modality medical image contains the representation of the anatomical structure in the second imaging modality. The process of generating the first modality image data and the second modality image data is not necessarily part of the disclosed method. However, steps of applying the first imaging modality and/or the second imaging modality and corresponding generation of the first modality image data and/or the second modality image data, respectively, may according to a particular embodiment also be part of the disclosed method.
The first modality medical image and the second modality medical image each contain a representation of the aforementioned anatomical structure. However, the representation of the anatomical structure may differ between the first modality medical image and the second modality medical image in particular with regard to at least one of the colour values and contrast values used to represent the anatomical structure and with regard to the spatial properties (in particular at least one of position and orientation) of the anatomical structure for example relative to image features surrounding the anatomical structure (which represent in particular other anatomical structures contained in the patient's body) or its geometric properties (in particular at least one of size - in particular volume - and shape).
Preferably, atlas data is acquired which describes a first modality atlas image. The first modality atlas image describes in particular a general structure of the anatomical structure in the first imaging modality, in particular the atlas data was generated based on medical image data which was generated by applying the first imaging modality. The atlas data in particular contains information describing the general structure of the anatomical structure (in this disclosure also referred to as general anatomical structure) in particular in a medical image (i.e. in particular by way of a medical image). The atlas data preferably contains element representation information which describes the representation of physical structures, for example the anatomical elements (referred to as "atlas elements") of the general anatomical structure in atlas images described by the atlas data. This representation corresponds to the representation of the physical structures in an image which is generated by means of an analytical device
from a patient having for example an anatomical structure which is identical to the general anatomical structure. The influence of the generating process on the representation of the one or more physical structures is represented by a parameter set (for example, scanning parameters such as the type of analytical device used to generate the medical image data and/or the measurement parameters which are set, in particular adjusted, on the analytical device and have an influence on the representation). The parameter set represents and in particular comprises one or in particular more parameters (also called representation parameters, such as the type of analytical device used for generating the medical image data and for example the magnetic field strength in an MRT device or the acceleration voltage in CT devices used to generate the medical image data) which reflect and in particular are parameters which have an influence on the representation of the image elements in the medical image which serves as a basis for generating the atlas image. This at least one parameter (in particular its value and/or values) is preferably described by the element representation information and therefore the atlas data.
The atlas data is acquired in particular from an anatomical atlas which typically consists of a plurality of generic models of objects, wherein the generic models of the objects together form a complex structure. The atlas data may therefore also be called generic patient model data. The atlas image is generated based on in particular a statistical analysis of the anatomy of a plurality of human bodies, more particularly based on a statistical analysis of the anatomy of an anatomical body structure in a plurality of human bodies corresponding to the aforementioned anatomical structure of the patient. The atlas of a femur, for example, can comprise the head, the neck, the body, the greater trochanter, the lesser trochanter, and the lower extremity as objects which together make up the complex structure. The atlas of a brain, for example, can comprise the telencephalon, the cerebellum, the diencephalon, the pons, the mesencephalon and the medulla oblongata as the objects making up the complex structure. Examples of the aforementioned influences on the representation include influences on the image values which represent the physical structures, such as for instance influences on a grey value (representing the image value) which represents the anatomical element, or influences on the position of an image value in a colour space which represents the anatomical element. Other examples include influences on contrast, image value range, gamut, etc. The atlas data describes in particular an atlas image of the general anatomical structure in the first imaging modality (also called first modality atlas image) and comprises information about the representation of the
general structure in the second imaging modality. The information about the representation of the general structure in the second imaging modality is preferably included in a look-up table contained in the atlas data; further preferably it is determined from an atlas image of the general anatomical structure in the second imaging modality (also called second modality atlas image). The second modality atlas image then is included in the atlas data. The information about the representation of the general structure is determined based on the second modality atlas image.
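Purely as an illustration of how such a look-up table might be organised (the representation class names and numeric values below are invented for this sketch and are not taken from this disclosure), the element representation information could be stored for both imaging modalities as follows:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ElementRepresentation:
    """Element representation information of one representation class
    (tissue type) in the first (MR) and second (CT) imaging modality."""
    mr_grey_value: float   # typical normalised grey value in the MR atlas image
    ct_hounsfield: float   # corresponding Hounsfield value in the CT atlas image

# Hypothetical look-up table as it might be contained in the atlas data.
ATLAS_LOOKUP = {
    "cerebrospinal_fluid": ElementRepresentation(mr_grey_value=0.9, ct_hounsfield=5.0),
    "grey_matter":         ElementRepresentation(mr_grey_value=0.6, ct_hounsfield=40.0),
    "white_matter":        ElementRepresentation(mr_grey_value=0.4, ct_hounsfield=30.0),
    "bone":                ElementRepresentation(mr_grey_value=0.1, ct_hounsfield=700.0),
}
```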
The general structure (and therefore also the atlas data) is preferably generated outside of the disclosed method based on medical image information which is gathered from a plurality of human bodies. Preferably, these bodies share a common characteristic such as for example at least one of gender, age and ethnicity. However, it is also possible and within the framework of the disclosed method to use atlas data describing a general structure which was generated on the basis of medical image information which was gathered from human bodies which do not share a common characteristic such as for example ethnicity; in particular the atlas data may have been generated on the basis of human bodies of different ethnicity. The general structure represents in particular the geometry (in particular at least one of size - in particular volume - and shape) of the anatomy of at least a part of the body which was generated on the basis of the statistical analysis of the plurality of human bodies. For example, the general structure represents the most probable geometry of a patient's body having a specific characteristic such as at least one of gender, age and ethnicity. In a particular embodiment, the general structure may have been generated also on the basis of information about a specific pathologic state, for example the plurality of human bodies serving as a basis for the general structure may share a common pathologic state (such as a tumour disease or an anatomic anomaly). The representation of the general structure will vary between different imaging modalities. The representation is understood to encompass in particular the image appearance of the anatomical structure represented by the general structure in a specific imaging modality which is governed by for example the colour values assigned to specific physical structures (for example tissue - in particular soft tissue and bony tissue, and fluids such as liquids or gas, in particular liquids such as liquor or a gas such as air) in the respective imaging modality. The representation is described by the aforementioned element representation information. For example, the colour values generated by a computed tomography will be in a grey value scale which is defined in the Hounsfield scale and represents the absorbance of physical structures for the x-rays with which the physi-
cal structures are irradiated to generate a computed tomography. In this case, bony tissue will be rendered in lighter grey values towards the white end of the grey scale, and soft tissue and fluids will be rendered in darker grey values towards the black end of the grey scale. Alternatively or additionally, the image appearance may be governed by colour contrasts - this would be the case in particular if the applied imaging modality is magnetic resonance tomography. In magnetic resonance imaging, physical structures are delineated from one another by colour contrasts which depend in particular on the magnetic field strength, the type of magnetic pulse sequence, the considered type of magnetic relaxation (for example T1 or T2) and the magnetic behaviour of the physical structure in an external magnetic field. The atlas data comprises in particular information about the representation of the general structure in the first imaging modality and in the second imaging modality. This information is contained in particular in the element representation information. Furthermore, the atlas data in particular comprises information which allows mapping between the representation of a specific general structure in different imaging modalities (in particular in the first imaging modality and the second imaging modality). This information in particular allows determining how a specific part of the general structure, the appearance of which is known in the first imaging modality, would appear in the second imaging modality. To this end, the atlas data was preferably generated based on a statistical analysis of a plurality of medical images of anatomical structures corresponding to the general structure which were taken with the first imaging modality in order to generate the first modality atlas image and also based on a statistical analysis of a plurality of medical images of the respective anatomical structures which were taken with the second imaging modality in particular in order to generate the second modality atlas image. The atlas data then includes for example correspondence data describing which image features in the first modality atlas image correspond to the respective image features in the second modality atlas image in particular by assigning them to the respective anatomical structure which they represent.
The atlas data preferably comprises atlas geometry information which describes the geometric properties (in particular at least one of size - in particular volume - and shape) of the general anatomical structure. The spatial information can comprise only one set of static spatial information, i.e. spatial information which does not change over time and only provides one set of spatial properties for the general anatomical structure, or can comprise a plurality of sets of static spatial information which respectively describes the spatial properties of the general
anatomical structure in different states, for instance at different points in time during for example a vital movement such as for example a breathing cycle. The vital movement is a movement of parts of the body due to vital functions of the body, such as for example breathing and/or the heart beat and/or digestive movements. The term "vital movements" covers any kind of movement of the body which is performed unconsciously and in particular controlled by the brain stem. The atlas spatial information can also describe different movement or posture states of the patient, such as the patient running, walking, standing or lying down. It can also cover different pathological states of a patient, such as a patient with an infection or a tumour in a particular part of the body, or particular states of a patient during surgery, such as a patient with an exposed brain (i.e. in a state after craniotomy) resulting in a brain shift (which can in turn depend on the positioning of the head). The term "posture" as used here refers in particular to different positions of the extremities of the body, such as for example raised or lowered hands.
For example, representation class data is acquired which describes a representation class of the image elements describing the anatomical structure in the first modality medical image, the second modality medical image, the first modality atlas image and - as far as applicable - the second modality atlas image. The representation class describes in particular at least one of the aforementioned colour contrasts, colour values and the type of physical structure represented by the image elements. In a preferred embodiment, the representation class data comprises the element representation information, which then describes in particular the representation class of the respective image element(s). The physical structure can be for example an anatomical structure (such as soft tissue or bony tissue) or a fluid as mentioned above. In particular, anatomical structures are assigned tissue classes which define the type of tissue contained in the anatomical structure represented by the respective image elements. On this basis, the above-described mapping is established between image elements in the first modality atlas image and image elements in the second modality atlas image which represent the same anatomical structure. This mapping preferably is also contained in the atlas data.
Preferably, a first matching transformation is determined based on the first modality image data and the atlas data. The first matching transformation is in particular a matching transformation between the image information of the first modality image data and the first modality atlas image. Within the context of this disclosure, a matching transformation is understood to
be in particular a mapping function (more particularly, a linear mapping function) for mapping information (in particular positions) defined in a reference system (in particular positional reference system) used to define for example the positions in a first data set onto information (in particular positions) defined in a (in particular different) reference system (in particular positional reference system) used to define for example the positions in a second data set. A transformation can be determined based on for example executing at least one of an image segmentation algorithm and an image fusion algorithm (for example, an elastic image fusion algorithm as described below in the chapter "Definitions"). In particular, the transformations therefore are coordinate transformations and/or mappings between coordinate systems, for example between the coordinate system used to define information contained in the first data set and the coordinate system used to define information contained in the second data set. In the step of determining the first matching transformation, the first data set is embodied by the image information of the first modality image data and the second data set is embodied by the second modality atlas image. In particular, the first matching transformation and the second modality image representation are determined based on the imaging modality data in order to input into the disclosed method information about which imaging modalities are to be considered during execution of the disclosed method.
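As a trivial illustration only (the matching transformations of this disclosure are preferably elastic rather than rigid), mapping positions from the reference system of a first data set into that of a second data set could be sketched as follows; the function name and parameters are assumptions of the sketch.

```python
import numpy as np

def map_positions(points, rotation, translation):
    """Map positions defined in the reference system of a first data set into
    the reference system of a second data set.

    points: (N, 3) array of positions; rotation: (3, 3) matrix;
    translation: (3,) vector. An elastic matching transformation would
    additionally apply a per-position deformation field.
    """
    return points @ rotation.T + translation
```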
In particular, the first matching transformation is constituted to match the representation of the anatomical structure in the first modality medical image with the representation of the general structure of the anatomical structure in the first modality atlas image. In the context of this disclosure, a matching transformation contains at least one of a mapping and a matching function. Matching is understood to encompass in particular a spatial adaptation of (in particular positional) information contained in a first data set to (in particular positional) information contained in a second data set. This can happen for example by adaptation of the geometric properties (for example, at least one of size - in particular volume - and shape) of a structure described by both data sets. For example, the representation of the general structure in the first modality atlas image is adapted to the spatial properties of the representation of the anatomical structure in the first modality medical image. In particular, the representation (in particular the geometric properties) of the general representation of the anatomical structure in the first modality atlas image is deformed to fit to the representation in the first modality medical image. The first matching transformation therefore is in particular an elastic transformation which can be implemented as an elastic fusion algorithm. An elastic transformation in
the meaning of this disclosure is in particular a transformation which maps a first set of spatial information onto a second set of spatial information while adapting at least the second set to the first set in order to achieve congruence between the two sets of spatial information. In the ideal case, the matching transformation is in the end determined to be unity (or at least optimized to be a value as close to unity as possible, considering in particular a predetermined maximum difference between the matching transformation and unity). In the present case this implies that the representation of the anatomical structure in the first modality atlas image is deformed to constitute a best fit (for example in the sense of a least-squares fitting) to the geometry of the representation of the anatomical structure in the first modality medical image.
Based on the aforementioned information about corresponding anatomical structures in the first modality atlas image and the second modality atlas image, a second modality image representation of the first modality medical image is then preferably determined based on the first matching transformation and the first modality atlas image and the information about the representation of the general structure of the anatomical structure in the second imaging modality (in particular based on the second modality atlas image). The second modality image representation describes what the first modality atlas image (in particular the matched first modality atlas image) would look like if the (matched) general structure had been generated on the basis of medical image data which had been acquired with the second imaging modality. The second modality image representation is preferably determined by replacing the element representation information of the image elements in the (matched) first modality atlas image with the corresponding element representation information for the second imaging modality.
Alternatively and according to a less preferred embodiment, the second modality image representation is determined by determining a modality transformation between the first modality atlas image and the second modality atlas image which again is preferably an elastic fusion transformation for matching the second modality atlas image to the first modality atlas image. In particular, the modality transformation is a transformation between the first modality atlas image which has been matched to the first modality image data (the matched first modality atlas image), and the second modality atlas image. For example, the first matching transformation is applied to the first modality atlas image in order to determine a matched first modal-
ity atlas image containing a representation of the general structure of the anatomical structure which has been matched to the representation of the anatomical structure in the first modality medical image. The modality transformation then is preferably determined as a transformation between the matched first modality atlas image and the second modality atlas image. Advantageously, the representation of the anatomical structure in the second modality atlas image is matched (i.e. mapped and deformed) to fit the representation of the anatomical structure in the matched first modality atlas image.
Preferably, a second matching transformation is determined between the second modality image representation and the second modality medical image. The second matching transformation is determined in particular based on the second modality image representation and the second modality image data. Further particularly, the second modality image representation of the anatomical structure is matched (i.e. at least one of mapped and deformed) to fit the representation of the anatomical structure in the second modality medical image. This serves in particular to compare the representation of the anatomical structure in the second modality image representation and in the second modality medical image. In particular, at least one of the spatial properties (for example the positions) and the geometric properties (in particular the shape) of the two representations can be compared and differences can be determined in particular by determining parameters of the second matching transformation which are nonzero. An advantage of conducting the comparison based on the aforementioned method steps is that data processing instabilities which would occur when comparing image representations of different modality can be avoided since the second modality image representation and the second modality medical image contain image information in particular about the anatomical structure which is defined in the same space of colour values, colour contrasts and types of anatomical structures and which therefore in particular use the same set of representation classes (in particular tissue classes) for describing the image information.
The disclosed method can be carried out irrespective of the number of space dimensions to be considered, for example it can be executed in both a two-dimensional and a three-dimensional environment. In particular, any positions and image information (in both the medical images and the atlas images) can be defined in two or three dimensions.
A specific preferred embodiment of the invention is directed to avoiding data processing instabilities in case at least one of a spatial (in particular a positional) and a geometric change (in particular a change in shape) has occurred to the anatomical structure between the point in time at which the first modality image data was generated and the point in time at which the second modality image data was generated. For example, the second modality medical image may display a comparably large positional shift of the outer brain surface compared to the position of the outer brain surface in the first modality medical image and therefore in the second modality image representation. This positional shift may be due to for example a loss of surface tension on the outer surface of the brain below the position of a craniotomy which may lead to a collapse of the brain structure in the gravitational field, in particular away from the position of the craniotomy. This positional shift may, however, lead to data processing instabilities when matching the second modality medical image to the second modality image representation. In particular, a large positional shift may hamper automatic detection of anatomical structures corresponding to each other in both representations. In order to avoid this problem, the present invention preferably comprises a step of defining a structural change region in the second modality image representation. The structural change region comprises in particular a placeholder for a data structure which represents a change of the anatomical structure. The placeholder is in particular a seed structure which can be used in particular for adapting the second matching transformation to the change of the anatomical structure, in particular to a difference between the representations of the anatomical structure in the second modality image representation and the second modality medical image. For example, the placeholder can be expanded in the second modality image representation by applying a field of shift vectors to the placeholder and correspondingly shifting and/or deforming the structures in the second modality image representation which surround the placeholder. The expansion is preferably carried out until the placeholder corresponds to the difference in representation to a predetermined degree of similarity.
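Purely as an illustrative sketch (assuming a NumPy image, a boolean seed mask and some scalar similarity function, none of which are specified in this disclosure), the expansion loop with its similarity-based stopping criterion might be organised as follows; binary dilation is used here in place of the shift-vector field described above.

```python
import numpy as np
from scipy import ndimage

def grow_placeholder_until_similar(representation, medical_image, seed_mask,
                                   similarity, target=0.8,
                                   cavity_value=-1000.0, max_iterations=100):
    """Expand the placeholder seed step by step and stop once the second
    modality image representation has become sufficiently similar to the
    second modality medical image.

    `similarity` is any measure of similarity returning a scalar (e.g. an
    averaged local cross-correlation); `target` is the predetermined degree
    of similarity. One binary dilation corresponds to one expansion step.
    """
    grown = seed_mask.copy()
    current = representation.copy()
    for _ in range(max_iterations):
        current = representation.copy()
        current[grown] = cavity_value  # render the expanded placeholder
        if similarity(current, medical_image) >= target:
            break
        grown = ndimage.binary_dilation(grown)
    return current, grown
```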
Preferably, similarity data describing a measure of similarity between the second modality image representation and the second modality medical image (in particular between the respective representations of the anatomical structure) is determined. The measure of similarity is determined for example based on comparing the representation classes of the respective image elements describing the anatomical structure in the second modality image
representation and the second modality medical image, respectively. The measure of similarity preferably includes (in particular is) a cross-correlation, for example a local cross-correlation between colour contrasts in the second modality image representation and colour contrasts in the second modality medical image. Alternatively or additionally, it includes (in particular is) a local cross-correlation between colour values in the second modality image representation and colour values in the second modality medical image. As a preferred embodiment, regions in the second modality image representation and the second modality medical image for which at least substantially no similarity (in particular no similarity and/or a low value of the measure of similarity) has been determined are excluded as a basis for determining the second matching transformation. This avoids hampered data processing when determining the second matching transformation. Alternatively or additionally, a structural change region may be defined in a region of at least substantially no similarity and the placeholder may be expanded. The similarity data may then be re-determined in particular until a predetermined value (in particular an acceptable level) of the measure of similarity is determined. Thereby, information about a structural change of the anatomical structure and a change of at least one of shape, size and position of the constituents of the anatomical structure which had already been represented in the first modality medical image can be determined in a concise manner.
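The following is a minimal sketch, assuming 2D NumPy images, of a windowed (local) cross-correlation and of a mask that excludes regions of substantially no similarity from the determination of the second matching transformation; the window size and threshold are illustrative values only.

```python
import numpy as np
from scipy import ndimage

def local_cross_correlation(a, b, window=9):
    """Local (windowed) cross-correlation between two images of equal shape."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)

    def box(x):
        # Mean over a square window around every voxel.
        return ndimage.uniform_filter(x, size=window)

    mean_a, mean_b = box(a), box(b)
    covariance = box(a * b) - mean_a * mean_b
    variance_a = box(a * a) - mean_a ** 2
    variance_b = box(b * b) - mean_b ** 2
    return covariance / np.sqrt(np.maximum(variance_a * variance_b, 1e-12))

def similarity_mask(representation, medical_image, threshold=0.1):
    """Boolean mask of regions retained for determining the second matching
    transformation; regions of (at least substantially) no local similarity
    are excluded, as described above."""
    return local_cross_correlation(representation, medical_image) > threshold
```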
The structural change region may alternatively or additionally also be in particular an anatomical feature which is represented in the first modality medical image and therefore the second modality image representation, but is not represented in the second modality medical image (for example, an implant or a tumour which has been implanted or grown, respectively, in the meantime). In this case, the change region is preferably defined in the second modality image representation at the position of the respective anatomical structure, which is then compressed for example by applying a vector field in an inward direction of the anatomical structure in order to reduce it until it is no longer present and a predetermined level of similarity to the second modality medical image is reached.
The invention also relates to a program which, when running on a computer or when loaded onto a computer, causes the computer to perform one or more or all of the method steps described herein and/or to a program storage medium on which the program is stored (in particular in a non-transitory form) and/or to a computer on which the program is running or into
the memory of which the program is loaded and/or to a signal wave, in particular a digital signal wave, carrying information which represents the program, in particular the aforementioned program, which in particular comprises code means which are adapted to perform any or all of the method steps described herein.
Definitions
In this chapter, definitions are disclosed which define terminology used in the present disclosure. These definitions also form part of the present disclosure.
Within the framework of the invention, computer program elements can be embodied by hardware and/or software (this includes firmware, resident software, micro-code, etc.). Within the framework of the invention, computer program elements can take the form of a computer program product which can be embodied by a computer-usable, in particular computer-readable data storage medium comprising computer-usable, in particular computer-readable program instructions, "code" or a "computer program" embodied in said data storage medium for use on or in connection with the instruction-executing system. Such a system can be a computer; a computer can be a data processing device comprising means for executing the computer program elements and/or the program in accordance with the invention, in particular a data processing device comprising a digital processor (central processing unit or CPU) which executes the computer program elements and optionally a volatile memory (in particular, a random access memory or RAM) for storing data used for and/or produced by executing the computer program elements. Within the framework of the present invention, a computer-usable, in particular computer-readable data storage medium can be any data storage medium which can include, store, communicate, propagate or transport the program for use on or in connection with the instruction-executing system, apparatus or device. The computer-usable, in particular computer-readable data storage medium can for example be, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device or a medium of propagation such as for example the Internet. The computer-usable or computer-readable data storage medium could even for example be paper or another suitable medium onto which the program is printed, since the program could be electronically captured, for example by optically scanning the paper or other suitable medium, and then compiled, interpreted or otherwise processed in a suitable manner. The data storage medium is
preferably a non-volatile data storage medium. The computer program product and any software and/or hardware described here form the various means for performing the functions of the invention in the example embodiments. The computer and/or data processing device can in particular include a guidance information device which includes means for outputting guidance information. The guidance information can be outputted, for example to a user, visually by a visual indicating means (for example, a monitor and/or a lamp) and/or acoustically by an acoustic indicating means (for example, a loudspeaker and/or a digital speech output device) and/or tactilely by a tactile indicating means (for example, a vibrating element or vibration element incorporated into an instrument).
In the field of medicine, imaging methods are used to generate image data (for example, two-dimensional or three-dimensional image data) of anatomical structures (such as soft tissues, bones, organs, etc.) of the human body. Medical imaging methods are understood to mean advantageously apparatus-based imaging methods (so-called medical imaging modalities and/or radiological imaging methods), such as for instance computed tomography (CT) and cone beam computed tomography (CBCT; in particular volumetric CBCT), x-ray tomography, magnetic resonance tomography (MRT or MRI), conventional x-ray, sonography and/or ultrasound examinations, and positron emission tomography. Analytical devices are in particular used to generate the image data in apparatus-based imaging methods. The imaging methods are in particular used for medical diagnostics, to analyse the anatomical body in order to generate images which are described by the image data. The imaging methods are in particular used to detect pathological changes in the human body. However, some of the changes in the anatomical structure, in particular the pathological changes in the structures (tissue), may not be detectable and in particular may not be visible in the images generated by the imaging methods. A tumour for example represents an example of a change in an anatomical structure. If the tumour grows, it may then be said to represent an expanded anatomical structure. This expanded anatomical structure may not be detectable; in particular, only a part of the expanded anatomical structure may be detectable. Primary/high-grade brain tumours are for example usually visible on MRI scans when contrast agents are used to infiltrate the tumour. The MRI scans represent an example of an imaging method.
The method in accordance with the invention is in particular a data processing method. The data processing method is preferably performed using technical means, in particular a com-
puter. The data processing method is in particular executed by or on the computer. The computer in particular comprises a processor and a memory in order to process the data, in particular electronically and/or optically. The calculating steps described are in particular performed by a computer. Determining steps or calculating steps and acquiring steps are in particular steps of determining data within the framework of the technical data processing method, in particular within the framework of a program. A computer is in particular any kind of data processing device, in particular electronic data processing device. A computer can be a device which is generally thought of as such, for example desktop PCs, notebooks, netbooks, etc., but can also be any programmable apparatus, such as for example a mobile phone or an embedded processor. A computer can in particular comprise a system (network) of "sub-computers", wherein each sub-computer represents a computer in its own right. The term "computer" includes a cloud computer, in particular a cloud server. The term "cloud computer" includes a cloud computer system which in particular comprises a system of at least one cloud computer and in particular a plurality of operatively interconnected cloud computers such as a server farm. Such a cloud computer is preferably connected to a wide area network such as the world wide web (WWW) and located in a so-called cloud of computers which are all connected to the world wide web. Such an infrastructure is used for "cloud computing" which describes computation, software, data access and storage services which do not require the end user to know the physical location and/or configuration of the computer delivering a specific service. In particular, the term "cloud" is used as a metaphor for the internet (world wide web). In particular, the cloud provides computing infrastructure as a service (IaaS). The cloud computer can function as a virtual host for an operating system and/or data processing application which is used to execute the method of the invention. The cloud computer is for example an elastic compute cloud (EC2) as provided by Amazon Web Services™. A computer in particular comprises interfaces in order to receive or output data and/or perform an analogue-to-digital conversion. The data are in particular data which represent physical properties and/or are generated from technical signals. The technical signals are in particular generated by means of (technical) detection devices (such as for example devices for detecting marker devices) and/or (technical) analytical devices (such as for example devices for performing imaging methods), wherein the technical signals are in particular electrical or optical signals. The technical signals in particular represent the data received or outputted by the computer.
The expression "acquiring data" encompasses in particular (within the framework of a data processing method) the scenario in which the data are determined by the data processing method or program. Determining data in particular encompasses measuring physical quantities and transforming the measured values into in particular digital data and/or computing the data by means of a computer, in particular computing the data within the method of the invention. The meaning of "acquiring data" in particular also encompasses the scenario in which the data are received or retrieved by the data processing method or program, for example from another program, a previous method step or a data storage medium, in particular for further processing by the data processing method or program. Thus, "acquiring data" can also for example mean waiting to receive data and/or receiving the data. The received data can for example be inputted via an interface. "Acquiring data" can also mean that the data processing method or program performs steps in order to (actively) receive or retrieve the data from a data source, for instance a data storage medium (such as for example a ROM, RAM, database, hard disc, etc.) or via the interface (for instance, from another computer or a network). The data can achieve the state of being "ready for use" by performing an additional step before the acquiring step. In accordance with this additional step, the data are generated in order to be acquired. The data are in particular detected or captured (for example, by an analytical device). Alternatively or additionally, the data are inputted in accordance with the additional step, for instance via interfaces. The data generated can in particular be inputted (for instance, into the computer). In accordance with the additional step (which precedes the acquiring step), the data can also be provided by performing the additional step of storing the data in a data storage medium (such as for example a ROM, RAM, CD and/or hard drive), such that they are ready for use within the framework of the method or program in accordance with the invention. Thus, "acquiring data" can also involve commanding a device to obtain and/or provide the data to be acquired. The acquiring step in particular does not involve an invasive step which would represent a substantial physical interference with the body requiring professional medical expertise to be carried out and entailing a substantial health risk even when earned out with the required professional care and expertise. Acquiring, in particular determining, data in particular does not involve a surgical step and in particular does not involve a step of treating a human or animal body using surgery or therapy. This also applies in particular to any steps directed to determining data. In order to distinguish the different data used by the present method, the data are denoted (i.e. referred to) as "XY data" and the like and are de-
fined in particular by the information which they describe which is preferably called "XY information".
The first and second matching transformations are for example image fusion transformations, in particular elastic fusion transformations which are designed to enable a seamless transition from one data set (e.g. first data set, e.g. first image) to another data set (e.g. second data set, e.g. second image). In this application, the term "image morphing" is also used as an alternative to the term "image fusion", but with the same meaning. The transformations are in particular designed such that one of the aforementioned first and second data sets (images) is deformed, in particular in such a way that corresponding structures (in particular, corresponding image elements) are arranged at the same position as in the other of the first and second images. The deformed (transformed) image which is transformed from one of the first and second images is in particular as similar as possible to the other of the first and second images. Preferably, (numerical) optimization algorithms are applied in order to find the transformation which results in an optimum degree of similarity. The degree of similarity is preferably measured by way of a measure of similarity (also referred to in the following as a "similarity measure"). The parameters of the optimization algorithm are in particular vectors of a deformation field. These vectors are determined by the optimization algorithm which results in an optimum degree of similarity. Thus, the optimum degree of similarity represents a condition, in particular a constraint, for the optimization algorithm. The bases of the vectors lie in particular at voxel positions of one of the first and second images which is to be transformed, and the tips of the vectors lie at the corresponding voxel positions in the transformed image. A plurality of these vectors are preferably provided, for instance more than twenty or a hundred or a thousand or ten thousand, etc. Preferably, there are (other) constraints on the transformation (deformation), in particular in order to avoid pathological deformations (for instance, all the voxels being shifted to the same position by the transformation). The constraints include in particular the constraint that the transformation is regular, which in particular means that a Jacobian determinant calculated from a matrix of the deformation field (in particular, the vector field) is larger than zero. The constraints include in particular the constraint that the transformed (deformed) image is not self-intersecting and in particular that the transformed (deformed) image does not comprise faults and/or ruptures. The constraints include in particular the constraint that if a regular grid is transformed simultaneously with the image and in a corresponding manner, the grid is not allowed to interfold at any of its locations. The optimization problem is in particular solved iteratively, in particular by means of an optimization algorithm which is in particular a first-order optimization algorithm, in particular a gradient descent algorithm. Other examples of optimization algorithms include optimization algorithms which do not use derivatives, such as the downhill simplex algorithm, or algorithms which use higher-order derivatives, such as Newton-like algorithms. The optimization algorithm preferably performs a local optimization. If there are a plurality of local optima, global algorithms such as simulated annealing or genetic algorithms can be used. In the case of linear optimization problems, the simplex method can for instance be used.
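The constrained optimization described in the preceding paragraphs can be written compactly as follows (this formalisation is an editorial aid, not wording taken from the application):

$$ F^{*} \;=\; \arg\max_{F}\; S\bigl(I_{2},\, I_{1}\circ(\mathrm{id}+F)\bigr) \quad\text{subject to}\quad \det\bigl(\mathbb{1} + \nabla F(x)\bigr) > 0 \;\;\text{for all voxels } x, $$

where $I_{1}$ is the image to be deformed, $I_{2}$ is the reference image, $F$ is the deformation (vector) field, $S$ is the measure of similarity, and the determinant condition expresses the regularity (non-folding) constraint on the transformation $\mathrm{id}+F$.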
In the steps of the optimization algorithms, the voxels are in particular shifted by a magnitude in a direction such that the degree of similarity is increased. This magnitude is preferably less than a predefined limit, for instance less than 1/10 or 1/100 or 1/1000 of the diameter of the image, and in particular about equal to or less than the distance between neighboring voxels. Due in particular to a high number of (iteration) steps, large deformations can be implemented.
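The following sketch illustrates, in simplified two-dimensional form, how such an iterative optimization of a deformation field can look in practice. It is not the implementation underlying the invention; it assumes numpy and scipy are available, and it uses a demons-style update as a stand-in for the gradient descent step, together with a per-iteration cap on the voxel shift and the Jacobian-determinant regularity check discussed above.

```python
# Minimal sketch (not the patented method): demons-style elastic registration of
# two 2-D images, illustrating deformation-field optimisation, the step-size cap
# and the Jacobian-based regularity (non-folding) check.
import numpy as np
from scipy.ndimage import map_coordinates, gaussian_filter

def warp(image, field):
    """Deform `image` with the displacement field `field` (shape 2 x H x W)."""
    h, w = image.shape
    grid = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(image, grid + field, order=1, mode="nearest")

def jacobian_is_regular(field):
    """True if det(J) > 0 everywhere, i.e. the transformation does not fold."""
    dy_dy, dy_dx = np.gradient(field[0])
    dx_dy, dx_dx = np.gradient(field[1])
    det = (1.0 + dy_dy) * (1.0 + dx_dx) - dy_dx * dx_dy
    return np.all(det > 0.0)

def register(fixed, moving, iterations=200, max_step=1.0, smooth_sigma=2.0):
    field = np.zeros((2,) + fixed.shape)          # one displacement vector per voxel
    for _ in range(iterations):
        warped = warp(moving, field)
        diff = warped - fixed                     # drives the (SSD-based) similarity
        gy, gx = np.gradient(warped)
        norm = gy**2 + gx**2 + diff**2 + 1e-9
        step = np.stack([-diff * gy / norm, -diff * gx / norm])
        # cap the per-iteration shift, e.g. to about one voxel spacing
        mag = np.maximum(np.sqrt(step[0]**2 + step[1]**2), 1e-9)
        step *= np.minimum(1.0, max_step / mag)
        candidate = gaussian_filter(field + step, sigma=(0, smooth_sigma, smooth_sigma))
        if jacobian_is_regular(candidate):        # keep the constraint det(J) > 0
            field = candidate
    return field
```

In a full implementation the smoothing, the similarity measure and the stopping criterion would of course be chosen per use case; the sketch only shows how small, capped per-step shifts can accumulate into a large overall deformation.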
The determined elastic fusion transformation can in particular be used to determine a degree of similarity (similarity measure, also referred to as a "measure of similarity") between the first and second data set (first and second image). To this end, the deviation between the elastic fusion transformation and an identity transformation is determined. The degree of deviation can for instance be calculated by determining the difference between the determinant of the elastic fusion transformation and that of the identity transformation. The higher the deviation, the lower the similarity. Thus, the degree of deviation can be used to determine a measure of similarity.
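One way to read this paragraph quantitatively (an interpretation, not a formula given in the application) is to accumulate, over the image domain $\Omega$, how far the Jacobian determinant of the elastic fusion transformation $\varphi = \mathrm{id} + F$ departs from that of the identity, which equals $1$ everywhere:

$$ D \;=\; \frac{1}{|\Omega|}\int_{\Omega}\bigl|\det J_{\varphi}(x) - 1\bigr|\,\mathrm{d}x, $$

with the measure of similarity then taken as any decreasing function of $D$, for instance $S = e^{-D}$.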
A measure of similarity can in particular be determined on the basis of a determined correlation between the first and second data set.
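A standard correlation-based choice is the normalised cross-correlation between corresponding image elements $A_{i}$ and $B_{i}$ of the two data sets,

$$ \mathrm{NCC}(A,B) \;=\; \frac{\sum_{i}\,(A_{i}-\bar{A})(B_{i}-\bar{B})}{\sqrt{\sum_{i}(A_{i}-\bar{A})^{2}\;\sum_{i}(B_{i}-\bar{B})^{2}}}, $$

which can be evaluated globally or over local windows (compare the local cross-correlation mentioned in claim 11 below); this formula is the textbook definition, given here only for reference.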
Description of the figures
In the following, a preferred embodiment of the present invention is described with reference to the figures, without limiting the present invention to the features which are described in the following and shown in the figures, wherein
Fig. 1 describes a general algorithm used for determining the second modality image representation;
Fig. 2 illustrates the meaning of a structural change region.
According to Fig. 1, the first modality image data and the atlas data is acquired in step S1. The first modality medical image is then registered with the first modality atlas image in step S2, in particular by determining the first matching transformation. Then, the method carries on in step S3 with segmenting the representation classes in the first modality medical image, for example by applying an expectation maximization algorithm. In step S4, the second modality image data is acquired. Preferably, the first modality image data comprises imaging modality data describing the type of imaging modality which is the first imaging modality. Preferably, the second modality image data comprises imaging modality data which describes the type of imaging modality which is the second imaging modality.
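As an illustration of the segmentation in step S3, a minimal expectation-maximization segmentation can be realised with a Gaussian mixture model over the voxel intensities. This is only a sketch of the general technique named above, not the algorithm used by the invention; it assumes numpy and scikit-learn are available and that the first modality medical image is given as a numpy array.

```python
# Sketch of intensity-based tissue segmentation via expectation maximization,
# here using a Gaussian mixture model fitted to the voxel intensities.
import numpy as np
from sklearn.mixture import GaussianMixture

def em_segment(volume: np.ndarray, n_classes: int = 4, seed: int = 0) -> np.ndarray:
    """Assign each voxel to one of n_classes intensity classes (tissue types)."""
    intensities = volume.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_classes, random_state=seed).fit(intensities)
    labels = gmm.predict(intensities)
    # relabel so that class 0 is the darkest class and class n_classes-1 the brightest
    order = np.argsort(gmm.means_.ravel())
    remap = np.empty_like(order)
    remap[order] = np.arange(n_classes)
    return remap[labels].reshape(volume.shape)
```

In practice such a segmentation would additionally use the atlas prior made available through the first matching transformation; the sketch deliberately omits that coupling.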
In step S5, the first modality medical image is then simulated in the second imaging modality by determining the second modality image representation. This simulation is carried out in particular based on the results of the segmentation in step S3 and on information contained in the atlas data which describes how a given representation class (in particular tissue type) appears in the first imaging modality and in the second imaging modality.
In order to compare the representation of the anatomical structure in the first modality medical image (in particular in the second modality image representation) with its representation in the second modality medical image, an elastic fusion is performed in step S6 between the second modality image representation and the second modality medical image. This elastic fusion is an example of the second matching transformation.
A specific example of the workflow shown in Fig. 1 is the following: a patient usually undergoes several pre-operative MR scans, for example a T1 and a T2 scan. Different types of tissue can be determined from the images taken in T1 and T2, respectively, and a combined data set representing the first modality image data can be generated from the T1 and T2 images. The first imaging modality is therefore set to be MR. Later on, a CT image may be taken of the patient, and the second imaging modality is therefore CT. Based on information about the Hounsfield values for specific types of tissue in the CT image, a CT is simulated from the combined T1 and T2 data set (i.e. from the first modality image data). This results in one genuine and one simulated CT data set, and these two data sets can be fused with a higher stability than would be the case when using a multi-modal MR-CT fusion algorithm which uses for example mutual information.
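A minimal sketch of the simulation step in this MR-to-CT example is a simple lookup from segmented tissue class to a nominal Hounsfield value. The class labels and HU values below are illustrative, approximate textbook values chosen for this sketch, not values taken from the application.

```python
import numpy as np

# Illustrative (approximate) Hounsfield units per segmented tissue class
HOUNSFIELD_BY_CLASS = {
    0: -1000.0,  # air
    1: 10.0,     # cerebrospinal fluid (roughly water-like)
    2: 30.0,     # white matter (approximate soft-tissue value)
    3: 40.0,     # grey matter (approximate soft-tissue value)
    4: 700.0,    # cortical bone (approximate)
}

def simulate_ct(class_labels: np.ndarray) -> np.ndarray:
    """Map a label volume (e.g. from the T1/T2 segmentation) to a pseudo-CT in HU."""
    lut = np.array([HOUNSFIELD_BY_CLASS[c] for c in sorted(HOUNSFIELD_BY_CLASS)])
    return lut[class_labels]

# Usage on a tiny toy label image
labels = np.array([[0, 2, 3],
                   [1, 4, 2]])
pseudo_ct = simulate_ct(labels)  # same shape as labels, now Hounsfield values
```

The genuine CT and such a pseudo-CT then share the same modality, so a mono-modal similarity measure (for example the cross-correlation given above) can drive the second matching transformation.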
Fig. 2 shows how a structural change region is used to support determining the second matching transformation. Fig. 2(a) illustrates a second modality image representation of an anatomical structure embodied by a patient's brain 1 which, before the placeholder 3 is grown, extends along its outer boundary 2 almost up to the inner surface of the skull bones. Preferably, a structural change region is defined comprising the placeholder 3. Fig. 2(b) illustrates a second modality medical image of the same patient's skull and brain 1' along with a craniotomy 5, due to which the brain 1' collapsed compared to its outer boundary 2 shown in Fig. 2(a). In order to support determining the second matching transformation T for an anatomical structure represented by the brain between the images of Fig. 2(a) and (b), a placeholder 3 is inserted, for example along the original boundary 2 of the brain 1, in the second modality image representation. The placeholder 3 is then grown by applying a vector field to the image elements representing the placeholder 3. Thereby, a cavity is grown in the second modality image representation so that the appearance of the brain 1, 1' becomes similar in the two images. This avoids the situation in which the cavity which was generated by the brain collapsing and which is depicted in the second modality medical image leads to a fault in determining the second matching transformation T, since no corresponding cavity would have been found in the second modality image representation before the placeholder was grown.
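As a rough illustration of the placeholder mechanism of Fig. 2, the sketch below grows a seed region and paints it into the simulated image; the vector-field-driven growth described above is replaced here by plain morphological dilation purely for brevity, and all names and values are illustrative assumptions rather than part of the described method.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def grow_placeholder(image: np.ndarray, seed_mask: np.ndarray,
                     steps: int, fill_value: float) -> np.ndarray:
    """Expand seed_mask by `steps` dilation passes and paint that region into image."""
    region = binary_dilation(seed_mask, iterations=steps)
    grown = image.copy()
    grown[region] = fill_value  # e.g. an intensity mimicking the collapse cavity
    return grown
```

The resulting structural change region gives the second matching transformation a counterpart for the cavity seen in the genuine second modality medical image.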
Claims
1. A medical data processing method of transforming a representation of an anatomical structure (1) of a patient in a first imaging modality into a representation of the anatomical structure (1') in a second, other imaging modality, the method being constituted to be executed by a computer and comprising the following steps:
a) acquiring (SI) first modality image data describing the first modality medical image containing the representation of the anatomical structure (1) in the first imaging modality;
b) acquiring (SI) atlas data describing a first modality atlas image describing a general structure of the anatomical structure (1) in the first imaging modality, the atlas data containing information about the representation of the general structure in the second imaging modality;
c) determining (S3), based on the first modality image data and the atlas data, a first matching transformation between the first modality medical image and the first modality atlas image;
d) determining (S5), based on the first matching transformation and the first modality atlas image and the information about the representation of the general structure in the second imaging modality, a second modality image representation of the first modality medical image.
2. The method according to the preceding claim, further comprising steps of:
e) acquiring (S4) second modality image data describing a second modality medical image containing the representation of the anatomical structure (1') in the second imaging modality;
f) determining (S6), based on the second modality image representation and the second modality image data, a second matching transformation (T) between the second modality image representation and the second modality medical image.
3. The method according to any one of the preceding claims, wherein a matched first modality atlas image is determined by applying the first matching transformation to the first modality atlas image, and wherein the second modality image representation is determined based on the matched first modality atlas image.
4. The method according to any one of the preceding claims, wherein the atlas data describes a second modality atlas image describing a general structure of the anatomical structure (1') in the second imaging modality, and
wherein the information about the representation of the general structure in the second imaging modality is determined based on the second modality atlas image.
5. The method according to the preceding claim, wherein the second modality image representation is determined based on determining a modality transformation between the matched first modality atlas image and the second modality atlas image.
6. The method according to the preceding claim, comprising:
acquiring representation class data describing a representation class of the image elements describing the representation of the general structure of the anatomical structure in the first imaging modality and in the second imaging modality, wherein the representation class describes for example at least one of colour contrasts, colour values and type of physical structure represented by the image elements,
wherein the modality transformation is determined based on the representation class data.
7. The method according to any one of the preceding claims, comprising:
defining a structural change region in the second modality image representation, the structural change region comprising a placeholder (3) for a data structure representing a change of the anatomical structure (1) which can be used for example for adapting the second matching transformation (T) to a difference between the representations of the anatomical structure (1, 1') in the second modality image representation and the second modality medical image.
8. The method according to any one of the preceding claims, comprising:
acquiring imaging modality data describing the first imaging modality and the second imaging modality,
wherein in particular the first matching transformation and the second modality image representation are determined based on the imaging modality data, wherein preferably also the second matching transformation (T) is determined based on the imaging modality data.
9. The method according to the preceding claim, wherein the representation class describes a grey value, for example Hounsfield units.
10. The method according to any one of the two preceding claims, comprising:
determining similarity data describing a measure of similarity between the second modality image representation and the second modality medical image, wherein the measure of similarity is determined based on for example a similarity between the second modality image representation and the second modality medical image with regard to the representation class of the respective image elements describing the anatomical structure (1, 1').
11. The method according to the preceding claim, wherein the measure of similarity includes a cross-correlation, for example a local cross-correlation between at least one of colour contrasts and colour values in the second modality image representation and the second modality medical image.
12. The method according to any one of the three preceding claims, wherein regions in the second modality image representation and the second modality medical image for which at least substantially no similarity has been determined are excluded as a basis for determining the second matching transformation.
13. The method according to any one of the preceding claims, wherein the first matching transformation and the second matching transformation are determined based on executing at least one of an image segmentation algorithm and an image fusion algorithm, for example an elastic image fusion algorithm.
14. The method according to any one of the preceding claims, wherein the first imaging modality is magnetic resonance tomography and the second imaging modality is computed x-ray tomography or x-ray.
15. A program which, when running on a computer or when loaded onto a computer, causes the computer to perform the method steps according to any one of the preceding claims and/or a program storage medium on which the program is stored in particular in a non-transitory form and/or a computer, in particular a cloud computer, on which the program is running or into the memory of which the program is loaded and/or a signal wave carrying information which represents the aforementioned program, which comprises code means which are adapted to perform the method steps according to any one of the preceding claims.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP13732178.2A EP2912630B1 (en) | 2012-10-26 | 2013-06-28 | Matching patient images of different imaging modality using atlas information |
US14/437,784 US9639938B2 (en) | 2012-10-26 | 2013-06-28 | Matching patient images of different imaging modality using atlas information |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EPPCT/EP2012/071241 | 2012-10-26 | ||
PCT/EP2012/071241 WO2014063746A1 (en) | 2012-10-26 | 2012-10-26 | Matching patient images and images of an anatomical atlas |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014063840A1 true WO2014063840A1 (en) | 2014-05-01 |
Family
ID=47177961
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2012/071241 WO2014063746A1 (en) | 2012-10-26 | 2012-10-26 | Matching patient images and images of an anatomical atlas |
PCT/EP2013/063640 WO2014063840A1 (en) | 2012-10-26 | 2013-06-28 | Matching patient images of different imaging modality using atlas information |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2012/071241 WO2014063746A1 (en) | 2012-10-26 | 2012-10-26 | Matching patient images and images of an anatomical atlas |
Country Status (3)
Country | Link |
---|---|
US (6) | US9704243B2 (en) |
EP (8) | EP3428881B1 (en) |
WO (2) | WO2014063746A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9704243B2 (en) | 2012-10-26 | 2017-07-11 | Brainlab Ag | Matching patient images and images of an anatomical atlas |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2819093B1 (en) * | 2013-06-24 | 2016-05-11 | RaySearch Laboratories AB | Method and system for atlas-based segmentation |
US20170091386A1 (en) * | 2014-05-16 | 2017-03-30 | Brainlab Ag | Inference Transparency System for Image-Based Clinical Decision Support Systems |
WO2016141449A1 (en) * | 2015-03-09 | 2016-09-15 | Her Majesty The Queen In Right Of Canada, As Represented By The Minister Of The Department Of National Defence | Computer-assisted focused assessment with sonography in trauma |
EP3217307B1 (en) * | 2016-02-22 | 2018-11-07 | Eshard | Method of testing the resistance of a circuit to a side channel analysis of second order or more |
US10741283B2 (en) | 2016-09-06 | 2020-08-11 | International Business Machines Corporation | Atlas based prior relevancy and relevancy model |
CN107045712B (en) * | 2017-01-19 | 2020-12-08 | 宁波江丰生物信息技术有限公司 | Medical image adjusting method and digital pathological section browsing system |
FR3064782B1 (en) * | 2017-03-30 | 2019-04-05 | Idemia Identity And Security | METHOD FOR ANALYZING A STRUCTURAL DOCUMENT THAT CAN BE DEFORMED |
EP3698718A1 (en) | 2017-05-30 | 2020-08-26 | Brainlab AG | Heatmap and atlas |
US11751947B2 (en) | 2017-05-30 | 2023-09-12 | Brainlab Ag | Soft tissue tracking using physiologic volume rendering |
GB2569541B (en) * | 2017-12-19 | 2020-08-19 | Mirada Medical Ltd | Method and apparatus for medical imaging |
US10832423B1 (en) * | 2018-01-08 | 2020-11-10 | Brainlab Ag | Optimizing an atlas |
US11151726B2 (en) * | 2018-01-10 | 2021-10-19 | Canon Medical Systems Corporation | Medical image processing apparatus, X-ray diagnostic apparatus, and medical image processing method |
CN110772280B (en) * | 2018-07-31 | 2023-05-23 | 佳能医疗系统株式会社 | Ultrasonic diagnostic apparatus and method, and image processing apparatus and method |
US12112845B2 (en) | 2018-11-07 | 2024-10-08 | Brainlab Ag | Compartmentalized dynamic atlas |
US20220361959A1 (en) | 2019-10-06 | 2022-11-17 | Universitat Bern | System and Method for Computation of Coordinate System Transformations |
EP3813017A1 (en) | 2019-10-21 | 2021-04-28 | Bayer AG | Cardiac region segmentation in ct images |
WO2021121600A1 (en) * | 2019-12-19 | 2021-06-24 | Brainlab Ag | Medical image analysis using machine learning and an anatomical vector |
US12002153B2 (en) | 2021-01-22 | 2024-06-04 | Novocure Gmbh | Methods, systems, and apparatuses for medical image enhancement to optimize transducer array placement |
US20220319002A1 (en) * | 2021-04-05 | 2022-10-06 | Nec Laboratories America, Inc. | Tumor cell isolines |
EP4445328A2 (en) * | 2021-12-10 | 2024-10-16 | Mofaip, LLC | Multidimensional anatomic mapping, descriptions, visualizations, and translations |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070038058A1 (en) * | 2005-08-11 | 2007-02-15 | West Jay B | Patient tracking using a virtual image |
US20080188741A1 (en) * | 2007-02-05 | 2008-08-07 | General Electric Company | Brain image alignment method and system |
US20110069873A1 (en) * | 2009-09-24 | 2011-03-24 | Aze Ltd. | Medical image data alignment apparatus, method and program |
Family Cites Families (64)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6611630B1 (en) * | 1996-07-10 | 2003-08-26 | Washington University | Method and apparatus for automatic shape characterization |
US6740883B1 (en) * | 1998-08-14 | 2004-05-25 | Robert Z. Stodilka | Application of scatter and attenuation correction to emission tomography images using inferred anatomy from atlas |
US6754374B1 (en) * | 1998-12-16 | 2004-06-22 | Surgical Navigation Technologies, Inc. | Method and apparatus for processing images with regions representing target objects |
WO2001043070A2 (en) | 1999-12-10 | 2001-06-14 | Miller Michael I | Method and apparatus for cross modality image registration |
US7167583B1 (en) * | 2000-06-28 | 2007-01-23 | Landrex Technologies Co., Ltd. | Image processing system for use with inspection systems |
US6466813B1 (en) | 2000-07-22 | 2002-10-15 | Koninklijke Philips Electronics N.V. | Method and apparatus for MR-based volumetric frameless 3-D interactive localization, virtual simulation, and dosimetric radiation therapy planning |
US20030013951A1 (en) * | 2000-09-21 | 2003-01-16 | Dan Stefanescu | Database organization and searching |
US6784148B2 (en) * | 2001-04-18 | 2004-08-31 | Kay Chemical, Inc | Sprayable hard surface cleaner and method of use |
US20030011624A1 (en) * | 2001-07-13 | 2003-01-16 | Randy Ellis | Deformable transformations for interventional guidance |
US7324842B2 (en) * | 2002-01-22 | 2008-01-29 | Cortechs Labs, Inc. | Atlas and methods for segmentation and alignment of anatomical data |
US20030228042A1 (en) * | 2002-06-06 | 2003-12-11 | Usha Sinha | Method and system for preparation of customized imaging atlas and registration with patient images |
WO2004040437A1 (en) * | 2002-10-28 | 2004-05-13 | The General Hospital Corporation | Tissue disorder imaging analysis |
AU2003219634A1 (en) * | 2003-02-27 | 2004-09-17 | Agency For Science, Technology And Research | Method and apparatus for extracting cerebral ventricular system from images |
EP1890261B1 (en) | 2006-08-14 | 2009-02-18 | BrainLAB AG | Registration of MR data using generic models |
WO2005023086A2 (en) * | 2003-08-25 | 2005-03-17 | University Of North Carolina At Chapel Hill | Systems, methods, and computer program products for analysis of vessel attributes for diagnosis, disease staging, and surgical planning |
US7103399B2 (en) | 2003-09-08 | 2006-09-05 | Vanderbilt University | Apparatus and methods of cortical surface registration and deformation tracking for patient-to-image alignment in relation to image-guided surgery |
WO2005079492A2 (en) * | 2004-02-17 | 2005-09-01 | Traxtal Technologies Inc. | Method and apparatus for registration, verification, and referencing of internal organs |
US20060004274A1 (en) | 2004-06-30 | 2006-01-05 | Hawman Eric G | Fusing nuclear medical images with a second imaging modality |
DE102004043889B4 (en) * | 2004-09-10 | 2007-06-06 | Siemens Ag | Method for generating a nuclear medical image |
US9471978B2 (en) | 2004-10-04 | 2016-10-18 | Banner Health | Methodologies linking patterns from multi-modality datasets |
WO2007056601A2 (en) * | 2005-11-09 | 2007-05-18 | The Regents Of The University Of California | Methods and apparatus for context-sensitive telemedicine |
US20090220136A1 (en) * | 2006-02-03 | 2009-09-03 | University Of Florida Research Foundation | Image Guidance System for Deep Brain Stimulation |
EP1868157A1 (en) | 2006-06-14 | 2007-12-19 | BrainLAB AG | Shape reconstruction using X-ray images |
US7646936B2 (en) | 2006-10-03 | 2010-01-12 | Varian Medical Systems International Ag | Spatially variant image deformation |
CA2670261A1 (en) | 2006-11-16 | 2008-05-29 | Vanderbilt University | Apparatus and methods of compensating for organ deformation, registration of internal structures to images, and applications of same |
US8045771B2 (en) * | 2006-11-22 | 2011-10-25 | General Electric Company | System and method for automated patient anatomy localization |
US8000522B2 (en) * | 2007-02-02 | 2011-08-16 | General Electric Company | Method and system for three-dimensional imaging in a non-calibrated geometry |
EP2120702B1 (en) * | 2007-03-06 | 2014-04-09 | Koninklijke Philips N.V. | Automated diagnosis and alignment supplemented with pet/mr flow estimation |
CA2687330A1 (en) | 2007-05-18 | 2008-11-27 | The Johns Hopkins University | A treatment simulator for brain diseases and method of use thereof |
WO2009016530A2 (en) | 2007-07-27 | 2009-02-05 | Koninklijke Philips Electronics N.V. | Interactive atlas to image registration |
WO2009029708A1 (en) * | 2007-08-29 | 2009-03-05 | Vanderbilt University | System and methods for automatic segmentation of one or more critical structures of the ear |
US7933380B2 (en) * | 2007-09-28 | 2011-04-26 | Varian Medical Systems International Ag | Radiation systems and methods using deformable image registration |
US8666128B2 (en) * | 2007-10-18 | 2014-03-04 | The University Of North Carolina At Chapel Hill | Methods, systems, and computer readable media for mapping regions in a model of an object comprising an anatomical structure from one image data set to images used in a diagnostic or therapeutic intervention |
WO2009111580A2 (en) * | 2008-03-04 | 2009-09-11 | Tomotherapy Incorporated | Method and system for improved image segmentation |
WO2009117419A2 (en) * | 2008-03-17 | 2009-09-24 | Worcester Polytechnic Institute | Virtual interactive system for ultrasound training |
EP2321002B1 (en) | 2008-05-15 | 2014-04-23 | Intelect Medical Inc. | Clinician programmer system and method for calculating volumes of activation |
WO2009146388A1 (en) | 2008-05-28 | 2009-12-03 | The Trustees Of Columbia University In The City Of New York | Voxel-based methods for assessing subjects using positron emission tomography |
EP2131212A3 (en) | 2008-06-05 | 2011-10-05 | Medison Co., Ltd. | Non-Rigid Registration Between CT Images and Ultrasound Images |
DE102008032006B4 (en) | 2008-07-07 | 2017-01-05 | Siemens Healthcare Gmbh | Method for controlling the image recording in an image recording device, and an image recording device |
US8687857B2 (en) | 2008-11-07 | 2014-04-01 | General Electric Company | Systems and methods for automated extraction of high-content information from whole organisms |
EP2189942A3 (en) * | 2008-11-25 | 2010-12-15 | Algotec Systems Ltd. | Method and system for registering a medical image |
GB0912845D0 (en) * | 2009-07-24 | 2009-08-26 | Siemens Medical Solutions | Initialisation of registration using an anatomical atlas |
GB0913930D0 (en) | 2009-08-07 | 2009-09-16 | Ucl Business Plc | Apparatus and method for registering two medical images |
US20120143090A1 (en) | 2009-08-16 | 2012-06-07 | Ori Hay | Assessment of Spinal Anatomy |
GB0917154D0 (en) | 2009-09-30 | 2009-11-11 | Imp Innovations Ltd | Method and apparatus for processing medical images |
US8498459B2 (en) * | 2009-10-08 | 2013-07-30 | Siemens Aktiengesellschaft | System and method for verifying registration accuracy in digital medical images |
US9530077B2 (en) * | 2010-02-10 | 2016-12-27 | Imorphics Limited | Image analysis |
US8861891B2 (en) * | 2010-03-05 | 2014-10-14 | Siemens Aktiengesellschaft | Hierarchical atlas-based segmentation |
EP2369551B1 (en) * | 2010-03-25 | 2019-10-30 | Emory University | Imaging system and method |
WO2011137370A2 (en) * | 2010-04-30 | 2011-11-03 | The Johns Hopkins University | Intelligent atlas for automatic image analysis of magnetic resonance imaging |
US8837791B2 (en) * | 2010-12-22 | 2014-09-16 | Kabushiki Kaisha Toshiba | Feature location method and system |
JP2014513622A (en) | 2011-03-29 | 2014-06-05 | ボストン サイエンティフィック ニューロモデュレイション コーポレイション | Communication interface for therapeutic stimulus delivery system |
US8406890B2 (en) * | 2011-04-14 | 2013-03-26 | Medtronic, Inc. | Implantable medical devices storing graphics processing data |
US8804619B2 (en) | 2011-04-27 | 2014-08-12 | Telefonaktiebolaget L M Ericsson (Publ) | Methods for assigning radio resources for mobile devices connected to a mobile communication module and related systems and devices |
CN102262699B (en) | 2011-07-27 | 2012-09-05 | 华北水利水电学院 | Soft tissue deformation simulation method based on coupling of mesh-free Galerkin and mass spring |
US9524552B2 (en) | 2011-08-03 | 2016-12-20 | The Regents Of The University Of California | 2D/3D registration of a digital mouse atlas with X-ray projection images and optical camera photos |
EP2742450A2 (en) | 2011-08-09 | 2014-06-18 | Boston Scientific Neuromodulation Corporation | Systems and methods for stimulation-related volume analysis, creation, and sharing |
US20130044927A1 (en) * | 2011-08-15 | 2013-02-21 | Ian Poole | Image processing method and system |
US20130072783A1 (en) | 2011-09-16 | 2013-03-21 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Indicating proximity of a body-insertable device to a destination region of interest |
WO2013113391A2 (en) * | 2012-02-02 | 2013-08-08 | Brainlab Ag | Method for determining an infusion parameter |
WO2015043671A1 (en) | 2013-09-30 | 2015-04-02 | Brainlab Ag | Generating supplementary slice images based on atlas data |
US9406130B2 (en) * | 2012-10-26 | 2016-08-02 | Brainlab Ag | Determining an anatomical atlas |
EP3428881B1 (en) | 2012-10-26 | 2020-12-23 | Brainlab AG | Matching patient images and images of an anatomical atlas |
US20150278471A1 (en) | 2012-10-26 | 2015-10-01 | Brainlab Ag | Simulation of objects in an atlas and registration of patient data containing a specific structure to atlas data |
-
2012
- 2012-10-26 EP EP18185818.4A patent/EP3428881B1/en active Active
- 2012-10-26 EP EP21170587.6A patent/EP3879487B1/en active Active
- 2012-10-26 EP EP18185825.9A patent/EP3428882B1/en active Active
- 2012-10-26 EP EP18185808.5A patent/EP3428879A1/en not_active Ceased
- 2012-10-26 EP EP18185830.9A patent/EP3428883B1/en active Active
- 2012-10-26 WO PCT/EP2012/071241 patent/WO2014063746A1/en active Application Filing
- 2012-10-26 EP EP18185811.9A patent/EP3428880B1/en active Active
- 2012-10-26 US US14/438,436 patent/US9704243B2/en active Active
- 2012-10-26 EP EP12784534.5A patent/EP2912629B1/en active Active
-
2013
- 2013-06-28 US US14/437,784 patent/US9639938B2/en active Active
- 2013-06-28 WO PCT/EP2013/063640 patent/WO2014063840A1/en active Application Filing
- 2013-06-28 EP EP13732178.2A patent/EP2912630B1/en active Active
-
2017
- 2017-05-30 US US15/608,322 patent/US10417762B2/en active Active
- 2017-05-30 US US15/608,199 patent/US10262418B2/en active Active
- 2017-05-30 US US15/608,578 patent/US10402971B2/en active Active
- 2017-05-30 US US15/608,485 patent/US10388013B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070038058A1 (en) * | 2005-08-11 | 2007-02-15 | West Jay B | Patient tracking using a virtual image |
US20080188741A1 (en) * | 2007-02-05 | 2008-08-07 | General Electric Company | Brain image alignment method and system |
US20110069873A1 (en) * | 2009-09-24 | 2011-03-24 | Aze Ltd. | Medical image data alignment apparatus, method and program |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9704243B2 (en) | 2012-10-26 | 2017-07-11 | Brainlab Ag | Matching patient images and images of an anatomical atlas |
US10262418B2 (en) | 2012-10-26 | 2019-04-16 | Brainlab Ag | Matching patient images and images of an anatomical atlas |
US10388013B2 (en) | 2012-10-26 | 2019-08-20 | Brainlab Ag | Matching patient images and images of an anatomical atlas |
US10402971B2 (en) | 2012-10-26 | 2019-09-03 | Brainlab Ag | Matching patient images and images of an anatomical atlas |
US10417762B2 (en) | 2012-10-26 | 2019-09-17 | Brainlab Ag | Matching patient images and images of an anatomical atlas |
Also Published As
Publication number | Publication date |
---|---|
EP3428879A1 (en) | 2019-01-16 |
US9639938B2 (en) | 2017-05-02 |
US10388013B2 (en) | 2019-08-20 |
EP3428880B1 (en) | 2021-08-04 |
US20150294467A1 (en) | 2015-10-15 |
US9704243B2 (en) | 2017-07-11 |
EP3428882A1 (en) | 2019-01-16 |
US20150254838A1 (en) | 2015-09-10 |
EP3428883B1 (en) | 2021-06-09 |
EP2912630A1 (en) | 2015-09-02 |
EP3428883A1 (en) | 2019-01-16 |
EP3428882B1 (en) | 2021-01-06 |
EP3879487B1 (en) | 2023-11-29 |
EP3428881B1 (en) | 2020-12-23 |
WO2014063746A1 (en) | 2014-05-01 |
EP2912629B1 (en) | 2018-10-03 |
US20170330325A1 (en) | 2017-11-16 |
US20170330323A1 (en) | 2017-11-16 |
US10402971B2 (en) | 2019-09-03 |
US20170330324A1 (en) | 2017-11-16 |
EP3428880A1 (en) | 2019-01-16 |
US10417762B2 (en) | 2019-09-17 |
EP3879487A1 (en) | 2021-09-15 |
US10262418B2 (en) | 2019-04-16 |
US20170330322A1 (en) | 2017-11-16 |
EP2912629A1 (en) | 2015-09-02 |
EP2912630B1 (en) | 2018-09-26 |
EP3428881A1 (en) | 2019-01-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9639938B2 (en) | Matching patient images of different imaging modality using atlas information | |
US10147190B2 (en) | Generation of a patient-specific anatomical atlas | |
EP3589355B1 (en) | Optimal deep brain stimulation electrode selection and placement on the basis of stimulation field modelling | |
US10639471B2 (en) | Simulating a target coverage for deep brain stimulation | |
WO2015043671A1 (en) | Generating supplementary slice images based on atlas data | |
JP7561819B2 (en) | Method for imaging a body part, computer, computer-readable storage medium, computer program, and medical system - Patents.com | |
US20230260129A1 (en) | Constrained object correction for a segmented image | |
EP2912633B1 (en) | Simulation of objects in an atlas and registration of patient data containing a specific structure to atlas data | |
EP3529808B1 (en) | Planning an external ventricle drainage placement | |
US20220183759A1 (en) | Determining a surgical port for a trocar or laparoscope | |
EP3526799B1 (en) | Optimizing an atlas | |
WO2024160375A1 (en) | Computing a hybrid surface from medical image data using trusted regions and a plausibility check |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13732178 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013732178 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14437784 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |