WO2008129506A1 - Scan planning with self-learning capability - Google Patents

Scan planning with self-learning capability

Info

Publication number
WO2008129506A1
Authority
WO
WIPO (PCT)
Prior art keywords
scan
image
probability map
geometry
feature probability
Prior art date
Application number
PCT/IB2008/051527
Other languages
English (en)
Inventor
Daniel Bystrov
Original Assignee
Koninklijke Philips Electronics N.V.
Philips Intellectual Property & Standards Gmbh
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. and Philips Intellectual Property & Standards GmbH
Publication of WO2008129506A1


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B 5/055 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0033 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B 5/0037 Performing a preliminary scan, e.g. a prescan for identifying a region of interest
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images

Definitions

  • the invention relates to an apparatus for generating a feature probability map, the feature probability map to be employed during automatic geometry planning for a medical diagnostic modality.
  • the invention further relates to an apparatus for geometry planning for a medical diagnostic modality.
  • the invention further relates to a method for generating a feature probability map, the feature probability map being arranged to be employed in a method for automatic geometry planning for a medical diagnostic modality.
  • the invention still further relates to a method for automatic acquisition geometry planning for a medical diagnostic modality.
  • the invention also relates to a computer program product.
  • MRI (magnetic resonance imaging)
  • a survey image, also called a “scout image”
  • the survey image may be a 3D image.
  • several anatomical structures are automatically identified in this survey image using an anatomical model of these structures and the expected contrast.
  • a geometry (3D coordinate system) is derived, which depends on some of these structures. The exact dependency can be automatically learned by analyzing previous scans of possibly different patients and the (manually) planned scan geometries.
  • WO 2006/013499 A1 entitled “Automatic determination of parameters of an imaging geometry”.
  • the image acquisition device (MR scanner)
  • a diagnostic scan is acquired, typically at a higher resolution than the survey image.
  • the scan geometry may contain not only geometrical data such as the orientation of the sensor of the scanner, but also for example field strength parameters. In the context of MR scanning the field strength determines inter alia the contrast of the acquired image.
  • Automated magnetic resonance scan planning software that is currently being used learns the relevance of different anatomical landmarks for a specific scan geometry from manually adjusted sample geometries. This requires a "hard-coded" anatomical model together with a model of the expected image contrast for a standardized acquisition protocol.
  • the present invention addresses these and other problems in the prior art by providing a data processing apparatus for generating a feature probability map arranged to be employed during automatic geometry planning for a medical diagnostic modality.
  • the apparatus comprises a storage for a plurality of training images. Each training image represents at least a same type of anatomical structure and has a scan geometry associated with it.
  • the apparatus also comprises a scan geometry aligner arranged to align the training images by aligning their respective scan geometries.
  • the apparatus further comprises an image data combiner that is arranged to combine image data of corresponding regions of the aligned training images to determine a probability value of a corresponding region of the feature probability map.
  • the feature probability map is arranged to be employed during the automatic planning of a scan geometry for a diagnostic scan.
  • the apparatus performs preparatory work for the actual automatic scan planning procedure.
  • This preparatory work can be performed once in advance in an "offline” manner.
  • the feature probability map is stored and can be retrieved each time when it is about to be used during a scan planning procedure.
  • the preparatory work of creating the feature probability map can also be performed from scratch every time a scan planning procedure is requested, i.e. in an "online" manner. In comparison to the offline manner, the online manner usually requires more time, because the feature probability map has to be calculated between the acquisition of the scout image and the acquisition of the diagnostic image.
  • MR scanner (as well as any other suitable medical diagnostic modality) is an expensive resource with limited throughput. Therefore hospitals or similar institutions are interested in maintaining a high utilization rate of the scanner.
  • the calculation of the feature probability map is usually not excessively complicated so that the delay may be nearly unnoticeable. Some actions can already be performed before the scout image is actually taken, such as retrieving and aligning the training images.
  • the online manner of creating the feature probability map offers greater flexibility, since user preferences can be taken into account during the creation of the feature probability map, as will be further explained below.
  • Training images are typically obtained from previous scans. It is desirable to have a high inter-patient variability and possibly intra-patient variability in the training images in order to cover a large range of situations.
  • Each training image should at least show the anatomical structure that was of interest when the training image was acquired.
  • the training image also contains other surrounding anatomical structures.
  • the claimed apparatus is prepared and designed to take these surrounding anatomical structures into account without requiring an elaborate segmentation or the like.
  • the scan geometry that is associated with each of the training images could be drawn into the training image, e.g. by using a unique color that is reserved for the scan geometry definition object.
  • the scan geometry could alternatively be tagged to the training image or stored in a separate file or data set.
  • the scan geometry aligner extracts the scan geometry information stored in or with each of the training images. Since the scan geometry is usually represented by a small number of points and/or fairly easy shapes, aligning all of the scan geometries does not involve excessively elaborate calculations.
  • the scan geometry aligner may use manually adjusted geometries as registration of a set of images (scouts), which makes it possible to automatically build an anatomical atlas and a model of the expected image contrast for this specific scan geometry.
  • the parameters describing the aligning of a specific scan geometry are also applied to the corresponding training image. Accordingly, the training image is translated, rotated and scaled in the same manner as the scan geometry. Under certain circumstances it may however be the case that the alignment parameters undergo certain transformations before they are applied to the training image.
  • the image data combiner basically stacks the training images according to the respective alignment parameters. Accordingly, regions in the various training images that show the same type of anatomical structure can be expected to lie on top of one another, if these anatomical structures have a more or less invariant relation to the scan geometry over the plurality of training images.
  • to determine a probability value for each region of the feature probability map (e.g. each pixel or voxel), the corresponding regions in the training images are evaluated. Encountering the same kind of image data in a large number of training images means a high probability of the anatomical structure being in that region. Accordingly, under normal circumstances a high probability value is stored in the feature probability map at the region in question.
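The following sketch (not part of the patent; the function names and the min-max normalisation are assumptions) illustrates one way an image data combiner could turn a stack of already-aligned training images into per-pixel probability values:

```python
import numpy as np

def combine_into_probability_map(aligned_images):
    """Combine aligned training images into a feature probability map.

    aligned_images: list of 2D numpy arrays of identical shape, already
    transformed so that their scan geometries coincide. Returns an array of
    the same shape with values in [0, 1]; a high value means the same kind of
    image data was encountered in many training images at that pixel.
    """
    stack = np.stack([img.astype(np.float64) for img in aligned_images])
    mean = stack.mean(axis=0)                 # pixel-wise superposition/average
    # Normalise to [0, 1] so the result can be read as a probability value.
    lo, hi = mean.min(), mean.max()
    return (mean - lo) / (hi - lo) if hi > lo else np.zeros_like(mean)

# Example: three toy 4x4 "training images" with a bright structure at the centre.
imgs = [np.pad(np.full((2, 2), 200.0 + 10 * k), 1) for k in range(3)]
prob_map = combine_into_probability_map(imgs)
print(prob_map.round(2))
```

Averaging is only one of the combination options mentioned in the text; superposition or higher statistical moments would fit the same skeleton.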
  • with the apparatus described herein, it will be possible to automatically learn the interesting structures of the scanned anatomy together with all contrast parameters directly at the console of an MRI system.
  • the respective scan geometries of the training images are manually adjusted scan geometries.
  • the scan geometry can be manually entered or adjusted by drawing a frame, an arrow or another simple geometric shape in the image. Another possibility is to adjust parameters controlling the display window, e.g. zoom factor, view angles and the like.
  • the manually adjusted scan geometry should reflect or take into account the position, orientation and size of the anatomical structure of interest. Usually it does not take long for a user to manually adjust the scan geometry.
  • the image data combiner is arranged to perform filtering, superposing, averaging, and/or calculating a statistical moment of the image data in the corresponding regions of the training images.
  • the probability value to be calculated or determined for a specific region of the feature probability map can be obtained by numerous methods, some of which are mentioned above.
  • Superposition of image data in the corresponding regions of the training images works well if anatomical structures are represented by values that are close to the extreme values of the dynamic range of the image data (e.g. very dark or very bright). If this is not already the case for the original image, this property may be achieved by first applying a filter to the training images, such as an edge detection filter.
  • the edge detection filter enhances changes in the brightness of the image data, which often occur at the borders of anatomical structures.
  • Other filters are also envisaged, such as histogram spreading filters or histogram compressing filters.
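As a purely illustrative sketch of such a pre-filtering step (the gradient-magnitude edge detector and the function names are assumptions, not the patent's prescribed filter), each aligned training image could be edge-filtered before the superposition:

```python
import numpy as np

def edge_filter(image):
    """Enhance brightness changes, which often occur at the borders of
    anatomical structures, by taking the local gradient magnitude."""
    gy, gx = np.gradient(image.astype(np.float64))
    return np.hypot(gx, gy)

def filtered_superposition(training_images):
    """Filter each aligned training image, then superpose (average) them."""
    edges = [edge_filter(img) for img in training_images]
    return np.mean(edges, axis=0)
```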
  • the image data processing mentioned above may act on a single training image, several training images, all training images, the resulting feature probability map, or on an intermediate image.
  • the image data may represent image contrast, image color, and/or image grey value. This allows building an image contrast model, an image color model, or an image grey value model.
  • the apparatus is thus capable of automatically learning the interesting structures of the scanned anatomy together with all contrast (color / grey value) parameters directly at the console of e.g. an MR system.
  • Data relating to the scan geometry creation may be associated to each of the plurality of training images, respectively. Knowing about the circumstances of the creation of the scan geometry may provide valuable insights about the motivation and reasons for the choice of the scan geometry.
  • the equipment used may influence the choice of the scan geometry.
  • the equipment comprises in particular the scanner (MR, CT, PET, etc.).
  • scan geometries may vary.
  • Another piece of information influencing the choice of the scan geometry may be the institution and/or person that creates the scan geometry. Every physician or radiologist may have his/her own idea of how the scan geometry should be modelled.
  • the preferred scan geometries of two radiologists may differ by 5° in the orientation of the main imaging axis, or by the setting of the field strength (e.g. MR) or radiation strength (e.g. CT).
  • Data pertaining to the patient, such as age, sex, weight, height etc. may be included in the scan geometry creation data, as well.
  • the apparatus may further comprise a database storing user preferences regarding modifications of scan geometries that are associated with a respective training image, wherein the modifications depend on the data relating to the scan geometry creation.
  • user preferences encompass not only personal preferences of a particular user, but also e.g. institution-wide or hospital-wide preferences.
  • training images and their associated scan geometries may not fully correspond to the preferences of the user that wishes to plan a scan geometry. In many cases however, it may be fairly easy to adapt the scan geometry of the training image to the user preferences. This may be achieved by applying a certain transformation to the scan geometry that is provided with the training image. The parameters of this transformation may be stored in the user preference database.
  • knowing that a given training image comes from hospital X and is to be used by research laboratory Y may result in a transformation including a 5° rotation about the sagittal axis and a 120% zoom.
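A minimal sketch of how such provenance-dependent corrections might be stored and applied, assuming a 2D scan geometry given as points and a hypothetical preference table keyed by (provenance, user) pairs; none of these names come from the patent, and the example works in 2D only for brevity:

```python
import numpy as np

# Hypothetical user-preference database: (provenance of training set, current user)
# -> correction parameters to apply to the training scan geometry.
USER_PREFERENCES = {
    ("hospital_X", "lab_Y"): {"rotation_deg": 5.0, "zoom": 1.20},
}

def adapt_scan_geometry(points, provenance, user):
    """Apply the stored preference transformation to a scan geometry given as
    an (N, 2) array of points; return the points unchanged if no entry exists."""
    pref = USER_PREFERENCES.get((provenance, user))
    if pref is None:
        return points
    theta = np.deg2rad(pref["rotation_deg"])
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    centre = points.mean(axis=0)
    return (points - centre) @ rot.T * pref["zoom"] + centre

geometry = np.array([[0.0, 0.0], [0.0, 10.0]])   # a simple arrow: start and tip
print(adapt_scan_geometry(geometry, "hospital_X", "lab_Y"))
```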
  • the different transformations may be automatically learned during the exploitation of the feature probability map (see below), as well.
  • the unlearned database initially requires rather frequent corrections of the calculated scan geometries, which have to be entered by the current user.
  • the apparatus analyzes which training scan geometries primarily caused the deviation. The deviations are gathered and sorted according to the scan geometry generation data. This allows an analysis as to whether the deviation is systematic and correlated to the scan geometry data, in particular the provenance of the training set (comprising training image and scan geometry).
  • a corresponding entry pertaining to the systematic deviation of training scan geometries and the provenance of the training data sets may be added to the user preferences.
  • This entry allows an automatic adaptation during future scan geometry planning processes.
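One possible way to detect such systematic deviations is sketched below: the corrections entered by the user are grouped by a provenance label taken from the scan geometry creation data, and groups whose mean deviation exceeds a threshold become candidates for a new user-preference entry. The data layout and the threshold are assumptions.

```python
from collections import defaultdict
from statistics import mean

def systematic_deviations(corrections, threshold_deg=2.0):
    """corrections: iterable of (provenance, angular_deviation_deg) pairs gathered
    whenever the user had to correct a calculated scan geometry.
    Returns the provenances whose mean angular deviation exceeds the threshold,
    i.e. candidates for a new user-preference entry."""
    by_provenance = defaultdict(list)
    for provenance, deviation in corrections:
        by_provenance[provenance].append(deviation)
    return {p: mean(d) for p, d in by_provenance.items()
            if abs(mean(d)) > threshold_deg}

print(systematic_deviations([("hospital_X", 5.2), ("hospital_X", 4.8),
                             ("hospital_Z", 0.3)]))
```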
  • the database is not a part of the apparatus, but operatively connectable to the apparatus.
  • the apparatus may further comprise a scan geometry linker arranged to link scan geometries for different relevant anatomical structures in order to build and map an anatomical atlas.
  • Linking may be achieved for example with a non-rigid deformation field.
  • the different anatomical structures are free to move with respect to each other, at least within predefined bounds.
  • the different regions of the anatomical atlas corresponding to different anatomical structures may be activated individually. That means that a feature probability map that corresponds to the targeted anatomical structure is overlaid onto the anatomical atlas (at the correct position and orientation). Accordingly, the targeted anatomical structure will feature rather high probability values, whereas surrounding anatomical structures will feature lower probability values.
  • a common anatomical atlas is assumed to be easier to handle and manage than a large number of feature probability maps, each of which shows only a part of the anatomy. Furthermore, overlapping regions in the feature probability maps may be a reason for increased storage requirements and processor load during processing of the feature probability maps. Integrating the feature probability maps in a common anatomical atlas allows the deletion of redundant and superfluous information, and hence reduces storage and processing power requirements.
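The sketch below shows one possible representation of such a link: a dense displacement field that warps one feature probability map into the frame of another before the maps are merged into a common atlas. Nearest-neighbour sampling and the per-pixel maximum are simplifications chosen for brevity, not requirements of the patent.

```python
import numpy as np

def warp_with_deformation_field(prob_map, displacement):
    """Warp a 2D feature probability map with a non-rigid deformation field.

    displacement: array of shape (2, H, W) giving per-pixel (row, col) offsets
    from the target grid back into the source map.
    """
    h, w = prob_map.shape
    rows, cols = np.indices((h, w))
    src_r = np.clip(np.rint(rows + displacement[0]), 0, h - 1).astype(int)
    src_c = np.clip(np.rint(cols + displacement[1]), 0, w - 1).astype(int)
    return prob_map[src_r, src_c]

def merge_into_atlas(maps):
    """Merge already-linked (warped) feature probability maps by taking the
    per-pixel maximum, so overlapping regions are stored only once."""
    return np.maximum.reduce(maps)
```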
  • the apparatus may further comprise a display that is arranged to present at least one of the plurality of training images to the user, and a user input interface arranged to collect a manually entered and/or adjusted scan geometry for the at least one training image. The manually entered and/or adjusted scan geometry may define a relevance of the corresponding anatomical structure.
  • the apparatus comprises an input interface for a current image; an input interface for a feature probability map, the feature probability map containing a spatial distribution of image features, wherein a scan geometry is associated with the feature probability map; an aligner arranged to align the current image and the feature probability map; and an output interface for the scan geometry associated with the feature probability map as scan geometry that is based on the current image.
  • the apparatus for geometry planning for a medical diagnostic modality is arranged to use a feature probability map that was created in a preceding stage as described above. Accordingly, the two apparatuses form a pair sharing a common inventive concept.
  • the apparatus for geometry planning is used to receive and analyse a current image (e.g. a scout image) and to determine an optimal scan geometry for a subsequent diagnostic image acquisition.
  • the relation between the current image and the to-be-determined scan geometry is given by a feature probability map for the corresponding anatomical structure.
  • the current image and the selected feature probability map are then aligned. Alignment is based on image data of the current image and probability values of the feature probability map.
  • the result of the alignment may be a set of parameters that determine how the current image has to be transformed in order to match the feature probability map. Transformation of the current image may include translations, rotations, and scaling.
  • the determined transformation parameters are applied to the scan geometry that is associated with the feature probability map. This produces a scan geometry that is based on the current image and therefore matches the current position of a patient in the scanner. As usual, the patient is requested to move as little as possible between the acquisition of the scout image (i.e. the current image) and the acquisition of the diagnostic image.
  • the time between the two acquisitions should be as short as possible (e.g. between approx. 15 seconds and 5 minutes, preferably between 15 seconds and 2 minutes, or even less than 15 seconds, if possible).
  • the aligner may be arranged to perform a spatial transformation.
  • the aligner may be arranged to perform a rigid-and-scale registration of the features of the current image with the feature probability map.
  • a spatial transformation which aligns the relevant anatomical structures can be found by a rigid-and-scale registration.
  • This transformation can be regarded as the geometry which aligns specific anatomical structures and can be used as an automatically generated scan geometry.
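To make the flow concrete, the sketch below performs a deliberately naive rigid-and-scale registration by coarse grid search: candidate rotation/scale/translation parameters are scored by how well the transformed feature probability map overlaps an edge-filtered current image, and the winning parameters are then applied to the scan geometry associated with the map. A real implementation would use a proper optimiser and interpolation; all names and the search ranges are assumptions.

```python
import numpy as np
from itertools import product

def similarity_warp(image, angle_deg, scale, shift):
    """Warp a 2D array by a rotation about its centre, an isotropic scale and
    a translation (row/col shift), using nearest-neighbour inverse mapping."""
    h, w = image.shape
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    centre = np.array([(h - 1) / 2.0, (w - 1) / 2.0])
    rows, cols = np.indices((h, w))
    target = np.stack([rows, cols], axis=-1).reshape(-1, 2).astype(float)
    src = ((target - centre - shift) @ rot) / scale + centre   # inverse mapping
    src = np.rint(src).astype(int)
    valid = ((src[:, 0] >= 0) & (src[:, 0] < h) &
             (src[:, 1] >= 0) & (src[:, 1] < w))
    out = np.zeros(h * w)
    out[valid] = image[src[valid, 0], src[valid, 1]]
    return out.reshape(h, w)

def plan_scan_geometry(current_edges, prob_map, geometry_points):
    """Coarse rigid-and-scale registration: find how the feature probability
    map has to be transformed to match the (edge-filtered) current image, then
    apply the same transformation to the scan geometry associated with the map."""
    best, best_score = (0.0, 1.0, np.zeros(2)), -np.inf
    for angle, scale, dr, dc in product(range(-10, 11, 5), (0.9, 1.0, 1.1),
                                        range(-4, 5, 2), range(-4, 5, 2)):
        shift = np.array([dr, dc], dtype=float)
        score = np.sum(similarity_warp(prob_map, angle, scale, shift) * current_edges)
        if score > best_score:
            best_score, best = score, (angle, scale, shift)
    angle, scale, shift = best
    theta = np.deg2rad(angle)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    centre = np.array([(prob_map.shape[0] - 1) / 2.0,
                       (prob_map.shape[1] - 1) / 2.0])
    # Apply the found transformation to the scan geometry of the map.
    return (geometry_points - centre) @ rot.T * scale + centre + shift
```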
  • the apparatus may further comprise a user input arranged to collect user adjustments to the scan geometry; a realigner arranged to realign the current medical image by aligning its associated scan geometry with the scan geometry that is associated with the feature probability map; and an updater arranged to update the feature probability map with the current image.
  • a self-learning capability for feature probability maps and corresponding scan geometries has the potential to reduce the complicated and labour-intensive development of anatomical models.
  • the feature probability map and the corresponding scan geometry are created almost as a by-product of the daily work of a radiologist. Over time, the radiologist will notice that fewer corrections to the scan geometries suggested by the apparatus are necessary. This is because the apparatus gradually adapts to the radiologist's standards and preferences.
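A running average is one simple way such an update could be realised; the sketch below assumes the corrected current image has already been realigned into the coordinate frame of the feature probability map and that the number of contributing images is stored alongside the map. Names and the normalisation are assumptions.

```python
import numpy as np

def update_probability_map(prob_map, n_images, realigned_current_image):
    """Incorporate a realigned (and, if desired, edge-filtered) current image
    into the feature probability map as a running average.

    Returns the updated map and the new image count."""
    img = realigned_current_image.astype(np.float64)
    lo, hi = img.min(), img.max()
    img = (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    updated = (prob_map * n_images + img) / (n_images + 1)
    return updated, n_images + 1
```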
  • instead of acting on the feature probability map, it may be envisioned to determine those training images that primarily caused a deviation of the suggested scan geometry from the one preferred by the user. Then the circumstances of the creation of the training image and the corresponding scan geometry may be determined in order to find any systematic deviation.
  • a systematic deviation might be caused by different scanning equipment or different user preferences between the time and place of the acquisition of the training images and the acquisition of the current image.
  • the different embodiments of an apparatus according to the invention may also be implemented in software, hardware or a combination of both.
  • the different embodiments of an apparatus according to the invention may be part of medical diagnostic equipment.
  • Another embodiment of the invention is directed at a method for generating a feature probability map arranged to be employed in a method for automatic geometry planning for a medical diagnostic modality. The method comprises:
  • providing a plurality of training images, each image at least representing a same type of anatomical structure and having a scan geometry associated with it;
  • aligning the training images by aligning their respective scan geometries; and
  • combining image data of corresponding regions of the aligned training images to determine a probability value of a corresponding region of the feature probability map.
  • the feature probability map is arranged to be employed during the automatic planning of a scan geometry for a diagnostic scan.
  • the method performs preparatory work for the actual automatic scan planning procedure.
  • This preparatory work can be performed once in advance in an “offline” manner, or it can be performed in an “online” manner.
  • Training images are typically obtained from previous scans. It is desirable to have a high inter-patient variability and possibly intra-patient variability in the training images in order to cover a large range of situations.
  • Each training image should at least show the anatomical structure that was of interest when the training image was acquired.
  • the training image also contains other surrounding anatomical structures.
  • the claimed apparatus is prepared and designed to take these surrounding anatomical structures into account without requiring an elaborate segmentation or the like.
  • the scan geometry that is associated with each of the training images could be drawn into the training image, e.g. by using a unique color that is reserved for this object.
  • the scan geometry could alternatively be tagged to the training image or stored in a separate file or data set.
  • the scan geometry aligning may use manually adjusted geometries as registration of a set of images (scouts), which makes it possible to automatically build an anatomical atlas and a model of the expected image contrast for this specific scan geometry.
  • the parameters describing the aligning of a specific scan geometry are also applied to the corresponding training image. Accordingly, the training image is translated, rotated, scaled etc. in the same manner as the scan geometry.
  • the alignment parameters undergo certain transformations before they are applied to the training image. This will be described in more detail below.
  • the training images are basically stacked onto each other according to the respective alignment parameters. Accordingly, regions in the various training images that show the same type of anatomical structure can be expected to lie on top of one another, if these anatomical structures have a more or less invariant relation to the scan geometry over the plurality of training images.
  • a probability value is determined for each region of the feature probability map (e.g. each pixel or voxel). Encountering the same kind of image data in a large number of training images means a high probability of the anatomical structure being in that region. Accordingly, a high probability value is stored in the feature probability map at the region in question.
  • the respective scan geometries of the training images may be manually adjusted scan geometries.
  • the image data combiner may be arranged to perform filtering, superposing, averaging, and/or calculating a statistical moment of the image data in the corresponding regions of the training images.
  • the image data may represent image contrast, image color, and/or image gray value.
  • data relating to the scan geometry creation may be associated to each of the plurality of training images, respectively.
  • the method further comprises storing user preferences regarding modifications of scan geometries that are associated with a respective training image in a database, wherein the modifications depend on the data relating to the scan geometry creation.
  • the method further comprises linking scan geometries for different relevant anatomical structures in order to build and map an anatomical atlas.
  • Another embodiment of the present invention is directed to a method for automatic acquisition geometry planning for a medical diagnostic modality.
  • the method comprises: providing a current image; providing a feature probability map containing a spatial distribution of image features, wherein a scan geometry is associated with the feature probability map; aligning the current image and the feature probability map; and outputting the scan geometry associated with the feature probability map as a scan geometry that is based on the current image.
  • the aligning comprises performing a spatial transformation. In an embodiment the aligning comprises performing a rigid-and-scale registration of the features of the current image with the feature probability map. In an embodiment, the method further comprises collecting user adjustments to the scan geometry, realigning the current medical image by aligning its associated scan geometry with the scan geometry that is associated with the feature probability map, and updating the feature probability map with the current image.
  • Another embodiment of the present invention is directed at a computer program product having computer-executable instructions on it to cause a processor to perform the above described method.
  • the computer-executable instructions may be implemented in the form of software, notably in the form of software packages that upgrade already installed software to enable installed medical imaging systems and medical viewing stations to also operate according to the present invention.
  • Figure 1 presents a schematic view of an embodiment of the apparatus for generating a feature probability map according to the invention.
  • Figure 2 presents a flow diagram of a method for generating a feature probability map according to the invention.
  • Figure 3 presents a schematic view of an embodiment of the apparatus for geometry planning for a medical diagnostic modality according to the invention.
  • Figure 4 presents a flow diagram of a method for geometry planning for a medical diagnostic modality according to the invention.
  • Figure 5 schematically illustrates the generation of a feature probability map.
  • Figures 6 and 7 present exemplary input shapes for defining scan geometries.
  • Figure 8 schematically illustrates the creation of an anatomical atlas.
  • Figure 9 illustrates the procedure of creating a feature probability map with exemplary medical images.
  • Figure 10 illustrates the procedure of using a feature probability map with exemplary medical images.
  • Figure 1 shows an apparatus for generating a feature probability map.
  • An image database IDB stores medical images, such as scout or survey images. These images serve as training images TI.
  • the training images TI are displayed on a display DSP.
  • a user watching the display may now enter a scan geometry SG via a user interface UIF.
  • the created training set consisting of a training image and a corresponding scan geometry is stored in a storage ST.
  • Entering the scan geometries manually via user interface UIF as just described is only one of several options.
  • ready-to-use training sets may be purchased or obtained from third parties, such as research institutes, medical equipment manufacturers and the like.
  • Yet another alternative might be the determination of scan geometries by means of automatic image analysis.
  • the training scan geometries may be sent to an aligner ALGN (also: alignment block or alignment module) via connection 12.
  • the aligner ALGN aligns the scan geometries so that they coincide in the best possible manner. This may be achieved by a data fitting method, such as least squares fit. Since scan geometries are usually succinct, the necessary calculations are not excessively lengthy and numerous.
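Because a scan geometry is typically described by only a few points (e.g. the start and tip of an arrow), a closed-form least-squares similarity fit is sufficient to align two such geometries. The 2D Procrustes-style sketch below assumes known point correspondences; it is an illustration, not the patent's prescribed fitting method.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping the points `src` onto `dst`, both given as (N, 2) arrays with
    corresponding rows. Returns (s, R, t) with dst ≈ s * src @ R.T + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    a, b = src - src_c, dst - dst_c
    cov = b.T @ a
    u, sing, vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(u @ vt))          # avoid reflections
    diag = np.diag([1.0, d])
    rot = u @ diag @ vt
    scale = np.trace(diag @ np.diag(sing)) / np.sum(a ** 2)
    trans = dst_c - scale * src_c @ rot.T
    return scale, rot, trans

# Align the arrow of one training geometry onto a reference arrow.
reference = np.array([[0.0, 0.0], [0.0, 10.0]])
training = np.array([[2.0, 1.0], [7.0, 1.0]])
s, R, t = fit_similarity(training, reference)
print(s * training @ R.T + t)   # ≈ reference
```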
  • the aligner produces a set of alignment parameters for each of the training scan geometries. These parameters are sent to a combiner CMB via connection 16. The combiner also has another input that is supplied with the corresponding training images via connection 14. Each training image is transformed according to the respective parameters received via connection 16.
  • the aligned scan geometries define a common spatial reference for all of the training images.
  • a feature probability map PM is created by combining image data of all of the training images.
  • the combination of image data happens in corresponding sub-divisional regions of the training images, e.g. pixel-wise or voxel-wise.
  • the combination of image data may be or comprise a superposition, an averaging, and/or a filtering of the image data.
  • manually adjusted geometries will be aligned with respect to some relevant anatomical structures.
  • the feature probability map is not restricted to reflect a probability distribution of the image data. It is also possible to perform a preliminary image analysis on the training images, such as deformable shape analysis. In a subsequent step, parameters of the deformable shapes (position, orientation, size, etc. ) may then be used as input data for the calculation of the feature probability map. Therefore, defining the feature probability map as containing a spatial distribution of image features should be understood in this more general sense.
  • In Figure 2, the method for generating a feature probability map is illustrated in the form of a flow diagram.
  • Block 21 contains the text "Provide training images”.
  • Block 22 contains the text "Align scan geometries”.
  • Block 23 contains the text "Align training images according to scan geometry alignment”.
  • Block 24 contains the text "Combine image data of corresponding regions”.
  • Block 25 contains the text "Assemble feature probability map”. For a more detailed discussion reference is made to the description of Figure 1.
  • In Figure 3, an apparatus for geometry planning for a medical diagnostic modality is shown.
  • a current image CI is provided to an input interface for a current image 32.
  • An input interface for a feature probability map 33 receives a feature probability map PM from the database DB.
  • a corresponding scan geometry SG is also retrieved from the database DB.
  • the current image CI and the feature probability map are sent to an aligner 34 via connections 36 and 37, respectively.
  • the aligner 34 tries to align the feature probability map to the current image. If a sufficiently good alignment can be achieved, any parameters that determine how the feature probability map had to be transformed in order to match the current image are passed on to an output interface 35.
  • the transformation parameters are also valid for the scan geometry.
  • In Figure 4, the method for geometry planning for a medical diagnostic modality is illustrated in the form of a flow diagram. Block 41 contains the text "Provide current image”.
  • Block 42 contains the text "Provide feature probability map with associated scan geometry”.
  • Block 43 contains the text "Align current image and feature probability map”.
  • Block 44 contains the text "Apply alignment information to associated scan geometry”.
  • Block 45 contains the text "Use associated scan geometry as scan geom. for current image”.
  • In Figure 5, three training images TI1, TI2, and TI3 are represented.
  • Each of the training images contains an anatomical structure of interest 61, 62, 63 and another anatomical structure of less interest (51, 52, 53).
  • the anatomical structures of interest 61, 62, 63 differ slightly in their shape, especially at the upper end.
  • a user has already manually determined scan geometries 65 in each of the three training images TI1, TI2, and TI3.
  • each scan geometry is represented as an arrow, which allows the definition of a position, a direction, and a length.
  • the arrow 65 is the same in all three training images, since all three arrows are already aligned so that they match one another.
  • the combination of the image data in the feature probability map PM shows the representation 64 of the anatomical structure of interest.
  • the representation 64 shows two regions, a first region of high probability 64a and a second region of lower probability 64b.
  • the combination of image data shows that there is only a very small region 54a that has an elevated probability.
  • the large remaining area 54b has only a relatively small probability. In terms of image sharpness and image blur, that would mean that the low-interest anatomical structure representation 54 is blurred.
  • Figure 6 shows a shape that can be used to define a scan geometry.
  • the shape is an ellipsoid having a longitudinal dimension 69 and a lateral dimension 68. Furthermore, an angle 66 between the direction of the longitudinal dimension 69 and a reference direction 67 defines the orientation of the scan geometry defining shape.
  • the end of the shape that is drawn in a thick line serves as a reference for the orientation of the shape.
  • Figure 6 shows a two-dimensional shape. However, three-dimensional or higher-dimensional shapes are of course also possible.
  • Figure 7 shows a rectangle as another scan geometry defining shape with longitudinal dimension 79 and lateral dimension 78.
  • the thick line indicates a reference for the orientation of the scan geometry. For more details reference is made to Figure 6.
  • scan geometry defining shapes can be rather simple in the context of the present invention. The reason is that they primarily serve to indicate the location, size and orientation of an anatomical structure of interest. The finer details of the anatomical structure are taken into consideration by the training images. Since the scan geometry defining shapes are simple, they can be quickly and easily entered by the user.
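A data structure for such a simple scan geometry defining shape could look like the sketch below, which captures only the position, orientation angle and longitudinal/lateral dimensions shown in Figures 6 and 7; the field names are assumptions.

```python
from dataclasses import dataclass
import math

@dataclass
class ScanGeometryShape:
    """Simple scan geometry defining shape (ellipse or rectangle): only the
    location, size and orientation of the anatomical structure of interest."""
    centre_x: float
    centre_y: float
    longitudinal: float      # e.g. dimension 69/79 in Figures 6 and 7
    lateral: float           # e.g. dimension 68/78
    angle_deg: float         # angle 66 to the reference direction 67

    def axis_endpoints(self):
        """End points of the longitudinal axis; the first one corresponds to
        the thick-line end used as orientation reference."""
        theta = math.radians(self.angle_deg)
        dx = 0.5 * self.longitudinal * math.cos(theta)
        dy = 0.5 * self.longitudinal * math.sin(theta)
        return ((self.centre_x + dx, self.centre_y + dy),
                (self.centre_x - dx, self.centre_y - dy))

roi = ScanGeometryShape(centre_x=128, centre_y=96, longitudinal=80,
                        lateral=40, angle_deg=30)
print(roi.axis_endpoints())
```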
  • Figure 8 illustrates in a schematic way how an anatomical atlas could be built using the present invention.
  • Figure 8 contains three columns A, B, and C.
  • Each column contains a number of training images (of which only two per column are depicted) for a specific anatomical structure.
  • the anatomical structure dealt with in column A is the right femur (patient perspective, i.e. the femur on the left hand side of the image).
  • the anatomical structure dealt with in column B is the pelvic bone.
  • the anatomical structure dealt with in column C is the left femur.
  • each column is processed as described above in connection with the generation of a feature probability map. Accordingly, a feature probability map is obtained for each of the columns A, B, and C.
  • the three feature probability maps of columns A, B, and C are combined and sent to a filter FIL in order to determine regions with high probability values in the different feature probability maps. Subsequently they are sent to a registration unit REG where they are aligned. Alignment can be achieved e.g. by having the user define reference points in each of the feature probability maps.
  • the result of the registration module REG is stored in the form of links between the different feature probability maps. These links may be in the form of non-rigid deformation fields.
  • the linked feature probability maps form an anatomical atlas ATL.
  • Figure 9 shows model building using manually adjusted scan geometries.
  • the upper row contains original scout images and manually adjusted geometries (white dashed squares).
  • the middle row illustrates the feature extraction, which in the present case is an edge detection.
  • the lower row illustrates three training images that are mapped into a common coordinate system using manually adjusted geometries.
  • the image to the lower right depicts the resulting model. It can be observed that relevant anatomical structures are sharp, whereas less relevant structures are blurred out.
  • Figure 10 shows a sequence of images as they occur during scan planning.
  • the left image is a new survey image (scout image) as the current image.
  • the next image to the right shows the feature extraction, which in the present case is an edge detection.
  • the third image illustrates the current image and the feature probability map after a rigid + scale registration of the model.
  • the image to the right shows the mapped scan geometry of the model applied to the current image.
  • MRI systems are the main application area of automated geometry planning apparatuses, methods, and software.
  • Other imaging modalities (e.g. ultrasound, computed tomography) and medical viewing workstations could use such a technology as well.
  • the described method of learning relevant anatomical structures from geometry samples could be used whenever similar images of the same anatomy from possibly different patients, or even animals, are acquired or visualized.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Theoretical Computer Science (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Multimedia (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present invention relates to an apparatus for generating a feature probability map arranged to be employed during automatic geometry planning for a medical diagnostic modality. The apparatus comprises a storage for a plurality of training images, each training image representing at least a same type of anatomical structure and having a scan geometry associated with it; a scan geometry aligner arranged to align the training images by aligning their respective scan geometries; and an image data combiner arranged to combine image data of corresponding regions of the aligned training images in order to determine a probability value of a corresponding region of the feature probability map. The invention also relates to a corresponding method, as well as to an apparatus and a method for geometry planning for a medical diagnostic modality. The invention further relates to a computer program product.
PCT/IB2008/051527 2007-04-23 2008-04-21 Scan planning with self-learning capability WO2008129506A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP07106685.6 2007-04-23
EP07106685 2007-04-23

Publications (1)

Publication Number Publication Date
WO2008129506A1 (fr) 2008-10-30

Family

ID=39745514

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2008/051527 WO2008129506A1 (fr) 2008-04-21 Scan planning with self-learning capability

Country Status (1)

Country Link
WO (1) WO2008129506A1 (fr)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1657681A1 (fr) * 2004-11-10 2006-05-17 Agfa-Gevaert Method of performing measurements on digital images
US20070081712A1 (en) * 2005-10-06 2007-04-12 Xiaolei Huang System and method for whole body landmark detection, segmentation and change quantification in digital images


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2729071B1 (fr) * 2011-07-06 2020-05-20 Koninklijke Philips N.V. Follow-up image acquisition planning and/or post-processing
WO2014027243A2 (fr) * 2012-08-15 2014-02-20 Questor Capital Holdings Ltd. Probability mapping system
WO2014027243A3 (fr) * 2012-08-15 2014-04-17 Questor Capital Holdings Ltd. Probability mapping system
US9378462B2 (en) 2012-08-15 2016-06-28 Questor Capital Holdings Ltd. Probability mapping system
JP2019521439A (ja) * 2016-06-30 2019-07-25 Koninklijke Philips N.V. Generation and personalization of a statistical chest model
JP7079738B2 (ja) 2016-06-30 2022-06-02 Koninklijke Philips N.V. Generation and personalization of a statistical chest model
JP7079738B6 (ja) 2016-06-30 2022-06-23 Koninklijke Philips N.V. Generation and personalization of a statistical chest model
WO2018138065A1 (fr) 2017-01-30 2018-08-02 Koninklijke Philips N.V. Machine learning system for scan planning


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08737934

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08737934

Country of ref document: EP

Kind code of ref document: A1