WO2012107057A1 - Image processing device


Info

Publication number
WO2012107057A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
region
compression
processing device
compressed
Application number
PCT/EP2011/000635
Other languages
French (fr)
Inventor
Guido VAN SCHIE
Nico Karssemeijer
Thorsten Twellmann
Original Assignee
Mevis Medical Solutions Ag
Stichting Katholieke Universiteit
Application filed by Mevis Medical Solutions Ag, Stichting Katholieke Universiteit
Priority to PCT/EP2011/000635
Publication of WO2012107057A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10112 Digital tomosynthesis [DTS]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30068 Mammography; Breast

Definitions

  • the invention relates to an image processing device, an image processing method and an image processing computer program for finding corresponding regions in two three-dimensional images.
  • Mammography is the current first line modality for breast cancer detection. It is common practice to acquire two 2D images of each breast (commonly referred to as mediolateral-oblique (MLO) view and craniocaudal (CC) view) from two different viewpoints and with two different compression settings of the breast. The direction of compression force is always orthogonal to the detector plate.
  • Each mammogram is a 2D projection image of the entire breast which therefore provides no depth information orthogonal to the image plane.
  • Tomosynthesis is a new modality for 3D imaging which combines aspects of mammography and computed tomography and is expected to overcome some limitations of conventional mammography.
  • in tomosynthesis, the breast is commonly also slightly compressed and imaged from two different perspectives. The imaging perspective and the direction of the compression force coincide. Instead of acquiring a single projection image for each perspective, the x-ray tube is moved in a small arc and a small number of images are taken. From these images, a 3D image volume can be reconstructed for each perspective which has a high spatial resolution in two dimensions, but a rather low spatial resolution in the third, orthogonal dimension due to the limited length of the arc.
  • each tomosynthesis study commonly consists of two 3D image volumes (ipsilateral views) for each breast, which cannot be reformatted into arbitrary oblique planes without significant degradation of the image quality.
  • the orientation with the best image quality is referred to as the major image plane.
  • an image processing device for finding corresponding regions in two three-dimensional images of an object is presented, wherein the image processing device comprises:
  • an image providing unit for providing a three-dimensional, first image of the object and a three-dimensional, second image of the same object, wherein in the first image the object is shown compressed in a first compression direction and in the second image the object is shown compressed in a second compression direction,
  • a model providing unit for providing an analytical compression model of the object, wherein the compression model is adapted to model the effect of pressure applied to the object,
  • a first region determination unit for determining a first region in the first image
  • a matching unit for matching the compression model to the first image, thereby registering the compression model compressed in the first compression direction with the object shown in the first image and determining the position of the first region in the compressed compression model
  • a second region determination unit for determining a second region, which corresponds to the first region in the first image, in the second image, the second region determination unit being adapted to:
  • the compression model which at least approximately describes the compression in the two compression directions and which can be matched to the respective first and second images, and by simulating the steps of compressing in the first compression direction, decompressing and compressing in the second compression direction, two corresponding regions in the two images showing the same object can be determined very accurately, although the two images show the object compressed in different compression directions.
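The compress, decompress and recompress steps summarized above can be sketched as a minimal mapping. This is an illustrative sketch only, not the patent's actual implementation: it assumes the simplest volume-preserving affine (uniaxial) deformation, and the function names and the strain parameter `lam` are hypothetical.

```python
import numpy as np

def compress(p, d, lam):
    """Volume-preserving uniaxial compression: stretch `lam` (< 1) along
    unit direction d, expansion by 1/sqrt(lam) in the orthogonal plane."""
    p = np.asarray(p, float)
    d = np.asarray(d, float) / np.linalg.norm(d)
    along = np.dot(p, d) * d
    ortho = p - along
    return lam * along + ortho / np.sqrt(lam)

def decompress(p, d, lam):
    """Inverse of `compress`: undo the uniaxial deformation along d."""
    p = np.asarray(p, float)
    d = np.asarray(d, float) / np.linalg.norm(d)
    along = np.dot(p, d) * d
    ortho = p - along
    return along / lam + ortho * np.sqrt(lam)

def map_region(p_first, dir_first, dir_second, lam):
    """Map a point marked in the first compressed view to the second view:
    undo the compression along dir_first, then compress along dir_second."""
    return compress(decompress(p_first, dir_first, lam), dir_second, lam)
```

When both compression directions coincide, the mapping reduces to the identity, as expected for a point marked and re-found in the same view.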
  • the image processing apparatus can therefore be used for assisting a user in finding corresponding regions in these two images.
  • the image providing unit can be adapted to provide more than two images, for example, the image providing unit can further provide a third image and a fourth image showing the same object compressed in other respective compression directions. If a first region has been determined in the first image, the compression model can be adapted to determine third and fourth regions, which correspond to the first region, in the third and fourth images.
  • the image providing unit can be a storage unit, in which the images are stored already, and/or a receiving unit for receiving images via a wireless or wired data connection and for providing the received images for processing the same.
  • the image providing unit can also be an image generation apparatus for generating the images.
  • the image providing unit can be a tomosynthesis apparatus for generating tomosynthesis images, in particular, of a breast.
  • the image providing unit can also be another imaging unit for generating three-dimensional images like a magnetic resonance imaging unit, a computed tomography imaging unit, a nuclear imaging unit, for example, a positron emission tomography unit or a single photon emission tomography unit, an ultrasound imaging unit, et cetera.
  • the terms "matching" and "registering" refer preferentially to a process in which the compression model is adapted to the compressed object as shown in the respective image.
  • this process preferentially includes the computation of certain model parameters, based on characteristics of the compressed object like the volume of the compressed object, from the image data, and aligning the compression model and its coordinate system to suitable landmarks in the image like, if the object is a breast, the nipple position, the pectoral line and the nipple-pectoral line.
  • further internal structure of the object is preferentially not considered during the matching and registration procedures.
  • the terms "matching" and "registering" refer preferentially to a process which includes aligning a coordinate system of the compression model and, thus, aligning the compression model with the compressed object shown in the respective image based on landmarks detectable in the respective image, and estimating a strain ratio parameter, which is indicative of a ratio of the radius of the sphere at the equator with and without compression, from the volume of the compressed object in the respective image and a simulated two-dimensional projection of the compressed object.
  • the landmarks are preferentially, as already mentioned above, the nipple position, the pectoral line and the nipple-pectoral line.
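A model coordinate system can, for instance, be anchored at these landmarks. The following sketch is an illustrative assumption, not the construction used in the patent: it places the origin at the foot of the perpendicular dropped from the nipple onto the pectoral line, with one axis along the nipple-pectoral line and one along the pectoral line; all names are hypothetical.

```python
import numpy as np

def breast_frame(nipple, pect_point, pect_dir):
    """Build an orthonormal landmark frame (2D slice geometry).

    nipple     : nipple position
    pect_point : any point on the detected pectoral line
    pect_dir   : direction of the pectoral line
    Returns (origin, x_axis, y_axis): origin at the foot of the perpendicular
    from the nipple onto the pectoral line, x along the nipple-pectoral line,
    y along the pectoral line.
    """
    nipple = np.asarray(nipple, float)
    pect_point = np.asarray(pect_point, float)
    d = np.asarray(pect_dir, float)
    d = d / np.linalg.norm(d)

    # Foot of the perpendicular from the nipple onto the pectoral line.
    origin = pect_point + np.dot(nipple - pect_point, d) * d
    x_axis = nipple - origin            # nipple-pectoral line direction
    x_axis = x_axis / np.linalg.norm(x_axis)
    return origin, x_axis, d
```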
  • the compression model represents in its decompressed state a sphere or an ellipsoid or a part of a sphere or of an ellipsoid.
  • the compression model represents in its decompressed state a hemisphere.
  • the sphere comprises poles defined by intersections of a line, which is aligned with the first compression direction and which traverses the center of the sphere, with the surface of the sphere, while the compression model is compressed in the first compression direction, and wherein the second region determination unit is adapted to:
  • when the compression model is compressed, it is preferentially assumed that the compression model is located between two parallel plates, which are forced towards each other in the respective compression direction, for compressing the compression model.
  • the first image is acquired in a first acquisition direction and the second image is acquired in a second acquisition direction, wherein the first compression direction and the first acquisition direction are substantially the same and the second compression direction and the second acquisition direction are substantially the same.
  • the compression model preferentially models a volume-preserving, homogeneous, rubber-like material.
  • a Neo-Hookean model material can be assumed, which is a hyperelastic model material which can be used to describe the non-linear relationship between stress and strain for materials undergoing large deformations.
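For reference, the standard textbook relation for an incompressible Neo-Hookean solid under uniaxial stretch lam (lam < 1 for compression) gives the Cauchy stress sigma = mu * (lam^2 - 1/lam), with shear modulus mu. The sketch below only restates this textbook formula; it is not code from the patent.

```python
def neo_hookean_uniaxial_stress(lam, mu):
    """Cauchy stress of an incompressible Neo-Hookean solid under uniaxial
    stretch `lam` with shear modulus `mu` (standard textbook relation).
    Negative values indicate compressive stress."""
    return mu * (lam ** 2 - 1.0 / lam)
```

The stress vanishes in the undeformed state (lam = 1) and is compressive for lam < 1.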
  • the compression model preferentially defines deformation functions of all locations in a hemisphere and, thus, for symmetry reasons, in a complete sphere, when it is compressed and decompressed. Preferentially, with these deformation functions and a simple rotation, corresponding regions in ipsilateral views, i.e. the first and second images, can be computed very quickly.
  • the image providing unit is adapted to provide tomosynthesis images, which correspond to different acquisition directions, as the first and second images.
  • the tomosynthesis images are two different views of a single object.
  • the image providing unit is adapted to provide first and second images of a breast as the first and second images.
  • one of the first image and the second image corresponds to one of a craniocaudal view (CC), a mediolateral-oblique view (MLO) and a mediolateral view (ML) and the other of the first image and the second image corresponds to another one of the craniocaudal view (CC), the mediolateral-oblique view (MLO) and the mediolateral view (ML).
  • the first image and the second image can also be two different three-dimensional images, i.e. views, which have been acquired at different times, in particular, at different dates, wherein the acquisition geometry, in particular, the acquisition direction, can be the same or can be different for the first image and the second image.
  • the matching unit is adapted to segment the compressed breast, detect the contour of the pectoral muscle of the compressed breast, determine the breast center of mass or the nipple position, and estimate the volume of the compressed breast in the first image and adapt the compressed compression model such that it corresponds to the segmented compressed breast, the detected contour of the pectoral muscle, the determined breast center of mass or the determined nipple position, and the estimated volume of the compressed breast in the first image, and
  • the second region determination unit is adapted to segment the compressed breast, detect the contour of the pectoral muscle of the compressed breast, determine the breast center of mass or the nipple position, and estimate the volume of the compressed breast in the second image and simulate the compression of the compression model such that it corresponds to the segmented compressed breast, the detected contour of the pectoral muscle, the determined breast center of mass or the determined nipple position, and the estimated volume of the compressed breast in the second image.
  • the segmentation of the compressed breast just includes determining the skin fold of the breast.
  • the matching unit and/or the second region determination unit are adapted to perform the segmentation of the compressed breast manually, semi-automatically or full-automatically using suitable tools and algorithms.
  • the first region determination unit comprises a user interface allowing a user to determine the first region in the first image.
  • the first region determination unit can comprise a graphical user interface which allows the user together with a mouse, keyboard, touch screen or another input device to select the first region in the first image.
  • the first region determination unit comprises a marker providing unit, e.g. a computer-aided detection (CAD) unit or a unit providing structured reports generated by another reader, for determining a marker defining the first region in the first image.
  • the marker providing unit can determine several marks shown in the first image and the first region determination unit can be adapted to allow a user to select one of these marks as the first region.
  • the image processing device further comprises a display for displaying the first region in the first image and the second region in the second image.
  • the second image consists of multiple slice images, wherein the image processing device further comprises a slice determination unit for determining a slice of the second image which shows the second region, wherein the display is adapted to show the determined slice.
  • the display preferentially shows a marker at the second region in the determined slice.
  • the image processing device further comprises an uncertainty determination unit for determining an uncertainty of determining the second region in the second image, wherein the display is adapted to indicate the uncertainty in the second image.
  • a margin can be displayed around the second region, wherein the size of the margin depends on the determined uncertainty.
  • the image processing device further comprises a tool providing unit for providing a tool to be used in the first image and in the second image, wherein the display is adapted to show the tool at the first region in the first image and at the second region in the second image.
  • the tool is, for example, a local digital magnifier, a cross hair, a peep-hole view, et cetera.
  • the image processing device further comprises a sub-images determination unit for determining sub-images of the first image, wherein the first region determination unit is adapted to allow a user to select a sub-image of the first image, wherein the selected sub-image defines the first region in the first image, wherein the sub-images determination unit is adapted to determine a sub-image of the second image covering the determined second region and wherein the display is adapted to show the selected sub-image in the first image and the determined sub-image in the second image.
  • the first region determination unit is adapted to allow a user to consecutively select all sub-images of the first image, wherein the sub-images determination unit is adapted to determine for each selected sub-image of the first image a corresponding sub-image of the second image, which is shown on the display, wherein the sub-images determination unit is further adapted to determine unshown sub-images of the second image, which have not been shown on the display while all sub-images of the first image have consecutively been selected, and wherein the display is adapted to show the unshown sub-images of the second image.
  • the image processing device comprises a finding providing unit for providing findings in the first image and the second image, wherein the first region determination unit is adapted to determine the region of a first finding in the first image as the first region, wherein the image processing device further comprises a grouping unit for grouping the first finding and a second finding in the second image into a group of findings, if the distance between the position of the second finding in the second image and the position of the second region is smaller than a predefined threshold.
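The grouping criterion above amounts to a simple distance threshold test against the mapped second region. A sketch with hypothetical names:

```python
import numpy as np

def group_findings(second_region_pos, finding_positions, threshold):
    """Return indices of findings in the second image lying closer than
    `threshold` to the determined second region, i.e. the findings to be
    grouped with the first finding."""
    p = np.asarray(second_region_pos, float)
    return [i for i, q in enumerate(finding_positions)
            if np.linalg.norm(np.asarray(q, float) - p) < threshold]
```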
  • the image processing device can comprise a group classification unit for classifying a group of findings based on features of the findings of the group and on predefined group classification rules.
  • an image processing method for finding corresponding regions in two three-dimensional images of an object comprises:
  • the compression model is adapted to model the effect of pressure applied to the object
  • an image processing computer program for finding corresponding regions in two three-dimensional images is presented, wherein the computer program comprises program code means for causing an image processing device as defined in claim 1 to carry out the steps of the image processing method as defined in claim 19, when the computer program is run on a computer controlling the image processing device.
  • Fig. 1 shows schematically and exemplarily an embodiment of an image processing device for finding corresponding regions in two three-dimensional images of an object
  • Fig. 2 shows a schematic overview of a determination of a second region in a second image, which corresponds to a first region in a first image
  • Fig. 3 shows schematically and exemplarily a decompressed compression model and a compressed compression model
  • Fig. 4 shows exemplarily landmarks of a breast and a corresponding coordinate system
  • Fig. 5 shows exemplarily and schematically two-dimensional slices of three-dimensional first and second images and corresponding first and second regions in these slices, and
  • Fig. 6 shows a flowchart exemplarily illustrating an embodiment of an image processing method for finding corresponding regions in two three-dimensional images of an object.
  • Fig. 1 shows schematically and exemplarily an image processing device for finding corresponding regions in two three-dimensional images of an object.
  • the image processing device 1 comprises an image providing unit 2 for providing a three-dimensional, first image of the object and a three-dimensional, second image of the same object, wherein in the first image the object is shown compressed in a first compression direction and in the second image the object is shown compressed in a second compression direction.
  • the image providing unit 2 is a storage unit, in which the images are stored already.
  • the image providing unit can also be a receiving unit for receiving images via a wireless or wired data connection and for providing the received images for processing the same.
  • the image providing unit can also be an image generation apparatus for generating the images.
  • the image providing unit can be a tomosynthesis apparatus for generating tomosynthesis images, in particular, of a breast.
  • the image providing unit can also be another imaging unit for generating three-dimensional images like a magnetic resonance imaging unit, a computed tomography imaging unit, a nuclear imaging unit, for example, a positron emission tomography unit or a single photon emission tomography unit, an ultrasound imaging unit, et cetera.
  • the image providing unit 2 is adapted to provide tomosynthesis images as the first and second images.
  • the provided tomosynthesis images are different views of a single object, which is, in this embodiment, a breast.
  • one of the first image and the second image corresponds to one of a CC-view, a MLO-view and a ML-view and the other of the first image and the second image corresponds to another of the CC-view, the MLO-view and the ML-view.
  • the image processing device 1 further comprises a model providing unit 3 for providing an analytical compression model of the object, wherein the compression model is adapted to model the effect of pressure applied to the object.
  • the compression model models a breast, wherein the compression model represents in its decompressed state a sphere or a part of a sphere.
  • the compression model represents in its decompressed state a sphere or hemisphere.
  • the compression model can represent an ellipsoid or a part of an ellipsoid.
  • the compression model models a volume-preserving, homogeneous, rubber-like material.
  • a Neo-Hookean model material is preferentially assumed, which is a hyperelastic model material that can be used to describe the non-linear relationship between stress and strain for materials undergoing large deformations.
  • the compression model preferentially defines deformation functions of all locations in a hemisphere and, thus, for symmetry reasons in a complete sphere, when it is compressed and decompressed.
  • the image processing device also includes a first region determination unit 4 for determining a first region in the first image.
  • the first region determination unit 4 comprises a user interface 9 allowing the user to determine the first region in the first image.
  • the first region determination unit 4 comprises a graphical user interface, which allows a user together with an input unit like a mouse, a keyboard, a touchscreen, etc. to select the first region in the first image, which is shown on a display 7. The user can, for example, select a suspicious region in the first image as the first region.
  • the first region determination unit 4 can comprise a marker providing unit 18, e.g. a CAD unit or a unit providing structured reports, for determining a mark defining the first region in the first image.
  • the marker providing unit 18 can determine several marks shown in the first image and the first region determination unit 4 can be adapted to allow a user to select via the graphical user interface 9 one of these marks as the first region.
  • the image processing device 1 further comprises a matching unit 5 for matching the compression model to the first image, thereby registering the compression model compressed in the first compression direction with the object shown in the first image and determining the position of the first region in the compressed compression model.
  • a second region determination unit 6 determines a second region, which corresponds to the first region in the first image, in the second image.
  • the second region determination unit 6 is adapted to simulate a decompression of the compression model, simulate a compression of the compression model in the second compression direction, such that the compressed compression model is registered with the object shown in the second image, and determine a region in the second image, which corresponds to the first region in the compression model compressed in the second compression direction, as the second region.
  • the image providing unit 2 can be adapted to provide more than two images, for example, the image providing unit 2 can further provide a third image and a fourth image showing the same object compressed in other respective compression directions. If a first region has been determined in the first image, the compression model can be adapted to determine third and fourth regions, which correspond to the first region, in the third and fourth images.
  • the compression model is preferentially a sphere comprising poles defined by intersections of a line, which is aligned with the first compression direction and which traverses the center of the sphere, with the surface of the sphere, while the compression model is compressed in the first compression direction.
  • the second region determination unit 6 is adapted to rotate, after decompression, the decompressed compression model to an orientation, in which an axis through the poles of the sphere is aligned with the second compression direction and to simulate the compression of the compressed model in the second compression direction such that the compressed compression model is registered with the object shown in the second image by simulating a uni-axial force applied to the poles of the sphere in the second compression direction.
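The rotation that aligns the pole axis with the second compression direction can be computed, for example, with Rodrigues' formula. This is an illustrative sketch, not the patent's implementation; the anti-parallel degenerate case is deliberately not handled.

```python
import numpy as np

def rotation_between(a, b):
    """Rotation matrix taking unit vector a to unit vector b (Rodrigues'
    formula). Used here to illustrate aligning the sphere's pole axis with
    the second compression direction; a == -b is not handled."""
    a = np.asarray(a, float) / np.linalg.norm(a)
    b = np.asarray(b, float) / np.linalg.norm(b)
    v = np.cross(a, b)                      # rotation axis (unnormalized)
    c = np.dot(a, b)                        # cosine of rotation angle
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])      # cross-product matrix of v
    return np.eye(3) + K + K @ K / (1.0 + c)
```

Applying the returned matrix to the first pole axis yields the second compression direction, after which the uni-axial compression can be simulated along that axis.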
  • the second region determination unit is adapted to rotate the decompressed compression model and simulate the compression of the compressed model accordingly.
  • the matching unit 5 is adapted to determine the skin fold of the compressed breast, detect the contour of the pectoral muscle of the compressed breast, determine the breast center of mass or the nipple position, and estimate the volume of the compressed breast in the first image and to adapt the compressed compression model such that it corresponds to the determined skin fold of the compressed breast, the detected contour of the pectoral muscle, the determined breast center of mass or the determined nipple position, and the estimated volume of the compressed breast in the first image.
  • the second region determination unit 6 is preferentially adapted to determine the skin fold of the compressed breast, detect the contour of the pectoral muscle of the compressed breast, determine the breast center of mass or the nipple position, and estimate the volume of the compressed breast in the second image and to simulate the compression of the compression model such that it corresponds to the determined skin fold of the compressed breast, the detected contour of the pectoral muscle, the determined breast center of mass or the determined nipple position, and the estimated volume of the compressed breast in the second image.
  • the matching unit 5 and the second region determination unit 6 can be adapted to perform the detection and segmentation of the landmarks in the compressed breast manually, semi-automatically or full-automatically using known tools and algorithms.
  • the display 7 is adapted to display the first region in the first image and the second region in the second image.
  • the second image consists of multiple slice images, wherein the image processing device 1 comprises a slice determination unit 10 for determining a slice of the second image, which shows the second region, wherein the display 7 is adapted to show the determined slice.
  • the display 7 preferentially shows a marker at the second region in the determined slice.
  • the first and second images are ipsilateral views of the breast in one study.
  • Each view is acquired from a different perspective with different compression.
  • the image providing unit can provide more than one study. For example, two studies of the same breast, referred to as current and prior, which have been acquired at different times, can be provided. Each study may contain two or more views of the same breast. The number of views in each study may be different. The perspective and compression may also be different.
  • the image processing device can be adapted to determine an approximate start point, i.e. a three-dimensional coordinate, for searching for corresponding structures in the ipsilateral views.
  • the image processing device 1 can estimate a three-dimensional coordinate in the other ipsilateral view, i.e. in the second image, which corresponds to the position that was, for example, marked by a user by clicking with a cursor device or by a full-automatic detection algorithm in the first image. Based on the estimated three-dimensional coordinate, i.e. based on the determined second region, the closest image slice can be selected from the second image being a three-dimensional image volume, wherein the selected closest image slice may be shown on the display 7.
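Selecting the closest image slice from the estimated coordinate is essentially a rounding operation. A sketch under the assumption that slice i is centred at z0 + i * spacing; all names and the geometry are hypothetical.

```python
def closest_slice(z_mm, slice_spacing_mm, num_slices, z0_mm=0.0):
    """Index of the reconstructed slice closest to an estimated z coordinate,
    clamped to the valid slice range. Assumes slice i is centred at
    z0_mm + i * slice_spacing_mm."""
    i = round((z_mm - z0_mm) / slice_spacing_mm)
    return max(0, min(num_slices - 1, i))
```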
  • since the image processing device 1 determines the second region in the second image by using a compression model, which approximates the breast, the resulting determined second region is only an approximation of the true corresponding position.
  • the image processing device 1 therefore preferentially further comprises an uncertainty determination unit 11 for determining an uncertainty of determining the second region in the second image, wherein the display 7 is adapted to indicate the uncertainty in the second image.
  • the estimated three-dimensional coordinate, i.e. the determined second region, may be depicted as a point, by using, for example, a cross hair in the closest image slice.
  • the estimated three-dimensional coordinate may also be displayed as a three-dimensional volume like a sphere, with an extent that reflects the uncertainty of the determination of the second region in the second image, i.e. that reflects certain statistics about the method's estimation error.
  • the statistics may be estimated from a database of images with ground truth information.
  • the image slice closest to the estimated coordinate may be selected by the slice determination unit 10 as the initial slice.
  • a projection image of a slab may be displayed, which has the same thickness as the extent of the three-dimensional volume in the direction orthogonal to the image plane. In this way, a user can see the entire three-dimensional volume, in which the true position is likely to be located, without further scrolling through the image stack.
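Such a slab display can be sketched as a projection over the slices covered by the uncertainty volume. A maximum-intensity projection is one plausible choice of projection operator (the patent does not specify one); names are illustrative.

```python
import numpy as np

def slab_projection(volume, center_slice, half_thickness):
    """Maximum-intensity projection of a slab of slices around the estimated
    position, so the whole uncertainty volume is visible without scrolling.
    `volume` is a (slices, rows, cols) array; slab bounds are clamped."""
    lo = max(0, center_slice - half_thickness)
    hi = min(volume.shape[0], center_slice + half_thickness + 1)
    return volume[lo:hi].max(axis=0)
```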
  • the first region determination unit 4 can be adapted to allow a user to outline a two-dimensional region or a three-dimensional region, potentially with the support of a full-automatic or semi-automatic computer tool, instead of marking a single three-dimensional coordinate, as the first region in the first image.
  • the image processing device can be adapted to map all or a subset of the image elements, i.e. the voxels, in the first region to the one or more other ipsilateral views, in particular, to the second image.
  • the mapped coordinates are displayed, for example, with a certain color.
  • a hull, for example a convex hull, enclosing all sub-regions may be computed and displayed on the display 7.
  • the regions may be depicted as solid, for example, colored or translucent, regions or as contours or surfaces.
  • an additional uncertainty margin can be added to the determined second region, which increases its extent. If the second region comprises several sub-regions, each of these sub-regions may comprise an uncertainty margin reflecting the respective uncertainty.
  • the depicted second region, in particular, the depicted sub-regions, i.e. the depicted volumes, in the second image reflect the areas, in which the structure, that was outlined in the original view, is likely to be located.
  • the image processing device 1 further comprises a tool providing unit 12 for providing a tool to be used in the first image and the second image, wherein the display 7 is adapted to show the tool at the first region in the first image and at the second region in the second image.
  • the image processing device can therefore be utilized for synchronizing tools, which are simultaneously used in two or more images, i.e. in two or more views.
  • the provided tool is, for example, a local digital magnifier that magnifies a small sub-region of an image for the purpose of analyzing local image details.
  • the image processing device can be adapted to simultaneously magnify and depict in the same way also the corresponding region in each other ipsilateral view, in particular, in the second image, as it reflects complementary information.
  • the user can move the selected tool to a certain position in one view, while the tools in the corresponding other views are manipulated in real-time by computing the corresponding positions.
  • the tool providing unit 12 can be adapted to provide other tools.
  • cross hairs can be provided, wherein cross hairs are synchronized between views to browse simultaneously through image stacks of the ipsilateral views.
  • the image processing device may be adapted to allow a user to control the cross hair in one view, changing the slice or moving the cursor in the image plane.
  • the current user-defined cross hair position is regarded as being the first region, which is mapped to the one or several other ipsilateral views, in particular, to the second image by determining the corresponding second region, wherein in the other ipsilateral views a cross hair is depicted in the closest image slice at the estimated in-plane position.
  • the provided tool can also be a peep-hole view tool. This tool helps the user focus on a small region of an image by displaying only the local neighborhood around the computed position, while the complement is suppressed, for example not displayed or greyed out.
  • the image processing device preferentially further comprises a finding providing unit 15 for providing findings in the first image and the second image, wherein the first region determination unit 4 is adapted to determine the region of a first finding in the first image as the first region, wherein the image processing device 1 further comprises a grouping unit 16 for grouping the first finding and a second finding in the second image into a group of findings, if the distance between the position of the second finding in the second image and the position of the second region is smaller than a predefined threshold.
  • the image processing device 1 further comprises a group classification unit 17 for classifying a group of findings based on features of the findings of the group and on predefined group classification rules.
  • the image processing device can therefore be adapted to fully automatically link findings in two or more views.
  • the spatial location of a finding, for example of a tumor, may be described with a graphical annotation like a point, a two-dimensional region or a three-dimensional region.
  • the annotation may be fully automatically computed by the finding providing unit 15, for example by a CAD algorithm, or manually defined by an input from the user.
  • the image processing device can be adapted to link two or more findings by computing their spatial distance, for example, by mapping a point annotation from one view to another view and by computing the distance between the mapped point and the annotation in the second view. If the distance is less than a predefined threshold, it is likely that both annotations mark the same finding. For a two-dimensional annotation or a three-dimensional annotation, the distance to a representative point like the center of gravity or one of the points of the contour or volume of the annotation may be computed.
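As a non-limiting illustration, the distance criterion described above can be sketched as follows; the function name and the threshold value are hypothetical, since the text only requires some predefined threshold:

```python
import math

def link_findings(mapped_point, annotation_point, threshold_mm=15.0):
    """Return True if a point annotation mapped from one view and an
    annotation in the other view are close enough that both likely mark
    the same finding. The threshold value here is purely illustrative."""
    return math.dist(mapped_point, annotation_point) < threshold_mm
```

For two- or three-dimensional annotations, the same test would be applied to a representative point such as the center of gravity.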
  • Finding candidates may be fully automatically generated in two or more views, i.e. in two or more images, of the same breast, for example by a CAD algorithm.
  • the image processing device can be used to fully automatically link the finding candidates from all views, in order to combine the corresponding image-based features for a joint assessment, for example by a statistical classification algorithm.
  • Suitable features determined from each view are, for example, the shape of the finding, the characteristics of the finding margin or other features describing the local tissue morphology, which potentially give evidence of, e.g., malignant disorders, but may be differently distinctive in each view.
  • This classification of a group of findings based on features of the findings, i.e. a joint feature assessment, can improve the classification performance of CAD algorithms, and also supports human readers in the clinical decision making process.
  • the image processing device can determine whether one or several other annotations drawn in one or more other ipsilateral views are close and likely to refer to the same tissue structure. If this is the case, the image processing device may ask the user whether this correspondence shall be registered for later reporting. This or a similar workflow is likely to improve the reporting process in terms of speed and safety.
  • the target image, i.e. the second image, is the image to which the first region, which may be a three-dimensional coordinate, is mapped.
  • a three-dimensional coordinate of a suspicious finding detected in an MLO-view of a tomosynthesis study may define a first region, wherein a second region is determined in an MLO-view of a second tomosynthesis study, which has been acquired some time later.
  • positions in CC-views and/or ML-views, which were acquired at different times, can be mapped by the image processing device.
  • the views from different imaging studies of the same breast, which may have been acquired from different perspectives and with different compressions, can be processed by the image processing device such that corresponding regions in these views are determined.
  • a view of one imaging study can serve as the target view, i.e. the second image, to which a coordinate is mapped, while a view of another study serves as the source volume, i.e. as the first image. It is therefore possible to retrieve a finding in a current study that has already been detected in a prior study. In a similar fashion, this may be used for monitoring therapy response.
  • the image processing device further comprises a sub-image determination unit 13 for determining sub-images of the first image, wherein the first region determination unit 4 is adapted to allow a user to select a sub-image of the first image, wherein the selected sub-image defines the first region in the first image, wherein the sub-image determination unit 13 is adapted to determine a sub-image of the second image covering the determined second region and wherein the display 7 is adapted to show the selected sub-image in the first image and the determined sub-image in the second image.
  • the display can be adapted to show an image as a sequence of magnified sub-images, for example, with a magnification factor chosen such that one image pixel corresponds to one display pixel.
  • the partitioning of the respective image into sub-images may be calculated by the sub-image determination unit 13 such that the user can conveniently step through a list of parameters, which, for example, describe the center point and/or two corner points of the respective sub-image.
  • the different ipsilateral views, i.e. the first image and the second image, usually depict complementary information. It is therefore desirable that from all ipsilateral views the sub-regions corresponding to the same breast region are depicted at the same time.
  • the image processing device can be adapted to compute the parameters, in particular, the locations parameterised by the center points and/or two corner points of the respective sub-image, in the one or more other ipsilateral views, in particular, in the second image, by mapping the initial parameter list, i.e. by mapping corresponding first regions in the first image, to the one or several other ipsilateral views, wherein this mapping is performed by determining, for example, second regions in the second image, which correspond to the first regions in the first image.
  • a list M of sub-image parameters for an MLO-view of a tomosynthesis study can be determined by the sub-image determination unit 13.
  • the elements of the parameter list which define locations, i.e. first regions, in the first image can be mapped to, for example, an ipsilateral CC-view being a second image, which results in a parameter list C of the same length, wherein the parameter list C defines second regions in the second image, which correspond to the first regions defined by the parameter list M.
  • the user can start browsing through the parameter list by selecting the first element of each list.
  • the selected parameters define sub-images in the corresponding views, which depict the same anatomical sub-region of the breast, but each with a different perspective.
  • the first region determination unit 4 can be adapted to allow a user to consecutively select all sub-images of the first image, wherein the sub-image determination unit 13 is adapted to determine for each selected sub-image of the first image a corresponding sub-image of the second image, which is shown on the display 7, wherein the sub-image determination unit 13 is further adapted to determine unshown sub-images of the second image, which have not been shown on the display 7 while all sub-images of the first image have been consecutively selected, and wherein the display 7 is adapted to show the unshown sub-images of the second image.
  • sub-regions in the breast are identified, which are not covered by at least one sub-image defined by the mapped parameters.
  • one or several sub-images can be defined, which together cover the missed sub-region.
  • the corresponding parameters can be mapped as described above to all other ipsilateral views.
  • the resulting parameters can be appended to the corresponding parameter lists. If the user now browses through the extended parameter list, it is ensured that all sub-regions in all views are displayed at least once in a single pass.
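The handling of the parameter lists described above can be sketched as follows; `map_region` is a placeholder for the compression-model based mapping of a first region to a second region, and all names are illustrative:

```python
def map_parameter_list(params_m, map_region):
    """Map each sub-image parameter (e.g. a center point defining a first
    region in the MLO view) to the ipsilateral CC view, yielding a parameter
    list of the same length."""
    return [map_region(p) for p in params_m]

def extend_for_full_coverage(params_c, missed_regions, map_region):
    """Append parameters covering sub-regions not hit by any mapped
    sub-image, so one pass through the extended list shows every
    sub-region at least once."""
    return params_c + [map_region(p) for p in missed_regions]
```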
  • 2D x-ray mammography currently is the gold standard for the detection of breast cancer in its early stages.
  • a limitation of this modality is that in a 2D projection of the breast, superimpositions of normal tissue may look suspicious and lead to false positives, while true lesions can get obscured by overlying breast tissue.
  • two views of each breast are usually acquired per exam; a craniocaudal (CC) and mediolateral oblique (MLO) view, where the MLO view is acquired at an angle between 30 and 60 degrees from the CC view.
  • CC craniocaudal
  • MLO mediolateral oblique
  • DBT Digital breast tomosynthesis
  • the image processing device described above with reference to Fig. 1 provides a fast method to estimate corresponding locations in ipsilateral tomosynthesis views, by matching and applying an analytical mechanical compression model, for example for hemispheres, to the tomosynthesis data.
  • Such a method can be very useful for several tasks: first, as a starting point for a feature-based local search method to link suspicious regions in a multiview CAD system; second, as an initialization phase for a more precise but time-consuming registration method.
  • the above described image processing device makes it possible to quickly estimate corresponding locations in ipsilateral tomosynthesis views by applying a spatial transformation derived from the analytical solution of compressing, for example, a hemisphere.
  • a compressed breast model is matched to the tomosynthesis view containing a point of interest. Then the location of the corresponding point in the ipsilateral view is estimated by assuming that this model was decompressed, rotated and compressed again (see Fig. 2).
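The decompress-rotate-compress chain described above can be sketched as follows; the three deformation functions are left as placeholders for the model's actual equations, and the function name is illustrative:

```python
def estimate_corresponding_location(point, decompress, rotate, compress):
    """Chain the three model steps: undo the compression of the source view,
    rotate the uncompressed model to the other acquisition position, and
    re-apply compression in the direction of the target view."""
    return compress(rotate(decompress(point)))
```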
  • Exact modelling of the process of breast deformation during compression is highly complex and an active field of research (A. Samani, J. Bishop, M. J. Yaffe, and D. B. Plewes, 2001. Biomechanical 3D Finite Element Modeling of the Human Breast Using MRI Data).
  • an analytic deformation model for compression is employed.
  • the model defines the deformation functions of all locations in such a sphere when it is compressed and decompressed. With these deformation functions and a simple rotation, corresponding locations in ipsilateral views can be computed very quickly.
  • Fig. 2 shows a schematic overview of the method performed by the image processing device (frontal views of the left breast 20 of a patient) when going from a location in a CC tomosynthesis volume 25 to the estimated location in the ipsilateral MLO volume 26.
  • the original location (solid dot 23) is decompressed, rotated to the other view and compressed again.
  • the depth information is known in DBT.
  • reference number 21 denotes the compression model
  • reference number 22 denotes the nipple position
  • reference number 23 denotes in the left part of Fig. 2 the first region and in the right part of Fig. 2 the corresponding second region
  • reference number 27 denotes a detector, i.e. the first plate 27 is a detector. However, while generating the tomosynthesis volumes, the detector detects projection data which are used for reconstructing the tomosynthesis volumes.
  • a breast is modeled by a sphere or hemisphere consisting of a homogeneous, isotropic rubbery (Neo-Hookean) material that is volume preserving when compressed.
  • the force that is applied to the breast during compression is assumed to be uniaxial, applied to the poles of the sphere.
  • When compressed, the sphere will become flatter and expand outwards (see Fig. 3).
  • the Neo-Hookean model is a hyperelastic material model which can be used to describe the non-linear relationship between the stress and strain (i.e. change in length of the material) for materials undergoing large deformations (typically accurate for strains up to 20%).
  • the stress at the equator can be calculated from the original area of the disc as:
  • the stress is different for each disc at a certain height in the sphere because the area of each disc is different.
  • the local stress at the disc at height z* (or φ) can be calculated by combining eqs. 1, 6, 7 and 5, respectively:
  • An initial point with Cartesian coordinates (x*, y*, z*) can be converted to a coordinate system where its location is defined by (R*, θ, φ), where R* is the radial distance from the origin and θ the azimuthal angle in the disc at normalized height φ:
  • When compressed, the height of the disc is reduced by the strain ratio λφ, while the radius of the disc is stretched.
  • the radius of the disc at height φ before compression is given by eq. 8 and, because the disc is volume preserving, the radius of the disc after compression can be computed by:
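The elided relation follows directly from volume preservation of a thin disc; as a sketch (the symbol names are ours):

```python
import math

def compressed_disc_radius(r_star, strain):
    """Volume preservation for a thin disc: if its height is scaled by the
    strain ratio strain (0 < strain < 1), its area must grow by 1/strain,
    so its radius is stretched by a factor 1/sqrt(strain)."""
    return r_star / math.sqrt(strain)
```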
  • the (x*, y*, z*) coordinates of a point before compression can be computed from the (x, y, z) coordinates of that point in the compressed state.
  • φ: the one that results in the smallest error in eq. 13
  • (R*, θ, φ): the point is converted to (R*, θ, φ) coordinates with:
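Since the conversion formulas themselves are elided in the extracted text, the following is only one plausible reading of the (R*, θ, φ) parametrization, under the assumption that φ is the height of the disc normalized by the sphere radius:

```python
import math

def to_model_coordinates(x, y, z, radius):
    """Convert Cartesian coordinates to the (R*, theta, phi) parametrization
    used by the sphere model: R* as the radial distance from the origin,
    theta as the azimuthal angle in the disc, and phi as the normalized
    disc height z / radius. These formulas are an assumption."""
    r_star = math.sqrt(x * x + y * y + z * z)
    theta = math.atan2(y, x)
    phi = z / radius
    return r_star, theta, phi
```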
  • the Cartesian coordinate system of the sphere is defined in the data using suitable anatomical landmarks and an estimation of the strain ratio parameter λ0 is made.
  • the coordinate system is defined as follows (see Fig. 4).
  • the z-axis points in the direction of compression, i.e. from the compression paddle to the detector (perpendicular to the slices).
  • the y-axis is defined in accordance with the results found in (Yasuyo Kita, Ralph Highnam, and Michael Brady, 2001. Correspondence between Different View Breast X Rays Using Curved Epipolar Lines. Computer Vision and Image Understanding, 83(1), 38-56) and (Bin Zheng, Jun Tan, Marie A Ganott, Denise M Chough, and David Gur, Nov 2009. Matching breast masses depicted on different views: a comparison of three methods.
  • the pectoral muscle is frequently not visible and the y-axis is therefore chosen to run parallel to the vertical image edge at a position such that the distance to the FBC is the same as in the MLO view.
  • the definition of the FBC thus defines the position of the central axis (x).
  • This central axis is assumed to be the axis around which the detector and x-ray tube are rotated when a patient is repositioned from one viewing position to the other. Therefore, when a point is transformed in the uncompressed state from one view to the other, we also rotate around this central x-axis.
  • the rotation angle is the difference (in the appropriate direction) between the angles that were used when the CC and MLO views were acquired.
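The rotation around the central x-axis can be sketched as a standard rotation applied to the uncompressed coordinates; the function name is illustrative, and the angle is the difference between the acquisition angles of the two views:

```python
import math

def rotate_about_central_axis(point, angle_deg):
    """Rotate a point of the uncompressed model around the central x-axis
    by angle_deg degrees; x stays fixed, y and z rotate."""
    x, y, z = point
    a = math.radians(angle_deg)
    return (x,
            y * math.cos(a) - z * math.sin(a),
            y * math.sin(a) + z * math.cos(a))
```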
  • the nipple location is automatically estimated.
  • the FBC is defined by the point in the central slice that lies on the skin fold and is furthest away from the pectoral muscle (y-axis).
  • the nipple location is manually annotated, i.e. manually annotated 3D nipple locations are used as the FBC.
  • the benefit of this approach is that the FBCs in both views are actually corresponding locations.
  • the FBC is determined in such a way that the amount of breast tissue is more or less symmetrically distributed around the central axis. This is done by letting the central axis run through the center of mass (COM) of the breast tissue volume. The FBC is then defined by the point on the skin contour that lies on the line which is perpendicular to the y-z-plane and runs through this COM. In order to obtain a robust estimate of the COM of the actual breast tissue, the pectoral muscle, axilla, skin folds and nipple are excluded from the computation, using automated segmentation.
  • COM center of mass
  • a method is used where the COM is computed from that part of the breast that has a distance to the pectoral muscle of more than 55% and less than 95% of the maximum distance to the pectoral muscle boundary. The parameters are chosen in such a way that the remaining part shows a good representation of the distribution of mass of the breast.
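The robust COM estimate described above can be sketched as follows; uniform voxel weights and the function name are assumptions:

```python
def robust_center_of_mass(voxels, dist_to_pectoral):
    """Center of mass of the breast voxels whose distance to the pectoral
    muscle boundary lies between 55% and 95% of the maximum distance, as
    described in the text. Voxels are (x, y, z) tuples with uniform weight."""
    d_max = max(dist_to_pectoral)
    band = [p for p, d in zip(voxels, dist_to_pectoral)
            if 0.55 * d_max < d < 0.95 * d_max]
    n = len(band)
    return tuple(sum(p[i] for p in band) / n for i in range(3))
```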
  • Fig. 4 shows exemplarily employed coordinate axes x, y and an illustration of defining the FBC in slices of an MLO and a CC tomosynthesis view of a right breast.
  • the visible breast tissue 30 was extracted from the background by automatic segmentation.
  • the rectangle 31 in the upper right corner of the MLO view shows the automatically segmented pectoral muscle that defines the y-axis.
  • the dot 32 on the skin surface shows the FBC as defined by a) the automatically estimated nipple location, b) the manually annotated nipple location, c) the center of mass based method.
  • the FBC defines the x-axis.
  • the slices that are shown in a) and c) are the central slices of the MLO and CC volumes.
  • the slice shown in the CC view of b) is located several slices more caudally than the central slice, as defined by the manually annotated nipple location.
  • the hatched region 33 in c) shows the region that is used to compute the center of mass of the breast.
  • vol_MLO and vol_CC are the volumes of the breast tissue without the pectoral muscle of the MLO and CC view, respectively.
  • the compressed radius R0 is computed in a similar fashion from the area of a simulated 2D projection of the compressed breast (without the pectoral muscle), based on area_MLO + area_CC, where area_MLO and area_CC are the projected areas of the two views, computed from a simulated 2D projection in the direction of the compression force.
  • the parameter λ0 is therefore the same for both views of the same breast.
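Since the radius formulas themselves are elided in the extracted text, the following sketch assumes that the uncompressed radius follows from a hemisphere volume and the compressed radius from a half-disc projection; both readings and all names are assumptions:

```python
import math

def uncompressed_radius(vol_mlo, vol_cc):
    """Sphere radius R before compression, assuming the mean breast volume
    of the two views fills a hemisphere: V = (2/3) * pi * R**3."""
    v = 0.5 * (vol_mlo + vol_cc)
    return (3.0 * v / (2.0 * math.pi)) ** (1.0 / 3.0)

def compressed_radius(area_mlo, area_cc):
    """Compressed radius R0, assuming the mean simulated projection area
    is a half-disc: A = pi * R0**2 / 2."""
    a = 0.5 * (area_mlo + area_cc)
    return math.sqrt(2.0 * a / math.pi)
```

Because both estimates pool the two views, any strain parameter derived from them, such as λ0, is indeed the same for both views of the same breast.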
  • Fig. 5 illustrates exemplarily images shown by the display 7.
  • the FBC is defined by the COM based method.
  • Fig. 5a shows an exemplary slice of an MLO 40 and CC 41 tomosynthesis view of the right breast of a patient.
  • the green arrow 42 in the MLO view 40 points to a large calcification that is visible in the current slice of the MLO view 40, but not in the slice of the CC view 41.
  • the image processing device estimates the location in the CC view 41 in real-time and synchronizes this view to the estimated slice.
  • a small search area 43, 44 is shown at the estimated x and y location in that CC slice, to indicate where the corresponding region most likely can be found (see Fig. 5b).
  • the search area consists of two ellipses 43, 44 with radii that are based on percentiles of the difference between the actual and estimated x and y coordinates of all the points in this study (using the COM based method).
  • the radii of the inner and outer ellipse are set to the 50th and 75th percentiles respectively.
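The percentile-based ellipse radii can be sketched as follows; the exact percentile method is not specified in the text, so nearest-rank is an assumption:

```python
def nearest_rank_percentile(values, p):
    """Nearest-rank percentile of a list of values (method assumed)."""
    s = sorted(values)
    k = min(len(s) - 1, max(0, round(p / 100.0 * (len(s) - 1))))
    return s[k]

def search_ellipse_radii(abs_errors):
    """Inner and outer search-ellipse radii along one axis, set to the 50th
    and 75th percentiles of the absolute estimation errors for all points
    in the study."""
    return (nearest_rank_percentile(abs_errors, 50),
            nearest_rank_percentile(abs_errors, 75))
```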
  • the line 45 indicates the x-axis that runs from the origin to the FBC 46 that was computed by means of the center of mass based method.
  • in step 101, a three-dimensional, first image of the object and a three-dimensional, second image of the same object are provided, wherein in the first image the object is shown compressed in a first compression direction and in the second image the object is shown compressed in a second compression direction.
  • in step 102, an analytical compression model of the object is provided, wherein the compression model is adapted to model the effect of pressure applied to the object.
  • in step 103, a first region is determined in the first image, and in step 104, the compression model is matched to the first image, thereby aligning the compression model compressed in the first compression direction with the object shown in the first image and determining the position of the first region in the compressed compression model.
  • in step 105, a decompression of the compression model is simulated and, in step 106, a compression of the compression model is simulated in the second compression direction such that the compressed compression model is registered with the object shown in the second image.
  • in step 107, the region in the second image, which corresponds to the first region in the compression model compressed in the second compression direction, is determined as the second region.
  • in step 108, the second region is shown within the second image on a display.
  • the compression model strongly simplifies the difficult process of breast deformation during compression.
  • since the breast is modeled by a hemisphere of homogeneous elastically deformable material, several intrinsic properties of breast compression are ignored.
  • breast tissue obviously is not homogeneous, but consists of several types of tissue, each with its specific compression properties (e.g. stiffness of fibrous tissue).
  • Boundary conditions are also simplified by employing frictionless sliding at the plates and at the planar posterior side.
  • the assumption that a breast can be modeled by a hemisphere is also a strong one. Fluids in the breast can be squeezed out during compression, and a breast is therefore also not completely volume preserving.
  • although the strongly simplified compression model already provides a sufficiently accurate determination of a second region in the second image, which corresponds to a first region in a first image, the accuracy may be further improved by incorporating some of the above mentioned issues into a more accurate compression model and/or by using different compression model parameters per view.
  • a single unit or device may fulfill the functions of several items recited in the claims.
  • the mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
  • Calculations like the matching of the compression model to the first image, the simulations of decompression and compression of the compression model and the determination of the second region in the second image performed by one or several units or devices can be performed by any other number of units or devices.
  • steps 102 to 107 can be performed by a single unit or by any other number of different units.
  • the calculations and/or the control of the image processing device in accordance with the image processing method can be implemented as program code means of a computer program and/or as dedicated hardware.
  • a computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium, supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.


Abstract

The invention relates to an image processing device for finding corresponding regions (23) in two three-dimensional images (25, 26) of an object (20). An analytical compression model (21 ) is matched to a first image showing the object compressed in a first compression direction, thereby determining the position of a first region of the object, which is provided in the first image, in the compressed model. A second region, which corresponds to the first region in the first image, in the second image is determined by simulating a decompression of the compression model, simulating a compression of the compression model in a second compression direction, in which the object shown in the second image is compressed, and determining the region in the second image, which corresponds to the first region in the compression model compressed in the second compression direction, as the second region.

Description

Image processing device
FIELD OF THE INVENTION
The invention relates to an image processing device, an image processing method and an image processing computer program for finding corresponding regions in two three-dimensional images.
BACKGROUND OF THE INVENTION
Mammography is the current first line modality for breast cancer detection. It is common practice to acquire two 2D images of each breast (commonly referred to as mediolateral-oblique (MLO) view and craniocaudal (CC) view) from two different viewpoints and with two different compression settings of the breast. The direction of compression force is always orthogonal to the detector plate. Each mammogram is a 2D projection image of the entire breast which therefore provides no depth information orthogonal to the image plane.
Interpretation of mammograms is a challenging task because tumors and other breast tissue, in particular radio-opaque glandular tissue, may overlap in the 2D projection images such that a suspicious finding may fully or partially be obscured and difficult to detect. The problem is slightly relieved if one combines information from the MLO and CC view, because structures which overlap in one view sometimes do not overlap in the other view due to the different perspective and compression. This, however, requires that the radiologist is able to reliably identify the corresponding positions in both views, which can be difficult due to the different perspective and missing depth information.
Tomosynthesis is a new modality for 3D imaging which combines aspects of mammography and computed tomography and is expected to overcome some limitations of conventional mammography. As in mammography, the breast is commonly also slightly compressed and imaged from two different perspectives. The imaging perspective and the direction of the compression force coincide. Instead of acquiring a single projection image for each perspective, the x-ray tube is moved in a small arc and a small number of images are taken. From these images, a 3D image volume can be reconstructed for each perspective which has a high spatial resolution in two dimensions, but a rather low spatial resolution in the third, orthogonal dimension due to the limited length of the arc. In summary, each tomosynthesis study commonly consists of two 3D image volumes (ipsilateral views) for each side of the breast, which cannot be reformatted into arbitrary oblique planes without significant degradation of the image quality. In the following, the orientation with the best image quality (highest in-plane resolution) is referred to as the major image plane.
Due to the 3D reconstruction, depth information is preserved and, for example, suspicious findings are not obscured by overlapping tissue in reconstructed tomosynthesis slices. However, the user now has to carefully analyze each slice of the 3D image volume. Moreover, it is still important to also combine both views of the same breast as they depict complementary information. If the user detects a suspicious structure in one view, it is important to also analyze the corresponding region in the other view. For this purpose, the reader has to identify the corresponding location in the major image plane at the right depth in the other view. This is time- consuming, because he has to scroll through the image stack and analyze each slice carefully to find the right position in the 3D volume, and also a difficult task due to the subtlety of potential tumors, the larger number of slices of a tomosynthesis volume and other imaging parameters like the angle between the two ipsilateral views and the compression of the breast, which both can vary for each patient.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide an image processing device, an image processing method and an image processing computer program, which assist a user in finding corresponding regions in two three-dimensional images. In a first aspect of the present invention an image processing device for finding corresponding regions in two three-dimensional images of an object is presented, wherein the image processing device comprises:
an image providing unit for providing a three-dimensional, first image of the object and a three-dimensional, second image of the same object, wherein in the first image the object is shown compressed in a first compression direction and in the second image the object is shown compressed in a second compression direction,
a model providing unit for providing an analytical compression model of the object, wherein the compression model is adapted to model the effect of pressure applied to the object,
a first region determination unit for determining a first region in the first image, a matching unit for matching the compression model to the first image, thereby registering the compression model compressed in the first compression direction with the object shown in the first image and determining the position of the first region in the compressed compression model,
a second region determination unit for determining a second region, which corresponds to the first region in the first image, in the second image, the second region determination unit being adapted to:
simulate a decompression of the compression model,
simulate a compression of the compression model in the second compression direction such that the compressed compression model is registered with the object shown in the second image,
determine the region in the second image, which corresponds to the first region in the compression model compressed in the second compression direction, as the second region.
By using the compression model, which at least approximately describes the compression in the two compression directions and which can be matched to the respective first and second images, and by simulating the steps of compressing in the first compression direction, decompressing and compressing in the second compression direction, two corresponding regions in the two images showing the same object can be determined very accurately, although the two images show the object compressed in different compression directions. The image processing device can therefore be used for assisting a user in finding corresponding regions in these two images.
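The chain of steps — match in the first compression, decompress, re-orient, re-compress — can be illustrated with a deliberately simplified sketch. This is not the patent's analytical model: the deformation is approximated here as homogeneous and volume-preserving, and all function names and parameters are illustrative assumptions.

```python
import math

def compress(p, stretch):
    """Homogeneous, volume-preserving uniaxial compression along z.

    'stretch' is the axial stretch ratio (< 1 for compression); the lateral
    directions expand by 1/sqrt(stretch) so that volume is preserved. This
    is a crude stand-in for the analytical compression model.
    """
    x, y, z = p
    s = 1.0 / math.sqrt(stretch)
    return (x * s, y * s, z * stretch)

def decompress(p, stretch):
    # Inverse of compress(): undo the homogeneous uniaxial deformation.
    x, y, z = p
    return (x * math.sqrt(stretch), y * math.sqrt(stretch), z / stretch)

def rotate_about_x(p, angle):
    # Re-orient the decompressed model so that the axis through the poles
    # is aligned with the second compression direction.
    x, y, z = p
    return (x,
            y * math.cos(angle) - z * math.sin(angle),
            y * math.sin(angle) + z * math.cos(angle))

def map_to_second_view(p_first, stretch1, stretch2, view_angle):
    """Map a point given in the first (compressed) view to the second view."""
    p = decompress(p_first, stretch1)
    p = rotate_about_x(p, view_angle)
    return compress(p, stretch2)
```

With equal stretches and a zero view angle the mapping reduces to the identity, which is a convenient sanity check.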
The image providing unit can be adapted to provide more than two images; for example, the image providing unit can further provide a third image and a fourth image showing the same object compressed in other respective compression directions. If a first region has been determined in the first image, the image processing device can be adapted to determine third and fourth regions, which correspond to the first region, in the third and fourth images. The image providing unit can be a storage unit, in which the images are already stored, and/or a receiving unit for receiving images via a wireless or wired data connection and for providing the received images for processing the same. The image providing unit can also be an image generation apparatus for generating the images. For example, the image providing unit can be a tomosynthesis apparatus for generating tomosynthesis images, in particular, of a breast. However, the image providing unit can also be another imaging unit for generating three-dimensional images like a magnetic resonance imaging unit, a computed tomography imaging unit, a nuclear imaging unit, for example, a positron emission tomography unit or a single photon emission tomography unit, an ultrasound imaging unit, et cetera.
The terms "matching" and "registering" refer preferentially to a process in which the compression model is adapted to the compressed object as shown in the respective image. This process preferentially includes the computation of certain model parameters from the image data, based on characteristics of the compressed object like the volume of the compressed object, and the alignment of the compression model and its coordinate system to suitable landmarks in the image such as, if the object is a breast, the nipple position, the pectoral line and the nipple-pectoral line. The finer internal structure of the object is preferentially not considered during the matching and registration procedures.
In particular, if the compression model is a sphere, the terms "matching" and "registering" refer preferentially to a process which includes aligning a coordinate system of the compression model and, thus, aligning the compression model with the compressed object shown in the respective image based on landmarks detectable in the respective image and estimating a strain ratio parameter, which is indicative of a ratio of the radius of the sphere at the equator with and without compression, from the volume of the compressed object in the respective image and a simulated two-dimensional projection of the compressed object. If the object is a breast, the landmarks are preferentially, as already mentioned above, the nipple position, the pectoral line and the nipple-pectoral line.
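Under the idealization that the object behaves like a volume-preserving sphere deformed homogeneously between the plates, a strain ratio can be estimated from the preserved volume and the compressed thickness. The sketch below is an illustrative assumption, not the estimation procedure of the patent (which also involves a simulated two-dimensional projection of the compressed object):

```python
import math

def sphere_radius_from_volume(volume):
    # Radius of the uncompressed sphere with the same (preserved) volume.
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

def equatorial_strain_ratio(volume, compressed_thickness):
    """Ratio of the equator radius with and without compression, assuming
    a homogeneous, volume-preserving uniaxial deformation of a sphere:
    axial stretch = thickness / diameter, lateral stretch = 1/sqrt(axial).
    """
    radius = sphere_radius_from_volume(volume)
    axial_stretch = compressed_thickness / (2.0 * radius)
    return 1.0 / math.sqrt(axial_stretch)
```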
It is preferred that the compression model represents in its decompressed state a sphere or an ellipsoid or a part of a sphere or of an ellipsoid. In a preferred embodiment, the compression model represents in its decompressed state a hemisphere. It is further preferred that the sphere comprises poles defined by intersections of a line, which is aligned with the first compression direction and which traverses the center of the sphere, with the surface of the sphere, while the compression model is compressed in the first compression direction, and wherein the second region determination unit is adapted to:
rotate, after decompression, the decompressed compression model to an orientation in which an axis through the poles of the sphere is aligned with the second compression direction, and
simulate the compression of the compression model in the second compression direction such that the compressed compression model is registered with the object shown in the second image by simulating a uniaxial force applied to the poles of the sphere in the second compression direction.
If the compression model is compressed, it is preferentially assumed that the compression model is located between two parallel plates, which are forced towards each other in the respective compression direction, for compressing the compression model.
In a preferred embodiment, the first image is acquired in a first acquisition direction and the second image is acquired in a second acquisition direction, wherein the first compression direction and the first acquisition direction are substantially the same and the second compression direction and the second acquisition direction are substantially the same.
The compression model preferentially models a volume-preserving, homogeneous, rubber-like material. For modeling the rubber-like material, a Neo-Hookean material can be assumed, i.e. a hyperelastic material model which can be used to describe the non-linear relationship between stress and strain for materials undergoing large deformations. The compression model preferentially defines deformation functions of all locations in a hemisphere and, thus, for symmetry reasons, in a complete sphere, when it is compressed and decompressed. Preferentially, with these deformation functions and a simple rotation, corresponding regions in ipsilateral views, i.e. the first and second images, can be computed very quickly.
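For an incompressible Neo-Hookean material under uniaxial loading, the textbook nominal (first Piola-Kirchhoff) stress is sigma = mu * (lambda - lambda**-2), where lambda is the axial stretch and mu the shear modulus. A one-function sketch (the function name is an illustrative assumption):

```python
def neo_hookean_nominal_stress(stretch, shear_modulus):
    """Nominal uniaxial stress of an incompressible Neo-Hookean material.

    Returns a negative value for stretch < 1, i.e. for compression, which
    is the regime relevant to the plate compression modeled here.
    """
    return shear_modulus * (stretch - stretch ** -2)
```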
In a preferred embodiment, the image providing unit is adapted to provide tomosynthesis images, which correspond to different acquisition directions, as the first and second images. In particular, the tomosynthesis images are two different views of a single object. It is further preferred that the image providing unit is adapted to provide first and second images of a breast as the first and second images. In an embodiment, one of the first image and the second image corresponds to one of a craniocaudal view (CC), a mediolateral-oblique view (MLO) and a mediolateral view (ML) and the other of the first image and the second image corresponds to another one of the craniocaudal view (CC), the mediolateral-oblique view (MLO) and the mediolateral view (ML). The first image and the second image can also be two different three-dimensional images, i.e. views, which have been acquired at different times, in particular, at different dates, wherein the acquisition geometry, in particular, the acquisition direction, can be the same or can be different for the first image and the second image.
In a preferred embodiment,
the matching unit is adapted to segment the compressed breast, detect the contour of the pectoral muscle of the compressed breast, determine the breast center of mass or the nipple position, and estimate the volume of the compressed breast in the first image and adapt the compressed compression model such that it corresponds to the segmented compressed breast, the detected contour of the pectoral muscle, the determined breast center of mass or the determined nipple position, and the estimated volume of the compressed breast in the first image, and
the second region determination unit is adapted to segment the compressed breast, detect the contour of the pectoral muscle of the compressed breast, determine the breast center of mass or the nipple position, and estimate the volume of the compressed breast in the second image and simulate the compression of the compression model such that it corresponds to the segmented compressed breast, the detected contour of the pectoral muscle, the determined breast center of mass or the determined nipple position, and the estimated volume of the compressed breast in the second image.
Preferentially, the segmentation of the compressed breast just includes determining the skin fold of the breast.
Preferentially, the matching unit and/or the second region determination unit are adapted to perform the segmentation of the compressed breast manually, semi-automatically or full-automatically using suitable tools and algorithms.
It is further preferred that the first region determination unit comprises a user interface allowing a user to determine the first region in the first image. For example, the first region determination unit can comprise a graphical user interface which allows the user together with a mouse, keyboard, touch screen or another input device to select the first region in the first image.
In an embodiment, the first region determination unit comprises a marker providing unit, e.g. a computer-aided detection (CAD) unit or a unit providing structured reports generated by another reader, for determining a marker defining the first region in the first image. In particular, the marker providing unit can determine several markers shown in the first image and the first region determination unit can be adapted to allow a user to select one of these markers as the first region.
It is preferred that the image processing device further comprises a display for displaying the first region in the first image and the second region in the second image. In a preferred embodiment, the second image is comprised of multiple slice images, wherein the image processing device further comprises a slice determination unit for determining a slice of the second image which shows the second region, wherein the display is adapted to show the determined slice. The display preferentially shows a marker at the second region in the determined slice.
It is further preferred that the image processing device further comprises an uncertainty determination unit for determining an uncertainty of determining the second region in the second image, wherein the display is adapted to indicate the uncertainty in the second image. For example, a margin can be displayed around the second region, wherein the size of the margin depends on the determined uncertainty.
It is also preferred that the image processing device further comprises a tool providing unit for providing a tool for being used in the first image and in the second image, wherein the display is adapted to show the tool at the first region in the first image and at the second region in the second image. This allows using the tool in both images at corresponding positions. The tool is, for example, a local digital magnifier, a cross hair, a peep-hole view, et cetera.
In a preferred embodiment, the image processing device further comprises a sub-images determination unit for determining sub-images of the first image, wherein the first region determination unit is adapted to allow a user to select a sub-image of the first image, wherein the selected sub-image defines the first region in the first image, wherein the sub-images determination unit is adapted to determine a sub-image of the second image covering the determined second region and wherein the display is adapted to show the selected sub-image in the first image and the determined sub-image in the second image. It is preferred that the first region determination unit is adapted for allowing a user to consecutively select all sub-images of the first image, wherein the sub-images determination unit is adapted to determine for each selected sub-image of the first image a corresponding sub-image of the second image, which is shown on the display, wherein the sub-images determination unit is further adapted to determine un-shown sub-images of the second image, which have not been shown on the display while all sub-images of the first image have consecutively been selected, and wherein the display is adapted to show the un-shown sub-images of the second image.
In a further embodiment, the image processing device comprises a finding providing unit for providing findings in the first image and the second image, wherein the first region determination unit is adapted to determine the region of a first finding in the first image as the first region, wherein the image processing device further comprises a grouping unit for grouping the first finding and a second finding in the second image into a group of findings, if the distance between the position of the second finding in the second image and the position of the second region is smaller than a predefined threshold. The image processing device can comprise a group classification unit for classifying a group of findings based on features of the findings of the group and on predefined group classification rules.
In a further aspect of the present invention an image processing method for finding corresponding regions in two three-dimensional images of an object is presented, wherein the image processing method comprises:
providing a three-dimensional, first image of the object and a three-dimensional, second image of the same object, wherein in the first image the object is shown compressed in a first compression direction and in the second image the object is shown compressed in a second compression direction,
providing an analytical compression model of the object, wherein the compression model is adapted to model the effect of pressure applied to the object,
determining a first region in the first image,
matching the compression model to the first image, thereby registering the compression model compressed in the first compression direction with the object shown in the first image and determining the position of the first region in the compressed compression model,
simulating a decompression of the compression model,
simulating a compression of the compression model in the second compression direction such that the compressed compression model is registered with the object shown in the second image,
determining the region in the second image, which corresponds to the first region in the compression model compressed in the second compression direction, as the second region.
In a further aspect of the present invention an image processing computer program for finding corresponding regions in two three-dimensional images is presented, wherein the computer program comprises program code means for causing an image processing device as defined in claim 1 to carry out the steps of the image processing method as defined in claim 19, when the computer program is run on a computer controlling the image processing device.
It shall be understood that the image processing device of claim 1, the image processing method of claim 19, and the image processing computer program of claim 20 have similar and/or identical preferred embodiments, in particular, as defined in the dependent claims.
It shall be understood that a preferred embodiment of the invention can also be any combination of the dependent claims with the respective independent claim.
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 shows schematically and exemplarily an embodiment of an image processing device for finding corresponding regions in two three-dimensional images of an object,
Fig. 2 shows a schematic overview of a determination of a second region in a second image, which corresponds to a first region in a first image,
Fig. 3 shows schematically and exemplarily a decompressed compression model and a compressed compression model,
Fig. 4 shows exemplarily landmarks of a breast and a corresponding coordinate system,
Fig. 5 shows exemplarily and schematically two-dimensional slices of three-dimensional first and second images and corresponding first and second regions in these slices, and
Fig. 6 shows a flowchart exemplarily illustrating an embodiment of an image processing method for finding corresponding regions in two three-dimensional images of an object.
DETAILED DESCRIPTION OF EMBODIMENTS
Fig. 1 shows schematically and exemplarily an image processing device for finding corresponding regions in two three-dimensional images of an object. The image processing device 1 comprises an image providing unit 2 for providing a three-dimensional, first image of the object and a three-dimensional, second image of the same object, wherein in the first image the object is shown compressed in a first compression direction and in the second image the object is shown compressed in a second compression direction.
In this embodiment, the image providing unit 2 is a storage unit, in which the images are already stored. However, the image providing unit can also be a receiving unit for receiving images via a wireless or wired data connection and for providing the received images for processing the same. The image providing unit can also be an image generation apparatus for generating the images. For example, the image providing unit can be a tomosynthesis apparatus for generating tomosynthesis images, in particular, of a breast. The image providing unit can also be another imaging unit for generating three-dimensional images like a magnetic resonance imaging unit, a computed tomography imaging unit, a nuclear imaging unit, for example, a positron emission tomography unit or a single photon emission tomography unit, an ultrasound imaging unit, et cetera.
The image providing unit 2 is adapted to provide tomosynthesis images as the first and second images. In particular, the provided tomosynthesis images are different views of a single object, which is, in this embodiment, a breast. For example, one of the first image and the second image corresponds to one of a CC-view, a MLO-view and a ML-view and the other of the first image and the second image corresponds to another of the CC-view, the MLO-view and the ML-view.
The image processing device 1 further comprises a model providing unit 3 for providing an analytical compression model of the object, wherein the compression model is adapted to model the effect of pressure applied to the object. In this embodiment, the compression model models a breast, wherein the compression model represents in its decompressed state a sphere or a part of a sphere, preferentially a sphere or a hemisphere. In another embodiment, the compression model can represent an ellipsoid or a part of an ellipsoid.
The compression model models a volume-preserving, homogeneous, rubber-like material. For modeling the rubber-like material, a Neo-Hookean material is preferentially assumed, i.e. a hyperelastic material model, which can be used to describe the non-linear relationship between stress and strain for materials undergoing large deformations. The compression model preferentially defines deformation functions of all locations in a hemisphere and, thus, for symmetry reasons, in a complete sphere, when it is compressed and decompressed.
The image processing device also includes a first region determination unit 4 for determining a first region in the first image. In this embodiment, the first region determination unit 4 comprises a user interface 9 allowing the user to determine the first region in the first image. In particular, the first region determination unit 4 comprises a graphical user interface, which allows a user together with an input unit like a mouse, a keyboard, a touchscreen, etc. to select the first region in the first image, which is shown on a display 7. The user can, for example, select a suspicious region in the first image as the first region. In addition or alternatively, the first region determination unit 4 can comprise a marker providing unit 18, e.g. a CAD unit or a unit providing structured reports, for determining a mark defining the first region in the first image. In particular, the marker providing unit 18 can determine several marks shown in the first image and the first region determination unit 4 can be adapted to allow a user to select via the graphical user interface 9 one of these marks as the first region.
The image processing device 1 further comprises a matching unit 5 for matching the compression model to the first image, thereby registering the compression model compressed in the first compression direction with the object shown in the first image and determining the position of the first region in the compressed compression model. A second region determination unit 6 determines a second region, which corresponds to the first region in the first image, in the second image. The second region determination unit 6 is adapted to simulate a decompression of the compression model, simulate a compression of the compression model in the second compression direction, such that the compressed compression model is registered with the object shown in the second image, and to determine the region in the second image, which corresponds to the first region in the compression model compressed in the second compression direction, as the second region. The image providing unit 2 can be adapted to provide more than two images, for example, the image providing unit 2 can further provide a third image and a fourth image showing the same object compressed in other respective compression directions. If a first region has been determined in the first image, the image processing device can be adapted to determine third and fourth regions, which correspond to the first region, in the third and fourth images.
The compression model is preferentially a sphere comprising poles defined by intersections of a line, which is aligned with the first compression direction and which traverses the center of the sphere, with the surface of the sphere, while the compression model is compressed in the first compression direction. The second region determination unit 6 is adapted to rotate, after decompression, the decompressed compression model to an orientation, in which an axis through the poles of the sphere is aligned with the second compression direction, and to simulate the compression of the compression model in the second compression direction such that the compressed compression model is registered with the object shown in the second image by simulating a uniaxial force applied to the poles of the sphere in the second compression direction. In particular, it is assumed that the sphere is located between two parallel plates and that the two plates are forced towards each other for applying the uniaxial force. If the compression model represents an ellipsoid or a part of a sphere, in particular, a hemisphere, or a part of an ellipsoid, the second region determination unit is adapted to rotate the decompressed compression model and simulate the compression of the compressed model accordingly.
In this embodiment, the matching unit 5 is adapted to determine the skin fold of the compressed breast, detect the contour of the pectoral muscle of the compressed breast, determine the breast center of mass or the nipple position, and estimate the volume of the compressed breast in the first image and to adapt the compressed compression model such that it corresponds to the determined skin fold of the compressed breast, the detected contour of the pectoral muscle, the determined breast center of mass or the determined nipple position, and the estimated volume of the compressed breast in the first image. The second region determination unit 6 is preferentially adapted to determine the skin fold of the compressed breast, detect the contour of the pectoral muscle of the compressed breast, determine a breast center of mass or the nipple position, and estimate the volume of the compressed breast in the second image and to simulate the compression of the compression model such that it corresponds to the determined skin fold of the compressed breast, the detected contour of the pectoral muscle, the determined breast center of mass or the determined nipple position, and the estimated volume of the compressed breast in the second image. The matching unit 5 and the second region determination unit 6 can be adapted to perform the detection and segmentation of the landmarks in the compressed breast manually, semi-automatically or full-automatically using known tools and algorithms.
The display 7 is adapted to display the first region in the first image and the second region in the second image. In particular, the second image is comprised of multiple slice images, wherein the image processing device 1 comprises a slice determination unit 10 for determining a slice of the second image, which shows the second region, wherein the display 7 is adapted to show the determined slice. The display 7 preferentially shows a marker at the second region in the determined slice.
In this embodiment, the first and second images are ipsilateral views of the breast in one study. Each view is acquired from a different perspective with different compression. The image providing unit can provide more than one study. For example, two studies of the same breast, referred to as current and prior, which have been acquired at different times, can be provided. Each study may contain two or more views of the same breast. The number of views in each study may be different. The perspective and compression may also be different.
The image processing device can be adapted to determine an approximate start point, i.e. a three-dimensional coordinate, for searching for corresponding structures in the ipsilateral views.
When a radiologist finds a suspicious structure in one view, i.e., for example, in the first image, he usually wants to analyze the corresponding location in all other ipsilateral views, in particular, in the second image. The image processing device 1 can estimate a three-dimensional coordinate in the other ipsilateral view, i.e. in the second image, which corresponds to the position that was, for example, marked by a user by clicking with a cursor device or by a full-automatic detection algorithm in the first image. Based on the estimated three-dimensional coordinate, i.e. based on the determined second region, the closest image slice can be selected from the second image being a three-dimensional image volume, wherein the selected closest image slice may be shown on the display 7.
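Selecting the closest image slice from the estimated three-dimensional coordinate amounts to a rounding operation on the stack geometry. A minimal sketch; the parameter names describing the slice geometry are assumptions, not taken from this document:

```python
def closest_slice(z, first_slice_z, slice_spacing, num_slices):
    """Index of the slice whose plane lies closest to depth coordinate z,
    clamped to the valid index range of the image stack."""
    index = round((z - first_slice_z) / slice_spacing)
    return max(0, min(num_slices - 1, index))
```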
Since the image processing device 1 determines the second region in the second image by using a compression model, which approximates the breast, the resulting determined second region is only an approximation of the true corresponding position. The image processing device 1 therefore preferentially further comprises an uncertainty determination unit 11 for determining an uncertainty of determining the second region in the second image, wherein the display 7 is adapted to indicate the uncertainty in the second image.
For example, the estimated three-dimensional coordinate, i.e. the determined second region, may be depicted as a point, by using, for example, a cross hair in the closest image slice. However, the estimated three-dimensional coordinate may also be displayed as a three-dimensional volume like a sphere, with an extent that reflects the uncertainty of the determination of the second region in the second image, i.e. that reflects certain statistics about the method's estimation error. The statistics may be estimated from a data base of images with ground truth information. The image slice closest to the estimated coordinate may be selected by the slice determination unit 10 as the initial slice. Instead of displaying the image slice closest to the estimated coordinate, also a projection image of a slab may be displayed, which has the same thickness as the extent of the three-dimensional volume in the direction orthogonal to the image plane. In this way, a user can see the entire three-dimensional volume, in which the true position is likely to be located, without further scrolling through the image stack.
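Displaying a slab whose thickness matches the uncertainty volume means collecting all slices whose planes fall inside the uncertainty interval around the estimated depth. A hedged sketch under an assumed stack geometry (the stack is taken to start at z = 0; names are illustrative):

```python
import math

def slab_slice_indices(z, uncertainty_margin, slice_spacing, num_slices):
    """Indices of all slices inside [z - margin, z + margin]; projecting
    these slices yields a slab covering the likely true position."""
    lo = max(0, math.ceil((z - uncertainty_margin) / slice_spacing))
    hi = min(num_slices - 1,
             math.floor((z + uncertainty_margin) / slice_spacing))
    return list(range(lo, hi + 1))
```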
The first region determination unit 4 can be adapted to allow a user to outline a two-dimensional region or a three-dimensional region, potentially with the support of a full-automatic or semiautomatic computer tool, instead of marking a single three-dimensional coordinate, as the first region in the first image. In this case, the image processing device can be adapted to map all or a subset of the image elements, i.e. the voxels, in the first region to the one or more other ipsilateral views, in particular, to the second image. The mapped coordinates are displayed, for example, with a certain color. Since the mapping may result in several disconnected sub-regions of the second region, a hull, for example a convex hull, enclosing all sub-regions may be computed and displayed on the display 7. The regions may be depicted as solid, for example, colored or translucent, regions or as contours or surfaces.
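A convex hull enclosing the mapped, possibly disconnected sub-regions can be computed in the image plane with the standard monotone-chain algorithm. The sketch below operates on the in-plane coordinates of the mapped voxels and is an illustration, not the method prescribed by this document:

```python
def convex_hull(points):
    """2-D convex hull (Andrew's monotone chain) of the mapped in-plane
    coordinates, returned in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 for a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop the last point of each chain because it repeats the other chain's start.
    return lower[:-1] + upper[:-1]
```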
If the uncertainty determination unit 11 provides uncertainty information, an additional uncertainty margin can be added to the determined second region, which increases its extent. If the second region comprises several sub-regions, each of these sub-regions may comprise an uncertainty margin reflecting the respective uncertainty. The depicted second region, in particular, the depicted sub-regions, i.e. the depicted volumes, in the second image reflect the areas, in which the structure, that was outlined in the original view, is likely to be located.
The image processing device 1 further comprises a tool providing unit 12 for providing a tool for being used in the first image and the second image, wherein the display 7 is adapted to show the tool at the first region in the first image and at the second region in the second image. The image processing device can therefore be utilized for synchronizing tools, which are simultaneously used in two or more images, i.e. in two or more views. The provided tool is, for example, a local digital magnifier that magnifies a small sub-region of an image for the purpose of analyzing local image details. The image processing device can be adapted to simultaneously magnify and depict in the same way also the corresponding region in each other ipsilateral view, in particular, in the second image, as it reflects complementary information. In particular, the user can move the selected tool to a certain position in one view, while the tools in the corresponding other views are manipulated in real-time by computing the corresponding positions. Instead of or in addition to providing a local digital magnifier as the tool, the tool providing unit 12 can be adapted to provide other tools. For example, cross hairs can be provided, wherein the cross hairs are synchronized between views to browse simultaneously through the image stacks of the ipsilateral views. The image processing device may be adapted to allow a user to control the cross hair in one view, changing the slice or moving the cursor in the image plane. The current user-defined cross hair position is regarded as being the first region, which is mapped to the one or several other ipsilateral views, in particular, to the second image by determining the corresponding second region, wherein in the other ipsilateral views a cross hair is depicted in the closest image slice at the estimated in-plane position. The provided tool can also be a peep-hole view tool.
This tool supports the user in focusing on a small region of an image by displaying only the local neighborhood around the computed position, while the complement is suppressed, for example, not displayed or greyed out.
The image processing device preferentially further comprises a finding providing unit 15 for providing findings in the first image and the second image, wherein the first region determination unit 4 is adapted to determine the region of a first finding in the first image as the first region, wherein the image processing device 1 further comprises a grouping unit 16 for grouping the first finding and a second finding in the second image into a group of findings, if the distance between the position of the second finding in the second image and the position of the second region is smaller than a predefined threshold. The image processing device 1 further comprises a group classification unit 17 for classifying a group of findings based on features of the findings of the group and on predefined group classification rules.
The image processing device can therefore be adapted to fully automatically link findings in two or more views. The spatial location of a finding, for example, of a tumor, may be described with a graphical annotation like a point, a two-dimensional region or a three-dimensional region. The annotation may be fully automatically computed by the finding providing unit 15, for example, by a CAD algorithm, or manually defined by an input from the user. The image processing device can be adapted to link two or more findings by computing their spatial distance, for example, by mapping a point annotation from one view to another view and by computing the distance between the mapped point and the annotation in the second view. If the distance is less than a predefined threshold, it is likely that both annotations mark the same finding. For a two-dimensional annotation or a three-dimensional annotation, the distance to a representative point like the center of gravity or one of the points of the contour or volume of the annotation may be computed.
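The distance-based linking described above can be sketched in a few lines. The following Python fragment is illustrative only; the names (link_findings, mapped_position) and the 10 mm threshold are ours, not from the description, and positions are assumed to be millimeter coordinates in the second image:

```python
import math

# Hypothetical sketch of distance-based finding linking: a first-view
# finding has already been mapped into the second view; every second-view
# finding closer than a threshold is grouped with it.
def link_findings(mapped_position, second_view_findings, threshold_mm=10.0):
    group = []
    for position in second_view_findings:
        distance = math.dist(mapped_position, position)
        if distance < threshold_mm:  # likely the same finding
            group.append((position, distance))
    group.sort(key=lambda entry: entry[1])  # closest candidate first
    return group
```

A group classifier could then combine the per-view features of the grouped findings, as described for the group classification unit 17.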
Finding candidates may be fully automatically generated in two or more views, i.e. in two or more images, of the same breast, for example, by a CAD algorithm. The image processing device can be used to fully automatically link the finding candidates from all views, in order to combine the corresponding image-based features for a joint assessment, for example, by a statistical classification algorithm. Suitable features determined from each view are, for example, the shape of the finding, the characteristic of the finding margin or other features describing the local tissue morphology which potentially give evidence of, e.g., malignant disorders, but are perhaps differently distinctive in each view. This classification of a group of findings based on features of the findings, i.e. a joint feature assessment, can improve the classification performance of CAD algorithms, and also supports human readers in the clinical decision making process.
If a user draws an annotation in one view, i.e. if a first region is determined in a first image, the image processing device can determine whether one or several other annotations drawn in one or more other ipsilateral views are close and likely to refer to the same tissue structure. If this is the case, the image processing device may ask the user whether this correspondence shall be registered for later reporting. This or a similar workflow is likely to improve the reporting process in terms of speed and safety.
In an embodiment, the target image, i.e. the second image, to which the first region, which may be a three-dimensional coordinate, is mapped, may also be an image of the same breast acquired in a different imaging session. As an example, a three-dimensional coordinate of a suspicious finding detected in an MLO-view of a tomosynthesis study may define a first region, wherein a second region is determined in an MLO-view of a second tomosynthesis study, which has been acquired some time later. Correspondingly, also positions in CC-views and/or ML-views, which were acquired at different times, can be mapped by the image processing device. In particular, the views from different imaging studies of the same breast, which may have been acquired from different perspectives and with different compressions, can be processed by the image processing device such that corresponding regions in these views are determined. In general, the target view, i.e. the second image, to which a coordinate is mapped, may also be an image volume acquired in a different session with the same or different image parameters (compression, perspective, etc.) as the source volume, i.e. as the first image. It is therefore possible to retrieve a finding in a current study that has already been detected in a prior study. In a similar fashion, this may be used for monitoring therapy response.
The image processing device further comprises a sub-image determination unit 13 for determining sub-images of the first image, wherein the first region determination unit 4 is adapted to allow a user to select a sub-image of the first image, wherein the selected sub-image defines the first region in the first image, wherein the sub-image determination unit 13 is adapted to determine a sub-image of the second image covering the determined second region and wherein the display 7 is adapted to show the selected sub-image in the first image and the determined sub-image in the second image. The display can be adapted to show an image as a sequence of magnified sub-images, for example, with a magnification factor chosen such that one image pixel corresponds to one display pixel. Due to the size of the image matrix, several sub-images are displayed to assure that each image region is displayed at least once. The partitioning of the respective image into sub-images, which may overlap, may be calculated by the sub-image determination unit 13 such that the user can conveniently step through a list of parameters, which, for example, describe the center point and/or two corner points of the respective sub-image. The different ipsilateral views, i.e. the first image and the second image, usually depict complementary information. It is therefore desirable that from all ipsilateral views the sub-regions corresponding to the same breast region are depicted at the same time. For this purpose, the image processing device can be adapted to compute the parameters, in particular, the locations parameterised by the center points and/or two corner points of the respective sub-image, in the one or more other ipsilateral views, in particular, in the second image, by mapping the initial parameter list, i.e.
by mapping corresponding first regions in the first image, to the one or several other ipsilateral views, wherein this mapping is performed by determining, for example, second regions in the second image, which correspond to the first regions in the first image. For instance, a list M of sub-image parameters for an MLO-view of a tomosynthesis study can be determined by the sub-image determination unit 13. The elements of the parameter list, which define locations, i.e. first regions, in the first image, can be mapped to, for example, an ipsilateral CC-view being a second image, which results in a parameter list C of the same length, wherein the parameter list C defines second regions in the second image, which correspond to the first regions defined by the parameter list M. The user can start browsing through the parameter list by selecting the first element of each list. The selected parameters define sub-images in the corresponding views, which depict the same anatomical sub-region of the breast, but each with a different perspective.
The first region determination unit 4 can be adapted to allow a user to consecutively select all sub-images of the first image, wherein the sub-image determination unit 13 is adapted to determine for each selected sub-image of the first image a corresponding sub-image of the second image, which is shown on the display 7, wherein the sub-image determination unit 13 is further adapted to determine unshown sub-images of the second image, which have not been shown on the display 7 while all sub-images of the first image have been consecutively selected, and wherein the display 7 is adapted to show the unshown sub-images of the second image. Thus, sub-regions in the breast are identified, which are not covered by at least one sub-image defined by the mapped parameters. If such a missed sub-region is identified, one or several sub-images can be defined, which together cover the missed sub-region. The corresponding parameters can be mapped as described above to all other ipsilateral views. The resulting parameters can be appended to the corresponding parameter lists. If the user now browses through the extended parameter list, it is assured that all sub-regions in all views are displayed at least once in a single pass.
2D x-ray mammography currently is the gold standard for the detection of breast cancer in its early stages. However, a limitation of this modality is that in a 2D projection of the breast, superimpositions of normal tissue may look suspicious and lead to false positives, while true lesions can get obscured by overlying breast tissue. In order to diminish these problems, two views of each breast are usually acquired per exam: a craniocaudal (CC) and a mediolateral oblique (MLO) view, where the MLO view is acquired at an angle between 30 and 60 degrees from the CC view. Digital breast tomosynthesis (DBT) was introduced as a promising modality to overcome these projection problems altogether, by reconstructing a 3D volume of the breast from several low dose, limited angle x-ray projections. With the introduction of DBT, it was suggested that by adding the extra dimension, only one tomosynthesis view per breast would be required. Recent insights however indicate that in DBT also two views may be required. Rafferty et al., for instance, found in their study (E. A. Rafferty, L. T. Niklason, and L. A. Jameson-Meehan, 2006. Breast tomosynthesis: One view or two? In Annual Meeting of the Radiological Society of North America, p. 335) that 12% of the lesions were much more visible on the MLO view; 15% were much more visible on the CC view; and 9% were only visible on the CC view. Their conclusion is that it is desirable to acquire both views in DBT, in order to optimally visualize lesions.
To make use of the information in both views, corresponding regions in the views need to be matched. Radiologists can then use both views of a suspicious region to establish a diagnosis, and multiview computer-aided detection (CAD) systems can use the matched regions to compute an outcome. However, finding the corresponding location of a given region in the ipsilateral view can be a difficult task, and in multiview CAD systems it is an essential step that needs to be automated. Several groups have worked on this topic in 2D mammography (Sophie Paquerault, Nicholas Petrick, Heang-Ping Chan, Berkman Sahiner, and Mark A Helvie, Feb 2002. Improvement of computerized mass detection on mammograms: Fusion of two-view information. Medical Physics, 29-2 (2002), 238-247; Marta Altrichter, Zoltan Ludanyi, and Gabor Horvath, 2005. Joint Analysis of Multiple Mammographic Views in CAD Systems for Breast Cancer Detection. In Image Analysis, Lecture Notes in Computer Science, volume 3540, p. 760-769; Bin Zheng, Joseph K Leader, Gordon S Abrams, Amy H Lu, Luisa P Wallace, Glenn S Maitz, and David Gur, Sep 2006. Multiview-based computer-aided detection scheme for breast masses. Med Phys, 33-9 (2006), 3135-3143; Maurice Samulski and Nico Karssemeijer, 2008. Matching mammographic regions in mediolateral oblique and craniocaudal views: a probabilistic approach. In Medical Imaging, Maryellen L. Giger and Nico Karssemeijer, editors, Proceedings of the SPIE, volume 6915, p. 69151M). In 2D projection mammography the depth of a region is unknown, which results in a large uncertainty when looking for the same region in the ipsilateral view. Radiologists often use a method where a search area around the nipple is used, assuming that the distance to the nipple is more or less constant between views. Zheng et al. (Bin Zheng, Jun Tan, Marie A Ganott, Denise M Chough, and David Gur, Nov 2009. Matching breast masses depicted on different views: a comparison of three methods.
Academic Radiology, 16-11 (2009), 1338-1347) found that a straight-strip search area that is bounded by the location of the nipple and the pectoral muscle yields a smaller search area. Another method, developed by Kita et al. (Yasuyo Kita, Ralph Highnam, and Michael Brady, 2001. Correspondence between Different View Breast X Rays Using Curved Epipolar Lines. Computer Vision and Image Understanding, 83-1 (2001), 38-56), estimates the most likely locations in ipsilateral views with curved epipolar lines that are computed by simulating the projection and (de)compression process of a breast exam. In 3D DBT the depth of a region is known, and therefore it should be possible to make a more accurate estimation of the location of corresponding regions in ipsilateral tomosynthesis views. The image processing device described above with reference to Fig. 1 provides a fast method to estimate corresponding locations in ipsilateral tomosynthesis views, by matching and applying an analytical mechanical compression model for, for example, hemispheres to the tomosynthesis data. Such a method can be very useful for several tasks. First, as a starting point for a feature based local search method to link suspicious regions in a multiview CAD system. Second, as an initialization phase for a more precise but time-consuming registration method. Third, as a visualization tool for radiologists to quickly find corresponding locations in ipsilateral tomosynthesis views. Finding these corresponding locations may not only be a difficult, but also a very time-consuming task for radiologists, because many slices may have to be inspected individually before correspondence is found. The aim is to provide a fast and accurate method that allows a mammographic viewing station to automatically present the correct slice in the ipsilateral view with a marker at the estimated location, after a user has indicated a point of interest in the original view.
Such a tool could lead to more efficient reading of DBT cases. Further applications of the image processing device are possible, in particular, the further applications described above.
The above described image processing device allows corresponding locations in ipsilateral tomosynthesis views to be estimated quickly by applying a spatial transformation derived from the analytical solution of compressing, for example, a hemisphere. A compressed breast model is matched to the tomosynthesis view containing a point of interest. Then the location of the corresponding point in the ipsilateral view is estimated by assuming that this model was decompressed, rotated and compressed again (see Fig. 2). Exact modelling of the process of breast deformation during compression is highly complex and an active field of research (A. Samani, J. Bishop, M. J. Yaffe, and D. B. Plewes, 2001. Biomechanical 3D Finite Element Modeling of the Human Breast Using MRI Data. IEEE Transactions on Medical Imaging, 20(4) (2001), 271-279; N. V. Ruiter, 2003. Registration of X-ray Mammograms and MR-Volumes of the Female Breast based on Simulated Mammographic Deformation. Dissertation, Universitat Mannheim; C. Tanner, M. White, S. Guarino, M.A. Hall-Craggs, M. Douek, and D.J. Hawkes, 2009. Anisotropic Behaviour of Breast Tissue for Large Compressions. In Proceedings International Symposium on Biomedical Imaging: From Nano to Macro, p. 1223-1226; C. Tanner, J.H. Hipwell, and D.J. Hawkes, 2008. Statistical Deformation Models of Breast Compressions from Biomechanical Simulations. In IWDM '08: Proceedings of the 9th international workshop on Digital Mammography, volume 5116, p. 426-432. Springer; J. Chung, V. Rajagopal, P. Nielsen, and M.P. Nash, 2008. Modelling Mammographic Compression of the Breast. In Medical Image Computing and Computer-Assisted Intervention, volume 5241 and 5242, p. 758-765. Springer; P. Pathmanathan, D. Gavaghan, J. Whiteley, M. Brady, M. Nash, P. Nielsen, and V. Rajagopal, 2004. Predicting Tumour Location by Simulating Large Deformations of the Breast Using a 3D Finite Element Model and Nonlinear Elasticity.
In Medical Image Computing and Computer-Assisted Intervention, volume 3216 and 3217, p. 217-224. Springer; H. M. Yin, L. Z. Sun, G. Wang, T. Yamada, J. Wang, and M. W. Vannier, 2004. ImageParser: A Tool for Finite Element Generation from Three-Dimensional Medical Images. BioMedical Engineering OnLine, 3(31) (2004)). The image processing device assumes that the breast is of, for example, spherical shape and of volume-preserving, isotropic, homogeneous, rubber-like (neo-Hookean) material. Therefore, an analytic deformation model for compression is employed. The model defines the deformation functions of all locations in such a sphere when it is compressed and decompressed. With these deformation functions and a simple rotation, corresponding locations in ipsilateral views can be computed very quickly.
Fig. 2 shows a schematic overview of the method performed by the image processing device (frontal views of the left breast 20 of a patient) when going from a location in a CC tomosynthesis volume 25 to the estimated location in the ipsilateral MLO volume 26. The original location (solid dot 23) is decompressed, rotated to the other view and compressed again. Note that the depth information is known in DBT. In Fig. 2, reference number 21 denotes the compression model, reference number 22 denotes the nipple position, reference number 23 denotes in the left part of Fig. 2 the first region and in the right part of Fig. 2 the corresponding second region, reference number 27 denotes a detector, i.e. a first plate, and reference number 28 denotes a compression paddle, i.e. a second plate. The image processing device 1 is preferentially adapted to model a compression between the first and second plates 27, 28. For modeling the compression it is irrelevant that the first plate 27 is a detector. However, while generating the tomosynthesis volumes the detector detects projection data which are used for reconstructing the tomosynthesis volumes.
To mimic breast (de)compression, a breast is modeled by a sphere or hemisphere consisting of a homogeneous, isotropic, rubber-like (neo-Hookean) material that is volume preserving when compressed. The force that is applied to the breast during compression is assumed to be uniaxial, applied to the poles of the sphere. When compressed, the sphere will become flatter and expand outwards (see Fig. 3).
The Neo-Hookean model is a hyperelastic material model which can be used to describe the non-linear relationship between stress and strain (i.e. the change in length of the material) for materials undergoing large deformations (typically accurate for strains up to 20%). For a uniaxial compression, the stress-strain relationship is described by:

σ = f/A = G·(α⁻² − α) (1)

where the stress σ is the compression force f per area A, G is the shear modulus and α the ratio of stressed to unstressed length (strain ratio). In the following formulas a star (*) is used to indicate that a variable relates to the uncompressed state of the sphere.
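As a quick numeric illustration of eq. 1 (a sketch for the reader, not part of the claimed device; the function name is ours), the dimensionless stress σ/G for a compression to 70% of the original length is:

```python
# Uniaxial Neo-Hookean stress of eq. 1: sigma = f/A = G * (alpha**-2 - alpha).
def neo_hookean_stress(alpha, shear_modulus=1.0):
    return shear_modulus * (alpha ** -2 - alpha)

# An uncompressed body (alpha = 1) carries no stress; compressing it to
# 70% of its length (alpha = 0.7) gives sigma/G = 1/0.49 - 0.7, about 1.34.
```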
When a force is applied to the poles of a sphere, this force is transmitted through every layer of the sphere, parallel to the equator. A disc at the equator with a thickness of dz* and a radius of R* will be compressed to a flatter but wider disc of thickness dz and radius R₀. The thickness of the disc changes with the strain ratio at the equator α₀:

dz = α₀·dz* (2)
The disc is volume preserving:

π·R*²·dz* = π·R₀²·dz (3)

and the strain ratio at the equator can therefore be calculated from the radius change at the equator between the uncompressed and compressed state:

α₀ = (R*/R₀)² (4)
The stress at the equator can be calculated from the original area of the disc as:

σ₀ = f/(π·R*²) (5)
When another disc is considered at some vertical distance z* from the center of the sphere, the radius of that disc (R*_z*) can be computed from:

(R*_z*)² = R*² − z*² (6)
Normalizing the sphere with respect to its original radius puts this in nondimensional form as a disc at normalized height:

φ = z*/R* (7)
with a radius of:

R̄*_φ = √(1 − φ²) (8)
Only the upper half of the sphere is analyzed and φ ranges from 0 at the center of the sphere to 1 at the top pole. The bottom half of the sphere is deformed identically due to spherical symmetry. Furthermore, after normalization the dimensionless stress at the equator is defined to be:

g = σ₀/G = α₀⁻² − α₀ (9)
The stress is different for each disc at a certain height in the sphere because the area of each disc is different. The local stress at the disc at height z* (or φ) can be calculated by combining eq. 1, 6, 7 and 5 respectively:

σ_φ = f/(π·(R*_z*)²) = f/(π·(R*² − z*²)) = f/(π·R*²·(1 − φ²)) = σ₀/(1 − φ²) (10)

From eq. 1, 9 and 10 it can be seen that:

g = α₀⁻² − α₀ = (1 − φ²)·(α_φ⁻² − α_φ) (11)
This can be solved for α_φ by:

α_φ = −b/3 + ((1 + i·√3)·b²)/(3·2^(2/3)·f) + ((1 − i·√3)·f)/(6·2^(1/3))

where

b = g/(1 − φ²) and

f = (−27 + 2·b³ + 3·√3·√(27 − 4·b³))^(1/3) (12)
This shows that the degree of compression in any point of the sphere can be computed from only the normalized vertical distance from the center of the sphere (φ) and the ratio between the uncompressed and compressed radius at the equator (α₀ from eq. 4). Because the equations above are normalized, they can be applied to spheres of any radii. Hence the displacement of any point can be computed, using only this compression ratio α₀. Note that there are no constraints, so that α₀ can equally well be used to compute displacements of points that lie outside the boundaries of the original sphere.
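The closed-form root of eq. 12 can be evaluated directly with complex arithmetic, the imaginary parts cancelling in the result. The following Python sketch is illustrative (the function name is ours, not from the description) and is only exercised here in the physically relevant range 0 < α_φ ≤ 1:

```python
import cmath
import math

# Evaluate eq. 12: the real root of alpha**3 + b*alpha**2 - 1 = 0, which is
# eq. 11 rewritten with b = g / (1 - phi**2). Principal complex branches are
# used for the square and cube roots.
def strain_ratio(phi, g):
    b = g / (1.0 - phi ** 2)
    f = (-27 + 2 * b ** 3 + 3 * math.sqrt(3) * cmath.sqrt(27 - 4 * b ** 3)) ** (1 / 3)
    alpha = (-b / 3
             + (1 + 1j * math.sqrt(3)) * b ** 2 / (3 * 2 ** (2 / 3) * f)
             + (1 - 1j * math.sqrt(3)) * f / (6 * 2 ** (1 / 3)))
    return alpha.real  # the imaginary part cancels (up to rounding)
```

For example, at the equator (φ = 0) with g = α₀⁻² − α₀ the function returns α₀ itself, as required by eq. 11.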
Fig. 3 shows a schematic overview of the compression model. Compression of a unity sphere with α₀ = 0.7 results in a deformed sphere with a compressed vertical radius of 0.59 and a stretched horizontal radius of 1.20. In the following, the deformation functions for the compression and decompression of a sphere are shown.
An initial point with Cartesian coordinates (x*, y*, z*) can be converted to a coordinate system where its location is defined by (R*, φ, Θ), where R* is the radial distance from the origin and Θ the azimuthal angle in the disc at normalized height φ:

R* = √(x*² + y*² + z*²)

φ = z*/R*

Θ = arctan(y*, x*)
When compressed, the height of the disc is reduced with the strain ratio α_φ, while the radius of the disc is stretched. The radius of the disc at height φ before compression is given by eq. 8 and, because the disc is volume preserving, the radius of the disc after compression can be computed by:

R̄_φ = √((1 − φ²)/α_φ) (13)
From this stretched radius and by numerical integration of the strain ratios, the new (x, y, z) coordinates after compression can be obtained by:

x = R*·R̄_φ·cos(Θ)

y = R*·R̄_φ·sin(Θ)

z = R*·∫₀^φ α_φ′ dφ′
Vice versa, the (x*, y*, z*) coordinates of a point before compression can be computed from the (x, y, z) coordinates of that point in the compressed state. In accordance with the computations for compressing, a numerical solution is used to find φ (the one that results in the smallest error in eq. 13), and the point is converted to (R*, φ, Θ) coordinates with:

R* = √(x² + y²)/R̄_φ

Θ = arctan(y, x) (14)

The Cartesian coordinates before compression can then be found by:

x* = R*·√(1 − φ²)·cos(Θ)

y* = R*·√(1 − φ²)·sin(Θ)

z* = R*·φ (15)
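The compression and decompression functions of eqs. 13 to 15 can be prototyped as follows. This Python sketch is illustrative only: the strain ratio is obtained by bisection instead of the closed form of eq. 12, the height integral uses a simple midpoint rule, φ is likewise recovered by bisection (exploiting that the ratio of compressed height to compressed in-plane radius grows monotonically with φ), and the function names are ours:

```python
import math

def strain_ratio(phi, g):
    # Real root of alpha**3 + b*alpha**2 - 1 = 0 (eq. 11), by bisection on (0, 1].
    b = g / (1.0 - phi ** 2)
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if mid ** 3 + b * mid ** 2 - 1.0 < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def compressed_height(phi, g, n=200):
    # Midpoint-rule integral of the strain ratio over [0, phi] (the new z).
    return sum(strain_ratio((i + 0.5) * phi / n, g) * phi / n for i in range(n))

def disc_radius(phi, g):
    # Stretched disc radius after compression (eq. 13).
    return math.sqrt((1.0 - phi ** 2) / strain_ratio(phi, g))

def compress(x, y, z, g):
    r = math.sqrt(x * x + y * y + z * z)
    phi, theta = z / r, math.atan2(y, x)
    rho = r * disc_radius(phi, g)
    return (rho * math.cos(theta), rho * math.sin(theta), r * compressed_height(phi, g))

def decompress(x, y, z, g):
    # Recover phi numerically, then apply eqs. 14 and 15.
    rho = math.hypot(x, y)
    lo, hi = 0.0, 0.999999
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if compressed_height(mid, g) / disc_radius(mid, g) < z / rho:
            lo = mid
        else:
            hi = mid
    phi = 0.5 * (lo + hi)
    r = rho / disc_radius(phi, g)                       # eq. 14
    theta = math.atan2(y, x)
    s = r * math.sqrt(1.0 - phi ** 2)
    return (s * math.cos(theta), s * math.sin(theta), r * phi)  # eq. 15
```

Decompressing a compressed point recovers the original point (for z > 0), which is the consistency the method relies on when going from one view to the other.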
In order to match the model to a tomosynthesis volume of a compressed breast, the Cartesian coordinate system of the sphere is defined in the data using suitable anatomical landmarks and an estimation of the strain ratio parameter α₀ is made.
The coordinate system is defined as follows (see Fig. 4). The z-axis points in the direction of compression, i.e. from the compression paddle to the detector (perpendicular to the slices). The y-axis is defined in accordance with the results found in (Yasuyo Kita, Ralph Highnam, and Michael Brady, 2001. Correspondence between Different View Breast X Rays Using Curved Epipolar Lines. Computer Vision and Image Understanding, 83-1 (2001), 38-56) and (Bin Zheng, Jun Tan, Marie A Ganott, Denise M Chough, and David Gur, Nov 2009. Matching breast masses depicted on different views: a comparison of three methods. Academic Radiology, 16-11 (2009), 1338-1347) and runs along the pectoral muscle boundary. In MLO views this boundary is detected automatically with a Hough transform based method (N. Karssemeijer, 1998. Automated classification of parenchymal patterns in mammograms. Physics in Medicine and Biology, 43-2 (1998), 365-378) and described by a plane that is parallel to the z-axis. The x-axis runs perpendicular to the other two, passing through a point on the breast surface that is called the Frontal Breast Center (FBC). This FBC is often the same point as the nipple location, but depends on the definition that is chosen, as will be explained later on. In the CC view, the pectoral muscle is frequently not visible and the y-axis is therefore chosen to run parallel to the vertical image edge at a position such that the distance to the FBC is the same as in the MLO view. The definition of the FBC thus defines the position of the central axis (x). This central axis is assumed to be the axis around which the detector and x-ray tube are rotated when a patient is repositioned from one viewing position to the other. Therefore, when a point is transformed in the uncompressed state from one view to the other, it is also rotated around this central x-axis.
The rotation angle is the difference (in the appropriate direction) between the angles that were used when the CC and MLO views were acquired. These angles are usually fixed by the acquisition protocol of the hospital and can be found in the DICOM headers of the DBT volumes. The definition of the FBC thus can be an important issue. In the following, three methods to define the FBC in a tomosynthesis volume (see Fig. 4) are described.
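The rotation itself is elementary. As an illustrative sketch (the function name is ours, and the 90° and 45° angles in the examples are assumed values, not prescribed by the description), a point in the uncompressed state can be rotated around the central x-axis by the angular difference between the two acquisitions:

```python
import math

# Rotate an uncompressed-state point around the central x-axis by the
# difference between the acquisition angles of the two views (read, in
# practice, from the DICOM headers).
def rotate_about_x(point, angle_deg):
    x, y, z = point
    a = math.radians(angle_deg)
    return (x,
            y * math.cos(a) - z * math.sin(a),
            y * math.sin(a) + z * math.cos(a))
```

Points on the central axis (y = z = 0) are left unchanged, which matches the inherent property noted further below that points on the central axis in one view still lie on this axis in the other view.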
In the first approach the nipple location is automatically estimated. The FBC is defined by the point in the central slice that lies on the skin fold and is furthest away from the pectoral muscle (y-axis). In the second approach the nipple location is manually annotated, i.e. manually annotated 3D nipple locations are used as the FBC. The benefit of this approach is that the FBCs in both views are actually corresponding locations.
In the third approach the FBC is determined in such a way that the amount of breast tissue is more or less symmetrically distributed around the central axis. This is done by letting the central axis run through the center of mass (COM) of the breast tissue volume. The FBC is then defined by the point on the skin contour that lies on the line which is perpendicular to the y-z-plane and runs through this COM. In order to obtain a robust estimate of the COM of the actual breast tissue, the pectoral muscle, axilla, skin folds and nipple are excluded from the computation, using automated segmentation. Preferentially, a method is used where the COM is computed from that part of the breast that has a distance to the pectoral muscle that is more than 55% and less than 95% of the maximum distance to the pectoral muscle boundary. The parameters are chosen in such a way that the remaining part shows a good representation of the distribution of mass of the breast.
Fig. 4 exemplarily shows the employed coordinate axes x, y and an illustration of defining the FBC in slices of an MLO and CC tomosynthesis view of a right breast. The visible breast tissue 30 was extracted from the background by automatic segmentation. The rectangle 31 in the upper right corner of the MLO view shows the automatically segmented pectoral muscle that defines the y-axis. The dot 32 on the skin surface shows the FBC as defined by a) the automatically estimated nipple location, b) the manually annotated nipple location, c) the center of mass based method. The FBC defines the x-axis. The slices that are shown in a) and c) are the central slices of the MLO and CC volumes. The slice shown in the CC view of b) is located several slices more caudally than the central slice, as defined by the manually annotated nipple location. The hatched region 33 in c) shows the region that is used to compute the center of mass of the breast.
In order to obtain the strain ratio parameter α₀ of eq. 4, estimates are made of the radius at the equator of the sphere before and after compression (R* and R₀, respectively). The following method can be used to obtain these estimates from the tomosynthesis data at hand. The radius of the uncompressed sphere is computed from the volume of the compressed breast, assuming that the breast volume is preserved when compressed:
4/3·π·(R*)³ = (vol_MLO + vol_CC)/2

where vol_MLO and vol_CC are the volumes of the breast tissue without the pectoral muscle of the MLO and CC view, respectively. The compressed radius R₀ is computed in a similar fashion from the area of a simulated 2D projection of the compressed breast (without the pectoral muscle):

π·R₀² = (area_MLO + area_CC)/2

where area_MLO and area_CC are the projected areas of the two views, computed from a simulated 2D projection in the direction of the compression force. In both formulas the mean values of both views are used, in order to obtain robust estimates. The parameter α₀ is therefore the same for both views of the same breast.
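The two estimates can be combined as follows. This Python fragment is an illustrative sketch (names are ours); the volumes and projected areas would in practice come from the automatic breast segmentation:

```python
import math

# Estimate alpha0 = (R*/R0)**2 from the mean breast volume (uncompressed
# radius, via the sphere volume) and the mean projected area (compressed
# radius, via the disc area), as in eq. 4.
def estimate_alpha0(vol_mlo, vol_cc, area_mlo, area_cc):
    mean_volume = 0.5 * (vol_mlo + vol_cc)
    mean_area = 0.5 * (area_mlo + area_cc)
    r_uncompressed = (3.0 * mean_volume / (4.0 * math.pi)) ** (1.0 / 3.0)
    r_compressed = math.sqrt(mean_area / math.pi)
    return (r_uncompressed / r_compressed) ** 2
```

For instance, a breast with the volume of a unit sphere whose projection is a disc of radius 1.2 yields α₀ = (1/1.2)² ≈ 0.69.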
Fig. 5 illustrates exemplarily images shown by the display 7. In this example, the FBC is defined by the COM based method. Fig. 5a shows an exemplary slice of an MLO 40 and CC 41 tomosynthesis view of the right breast of a patient. The green arrow 42 in the MLO view 40 points to a large calcification that is visible in the current slice of the MLO view 40, but not in the slice of the CC view 41. When the user clicks on the location of the calcification in the MLO view 40, the image processing device estimates the location in the CC view 41 in real-time and synchronizes this view to the estimated slice. A small search area 43, 44 is shown at the estimated x and y location in that CC slice, to indicate where the corresponding region most likely can be found (see Fig. 5b). The search area consists of two ellipses 43, 44 with radii that are based on percentiles of the difference between the actual and estimated x and y coordinates of all the points in this study (using the COM based method). The radii of the inner and outer ellipse are set to the 50th and 75th percentiles respectively. In this example, it can be seen that the corresponding calcification can be found near the middle of the search area. The line 45 indicates the x-axis that runs from the origin to the FBC 46 that was computed by means of the center of mass based method.
In the following, an embodiment of an image processing method for finding corresponding regions in two three-dimensional images of an object will exemplarily be described with reference to a flowchart shown in Fig. 6.
In step 101, a three-dimensional, first image of the object and a three-dimensional, second image of the same object are provided, wherein in the first image the object is shown compressed in a first compression direction and in the second image the object is shown compressed in a second compression direction. In step 102, an analytical compression model of the object is provided, wherein the compression model is adapted to model the effect of pressure applied to the object. In step 103, a first region is determined in the first image, and in step 104, the compression model is matched to the first image, thereby aligning the compression model compressed in the first compression direction with the object shown in the first image and determining the position of the first region in the compressed compression model. In step 105, a decompression of the compression model is simulated and, in step 106, a compression of the compression model is simulated in the second compression direction such that the compressed compression model is registered with the object shown in the second image. In step 107, the region in the second image, which corresponds to the first region in the compression model compressed in the second compression direction, is determined as the second region. In step 108, the second region is shown within the second image on a display. By matching a relatively simple model of breast deformation to 3D tomosynthesis volumes, a fast and reasonably accurate method can be obtained to estimate corresponding locations in ipsilateral DBT views.
An inherent property of the image processing method is that points that lie on the central axis (x) in one view, will still lie on this axis when transformed to the other view.
The compression model strongly simplifies the difficult process of breast deformation during compression. By assuming that the breast can be modeled by a hemisphere of homogeneous elastically deformable material, several intrinsic properties of breast compression are ignored. For instance, breast tissue obviously is not homogeneous, but consists of several types of tissue, each with its specific compression properties (e.g. stiffness of fibrous tissue). Boundary conditions are also simplified by employing frictionless sliding at the plates and at the planar posterior side. The assumption that a breast can be modeled by a hemisphere is also a strong assumption. Fluids in the breast can be squeezed out during compression, and a breast therefore is also not completely volume preserving. Some other issues that make it difficult to construct an exact model of breast compression, and that are not taken into account in the compression model, are the deformations that are caused by gravity and by the positioning of the breast in the x-ray system. Although the strongly simplified compression model already provides a sufficiently accurate determination of a second region in the second image, which corresponds to a first region in a first image, the accuracy may be further improved by incorporating some of the above mentioned issues into a more accurate compression model and/or by using different compression model parameters per view.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality.
A single unit or device may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Calculations like the matching of the compression model to the first image, the simulations of decompression and compression of the compression model and the determination of the second region in the second image performed by one or several units or devices can be performed by any other number of units or devices. For example, steps 102 to 107 can be performed by a single unit or by any other number of different units. The calculations and/or the control of the image processing device in accordance with the image processing method can be implemented as program code means of a computer program and/or as dedicated hardware.
A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium, supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
Any reference signs in the claims should not be construed as limiting the scope.
The work leading to this invention has received funding from the European Community's Seventh Framework Programme (FP7/2007-2011) under grant agreement n° 224538.
The invention relates to an image processing device for finding corresponding regions in two three-dimensional images of an object. An analytical compression model is matched to a first image showing the object compressed in a first compression direction, thereby determining the position of a first region of the object, which is provided in the first image, in the compressed model. A second region, which corresponds to the first region in the first image, in the second image is determined by simulating a decompression of the compression model, simulating a compression of the compression model in a second compression direction, in which the object shown in the second image is compressed, and determining the region in the second image, which corresponds to the first region in the compression model compressed in the second compression direction, as the second region.

Claims

1. An image processing device for finding corresponding regions in two three-dimensional images of an object, the image processing device (1) comprising:
an image providing unit (2) for providing a three-dimensional, first image of the object and a three-dimensional, second image of the same object, wherein in the first image the object is shown compressed in a first compression direction and in the second image the object is shown compressed in a second compression direction,
a model providing unit (3) for providing an analytical compression model of the object, wherein the compression model is adapted to model the effect of pressure applied to the object, a first region determination unit (4) for determining a first region in the first image, a matching unit (5) for matching the compression model to the first image, thereby registering the compression model compressed in the first compression direction with the object shown in the first image and determining the position of the first region in the compressed compression model,
a second region determination unit (6) for determining a second region, which corresponds to the first region in the first image, in the second image, the second region determination unit being adapted to:
simulate a decompression of the compression model,
simulate a compression of the compression model in the second compression direction such that the compressed compression model is registered with the object shown in the second image,
determine the region in the second image, which corresponds to the first region in the compression model compressed in the second compression direction, as the second region.
2. The image processing device as defined in claim 1, wherein the compression model represents in its decompressed state a sphere or an ellipsoid or a part of a sphere or of an ellipsoid. In a preferred embodiment, the compression model represents in its decompressed state a hemisphere.
3. The image processing device as defined in claim 2, wherein the sphere comprises poles defined by intersections of a line, which is aligned with the first compression direction and which traverses the center of the sphere, with the surface of the sphere, while the compression model is compressed in the first compression direction, and wherein the second region determination unit (6) is adapted to:
rotate, after decompression, the decompressed compression model to an orientation in which an axis through the poles of the sphere is aligned with the second compression direction, simulate the compression of the compressed model in the second compression direction such that the compressed compression model is registered with the object shown in the second image by simulating a uniaxial force applied to the poles of the sphere in the second compression direction.
4. The image processing device as defined in claim 1, wherein the compression model models a volume-preserving, homogeneous, rubber-like material.
5. The image processing device as defined in claim 1, wherein the image providing unit (2) is adapted to provide tomosynthesis images, which correspond to different acquisition directions, as the first and second images.
6. The image processing device as defined in claim 1, wherein the image providing unit (2) is adapted to provide first and second images of a breast as the first and second images.
7. The image processing device as defined in claim 6, wherein one of the first image and the second image corresponds to one of a craniocaudal view (CC), a mediolateral-oblique view (MLO) and a mediolateral view (ML) and the other of the first image and the second image corresponds to another one of the craniocaudal view (CC), the mediolateral-oblique view (MLO) and the mediolateral view (ML).
8. The image processing device as defined in claim 6, wherein
the matching unit (5) is adapted to segment the compressed breast, detect the contour of the pectoral muscle of the compressed breast, determine the breast center of mass or the nipple position, and estimate the volume of the compressed breast in the first image and adapt the compressed compression model such that it corresponds to the segmented compressed breast, the detected contour of the pectoral muscle, the determined breast center of mass or the determined nipple position, and the estimated volume of the compressed breast in the first image, and
the second region determination unit (6) is adapted to segment the compressed breast, detect the contour of the pectoral muscle of the compressed breast, determine the breast center of mass or the nipple position, and estimate the volume of the compressed breast in the second image and simulate the compression of the compression model such that it corresponds to the segmented compressed breast, the detected contour of the pectoral muscle, the determined breast center of mass or the determined nipple position, and the estimated volume of the compressed breast in the second image.
9. The image processing device as defined in claim 1, wherein the first region determination unit (4) comprises a user interface (9) allowing a user to determine the first region in the first image.
10. The image processing device as defined in claim 1, wherein the first region determination unit (4) comprises a computer-aided detection (CAD) unit (18) for determining a CAD mark defining the first region in the first image.
11. The image processing device as defined in claim 1, wherein the image processing device (1) further comprises a display (7) for displaying the first region in the first image and the second region in the second image.
12. The image processing device as defined in claim 11, wherein the second image is comprised of multiple slice images, wherein the image processing device (1) further comprises a slice determination unit (10) for determining a slice of the second image which shows the second region, wherein the display (7) is adapted to show the determined slice.
13. The image processing device as defined in claim 11, wherein the image processing device (1) further comprises an uncertainty determination unit (11) for determining an uncertainty of determining the second region in the second image and wherein the display (7) is adapted to indicate the uncertainty in the second image.
14. The image processing device as defined in claim 11, wherein the image processing device (1) further comprises a tool providing unit (12) for providing a tool for being used in the first image and in the second image and wherein the display (7) is adapted to show the tool at the first region in the first image and at the second region in the second image.
15. The image processing device as defined in claim 11, wherein the image processing device (1) further comprises a sub-images determination unit (13) for determining sub-images of the first image, wherein the first region determination unit (4) is adapted to allow a user to select a sub-image of the first image, wherein the selected sub-image defines the first region in the first image, wherein the sub-images determination unit (13) is adapted to determine a sub-image of the second image covering the determined second region and wherein the display (7) is adapted to show the selected sub-image in the first image and the determined sub-image in the second image.
16. The image processing device as defined in claim 15, wherein the first region determination unit (4) is adapted for allowing a user to consecutively select all sub-images of the first image, wherein the sub-images determination unit (13) is adapted to determine for each selected sub-image of the first image a corresponding sub-image of the second image, which is shown on the display (7), wherein the sub-images determination unit (13) is further adapted to determine un-shown sub-images of the second image, which have not been shown on the display (7) while all sub-images of the first image have consecutively been selected, and wherein the display (7) is adapted to show the un-shown sub-images of the second image.
17. The image processing device as defined in claim 1, wherein the image processing device (1) comprises a finding providing unit (15) for providing findings in the first image and the second image, wherein the first region determination unit (4) is adapted to determine the region of a first finding in the first image as the first region, wherein the image processing device (1) further comprises a grouping unit (16) for grouping the first finding and a second finding in the second image into a group of findings, if the distance between the position of the second finding in the second image and the position of the second region is smaller than a predefined threshold.
18. The image processing device as defined in claim 17, wherein the image processing device (1) further comprises a group classification unit (17) for classifying a group of findings based on features of the findings of the group and on predefined group classification rules.
19. An image processing method for finding corresponding regions in two three-dimensional images of an object, the image processing method comprising:
providing a three-dimensional, first image of the object and a three-dimensional, second image of the same object, wherein in the first image the object is shown compressed in a first compression direction and in the second image the object is shown compressed in a second compression direction,
providing an analytical compression model of the object, wherein the compression model is adapted to model the effect of pressure applied to the object,
determining a first region in the first image,
matching the compression model to the first image, thereby registering the compression model compressed in the first compression direction with the object shown in the first image and determining the position of the first region in the compressed compression model,
simulating a decompression of the compression model,
simulating a compression of the compression model in the second compression direction such that the compressed compression model is registered with the object shown in the second image,
determining the region in the second image, which corresponds to the first region in the compression model compressed in the second compression direction, as the second region.
20. An image processing computer program for finding corresponding regions in two three-dimensional images, the computer program comprising program code means for causing an image processing device as defined in claim 1 to carry out the steps of the image processing method as defined in claim 19, when the computer program is run on a computer controlling the image processing device.
PCT/EP2011/000635 2011-02-10 2011-02-10 Image processing device WO2012107057A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2011/000635 WO2012107057A1 (en) 2011-02-10 2011-02-10 Image processing device


Publications (1)

Publication Number Publication Date
WO2012107057A1 true WO2012107057A1 (en) 2012-08-16

Family

ID=43719452

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2011/000635 WO2012107057A1 (en) 2011-02-10 2011-02-10 Image processing device

Country Status (1)

Country Link
WO (1) WO2012107057A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017047079A (en) * 2015-09-04 2017-03-09 東芝メディカルシステムズ株式会社 Medical image display apparatus and mammography apparatus
WO2018156017A1 (en) 2017-02-23 2018-08-30 Sigmascreening B.V. Mamography-apparatus
US10238354B2 (en) 2014-02-04 2019-03-26 Koninklijke Philips N.V Generating a breast parameter map
EP3858239A4 (en) * 2018-09-28 2021-11-17 FUJIFILM Corporation Image interpretation support device, operating program and operating method therefor
EP3868299A4 (en) * 2018-10-19 2021-12-15 FUJIFILM Corporation Radiological interpretation support apparatus, operation program thereof, and operation method thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001069533A1 (en) * 2000-03-17 2001-09-20 Isis Innovation Limited Three-dimensional reconstructions of a breast from two x-ray mammographics
WO2011007312A1 (en) * 2009-07-17 2011-01-20 Koninklijke Philips Electronics N.V. Multi-modality breast imaging


Non-Patent Citations (21)

* Cited by examiner, † Cited by third party
Title
A. SAMANI; J. BISHOP; M. J. YAFFE; D. B. PLEWES: "Biomechanical 3D Finite Element Modeling of the Human Breast Using MRI Data", IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 20, no. 4, 2001, pages 271 - 279, XP001096853, DOI: doi:10.1109/42.921476
BIN ZHENG; JOSEPH K LEADER; GORDON S ABRAMS; AMY H LU; LUISA P WALLACE; GLENN S MAITZ; DAVID GUR: "Multiview-based computer-aided detection scheme for breast masses", MED PHYS, vol. 33, no. 9, September 2006 (2006-09-01), pages 3135 - 3143, XP012092246, DOI: doi:10.1118/1.2237476
BIN ZHENG; JUN TAN; MARIE A GANOTT; DENISE M CHOUGH; DAVID GUR: "Matching breast masses depicted on different views: a comparison of three methods", ACADEMIC RADIOLOGY, vol. 16, no. 11, November 2009 (2009-11-01), pages 1338 - 1347
C. TANNER; J.H. HIPWELL; D.J. HAWKES: "IWDM '08: Proceedings of the 9th international workshop on Digital Mammography", vol. 5116, 2008, SPRINGER, article "Statistical Deformation Models of Breast Compressions from Biomechanical Simulations", pages: 426 - 432
E. A. RAFFERTY; L. T. NIKLASON; L. A. JAMESON-MEEHAN: "Breast tomosynthesis: One view or two?", ANNUAL MEETING OF THE RADIOLOGICAL SOCIETY OF NORTH AMERICA, 2006, pages 335
H. M. YIN; L. Z. SUN; G. WANG; T. YAMADA; J. WANG; M. W. VANNIER: "ImageParser: A Tool for Finite Element Generation from Three-Dimensional Medical Images", BIOMEDICAL ENGINEERING ONLINE, vol. 3, 2004, pages 31, XP021007746, DOI: doi:10.1186/1475-925X-3-31
J. CHUNG; V. RAJAGOPAL; P. NIELSEN; M.P. NASH: "Medical Image Computing and Computer-Assisted Intervention", vol. 5241-524, 2008, SPRINGER, article "Modelling Mammographic Compression of the Breast", pages: 758 - 765
JAE-HOON CHUNG ET AL: "Modelling Mammographic Compression of the Breast", 6 September 2008, MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI 2008; [LECTURE NOTES IN COMPUTER SCIENCE], SPRINGER BERLIN HEIDELBERG, BERLIN, HEIDELBERG, PAGE(S) 758 - 765, ISBN: 978-3-540-85989-5, XP019105232 *
KITA Y ET AL: "Correspondence between Different View Breast X Rays Using Curved Epipolar Lines", COMPUTER VISION AND IMAGE UNDERSTANDING, ACADEMIC PRESS, US, vol. 83, no. 1, 1 July 2001 (2001-07-01), pages 38 - 56, XP004434099, ISSN: 1077-3142, DOI: DOI:10.1006/CVIU.2001.0908 *
MARGARET YAM ET AL: "Three-Dimensional Reconstruction of Microcalcification Clusters from TwoMammographic Views", IEEE TRANSACTIONS ON MEDICAL IMAGING, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 20, no. 6, 1 June 2001 (2001-06-01), XP011036102, ISSN: 0278-0062 *
MARTA ALTRICHTER; ZOLTÁN LUDÁNYI; GABOR HORVÁTH: "Joint Analysis of Multiple Mammographic Views in CAD Systems for Breast Cancer Detection", IMAGE ANALYSIS, LECTURE NOTES IN COMPUTER SCIENCE, vol. 3540, 2005, pages 760 - 769, XP019011068
MAURICE SAMULSKI; NICO KARSSEMEIJER: "Medical Imaging", vol. 6915, 2008, article "Matching mammographic regions in mediolateral oblique and cranio caudal views: a probabilistic approach", pages: 69151
N. KARSSEMEIJER: "Automated classification of parenchymal patterns in mammograms", PHYSICS IN MEDICINE AND BIOLOGY, vol. 43, no. 2, 1998, pages 365 - 378
N. V. RUITER: "Dissertation", 2003, UNIVERSITÄT MANNHEIM, article "Registration of X-ray Mammograms and MR-Volumes of the Female Breast based on Simulated Mammographic Deformation"
P. PATHMANATHAN; D. GAVAGHAN; J. WHITELEY; M. BRADY; M. NASH; P. NIELSEN; V. RAJAGOPAL: "Medical Image Computing and Computer-Assisted Intervention", vol. 3216-321, 2004, SPRINGER, article "Predicting Tumour Location by Simulating Large Deformations of the Breast Using a 3D Finite Element Model and Nonlinear Elasticity", pages: 217 - 224
SOPHIE PAQUERAULT; NICHOLAS PETRICK; HEANG-PING CHAN; BERKMAN SAHINER; MARK A HELVIE: "Improvement of computerized mass detection on mammograms: Fusion of two-view information", MEDICAL PHYSICS, vol. 29, no. 2, February 2002 (2002-02-01), pages 238 - 247, XP012011719, DOI: doi:10.1118/1.1446098
C. TANNER; M. WHITE; S. GUARINO; M.A. HALL-CRAGGS; M. DOUEK; D.J. HAWKES: "Anisotropic Behaviour of Breast Tissue for Large Compressions", PROCEEDINGS INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING: FROM NANO TO MACRO, 2009, pages 1223 - 1226, XP031502274
VIJAY RAJAGOPAL ET AL: "Modeling breast biomechanics for multi-modal image analysis successes and challenges", WILEY INTERDISCIPLINARY REVIEWS: SYSTEMS BIOLOGY AND MEDICINE,, vol. 2, no. 3, 1 May 2010 (2010-05-01), pages 293 - 304, XP009146209 *
YANG S C ET AL: "3D localization of clustered microcalcifications using cranio-caudal and medio-lateral oblique views", COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, PERGAMON PRESS, NEW YORK, NY, US, vol. 29, no. 7, 1 October 2005 (2005-10-01), pages 521 - 532, XP025334815, ISSN: 0895-6111, [retrieved on 20051001], DOI: DOI:10.1016/J.COMPMEDIMAG.2005.05.001 *
YASUYO KITA; RALPH HIGHNAM; MICHAEL BRADY: "Correspondence between Different View Breast X Rays Using Curved Epipolar Lines", COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 83, no. 1, 2001, pages 38 - 56, XP004434099, DOI: doi:10.1006/cviu.2001.0908
YONG ZHANG ET AL: "3D Finite Element Modeling of Nonrigid Breast Deformation for Feature Registration in X-ray and MR Images", APPLICATIONS OF COMPUTER VISION, 2007. WACV '07. IEEE WORKSHOP ON, IEEE, PI, 1 February 2007 (2007-02-01), pages 38 - 38, XP031055147, ISBN: 978-0-7695-2794-9 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10238354B2 (en) 2014-02-04 2019-03-26 Koninklijke Philips N.V Generating a breast parameter map
JP2017047079A (en) * 2015-09-04 2017-03-09 東芝メディカルシステムズ株式会社 Medical image display apparatus and mammography apparatus
WO2018156017A1 (en) 2017-02-23 2018-08-30 Sigmascreening B.V. Mamography-apparatus
NL2018413B1 (en) * 2017-02-23 2018-09-17 Sigmascreening B V Mammography-apparatus
EP3858239A4 (en) * 2018-09-28 2021-11-17 FUJIFILM Corporation Image interpretation support device, operating program and operating method therefor
US11925499B2 (en) 2018-09-28 2024-03-12 Fujifilm Corporation Image interpretation support apparatus, and operation program and operation method thereof
EP3868299A4 (en) * 2018-10-19 2021-12-15 FUJIFILM Corporation Radiological interpretation support apparatus, operation program thereof, and operation method thereof

Similar Documents

Publication Publication Date Title
JP6967031B2 (en) Systems and methods for generating and displaying tomosynthesis image slabs
US9129362B2 (en) Semantic navigation and lesion mapping from digital breast tomosynthesis
Yam et al. Three-dimensional reconstruction of microcalcification clusters from two mammographic views
US8184890B2 (en) Computer-aided diagnosis and visualization of tomosynthesis mammography data
US20230125385A1 (en) Auto-focus tool for multimodality image review
JP5318877B2 (en) Method and apparatus for volume rendering of datasets
US20090129650A1 (en) System for presenting projection image information
US9384528B2 (en) Image annotation using a haptic plane
EP2535829A2 (en) Processing and displaying computer-aided detection information associated with breast x-ray images
US10417777B2 (en) Image processing apparatus, image processing method, and non-transitory computer-readable storage medium
US20070116334A1 (en) Method and apparatus for three-dimensional interactive tools for semi-automatic segmentation and editing of image objects
WO2005079306A2 (en) Method, system, and computer software product for feature-based correlation of lesions from multiple images
CN105339983B (en) Link the breast lesion position across imaging research
WO2012107057A1 (en) Image processing device
US20090154782A1 (en) Dual-magnify-glass visualization for soft-copy mammography viewing
Van Schie et al. Correlating locations in ipsilateral breast tomosynthesis views using an analytical hemispherical compression model
US9142017B2 (en) TNM classification using image overlays
Hopp et al. 2D/3D registration for localization of mammographically depicted lesions in breast MRI
US20190005611A1 (en) Multi-Point Annotation Using a Haptic Plane
Said et al. Image registration between MRI and spot mammograms for X-ray guided stereotactic breast biopsy: preliminary results
US20190005612A1 (en) Multi-Point Annotation Using a Haptic Plane
Georgii et al. Model-based position correlation between breast images
Mertzanidou et al. Intensity-based MRI to X-ray mammography registration with an integrated fast biomechanical transformation
Wilms et al. Estimation of corresponding locations in ipsilateral mammograms: a comparison of different methods
Kim et al. Improving mass detection using combined feature representations from projection views and reconstructed volume of DBT and boosting based classification with feature selection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11704185

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11704185

Country of ref document: EP

Kind code of ref document: A1