US20100104154A1 - Computerized Detection of Breast Cancer on Digital Tomosynthesis Mammograms


Info

Publication number
US20100104154A1
Authority
US
United States
Prior art keywords
dtm
lesion
image
volume
breast
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/624,273
Inventor
Heang-Ping Chan
Jun Wei
Berkman Sahiner
Lubomir M. Hadjiiski
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Michigan
Original Assignee
University of Michigan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Michigan filed Critical University of Michigan
Priority to US12/624,273
Assigned to NATIONAL INSTITUTES OF HEALTH (NIH), U.S. DEPT. OF HEALTH AND HUMAN SERVICES (DHHS), U.S. GOVERNMENT reassignment NATIONAL INSTITUTES OF HEALTH (NIH), U.S. DEPT. OF HEALTH AND HUMAN SERVICES (DHHS), U.S. GOVERNMENT CONFIRMATORY LICENSE (SEE DOCUMENT FOR DETAILS). Assignors: UNIVERSITY OF MICHIGAN
Publication of US20100104154A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/41Analysis of texture based on statistical description of texture
    • G06T7/44Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30068Mammography; Breast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Definitions

  • This relates generally to breast cancer detection and, more particularly, to a system and method that uses computer-aided diagnosis (CAD) on digital tomosynthesis mammograms (DTMs) for the detection and characterization of breast lesions.
  • Mammography is the most cost-effective screening method for early detection of breast cancer.
  • Mammographic sensitivity, however, is often limited by the presence of overlapping dense fibroglandular tissue in the breast.
  • The dense parenchyma reduces the conspicuity of abnormalities, which is one of the main causes of missed breast cancers.
  • The availability of full-field digital detectors offers opportunities to develop new techniques for improved imaging of dense breasts, such as digital tomosynthesis, stereomammography, and breast computed tomography. These new techniques are still under development, and their potential impact on breast cancer detection remains to be investigated.
  • Digital tomosynthesis is based on the same principle as conventional tomography in diagnostic radiology, which uses a screen-film system as the image receptor for imaging body parts at selected depths.
  • In conventional tomography, a series of projection exposures is accumulated on the same film as the x-ray source is moved about a fulcrum while the screen-film system is moved in the opposite direction.
  • A drawback of conventional tomography is that each tomogram can image only one plane, at a selected depth, in relatively sharp focus. If the exact depth of interest is not known in advance, or the abnormality encompasses a range of depths, a tomogram at each depth has to be acquired in a separate exposure, at the cost of additional dose and examination time.
  • In digital tomosynthesis, the series of projection exposures is read out as separate projection views at the different x-ray source locations.
  • Tomographic slices focused at any depth of the imaged volume can then be generated with digital reconstruction techniques from the same series of projection images. Because of the wide dynamic range and linear response of the digital detector, each projection image can be acquired with a fraction of the x-ray exposure used for a regular projection radiograph.
  • The total dose required for digital tomosynthesis imaging may therefore be kept nearly the same as, or only slightly higher than, that of a regular radiograph.
  • Properly designed digital reconstruction techniques provide the additional advantage that the depth resolution of tomosynthesis is generally much higher than that of conventional tomography. Digital tomosynthesis thus makes tomography more practical for breast imaging in terms of radiation dose, examination time, and spatial resolution.
  • Digital breast tomosynthesis mammography is one of the promising methods that may reduce the camouflaging effects of dense tissue and improve mammographic sensitivity for breast cancer detection in dense breasts.
  • DTM may also improve the accuracy in differentiation of malignant and benign breast lesions because the reduction of overlapping tissue allows the features of the lesions to be visualized or analyzed more reliably.
  • Several research groups are developing digital tomosynthesis methods for reconstruction of tomographic slices from the series of projection images. A study is underway to compare DTM with conventional mammograms in breast cancer detection.
  • Digital tomosynthesis mammography is a promising new modality that has the potential to improve breast cancer detection, especially in dense breasts.
  • However, the number of slices per breast may range from 30 to over 80, thus increasing the time required for interpretation.
  • the disclosed CAD system includes retrieving a DTM image file having a plurality of DTM image slices; applying a three-dimensional analysis to the DTM image file to detect lesion candidates; identifying a volume of interest and locating its center; segmenting the volume of interest by a three-dimensional method; extracting one or more object characteristics from the object corresponding to the volume of interest; and determining whether the object corresponding to the volume of interest is a breast lesion or normal breast tissue.
  • FIG. 1 is a block diagram of a computer aided diagnostic system that can be used to perform breast cancer screening and diagnosis based on a series of DTM images using one or more exams from a given patient;
  • FIG. 2 is a graph illustrating a distribution of a longest diameter of 23 masses and 3 areas of architectural distortion estimated on a DTM slice intersecting a lesion approximately at its largest cross section;
  • FIG. 3 is a graph illustrating a distribution of breast density in terms of BI-RADS category for 26 breasts estimated by an MQSA radiologist from conventional mammograms;
  • FIG. 4( a ) is an example of a mass imaged on a DTM slice
  • FIG. 4( b ) is an example of a screen-film mammogram
  • FIG. 5 is a flow chart illustrating a method of processing a set of DTM images for one or more patients to screen for lesions on DTM mammograms;
  • FIG. 6( a ) is a DTM slice intersecting a spiculated mass
  • FIG. 6( b ) is the segmented mass from FIG. 6( a ) in the corresponding slice after 3D region growing segmentation;
  • FIG. 6( c ) illustrates a rubber-band straightening transform (RBST) image of a 60-pixel wide region around the mass from FIG. 6( a );
  • FIG. 6( d ) illustrates a gradient image obtained by Sobel filtering of the RBST image from FIG. 6( c );
  • FIG. 7 is an ROC curve of a linear discriminant classifier obtained from leave-one-case-out testing
  • FIG. 8 is an FROC curve for test performance of a detection system for DTM mammograms.
  • a computer aided diagnosis (CAD) system 20 that may be used to detect and diagnose breast cancers includes a computer 22 having a processor 24 and a memory 26 therein and having a display screen 27 associated therewith.
  • a breast cancer detection and diagnostic system 28 in the form of, for example, a program written in computer implementable instructions or code, is stored in the memory 26 and is adapted to be executed on the processor 24 to perform processing on one or more sets of digital tomosynthesis mammography (DTM) images 30 , which may also be stored in the computer memory 26 .
  • the DTM images 30 may include DTM images for any number of patients and may be entered into or delivered to the system 20 using any desired importation technique.
  • any number of sets of images 30 a , 30 b , 30 c , etc. can be stored in the memory 26 wherein each of the image files 30 a , 30 b , etc. includes numerous DTM scan images associated with a particular DTM scan of a particular patient.
  • different ones of the images files 30 a , 30 b , etc. may be stored for different patients or for the same patient at different times.
  • each of the image files 30 a , 30 b , etc. includes a plurality of images therein corresponding to the different slices of information collected by a DTM imaging system during a particular DTM scan of a patient.
  • the number and content of the images in any of the image files 30 a , 30 b , etc. will vary depending on the size of the patient, the slice thickness, the type of DTM system used to produce the scanned images in the image file, etc.
  • the image files 30 are illustrated as stored in the computer memory 26 , they may be stored in any other memory and be accessible to the computer 22 via any desired communication network, such as a dedicated or shared bus, a local area network (LAN), wide area network (WAN), the internet, etc.
  • the breast cancer detection and diagnostic system 28 includes a number of components or routines which may perform different steps or functionality in the process of analyzing one or more of the image files 30 to detect and/or diagnose breast cancers.
  • the breast cancer detection and diagnostic system 28 may include 3D gradient field analysis routines 34 , multi-resolution filtering routines 36 , object segmentation routines 37 , and feature extraction routines 38 .
  • the breast cancer detection and diagnostic system 28 may also include one or more three-dimensional image processing filters 40 , feature classification routines 42 , and classifiers 43 , such as neural network analyzers, linear discriminant analyzers that use linear discriminant analysis routines to classify objects, support vector machines, and rule-based analyzers, including standard (crisp) rule-based analyzers and fuzzy logic rule-based analyzers, all of which may perform classification based on object features provided thereto.
  • image processing routines and devices may be included within the system 28 as needed.
  • the CAD system 20 may include a set of files 50 that store information developed by the different routines 34 - 38 of the system 28 .
  • These files 50 may include temporary image files that are developed from one or more of the DTM images within an image file 30 and object files that identify or specify objects within the DTM images.
  • the files 50 may also include one or more object files specifying the location and boundaries of objects that may be considered as lesion candidates, and object feature files specifying one or more features of each of these lesion candidates as determined by the feature classifying routines 42 .
  • other types of data may be stored in the different files 50 for use by the system 28 to detect and diagnose breast cancers from the DTM images of one or more of the image files 30 .
  • the breast cancer detection and diagnostic system 28 may include a display program or routine 52 that provides one or more displays to a user, such as a radiologist, via, for example, the screen 27 .
  • the display routine 52 could provide a display of any desired information to a user via any other output device, such as a printer, via a personal data assistant (PDA) using wireless technology, etc.
  • the breast cancer detection and diagnostic system 28 operates on a specified one or ones of the image files 30 a , 30 b , etc. to detect and diagnose breast cancer lesions associated with the selected image file.
  • the system 28 may provide a display to a user, such as a radiologist, via the screen 27 or any other output mechanism, connected to or associated with the computer 22 indicating the results of the breast cancer detection and diagnostic process.
  • the CAD system 20 may use any desired type of computer hardware and software, using any desired input and output devices to obtain DTM images and display information to a user and may take on any desired form other than that specifically illustrated in FIG. 1 .
  • the DTM images may be acquired by a GE digital tomosynthesis mammography system.
  • the system may have a flat-panel CsI/a-Si detector with a pixel size of 0.1 mm × 0.1 mm.
  • the DTM system acquires projection views (PVs) of the compressed breast over a 50-degree arc in the mediolateral oblique (MLO) view.
  • a total dose for the PVs was designed to be less than 1.5 times that of a single standard film mammogram.
  • DTM slices are reconstructed at 1-mm slice spacing using an iterative maximum-likelihood algorithm.
  • DTM images may be acquired by DTM systems of other manufacturers and used as input to the CAD system 20 .
  • the digital detector can be made of other materials such as a selenium detector or a storage phosphor detector.
  • the pixels of the detector can be any size acceptable for mammographic imaging, which may range from 0.05 mm × 0.05 mm to 0.2 mm × 0.2 mm.
  • the PVs may be acquired with different parameters, such as the angular range, the number of PVs per breast, the radiation dose, and the mammographic views.
  • the DTM slices may be reconstructed at different pixel sizes or slice thickness.
  • DTM slices can be reconstructed using a variety of reconstruction algorithms including, but not limited to, iterative maximum-likelihood algorithms, filtered backprojection algorithms, backprojection algorithms, algebraic reconstruction techniques (ART), simultaneous ART, and simultaneous iterative reconstruction techniques (SIRT).
  • a case may consist of the DTM slices of a single breast.
  • 26 cases including 23 masses and 3 areas of architectural distortion were available. Thirteen of the masses and 2 of the areas of architectural distortion were proven to be malignant by biopsy. The true location of the mass or architectural distortion in each case was identified by a Mammography Quality Standards Act (MQSA) radiologist based on the diagnostic information.
  • the distribution of the longest diameter of the masses or the areas of architectural distortion is shown in FIG. 2 .
  • the distribution of the breast density in terms of BI-RADS category for the 26 breasts as estimated by an MQSA radiologist from viewing the conventional screen-film mammograms is shown in FIG. 3 .
  • An example of a DTM slice intersecting a spiculated mass is shown in FIG. 4( a ).
  • the same mass imaged in a conventional screen-film mammogram of the same view is shown in FIG. 4( b ) for comparison.
  • the spicules of the mass are much more conspicuous in the DTM slice than in the mammogram, likely attributable to the reduced structured background in the DTM image.
  • FIG. 5 depicts a flow chart 60 that illustrates a general method of performing breast cancer detection and diagnosis for a patient based on a set of DTM images for the patient as well as a method of determining whether the detected breast lesions are benign or malignant.
  • the flow chart 60 of FIG. 5 may generally be implemented by software or firmware as the breast cancer detection and diagnostic system 28 of FIG. 1 if so desired.
  • the method of detecting breast cancer depicted by the flow chart 60 includes a series of steps 62 - 74 that are performed on each of the DTM images for a particular image file 30 of a patient to identify and classify the areas of interest on the DTM images.
  • the breast cancer detection CAD system 28 includes several major steps: prescreening, segmentation, feature extraction, and false-positive (FP) reduction, as shown in the schematic in FIG. 5 .
  • DTM slices containing the entire breast volume are input into the system for processing (block 62 ).
  • the goal is to detect lesion candidates (block 64 ) by using the property that breast lesions are generally denser than their surroundings and have a higher gradient at the transition between the breast tissue background and the lesion.
  • a three-dimensional (3D) gradient field analysis can be applied to the volumetric data set of each case as described below.
  • the image voxels may first be averaged to obtain a smoothed volumetric data set.
  • the gradient field analysis is calculated in a spherical region about the size of a lesion, e.g., 5 to 10 mm in radius, centered at each voxel, c(i), of the breast volume.
  • the gradient vector at each smoothed voxel v(j) in the spherical region is computed and the direction of the gradient vector is projected to the radial direction from the central voxel c(i) to the voxel v(j).
  • the average gradient direction over a spherical shell of voxels at a radius, R(k), of k voxels from c(i) is calculated as the mean of the gradient directions over voxels on three adjacent spherical shells, e.g., R(k-1), R(k), and R(k+1).
  • the gradient field convergence at c(i) is determined by computing some statistics, e.g., the maximum of the average gradient directions among all shells in the spherical region.
  • the gradient field convergence calculation is performed over all voxels in the breast region, resulting in a 3D gradient field image.
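The prescreening analysis above can be sketched in NumPy/SciPy as follows. This is an illustrative reading of the described method, not the patent's own implementation; the smoothing sigma, the shell radius, and the use of the cosine between the gradient and the inward radial direction are assumed parameter choices:

```python
import numpy as np
from scipy import ndimage

def gradient_field_convergence(volume, center, radius=8, sigma=1.0):
    """Gradient field convergence at one voxel c(i): gradient directions in a
    spherical region are projected onto the radial direction, averaged per
    spherical shell, smoothed over 3 adjacent shells, and the maximum shell
    average is returned as the convergence statistic."""
    vol = ndimage.gaussian_filter(volume.astype(float), sigma)  # smoothed data
    grad = np.stack(np.gradient(vol))                 # shape (3, Z, Y, X)

    cz, cy, cx = center
    zz, yy, xx = np.mgrid[:vol.shape[0], :vol.shape[1], :vol.shape[2]]
    off = np.stack([zz - cz, yy - cy, xx - cx]).astype(float)
    dist = np.sqrt((off ** 2).sum(axis=0))

    # Cosine between the gradient direction and the inward radial direction:
    # near a dense (bright) lesion the gradient points toward the centre.
    gnorm = np.sqrt((grad ** 2).sum(axis=0))
    with np.errstate(divide="ignore", invalid="ignore"):
        cos = -(grad * off).sum(axis=0) / (gnorm * dist)
    cos = np.nan_to_num(cos)

    # Average per shell R(k), then over three adjacent shells, then maximise.
    shell = np.rint(dist).astype(int)
    means = np.array([cos[shell == k].mean() for k in range(1, radius + 1)])
    smoothed = np.convolve(means, np.ones(3) / 3.0, mode="valid")
    return smoothed.max()
```

Evaluating this statistic at every voxel of the breast region yields the 3D gradient field image described above.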
  • Hessian matrix analysis has been described in the literature but has not previously been applied to DTM for lesion detection.
  • the Hessian matrix is calculated by convolution of the second derivatives of 3D Gaussian filters with the DTM data set. Multiscale Gaussian filters covering the size range of the lesions of interest will be applied.
  • a response function designed to enhance 3D spherical objects within a size range is calculated at each voxel.
  • the response functions at different scales are combined by a maximum operation.
  • the resulting response image depicts locations of potential lesions as locations of relatively high responses.
  • locations where the response exceeds a selected threshold can then be labeled as potential lesions. These locations may be subjected to subsequent analysis, similar to that applied to the locations obtained by gradient field analysis, as described below.
  • variations in the implementation of the Hessian matrix analysis are possible.
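One way to realise the multiscale Hessian analysis is sketched below. The particular response function, requiring all three eigenvalues to be negative (a bright blob) and combining scales by a maximum, is a common choice consistent with the description, not necessarily the patent's exact function:

```python
import numpy as np
from scipy import ndimage

def blob_response(volume, sigmas=(2.0, 3.0, 4.0)):
    """Multiscale Hessian blob response: at each scale the Hessian of the
    Gaussian-smoothed volume is formed; voxels whose three eigenvalues are
    all negative (a bright 3D spherical object) get a response equal to the
    geometric mean of the eigenvalue magnitudes, and scales are combined by
    a voxel-wise maximum."""
    vol = volume.astype(float)
    best = np.zeros_like(vol)
    for s in sigmas:
        sm = ndimage.gaussian_filter(vol, s)
        # Second derivatives of the smoothed volume, normalised by s**2 so
        # that responses are comparable across scales.
        H = np.empty(vol.shape + (3, 3))
        grads = np.gradient(sm)
        for i in range(3):
            second = np.gradient(grads[i])
            for j in range(3):
                H[..., i, j] = second[j] * s ** 2
        lam = np.linalg.eigvalsh(H)                  # ascending eigenvalues
        bright = np.all(lam < 0, axis=-1)
        resp = np.where(bright, np.abs(lam.prod(axis=-1)) ** (1.0 / 3.0), 0.0)
        best = np.maximum(best, resp)                # combine scales by maximum
    return best
```

Voxels of relatively high response in the returned image mark the potential lesion locations described above.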
  • a volume of interest (VOI) (e.g., 256 × 256 × 256 voxels) is then identified with its center placed at each location of high gradient convergence or high response.
  • the object in each VOI is segmented by a 3D region growing method in which the location of high gradient convergence is used as the starting point and the object is allowed to grow across multiple slices (block 66 ).
  • Region growing may be guided by the radial gradient magnitude.
  • the growth of the object is terminated where the radial gradient reaches a threshold value adaptively selected for the local object. After region growing, all connected voxels constituting the object are labeled.
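A minimal sketch of the 3D region growing step is given below, under a simplified stopping rule: the text terminates growth where the radial gradient reaches an adaptively selected threshold, whereas this illustration grows 26-connected voxels whose intensity stays above an adaptive level set between the seed value and the local background:

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, frac=0.5):
    """3D region growing from a seed (the point of high gradient convergence):
    26-connected voxels are added, across slices, while intensity stays above
    an adaptive threshold. The intensity rule is a simplified stand-in for
    the radial-gradient stopping rule described in the text."""
    vol = volume.astype(float)
    lo = float(vol.min())
    thr = lo + frac * (vol[seed] - lo)        # adaptive local threshold
    mask = np.zeros(vol.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    n = (z + dz, y + dy, x + dx)
                    if (0 <= n[0] < vol.shape[0] and 0 <= n[1] < vol.shape[1]
                            and 0 <= n[2] < vol.shape[2]
                            and not mask[n] and vol[n] >= thr):
                        mask[n] = True        # label connected object voxels
                        queue.append(n)
    return mask
```

The returned mask labels all connected voxels constituting the object, as in the text.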
  • An alternative segmentation method is to use k-means clustering to group the voxels in the VOI into a lesion class and a background class.
  • the largest connected object in the lesion class will be used as the initial region for a 3D active contour model or level set segmentation method.
  • the search for object boundary by the 3D active contour model is guided by minimization of an energy function, which is a weighted sum of internal energy terms such as homogeneity energy, continuity energy, two-dimensional (2D) curvature energy, and 3D curvature energy, and external energy terms such as 2D gradient energy, 3D gradient energy, and balloon energy.
  • the weights of the 3D active contour are optimized using a training set of DTM lesions.
  • the 3D object characteristics are then extracted from the object (block 70 ).
  • the morphological features describe the shape of the object. They include, but are not limited to, the volume in terms of the number of voxels in the object, volume change before and after 3D morphological opening by a spherical element with a radius of several voxels, e.g., 5 voxels, the surface area, the maximum perimeter of the segmented object among all slices intersecting the object, and the longest diameter of the object.
  • the compactness of the object is described in terms of the percentage overlap with a sphere of the same volume centered at the centroid of the object.
  • the gray level features include, but are not limited to, the contrast of the object relative to the surrounding background, the minimum and the maximum gray levels, and the characteristics derived from the gray level histogram of the object such as the skewness, kurtosis, energy, and the entropy.
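The histogram-derived gray level features named above (skewness, kurtosis, energy, entropy) can be computed as in the sketch below; the bin count is an arbitrary illustrative choice:

```python
import numpy as np

def histogram_features(values, nbins=32):
    """Gray-level features from the voxel values of the segmented object:
    skewness and kurtosis of the value distribution, and energy and entropy
    of the normalised gray level histogram."""
    v = np.asarray(values, dtype=float)
    mean, std = v.mean(), v.std()
    skewness = ((v - mean) ** 3).mean() / std ** 3
    kurtosis = ((v - mean) ** 4).mean() / std ** 4
    hist, _ = np.histogram(v, bins=nbins)
    p = hist / hist.sum()                       # histogram as probabilities
    energy = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return {"skewness": skewness, "kurtosis": kurtosis,
            "energy": energy, "entropy": entropy}
```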
  • the texture features can be extracted either in 2D or 3D.
  • 2D texture feature extraction is described as follows.
  • the rubber-band straightening transform (RBST) previously developed for analysis of masses on 2D mammograms is applied to the object.
  • a region around the object margin e.g., a band of 60 pixels wide, is transformed to a rectangular coordinate system.
  • the lesion boundary will be parallel to the long side, and the radially oriented spicules or textures are aligned and parallel to the short side, thus facilitating texture analysis.
  • a gradient image of the transformed rectangular object margin is derived from Sobel filtering.
  • Texture features can be extracted from the run length statistics (RLS) of the gradient image in both the horizontal and vertical directions.
  • the texture features may include, but are not limited to, short runs emphasis, long runs emphasis, gray level nonuniformity, run length nonuniformity and run percentage. Any other type of texture features may also be extracted, such as spatial gray level dependence (SGLD) textures, gray level difference statistics (GLDS) textures, and the Laws textures.
  • Detailed descriptions of the RBST and of the RLS and SGLD texture features for mammographic lesions are well known to those of ordinary skill in the art.
  • each RLS texture feature is averaged over the corresponding feature values over slices containing the segmented object.
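A sketch of the run length statistics for one direction is given below; the gray level quantisation and the horizontal-run convention are assumed implementation details, and the five features follow the standard RLS definitions:

```python
import numpy as np

def rls_features(image, levels=8):
    """Run length statistics (horizontal runs) of a gray-level image, e.g. a
    Sobel gradient image of the RBST band, giving the five RLS features
    named in the text."""
    img = np.asarray(image, dtype=float)
    span = img.max() - img.min()
    q = np.minimum((img - img.min()) / (span + 1e-12) * levels,
                   levels - 1).astype(int)      # quantised gray levels
    max_run = img.shape[1]
    P = np.zeros((levels, max_run))             # run-length matrix P[g, j-1]
    for row in q:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                P[run_val, run_len - 1] += 1
                run_val, run_len = v, 1
        P[run_val, run_len - 1] += 1
    n_runs = P.sum()
    j = np.arange(1, max_run + 1)               # possible run lengths
    return {
        "SRE": (P / j ** 2).sum() / n_runs,          # short runs emphasis
        "LRE": (P * j ** 2).sum() / n_runs,          # long runs emphasis
        "GLN": (P.sum(axis=1) ** 2).sum() / n_runs,  # gray level nonuniformity
        "RLN": (P.sum(axis=0) ** 2).sum() / n_runs,  # run length nonuniformity
        "RP": n_runs / img.size,                     # run percentage
    }
```

Transposing the image before the call gives the vertical-direction statistics, and averaging each feature over the slices containing the segmented object yields the per-lesion values described above.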
  • 3D texture extraction methods can be considered to be an extension of a 2D method.
  • the lesion can be analyzed in three groups of planes: (1) along the reconstructed high-resolution DTM slices that are parallel to the detector plane, (2) a set of planes that cut through the centroid and the south-north poles, and (3) a set of planes that cut through the centroid and the east-west poles.
  • the cross section of the lesion will be similar to a lesion in 2D.
  • the RBST is then applied to the band of pixels around the lesion boundary to transform it to a rectangular image.
  • the texture in the RBST image can be further enhanced by, e.g., Sobel filtering or multi-resolution wavelet decomposition and reconstruction.
  • Texture measures such as the RLS, the SGLD, the GLDS, and the Laws textures can then be extracted from the gray level statistics of the RBST image.
  • the texture characteristics will be extracted from the planes over all orientations, described above.
  • the corresponding texture measure extracted around the lesion can be averaged over all planes in each group to obtain an average texture measure over all directions around the lesion.
  • the RBST and texture feature extraction can be applied to the band of voxels inside the lesion boundary for further analysis of lesion characteristics.
  • the extracted features may incorporate object characteristics, including, but not limited to, spiculation features, boundary sharpness, and shape irregularity, that can be used by a trained classifier to estimate the likelihood that a lesion is malignant or benign. This may be a part of the output information displayed to a user.
  • a classifier is trained to differentiate true lesions from false positive (FP) objects (block 72 ).
  • Another classifier is trained to differentiate malignant and benign lesions.
  • the classifier may include, but is not limited to, a linear discriminant analysis, a neural network, a support vector machine, rule-based classification, or fuzzy logic classification.
  • the free response receiver operating characteristic (FROC) analysis may be used to evaluate the test performance of the CAD system.
  • a decision threshold is applied to the test discriminant score of each detected object. For an object that has a discriminant score above the threshold, the object may be compared to the true lesion location of that case. The object is considered to be true positive if the centroid of the true lesion marked by the radiologist falls within the volume of the object; otherwise, it is a false positive.
  • the detection sensitivity and the average number of FPs per case are determined from the entire data set.
  • the FROC curve is generated by varying the decision threshold over a range of values.
  • An appropriate decision threshold corresponding to a preferred sensitivity and specificity can be chosen along the FROC curve for the clinical application.
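The FROC evaluation described above reduces to counting, at each decision threshold, the detected true lesions and the false positives. A minimal sketch follows; the per-case data layout (a pair of score lists for lesion hits and for FPs) is a hypothetical convenience, not the patent's format:

```python
def froc_points(cases, thresholds):
    """FROC evaluation sketch. Each case is (lesion_scores, fp_scores): the
    discriminant scores of detections that hit the true lesion and of the
    false positives in that case. Varying the decision threshold traces out
    (FPs/case, sensitivity) operating points."""
    n_lesions = sum(len(tps) for tps, _ in cases)
    points = []
    for t in thresholds:
        tp = sum(sum(s >= t for s in tps) for tps, _ in cases)
        fp = sum(sum(s >= t for s in fps) for _, fps in cases)
        points.append((fp / len(cases), tp / n_lesions))
    return points
```

Sweeping the thresholds over the observed score range yields the full FROC curve, from which an operating point with the preferred sensitivity and specificity can be chosen.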
  • the CAD system will detect the suspicious lesion locations and display these locations to the user.
  • the CAD system will also estimate and display the likelihood of malignancy of the lesion to the user as an option.
  • FIGS. 6( a ) and 6 ( b ) show an example of a slice through a mass in a VOI and the mass boundary obtained by 3D region growing segmentation, respectively.
  • An example of the RBST applied to the slice of the mass and the gradient image derived from the RBST image are shown in FIGS. 6( c ) and 6 ( d ).
  • the spicules radiating from the mass are approximately in the vertical direction and the segmented boundary of the mass is transformed to a straight line, forming the upper edge of the rectangular RBST image.
  • a stepwise or other feature selection procedure may select the most effective subset of features from the available feature pool, thus reducing the dimensionality of the feature space for the classifier.
  • seven features may be selected from the available feature pool.
  • the most often selected features are likely to include the contrast, minimum gray level, volume change before and after 3D morphological opening, maximum perimeter, compactness, and two RLS texture features—horizontal short runs emphasis and gray level nonuniformity.
  • the performance of the classifier for differentiation of true and false lesions in the feature classification step of the CAD system 20 can be evaluated by ROC analysis.
  • an ROC curve obtained from a linear discriminant analysis (LDA) classifier with stepwise feature selection is shown in FIG. 7 .
  • the LDA classifier was designed with the training subset in each of the leave-one-case-out cycles.
  • the trained classifier was applied to the lesion candidates in the left-out case such that each object was assigned a discriminant score.
  • the area under the ROC curve reached 0.87 with a standard deviation of 0.02.
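The leave-one-case-out protocol can be sketched as below. A plain pooled-covariance Fisher LDA stands in for the stepwise-selected LDA described above, and the grouping of candidate objects by a case identifier is an assumed data layout:

```python
import numpy as np

def lda_scores_loo(X, y, case_ids):
    """Leave-one-case-out evaluation sketch: in each cycle an LDA is fitted
    on all cases but one and applied to the left-out case's candidates, so
    every object receives a test discriminant score."""
    X, y, case_ids = map(np.asarray, (X, y, case_ids))
    scores = np.empty(len(y))
    for c in np.unique(case_ids):
        tr, te = case_ids != c, case_ids == c
        m0 = X[tr & (y == 0)].mean(axis=0)       # class means on training set
        m1 = X[tr & (y == 1)].mean(axis=0)
        Xc = np.concatenate([X[tr & (y == 0)] - m0, X[tr & (y == 1)] - m1])
        S = Xc.T @ Xc / len(Xc) + 1e-6 * np.eye(X.shape[1])  # pooled covariance
        w = np.linalg.solve(S, m1 - m0)          # Fisher discriminant direction
        scores[te] = X[te] @ w                   # test scores for left-out case
    return scores
```

An ROC curve and its area can then be computed from the scores of the true lesions and the false positives, as in the evaluation described above.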
  • the overall test performance of the detection system after FP reduction for a data set of 26 cases is shown as the FROC curve in FIG. 8 .
  • in testing, the system achieved a sensitivity of 85% at 2.2 FPs/case and 80% at 2.0 FPs/case. This performance can be further improved by training with a larger training set.
  • in DTM mammography, the structured background, such as dense fibroglandular tissue, is suppressed in the reconstructed DTM slices.
  • DTM is different from computed tomography in that the overlapping tissues are reduced but not totally eliminated.
  • the tomosynthesis reconstruction leaves residues of the overlapping tissue on the DTM slices.
  • the shadow of a lesion can be seen in most slices even though the actual size of the lesion may only be a fraction of the breast thickness.
  • the voxel dimension along the z-direction (i.e., the direction perpendicular to the slices) in the reconstructed slices may be several times larger than that in the x-y plane (the planes of the slices).
  • the boundary of an object in the z-direction is therefore not as well-defined as that on the x-y planes.
  • the features extracted in 3D may have a strong directional dependence. For example, extracted texture features may be taken along the x-y planes and a 3D texture feature obtained by averaging the corresponding 2D texture values over slices containing the object. Alternatively, for 3D texture analysis, the texture features may be calculated in the shell of voxels surrounding the object or on the planes slicing through the object centroid from different directions.
  • Computerized characterization of lesions on DTM images is intended as an aid to radiologists in making biopsy or follow-up recommendations in patient management. Classification between malignant and benign lesions may be performed as a separate step if the radiologist prefers to have an estimate of the likelihood of malignancy (LM) of a selected lesion during screening or diagnostic work-up. Alternatively, the computer may provide a malignancy rating for each suspicious lesion found by the computerized detection algorithm.
  • Computerized lesion characterization has the same general steps: preprocessing, segmentation, feature extraction, and feature classification. The specific computer vision techniques developed for characterization of breast lesions on DTMs are described below. Some of the feature extraction methods and features are similar to those developed for the lesion detection system. However, the feature classifier in the detection system is designed to differentiate true lesions from false positives (normal breast tissue), whereas the feature classifier in the characterization system is trained to differentiate malignant from benign lesions.
  • the slice thickness may first be linearly interpolated to match the pixel size so that the voxels are isotropic cubes.
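As a concrete sketch of this interpolation step, assuming for illustration a 1-mm slice spacing and 0.1-mm pixels (function and parameter names here are hypothetical, not from the patent):

```python
import numpy as np

def make_isotropic(volume, slice_spacing_mm=1.0, pixel_size_mm=0.1):
    """Linearly interpolate a (z, y, x) DTM volume along z so that the
    voxel size along z matches the in-plane pixel size (values are
    illustrative: 1-mm slice spacing, 0.1-mm pixels)."""
    nz = volume.shape[0]
    n_new = int(round((nz - 1) * slice_spacing_mm / pixel_size_mm)) + 1
    old_z = np.arange(nz) * slice_spacing_mm
    new_z = np.linspace(0.0, old_z[-1], n_new)
    # index of the slice just below each new z position
    idx = np.clip(np.searchsorted(old_z, new_z) - 1, 0, nz - 2)
    t = ((new_z - old_z[idx]) / slice_spacing_mm)[:, None, None]
    return (1.0 - t) * volume[idx] + t * volume[idx + 1]

vol = np.random.rand(5, 16, 16)   # 5 slices, 1 mm apart
iso = make_isotropic(vol)         # 41 slices, 0.1 mm apart
```

The original slices are reproduced exactly at their own z positions; new slices in between are linear blends of their two neighbors.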
  • although DTM has reduced background compared with regular mammograms, dense tissue adjacent to a lesion is not uncommon.
  • the background normal tissue structure is reduced by estimating the low frequency background gray level image from a shell of voxels at the periphery of the VOI.
  • the gray level at a given voxel of the background image is obtained as a weighted average of six gray level values, one at the intersection of the normal from the voxel of interest with each of the six surfaces of the VOI.
  • the weighting factor of each voxel value can be taken as the inverse of the length of the respective normal.
  • the estimated background image is then subtracted from the lesion VOI.
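The background estimation and subtraction described above can be sketched as follows. This is a simplified illustration: the +1 in the face distances avoids division by zero at the faces, and the raw face value is used where a real implementation might use a smoothed shell of peripheral voxels.

```python
import numpy as np

def subtract_background(voi):
    """Estimate a low-frequency background for a (z, y, x) VOI as an
    inverse-distance-weighted average of the gray levels where the six
    axis-aligned normals from each voxel meet the VOI faces, then
    subtract it (a simplified sketch of the scheme above)."""
    nz, ny, nx = voi.shape
    z, y, x = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx),
                          indexing="ij")
    # lengths of the normals to the six faces (+1 avoids division by zero)
    d = np.stack([z + 1, nz - z, y + 1, ny - y, x + 1, nx - x]).astype(float)
    # gray levels where each normal meets the corresponding face
    f = np.stack([
        np.broadcast_to(voi[0],  voi.shape),                  # z = 0 face
        np.broadcast_to(voi[-1], voi.shape),                  # z = nz-1 face
        np.broadcast_to(voi[:, 0, :][:, None, :], voi.shape),
        np.broadcast_to(voi[:, -1, :][:, None, :], voi.shape),
        np.broadcast_to(voi[:, :, 0][:, :, None], voi.shape),
        np.broadcast_to(voi[:, :, -1][:, :, None], voi.shape),
    ])
    w = 1.0 / d                                # inverse-distance weights
    background = (w * f).sum(axis=0) / w.sum(axis=0)
    return voi - background
```

A constant VOI is its own background, so the corrected result is zero everywhere.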
  • the segmentation method outlines the main body of the lesion.
  • the lesion within the background-corrected VOI is segmented using either generalized k-means clustering followed by a 3D active contour model or a level set segmentation method, or 3D region growing with adaptive thresholding.
  • a set of pixels in 3D is grouped into a lesion class and a background class using the k-means technique. Clustering is performed iteratively until the cluster centers of the classes stabilize.
  • the voxels grouped into the non-background class may or may not be connected.
  • a 26-connectivity criterion is used to determine the various connected objects in the 3D space and the largest connected object in the lesion class is used as the initial region for a 3D active contour model, where the weights of the energy terms in the active contour model are optimized using a training set for the specific application of lesion segmentation on DTMs.
  • an adaptive threshold is determined as the maximum average gradient over all radial directions around the object being segmented. The threshold can also be made variable over the different radial directions by estimating a local threshold over a limited-angle conical section around a given radial direction.
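A minimal sketch of the k-means initialization with 26-connectivity labeling described above might look like the following. Clustering here uses gray level only, `scipy.ndimage.label` supplies the 26-connectivity step, and the subsequent active contour or level set refinement is omitted.

```python
import numpy as np
from scipy.ndimage import label

def initial_lesion_region(voi, n_iter=50):
    """Two-class k-means on voxel gray level, then keep the largest
    26-connected component as the initial lesion region (a sketch;
    a full system would cluster on richer feature vectors)."""
    lo, hi = float(voi.min()), float(voi.max())
    c = np.array([lo, hi])                       # initial cluster centers
    for _ in range(n_iter):
        lesion = np.abs(voi - c[1]) < np.abs(voi - c[0])
        new_c = np.array([voi[~lesion].mean(), voi[lesion].mean()])
        if np.allclose(new_c, c):
            break                                # centers have stabilized
        c = new_c
    # 26-connectivity: a full 3x3x3 structuring element
    labels, n = label(lesion, structure=np.ones((3, 3, 3)))
    if n == 0:
        return np.zeros_like(lesion)
    sizes = np.bincount(labels.ravel())[1:]
    return labels == (np.argmax(sizes) + 1)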
  • Morphological, gray-level, texture, and spiculation features are extracted.
  • 3D morphological characteristics extracted from the segmented lesion boundary include, but are not limited to, the volume, the longest diameter, the surface-area-to-volume ratio, the sphericity, the eccentricity of a fitted ellipsoid, the normalized radial length from the object centroid, and the variance of the normalized radial length.
  • the gray level features for lesion characterization include, but are not limited to, the contrast of the object relative to the surrounding background, the minimum and the maximum gray levels, and the characteristics derived from the gray level histogram of the object such as the skewness, kurtosis, energy, and the entropy.
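The histogram-derived gray-level features listed above can be sketched as follows; the definitions follow common usage and may differ in detail from the patented implementation, and the bin count is an illustrative choice.

```python
import numpy as np

def gray_level_features(object_voxels, background_voxels, n_bins=64):
    """Gray-level features for a segmented lesion: contrast versus the
    surrounding background, min/max, and histogram-derived skewness,
    kurtosis, energy, and entropy."""
    v = np.asarray(object_voxels, dtype=float)
    mu, sd = v.mean(), v.std()
    hist, _ = np.histogram(v, bins=n_bins)
    p = hist / hist.sum()                       # normalized histogram
    nz = p[p > 0]
    return {
        "contrast": mu - np.mean(background_voxels),
        "min": v.min(),
        "max": v.max(),
        "skewness": np.mean(((v - mu) / sd) ** 3),
        "kurtosis": np.mean(((v - mu) / sd) ** 4),
        "energy": np.sum(p ** 2),
        "entropy": -np.sum(nz * np.log2(nz)),
    }
```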
  • 3D texture features may be extracted from the lesion analyzed in a number of planes, including: (1) along the reconstructed high-resolution DTM slices that are parallel to the detector plane, (2) a set of planes that cut through the centroid and the south-north poles, and (3) a set of planes that cut through the centroid and the east-west poles.
  • on each plane, the cross section of the lesion is similar to a lesion in a 2D image.
  • the RBST previously developed for analysis of masses on 2D mammograms is applied to the object.
  • a region around the object margin, e.g., a band 60 pixels wide, is transformed to a rectangular coordinate system.
  • the lesion boundary is parallel to the long side, and the radially oriented spicules or textures are aligned and parallel to the short side, thus facilitating texture analysis.
  • the texture in the RBST image can be further enhanced by, e.g., Sobel filtering or multi-resolution wavelet decomposition and reconstruction.
  • Texture measures such as the RLS, the SGLD matrices, the GLDS, and the Laws textures are extracted from gray level statistics of the RBST image.
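A hedged sketch of the RLS measures named above, computed in the horizontal direction of a quantized RBST or gradient image (Galloway-style definitions; the number of gray levels is an illustrative choice):

```python
import numpy as np

def rls_features(image, n_levels=4):
    """Run-length statistics in the horizontal direction of an image
    quantized to `n_levels` gray levels: short-runs emphasis, long-runs
    emphasis, and run percentage (a sketch of standard definitions)."""
    img = np.asarray(image, dtype=float)
    q = np.minimum((img - img.min()) / (np.ptp(img) + 1e-12) * n_levels,
                   n_levels - 1).astype(int)
    run_lengths = []
    for row in q:
        start = 0
        for j in range(1, len(row) + 1):
            # a run ends at the row boundary or when the gray level changes
            if j == len(row) or row[j] != row[start]:
                run_lengths.append(j - start)
                start = j
    lengths = np.array(run_lengths, dtype=float)
    return {
        "short_runs_emphasis": np.mean(1.0 / lengths ** 2),
        "long_runs_emphasis": np.mean(lengths ** 2),
        "run_percentage": lengths.size / q.size,
    }
```

A uniform image produces long runs (high long-runs emphasis); a rapidly alternating texture produces unit-length runs.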
  • the texture characteristics may be extracted from the planes at all orientations, as described above.
  • the corresponding texture measure extracted around the lesion can be averaged over all planes in each group to obtain an average texture measure over all directions around the lesion.
  • An alternative way to extract 3D texture features is to generalize the RBST to 3D, i.e., to transform a shell of voxels surrounding the lesion into a slab of rectangular voxels in which the surface of the lesion is mapped to a flat plane.
  • the texture measures described above can then be generalized to 3D and extracted from the slab of voxels in all three dimensions.
  • Spiculation features are extracted both from the 2D planes described above, and also in 3D.
  • a spiculation likelihood map around the lesion margin is calculated by the statistics of the gradient magnitudes and directions at each pixel.
  • a search region is defined as the set of all image pixels that i) lie outside the mass; ii) have a positive contrast; iii) are within a certain distance (e.g., less than 4 mm) of (i_c, j_c); and iv) are within π/4 of the normal to the mass contour at (i_c, j_c).
  • the obtuse angle θ between two lines is computed, where the first line is defined by the gradient direction at (i, j), and the second line joins the pixel (i, j) to the mass boundary pixel (i_c, j_c).
  • a method based on convolution with Gaussian derivatives may be used for computing the gradients.
  • the spiculation measure at a mass boundary pixel (i_c, j_c) is defined as the average value of θ in the search region.
  • the spiculation measure may be computed for a number of contours, where each contour is obtained by expanding the previous contour by one pixel at all pixels on the contour, and the first contour is given by the segmentation method.
  • the resulting image in a band around the mass is referred to as the spiculation likelihood map. High values in the map indicate a high likelihood of the presence of spiculations.
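The per-pixel computation behind the map can be sketched for a single boundary pixel as follows. All names are illustrative; a full system would sweep every boundary pixel over several expanded contours, and `max_dist` (in pixels) stands in for the 4-mm limit.

```python
import numpy as np

def spiculation_measure(grad_i, grad_j, inside_mask, contrast, ic, jc,
                        normal, max_dist=40):
    """Average obtuse angle theta between the gradient direction at each
    search pixel and the line joining it to the boundary pixel (ic, jc),
    per conditions i)-iv) above.  `normal` is the unit outward normal at
    (ic, jc)."""
    h, w = inside_mask.shape
    ii, jj = np.mgrid[0:h, 0:w]
    di, dj = ii - ic, jj - jc
    dist = np.hypot(di, dj)
    # condition iv): within pi/4 of the boundary normal
    along_normal = (di * normal[0] + dj * normal[1]) / (dist + 1e-12)
    search = (~inside_mask) & (contrast > 0) & (dist > 0) & \
             (dist < max_dist) & (along_normal > np.cos(np.pi / 4))
    if not search.any():
        return 0.0
    gi, gj = grad_i[search], grad_j[search]
    cross = gi * dj[search] - gj * di[search]
    dot = gi * di[search] + gj * dj[search]
    theta = np.arctan2(np.abs(cross), dot)        # angle between vectors
    theta = np.maximum(theta, np.pi - theta)      # obtuse angle between lines
    return float(theta.mean())
```

Radially oriented gradients (as along spicules pointing at the boundary pixel) drive the measure toward π.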
  • a threshold above which the voxel is considered to be part of a spicule is determined by training with case samples.
  • the features in this map are used to classify lesions as spiculated and non-spiculated.
  • Spiculation measures extracted from the spiculation likelihood image include, but are not limited to, the number, the density, the total area of spiculations, and the strength of the spiculation gradients to measure the degree of spiculation of the lesion. After extracting the spiculation measures from each plane, the corresponding measures can be averaged over all planes in the same group to derive 3D spiculation measures.
  • spiculation measures can be used directly, and in combination with morphological and texture features, as input features to the malignant-benign classifiers. Additional features for distinguishing fibrous tissue overlapping with the lesion from true spiculations include measures for gradient strengths and direction inside the lesion, and the estimate of the direction and the continuity of the structures from the inside to the outside of the lesion.
  • the spiculation likelihood map is generalized to 3D by calculating the statistics of the gradient magnitudes and directions in 3D space at each voxel in the lesion margin region.
  • the gradient statistics at a point are calculated over a cone-shape region around the normal at that point.
  • High values of gradient statistics in the lesion margin indicate possible presence of spiculations.
  • a threshold above which the voxel is considered to be part of a spicule is determined by training with case samples.
  • 3D spiculation measures such as number, density, and the proportion of voxels considered to be spicules relative to the total number of voxels in the lesion margin are derived from the 3D spiculation likelihood map.
  • a classifier is trained to differentiate malignant and benign lesions based on the extracted features above.
  • the classifier may include a linear discriminant analysis, a neural network, a support vector machine, rule-based classification, or fuzzy logic classification.
  • a stepwise or other feature selection procedure may select the most effective subset of features from the available feature pool, thus reducing the dimensionality of the feature space for the classifier.
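A minimal numpy sketch of a two-class linear discriminant, one of the classifiers named above; a stepwise selection wrapper would call this repeatedly on candidate feature subsets. The ridge term and the midpoint threshold rule are illustrative choices, not from the patent.

```python
import numpy as np

def fisher_lda(X, y):
    """Two-class Fisher linear discriminant: weight vector w maximizing
    the between- to within-class scatter ratio, plus discriminant scores
    and a simple midpoint decision threshold."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class scatter matrix
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) + \
         np.cov(X1, rowvar=False) * (len(X1) - 1)
    # small ridge term for numerical stability
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    scores = X @ w
    threshold = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, scores, threshold
```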
  • classifiers are well known by those of ordinary skill in the art, but the combinations of segmentation and feature extraction techniques that are developed for the DTM images make the designed classifier specific for the CAD task described herein.
  • the raw data are acquired as projection view (PV) mammograms.
  • the number of PVs will depend on the design of the DTM x-ray system. For example, a first-generation GE prototype DTM system acquired 11 PVs; a second-generation GE prototype DTM system acquired 21 PVs. The total dose of all PVs is about 1.5 times the dose of a conventional mammogram. An individual PV is therefore noisier than a conventional mammogram.
  • the PVs offer the advantage that a lesion is projected at a slightly different angle, and thus has somewhat different overlapping tissue, on each view. A lesion that may be camouflaged by dense tissues on some views may become more conspicuous on other views.
  • overlapping tissues that mimic lesions on some views may be less lesion-mimicking on other views. If a CAD system for lesion detection is applied to the PVs, the complementary information on the different PVs may be utilized to improve sensitivity and reduce FPs.
  • although the disclosed algorithm uses DTM slices reconstructed from 11 PVs by the iterative maximum-likelihood algorithm as input, the image processing methods should not depend strongly on the reconstruction method or the number of PVs used to generate the DTM slices, as long as the image quality of the reconstructed slices is reasonable.
  • the raw PV mammograms may also be directly used as input to the CAD system, with a chosen reconstruction method installed as a preprocessing step in the CAD system 20 .
  • the CAD system may first process the PV mammograms individually using 2D image processing techniques.
  • the detected objects on the PVs may then be merged based on the extracted features.
  • the merged information is used to differentiate true lesions or false positives.
  • any of the software described herein may be stored in any computer readable memory such as on a magnetic disk, an optical disk, or other storage medium, in a RAM or ROM of a computer or processor, etc.
  • this software may be delivered to a user or a computer using any known or desired delivery method including, for example, on a computer readable disk or other transportable computer storage mechanism or over a communication channel such as a telephone line, the Internet, the World Wide Web, any other local area network, wide area network, or wireless network, etc. (which delivery is viewed as being the same as or interchangeable with providing such software via a transportable storage medium).
  • this software may be provided directly without modulation or encryption or may be modulated and/or encrypted using any suitable modulation carrier wave and/or encryption technique before being transmitted over a communication channel.

Abstract

A method for using computer-aided diagnosis (CAD) for digital tomosynthesis mammograms (DTM) including retrieving a DTM image file having a plurality of DTM image slices; applying a three-dimensional analysis to the DTM image file to detect lesion candidates; identifying a volume of interest and locating its center; segmenting the volume of interest by a three dimensional method; extracting one or more object characteristics from the object corresponding to the volume of interest; and determining if the object corresponding to the volume of interest is a breast lesion or normal breast tissue.

Description

    CROSS-REFERENCE TO RELATED PATENT APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 11/350,301, which was filed on Feb. 1, 2006 and claims the benefit under 35 U.S.C. §119(e) of U.S. Provisional Application Ser. No. 60/650,923 filed Feb. 8, 2005, the disclosure of which is hereby incorporated by reference in its entirety for all purposes.
  • FIELD OF TECHNOLOGY
  • This relates generally to breast cancer detection and, more particularly, to a system and method for using computer-aided diagnosis (CAD) for Digital Tomosynthesis Mammograms (DTMs) for detection and characterization of breast lesions.
  • DESCRIPTION OF THE RELATED ART
  • Cancer is a serious and pervasive medical condition that has garnered much attention in the past 50 years. As a result, there has been and continues to be significant effort in the medical and scientific communities to reduce deaths resulting from cancer. Mammography is the most cost-effective screening method for early detection of breast cancer. However, mammographic sensitivity is often limited by the presence of overlapping dense fibroglandular tissue in the breast. The dense parenchyma reduces the conspicuity of abnormalities, which is one of the main causes of missed breast cancers. The advent of full-field digital detectors offers opportunities to develop new techniques for improved imaging of dense breasts, such as digital tomosynthesis, stereomammography, and breast computed tomography. These new techniques are still under development and their potential impacts on breast cancer detection remain to be investigated.
  • Digital tomosynthesis is based on the same principle as conventional tomography in diagnostic radiology, which uses a screen-film system as the image receptor for imaging body parts at selected depths. In conventional tomography, a series of projection exposures is accumulated on the same film as the x-ray source is moved about a fulcrum while the screen-film system is moved in the opposite direction. A drawback of conventional tomography is that each tomogram can image only one plane at a selected depth in relatively sharp focus. If the exact depth of interest is not known in advance, or the abnormality encompasses a range of depths, a tomogram at each depth has to be acquired in a separate exposure, thus costing additional dose and examination time.
  • With a digital detector, the series of projection exposures is read out as separate projection views at the different x-ray source locations. Tomographic slices focused at any depth of the imaged volume can then be generated with digital reconstruction techniques from the same series of projection images. Because of the wide dynamic range and the linear response of the digital detector, each of the projection images can be acquired with a fraction of the x-ray exposure used for a regular projection radiograph. The total dose required for digital tomosynthesis imaging may be kept nearly the same as, or only slightly higher than, that of a regular radiograph. Properly designed digital reconstruction techniques provide the additional advantage that the depth resolution of tomosynthesis is generally much higher than that of conventional tomography. Digital tomosynthesis thus makes tomography more practical for breast imaging in terms of radiation dose, examination time, and spatial resolution.
  • Digital breast tomosynthesis mammography is one of the promising methods that may reduce the camouflaging effects of dense tissue and improve mammographic sensitivity for breast cancer detection in dense breasts. DTM may also improve the accuracy in differentiation of malignant and benign breast lesions because the reduction of overlapping tissue allows the features of the lesions to be visualized or analyzed more reliably. Several research groups are developing digital tomosynthesis methods for reconstruction of tomographic slices from the series of projection images. A study is underway to compare DTM with conventional mammograms in breast cancer detection.
  • SUMMARY
  • Digital tomosynthesis mammography (DTM) is a promising new modality that has the potential to improve breast cancer detection, especially in dense breasts. However, the number of slices per breast may range from 30 to over 80, thus increasing the time required for interpretation. A computer-aided diagnosis (CAD) method and system are disclosed that provide a second opinion to radiologists during their interpretation of DTMs. The disclosed CAD system includes retrieving a DTM image file having a plurality of DTM image slices; applying a three-dimensional analysis to the DTM image file to detect lesion candidates; identifying a volume of interest and locating its center; segmenting the volume of interest by a three-dimensional method; extracting one or more object characteristics from the object corresponding to the volume of interest; and determining if the object corresponding to the volume of interest is a breast lesion or normal breast tissue.
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 is a block diagram of a computer aided diagnostic system that can be used to perform breast cancer screening and diagnosis based on a series of DTM images using one or more exams from a given patient;
  • FIG. 2 is a graph illustrating a distribution of a longest diameter of 23 masses and 3 areas of architectural distortion estimated on a DTM slice intersecting a lesion approximately at its largest cross section;
  • FIG. 3 is a graph illustrating a distribution of breast density in terms of BI-RADS category for 26 breasts estimated by an MQSA radiologist from conventional mammograms;
  • FIG. 4( a) is an example of a mass imaged on a DTM slice;
  • FIG. 4( b) is an example of a screen-film mammogram;
  • FIG. 5 is a flow chart illustrating a method of processing a set of DTM images for one or more patients to screen for lesions on DTM mammograms;
  • FIG. 6( a) is a DTM slice intersecting a spiculated mass;
  • FIG. 6( b) is the segmented mass from FIG. 6( a) in the corresponding slice after 3D region growing segmentation;
  • FIG. 6( c) illustrates a rubber-band straightening transform (RBST) image of a 60-pixel wide region around the mass from FIG. 6( a);
  • FIG. 6( d) illustrates a gradient image obtained by Sobel filtering of the RBST image from FIG. 6( c);
  • FIG. 7 is an ROC curve of a linear discriminant classifier obtained from leave-one-case-out testing;
  • FIG. 8 is an FROC curve for test performance of a detection system for DTM mammograms.
  • DETAILED DESCRIPTION
  • Referring to FIG. 1, a computer aided diagnosis (CAD) system 20 that may be used to detect and diagnose breast cancers includes a computer 22 having a processor 24 and a memory 26 therein and having a display screen 27 associated therewith. As illustrated in an expanded view of the memory 26, a breast cancer detection and diagnostic system 28 in the form of, for example, a program written in computer implementable instructions or code, is stored in the memory 26 and is adapted to be executed on the processor 24 to perform processing on one or more sets of digital tomosynthesis mammography (DTM) images 30, which may also be stored in the computer memory 26. The DTM images 30 may include DTM images for any number of patients and may be entered into or delivered to the system 20 using any desired importation technique. Generally speaking, any number of sets of images 30 a, 30 b, 30 c, etc. (called image files) can be stored in the memory 26 wherein each of the image files 30 a, 30 b, etc. includes numerous DTM scan images associated with a particular DTM scan of a particular patient. Thus, different ones of the images files 30 a, 30 b, etc. may be stored for different patients or for the same patient at different times. As noted above, each of the image files 30 a, 30 b, etc. includes a plurality of images therein corresponding to the different slices of information collected by a DTM imaging system during a particular DTM scan of a patient. The actual number of stored scan images in any of the image files 30 a, 30 b, etc. will vary depending on the size of the patient, the scanning image thickness, the type of DTM system used to produce the scanned images in the image file, etc. 
While the image files 30 are illustrated as stored in the computer memory 26, they may be stored in any other memory and be accessible to the computer 22 via any desired communication network, such as a dedicated or shared bus, a local area network (LAN), wide area network (WAN), the internet, etc.
  • As also illustrated in FIG. 1, the breast cancer detection and diagnostic system 28 includes a number of components or routines which may perform different steps or functionality in the process of analyzing one or more of the image files 30 to detect and/or diagnose breast cancers. As will be explained in more detail herein, the breast cancer detection and diagnostic system 28 may include 3D gradient field analysis routines 34, multi-resolution filtering routines 36, object segmentation routines 37, and feature extraction routines 38. To perform these routines 34-38, the breast cancer detection and diagnostic system 28 may also include one or more three dimension image processing filters 40, feature classification routines 42, classifiers 43, such as neural network analyzers, linear discriminant analyzers which use linear discriminant analysis routines to classify objects, support vector machines, rule based analyzers, including standard or crisp rule based analyzers and fuzzy logic rule based analyzers, etc., all of which may perform classification based on object features provided thereto. Of course other image processing routines and devices may be included within the system 28 as needed.
  • Still further, the CAD system 20 may include a set of files 50 that store information developed by the different routines 34-38 of the system 28. These files 50 may include temporary image files that are developed from one or more of the DTM images within an image file 30 and object files that identify or specify objects within the DTM images. The files 50 may also include one or more object files specifying the location and boundaries of objects that may be considered as lesion candidates, and object feature files specifying one or more features of each of these lesion candidates as determined by the feature classifying routines 42. Of course, other types of data may be stored in the different files 50 for use by the system 28 to detect and diagnose breast cancers from the DTM images of one or more of the image files 30.
  • Still further, the breast cancer detection and diagnostic system 28 may include a display program or routine 52 that provides one or more displays to a user, such as a radiologist, via, for example, the screen 27. Of course, the display routine 52 could provide a display of any desired information to a user via any other output device, such as a printer, via a personal data assistant (PDA) using wireless technology, etc.
  • During operation, the breast cancer detection and diagnostic system 28 operates on a specified one or ones of the image files 30 a, 30 b, etc. to detect and diagnose breast cancer lesions associated with the selected image file. After performing the detection and diagnostic functions, which will be described in more detail below, the system 28 may provide a display to a user, such as a radiologist, via the screen 27 or any other output mechanism, connected to or associated with the computer 22 indicating the results of the breast cancer detection and diagnostic process. Of course, the CAD system 20 may use any desired type of computer hardware and software, using any desired input and output devices to obtain DTM images and display information to a user and may take on any desired form other than that specifically illustrated in FIG. 1.
  • Computer-aided diagnosis (CAD) has been shown to improve breast cancer detection and characterization in mammography. Although a preliminary evaluation indicated that the breast lesions can be visualized in DTM images more easily than on conventional mammograms, the overall detection sensitivity and specificity of DTM in comparison with conventional mammograms remain to be investigated. With DTM, the number of reconstructed slices for each breast is very large. Even at 1-mm slice thickness, the number of slices per breast will range from about 30 to over 80. The time required for interpretation of a DTM case can be expected to be much greater than that for conventional mammograms. With the increase in radiologists' workload, the chance for oversight of subtle lesions may not be negligible. CAD will likely play a role in the interpretation of DTM as for conventional mammograms.
  • The DTM images may be acquired by a GE digital tomosynthesis mammography system. The system may have a flat-panel CsI/a:Si detector with a pixel size of 0.1 mm×0.1 mm. The DTM system acquires projection views (PVs) of the compressed breast over a 50-degree arc in the mediolateral oblique (MLO) view. The total dose for the PVs was designed to be less than 1.5 times that of a single standard film mammogram. DTM slices are reconstructed at 1-mm slice spacing using an iterative maximum-likelihood algorithm.
  • Although a specific example of a DTM system is described here, DTM images may be acquired by DTM systems of other manufacturers and used as input to the CAD system 20. The digital detector can be made of other materials, such as a selenium detector or a storage phosphor detector. The pixels of the detector can be of any size acceptable for mammographic imaging, which may range from 0.05 mm×0.05 mm to 0.2 mm×0.2 mm. The PVs may be acquired with different parameters such as the angular range, the number of PVs per breast, the radiation dose, and the mammographic views. The DTM slices may be reconstructed at different pixel sizes or slice thicknesses. DTM slices can be reconstructed using a variety of reconstruction algorithms including, but not limited to, iterative maximum-likelihood algorithms, filtered backprojection algorithms, backprojection algorithms, algebraic reconstruction techniques (ART), simultaneous ART, and simultaneous iterative reconstruction techniques (SIRT).
  • A case may consist of the DTM slices of a single breast. In a preliminary study, 26 cases including 23 masses and 3 areas of architectural distortion were available. Thirteen of the masses and 2 of the areas of architectural distortion were proven to be malignant by biopsy. The true location of the mass or the architectural distortion in each case was identified by a Mammography Quality Standards Act (MQSA) radiologist based on the diagnostic information. The longest diameter of the lesions ranged from 5.4 mm to 29.4 mm (mean=14.2 mm, median=12.1 mm) as estimated on the DTM slice intersecting the lesion approximately at its largest cross section. The distribution of the longest diameter of the masses or the areas of architectural distortion is shown in FIG. 2. The distribution of the breast density in terms of BI-RADS category for the 26 breasts as estimated by an MQSA radiologist from viewing the conventional screen-film mammograms is shown in FIG. 3.
  • An example of a DTM slice intersecting a spiculated mass is shown in FIG. 4( a). The same mass imaged in a conventional screen-film mammogram of the same view is shown in FIG. 4( b) for comparison. The spicules of the mass are much more conspicuous in the DTM slice than in the mammogram, likely attributable to the reduced structured background in the DTM image.
  • Computerized Detection
  • FIG. 5 depicts a flow chart 60 that illustrates a general method of performing breast cancer detection and diagnosis for a patient based on a set of DTM images for the patient as well as a method of determining whether the detected breast lesions are benign or malignant. The flow chart 60 of FIG. 5 may generally be implemented by software or firmware as the breast cancer detection and diagnostic system 28 of FIG. 1 if so desired. Generally speaking, the method of detecting breast cancer depicted by the flow chart 60 includes a series of steps 62-74 that are performed on each of the DTM images for a particular image file 30 of a patient to identify and classify the areas of interest on the DTM images.
  • The breast cancer detection CAD system 28 includes several major steps: prescreening, segmentation, feature extraction, and false-positive (FP) reduction, as shown in the schematic in FIG. 5. For a given case, DTM slices containing the entire breast volume are input into the system for processing (block 62).
  • Prescreening:
  • In the prescreening step, the goal is to detect lesion candidates (block 64) by using the property that the breast lesions are generally denser and have higher gradient at the transition between the breast tissue background and the lesion. To achieve this goal, a three-dimensional (3D) gradient field analysis can be applied to the volumetric data set of each case as described below. Of course, variations of the method or the implementation to extract similar gradient convergence information are possible. To reduce noise in the gradient calculation, the image voxels may first be averaged to obtain a smoothed volumetric data set. The gradient field analysis is calculated in a spherical region about the size of a lesion, e.g., 5 to 10 mm in radius, centered at each voxel, c(i), of the breast volume. The gradient vector at each smoothed voxel v(j) in the spherical region is computed and the direction of the gradient vector is projected to the radial direction from the central voxel c(i) to the voxel v(j). The average gradient direction over a spherical shell of voxels at a radius, R(k), of k voxels from c(i) is calculated as the mean of the gradient directions over voxels on three adjacent spherical shells, e.g., R(k−1), R(k), and R(k+1). Finally, the gradient field convergence at c(i) is determined by computing some statistics, e.g., the maximum of the average gradient directions among all shells in the spherical region. The gradient field convergence calculation is performed over all voxels in the breast region, resulting in a 3D gradient field image.
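The gradient field convergence computation can be sketched for a single candidate voxel as follows. This is a simplification: the initial voxel smoothing and the three-shell averaging are omitted, and the names are illustrative.

```python
import numpy as np

def gradient_convergence(volume, center, r_max=8):
    """Gradient field convergence at `center`: the mean projection of
    the gradient onto the inward radial direction, computed per
    spherical shell, with the maximum over shells returned."""
    gz, gy, gx = np.gradient(volume.astype(float))
    cz, cy, cx = center
    zz, yy, xx = np.mgrid[0:volume.shape[0],
                          0:volume.shape[1],
                          0:volume.shape[2]]
    dz, dy, dx = zz - cz, yy - cy, xx - cx
    r = np.sqrt(dz ** 2 + dy ** 2 + dx ** 2)
    # projection of the gradient onto the direction pointing toward center:
    # positive when gradients converge on the candidate voxel
    proj = -(gz * dz + gy * dy + gx * dx) / (r + 1e-12)
    shell_means = []
    for k in range(1, r_max + 1):
        shell = (r >= k - 0.5) & (r < k + 0.5)
        if shell.any():
            shell_means.append(proj[shell].mean())
    return max(shell_means)
```

For a bright, roughly spherical density, the convergence is largest when the candidate voxel sits at its center.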
  • An alternative method to detect lesion candidates is to apply 3D Hessian matrix analysis to the DTM volumetric data set. The Hessian matrix has been described in the literature but has not previously been applied to DTM for lesion detection. The Hessian matrix is calculated by convolving the second derivatives of 3D Gaussian filters with the DTM data set. Multiscale Gaussian filters covering the size range of the lesions of interest are applied. For each scale, a response function designed to enhance 3D spherical objects within a size range is calculated at each voxel. The response functions at different scales are combined by a maximum operation. The resulting response image depicts locations of potential lesions as locations of relatively high response. Responses that exceed a selected threshold can then be labeled as potential lesions. These locations may be subjected to subsequent analysis, similar to that applied to the locations obtained by gradient field analysis, as described below. Of course, variations in the implementation of the Hessian matrix analysis are possible.
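A common simplification of this multiscale second-derivative analysis replaces the full Hessian eigenvalue response with the scale-normalized Laplacian of Gaussian (the trace of the Gaussian-derivative Hessian), which also peaks on bright spherical objects. The sketch below follows that simplification; the scale set is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blob_response(volume, scales=(2.0, 3.0, 4.0)):
    """Multiscale response from second derivatives of 3D Gaussians:
    the scale-normalized negative Laplacian of Gaussian, combined
    across scales by a maximum.  Bright spherical objects of a size
    matching one of the scales yield high positive responses."""
    best = np.full(volume.shape, -np.inf)
    v = volume.astype(float)
    for s in scales:
        # sum of the three pure second derivatives = Laplacian of Gaussian
        log = sum(gaussian_filter(v, s, order=o)
                  for o in [(2, 0, 0), (0, 2, 0), (0, 0, 2)])
        best = np.maximum(best, -(s ** 2) * log)  # bright blobs -> positive
    return best

# voxels above a selected threshold would be labeled as potential lesions
```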
  • Segmentation:
  • Still referring to FIG. 5, a volume of interest (VOI) (e.g., 256×256×256 voxels) is then identified with its center placed at each location of high gradient convergence or high response. The object in each VOI is segmented by a 3D region growing method in which the location of high gradient convergence is used as the starting point and the object is allowed to grow across multiple slices (block 66). Region growing may be guided by the radial gradient magnitude. The growth of the object is terminated where the radial gradient reaches a threshold value adaptively selected for the local object. After region growing, all connected voxels constituting the object are labeled.
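The region growing step above can be sketched as a breadth-first traversal; for brevity, the adaptive radial-gradient stopping rule is replaced here by a fixed intensity threshold, and the names are illustrative.

```python
from collections import deque
import numpy as np

def region_grow_3d(volume, seed, threshold):
    """3D region growing from a seed voxel (e.g., a location of high
    gradient convergence): a BFS over 6-connected neighbors that admits
    voxels at or above `threshold`, allowing the object to grow across
    multiple slices."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[a] < volume.shape[a] for a in range(3)) \
                    and not mask[n] and volume[n] >= threshold:
                mask[n] = True
                queue.append(n)
    return mask
```

All connected voxels admitted by the traversal constitute the labeled object.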
  • An alternative segmentation method is to use k-means clustering to group the voxels in the VOI into a lesion class and a background class. The largest connected object in the lesion class will be used as the initial region for a 3D active contour model or level set segmentation method. The search for object boundary by the 3D active contour model is guided by minimization of an energy function, which is a weighted sum of internal energy terms such as homogeneity energy, continuity energy, two-dimensional (2D) curvature energy, and 3D curvature energy, and external energy terms such as 2D gradient energy, 3D gradient energy, and balloon energy. The weights of the 3D active contour are optimized using a training set of DTM lesions.
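The two-class k-means grouping can be sketched on voxel gray levels alone. This is a simplified, hypothetical version: it assumes both classes remain non-empty and does not include the subsequent connected-component selection or active contour refinement.

```python
import numpy as np

def kmeans_lesion_mask(voi, iters=50):
    """Two-class k-means on voxel gray levels: alternately assign each voxel
    to the nearer cluster center and recompute the centers from the class
    means, until the centers stabilize; returns a boolean mask of the
    brighter (lesion) class."""
    centers = np.array([float(voi.min()), float(voi.max())])
    for _ in range(iters):
        lesion = np.abs(voi - centers[1]) < np.abs(voi - centers[0])
        new = np.array([voi[~lesion].mean(), voi[lesion].mean()])
        if np.allclose(new, centers):   # cluster centers have stabilized
            break
        centers = new
    return lesion
```

The largest 26-connected component of the returned mask would then seed the 3D active contour or level set refinement described above.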
  • Feature Extraction:
  • The 3D object characteristics are then extracted from the object (block 70). Three groups of features—morphological features, gray level features, and texture features—are extracted from the segmented object. The morphological features describe the shape of the object. They include, but are not limited to, the volume in terms of the number of voxels in the object, volume change before and after 3D morphological opening by a spherical element with a radius of several voxels, e.g., 5 voxels, the surface area, the maximum perimeter of the segmented object among all slices intersecting the object, and the longest diameter of the object. The compactness of the object is described in terms of the percentage overlap with a sphere of the same volume centered at the centroid of the object. The gray level features include, but are not limited to, the contrast of the object relative to the surrounding background, the minimum and the maximum gray levels, and the characteristics derived from the gray level histogram of the object such as the skewness, kurtosis, energy, and the entropy.
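A few of the listed descriptors can be sketched directly from a segmented mask and its gray levels. This is an illustrative subset (volume, compactness as the fractional overlap with an equal-volume sphere at the centroid, contrast, histogram skewness and entropy), not the full feature set; names and bin counts are assumptions.

```python
import numpy as np

def object_features(mask, volume_img):
    """Selected morphological and gray-level features of a 3D object."""
    vox = int(mask.sum())                          # volume in voxels
    zz, yy, xx = np.indices(mask.shape)
    cz, cy, cx = zz[mask].mean(), yy[mask].mean(), xx[mask].mean()
    r = (3 * vox / (4 * np.pi)) ** (1.0 / 3.0)     # equal-volume sphere radius
    sphere = (zz - cz)**2 + (yy - cy)**2 + (xx - cx)**2 <= r**2
    compactness = (mask & sphere).sum() / vox      # overlap with the sphere
    inside, outside = volume_img[mask], volume_img[~mask]
    contrast = inside.mean() - outside.mean()      # object vs. background
    g = inside - inside.mean()
    skewness = (g**3).mean() / (g.std()**3 + 1e-12)
    hist, _ = np.histogram(inside, bins=16)
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {"volume": vox, "compactness": compactness,
            "contrast": contrast, "skewness": skewness, "entropy": entropy}
```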
  • The texture features can be extracted either in 2D or 3D. One example of 2D texture feature extraction is described as follows. On each slice, the cross section of the 3D object is treated as an object in a 2D image. The rubber-band straightening transform (RBST) previously developed for analysis of masses on 2D mammograms is applied to the object. A region around the object margin, e.g., a band of 60 pixels wide, is transformed to a rectangular coordinate system. In the RBST image, the lesion boundary will be parallel to the long side, and the radially oriented spicules or textures are aligned and parallel to the short side, thus facilitating texture analysis. A gradient image of the transformed rectangular object margin is derived from Sobel filtering. Texture features can be extracted from the run length statistics (RLS) of the gradient image in both the horizontal and vertical directions. The texture features may include, but are not limited to, short runs emphasis, long runs emphasis, gray level nonuniformity, run length nonuniformity and run percentage. Any other type of texture features may also be extracted, such as spatial gray level dependence (SGLD) textures, gray level difference statistics (GLDS) textures, and the Laws textures. Detailed description of the RBST and the RLS and SGLD texture features for mammographic lesions is well known by those of ordinary skill in the art. For a 3D object in the DTM data set, each RLS texture feature is averaged over the corresponding feature values over slices containing the segmented object.
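The run-length statistics can be sketched as follows for the horizontal direction of a (gradient) image. This is a simplified illustration with an assumed quantization scheme; it computes only three of the five listed RLS measures, and the names are illustrative.

```python
import numpy as np

def rls_features(img, levels=8):
    """Horizontal run-length statistics: quantize to `levels` gray levels,
    collect runs of equal level along each row, and derive short-runs
    emphasis (SRE), long-runs emphasis (LRE), and run percentage."""
    span = np.ptp(img)
    q = np.zeros(img.shape, int) if span == 0 else np.clip(
        ((img - img.min()) / span * levels).astype(int), 0, levels - 1)
    runs = []
    for row in q:
        length = 1
        for a, b in zip(row[:-1], row[1:]):
            if a == b:
                length += 1          # run continues
            else:
                runs.append(length)  # run ends, record its length
                length = 1
        runs.append(length)
    runs = np.asarray(runs, float)
    return {"SRE": np.mean(1.0 / runs ** 2),      # emphasizes short runs
            "LRE": np.mean(runs ** 2),            # emphasizes long runs
            "run_percentage": len(runs) / q.size}
```

A smooth image yields long runs (high LRE, low run percentage); a busy texture yields many short runs.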
  • 3D texture extraction methods can be considered to be an extension of a 2D method. For example, the lesion can be analyzed in three groups of planes: (1) along the reconstructed high-resolution DTM slices that are parallel to the detector plane, (2) a set of planes that cut through the centroid and the south-north poles, and (3) a set of planes that cut through the centroid and the east-west poles. On each plane, the cross section of the lesion will be similar to a lesion in 2D. The RBST is then applied to the band of pixels around the lesion boundary to transform it to a rectangular image. The texture in the RBST image can be further enhanced by, e.g., Sobel filtering or multi-resolution wavelet decomposition and reconstruction. Texture measures such as the RLS, the SGLD, the GLDS, and the Laws textures can then be extracted from the gray level statistics of the RBST image. The texture characteristics will be extracted from the planes over all orientations, described above. The corresponding texture measure extracted around the lesion can be averaged over all planes in each group to obtain an average texture measure over all directions around the lesion. Likewise, the RBST and texture feature extraction can be applied to the band of voxels inside the lesion boundary for further analysis of lesion characteristics.
  • The extracted features may incorporate object characteristics, including, but not limited to, spiculation features, boundary sharpness, and shape irregularity, that can be used by a trained classifier to estimate the likelihood that a lesion is malignant or benign. This may be a part of the output information displayed to a user.
  • Feature Classification and False Positive (FP) Reduction:
  • If a small data set is all that is available, a leave-one-case-out resampling technique can be used for training and testing the performance of the CAD system 20. If a large data set is available, the CAD system can be trained and tested with independent data sets. A classifier is trained to differentiate true lesions from false positive (FP) objects (block 72). Another classifier is trained to differentiate malignant and benign lesions. The classifiers may include, but are not limited to, a linear discriminant analysis, a neural network, a support vector machine, rule-based classification, or fuzzy logic classification.
  • The free response receiver operating characteristic (FROC) analysis may be used to evaluate the test performance of the CAD system. A decision threshold is applied to the test discriminant score of each detected object. For an object that has a discriminant score above the threshold, the object may be compared to the true lesion location of that case. The object is considered a true positive if the centroid of the true lesion marked by the radiologist falls within the volume of the object; otherwise, it is a false positive. For each decision threshold, the detection sensitivity and the average number of FPs per case are determined from the entire data set. The FROC curve is generated by varying the decision threshold over a range of values. Once the CAD system has been trained and tested, it can be applied to new patient cases as obtained in clinical practice. An appropriate decision threshold corresponding to a preferred sensitivity and specificity can be chosen along the FROC curve for the clinical application. The CAD system will detect the suspicious lesion locations and display these locations to the user. As an option, the CAD system will also estimate and display the likelihood of malignancy of each lesion to the user.
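The FROC sweep described above can be sketched as follows. This is a simplified illustration that assumes at most one detected object per true lesion; the names are illustrative.

```python
def froc_points(candidates, n_cases, n_lesions):
    """FROC operating points: `candidates` is a list of (score, is_true_lesion)
    pairs over all detected objects in the test set; for each threshold,
    sensitivity is the fraction of lesions detected and the FP rate is the
    number of above-threshold false objects per case."""
    points = []
    for t in sorted({s for s, _ in candidates}, reverse=True):
        tp = sum(1 for s, true in candidates if true and s >= t)
        fp = sum(1 for s, true in candidates if not true and s >= t)
        points.append((fp / n_cases, min(tp, n_lesions) / n_lesions))
    return points
```

Lowering the threshold moves along the curve toward higher sensitivity at the cost of more FPs per case, which is how an operating point is chosen for clinical use.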
  • FIGS. 6(a) and 6(b) show an example of a slice through a mass in a VOI and the mass boundary obtained by 3D region growing segmentation, respectively. An example of the RBST applied to the slice of the mass and the gradient image derived from the RBST image are shown in FIGS. 6(c) and 6(d). The spicules radiating from the mass are approximately in the vertical direction and the segmented boundary of the mass is transformed to a straight line, forming the upper edge of the rectangular RBST image.
  • For the design of the classifier for FP reduction, a stepwise or other feature selection procedure may select the most effective subset of features from the available feature pool, thus reducing the dimensionality of the feature space for the classifier. As an example, seven features may be selected from the available feature pool. The most often selected features are likely to include the contrast, minimum gray level, volume change before and after 3D morphological opening, maximum perimeter, compactness, and two RLS texture features—horizontal short runs emphasis and gray level nonuniformity. The performance of the classifier for differentiation of true and false lesions in the feature classification step of the CAD system 20 can be evaluated by ROC analysis. As an example, an ROC curve obtained from a linear discriminant analysis (LDA) classifier with stepwise feature selection is shown in FIG. 7. The LDA classifier was designed with the training subset in each of the leave-one-case-out cycles. The trained classifier was applied to the lesion candidates in the left-out case such that each object was assigned a discriminant score. The area under the ROC curve reached 0.87 with a standard deviation of 0.02.
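The leave-one-out LDA scoring can be sketched as follows. This is a simplified, hypothetical version: a two-class Fisher discriminant with pooled within-class covariance, one object per case, and no stepwise feature selection; all names are illustrative.

```python
import numpy as np

def lda_scores_loo(X, y):
    """Leave-one-out linear discriminant scores: for each sample, fit a
    two-class Fisher LDA (pooled within-class covariance) on the remaining
    samples and score the held-out sample by projecting it onto the
    discriminant axis."""
    scores = np.empty(len(y), float)
    for i in range(len(y)):
        keep = np.ones(len(y), bool)
        keep[i] = False                       # hold out sample i
        Xt, yt = X[keep], y[keep]
        m1 = Xt[yt == 1].mean(axis=0)
        m0 = Xt[yt == 0].mean(axis=0)
        Sw = np.cov(Xt[yt == 1].T) + np.cov(Xt[yt == 0].T)
        # Regularize lightly to keep the pooled covariance invertible.
        w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
        scores[i] = X[i] @ w
    return scores
```

The resulting held-out scores are what the ROC analysis above is computed from.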
  • During prescreening, most, if not all, of the masses and architectural distortions are typically detected. The overall test performance of the detection system after FP reduction for a data set of 26 cases is shown as the FROC curve in FIG. 8. The system has proven to achieve a sensitivity of 85% at 2.2 FPs/case and 80% at 2.0 FPs/case. This performance can further be improved if it is trained with a larger training set.
  • In DTM mammography, the structured background such as the dense fibroglandular tissue is suppressed in the reconstructed DTM slices. However, DTM is different from computed tomography in that the overlapping tissues are reduced but not totally eliminated. The tomosynthesis reconstruction leaves residues of the overlapping tissue on the DTM slices. Similarly, the shadow of a lesion can be seen in most slices even though the actual size of the lesion may only be a fraction of the breast thickness. In addition, the voxel dimension along the z-direction (i.e., the direction perpendicular to the slices) in the reconstructed slices may be several times larger than that in the x-y plane (the planes of the slices).
  • The boundary of an object in the z-direction is therefore not as well-defined as that on the x-y planes. The features extracted in 3D may have a strong directional dependence. For example, extracted texture features may be taken along the x-y planes and a 3D texture feature obtained by averaging the corresponding 2D texture values over slices containing the object. Alternatively, for 3D texture analysis, the texture features may be calculated in the shell of voxels surrounding the object or on the planes slicing through the object centroid from different directions.
  • Computerized Characterization
  • Computerized characterization of lesions on DTM images is intended as an aid to radiologists in making biopsy or follow-up recommendations in patient management. Classification between malignant and benign lesions may be performed as a separate step if the radiologist prefers to have an estimate of the likelihood of malignancy (LM) of a selected lesion during screening or diagnostic work-up. Alternatively, the computer may provide a malignancy rating for each suspicious lesion found by the computerized detection algorithm. Computerized lesion characterization has the general steps—preprocessing, segmentation, feature extraction, and feature classification. The specific computer vision techniques developed for characterization of breast lesions on DTMs are described below. Some of the feature extraction methods and features are similar to those developed for the lesion detection system. However, the feature classifier is designed to differentiate true lesions and false positives (normal breast tissue) in the detection system, whereas the feature classifier is trained to differentiate malignant and benign lesions in the characterization system.
  • Preprocessing:
  • A rectangular-prism-shaped volume of interest (VOI) enclosing the lesion, which has either been identified manually by a radiologist or detected automatically by the computer, is extracted. The slice thickness may first be linearly interpolated to match the pixel size so that the voxels are isotropic cubes. Although DTM has reduced background compared with regular mammograms, dense tissue adjacent to a lesion is not uncommon. The background normal tissue structure is reduced by estimating the low-frequency background gray-level image from a shell of voxels at the periphery of the VOI. The gray level at a given voxel of the background image is obtained as a weighted average of the six gray-level values of the voxels at the intersections between the normals from the voxel of interest and each of the six surfaces of the VOI. The weighting factor of each voxel value can be taken as the inverse of the length of the respective normal. The estimated background image is then subtracted from the lesion VOI.
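The six-face background estimate can be sketched as follows. This is a simplified illustration: distances are offset by one voxel to avoid division by zero on the faces themselves, and all names are illustrative.

```python
import numpy as np

def subtract_background(voi):
    """Low-frequency background correction of a VOI: the background value at
    each voxel is the inverse-distance-weighted average of the six VOI face
    voxels hit by the normals from that voxel to the six faces; the
    estimated background image is then subtracted."""
    Z, Y, X = voi.shape
    zz, yy, xx = np.indices(voi.shape)
    # Face values along the six normals from each voxel.
    faces = [voi[0, yy, xx], voi[Z - 1, yy, xx],
             voi[zz, 0, xx], voi[zz, Y - 1, xx],
             voi[zz, yy, 0], voi[zz, yy, X - 1]]
    # Normal lengths (offset by 1 so weights stay finite on the faces).
    dists = [zz + 1, Z - zz, yy + 1, Y - yy, xx + 1, X - xx]
    w = [1.0 / d for d in dists]
    background = sum(wi * fi for wi, fi in zip(w, faces)) / sum(w)
    return voi - background
```

For a flat background the correction is exact (the output is zero), while a central lesion is preserved nearly unchanged because the faces see little of it.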
  • Segmentation:
  • The segmentation method outlines the main body of the lesion. The lesion within the background-corrected VOI is segmented using either generalized k-means clustering followed by a 3D active contour model or a level set segmentation method, or 3D region growing with adaptive thresholding. In the first method, a set of pixels in 3D is grouped into a lesion class and a background class using the k-means technique. Clustering is performed iteratively until the cluster centers of the classes stabilize. The voxels grouped into the non-background class may or may not be connected. A 26-connectivity criterion is used to determine the various connected objects in the 3D space and the largest connected object in the lesion class is used as the initial region for a 3D active contour model, where the weights of the energy terms in the active contour model are optimized using a training set for the specific application of lesion segmentation on DTMs. In the second method, an adaptive threshold is determined as the maximum average gradient over all radial directions around the object being segmented. The threshold can also be made variable over the different radial directions by estimating a local threshold over a limited-angle conical section around a given radial direction.
  • Feature Extraction:
  • Morphological, gray-level, texture, and spiculation features are extracted. 3D morphological characteristics extracted from the segmented lesion boundary include, but are not limited to, the volume, the longest diameter, the surface-area-to-volume ratio, the sphericity, the eccentricity of a fitted ellipsoid, the normalized radial length from the object centroid, and the variance of the normalized radial length.
  • The gray level features for lesion characterization include, but are not limited to, the contrast of the object relative to the surrounding background, the minimum and the maximum gray levels, and the characteristics derived from the gray level histogram of the object such as the skewness, kurtosis, energy, and the entropy.
  • 3D texture features may be extracted from the lesion analyzed in a number of planes, including: (1) along the reconstructed high-resolution DTM slices that are parallel to the detector plane, (2) a set of planes that cut through the centroid and the south-north poles, and (3) a set of planes that cut through the centroid and the east-west poles. On each plane, the cross section of the lesion is similar to a lesion in 2D. The RBST previously developed for analysis of masses on 2D mammograms is applied to the object. A region around the object margin, e.g., a band of 60 pixels wide, is transformed to a rectangular coordinate system. In the RBST image, the lesion boundary is parallel to the long side, and the radially oriented spicules or textures are aligned and parallel to the short side, thus facilitating texture analysis. The texture in the RBST image can be further enhanced by, e.g., Sobel filtering or multi-resolution wavelet decomposition and reconstruction. Texture measures such as the RLS, the SGLD matrices, the GLDS, and the Laws textures are extracted from gray level statistics of the RBST image. The texture characteristics may be extracted from the planes over all orientations, described above. The corresponding texture measure extracted around the lesion can be averaged over all planes in each group to obtain an average texture measure over all directions around the lesion.
  • An alternative to extract 3D texture features is to generalize the RBST to 3D, i.e., transforming a shell of voxels surrounding the lesion to a slab of rectangular voxels with the surface of the lesion transformed to a flat plane. The texture measures described above can then be generalized to 3D and extracted from the slab of voxels in all three dimensions.
  • Spiculation features are extracted both from the 2D planes described above, and also in 3D. For 2D spiculation analysis, a spiculation likelihood map around the lesion margin is calculated by the statistics of the gradient magnitudes and directions at each pixel. For a pixel (ic,jc) on the mass boundary, a search region is defined as the set of all image pixels that i) lie outside the mass; ii) have a positive contrast; iii) are at a certain distance (e.g., less than 4 mm) from (ic,jc); and iv) are within ±π/4 of the normal to the mass contour at (ic,jc). At each image pixel (i,j) in the search region, the obtuse angle θ between two lines is computed, where the first line is defined by the gradient direction at (i,j), and the second line joins the pixel (i,j) to the mass boundary pixel (ic,jc). A method based on convolution with Gaussian derivatives may be used for computing the gradients. The spiculation measure at a mass boundary pixel (ic,jc) is defined as the average value of θ in the search region. If the pixel (ic,jc) lies on the path of a spiculation, then θ will be close to π/2 whenever the image pixel (i,j) is on the spiculation, and hence the mean of the spiculation measure will be high. The spiculation measure may be computed for a number of contours, where each contour is obtained by expanding the previous contour by one pixel at all pixels on the contour, and the first contour is given by the segmentation method. The resulting image in a band around the mass is referred to as the spiculation likelihood map. High values in the map indicate a high likelihood of the presence of spiculations. A threshold above which the voxel is considered to be part of a spicule is determined by training with case samples. The features in this map are used to classify lesions as spiculated and non-spiculated. 
Spiculation measures extracted from the spiculation likelihood image include, but are not limited to, the number, the density, the total area of spiculations, and the strength of the spiculation gradients to measure the degree of spiculation of the lesion. After extracting the spiculation measures from each plane, the corresponding measures can be averaged over all planes in the same group to derive 3D spiculation measures. These spiculation measures can be used directly, and in combination with morphological and texture features, as input features to the malignant-benign classifiers. Additional features for distinguishing fibrous tissue overlapping with the lesion from true spiculations include measures for gradient strengths and direction inside the lesion, and the estimate of the direction and the continuity of the structures from the inside to the outside of the lesion.
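The 2D spiculation measure at a single boundary pixel can be sketched as below. This is a simplified, hypothetical version: it takes precomputed gradient images and an outside-of-mass mask as inputs, uses a plain distance cutoff instead of the 4 mm physical distance, and all names are illustrative.

```python
import numpy as np

def spiculation_measure(gx, gy, boundary_pt, normal, outside_mask, max_dist=8.0):
    """Spiculation measure at one mass-boundary pixel: over search-region
    pixels (outside the mass, within `max_dist`, within +/- pi/4 of the
    outward boundary normal), average the obtuse angle between the local
    gradient line and the line joining the pixel to the boundary pixel."""
    ic, jc = boundary_pt
    thetas = []
    for i in range(gx.shape[0]):
        for j in range(gx.shape[1]):
            di, dj = i - ic, j - jc
            d = np.hypot(di, dj)
            if d == 0 or d > max_dist or not outside_mask[i, j]:
                continue
            if (di * normal[0] + dj * normal[1]) / d < np.cos(np.pi / 4):
                continue           # outside the +/- pi/4 angular window
            g = np.hypot(gx[i, j], gy[i, j])
            if g == 0:
                continue           # no edge information at this pixel
            # Acute angle between the two lines, then its obtuse complement.
            cos_acute = abs(gx[i, j] * di + gy[i, j] * dj) / (g * d)
            thetas.append(np.pi - np.arccos(np.clip(cos_acute, 0.0, 1.0)))
    return float(np.mean(thetas)) if thetas else 0.0
```

For a pixel on a spicule, the gradient line is roughly perpendicular to the joining line, so the obtuse angle is near pi/2; evaluating the measure along expanding contours produces the spiculation likelihood map described above.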
  • For 3D spiculation analysis, the spiculation likelihood map is generalized to 3D by calculating the statistics of the gradient magnitudes and directions in 3D space at each voxel in the lesion margin region. The gradient statistics at a point are calculated over a cone-shape region around the normal at that point. High values of gradient statistics in the lesion margin indicate possible presence of spiculations. A threshold above which the voxel is considered to be part of a spicule is determined by training with case samples. 3D spiculation measures such as number, density, and the proportion of voxels considered to be spicules relative to the total number of voxels in the lesion margin are derived from the 3D spiculation likelihood map. These features can then be used directly, or in combination with other morphological and texture features, as input predictor variables of classifiers for differentiation of malignant and benign lesions.
  • Feature Classification:
  • A classifier is trained to differentiate malignant and benign lesions based on the extracted features above. The classifier may include a linear discriminant analysis, a neural network, a support vector machine, rule-based classification, or fuzzy logic classification. For the design of the classifier, a stepwise or other feature selection procedure may select the most effective subset of features from the available feature pool, thus reducing the dimensionality of the feature space for the classifier.
  • The formulation of the classifiers is well known by those of ordinary skill in the art, but the combinations of segmentation and feature extraction techniques that are developed for the DTM images make the designed classifier specific for the CAD task described herein.
  • For DTM imaging, the raw data are acquired as projection view (PV) mammograms. The number of PVs depends on the design of the DTM x-ray system. For example, a first-generation GE prototype DTM system acquired 11 PVs; a second-generation GE prototype DTM system acquired 21 PVs. The total dose of all PVs was about 1.5 times the dose of a conventional mammogram. A PV is therefore noisier than a conventional mammogram. However, the PVs offer the advantage that a lesion is projected at a slightly different angle, and thus overlaps with somewhat different tissues, on each view. A lesion that may be camouflaged by dense tissues on some views may become more conspicuous on other views. In addition, overlapping tissues that mimic lesions on some views may be less lesion-mimicking on other views. If a CAD system for lesion detection is applied to the PVs, the complementary information on the different PVs may be utilized to improve sensitivity and reduce FPs.
  • Furthermore, although the disclosed algorithm uses DTM slices reconstructed from 11 PVs with the iterative maximum-likelihood algorithm as input, the image processing methods should not depend strongly on the reconstruction method or the number of PVs used to generate the DTM slices, as long as the image quality of the reconstructed slices is reasonable. The raw PV mammograms may also be used directly as input to the CAD system, with a chosen reconstruction method installed as a preprocessing step in the CAD system 20. Alternatively, the CAD system may first process the PV mammograms individually using 2D image processing techniques. The detected objects on the PVs may then be merged based on the extracted features, and the merged information used to differentiate true lesions from false positives.
  • When implemented, any of the software described herein may be stored in any computer readable memory such as on a magnetic disk, an optical disk, or other storage medium, in a RAM or ROM of a computer or processor, etc. Likewise, this software may be delivered to a user or a computer using any known or desired delivery method including, for example, on a computer readable disk or other transportable computer storage mechanism or over a communication channel such as a telephone line, the Internet, the World Wide Web, any other local area network, wide area network, or wireless network, etc. (which delivery is viewed as being the same as or interchangeable with providing such software via a transportable storage medium). Furthermore, this software may be provided directly without modulation or encryption or may be modulated and/or encrypted using any suitable modulation carrier wave and/or encryption technique before being transmitted over a communication channel.
  • While the present invention has been described with reference to specific examples, which are intended to be illustrative only and not to be limiting of the invention, it will be apparent to those of ordinary skill in the art that changes, additions, or deletions may be made to the disclosed embodiments without departing from the spirit and scope of the invention.

Claims (1)

1. A method for using computer-aided diagnosis (CAD) for digital tomosynthesis mammograms (DTM), comprising:
retrieving a DTM image file having a plurality of DTM image slices;
applying a three-dimensional analysis to the DTM image file to detect lesion candidates;
identifying a volume of interest and locating its center;
segmenting the volume of interest by a three dimensional method;
extracting one or more object characteristics from the object corresponding to the volume of interest; and
determining if the object corresponding to the volume of interest is a breast lesion or normal breast tissue.
US12/624,273 2005-02-08 2009-11-23 Computerized Detection of Breast Cancer on Digital Tomosynthesis Mamograms Abandoned US20100104154A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/624,273 US20100104154A1 (en) 2005-02-08 2009-11-23 Computerized Detection of Breast Cancer on Digital Tomosynthesis Mamograms

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US65092305P 2005-02-08 2005-02-08
US11/350,301 US7646902B2 (en) 2005-02-08 2006-02-08 Computerized detection of breast cancer on digital tomosynthesis mammograms
US12/624,273 US20100104154A1 (en) 2005-02-08 2009-11-23 Computerized Detection of Breast Cancer on Digital Tomosynthesis Mamograms

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/350,301 Continuation US7646902B2 (en) 2005-02-08 2006-02-08 Computerized detection of breast cancer on digital tomosynthesis mammograms

Publications (1)

Publication Number Publication Date
US20100104154A1 2010-04-29


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080155468A1 (en) * 2006-12-21 2008-06-26 Sectra Ab Cad-based navigation of views of medical image data stacks or volumes
US20090070329A1 (en) * 2007-09-06 2009-03-12 Huawei Technologies Co., Ltd. Method, apparatus and system for multimedia model retrieval
US20110317898A1 (en) * 2010-06-29 2011-12-29 Lin Shi Registration of 3D tomography images
US20120051654A1 (en) * 2010-08-24 2012-03-01 Olympus Corporation Image processing apparatus, image processing method, and computer-readable recording medium
US20120051612A1 (en) * 2010-08-24 2012-03-01 Olympus Corporation Image processing apparatus, image processing method, and computer-readable recording medium
US20120257796A1 (en) * 2010-12-31 2012-10-11 Henderson Jonathan 3d object delineation
DE102012200207B3 (en) * 2012-01-09 2013-05-29 Siemens Aktiengesellschaft Method for determining guide line for biopsy needle for executing stereotactic biopsy acquisition at lesion in breast of patient to remove lesion tissue, involves determining guide line position in coordination system associated to breast
WO2016057960A1 (en) * 2014-10-10 2016-04-14 Radish Medical Solutions, Inc. Apparatus, system and method for cloud based diagnostics and image archiving and retrieval
WO2017021919A1 (en) 2015-08-06 2017-02-09 Tel Hashomer Medical Research, Infrastructure And Services Ltd. Mamography apparatus
US9639929B2 (en) 2013-10-24 2017-05-02 Samsung Electronics Co., Ltd. Apparatus and method for computer-aided diagnosis
US10037601B1 (en) 2017-02-02 2018-07-31 International Business Machines Corporation Systems and methods for automatic detection of architectural distortion in two dimensional mammographic images

Families Citing this family (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7646902B2 (en) * 2005-02-08 2010-01-12 Regents Of The University Of Michigan Computerized detection of breast cancer on digital tomosynthesis mammograms
US10492749B2 (en) * 2005-05-03 2019-12-03 The Regents Of The University Of California Biopsy systems for breast computed tomography
FR2897182A1 (en) * 2006-02-09 2007-08-10 Gen Electric METHOD FOR PROCESSING TOMOSYNTHESIS PROJECTION IMAGES FOR DETECTION OF RADIOLOGICAL SIGNS
FR2897461A1 (en) * 2006-02-16 2007-08-17 Gen Electric X-RAY DEVICE AND IMAGE PROCESSING METHOD
US8073252B2 (en) * 2006-06-09 2011-12-06 Siemens Corporation Sparse volume segmentation for 3D scans
US7876937B2 (en) * 2006-09-15 2011-01-25 Carestream Health, Inc. Localization of nodules in a radiographic image
US10121243B2 (en) * 2006-09-22 2018-11-06 Koninklijke Philips N.V. Advanced computer-aided diagnosis of lung nodules
US8483462B2 (en) * 2006-11-03 2013-07-09 Siemens Medical Solutions Usa, Inc. Object centric data reformation with application to rib visualization
FR2909207B1 (en) * 2006-11-24 2009-01-30 Gen Electric METHOD FOR THREE - DIMENSIONAL VISUALIZATION OF TOMOSYNTHESIS IMAGES IN MAMMOGRAPHY.
US20080144935A1 (en) * 2006-11-30 2008-06-19 Chav Ramnada Method for segmentation of digital images
US7844087B2 (en) * 2006-12-19 2010-11-30 Carestream Health, Inc. Method for segmentation of lesions
US8224065B2 (en) * 2007-01-09 2012-07-17 Purdue Research Foundation Reconstruction of shapes of objects from images
US8194920B2 (en) * 2007-02-16 2012-06-05 Ford Global Technologies, Llc Method and system for detecting objects using far infrared images
JP4493679B2 (en) * 2007-03-29 2010-06-30 富士フイルム株式会社 Target region extraction method, apparatus, and program
BRPI0721562A2 (en) * 2007-04-20 2013-01-22 Softkinetic S A Volume Recognition Method and System
US8208700B2 (en) * 2007-05-15 2012-06-26 Three Palm Software Mass spicules detection and tracing from digital mammograms
JP5226974B2 (en) * 2007-06-19 2013-07-03 富士フイルム株式会社 Image diagnosis support apparatus, method and program
JP2009061266A (en) * 2007-08-09 2009-03-26 Toshiba Medical Systems Corp Image diagnosis support system, medical image management apparatus, image diagnosis support processing apparatus, and image diagnosis support method
US20090136111A1 (en) * 2007-11-25 2009-05-28 General Electric Company System and method of diagnosing a medical condition
DE102007058606A1 (en) * 2007-12-04 2009-06-10 Codewrights Gmbh Method for integrating device objects into an object-based management system for field devices in automation technology
EP2131325B1 (en) * 2008-05-08 2013-01-30 Agfa Healthcare Method for mass candidate detection and segmentation in digital mammograms
US20100014738A1 (en) * 2008-07-21 2010-01-21 Tomtec Imaging Systems Gmbh Method and system for breast cancer screening
US8331641B2 (en) * 2008-11-03 2012-12-11 Siemens Medical Solutions Usa, Inc. System and method for automatically classifying regions-of-interest
US8184890B2 (en) * 2008-12-26 2012-05-22 Three Palm Software Computer-aided diagnosis and visualization of tomosynthesis mammography data
US7961910B2 (en) 2009-10-07 2011-06-14 Microsoft Corporation Systems and methods for tracking a model
US8963829B2 (en) * 2009-10-07 2015-02-24 Microsoft Corporation Methods and systems for determining and tracking extremities of a target
US8867820B2 (en) 2009-10-07 2014-10-21 Microsoft Corporation Systems and methods for removing a background of an image
US8564534B2 (en) 2009-10-07 2013-10-22 Microsoft Corporation Human tracking system
JP5507962B2 (en) * 2009-11-05 2014-05-28 Canon Inc. Information processing apparatus, control method therefor, and program
US20120014578A1 (en) * 2010-07-19 2012-01-19 Qview Medical, Inc. Computer Aided Detection Of Abnormalities In Volumetric Breast Ultrasound Scans And User Interface
US8582858B2 (en) * 2009-12-17 2013-11-12 The Regents Of The University Of California Method and apparatus for quantitative analysis of breast density morphology based on MRI
JP5340204B2 (en) * 2010-03-01 2013-11-13 Canon Inc. Inference apparatus, control method thereof, and program
US8675933B2 (en) * 2010-04-30 2014-03-18 Vucomp, Inc. Breast segmentation in radiographic images
JP5927180B2 (en) 2010-04-30 2016-06-01 Vucomp, Inc. Image data processing method, system, and program for identifying image variants
WO2012006318A1 (en) * 2010-07-07 2012-01-12 Vucomp, Inc. Marking system for computer-aided detection of breast abnormalities
US9208556B2 (en) * 2010-11-26 2015-12-08 Quantitative Insights, Inc. Method, system, software and medium for advanced intelligent image analysis and display of medical images and information
US9721338B2 (en) * 2011-01-11 2017-08-01 Rutgers, The State University Of New Jersey Method and apparatus for segmentation and registration of longitudinal images
US8917930B2 (en) * 2012-05-08 2014-12-23 Hewlett-Packard Development Company, L.P. Selecting metrics for substrate classification
JP5519753B2 (en) * 2012-09-28 2014-06-11 Fujifilm Corporation Tomographic image generating apparatus and method
US20150282782A1 (en) * 2014-04-08 2015-10-08 General Electric Company System and method for detection of lesions
US10282858B2 (en) * 2014-04-18 2019-05-07 The University Of British Columbia Methods and systems for estimating three-dimensional information from two-dimensional concept drawings
US9256939B1 (en) 2014-07-17 2016-02-09 Agfa Healthcare System and method for aligning mammography images
JP6379785B2 (en) * 2014-07-18 2018-08-29 Konica Minolta, Inc. Tomographic image generation system
WO2016064921A1 (en) * 2014-10-20 2016-04-28 MedSight Tech Corp. Automatic detection of regions of interest in 3d space
JP6486100B2 (en) * 2014-12-22 2019-03-20 Canon Inc. Image processing apparatus, image processing method, and program
KR20170007181 (en) 2015-07-10 2017-01-18 3Scan Inc. Spatial multiplexing of histological stains
WO2017060133A1 (en) * 2015-10-05 2017-04-13 Koninklijke Philips N.V. Apparatus for characterization of a feature of a body part
US9918686B2 (en) 2015-11-16 2018-03-20 International Business Machines Corporation Automated fibro-glandular (FG) tissue segmentation in digital mammography using fuzzy logic
US10223792B2 (en) * 2017-02-02 2019-03-05 Elekta Ab (Publ) System and method for detecting brain metastases
GB201705911D0 (en) 2017-04-12 2017-05-24 Kheiron Medical Tech Ltd Abstracts
EP3631806A1 (en) * 2017-06-02 2020-04-08 Koninklijke Philips N.V. Quantified aspects of lesions in medical images
US10380739B2 (en) * 2017-08-15 2019-08-13 International Business Machines Corporation Breast cancer detection
WO2019239155A1 (en) 2018-06-14 2019-12-19 Kheiron Medical Technologies Ltd Second reader suggestion
CN109145944B (en) * 2018-07-11 2021-11-05 Harbin Engineering University Classification method based on longitudinal three-dimensional image deep learning features
CN109363699B (en) * 2018-10-16 2022-07-12 Hangzhou Yitu Medical Technology Co., Ltd. Method and device for identifying a lesion in a breast image
US10957043B2 (en) * 2019-02-28 2021-03-23 Endosoftllc AI systems for detecting and sizing lesions
EP3973506A4 (en) * 2019-06-17 2022-08-03 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for image processing
TWI707663B (en) * 2019-07-19 2020-10-21 Institute for Information Industry Multi-view mammogram analysis method, multi-view mammogram analysis system, and non-transitory computer-readable medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6909797B2 (en) * 1996-07-10 2005-06-21 R2 Technology, Inc. Density nodule detection in 3-D digital images
US20040094167A1 (en) * 2000-03-17 2004-05-20 Brady John Michael Three-dimensional reconstructions of a breast from two x-ray mammographics
US6463181B2 (en) * 2000-12-22 2002-10-08 The United States Of America As Represented By The Secretary Of The Navy Method for optimizing visual display of enhanced digital images
US6748047B2 (en) * 2002-05-15 2004-06-08 General Electric Company Scatter correction method for non-stationary X-ray acquisitions
US20040052328A1 (en) * 2002-09-13 2004-03-18 Sabol John M. Computer assisted analysis of tomographic mammography data
US6748044B2 (en) * 2002-09-13 2004-06-08 GE Medical Systems Global Technology Company, LLC Computer assisted analysis of tomographic mammography data
US20050113681A1 (en) * 2002-11-27 2005-05-26 Defreitas Kenneth F. X-ray mammography with tomosynthesis
US20050049497A1 (en) * 2003-06-25 2005-03-03 Sriram Krishnan Systems and methods for automated diagnosis and decision support for breast imaging
US7298881B2 (en) * 2004-02-13 2007-11-20 University Of Chicago Method, system, and computer software product for feature-based correlation of lesions from multiple images
US7646902B2 (en) * 2005-02-08 2010-01-12 Regents Of The University Of Michigan Computerized detection of breast cancer on digital tomosynthesis mammograms

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080155468A1 (en) * 2006-12-21 2008-06-26 Sectra Ab CAD-based navigation of views of medical image data stacks or volumes
US8051386B2 (en) * 2006-12-21 2011-11-01 Sectra Ab CAD-based navigation of views of medical image data stacks or volumes
US20090070329A1 (en) * 2007-09-06 2009-03-12 Huawei Technologies Co., Ltd. Method, apparatus and system for multimedia model retrieval
US8082263B2 (en) * 2007-09-06 2011-12-20 Huawei Technologies Co., Ltd. Method, apparatus and system for multimedia model retrieval
US20110317898A1 (en) * 2010-06-29 2011-12-29 Lin Shi Registration of 3D tomography images
US8634626B2 (en) * 2010-06-29 2014-01-21 The Chinese University Of Hong Kong Registration of 3D tomography images
US8620042B2 (en) * 2010-08-24 2013-12-31 Olympus Corporation Image processing apparatus, image processing method, and computer-readable recording medium
US20120051612A1 (en) * 2010-08-24 2012-03-01 Olympus Corporation Image processing apparatus, image processing method, and computer-readable recording medium
US20120051654A1 (en) * 2010-08-24 2012-03-01 Olympus Corporation Image processing apparatus, image processing method, and computer-readable recording medium
US8620043B2 (en) * 2010-08-24 2013-12-31 Olympus Corporation Image processing apparatus, image processing method, and computer-readable recording medium
US8908926B2 (en) * 2010-12-31 2014-12-09 Foster Findlay Associates Limited Method of 3D object delineation from 3D seismic data
US20140044316A1 (en) * 2010-12-31 2014-02-13 Foster Findlay Associates Limited 3D object delineation
US20120257796A1 (en) * 2010-12-31 2012-10-11 Henderson Jonathan 3D object delineation
DE102012200207B3 (en) * 2012-01-09 2013-05-29 Siemens Aktiengesellschaft Method for determining a guide line for a biopsy needle used to perform a stereotactic biopsy of a lesion in a patient's breast and remove lesion tissue, involving determining the guide line position in a coordinate system associated with the breast
US9639929B2 (en) 2013-10-24 2017-05-02 Samsung Electronics Co., Ltd. Apparatus and method for computer-aided diagnosis
US10147223B2 (en) 2013-10-24 2018-12-04 Samsung Electronics Co., Ltd. Apparatus and method for computer-aided diagnosis
WO2016057960A1 (en) * 2014-10-10 2016-04-14 Radish Medical Solutions, Inc. Apparatus, system and method for cloud based diagnostics and image archiving and retrieval
WO2017021919A1 (en) 2015-08-06 2017-02-09 Tel Hashomer Medical Research, Infrastructure And Services Ltd. Mamography apparatus
EP3319502A4 (en) * 2015-08-06 2018-06-06 Tel HaShomer Medical Research Infrastructure and Services Ltd. Mamography apparatus
US10499866B2 (en) 2015-08-06 2019-12-10 Tel Hashomer Medical Research, Infrastructure And Services Ltd. Mammography apparatus
US10037601B1 (en) 2017-02-02 2018-07-31 International Business Machines Corporation Systems and methods for automatic detection of architectural distortion in two dimensional mammographic images

Also Published As

Publication number Publication date
US20060177125A1 (en) 2006-08-10
US7646902B2 (en) 2010-01-12

Similar Documents

Publication Publication Date Title
US7646902B2 (en) Computerized detection of breast cancer on digital tomosynthesis mammograms
Chan et al. Computer‐aided detection of masses in digital tomosynthesis mammography: Comparison of three approaches
Reiser et al. Computerized mass detection for digital breast tomosynthesis directly from the projection images
Rangayyan et al. A review of computer-aided diagnosis of breast cancer: Toward the detection of subtle signs
US8977019B2 (en) Methods for microcalcification detection of breast cancer on digital tomosynthesis mammograms
US7840046B2 (en) System and method for detection of breast masses and calcifications using the tomosynthesis projection and reconstructed images
US6246782B1 (en) System for automated detection of cancerous masses in mammograms
US7903861B2 (en) Method for classifying breast tissue density using computed image features
US5657362A (en) Automated method and system for computerized detection of masses and parenchymal distortions in medical images
US6760468B1 (en) Method and system for the detection of lung nodule in radiological images using digital image processing and artificial neural network
US6278793B1 (en) Image quality based adaptive optimization of computer aided detection schemes
Sahiner et al. Computer‐aided detection of clustered microcalcifications in digital breast tomosynthesis: a 3D approach
US8340388B2 (en) Systems, computer-readable media, methods, and medical imaging apparatus for the automated detection of suspicious regions of interest in noise normalized X-ray medical imagery
US7974455B2 (en) Method and apparatus for tomosynthesis projection imaging for detection of radiological signs
US20120014578A1 (en) Computer Aided Detection Of Abnormalities In Volumetric Breast Ultrasound Scans And User Interface
US20070052700A1 (en) System and method for 3D CAD using projection images
JP2010504129A (en) Advanced computer-aided diagnosis of pulmonary nodules
Fan et al. Mass detection and segmentation in digital breast tomosynthesis using 3D-mask region-based convolutional neural network: a comparative analysis
Pöhlmann et al. Three-dimensional segmentation of breast masses from digital breast tomosynthesis images
Karahaliou et al. A texture analysis approach for characterizing microcalcifications on mammograms
Salman et al. Breast Cancer Classification as Malignant or Benign based on Texture Features using Multilayer Perceptron
Spyrou et al. “Hippocrates-mst”: a prototype for computer-aided microcalcification analysis and risk assessment for breast cancer
US20230177686A1 (en) Automated multi-class segmentation of digital mammogram
Wang et al. Quantitative study of image features of clustered microcalcifications in for-presentation mammograms
Sample Computer assisted screening of digital mammogram images

Legal Events

Date Code Title Description
AS Assignment

Owner name: NATIONAL INSTITUTES OF HEALTH (NIH), U.S. DEPT. OF HEALTH AND HUMAN SERVICES (DHHS), U.S. GOVERNMENT

Free format text: CONFIRMATORY LICENSE;ASSIGNOR:UNIVERSITY OF MICHIGAN;REEL/FRAME:023665/0583

Effective date: 20091211

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION