US20080002870A1: Automatic detection and monitoring of nodules and shaped targets in image data
 Publication number: US20080002870A1 (application US 11/824,669)
 Authority: US (United States)
 Legal status: Granted
Classifications

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
 G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
 G06K9/62—Methods or arrangements for recognition using electronic means
 G06K9/6201—Matching; Proximity measures
 G06K9/6202—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
 G06K9/6203—Shifting or otherwise transforming the patterns to accommodate for positional errors
 G06K9/6206—Shifting or otherwise transforming the patterns to accommodate for positional errors involving a deformation of the sample or reference pattern; Elastic matching
 G06K9/6209—Shifting or otherwise transforming the patterns to accommodate for positional errors involving a deformation of the sample or reference pattern; Elastic matching based on shape statistics, e.g. active shape models of the pattern to be recognised

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
 G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
 G06K9/00127—Acquiring and recognising microscopic objects, e.g. biological cells and cellular parts
 G06K9/0014—Preprocessing, e.g. image segmentation ; Feature extraction

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
 G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
 G06K9/62—Methods or arrangements for recognition using electronic means
 G06K9/6201—Matching; Proximity measures
 G06K9/6215—Proximity measures, i.e. similarity or distance measures

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T7/00—Image analysis
 G06T7/0002—Inspection of images, e.g. flaw detection
 G06T7/0012—Biomedical image inspection

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T2207/00—Indexing scheme for image analysis or image enhancement
 G06T2207/30—Subject of image; Context of image processing
 G06T2207/30004—Biomedical image processing
 G06T2207/30061—Lung
 G06T2207/30064—Lung nodule
Abstract
A method for detecting a nodule in image data including the steps of segmenting scanning information from an image slice to isolate lung tissue from other structures, resulting in segmented image data; extracting anatomic structures, including any potential nodules, from the segmented image data, resulting in extracted image data; and detecting possible nodules from the extracted image data, based on deformable prototypes of candidates generated by a level set method in combination with a marginal gray level distribution method. Embodiments of the invention also relate to an automatic method for detecting and monitoring a nodule in image data, where the method includes the steps of determining adaptive probability models of visual appearance of small 2D and large 3D nodules to control evolution of deformable models to get accurate segmentation of pulmonary nodules from image data; modeling a first set of nodules in image data with a translation and rotation invariant Markov-Gibbs random field (MGRF) of voxel intensities with pairwise interaction analytically identified from a set of training nodules; modeling a second subsequent set of nodules in image data by estimating a linear combination of discrete Gaussians; and integrating both models to guide the evolution of the deformable model to determine and monitor the boundary of each detected nodule in the image data.
Description
 This application claims priority under 35 U.S.C. §119 from prior provisional application Ser. No. 60/817,797, filed Jun. 30, 2006.
 A field of the invention is image analysis. Embodiments of the invention concern the automatic detection and monitoring of nodules, or other shaped targets, from image data, including image data of the lungs.
 Lung cancer remains the leading cause of cancer mortality. In 1999, there were approximately 170,000 new cases of lung cancer in the U.S.; approximately one in every eighteen women and one in every twelve men develop lung cancer. Early detection of lung tumors (visible on chest film as nodules) may increase the patient's chance of survival, but detecting nodules is a complicated task. Nodules show up as relatively low-contrast white circular objects within the lung fields. The difficulty for computer-aided image data search schemes is distinguishing true nodules from (overlapping) shadows, vessels, and ribs.
 The early-stage detection of lung cancer remains an important goal in medical research. Regular chest radiography and sputum examination programs have proven ineffective in reducing mortality rates. Although screening for lung cancer with chest X-rays can detect early lung cancer, such screening can also produce many false-positive test results, causing needless extra tests.
 At present, low-dose spiral computed tomography (LDCT) is of prime interest for screening (high-risk) groups for early detection of lung cancer and is being studied by various groups, including the National Cancer Institute. LDCT provides chest scans with very high spatial, temporal, and contrast resolution of anatomic structures and is able to gather a complete 3D volume of a human thorax in a single breath-hold. For these reasons, most recent lung cancer screening programs in the United States and Japan have adopted LDCT as the screening modality of choice.
 Automatic screening of image data from LDCT typically involves selecting initial candidate lung abnormalities (nodules). Next, the false candidates, called false positive nodules (FPNs), are partially eliminated while preserving the true positive nodules (TPNs).
 When selecting initial candidates, conformal nodule filtering or unsharp masking can enhance nodules and suppress other structures to separate the candidates from the background by simple thresholding or multiple gray-level thresholding techniques. A series of 3D cylindrical and spherical filters are used to detect small lung nodules from high-resolution CT images. Circular and semicircular nodule candidates can be detected by template matching. However, these spherical, cylindrical, or circular assumptions are not adequate for describing the general geometry of the lesions. This is because their shape can be irregular due to spiculation or to attachments to the pleural surface (i.e., juxtapleural and peripheral) and vessels (i.e., vascularized). Morphological operators may be used to detect lung nodules. The drawbacks to these approaches are the difficulties in detecting lung wall nodules. Also, there are other pattern recognition techniques used in the detection of lung nodules, such as clustering, linear discriminant functions, rule-based classification, Hough transforms, connected component analysis of thresholded CT slices, gray-level distance transforms, and patient-specific a priori models.
 The FPNs may be excluded by feature extraction and classification. Such features as circularity, size, contrast, or local curvature that are extracted by morphological techniques, or artificial neural networks (ANN), can be used as post-classifiers. Also, there are a number of classification techniques used in the final stage of nodule detection systems to reduce the FPNs, such as: rule-based or linear classifiers; template matching; nearest cluster; and Markov random field.
 Monitoring is a largely unresolved problem in lung scans, and similar issues exist in monitoring other nodules or shaped targets. Nodules are at issue, for example, in other types of medical imaging analysis. Shaped-target monitoring also has broader applications.
 The invention provides a fully automatic method for detection and monitoring, over time, of nodules or other shaped targets from image data. Preferred embodiments relate to lung nodule detection, and those are used to illustrate the invention. However, artisans will also recognize that other types of nodules can be detected and monitored with the deformable, analytical template approach of the invention, for example in other medical and non-medical applications. Additionally, the invention has general application to shaped targets having analytical characteristics that can be detected and monitored from other types of image modalities and data.
 Briefly, embodiments of the invention relate to a method for detecting a nodule, or other shaped target, in image data. Preferably, the method includes the steps of segmenting scanning information from an image slice to isolate lung tissue from other structures, resulting in segmented image data; extracting anatomic structures, including any potential nodules, from the segmented image data, resulting in extracted image data; and detecting possible nodules, or other shaped targets, from the extracted image data, based on deformable prototypes of candidates generated by a level set method in combination with a marginal gray level distribution method.
 Embodiments of the invention also relate to an automatic method for detecting and monitoring nodules or other shaped targets in image data, where the method includes the steps of determining adaptive probability models of visual appearance of small 2D and large 3D nodules or shaped targets to control the evolution of deformable models to get accurate segmentation of pulmonary nodules from image data; modeling a first set of nodules or shaped targets in image data with a translation and rotation invariant Markov-Gibbs random field (MGRF) of voxel intensities with pairwise interaction analytically identified from a set of training nodules; modeling a second subsequent set of nodules or shaped targets in image data by estimating a linear combination of discrete Gaussians; and integrating both models to guide the evolution of the deformable model to determine and monitor the boundary of each detected nodule or targeted shape in the image data.
 Preferred embodiments of the present invention are described herein with reference to the drawings wherein:
 FIGS. 1(A)-1(C) show an example of the first two segmentation steps, where FIG. 1(A) shows an initial LDCT slice, FIG. 1(B) shows the separated lung regions, and FIG. 1(C) shows extracted objects in which a nodule candidate is encircled;
 FIG. 2 shows an empirical marginal gray level distribution for the objects of FIG. 1(C), where the gray range is between 98 and 255;
 FIG. 3(A) shows part of the separated 3D lung objects; FIG. 3(B) shows the initialized level set, indicated by darker color; and FIG. 3(C) shows the finally extracted potential nodule candidate;
 FIG. 4 shows the detected nodule candidate, where (a) shows slices of the extracted voxel set U, (b) shows the gray level prototype N, and (c) shows the actual gray levels C, where the correlation Corr_{C,N}=0.886;
 FIGS. 5(A)-5(C) show the estimated and empirical distributions for radial non-uniformity (FIG. 5(A)), mean gray level (FIG. 5(B)), and 10%-tile gray level (FIG. 5(C)), and FIG. 5(D) shows the radii d_{θ} in eight directions θ from the centroid to the boundary that specify the radial non-uniformity max_{θ}(d_{θ})−min_{θ}(d_{θ});
 FIG. 6 shows large candidate nodules (shown in the darker color) detected with the approach of an embodiment of the present invention;
 FIGS. 7(A)-7(C) show small candidate nodules (shown in the darker color) detected with the approach of an embodiment of the present invention, where FIG. 7(A) shows nodules of size 68.9 mm^{3}, FIG. 7(B) shows nodules of size 52.3 mm^{3}, and FIG. 7(C) shows nodules of size 34.8 mm^{3};
 FIGS. 8(A)-8(C) show step 1 of the present segmentation approach, where FIG. 8(A) shows an LDCT slice, FIG. 8(B) shows the slice with isolated lungs, and FIG. 8(C) shows the normalized segmented lung image;
 FIGS. 9(A) and 9(B) show, respectively, the central-symmetric 2D and 3D neighborhoods for the eight distance ranges [d_{ν,min}=ν−0.5, d_{ν,max}=ν+0.5); ν∈N={1, . . . , 8} on the lattice R;
 FIGS. 10(A)-10(D) show 3D segmentation of pleural attached nodules, where the results are projected onto 2D axial (A), coronal (C), and sagittal (S) planes for visualization; FIG. 10(A) is the 2D profile of the original nodule, FIG. 10(B) is the pixelwise Gibbs energies for ν≦11, FIG. 10(C) is the segmentation of the present invention, and FIG. 10(D) is the radiologist's segmentation;
 FIGS. 11(A)-11(D) show 2D segmentation of cavity nodules, where FIG. 11(A) is the 2D profile of the original nodule, FIG. 11(B) is the pixelwise Gibbs energies for ν≦11, FIG. 11(C) is the segmentation of the present invention, and FIG. 11(D) is the radiologist's segmentation; and
 FIG. 12 shows the segmentation of the present invention for five patients.
 Automatic diagnosis of lung nodules for early detection of lung cancer is the goal of a number of screening studies worldwide. With improvements in resolution and scanning time of low-dose chest CT (computerized tomography) scanners, nodule detection and identification is continuously improving. The present invention relates to improvements in automatic detection of lung nodules. More specifically, preferred embodiments of the invention employ a new template for nodule detection using level sets that describe various physical nodules irrespective of shape, size, and distribution of gray levels. The template parameters are estimated automatically from the segmented data, without the need for any a priori learning of the parameters' density function, after performing preliminary steps of: (a) segmenting raw scanning information to isolate lung tissues from the rest of the structures in the lung cavity; and (b) extracting three-dimensional (3D) anatomic structures (such as blood vessels, bronchioles, alveoli, etc., and possible abnormalities) from the already segmented lung tissues. Experiments show quantitatively that this template modeling approach drastically reduces the number of false positives in the nodule detection step (which follows the two preliminary steps above), thereby improving the overall accuracy of computer-aided diagnostic (CAD) systems. The impact of the new template model includes: 1) flexibility with respect to nodule topology, whereby various nodules can be detected simultaneously by the same technique; 2) automatic parameter estimation of the nodule models using the gray level information of the segmented data; and 3) the ability to provide an exhaustive search for all of the possible nodules in the scan, without excessive processing time, thereby providing enhanced accuracy of the CAD system without an increase in the overall diagnosis time.
 Embodiments of the invention relate to the design of a diagnosis system, preferably employing a CAD (computer-aided diagnostic) system that will contribute to the early diagnosis of lung cancer. In one preferred embodiment, the invention uses helical low-dose thin-slice (2 mm-2.5 mm) chest CT scanning (LDCT), which provides very high spatial, temporal, and contrast resolution of anatomic structures. Of course, other scanning methods are also contemplated as being within the scope of the invention. Automated detection of lung nodules in thoracic CT scans is an important clinical challenge, especially because manual analysis by a radiologist is time-consuming and may result in missed nodules.
 Most of the CAD work in lung cancer screening involves two-stage detection of lung nodules, such that initial candidate nodules are first selected and then the false ones, called false positive nodules (FPNs), are partially eliminated while preserving the true positive nodules (TPNs). For example, conformal nodule filtering techniques and unsharp masking techniques both enhance nodules and suppress other structures at the first stage in order to separate the candidates from the background by simple thresholding. To improve the separation, the background trend is corrected within the image regions of interest. A series of three-dimensional (3D) cylindrical and spherical filters are then used to detect small lung nodules from high-resolution CT images. Next, circular nodule candidates are detected by template matching, or some other type of pattern recognition technique, such as fuzzy clustering, linear discriminant functions, rule-based classifications, or patient-specific a priori models. Also, cylindrical vascular models may be used along with spherical nodular models to amplify the template matching.
 The FPNs are excluded at the second stage by feature extraction and classification, whereby such features as circularity, size, contrast, or local curvature are extracted by morphological techniques. In addition, artificial neural networks (ANN) are frequently used as post-classifiers.
 Referring now to FIGS. 1(A) through 1(C), a description will be provided of a preferred embodiment of the present CAD method and system for detecting the nodules in LDCT images. Basically, the system utilizes the following three main steps: Step (1) involves segmenting the raw scanning information from an initial LDCT slice, such as shown in FIG. 1(A), to isolate the lung tissues from the rest of the structures in the chest cavity, resulting in the separated lung regions, such as shown in FIG. 1(B); Step (2) involves extracting the three-dimensional (3D) anatomic structures (such as blood vessels, bronchioles, alveoli, etc., and possible abnormalities, such as lung nodules, if present) from the already segmented lung tissues, resulting in an image such as shown in FIG. 1(C); and Step (3) involves identifying the nodules by isolating the true nodules from other extracted structures. The first two steps considerably reduce the search space by using segmentation algorithms based on representing each CT slice as a sample of a Markov-Gibbs random field of region labels and gray levels. Details of the algorithms are presented in Aly A. Farag, Ayman El-Baz, Georgy L. Gimel'farb, "Precise segmentation of multimodal images," IEEE Transactions on Image Processing, Vol. 15, no. 4, April 2006, pp. 952-968, which is hereby incorporated by reference.
 Next, the details of the third step will be discussed, which step relates to detecting and classifying the nodules from among the extracted 3D structures. Both the nodules and normal objects have almost the same marginal gray level distributions, as can be seen in FIG. 2, which is the empirical marginal gray level distribution graph for all the extracted objects of FIG. 1(C). Therefore, segmentation of the nodules based solely on gray level distribution (such as thresholding) will not be sufficient. In the preferred embodiment, segmentation based on gray level distribution is supplemented by also including geometrical/shape information in the process. This approach includes a 3D deformable nodule prototype combined with a central-symmetric 3D intensity model of the nodules. The model closely approximates an empirical marginal probability distribution of image intensities in real nodules of different sizes, and nodules are analytically identified from the empirical distribution.
 Detecting Lung Nodules with Deformable Prototypes
 The detection step (Step (3)) extracts, by shape and intensity, the nodule candidates from among all of the 3D objects selected at the second segmentation stage (Step (2)), and classifies them.
 Deformable Prototype of a Candidate Nodule
 In one preferred embodiment, to extract the nodule candidates from among the already selected objects, like those in FIG. 1(C), deformable prototypes generated by level sets, which have become a powerful segmentation tool in recent years, can be utilized. The evolving prototype's surface at time instant t is a propagating zero-level front, φ(x, y, z, t)=0, of a certain 4D scalar function φ(x, y, z, t) of 3D Cartesian coordinates (x, y, z) and time t. Changes of φ in continuous time are given by the partial differential equation:

 ∂φ(x, y, z, t)/∂t + F(x, y, z)|∇φ(x, y, z, t)| = 0    (1)

 where F(x, y, z) is a velocity function and ∇=[∂/∂x, ∂/∂y, ∂/∂z]^{T}. The scalar velocity function controlling the front evolution depends on local geometric properties, e.g., a local curvature, k(x, y, z), of the front, as well as on local input data parameters, e.g., a 3D gradient, ∇I(x, y, z), of the segmented 3D image I.
 In practice, a difference relationship replaces Equation (1), and each next value φ(x, y, z, t_{n+1}) relates to the current one φ(x, y, z, t_{n}) at respective time instants t_{n+1} and t_{n} such that t_{n+1}−t_{n}=Δt; n=0, 1, . . . , as follows: φ(x, y, z, t_{n+1})=φ(x, y, z, t_{n})−Δt·F(x, y, z)|∇φ(x, y, z, t_{n})|.
 The velocity function F plays a major role in the propagation process. Among known options for this function, this embodiment utilizes the following: F(x, y, z)=−h(x, y, z)(1+εk(x, y, z)), where h(x, y, z) and ε are a local consistency term and a smoothing factor, respectively. Since the level set for a segmented 3D image I can always be initialized inside an object, an appropriate consistency term to evolve faster towards the object boundary is as follows: h(x, y, z)=(1+|∇I(x, y, z)|)^{−1}. To keep the level set front from propagating through blood vessels to which the nodules may be connected, a low-pass filter is preferably applied after each propagation step n.
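 The discrete front propagation described above can be sketched in a few lines. The following is a simplified 2D illustration, not the patent's implementation: it uses a constant expansion speed in place of the image-dependent consistency term h, drops the curvature term (ε=0), and adopts the signed-distance convention (φ negative inside the front); all names are illustrative.

```python
import numpy as np

def level_set_step(phi, speed, dt):
    """One explicit update: phi_{n+1} = phi_n - dt * speed * |grad phi|,
    the difference relationship that replaces Equation (1)."""
    gy, gx = np.gradient(phi)                  # spatial gradient of phi
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)      # |grad phi|
    return phi - dt * speed * grad_mag

# phi initialized as the signed distance to a small circle inside the object:
# negative inside the front, zero on it, positive outside.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
phi = np.sqrt((xx - 32.0) ** 2 + (yy - 32.0) ** 2) - 5.0

# With this convention, a constant positive speed makes the front expand;
# a real implementation would instead use h = (1 + |grad I|)^-1 near edges.
speed = np.ones((n, n))
for _ in range(20):
    phi = level_set_step(phi, speed, dt=0.5)

# The zero level should now enclose a disk of radius roughly 5 + 20 * 0.5 = 15.
area = float((phi < 0).sum())
print(round((area / np.pi) ** 0.5, 1))
```

 In a full 3D implementation the same update runs on a volume, with the low-pass filtering mentioned above applied to φ after each step to keep the front from leaking through vessels.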
 FIGS. 3(A)-3(C) show an example of the results of extracting a potential nodule candidate with the deformable prototype, where FIG. 3(A) shows part of the separated 3D lung objects, FIG. 3(B) is the initialized level set, indicated by the darker spot toward the lower left corner of the figure, and FIG. 3(C) shows the finally extracted potential nodule candidate (toward the lower left corner of the figure). To check whether the extracted object is really a nodule candidate, measurements are preferably made of the similarity between grayscale patterns in the extracted part of the initial 3D image and the intensity prototype of the nodule of that shape, as described next.
 Similarity Measure for Grayscale Nodule Prototypes
 Analysis of abnormalities in real 3D LDCT slices suggests that gray levels in central cross-sections of a solid-shape 2D or 3D nodule roughly follow a central-symmetric Gaussian spatial pattern such that the large central intensity gradually decreases towards the boundary. Moreover, the marginal gray level distributions for all 3D objects separated from the lung tissues at the second segmentation stage (Step (2)), such as arteries, veins, bronchi, or nodules of different sizes, are very similar to each other. The 3D Gaussian intensity pattern in each grayscale nodule prototype ensures that the marginal gray level distribution closely approximates the empirical one for each real nodule in the LDCT data. The same approach can be followed in the 2D case (which is a special case of the 3D).
 Assume that the prototype is a central-symmetric 3D Gaussian of radius R with the maximum intensity q_{max} in the center, so that the gray level q(r) at any location (x, y, z) at radius r=(x^{2}+y^{2}+z^{2})^{1/2} with respect to the center (0, 0, 0) is given by the obvious relationship:

 q(r)=q_{max}exp(−(r/ρ)^{2}); 0≦r≦R    (2)

 The scatter parameter ρ in Equation (2) specifies how fast the signals decrease towards the boundary of the prototype. The maximum gray level, q_{max}=q(0), and the minimum gray level, q_{min}=q(R), on the boundary of the spherical Gaussian prototype of radius R uniquely determine this parameter as follows:

 ρ=R(ln q_{max}−ln q_{min})^{−1/2}    (3)

 Some earlier algorithms have failed to detect a large number of the true nodules, possibly because of their fixed-size templates and manual specification of their gray level patterns. See, e.g., the algorithm disclosed in Y. Lee, T. Hara, H. Fujita, S. Itoh, and T. Ishigaki, "Automated Detection of Pulmonary Nodules in Helical CT Images Based on an Improved Template-Matching Technique," IEEE Trans. on Medical Imaging, Vol. 20, pp. 595-604, 2001. At times these patterns change from one LDCT slice to another depending on the scanned cross-section and the internal organs that appear in that cross-section. In the present algorithm, these patterns are analytically adjusted to each extracted shape by applying Equation (3) to each CT slice, thereby reducing the number of true positive nodules that fail to be detected.
 Because all of the points of the prototype with a fixed gray value q in the continuous interval [q_{min}, q_{max}] are located at the spherical surface of radius r(q)=ρ(ln q_{max}−ln q)^{1/2}, their density is proportional to the surface area 4πr^{2}(q). Therefore, the marginal probability density function for such a prototype is ψ(q)=γr^{2}(q), where γ is a normalizing factor such that ∫_{q_{min}}^{q_{max}}ψ(q)dq=1. It is easily shown that this function has the following closed form:

 ψ(q|q_{min}, q_{max}) = (ln q_{max}−ln q) / (q_{max}−q_{min}(1+ln q_{max}−ln q_{min}))    (4)

 The gray level parameters q_{max} and q_{min} can be estimated from the empirical marginal distribution for each segmented 3D object. For example, for the objects extracted in
FIG. 1, the following estimations can be made from a review of FIG. 2: q_{max}=255 and q_{min}=98.
 To evaluate similarity, the gray level nodule prototype is centered at the centroid of the volume extracted with the deformable prototype, and then the similarity measure is calculated using normalized cross-correlation (NCC). Details of examples of the algorithms that may be used are presented in A. A. Farag, A. El-Baz, G. Gimel'farb, R. Falk, and S. G. Hushek, "Automatic detection and recognition of lung abnormalities in helical CT images using deformable templates," Lecture Notes in Computer Science, vol. 3217, pp. 131-139, September 2004.
 Lung Nodule Detection Algorithm
 The following algorithm of a preferred embodiment of the invention explains how lung nodules can be detected from a CT scan.
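 Before the step-by-step algorithm, the closed form of Equation (4) can be sanity-checked numerically against its normalization constraint; this short sketch uses the example values q_{max}=255 and q_{min}=98 from FIG. 2, and the function name is illustrative.

```python
import numpy as np

def psi(q, q_min, q_max):
    """Eq. (4): marginal gray level density of the spherical Gaussian prototype,
    psi(q) = (ln q_max - ln q) / (q_max - q_min * (1 + ln q_max - ln q_min))."""
    norm = q_max - q_min * (1.0 + np.log(q_max) - np.log(q_min))
    return (np.log(q_max) - np.log(q)) / norm

q_min, q_max = 98.0, 255.0                 # example values read from FIG. 2
qs = np.linspace(q_min, q_max, 100001)
vals = psi(qs, q_min, q_max)

# Trapezoidal integration over [q_min, q_max]; ~1 by construction of Eq. (4).
integral = float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(qs)))
print(round(integral, 4))   # → 1.0
```

 The density is zero at q_{max} and largest at q_{min}, matching the intuition that most prototype voxels lie in the darker outer shells of the Gaussian sphere.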
 Step S1. Separate the lung regions from a given CT scan using any appropriate segmentation algorithm, such as the one disclosed in Aly A. Farag, Ayman El-Baz, Georgy L. Gimel'farb, "Precise segmentation of multimodal images," IEEE Transactions on Image Processing, Vol. 15, no. 4, April 2006, pp. 952-968, which is hereby incorporated by reference. The main idea of the proposed segmentation algorithm is accurate identification of both the spatial interaction between the lung voxels and the intensity distribution for the voxels in the lung tissues. The present inventors used new techniques for unsupervised segmentation of multimodal grayscale images such that each region-of-interest relates to a single dominant mode of the empirical marginal probability distribution of gray levels. Embodiments of the present invention follow the most conventional description of the initial images and desired maps of regions, such as by using a joint Markov-Gibbs random field (MGRF) model of independent image signals and interdependent region labels, but with a focus on more accurate model identification. To better specify region borders, each empirical distribution of image signals is preferably precisely approximated by a linear combination of Gaussians (LCG) with positive and negative components. The present inventors modified the Expectation-Maximization (EM) algorithm to deal with the LCG and also exploited a novel EM-based sequential technique to get a close initial LCG approximation to start with. The proposed technique identifies individual LCG models in a mixed empirical distribution, including the number of positive and negative Gaussians. Then the initial LCG-based segmentation is iteratively refined using the MGRF with analytically estimated potentials. The analytical estimation is one of the key issues that makes the proposed segmentation accurate and fast and therefore suitable for clinical applications.
 FIG. 1(B) is an example of the resultant image of the separated lung regions.
 Step S2. Separate the arteries, veins, bronchi, bronchioles, and lung nodules (if they exist) from the above lung regions of Step S1 using any appropriate segmentation algorithm, such as the one mentioned in Step S1. FIG. 1(C) is an example of the resultant image including the separated arteries, veins, bronchi, and bronchioles, as well as a lung nodule candidate, which has been encircled.
 Step S3. From the empirical marginal gray level distribution for the objects separated at Step S2, calculate q_{min} and q_{max}. FIG. 2 is an example of such an empirical marginal gray level distribution for all of the extracted objects of FIG. 1(C).
 Step S4. Stack all of the voxels separated in Step S2.
 Step S5. Pop the top voxel from the stack as a seed for the deformable prototype, and let this level set propagate until reaching a steady state, indicating that the voxel set U enclosed by the final prototype constitutes an extracted object.
 Step S6. Calculate the centroid for the voxel set U extracted in Step S5. Next, find the maximum radius, R_{max}, and the minimum radius, R_{min}, from the centroid to the boundary of that voxel set. Then, find the average radius using the equation R=(R_{min}+R_{max})/2. Finally, estimate the scatter parameter ρ from Equation (3).
 Step S7. Use Equation (2) to assign the prototype gray levels N_{x,y,z} for each extracted voxel (x, y, z)∈U.
 Step S8. Use the normalized cross-correlation Corr_{C,N} between the actual extracted object C=[C_{x,y,z}:(x, y, z)∈U] and its gray level nodule prototype N=[N_{x,y,z}:(x, y, z)∈U] as the similarity measure. FIG. 4 shows an example of the detected nodule candidate, where (a) shows the slices of the extracted voxel set U, (b) shows the gray level prototype N, and (c) shows the actual gray levels C. In this example, Corr_{C,N}=0.886.
 Step S9. If Corr_{C,N}≧τ, where τ is a preselected similarity threshold, then classify the extracted object as a potential nodule candidate. By way of example, τ may be selected within the range of 0.80 to 0.90, such as τ=0.85.
 Step S10. Remove all of the voxels of the extracted object from the stack.
 Step S11. If the stack is empty, then stop; otherwise, go to Step S5 and repeat the process from Step S5 onward.
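 The similarity test of Steps S7-S9 amounts to correlating the actual gray levels against the analytically fitted Gaussian prototype and thresholding the result. The sketch below applies that test to a synthetic 3D blob; the voxel grid, noise level, and random seed are illustrative stand-ins, not the patent's data.

```python
import numpy as np

def ncc(c, n_proto):
    """Normalized cross-correlation Corr_{C,N} between the actual gray
    levels C and the gray level prototype N (Step S8)."""
    c = c - c.mean()
    n_proto = n_proto - n_proto.mean()
    return float((c * n_proto).sum()
                 / (np.linalg.norm(c) * np.linalg.norm(n_proto)))

# Synthetic "extracted object": a 3D Gaussian blob shaped like Eq. (2),
# plus additive noise standing in for real CT gray levels.
rng = np.random.default_rng(0)
r = np.sqrt(sum((g - 10.0) ** 2 for g in np.mgrid[0:21, 0:21, 0:21]))
proto = 255.0 * np.exp(-(r / 8.0) ** 2)               # prototype N
actual = proto + rng.normal(0.0, 10.0, proto.shape)   # observed levels C

tau = 0.85   # similarity threshold (Step S9; the patent suggests 0.80-0.90)
score = ncc(actual, proto)
print(score > tau)   # a genuinely Gaussian blob passes the test
```

 An object whose gray levels do not decay centrally, such as a vessel cross-section, would score well below τ under the same test, which is the basis for rejecting non-nodule candidates.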
 To reduce the error rate, the initially selected potential candidates are preferably postclassified to distinguish between the false positive nodules (FPNs) and true positive nodules (TPNs). In one embodiment, such post classification can be performed using the textural and geometric features of each detected nodule, such as using the following three features: (i) the radial nonuniformity of its borders
$\underset{\theta}{\mathrm{max}}\,d\left(\theta\right)-\underset{\theta}{\mathrm{min}}\,d\left(\theta\right)$
(where d(θ) is the distance at the angle θ between the center of the template and the border of the segmented object in FIG. 1(C)); (ii) the mean gray level (q_{ave}) over the 3D or 2D nodular template; and (iii) the 10th percentile gray level of the marginal gray level distribution over the 3D or 2D nodular template. To distinguish between the FPNs and the TPNs, one can use a Bayesian supervised classifier that learns the statistical characteristics of these features from a training set of false and true nodules. All three features [(i)-(iii)] can be used to classify the FPNs within the lung, while only the last two features [(ii) and (iii)] can be applied to the lung wall nodules. A. A. Farag, A. El-Baz, G. Gimel'farb, R. Falk, and S. G. Hushek, "Automatic detection and recognition of lung abnormalities in helical CT images using deformable templates," Proc. MICCAI, Saint-Malo, France, Sep. 26-29, 2004, pp. 856-864, which is hereby incorporated by reference, provides more details of this process.
 As an alternate method of post-classification, the probability distributions of each feature required by the classifier can be accurately estimated with linear combinations of Gaussians (LCGs) using algorithms such as those described in A. A. Farag, A. El-Baz, and G. Gimel'farb, "Precise segmentation of multimodal images," IEEE Transactions on Image Processing, vol. 15, no. 4, pp. 952-968, April 2006, which is hereby incorporated by reference. Briefly, in the LCG method, the LCGs approximate the empirical distributions for a training set of the nodules.
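For illustration, the three post-classification features above reduce to a few numpy calls. This is a hedged sketch that assumes the boundary distances d(θ) and the template gray levels have already been measured:

```python
import numpy as np

def nodule_features(d_theta, template_gray):
    """The three post-classification features for one candidate.

    d_theta       : distances from the center to the border sampled at
                    several angles theta (e.g. eight directions)
    template_gray : gray levels over the 2D/3D nodular template
    """
    radial_nonuniformity = float(np.max(d_theta) - np.min(d_theta))  # feature (i)
    mean_gray = float(np.mean(template_gray))                        # feature (ii)
    p10_gray = float(np.percentile(template_gray, 10))               # feature (iii)
    return radial_nonuniformity, mean_gray, p10_gray
```

The resulting feature triples would then feed the Bayesian classifier trained on known FPNs and TPNs.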
FIGS. 5(A)-5(C) show the empirical and estimated distributions of each feature for both TPNs and FPNs for the candidate shape of FIG. 5(D), for the radii d_{θ} in eight directions from the centroid to the boundary that specify the radial nonuniformity max_{θ}(d_{θ})−min_{θ}(d_{θ}). FIG. 5(A) relates to radial nonuniformity; FIG. 5(B) relates to the mean gray level; and FIG. 5(C) relates to the 10th percentile gray level. It is also contemplated that other post-classification methods could be utilized.
 Experimental Results and Conclusions of Previously Described Embodiments
 The lung and nodule detection algorithm described above has been tested on the same LDCT scans of fifty screened subjects. Among them, sixteen subjects had abnormalities in their CT scans and thirty-four subjects were normal (this classification was validated by two radiologists). The chest CT data used in this work were obtained from Mansoura University, Urology and Nephrology Center, Radiology Department, Mansoura, Egypt, using the following: (1) scanner: multidetector scanner (Light Speed Plus; GE); (2) scan mode: helical; (3) slice thickness: 2.5 mm; (4) field of view: large; (5) kV: 120; (6) mA: 200; (7) exposure time: 20-25 sec.; and (8) window level: −500 and length: 1500. Of course, the parameters of these tests are only examples, and other types of scanners set with other parameters could be used.
 The approach using the lung nodule detection algorithm described above extracted one hundred and thirteen (113) of the true one hundred and nineteen (119) nodules as potential nodule candidates, along with fourteen (14) FPNs. Post-classification reduced the number of FPNs to four (4), but also simultaneously rejected two (2) true nodules (TPNs). Thus, the final detection rate of the TPNs was 93.3% (111 out of 119) with an FPN rate of 3.36%, which is a vast improvement over the final TPN detection rate of 82.3% and FPN rate of 9.2% from the earlier algorithm disclosed in G. Gimel'farb, A. A. Farag, and A. El-Baz, "Expectation-Maximization for a Linear Combination of Gaussians," in Proc. IAPR Int. Conf. Pattern Recognition (ICPR 2004), Cambridge, UK, Aug. 23-26, 2004, vol. 2, pp. 422-425.
FIG. 6 shows examples of several large lung nodules (shown darkened) detected with the lung nodule detection algorithm described above, and FIG. 7 shows examples of a few small detected TPNs (shown darkened).
 Our experiments show that the new deformable level-set prototype with the analytically modeled standard intensity pattern detects more than 90% of the true lung abnormalities. In our experiments, the overall processing time for a data set of size 1005×804×186 was six minutes, which is a vast improvement over previous algorithms that could take between fifteen and twenty minutes of processing time on the same type of computer.
 Variation of Second Segmentation Stage
 Next, a discussion of a variation on the second segmentation stage, Step (2), will be provided. In this variation, the focus is on accurate segmentation of the detected nodules for subsequent volumetric measurements to monitor how the nodules change over time.
 In this variation, the first portion of the process of separating the nodules from their background can be the same as described above for the other method: segmenting an initial LDCT slice with the algorithms described above to isolate lung tissues from the surrounding structures in the chest cavity, such as shown in FIG. 8(A). However, in the second portion of the process of separating the nodules from their background, which is different from that described above, the nodules in the isolated lung regions are segmented by evolving deformable boundaries under forces that depend on the learned current and prior appearance models.
 As in the other method described previously, in Step (1), each LDCT slice is modeled as a bimodal sample from a simple Markov-Gibbs random field (MGRF) of interdependent region labels and conditionally independent voxel intensities (gray levels). This step provides for more accurate separation of nodules from the lung tissues at Step (2), because voxels of both the nodules and other chest structures around the lungs are normally of quite similar intensity. Then, in Step (2), this variation uses deformable boundary models whose evolution is controlled by both: (i) a learned prior probability model of the visual nodule appearance, such as an MGRF-based prior appearance model; and (ii) an adaptive appearance model of the nodules in a current image to be segmented, such as an LCDG-based (linear combination of discrete Gaussians) appearance model.
 In this portion of the specification, the following basic notation is utilized. Let (x, y, z) denote Cartesian coordinates of points in a finite arithmetic lattice R=[(x, y, z): x=0, . . . , X−1; y=0, . . . , Y−1; z=0, . . . , Z−1]. This lattice supports a given 3D grayscale image g=[g_{x,y,z}: (x, y, z)εR; g_{x,y,z}εQ] with gray levels from a finite set Q={0, . . . , Q−1} and its region map m=[m_{x,y,z}: (x, y, z)εR; m_{x,y,z}εL] with region labels from a finite set L={nd, bg}. Each label m_{x,y,z} indicates whether the voxel (x, y, z) in the corresponding data set g belongs to the goal object (pulmonary nodule), m_{x,y,z}=nd, or to the background, m_{x,y,z}=bg. Let b=[P_{k}: k=1, . . . , K] be a deformable piecewise-linear boundary with K control points P_{k}=(x_{k}, y_{k}, z_{k}). The index k can be considered as a real number in the interval K indicating continuous positions around the boundary, such as K=[1, K] for the positions from P_{1} to P_{K}.
 The conventional deformable model moves in the direction that minimizes a boundary energy E:
E=E_{int}+E_{ext}=∫_{kεK}(ξ_{int}(b(P_{k}))+ξ_{ext}(b(P_{k})))dk (5)
where ξ_{int }(b(P_{k})) and ξ_{ext}(b(P_{k})) are internal and external forces, respectively. As described below, this variation includes a new class of the external energy that guided the evolution of deformable model based on two new probability models that roughly describe the prior and current visual appearance of the nodules.
Data Normalization
 To account for monotone (order-preserving) changes of signals (e.g., due to different illumination or sensor characteristics), for each segmented data set we calculate the occurrence histogram and then normalize the data set so that q_{max}=255, such as shown in the example of FIG. 8(C).
 MGRF-Based Prior Appearance Model
 To exclude an alignment stage before segmentation, the appearance of both small 2D and large 3D nodules (or other shaped targets) is modeled with a translation- and rotation-invariant generic MGRF (Markov-Gibbs random field) with voxelwise and central-symmetric pairwise voxel interaction, specified by a set N of characteristic central-symmetric voxel neighborhoods {n_{ν}: νεN} on R and a corresponding set V of Gibbs potentials, with one potential per neighborhood.
 A central-symmetric voxel neighborhood n_{ν} embraces all voxel pairs such that the (x, y, z)-coordinate offsets between a voxel (x, y, z) and its neighbor (x′, y′, z′) belong to an indexed semi-open interval [d_{ν,min}, d_{ν,max}); νεN⊂{1, 2, 3, . . . } of the inter-voxel distances: d_{ν,min}≦√((x−x′)^{2}+(y−y′)^{2}+(z−z′)^{2})<d_{ν,max}. FIGS. 9(A) and 9(B) illustrate the neighborhoods n_{ν} for the uniform distance ranges [ν−0.5, ν+0.5); νεN={1, . . . , 8}.
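The neighborhoods n_ν can be enumerated directly from the distance intervals. A sketch for the uniform ranges [ν−0.5, ν+0.5) described above (an illustration, not the patent's code):

```python
import math

def neighborhood_offsets(nu):
    """All (dx, dy, dz) offsets whose inter-voxel distance lies in the
    semi-open interval [nu - 0.5, nu + 0.5), i.e. the central-symmetric
    neighborhood n_nu for the uniform distance ranges."""
    lo, hi = nu - 0.5, nu + 0.5
    rmax = int(math.ceil(hi))
    offsets = []
    for dx in range(-rmax, rmax + 1):
        for dy in range(-rmax, rmax + 1):
            for dz in range(-rmax, rmax + 1):
                d = math.sqrt(dx * dx + dy * dy + dz * dz)
                if lo <= d < hi:
                    offsets.append((dx, dy, dz))
    return offsets
```

For ν=1 this yields the 6 axis neighbors plus the 12 in-plane diagonals (distance √2 < 1.5), i.e. 18 offsets grouped in central-symmetric pairs (dx, dy, dz) and (−dx, −dy, −dz).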
 The interactions in each neighborhood n_{ν} have the same Gibbs potential function V_{ν} of gray level co-occurrences in the neighboring voxel pairs, and the voxelwise interaction is given with the potential function V_{vox} of gray levels in the voxels:
V=[V_{vox}=[V_{vox}(q): qεQ]; {V_{ν}=[V_{ν}(q, q′): (q, q′)εQ^{2}]: νεN}]
Model Identification
 Let R_{t}={(x, y, z): (x, y, z)εR and m_{t;x,y,z}=nd} and C_{ν,t} denote the part of the 3D lattice R supporting the training nodules in the image-map pair (g_{t}, m_{t})εS and the family of voxel pairs in R_{t}^{2} with the coordinate offsets (ξ, η, γ)εn_{ν}, respectively. Let F_{vox,t} and F_{ν,t} be the joint empirical probability distributions of gray levels and of gray level co-occurrences in the training nodules from the image g_{t}, respectively:
$F_{\mathrm{vox},t}=\left[f_{\mathrm{vox},t}(q)=\frac{|R_{t,q}|}{|R_{t}|};\ \sum_{q\in Q}f_{\mathrm{vox},t}(q)=1\right]\quad\mathrm{and}\quad F_{\nu,t}=\left[f_{\nu,t}(q,q')=\frac{|C_{\nu,t,q,q'}|}{|C_{\nu,t}|};\ \sum_{(q,q')\in Q^{2}}f_{\nu,t}(q,q')=1\right]$  where R_{t,q}={(x, y, z): (x, y, z)εR_{t} and g_{x,y,z}=q} is the subset of voxels supporting the gray level q in the training nodules from the image g_{t}, and C_{ν,t,q,q′} is the subfamily of the voxel pairs c_{ξ,η,γ}(x, y, z)=((x, y, z), (x+ξ, y+η, z+γ))εR_{t}^{2} supporting the gray level co-occurrence (q, q′) in the same nodules, respectively.
 The MGRF model of the tth object is specified by the joint Gibbs probability distribution on the sublattice R_{t}:
$P_{t}=\frac{1}{Z_{t}}\mathrm{exp}\left(|R_{t}|\left(V_{\mathrm{vox}}^{T}F_{\mathrm{vox},t}+\sum_{\nu\in N}\rho_{\nu,t}V_{\nu,t}^{T}F_{\nu,t}\right)\right)\quad(6)$
where ρ_{ν,t}=|C_{ν,t}|/|R_{t}| is the average cardinality of the neighborhood n_{ν} with respect to the sublattice R_{t}.
 To simplify notation, let the areas of the training nodules be similar, so that |R_{t}|≈R_{nd} and |C_{ν,t}|≈C_{ν,nd} for t=1, . . . , T, where R_{nd} and C_{ν,nd} are the average cardinalities over the training set S. Assuming independent samples, the joint probability distribution of gray values for all the training nodules is as follows:
$P_{S}=\frac{1}{Z}\mathrm{exp}\left(T\,R_{\mathrm{nd}}\left(V_{\mathrm{vox}}^{T}F_{\mathrm{vox},\mathrm{nd}}+\sum_{\nu\in N}\rho_{\nu}V_{\nu}^{T}F_{\nu,\mathrm{nd}}\right)\right)$  where ρ_{ν}=C_{ν,nd}/R_{nd}, and the marginal empirical distributions of gray levels F_{vox,nd} and gray level co-occurrences F_{ν,nd} now describe all the nodules from the training set. Zero empirical probabilities, caused by the relatively small volume of training data available to identify the above model, are eliminated if the fractions defining the empirical probabilities in terms of cardinalities of the related sublattices or subfamilies are modified as follows: (<numerator>+ε)/(<denominator>+Sε). With the Bayesian quadratic loss estimate, ε=1 and S=Q for the first-order or S=Q^{2} for the second-order interactions.
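The (count+ε)/(total+Sε) correction above is straightforward to apply when collecting the empirical distributions. A sketch for the first-order (voxelwise) case, where S=Q (illustrative names, not the patent's code):

```python
import numpy as np

def smoothed_marginal(gray, Q, eps=1.0):
    """Empirical gray level distribution F_vox with the Bayesian
    correction (count + eps) / (total + Q*eps), removing zero
    probabilities on small training sets; for second-order
    co-occurrences the same correction uses S = Q**2 instead of Q."""
    counts = np.bincount(np.ravel(gray), minlength=Q).astype(float)
    return (counts + eps) / (counts.sum() + Q * eps)
```

Every gray level receives a strictly positive probability, even if it never occurs in the training nodules.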
 Using an analytical approach similar to that in A. A. Farag, A. El-Baz, and G. Gimel'farb, "Precise Segmentation of Multimodal Images," IEEE Transactions on Image Processing, vol. 15, no. 4, pp. 952-968, April 2006, the potentials are approximated with the scaled centered empirical probabilities:
$V_{\mathrm{vox},\mathrm{nd}}(q)=\lambda\left(f_{\mathrm{vox},\mathrm{nd}}(q)-\frac{1}{Q}\right);\ q\in Q;\qquad V_{\nu,\mathrm{nd}}(q,q')=\lambda\left(f_{\nu,\mathrm{nd}}(q,q')-\frac{1}{Q^{2}}\right);\ (q,q')\in Q^{2};\ \nu\in N\quad(7)$
where the common factor λ is also computed analytically. It can be omitted (λ=1) if only relative potential values are used for computing the relative energies E_{ν,rel} of the central-symmetric pairwise voxel interactions in the training data. These energies, which are equal to the variances of the co-occurrence distributions, $E_{\nu,\mathrm{rel}}=\sum_{(q,q')\in Q^{2}}f_{\nu,\mathrm{nd}}\left(q,q'\right)\left(f_{\nu,\mathrm{nd}}\left(q,q'\right)-\frac{1}{Q^{2}}\right)$, allow for ranking all the central-symmetric neighborhoods n_{ν} and selecting the top-rank, i.e., most characteristic ones N′⊂N to include in the prior appearance model of Equation (7). Under the model, any grayscale pattern within a deformable boundary b in an image g is described by its Gibbs energy
$E(g,b)=V_{\mathrm{vox},\mathrm{nd}}^{T}F_{\mathrm{vox},\mathrm{nd}}(g,b)+\sum_{\nu\in N'}V_{\nu,\mathrm{nd}}^{T}F_{\nu,\mathrm{nd}}(g,b)\quad(8)$  where N′ is the index subset of the selected top-rank neighborhoods, and the empirical probability distributions are collected within the boundary b in g.
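Equation (7) and the relative energies E_{ν,rel} translate directly into array operations. An illustrative sketch over a Q×Q empirical co-occurrence distribution (names are assumptions, not from the patent):

```python
import numpy as np

def cooccurrence_potential(f, lam=1.0):
    """Equation (7): scaled, centered empirical probabilities.
    f is the QxQ empirical co-occurrence distribution f_{nu,nd}(q, q')."""
    Q = f.shape[0]
    return lam * (f - 1.0 / Q ** 2)

def relative_energy(f):
    """E_{nu,rel}: the variance-like energy used to rank the
    central-symmetric neighborhoods and select the top-rank subset N'."""
    Q = f.shape[0]
    return float(np.sum(f * (f - 1.0 / Q ** 2)))
```

Neighborhoods would then be sorted by relative_energy in descending order, and only the most characteristic ones kept in the prior model.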
 LCDG-Based Current Appearance Model
 The visual appearance of nodules in each current data set g to be segmented typically differs from the appearance of the training nodules due to nonlinear intensity variations arising from different data acquisition systems and from changes in patient tissue characteristics, radiation dose, scanner type, and scanning parameters. This is why, in addition to the appearance prior learned from the normalized training nodules, we model the marginal gray level distribution within an evolving boundary b in g with a dynamic mixture of two distributions, for the current nodule candidates and their background, respectively. The mixture is closely approximated with a bimodal linear combination of discrete Gaussians (LCDG) and then partitioned into the nodule and background LCDGs. The approximation is preferably performed with a modified EM-based approach, such as described in A. A. Farag, A. El-Baz, and G. Gimel'farb, "Precise Segmentation of Multimodal Images," IEEE Transactions on Image Processing, vol. 15, no. 4, pp. 952-968, April 2006.
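As a simplified stand-in for the bimodal LCDG (the full model also uses positive and negative subordinate discrete Gaussian components, which are omitted here), a plain two-component EM fit conveys the idea of splitting the gray levels inside the boundary into nodule and background modes:

```python
import numpy as np

def two_component_em(x, iters=50):
    """Fit a two-component Gaussian mixture to the gray levels x inside
    the evolving boundary with plain EM; the two modes play the roles of
    the 'nodule' and 'background' distributions.  This is a simplified
    sketch, not the patent's modified EM-based LCDG algorithm."""
    mu = np.array([x.min(), x.max()], dtype=float)   # spread the initial means
    sig = np.array([x.std() + 1e-6] * 2)
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each mode for each sample
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations
        nk = resp.sum(axis=0)
        w = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return w, mu, sig
```

On a clearly bimodal histogram the two recovered means separate the darker and brighter populations.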
 Boundary Evolution Using Two Appearance Models
 The following external energy term in Equation (5) combines the learned prior and current appearance models to guide an evolving boundary in a way that maximizes the energy within the boundary:
ξ_{ext}(b(P_{k}=(x, y, z)))=−p_{vox,nd}(g_{x,y,z})π_{P}(g_{x,y,z}|S) (9)  where p_{vox,nd}(q) is the marginal probability of the gray level q in the LCDG model for the nodules, arteries, and veins, and π_{P}(q|S) is the prior conditional probability of the gray level q, given the current gray values in the characteristic central-symmetric neighborhoods of P_{k}, for the MGRF prior model:
$\pi_{P}\left(g_{x,y,z}\mid S\right)=\frac{\mathrm{exp}\left(E_{P}\left(g_{x,y,z}\mid S\right)\right)}{\sum_{q\in Q}\mathrm{exp}\left(E_{P}\left(q\mid S\right)\right)}$  where E_{P}(q|S) is the voxelwise Gibbs energy for a gray level q assigned to P and the current fixed gray levels in all neighbors of P in the characteristic neighborhoods n_{ν}; νεN:
$E_{P}\left(q\mid S\right)=V_{\mathrm{vox},\mathrm{nd}}(q)+\sum_{\nu\in N}\sum_{(\xi,\eta,\gamma)\in n_{\nu}}\left(V_{\nu,\mathrm{nd}}\left(g_{x-\xi,y-\eta,z-\gamma},q\right)+V_{\nu,\mathrm{nd}}\left(q,g_{x+\xi,y+\eta,z+\gamma}\right)\right)$  The boundary evolution in each 2D section with the fixed z-coordinate terminates when the total energy E_{r} of the region r⊂R inside the boundary b does not change:
$E_{r}=\sum_{\forall P=(x,y,z)\in r}E_{P}\left(g_{x,y,z}\mid S\right)\quad(10)$  The deformable boundary b evolves in discrete time, τ=0, 1, . . . , T, as follows:
 1. Initialization (τ=0):
 (a) Initialize a boundary inside a nodule. For example, this step may be performed automatically, such as described in A. A. Farag, A. El-Baz, and G. Gimel'farb, "Precise Segmentation of Multimodal Images," IEEE Transactions on Image Processing, vol. 15, no. 4, pp. 952-968, April 2006.
 (b) Using voxels within and outside of the initial boundary, estimate the current “nodule” and “background” LCDGs P_{vox,nd }and P_{vox,bg}.
 2. Evolution (τ←τ+1):
 (a) Calculate the total energy of Equation (10) within the current boundary b_{τ}.
 (b) For each control point P_{k }on the current boundary, indicate the exterior (−) and interior (+) nearest neighbors with respect to the boundary.
 (c) For each (+) point, calculate the total energy of Equation (5) for each new candidate for the current control point.
 (d) Select the minimumenergy new candidate.
 (e) Calculate the total energy of Equation (10) within the boundary that would appear if the current control point were moved to the selected candidate position.
 (f) If the total energy increases, accept this new position of the current control point; otherwise, for each (−) point, calculate the total energy of Equation (5) for each new candidate for the current control point.
 (g) Select the minimumenergy new candidate.
 (h) Calculate the total energy of Equation (10) within the boundary that would appear if the current control point were moved to the selected candidate position.
 (i) If the total energy increases, accept this new position of the current control point.
 (j) Otherwise, do not move the current control point because it is already located on the edge of the desired nodule.
 (k) Mark each voxel visited by the deformable boundary.
 (l) If the current control point moves to the voxel visited earlier, then find the edge formed by the already visited voxels and use the edge points as the new control points of the deformable boundary.
 (m) If the new control points appear, interpolate the whole boundary using cubic splines, and then smooth its control points with a low pass filter.
 (n) If the total energy within the boundary does not change, terminate the process; otherwise return to Step 2b.
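Two pieces of the evolution above can be sketched compactly: the prior conditional probability π_P(q|S), which is a softmax over the voxelwise Gibbs energies of all Q gray levels, and the greedy acceptance rule of steps (2c)-(2f). This is an illustrative simplification with assumed names, not the patent's code:

```python
import numpy as np

def prior_conditional(energies):
    """pi_P(q|S) = exp(E_P(q|S)) / sum over q' of exp(E_P(q'|S)),
    computed over the energies of all Q gray levels."""
    e = np.asarray(energies, dtype=float)
    e = np.exp(e - e.max())          # subtract the max for numerical stability
    return e / e.sum()

def try_move(current_region_energy, boundary_energy, region_energy):
    """One greedy control-point update: pick the candidate position that
    minimizes the boundary energy of Equation (5), then accept the move
    only if the region energy of Equation (10) inside the boundary
    increases; otherwise return None (the point stays on the edge).

    boundary_energy, region_energy: dicts mapping candidate positions to
    the respective energies, computed by the caller."""
    best = min(boundary_energy, key=boundary_energy.get)
    if region_energy[best] > current_region_energy:
        return best
    return None
```

The caller would apply try_move first to the interior (+) candidates and, on rejection, to the exterior (−) candidates, matching steps (2c)-(2j).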
 Experimental Results and Conclusions with Regard to the Variation
 The proposed segmentation algorithm of the variation was tested on a database of clinical multislice 3D chest LDCT scans of twenty-nine patients, with 0.7×0.7×2.5 mm^{3} voxels, that contain 350 nodules. More specifically, the database included 150 solid nodules larger than 5 mm in diameter, 40 small solid nodules of less than 5 mm diameter, 10 cavity nodules, 61 nodules attached to the pleural surface, and 89 largely non-spherical nodules. The diameters of the nodules range from 3 mm to 30 mm.
 FIGS. 10(A)-10(D) illustrate the results of 3D segmentation of pleural attached nodules, shown projected onto axial (A), sagittal (S), and coronal (C) planes for visualization. More specifically, FIG. 10(A) shows the (A), (S), and (C) planes of the original profile; FIG. 10(B) shows the (A), (S), and (C) planes of the pixelwise Gibbs energies for ν≦11; FIG. 10(C) shows the (A), (S), and (C) planes for the variation segmentation of the present invention; and FIG. 10(D) shows the (A), (S), and (C) planes of the radiologist's segmentation.
 The pixelwise Gibbs energies in each cross-section are higher for the nodules than for any other lung voxels, including the attached artery. Therefore, this variation accurately separates the pulmonary nodules from any part of the attached artery. The evolution terminates after fifty iterations because the changes in the total energy become close to zero. The error of this type of segmentation of the present invention with respect to the radiologist "ground truth" is 1.86%.
 One of the main advantages of this variation over other algorithms is its more accurate segmentation of thin cavity nodules, i.e., nodules that appear only in a single slice. For example, FIGS. 11(A)-11(D) are 2D segmentations of such thin cavity nodules, where FIG. 11(A) is a 2D profile of the original nodule; FIG. 11(B) is of the pixelwise Gibbs energies for ν≦11; FIG. 11(C) is of the variation segmentation of the present invention; and FIG. 11(D) is the radiologist's segmentation. Experimental results of FIGS. 11(A)-11(D) show that the error of this variation of the segmentation step of the present invention with respect to the radiologist is 2.1%. It is worth noting that all the existing approaches fail to segment this cavity nodule because it is totally inhomogeneous.

FIG. 12 presents more segmentation results obtained by the variation algorithm of the present invention for five more patients, for each of the axial (A), sagittal (S), and coronal (C) planes. In total, our experiments with the segmentation of the 350 nodules resulted in an error range of between 0.4% and 2.35%, with a mean error of 0.96% and a standard error deviation of 1.1%.
 This type of segmentation of the present invention provides a new method to accurately segment small 2D and large 3D pulmonary nodules in LDCT chest images. With this approach, the evolution of a deformable model is controlled with probability models of the visual appearance of pulmonary nodules. The prior MGRF model is identified from a given training set of nodules. The current appearance model adapts the control to each bimodal image of the lung regions to be segmented. Both of the models are learned using simple analytical and numerical techniques. Experiments with real LDCT chest images confirm the high accuracy of the segmentation method of the present invention with respect to the radiologist's ground truth. Further, the segmentation of this portion of the present invention outperforms other existing approaches for all types of nodules, and in particular for cavity nodules, where other existing approaches fail.
 Thus, it has been shown that methods and/or systems of the invention detect the nodules in LDCT data. In addition, nodules identified in initial data can be monitored in subsequently obtained data from the same subject, permitting, for example, the identification of nodules that have grown. In embodiments of the invention, raw scanning image data is segmented to isolate lung tissues from the rest of the structures in the chest cavity. Three-dimensional (3D) anatomic structures (e.g., blood vessels, bronchioles, alveoli, etc., and possible abnormalities) are then extracted from the segmented lung tissues. Nodules are identified by isolating the true nodules from other extracted structures.
 Embodiments of the invention use a 3D deformable analytically determined nodule template model combined with a centralsymmetric 3D intensity model of the nodules. The template models can be irregularly shaped. Gray levels are made analytical in embodiments of the invention, where past efforts have used ad hoc radius and distribution of gray levels. The model closely approximates an empirical marginal probability distribution of image intensities in the real nodules of different size and is analytically identified from the empirical distribution. The impact of the new template model includes: (1) flexibility with respect to nodule topology—thus various nodules can be detected simultaneously by the same technique; (2) automatic parameter estimation of the nodule models using the gray level information of the segmented data; and (3) the ability to provide exhaustive search for all the possible nodules in the scan without excessive processing time—this provides an enhanced accuracy of the CAD system without increase in the overall diagnosis time.
 In addition, it has also been shown that lung nodules can be monitored over time, despite their potential shape and size changes. Databases of false positives can be created and used to remove false positives in subsequent tests. Unlike prior techniques, the invention does not use a try-and-see approach to test for nodules, but uses an analytical process to search for nodules. An iterative search is conducted that can use templates of various shapes and sizes with analytically determined parameters. Embodiments of the invention account for the spread and shape of nodules that can be traced and changed over time.
 While various embodiments of the present invention have been shown and described, it should be understood that other modifications, substitutions and alternatives may be apparent to one of ordinary skill in the art. Such modifications, substitutions and alternatives can be made without departing from the spirit and scope of the invention, which should be determined from the appended claims.
 Various features of the invention are set forth in the appended claims.
Claims (11)
1. A method for detecting a nodule, or other shaped target, in image data, the method comprising steps of:
segmenting scanning information from an image slice to isolate lung tissue from other structures, resulting in segmented image data;
extracting anatomic structures, including any potential nodules, from the segmented image data, resulting in extracted image data; and
detecting possible nodules, or other shaped targets, from the extracted image data, based on deformable prototypes of candidates generated by a level set method in combination with a marginal gray level distribution method.
2. The method according to claim 1 , wherein said step of detecting possible nodules includes evolving a surface of the deformable prototype using the following velocity function F(x, y, z) during a propagation process:
F(x, y, z)=−h(x, y, z)(1+εk(x, y, z)),
where h(x, y, z) is a local consistency term, k(x, y, z) is a local curvature term, and ε is a smoothing factor.
3. The method according to claim 2 , wherein said local consistency term is represented by:
h(x, y, z)=(1+|∇I(x, y, z)|)^{−1},
where ∇I(x, y, z) is the three-dimensional gradient of said segmented image data.
4. The method according to claim 2 , wherein a lowpass filter is applied after each step of said propagation process to keep the level set from propagating through any blood vessels connected to the possible nodule or other shaped target.
5. The method according to claim 1, wherein said marginal gray level distribution method utilizes a marginal probability density function represented by:
where q(r) is the gray level function, q_{min} is the minimum gray level, and q_{max} is the maximum gray level.
6. The method according to claim 1 , wherein said step of detecting possible nodules, or other shaped targets, includes:
determining minimum gray level, q_{min}, and maximum gray level, q_{max};
propagating a voxel set U of an extracted object until reaching a steady state;
calculating a centroid for the voxel set U;
calculating maximum radius, R_{max}, and minimum radius, R_{min}, from said centroid;
calculating average radius, R, using R=(R_{min}+R_{max})/2;
estimating scatter parameter, ρ, using ρ=R(ln q_{max}−ln q_{min})^{−1/2};
assigning prototype gray levels, N_{x,y,z}, for each extracted voxel (x, y, z)εU of said voxel set U, using q(r)=q_{max} exp(−(r/ρ)^{2}), where r is a radius of said extracted object;
determining similarity between said extracted object C=[C_{x,y,z}: (x, y, z)εU] and gray level nodule prototype N=[N_{x,y,z}: (x, y, z)εU] using a normalized cross-correlation Corr_{C,N} function; and
classifying said extracted object as a nodule candidate when result of crosscorrelation Corr_{C,N }is greater than a predetermined parameter.
7. The method according to claim 6, further comprising a post-classification step using linear combinations of Gaussians (LCGs) for distinguishing between false positive nodules and true positive nodules.
8. The method according to claim 1 , wherein said image data is obtained with a low dose computed tomography scanner.
9. An automatic method for detecting and monitoring a nodule or other shaped target in image data, the method comprising steps of:
determining adaptive probability models of visual appearance of small 2D and large 3D nodules or shaped targets to control evolution of deformable models to get accurate segmentation of pulmonary nodules from image data;
modeling a first set of nodules or shaped targets in image data with a translation and rotation invariant Markov-Gibbs random field of voxel intensities with pairwise interaction analytically identified from a set of training nodules;
modeling a second subsequent set of nodules or shaped targets in image data by estimating a linear combination of discrete Gaussians; and
integrating models to guide the evolution of the deformable model to determine and monitor the boundary of each detected nodule or targeted shape in the image data.
10. The method according to claim 9 , wherein said step of modeling a first set of nodules or shaped targets includes approximating potentials with scaled centered empirical probabilities:
where λ is a common factor computed analytically, Q is a finite set of gray levels, and N is a set of characteristic central-symmetric voxel neighborhoods.
11. The method of claim 10, wherein said step of modeling a first set of nodules or shaped targets includes:
determining energies that are equal to variances of co-occurrence distributions using:
$E_{\nu,\mathrm{rel}}=\sum_{(q,q')\in Q^{2}}f_{\nu,\mathrm{nd}}\left(q,q'\right)\left(f_{\nu,\mathrm{nd}}\left(q,q'\right)-\frac{1}{Q^{2}}\right),$
where E is energy, ƒ_{ν,nd}(q, q′) is the normalized joint co-occurrence frequency, and Q is a finite set of gray levels;
ranking all central-symmetric neighborhoods from highest to lowest; and
selecting the highest ranked central-symmetric neighborhood.
Priority Applications (2)
Application Number  Priority Date  Filing Date  Title 

US81779706P true  20060630  20060630  
US11/824,669 US8073226B2 (en)  20060630  20070702  Automatic detection and monitoring of nodules and shaped targets in image data 
Applications Claiming Priority (1)
Application Number  Priority Date  Filing Date  Title 

US11/824,669 US8073226B2 (en)  20060630  20070702  Automatic detection and monitoring of nodules and shaped targets in image data 
Publications (2)
Publication Number  Publication Date 

US20080002870A1 true US20080002870A1 (en)  20080103 
US8073226B2 US8073226B2 (en)  20111206 
Family
ID=38876701
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

US11/824,669 Active 20300926 US8073226B2 (en)  20060630  20070702  Automatic detection and monitoring of nodules and shaped targets in image data 
Country Status (1)
Country  Link 

US (1)  US8073226B2 (en) 
Cited By (27)
Publication number  Priority date  Publication date  Assignee  Title 

US20080039723A1 (en) *  20060518  20080214  Suri Jasjit S  System and method for 3d biopsy 
US20080095422A1 (en) *  20061018  20080424  Suri Jasjit S  Alignment method for registering medical images 
US20080161687A1 (en) *  20061229  20080703  Suri Jasjit S  Repeat biopsy system 
US20080159606A1 (en) *  20061030  20080703  Suri Jasit S  Object Recognition System for Medical Imaging 
US20080240526A1 (en) *  20070328  20081002  Suri Jasjit S  Object recognition system for medical imaging 
US20090103797A1 (en) *  20071018  20090423  Lin Hong  Method and system for nodule feature extraction using background contextual information in chest x-ray images 
US20090118640A1 (en) *  20071106  20090507  Steven Dean Miller  Biopsy planning and display apparatus 
US7890590B1 (en) *  20070927  20110215  Symantec Corporation  Variable bayesian handicapping to provide adjustable error tolerance level 
US20110075938A1 (en) *  20090925  20110331  Eastman Kodak Company  Identifying image abnormalities using an appearance model 
US20110123095A1 (en) *  20060609  20110526  Siemens Corporate Research, Inc.  Sparse Volume Segmentation for 3D Scans 
US20110158490A1 (en) *  20091231  20110630  Shenzhen Mindray BioMedical Electronics Co., Ltd.  Method and apparatus for extracting and measuring object of interest from an image 
US20110228994A1 (en) *  20081020  20110922  Hitachi Medical Corporation  Medical image processing device and medical image processing method 
US8175350B2 (en)  20070115  20120508  Eigen, Inc.  Method for tissue culture extraction 
US20120201445A1 (en) *  20110208  20120809  University Of Louisville Research Foundation, Inc.  Computer aided diagnostic system incorporating appearance analysis for diagnosing malignant lung nodules 
US20120232390A1 (en) *  20110308  20120913  Samsung Electronics Co., Ltd.  Diagnostic apparatus and method 
US20130034270A1 (en) *  20100629  20130207  Fujifilm Corporation  Method and device for shape extraction, and size measuring device and distance measuring device 
US20130182931A1 (en) *  20111221  20130718  Institute of Automation, Chinese Academy of Sciences  Method for brain tumor segmentation in multiparametric image based on statistical information and multiscale structure information 
US8571277B2 (en)  20071018  20131029  Eigen, Llc  Image interpolation for medical imaging 
WO2014042678A1 (en) *  20120913  20140320  The Regents Of The University Of California  System and method for automated detection of lung nodules in medical images 
CN103886604A (en) *  20140331  20140625  山东科技大学  Parallel image segmentation method based on initial contour prediction model 
US20140355854A1 (en) *  20130531  20141204  Siemens Aktiengesellschaft  Segmentation of a structure 
US20150286786A1 (en) *  20140402  20151008  University Of Louisville Research Foundation, Inc.  Computer aided diagnostic system for classifying kidneys 
US20160155225A1 (en) *  20141130  20160602  Case Western Reserve University  Textural Analysis of Lung Nodules 
US9430827B2 (en)  20130531  20160830  Siemens Aktiengesellschaft  Segmentation of a calcified blood vessel 
US20170039737A1 (en) *  20150806  20170209  Case Western Reserve University  Decision support for disease characterization and treatment response with disease and peri-disease radiomics 
US20170109880A1 (en) *  20151016  20170420  General Electric Company  System and method for blood vessel analysis and quantification in highly multiplexed fluorescence imaging 
US20180070905A1 (en) *  20160914  20180315  University Of Louisville Research Foundation, Inc.  Accurate detection and assessment of radiation induced lung injury based on a computational model and computed tomography imaging 
Families Citing this family (7)
Publication number  Priority date  Publication date  Assignee  Title 

JP4895204B2 (en) *  20070322  20120314  富士フイルム株式会社  Image component separation apparatus, method and program, and the normal image generation apparatus, method, and program 
US9342893B1 (en)  20080425  20160517  Stratovan Corporation  Method and apparatus of performing image segmentation 
WO2009132340A1 (en) *  20080425  20091029  Stratovan Corporation  Analysis of anatomic regions delineated from image data 
US8644578B1 (en) *  20080425  20140204  Stratovan Corporation  Method and apparatus of identifying objects of interest using imaging scans 
KR101090375B1 (en) *  20110314  20111207  동국대학교 산학협력단  Ct image auto analysis method, recordable medium and apparatus for automatically calculating quantitative assessment index of chestwall deformity based on automized initialization 
US9218661B2 (en) *  20110408  20151222  Algotec Systems Ltd.  Image analysis for specific objects 
US8914097B2 (en)  20120130  20141216  The Johns Hopkins University  Automated pneumothorax detection 
Citations (8)
Publication number  Priority date  Publication date  Assignee  Title 

US20030053669A1 (en) *  20010718  20030320  Marconi Medical Systems, Inc.  Magnetic resonance angiography method and apparatus 
US20040086162A1 (en) *  20021031  20040506  University Of Chicago  System and method for computer-aided detection and characterization of diffuse lung disease 
US20050152588A1 (en) *  20031028  20050714  University Of Chicago  Method for virtual endoscopic visualization of the colon by shape-scale signatures, centerlining, and computerized detection of masses 
US20050195185A1 (en) *  20040302  20050908  Slabaugh Gregory G.  Active polyhedron for 3D image segmentation 
US20060013482A1 (en) *  20040623  20060119  Vanderbilt University  System and methods of organ segmentation and applications of same 
US20060056701A1 (en) *  20040302  20060316  Gozde Unal  Joint segmentation and registration of images for object detection 
US20060120591A1 (en) *  20041207  20060608  Pascal Cathier  Shape index weighted voting for detection of objects 
US20090252395A1 (en) *  20020215  20091008  The Regents Of The University Of Michigan  System and Method of Identifying a Potential Lung Nodule 

2007
 20070702 US US11/824,669 patent/US8073226B2/en active Active
Patent Citations (9)
Publication number  Priority date  Publication date  Assignee  Title 

US20030053669A1 (en) *  20010718  20030320  Marconi Medical Systems, Inc.  Magnetic resonance angiography method and apparatus 
US20090252395A1 (en) *  20020215  20091008  The Regents Of The University Of Michigan  System and Method of Identifying a Potential Lung Nodule 
US20040086162A1 (en) *  20021031  20040506  University Of Chicago  System and method for computer-aided detection and characterization of diffuse lung disease 
US20050152588A1 (en) *  20031028  20050714  University Of Chicago  Method for virtual endoscopic visualization of the colon by shape-scale signatures, centerlining, and computerized detection of masses 
US20050195185A1 (en) *  20040302  20050908  Slabaugh Gregory G.  Active polyhedron for 3D image segmentation 
US20060056701A1 (en) *  20040302  20060316  Gozde Unal  Joint segmentation and registration of images for object detection 
US20060013482A1 (en) *  20040623  20060119  Vanderbilt University  System and methods of organ segmentation and applications of same 
US20060120591A1 (en) *  20041207  20060608  Pascal Cathier  Shape index weighted voting for detection of objects 
US7529395B2 (en) *  20041207  20090505  Siemens Medical Solutions Usa, Inc.  Shape index weighted voting for detection of objects 
Cited By (50)
Publication number  Priority date  Publication date  Assignee  Title 

US20080039723A1 (en) *  20060518  20080214  Suri Jasjit S  System and method for 3d biopsy 
US8425418B2 (en)  20060518  20130423  Eigen, Llc  Method of ultrasonic imaging and biopsy of the prostate 
US8073252B2 (en) *  20060609  20111206  Siemens Corporation  Sparse volume segmentation for 3D scans 
US20110123095A1 (en) *  20060609  20110526  Siemens Corporate Research, Inc.  Sparse Volume Segmentation for 3D Scans 
US20080095422A1 (en) *  20061018  20080424  Suri Jasjit S  Alignment method for registering medical images 
US8064664B2 (en)  20061018  20111122  Eigen, Inc.  Alignment method for registering medical images 
US7804989B2 (en)  20061030  20100928  Eigen, Inc.  Object recognition system for medical imaging 
US20080159606A1 (en) *  20061030  20080703  Suri Jasjit S  Object Recognition System for Medical Imaging 
US20080161687A1 (en) *  20061229  20080703  Suri Jasjit S  Repeat biopsy system 
US8175350B2 (en)  20070115  20120508  Eigen, Inc.  Method for tissue culture extraction 
US7856130B2 (en)  20070328  20101221  Eigen, Inc.  Object recognition system for medical imaging 
US20080240526A1 (en) *  20070328  20081002  Suri Jasjit S  Object recognition system for medical imaging 
US7890590B1 (en) *  20070927  20110215  Symantec Corporation  Variable bayesian handicapping to provide adjustable error tolerance level 
US20090103797A1 (en) *  20071018  20090423  Lin Hong  Method and system for nodule feature extraction using background contextual information in chest x-ray images 
US8571277B2 (en)  20071018  20131029  Eigen, Llc  Image interpolation for medical imaging 
US8224057B2 (en)  20071018  20120717  Siemens Aktiengesellschaft  Method and system for nodule feature extraction using background contextual information in chest x-ray images 
US20090118640A1 (en) *  20071106  20090507  Steven Dean Miller  Biopsy planning and display apparatus 
US20120087557A1 (en) *  20071106  20120412  Eigen, Inc.  Biopsy planning and display apparatus 
US7942829B2 (en)  20071106  20110517  Eigen, Inc.  Biopsy planning and display apparatus 
US20110228994A1 (en) *  20081020  20110922  Hitachi Medical Corporation  Medical image processing device and medical image processing method 
US8542896B2 (en) *  20081020  20130924  Hitachi Medical Corporation  Medical image processing device and medical image processing method 
US8831301B2 (en) *  20090925  20140909  Intellectual Ventures Fund 83 Llc  Identifying image abnormalities using an appearance model 
US20110075938A1 (en) *  20090925  20110331  Eastman Kodak Company  Identifying image abnormalities using an appearance model 
US8699766B2 (en) *  20091231  20140415  Shenzhen Mindray BioMedical Electronics Co., Ltd.  Method and apparatus for extracting and measuring object of interest from an image 
US20110158490A1 (en) *  20091231  20110630  Shenzhen Mindray BioMedical Electronics Co., Ltd.  Method and apparatus for extracting and measuring object of interest from an image 
US20130034270A1 (en) *  20100629  20130207  Fujifilm Corporation  Method and device for shape extraction, and size measuring device and distance measuring device 
US8855417B2 (en) *  20100629  20141007  Fujifilm Corporation  Method and device for shape extraction, and size measuring device and distance measuring device 
US20120201445A1 (en) *  20110208  20120809  University Of Louisville Research Foundation, Inc.  Computer aided diagnostic system incorporating appearance analysis for diagnosing malignant lung nodules 
US9014456B2 (en) *  20110208  20150421  University Of Louisville Research Foundation, Inc.  Computer aided diagnostic system incorporating appearance analysis for diagnosing malignant lung nodules 
US20120232390A1 (en) *  20110308  20120913  Samsung Electronics Co., Ltd.  Diagnostic apparatus and method 
US8908948B2 (en) *  20111221  20141209  Institute Of Automation, Chinese Academy Of Sciences  Method for brain tumor segmentation in multiparametric image based on statistical information and multiscale structure information 
US20130182931A1 (en) *  20111221  20130718  Institute of Automation, Chinese Academy of Sciences  Method for brain tumor segmentation in multiparametric image based on statistical information and multiscale structure information 
WO2014042678A1 (en) *  20120913  20140320  The Regents Of The University Of California  System and method for automated detection of lung nodules in medical images 
US9418420B2 (en)  20120913  20160816  The Regents Of The University Of California  System and method for automated detection of lung nodules in medical images 
US20140355854A1 (en) *  20130531  20141204  Siemens Aktiengesellschaft  Segmentation of a structure 
US9406141B2 (en) *  20130531  20160802  Siemens Aktiengesellschaft  Segmentation of a structure 
US9430827B2 (en)  20130531  20160830  Siemens Aktiengesellschaft  Segmentation of a calcified blood vessel 
CN103886604A (en) *  20140331  20140625  山东科技大学  Parallel image segmentation method based on initial contour prediction model 
US9928347B2 (en) *  20140402  20180327  University Of Louisville Research Foundation, Inc.  Computer aided diagnostic system for classifying kidneys 
US20150286786A1 (en) *  20140402  20151008  University Of Louisville Research Foundation, Inc.  Computer aided diagnostic system for classifying kidneys 
US9595103B2 (en) *  20141130  20170314  Case Western Reserve University  Textural analysis of lung nodules 
US20160155225A1 (en) *  20141130  20160602  Case Western Reserve University  Textural Analysis of Lung Nodules 
US20170039737A1 (en) *  20150806  20170209  Case Western Reserve University  Decision support for disease characterization and treatment response with disease and peri-disease radiomics 
US20170035381A1 (en) *  20150806  20170209  Case Western Reserve University  Characterizing disease and treatment response with quantitative vessel tortuosity radiomics 
US10064594B2 (en) *  20150806  20180904  Case Western Reserve University  Characterizing disease and treatment response with quantitative vessel tortuosity radiomics 
US10004471B2 (en) *  20150806  20180626  Case Western Reserve University  Decision support for disease characterization and treatment response with disease and peri-disease radiomics 
US20180214111A1 (en) *  20150806  20180802  Case Western Reserve University  Decision support for disease characterization and treatment response with disease and peri-disease radiomics 
US10019796B2 (en) *  20151016  20180710  General Electric Company  System and method for blood vessel analysis and quantification in highly multiplexed fluorescence imaging 
US20170109880A1 (en) *  20151016  20170420  General Electric Company  System and method for blood vessel analysis and quantification in highly multiplexed fluorescence imaging 
US20180070905A1 (en) *  20160914  20180315  University Of Louisville Research Foundation, Inc.  Accurate detection and assessment of radiation induced lung injury based on a computational model and computed tomography imaging 
Also Published As
Publication number  Publication date 

US8073226B2 (en)  20111206 
Similar Documents
Publication  Publication Date  Title 

Armato et al.  Computerized detection of pulmonary nodules on CT scans  
Rangayyan et al.  Boundary modelling and shape analysis methods for classification of mammographic masses  
Dehmeshki et al.  Segmentation of pulmonary nodules in thoracic CT scans: a region growing approach  
Cheng et al.  Approaches for automated detection and classification of masses in mammograms  
US6898303B2 (en)  Method, system and computer readable medium for the two-dimensional and three-dimensional detection of lesions in computed tomography scans  
US8045770B2 (en)  System and method for three-dimensional image rendering and analysis  
Way et al.  Computer‐aided diagnosis of pulmonary nodules on CT scans: Segmentation and classification using 3D active contours  
CA2186135C (en)  Automated detection of lesions in computed tomography  
Brown et al.  Patientspecific models for lung nodule detection and surveillance in CT images  
US20120219200A1 (en)  System and Method for Three-Dimensional Image Rendering and Analysis  
US7876938B2 (en)  System and method for whole body landmark detection, segmentation and change quantification in digital images  
Silveira et al.  Comparison of segmentation methods for melanoma diagnosis in dermoscopy images  
US20070086640A1 (en)  Method for automated analysis of digital chest radiographs  
US20030223627A1 (en)  Method for computer-aided detection of three-dimensional lesions  
Okada et al.  Robust anisotropic Gaussian fitting for volumetric characterization of pulmonary nodules in multislice CT  
US6549646B1 (en)  Divide-and-conquer method and system for the detection of lung nodule in radiological images  
Dominguez et al.  Detection of masses in mammograms via statistically based enhancement, multilevel-thresholding segmentation, and region selection  
Choi et al.  Automated pulmonary nodule detection based on threedimensional shapebased feature descriptor  
US20020028008A1 (en)  Automatic detection of lung nodules from high resolution CT images  
US6937776B2 (en)  Method, system, and computer program product for computeraided detection of nodules with three dimensional shape enhancement filters  
US20040165767A1 (en)  Three-dimensional pattern recognition method to detect shapes in medical images  
US6694046B2 (en)  Automated computerized scheme for distinction between benign and malignant solitary pulmonary nodules on chest images  
Shang et al.  Vascular active contour for vessel tree segmentation  
Näppi et al.  Feature‐guided analysis for reduction of false positives in CAD of polyps for computed tomographic colonography  
US9256941B2 (en)  Microcalcification detection and classification in radiographic images 
Legal Events
Date  Code  Title  Description 

AS  Assignment 
Owner name: UNIVERSITY OF LOUISVILLE RESEARCH FOUNDATION, INC. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FARAG, ALY A.;ELBAZ, AYMAN;REEL/FRAME:019856/0213 Effective date: 20070726 

FPAY  Fee payment 
Year of fee payment: 4 