US20090123049A1 - Nodule Detection - Google Patents
- Publication number
- US20090123049A1 (application Ser. No. 12/260,651)
- Authority
- United States
- Prior art keywords
- region
- sphericity
- image
- sphericity index
- spherical
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/10—Image enhancement or restoration using non-spatial domain filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/64—Analysis of geometric attributes of convexity or concavity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
- A61B6/032—Transmission computed tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
- G06T2207/30064—Lung nodule
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Definitions
- FIG. 1 is a schematic diagram showing a CT scanner and a remote computer for processing image data from the scanner according to an embodiment of the present invention.
- FIG. 2 is an example computer system according to an embodiment of the present invention.
- FIG. 3 is a flowchart of an algorithm according to an embodiment of the present invention.
- FIG. 4 is a diagram of a nodule showing iso-intensity contours.
- FIGS. 5a-5c and 6a-6c show original images and images with spherical enhancement of two different lung phantoms.
- FIGS. 7a, 7b to 11a, 11b show original images and images with spherical enhancement of five different real scans.
- FIG. 12 illustrates a spherical filter according to an embodiment of the present invention.
- FIG. 13 illustrates the variation in radius of a spherical filter according to an embodiment of the present invention.
- FIGS. 14a and 14b show an original image and a spherically filtered image of a lung phantom.
- FIGS. 15a, 15b to 20a, 20b show original images and spherically filtered images respectively of six different real scans.
- Each embodiment is performed on series of CT image slices obtained from a CT scan of the chest area of a human or animal patient.
- Each slice is a 2-dimensional digital grey-scale image of the x-ray absorption of the scanned area.
- The properties of the slice depend on the CT scanner used; for example, a high-resolution multi-slice CT scanner may produce images with a resolution of 0.5-0.6 mm/pixel in the x and y directions (i.e. in the plane of the slice).
- Each pixel may have 32-bit grayscale resolution.
- The intensity value of each pixel is normally expressed in Hounsfield units (HU).
- Sequential slices may be separated by a constant distance along the z direction (i.e. the scan separation axis); for example, by a distance of 0.75-2.5 mm.
- The scan image is a three-dimensional (3D) grey-scale image, with an overall size depending on the area and number of slices scanned.
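As a concrete illustration, the slice data described above can be assembled into a single 3D array. The helper below is a hypothetical sketch (not from the patent); the default spacing values simply echo the example figures in the text.

```python
import numpy as np

def stack_slices(slices, pixel_mm=0.6, slice_gap_mm=1.25):
    """Stack equally sized 2-D CT slices into one 3-D volume.

    Intensities are in Hounsfield units; the anisotropic voxel spacing
    (x, y, z) is returned alongside the volume so later stages can
    account for it. The spacing defaults echo the example figures above.
    """
    volume = np.stack(slices, axis=-1).astype(np.float32)   # shape (y, x, z)
    spacing_mm = (pixel_mm, pixel_mm, slice_gap_mm)
    return volume, spacing_mm

# Four empty 512x512 slices, 1.25 mm apart along the scan axis
vol, spacing = stack_slices([np.zeros((512, 512)) for _ in range(4)])
```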
- The present invention is not restricted to any specific scanning technique, and is applicable to electron beam computed tomography (EBCT), multi-detector or spiral scans, or any technique that produces as output a 3D image representing, for example, X-ray absorption or density.
- The scan image is created by a computer 110, which receives scan data from a scanner 120 and constructs the scan image.
- The scan image is saved as an electronic file or a series of files which are stored on a storage medium 130, such as a fixed or removable disc.
- The scan image may be processed by the computer 110 to identify and display lung nodules, or it may be transferred to another computer 140 which runs software for processing the image as described below.
- The image processing software may be stored on a carrier, such as a removable disc, or downloaded over a network.
- FIG. 2 illustrates an example computer system 200 , in which the present invention can be implemented as programmable code.
- Various embodiments of the invention are described in terms of this example computer system 200 . After reading this description, it will become apparent to a person skilled in the art how to implement the invention using other computer systems and/or computer architectures.
- The computer system 200 includes one or more processors, such as processor 204.
- Processor 204 can be a special purpose or a general purpose digital signal processor.
- The processor 204 is connected to a communication infrastructure 206 (for example, a bus or network).
- Computer system 200 also includes a main memory 208 , preferably random access memory (RAM), and may also include a secondary memory 210 .
- the secondary memory 210 may include, for example, a hard disk drive 212 and/or a removable storage drive 214 , representing a floppy disk drive, a magnetic tape drive, an optical disk drive, etc.
- the removable storage drive 214 reads from and/or writes to a removable storage unit 218 in a well known manner.
- Removable storage unit 218 represents a floppy disk, magnetic tape, optical disk, etc. which is read by and written to by removable storage drive 214 .
- The removable storage unit 218 includes a computer usable storage medium having stored therein computer software and/or data.
- Secondary memory 210 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 200.
- Such means may include, for example, a removable storage unit 222 and an interface 220 .
- Examples of such means may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, and other removable storage units 222 and interfaces 220 which allow software and data to be transferred from the removable storage unit 222 to computer system 200 .
- Computer system 200 may also include a communication interface 224 .
- Communication interface 224 allows software and data to be transferred between computer system 200 and external devices. Examples of communication interface 224 may include a modem, a network interface (such as an Ethernet card), a communication port, a Personal Computer Memory Card International Association (PCMCIA) slot and card, etc.
- Software and data transferred via communication interface 224 are in the form of signals 228 which may be electronic, electromagnetic, optical, or other signals capable of being received by communication interface 224 . These signals 228 are provided to communication interface 224 via a communication path 226 .
- Communication path 226 carries signals 228 and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, a radio frequency link, or any other suitable communication channel. For instance, the communication path 226 may be implemented using a combination of channels.
- The terms "computer program medium" and "computer usable medium" are used generally to refer to media such as removable storage drive 214, a hard disk installed in hard disk drive 212, and signals 228. These computer program products are means for providing software to computer system 200.
- Computer programs are stored in main memory 208 and/or secondary memory 210 . Computer programs may also be received via communication interface 224 . Such computer programs, when executed, enable the computer system 200 to implement the present invention as discussed herein. Accordingly, such computer programs represent controllers of the computer system 200 . Where the invention is implemented using software, the software may be stored in a computer program product and loaded into computer system 200 using removable storage drive 214 , hard disk drive 212 , or communication interface 224 , to provide some examples.
- An embodiment comprises image-processing software for detecting nodules in a three-dimensional CT scan image of a lung.
- The embodiment uses an algorithm comprising three principal steps. First, a 3D sphericity index (SI) is calculated for each volume element (voxel) within the 3D image; secondly, based on the computed sphericity-index map, a high SI threshold is used to determine a spherical region; then a relaxed SI threshold is applied, and the 3D connectivity of voxels above the relaxed threshold to the spherical region is calculated to determine the extent of the nodule.
- FIG. 4 shows iso-intensity contours of a single slice of a sample nodule N, with intensity expressed in Hounsfield units (HU).
- The background intensity is 20 HU.
- The outer boundary of the nodule is at approximately 90 HU, rising to over 120 HU at the core.
- The core C is spherical (circular in this slice), while the outer boundary B is less spherical.
- A traditional approach is to fit a parametric surface model to the 3D image and then to compute the differential characteristics of the surface in the local coordinate system. Because it is very computationally intensive to explicitly generate an iso-surface, the differential characteristics of the surface in this embodiment are calculated directly from the 3D image without explicitly defining an iso-surface. The main steps are described below.
- The 3D image I(x,y,z) is convolved with the Gaussian function g(x,y,z) to generate a smoothed digital 3D image h(x,y,z) (step 310):
- \( g(x, y, z) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu_x)^2 + (y-\mu_y)^2 + (z-\mu_z)^2}{2\sigma^2}} \)
- The first partial derivative of h(x,y,z) in the x direction is defined as:
- \( h_x = I(x, y, z) * \frac{\partial g(x, y, z)}{\partial x} \)  (3)
- Similarly for h_y and h_z, the first partial derivatives in the y and z directions respectively, and also the second partial derivatives h_xx, h_yy, h_zz, h_xy, h_xz, h_yz.
- For example, h_xy, the second partial derivative in the x and y directions, is defined as:
- \( h_{xy} = I(x, y, z) * \frac{\partial^2 g(x, y, z)}{\partial x\, \partial y} \)
- Both stages, smoothing and calculating partial derivatives, can be combined into one step: the partial derivatives of the smoothed 3D image can be obtained by convolving the raw 3D image I(x,y,z) with the high-order Gaussian filters (i.e. the partial derivatives of the smoothing function).
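This combined step can be sketched with SciPy, whose `gaussian_filter` accepts a per-axis derivative `order`, so each partial derivative of the smoothed image is obtained by a single convolution with the corresponding Gaussian derivative. The function and axis naming below are illustrative, not the patent's:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smoothed_partials(img, sigma=1.0):
    """First and second partial derivatives of the Gaussian-smoothed image.

    gaussian_filter's per-axis `order` argument convolves the image with
    the corresponding derivative of the Gaussian, so smoothing and
    differentiation happen in a single pass per derivative.
    """
    axes = {"z": 0, "y": 1, "x": 2}   # assumed (z, y, x) array layout
    d = {}
    for n1, a1 in axes.items():
        order = [0, 0, 0]
        order[a1] = 1
        d["h" + n1] = gaussian_filter(img, sigma, order=order)
        for n2, a2 in axes.items():
            if n1 <= n2:              # each mixed derivative once: hxy, hxz, hyz
                order2 = [0, 0, 0]
                order2[a1] += 1
                order2[a2] += 1
                d["h" + n1 + n2] = gaussian_filter(img, sigma, order=order2)
    return d

# Ramp along x: the smoothed first derivative hx should be ~1 in the interior.
img = np.tile(np.arange(32.0), (8, 8, 1))   # shape (8, 8, 32), varies along x
d = smoothed_partials(img)
```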
- The sphericity index SI(p) characterizes the topological shape of the volume in the vicinity of the voxel p, whereas the volumetric curvature represents the magnitude of the effective curvature. Both quantities are based on the two principal curvatures defined above.
- The sphericity index is a function of the difference between a maximum curvature and a minimum curvature of an iso-surface at each point. If the curvature is equal in all directions, the iso-surface is a section of the surface of a sphere and the sphericity index is 1. If the iso-surface is a section of the surface of a cylinder, the sphericity index is 0.75. It is important to exclude cylindrical shapes, as these are normally blood vessels.
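The values quoted above (1 for a sphere, 0.75 for a cylinder) are consistent with a shape index of the form SI = 1/2 + (1/π)·arctan((κ1 + κ2)/(κ1 − κ2)). The patent's exact formula is not reproduced in this text, so treat the sketch below as an illustrative assumption rather than the patented definition:

```python
import math

def sphericity_index(k1, k2):
    """Shape-index-style sphericity from principal curvatures (k1 >= k2).

    atan2 handles k1 == k2 (equal curvature in every direction), where the
    index is exactly 1; the curvature sign convention is an assumption.
    """
    return 0.5 + math.atan2(k1 + k2, k1 - k2) / math.pi

si_sphere = sphericity_index(2.0, 2.0)     # equal curvatures: sphere-like
si_cylinder = sphericity_index(2.0, 0.0)   # one zero curvature: cylinder-like
```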
- A high threshold (e.g. 0.90) is applied to the sphericity index SI(p) (step 340), so that a set of foreground voxels is obtained for which SI(p) is above the threshold, and the foreground voxels are grouped together into connected regions.
- This grouping may be done by region growing from an ungrouped foreground voxel, iteratively adding neighboring foreground voxels to the group until no ungrouped neighboring foreground voxels remain.
- The process is then repeated from another ungrouped foreground voxel to define another group, until all foreground voxels belong to a group. Neighbors may be added in each of the three spatial dimensions, so that the region grows in three dimensions. The result is one or more highly spherical regions within the image. In the sample nodule N, this highly spherical region might extend only to the core C.
- The high threshold may be fixed by the software, or may be variable by the user, for example within the range 0.8 to 1.0.
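The iterative grouping described above can be sketched as an explicit flood fill. This is a minimal illustration assuming 6-connectivity (face neighbors) and a precomputed boolean foreground mask; all names are hypothetical:

```python
import numpy as np
from collections import deque

def group_foreground(mask):
    """Group foreground voxels of a 3-D boolean mask into connected regions.

    Starts from an ungrouped foreground voxel and iteratively adds face
    neighbors until none remain, then repeats until every foreground voxel
    belongs to a group. Returns a label array (0 = background).
    """
    labels = np.zeros(mask.shape, dtype=int)
    next_label = 0
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue                       # already grouped
        next_label += 1
        labels[start] = next_label
        queue = deque([start])
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in neighbors:
                p = (z + dz, y + dy, x + dx)
                inside = all(0 <= c < s for c, s in zip(p, mask.shape))
                if inside and mask[p] and not labels[p]:
                    labels[p] = next_label
                    queue.append(p)
    return labels

# Two touching foreground voxels and one isolated voxel: two groups expected.
mask = np.zeros((3, 3, 3), dtype=bool)
mask[0, 0, 0] = mask[0, 0, 1] = mask[2, 2, 2] = True
labels = group_foreground(mask)
```

SciPy's `ndimage.label` performs the same grouping in one call; the explicit queue here just makes the iterative neighbor-adding visible.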
- Each of the highly spherical regions is used as an object seed for three-dimensional region growing, using a relaxed sphericity-index threshold (e.g. SI(p) > 0.80).
- The region-growing technique is applied iteratively to the region, so that neighboring voxels above the relaxed sphericity-index threshold are added to the region, new neighbors are then added if they are above the relaxed threshold, and the process continues until there are no new neighbors above the relaxed threshold.
- The result is one or more detected regions including connected areas of lower sphericity. In the example of FIG. 4, the detected region may grow as far as the boundary B.
- The relaxed threshold may be fixed by the software, or may be variable by the user, for example within the range 0.75 to 0.85, but must in any case be lower than the high threshold.
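Putting the two thresholds together, the seeded region growing is equivalent to keeping every relaxed-threshold connected component that contains at least one high-threshold voxel. A compact sketch using SciPy (assuming 6-connectivity, the default for `ndimage.label`; all names are illustrative):

```python
import numpy as np
from scipy import ndimage

def grow_from_seeds(si_map, high=0.90, relaxed=0.80):
    """Boolean mask of extended regions grown from highly spherical seeds.

    Connected components of (SI > relaxed) are kept only if they contain
    at least one voxel with SI > high; this is equivalent to iteratively
    growing each seed region through the relaxed mask.
    """
    labels, _ = ndimage.label(si_map > relaxed)     # 6-connected by default
    seeds = np.unique(labels[(si_map > high) & (labels > 0)])
    return np.isin(labels, seeds)

# Toy SI map: a seed voxel (0.95) with a relaxed neighbor (0.85),
# plus an isolated relaxed voxel that should be rejected.
si = np.zeros((3, 3, 3))
si[1, 1, 1], si[1, 1, 2], si[0, 0, 0] = 0.95, 0.85, 0.85
mask = grow_from_seeds(si)
```

For 26-connectivity (neighbors in all three dimensions including diagonals), pass `structure=ndimage.generate_binary_structure(3, 3)` to `ndimage.label`.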
- The detected regions may be highlighted for display in the original image, or may be displayed without the original image.
- The detected regions may be viewed by the radiologist as an aid to diagnosis, or may be provided as input to further processing steps to calculate physical characteristics and/or to perform automatic diagnosis.
- FIGS. 5 a - 5 c and 6 a - 6 c show the results of the spherical object enhancement on two different phantoms, with a) an original scan image, b) the scan image with the detected regions enhanced, and c) the detected regions without the original image.
- FIGS. 7 a , 7 b to 11 a , 11 b show single slice CT scans with a) the original scan image and b) the scan image with the detected regions enhanced.
- The proposed method has been implemented and tested on both phantom and clinical lung images. It demonstrates high performance in detecting objects such as lung nodules.
- An optional spherical enhancement step may be applied to the detected regions, to enhance lung nodules in a CT lung image by using spherical filtering (step 360 ).
- The spherical filtering process is based on image convolution with a spheroid kernel.
- The filter kernel has two distinct regions: a positively biased spherical inner region whose diameter is the filter size, and a negatively biased outer shell region whose inner diameter is the filter size and whose outer diameter is less than twice the inner diameter, preferably set so that the volumes of the inner and outer shell regions are equal.
- The filter kernel defines a volumetric weighting function such that points within the inner region are positively weighted, while points in the outer region are negatively weighted.
- The positive weight is +1 and the negative weight is −1.
- The volumetric weighting function is then convolved with the scan image data, and the convolution is summed to calculate a convolution strength.
- This means that the convolution strength is the sum of the intensities in the inner region minus the sum of the intensities in the outer region.
- The maximum diameter d of the detected region is set as the initial diameter of the spherical inner region of the filter kernel, and the centre c of the filter kernel is set as the midpoint of the diameter d.
- The outer diameter of the outer shell region is set so that the volumes of the inner and outer regions are the same.
- The radius R1 is then varied stepwise through a range R1 ± δ, where δ is a small difference, such as 20% of R1.
- R2 is varied correspondingly so that the inner and outer regions have the same volume, and the convolution strength is calculated.
- The maximum convolution strength is recorded, and the spherical filter with the corresponding value of R1 is used to enhance the image.
- The image may be convolved with the spherical filter and the convolved image may be output for display.
- Alternatively, the spherical filtering may be applied to the sphericity map rather than to the original image.
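A sketch of the filter described above: +1 inside radius R1, −1 in a shell out to R2, with R2 = 2^(1/3)·R1 so the continuous shell volume equals the inner volume (and R2 is indeed less than twice R1). The helper names and the synthetic test image are illustrative, not from the patent:

```python
import numpy as np
from scipy import ndimage

def spherical_kernel(r1):
    """+1 inside radius r1, -1 in a shell out to r2 = 2**(1/3) * r1.

    The cube-root-of-two ratio equalizes the continuous inner and shell
    volumes; discrete voxel counts only approximate this.
    """
    r2 = 2.0 ** (1.0 / 3.0) * r1
    n = int(np.ceil(r2))
    z, y, x = np.mgrid[-n:n + 1, -n:n + 1, -n:n + 1]
    dist = np.sqrt(x * x + y * y + z * z)
    kernel = np.zeros(dist.shape)
    kernel[dist <= r1] = 1.0
    kernel[(dist > r1) & (dist <= r2)] = -1.0
    return kernel

def best_radius(image, center, r0, steps=9):
    """Vary r1 through r0 +/- 20% and return the radius giving the
    strongest convolution response at the voxel index `center`."""
    responses = []
    for r1 in np.linspace(0.8 * r0, 1.2 * r0, steps):
        resp = ndimage.convolve(image, spherical_kernel(r1), mode="constant")[center]
        responses.append((resp, r1))
    return max(responses)[1]

# Synthetic test image: a bright sphere of radius 2 in an 11^3 volume.
img = np.zeros((11, 11, 11))
zz, yy, xx = np.mgrid[-5:6, -5:6, -5:6]
img[np.sqrt(xx * xx + yy * yy + zz * zz) <= 2] = 100.0
response = ndimage.convolve(img, spherical_kernel(2.0), mode="constant")[5, 5, 5]
```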
- FIGS. 14a and 14b show a) an original and b) a spherically filtered image of a phantom.
- FIGS. 15a, 15b to 20a, 20b show a) original and b) spherically filtered images from actual CT lung scans.
- The spheroid filtering method has been implemented and tested on both phantom and clinical lung images, with good results where the nodules were generally spherical in shape.
Abstract
A method of detecting a nodule in a three-dimensional scan image comprises calculating a three-dimensional sphericity index for each point in the scan image (310-330), applying a high sphericity threshold to the sphericity index (340) to obtain a candidate nodule region, and then performing region-growing (350) from the candidate region using a relaxed sphericity threshold to determine an extended region including less spherical parts connected to the candidate region. Optionally, spherical filtering may be applied to the image by matching the spherical filter to the extended region.
Description
- This application is a continuation application of patent application Ser. No. 10/868,892, filed Jun. 17, 2004, which claims the benefit of the filing date of GB Patent Application No. 0411284.3, filed May 20, 2004, each of which is incorporated herein by reference in its entirety.
- The present invention relates to a method of detecting nodules in computed tomography (CT) images, and particularly but not exclusively for detecting nodules in CT images of the lung. The method may be implemented using a computer, and the invention encompasses software and apparatus for carrying out the method.
- The mortality rate for lung cancer is higher than that for other kinds of cancers around the world. Detection of suspicious lesions in the early stages of cancer can be considered the most effective way to improve survival. Nodule detection is one of the more challenging tasks in medical imaging. Nodules may be difficult to detect on CT scans because of low contrast, small size, or location of the nodule within an area of complicated anatomy.
- Computer-assisted techniques have been proposed to identify regions of interest containing a nodule within a CT scan image, to segment the nodule from surrounding objects such as blood vessels or the lung wall, to calculate physical characteristics of the nodule, and/or to provide an automated diagnosis of the nodule. Fully automated techniques perform all of these steps without intervention by a radiologist, but one or more of these steps may require input from the radiologist, in which case the method may be described as semi-automated.
- Many lung nodules are approximately spherical, and various techniques have been proposed to identify spherical structures within a CT scan image. For example, the Nodule-Enhanced Viewing algorithm from Siemens AG is believed to perform thresholding on a three-dimensional (3D) CT scan to identify voxels having an intensity between predetermined maximum and minimum values. The identified voxels are grouped into connected objects, and objects which are approximately spherical are highlighted.
- US 2003/0099391 discloses a method for automatically segmenting a lung nodule by dynamic programming and expectation maximization (EM), using a deformable circular model to estimate the contour of the nodule in each two-dimensional (2D) slice of the scan image, and fitting a three-dimensional (3D) surface to the contours.
- US 2003/0167001 discloses a method for automatically segmenting a CT image to identify regions of interest and to detect nodules within the regions of interest, in which a sphere is modeled within the region of interest, and points within the sphere are identified as belonging to a nodule, while a morphological test is applied to regions outside the sphere to determine whether they belong to the nodule.
- Although many nodules are approximately spherical, the non-spherical aspects of a nodule may be most important for calculating physical characteristics and for performing diagnosis. A spherical model may be useful to segment nodules from surrounding objects, but if the result is to incorrectly identify the nodule as a sphere and to discard non-spherical portions of the nodule, the characteristics of the nodule may also be incorrectly identified.
- According to an embodiment of the present invention, there is provided a method of detecting a nodule in a three-dimensional scan image, comprising calculating a sphericity index for each point in the scan image relative to surrounding points of similar intensity, applying a high sphericity threshold to the sphericity index to obtain a candidate nodule region, and then performing region-growing from the candidate region using a relaxed sphericity threshold to identify an extended region including less spherical parts connected to the candidate region. The extended region may be provided for display and/or for subsequent processing to calculate physical characteristics and/or to perform automatic diagnosis. In an embodiment, diagnosis may be performed by a radiologist on the basis of the enhanced image.
- The present inventor has realized that even non-spherical nodules generally include an approximately spherical region of a particular density: for example, a dense, spherical core may be surrounded by a slightly less dense, less spherical region that nevertheless forms part of the nodule. If an intensity threshold below the density of the outer region is applied to such a nodule, then only the shape of the outer, non-spherical region will be detected, and the nodule will be rejected as a candidate because it is not sufficiently spherical. If the threshold is set between the density of the inner, spherical region and that of the outer, non-spherical region, then only the inner region will be detected and the outer region will be discarded. In contrast, embodiments of the present invention may allow such non-spherical nodules to be detected in their entirety.
- Preferably, the sphericity index is calculated from the first and second partial derivatives of the smoothed image in each direction at each point, and by calculating principal curvatures at each voxel. Equal curvatures in each direction give a high sphericity index. This method is less computationally intensive than explicitly generating iso-intensity surfaces for the image and then deriving the sphericity index from those iso-intensity surfaces.
- Preferably, the partial derivatives are calculated on a smoothed image. The smoothing may involve the convolution of a smoothing function with the image. The smoothing may be applied at the same time as the partial derivatives are calculated, by convolving the scan image with the partial derivatives of the smoothing function.
- As an additional step, the extended region may be enhanced in the scan image by applying a spherical filter. The spherical filter may be fitted to the extended region by convolving the filter with the image, or a map of the sphericity of the image, and adjusting the filter until a maximum convolution value is achieved. The spherical filter may include a positive weighting in an inner region and a negative weighting in an outer region. The enhanced image may be output for display, and alternatively or additionally be used as input for subsequent processing stages.
-
FIG. 1 is a schematic diagram showing a CT scanner and a remote computer for processing image data from the scanner according to an embodiment of the present invention. -
FIG. 2 is an example computer system according to an embodiment of the present invention. -
FIG. 3 is a flowchart of an algorithm according to an embodiment of the present invention. -
FIG. 4 is a diagram of a nodule showing iso-intensity contours. -
FIGS. 5 a-5 c and 6 a-6 c show original images and images with spherical enhancement of two different lung phantoms. -
FIGS. 7 a, 7 b to 11 a, 11 b show original images and images with spherical enhancement of five different real scans. -
FIG. 12 illustrates a spherical filter according to an embodiment of the present invention. -
FIG. 13 illustrates the variation in radius of a spherical filter according to an embodiment of the present invention. -
FIGS. 14 a and 14 b show an original image and a spherically filtered image of a lung phantom. -
FIGS. 15 a, 15 b to 20 a, 20 b show original images and spherically filtered images respectively of six different real scans. - Each embodiment is performed on a series of CT image slices obtained from a CT scan of the chest area of a human or animal patient. Each slice is a 2-dimensional digital grey-scale image of the x-ray absorption of the scanned area. The properties of the slice depend on the CT scanner used; for example, a high-resolution multi-slice CT scanner may produce images with a resolution of 0.5-0.6 mm/pixel in the x and y directions (i.e. in the plane of the slice). Each pixel may have 32-bit grey-scale resolution. The intensity value of each pixel is normally expressed in Hounsfield units (HU). Sequential slices may be separated by a constant distance along the z direction (i.e. the scan separation axis); for example, by a distance of between 0.75-2.5 mm. Hence, the scan image is a three-dimensional (3D) grey-scale image, with an overall size depending on the area and number of slices scanned.
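As a hypothetical illustration (not part of the patented method), such a stack of slices can be assembled into a single 3-D grey-scale array, with the anisotropic voxel spacing recorded separately; the slice count, dimensions and spacings below are made up:

```python
import numpy as np

# Hypothetical example: stack sequential 2-D CT slices (here, blank
# placeholder arrays) into one 3-D grey-scale volume.
slices = [np.zeros((512, 512), dtype=np.int16) for _ in range(120)]

volume = np.stack(slices, axis=0)   # shape: (n_slices, rows, cols)
spacing_mm = (1.25, 0.55, 0.55)     # (z, y, x) voxel size; z = slice separation
```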
- The present invention is not restricted to any specific scanning technique, and is applicable to electron beam computed tomography (EBCT), multi-detector or spiral scans or any technique that produces as output a 3D image, representing for example X-ray absorption or density.
- As shown in
FIG. 1, the scan image is created by a computer 110 which receives scan data from a scanner 120 and constructs the scan image. The scan image is saved as an electronic file or a series of files which are stored on a storage medium 130, such as a fixed or removable disc. The scan image may be processed by the computer 110 to identify and display lung nodules, or the scan image may be transferred to another computer 140 which runs software for processing the image as described below. The image processing software may be stored on a carrier, such as a removable disc, or downloaded over a network. -
FIG. 2 illustrates an example computer system 200, in which the present invention can be implemented as programmable code. Various embodiments of the invention are described in terms of this example computer system 200. After reading this description, it will become apparent to a person skilled in the art how to implement the invention using other computer systems and/or computer architectures. - The
computer system 200 includes one or more processors, such as processor 204. Processor 204 can be a special purpose or a general purpose digital signal processor. The processor 204 is connected to a communication infrastructure 206 (for example, a bus or network). -
Computer system 200 also includes a main memory 208, preferably random access memory (RAM), and may also include a secondary memory 210. The secondary memory 210 may include, for example, a hard disk drive 212 and/or a removable storage drive 214, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, etc. The removable storage drive 214 reads from and/or writes to a removable storage unit 218 in a well known manner. Removable storage unit 218 represents a floppy disk, magnetic tape, optical disk, etc. which is read by and written to by removable storage drive 214. As will be appreciated, the removable storage unit 218 includes a computer usable storage medium having stored therein computer software and/or data. - In alternative implementations,
secondary memory 210 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 200. Such means may include, for example, a removable storage unit 222 and an interface 220. Examples of such means may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, and other removable storage units 222 and interfaces 220 which allow software and data to be transferred from the removable storage unit 222 to computer system 200. -
Computer system 200 may also include a communication interface 224. Communication interface 224 allows software and data to be transferred between computer system 200 and external devices. Examples of communication interface 224 may include a modem, a network interface (such as an Ethernet card), a communication port, a Personal Computer Memory Card International Association (PCMCIA) slot and card, etc. Software and data transferred via communication interface 224 are in the form of signals 228 which may be electronic, electromagnetic, optical, or other signals capable of being received by communication interface 224. These signals 228 are provided to communication interface 224 via a communication path 226. Communication path 226 carries signals 228 and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, a radio frequency link, or any other suitable communication channel. For instance, the communication path 226 may be implemented using a combination of channels. - In this document, the terms “computer program medium” and “computer usable medium” are used generally to refer to media such as
removable storage drive 214, a hard disk installed in hard disk drive 212, and signals 228. These computer program products are means for providing software to computer system 200. - Computer programs (also called computer control logic) are stored in
main memory 208 and/or secondary memory 210. Computer programs may also be received via communication interface 224. Such computer programs, when executed, enable the computer system 200 to implement the present invention as discussed herein. Accordingly, such computer programs represent controllers of the computer system 200. Where the invention is implemented using software, the software may be stored in a computer program product and loaded into computer system 200 using removable storage drive 214, hard disk drive 212, or communication interface 224, to provide some examples. - Calculation of High Sphericity Areas
- An embodiment comprises image-processing software for detecting nodules in a three-dimensional CT scan image of a lung. The embodiment uses an algorithm comprising three principal steps. First, a 3D sphericity index (SI) is calculated for each volume element (voxel) within the 3D image; secondly, based on the computed sphericity index map, a high SI threshold is used to determine a spherical region; then, a relaxed SI threshold is applied and the 3D connectivity of voxels above the relaxed threshold to the spherical region is calculated to determine the extent of the nodule. The detailed process is described below, with reference to the flowchart of
FIG. 3. - Shape Feature Calculation and Sphericity Index Map Construction
-
FIG. 4 shows iso-intensity contours of a single slice of a sample nodule N, with intensity expressed in Hounsfield units (HU). In this case, the background intensity is 20 HU, while the outer boundary of the nodule is at approximately 90 HU, rising to over 120 HU at the core. Note that the core C is spherical (circular in this slice), while the outer boundary B is less spherical. - To compute the typical surface features, such as the principal curvature, a traditional approach is to fit a parametric surface model to the 3D image and then to compute the differential characteristics of the surface in the local coordinate system. Because it is very computationally intensive to explicitly generate an iso-surface, the differential characteristics of the surface in this embodiment are calculated directly from the 3D image without explicitly defining an iso-surface. The main steps are described below.
- The 3D image I(x,y,z) is convolved with the Gaussian function g(x,y,z) to generate a smoothed digital 3D image (step 310):
-
h(x,y,z) = I(x,y,z)*g(x,y,z)  (1) - where
-
g(x,y,z) = (1/((2π)^(3/2)σ³)) exp(−(x² + y² + z²)/(2σ²))
- and * is a convolution operator, σ being the standard deviation (scale) of the Gaussian.
- Next, we compute the first and second partial derivatives of the smoothed 3D image h(x,y,z) (step 320).
- The first partial derivative of h(x,y,z) in the x direction is defined as:
-
hx(x,y,z) = ∂h(x,y,z)/∂x  (2)
- Based on the properties of the convolution operator, we have:
-
∂(I*g)/∂x = I*(∂g/∂x)
- So, the equation (1) can be rewritten as:
-
hx(x,y,z) = I(x,y,z)*gx(x,y,z)  (3) - where gx = ∂g/∂x.
- Using the same method we can define hy, hz which are the first partial derivatives in the y and z direction, respectively, and also the second partial derivatives hxx, hyy, hzz, hxy, hxz, hyz. For example, hxy which is the second partial derivative in the x and y direction is defined as:
-
hxy(x,y,z) = ∂²h(x,y,z)/∂x∂y = I(x,y,z)*(∂²g(x,y,z)/∂x∂y)  (4)
- Note that, according to the above definitions of the partial derivatives of the smoothed 3D image h(x,y,z) (e.g. equation 3 and equation 4), the smoothing and derivative stages can be combined into one step in the implementation: the partial derivatives of the smoothed 3D image can be obtained by convolving the raw 3D image I(x,y,z) with the corresponding high-order Gaussian filters.
- Next, we compute the shape features using the first and second order partial derivatives (step 330).
- Compute Gaussian (K(p)) and mean (H(p)) curvatures:
-
K(p) = [hx²(hyy·hzz − hyz²) + hy²(hxx·hzz − hxz²) + hz²(hxx·hyy − hxy²) + 2hx·hy(hxz·hyz − hxy·hzz) + 2hy·hz(hxy·hxz − hyz·hxx) + 2hx·hz(hxy·hyz − hxz·hyy)] / (hx² + hy² + hz²)²
-
H(p) = [hx²(hyy + hzz) + hy²(hxx + hzz) + hz²(hxx + hyy) − 2(hx·hy·hxy + hx·hz·hxz + hy·hz·hyz)] / (2(hx² + hy² + hz²)^(3/2))
- where the derivatives are evaluated at the voxel p (these are the standard curvature formulas for an iso-surface of h).
- Principal curvatures (k1(p), k2(p)) at each voxel p:
-
k1(p) = H(p) + √(H(p)² − K(p))
k2(p) = H(p) − √(H(p)² − K(p)) - Sphericity Index:
-
SI(p) = 1/2 − (1/π)·arctan((k1(p) + k2(p))/(k1(p) − k2(p)))
- The sphericity index SI(p) characterizes the topological shape of the volume in the vicinity of the voxel p, whereas the volumetric curvature represents the magnitude of the effective curvature. Both quantities are based on two principal curvatures defined as above. The sphericity index is a function of the difference between a maximum curvature and a minimum curvature of an iso-surface at each point. If the curvature is equal in all directions, the iso-surface is a section of the surface of a sphere and the sphericity index is 1. If the iso-surface is a section of the surface of a cylinder, the sphericity index is 0.75. It is important to exclude cylindrical shapes as these are normally blood vessels.
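Putting the above together in code (a sketch: the inputs are the nine smoothed partial-derivative volumes; the curvature expressions are the standard formulas for an implicit iso-surface, and the index formula used here is the shape-index form consistent with the sphere → 1 and cylinder → 0.75 values stated above for bright objects, since the text does not spell it out):

```python
import numpy as np

def sphericity_index(hx, hy, hz, hxx, hyy, hzz, hxy, hxz, hyz, eps=1e-12):
    """Principal curvatures of the local iso-intensity surface and a
    sphericity index mapping bright spheres to 1 and bright cylinders to 0.75."""
    g2 = hx**2 + hy**2 + hz**2                     # squared gradient magnitude
    # Gaussian (K) and mean (H) curvature of the implicit surface h = const.
    K = (hx**2 * (hyy * hzz - hyz**2)
         + hy**2 * (hxx * hzz - hxz**2)
         + hz**2 * (hxx * hyy - hxy**2)
         + 2 * hx * hy * (hxz * hyz - hxy * hzz)
         + 2 * hy * hz * (hxy * hxz - hyz * hxx)
         + 2 * hx * hz * (hxy * hyz - hxz * hyy)) / np.maximum(g2**2, eps)
    H = (hx**2 * (hyy + hzz) + hy**2 * (hxx + hzz) + hz**2 * (hxx + hyy)
         - 2 * (hx * hy * hxy + hx * hz * hxz + hy * hz * hyz)
         ) / np.maximum(2 * g2**1.5, eps)
    disc = np.sqrt(np.maximum(H**2 - K, 0.0))      # clamp tiny negative values
    k1, k2 = H + disc, H - disc                    # k1 >= k2 by construction
    # Under this sign convention bright blobs have negative curvatures, so the
    # minus sign gives SI = 1 for a bright sphere; arctan2 handles k1 == k2.
    SI = 0.5 - np.arctan2(k1 + k2, k1 - k2) / np.pi
    return k1, k2, SI
```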
- High Sphericity Index Region for Sphere-Like Object Seed
- A high threshold (e.g. 0.90) is applied to the sphericity index SI(p) (step 340), so that a set of foreground voxels is obtained for which SI(p) is above the threshold, and the foreground voxels are grouped together into connected regions. This grouping together may be done by region growing from an ungrouped foreground voxel, so as iteratively to add neighboring foreground voxels to the group until no neighboring foreground voxels exist. The process is then repeated from another ungrouped foreground voxel to define another group, until all foreground voxels belong to a group. Neighbors may be added in each of the three spatial dimensions, so that the region grows in three dimensions. The result is one or more highly spherical regions within the image. In the sample nodule N, this highly spherical region might extend only to the core C.
- The high threshold may be fixed by the software, or may be variable by the user, for example within the range 0.8 to 1.0.
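The high-threshold step and the 3-D grouping can be sketched with SciPy's connected-component labelling (a sketch with a hypothetical helper name; `SI` is assumed to be the sphericity index map as a NumPy volume):

```python
import numpy as np
from scipy.ndimage import label

def spherical_seeds(SI, high_threshold=0.90):
    """Group voxels with SI above the high threshold into 3-D connected
    regions; each labelled region is a candidate sphere-like object seed."""
    foreground = SI > high_threshold
    structure = np.ones((3, 3, 3), dtype=int)   # 26-neighbour connectivity
    labels, n_regions = label(foreground, structure=structure)
    return labels, n_regions
```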
- Region Growing Based on a Relaxed Sphericity Index Threshold
- Each of the highly spherical regions is used as an object seed for three-dimensional region growing. To each object seed, neighboring voxels above a relaxed shape-index threshold (e.g. SI(p)>0.80) are added using a three-dimensional region growing technique (step 350). The region-growing technique is applied iteratively to the region, so that neighboring voxels above the relaxed sphericity index threshold are added to the region and new neighbors are then added if they are above the relaxed threshold, and the process continues until there are no new neighbors above the relaxed threshold. The result is one or more detected regions including connected areas of lower sphericity. In the example of
FIG. 4, the detected region may grow as far as the boundary B. - The relaxed threshold may be fixed by the software, or may be variable by the user, for example within the range 0.75 to 0.85, but must in any case be lower than the high threshold.
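Because the iterative growth adds every relaxed-threshold voxel reachable from a seed, it converges to the same result as hysteresis thresholding: keep each relaxed-threshold connected component that contains at least one high-threshold voxel. A sketch under that observation (hypothetical names; `SI` is the sphericity index volume):

```python
import numpy as np
from scipy.ndimage import label

def grow_from_seeds(SI, high=0.90, relaxed=0.80):
    """Boolean mask of the extended regions: relaxed-threshold connected
    components that contain at least one high-threshold seed voxel."""
    structure = np.ones((3, 3, 3), dtype=int)   # 26-neighbour connectivity
    labels, _ = label(SI > relaxed, structure=structure)
    seed_labels = np.unique(labels[SI > high])  # component ids touching a seed
    seed_labels = seed_labels[seed_labels != 0]
    return np.isin(labels, seed_labels)
```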
- The detected regions may be highlighted for display in the original image, or may be displayed without the original image. The detected regions may be viewed by the radiologist as an aid to diagnosis, or may be provided as input to further processing steps to calculate physical characteristics and/or to perform automatic diagnosis.
- Results
- Lung Phantom Data
-
FIGS. 5 a-5 c and 6 a-6 c show the results of the spherical object enhancement on two different phantoms, with a) an original scan image, b) the scan image with the detected regions enhanced, and c) the detected regions without the original image. - Real Lung Data
-
FIGS. 7 a, 7 b to 11 a, 11 b show single slice CT scans with a) the original scan image and b) the scan image with the detected regions enhanced. - Conclusion
- The proposed method has been implemented and tested on both phantom and clinical lung images. It demonstrates high performance in detecting objects such as lung nodules.
- Spherical Filtering
- An optional spherical enhancement step may be applied to the detected regions, to enhance lung nodules in a CT lung image by using spherical filtering (step 360).
- The spherical filtering process is based on image convolution with a spheroid kernel. The filter kernel has two distinct regions: a positively biased spherical inner region that has a diameter of the filter size, and a negatively biased outer shell region that has an inner diameter that is the filter size and an outer diameter that is less than twice the inner diameter, and is preferably set so that the volumes of the inner and outer shell regions are equal.
- With reference to
FIG. 12, suppose the inner and outer radii are R1 and R2 respectively. The condition for the inner region volume and the outer region volume to be the same is V1=V2, where
-
V1 = (4/3)πR1³ and V2 = (4/3)π(R2³ − R1³)
- so that V1 = V2 gives R2 = 2^(1/3)·R1 ≈ 1.26·R1.
- With reference to
FIG. 13, for each detected region, the maximum diameter d of the detected region is set as the initial diameter of the spherical inner region of the filter kernel, and the centre c of the filter kernel is set as the midpoint of the diameter d. The outer diameter of the outer shell region is set so that the volumes of the inner and outer regions are the same. - The radius R1 is then varied stepwise through a range R1±ε, where ε is a small difference, such as 20% of R1. For each stepwise variation, R2 is varied correspondingly so that the inner and outer regions have the same volume, and the convolution strength is calculated. The maximum convolution strength is recorded, and the spherical filter with the corresponding value of R1 is used to enhance the image. For example, the image may be convolved with the spherical filter and the convolved image may be output for display.
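A sketch of the kernel construction and the radius search (hypothetical helper names; for brevity the convolution strength is evaluated only at the region centre, rather than by convolving over the whole image):

```python
import numpy as np

def spherical_kernel(r1):
    """+1 inside radius r1, -1 in the shell out to r2 = 2**(1/3) * r1,
    chosen so the inner sphere and the outer shell have equal volume."""
    r2 = 2.0 ** (1.0 / 3.0) * r1
    n = int(np.ceil(r2))
    z, y, x = np.mgrid[-n:n + 1, -n:n + 1, -n:n + 1]
    r = np.sqrt(x**2 + y**2 + z**2)
    return np.where(r <= r1, 1.0, np.where(r <= r2, -1.0, 0.0))

def best_inner_radius(image, centre, d, eps_frac=0.2, steps=5):
    """Vary the inner radius around d/2 by +/- eps_frac and return the
    radius (and strength) giving the strongest response at `centre`."""
    cz, cy, cx = centre
    best_r, best_strength = None, -np.inf
    for r1 in np.linspace(0.5 * d * (1 - eps_frac), 0.5 * d * (1 + eps_frac), steps):
        k = spherical_kernel(r1)
        n = k.shape[0] // 2
        patch = image[cz - n:cz + n + 1, cy - n:cy + n + 1, cx - n:cx + n + 1]
        if patch.shape != k.shape:      # kernel would stick out of the volume
            continue
        strength = float(np.sum(k * patch))
        if strength > best_strength:
            best_r, best_strength = r1, strength
    return best_r, best_strength
```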
- In an alternative embodiment, the spherical filtering may be applied to the sphericity map rather than to the original image.
- Experimental Results
- In the following tests the kernel diameters used are:
-
- 5, 7, 9, 11, 13, 15 pixels
- 4.9, 6.3, 7.7, 9.1, 10.5 millimeters
- The maximum convolution results (strength) and the size of the kernel are recorded and saved in the output image.
FIGS. 14 a and 14 b show a) an original and b) spherically filtered image of a phantom, while FIGS. 15 a, 15 b to 20 a, 20 b show a) original and b) spherically filtered images from actual CT lung scans. - Conclusion
- The spheroid filtering method has been implemented and tested on both phantom and clinical lung images, with good results where the nodules were generally spherical in shape.
- The embodiments above are described by way of example, and are not intended to limit the scope of the invention. Various alternatives may be envisaged which nevertheless fall within the scope of the claims.
Claims (30)
1. A method of identifying a nodule in a computed tomography scan image of a lung, comprising:
(a) identifying a region of high sphericity within the image;
(b) extending the region to include connected points of lower sphericity; and
(c) outputting the extended object as an identified nodule.
2. The method of claim 1 , wherein step (a) includes calculating a sphericity index map of the image, and identifying connected points with a high sphericity index as belonging to said region.
3. The method of claim 2 , wherein the sphericity index indicates the variation in curvature of an iso-intensity surface in the region around each point.
4. The method of claim 3 , wherein the sphericity index is calculated from a partial derivative of intensity around each point.
5. The method of claim 4 , wherein the sphericity index is calculated from the first and second partial derivatives of intensity in three dimensions.
6. The method of claim 2 , wherein the sphericity index map is calculated on a smoothed image.
7. The method of claim 2 , wherein the step of identifying points having a high sphericity index comprises detecting whether each said point has a sphericity index above a predetermined high sphericity index threshold.
8. The method of claim 7 , wherein step (b) includes performing three-dimensional region growing from said region to add said connected points of lower sphericity.
9. The method of claim 8 , wherein said connected points are determined as having lower sphericity if their sphericity index is above a relaxed sphericity index threshold lower than said high sphericity index threshold.
10. The method of claim 1 , further including performing filtering around the extended region using a spherical filter.
11. The method of claim 10 , wherein the spherical filter comprises an inner spherical region of positive weight and an outer spherical region of negative weight.
12. The method of claim 11 , wherein the inner and outer spherical regions have approximately equal volumes.
13. The method of claim 11 , wherein the inner spherical region has a diameter approximately equal to the diameter of the extended region.
14. The method of claim 11 , wherein the filtering step comprises convolving the inner and outer spherical regions with the scan image.
15. The method of claim 14 , wherein the diameter of the inner spherical region is varied so as to determine a maximum strength of said convolution, and the spherical filtering corresponding to the maximum convolution strength is applied to the scan image to generate a spherically enhanced output image.
16. The method of claim 2 , further including performing filtering around the extended region using a spherical filter comprising an inner spherical region of positive weight and an outer spherical region of negative weight, wherein the filtering step comprises convolving the inner and outer spherical regions with the sphericity index map.
17. The method of claim 16 , wherein the diameter of the inner spherical region is varied so as to determine a maximum strength of said convolution, and the spherical filtering corresponding to the maximum convolution strength is applied to the sphericity index map to generate a spherically enhanced output image.
18. Apparatus for identifying a nodule in a computed tomography scan image of a lung, comprising:
(a) means for identifying a region of high sphericity within the image;
(b) means for extending the region to include connected points of lower sphericity; and
(c) means for outputting the extended object as an identified nodule.
19. The apparatus of claim 18 , wherein the means for identifying the region includes means for calculating a sphericity index map of the image, and for identifying connected points with a high sphericity index as belonging to said region.
20. The apparatus of claim 19 , wherein the sphericity index indicates the variation in curvature of an iso-intensity surface in the region around each point.
21. The apparatus of claim 20 , wherein the sphericity index is calculated from a partial derivative of intensity around each point.
22. The apparatus of claim 21 , wherein the sphericity index is calculated from the first and second partial derivatives of intensity in three dimensions.
23. The apparatus of claim 19 , wherein the sphericity index map is calculated on a smoothed image.
24. The apparatus of claim 19 , wherein the means for identifying points having a high sphericity index comprises means for detecting whether each said point has a sphericity index above a predetermined high sphericity index threshold.
25. The apparatus of claim 24 , wherein the means for extending the region is arranged to perform three-dimensional region growing from said region to add said connected points of lower sphericity.
26. The apparatus of claim 25 , wherein said connected points are determined as having lower sphericity if their sphericity index is above a relaxed sphericity index threshold lower than said high sphericity index threshold.
27. A method of identifying a nodule in a computed tomography scan image of a lung, comprising:
(a) calculating a sphericity index map of the image, wherein the sphericity index indicates the variation in curvature of an iso-intensity surface around each point of the image;
(b) identifying a connected region of high sphericity index within the image;
(c) performing three-dimensional region growing from said region to add connected points of lower sphericity; and
(d) outputting the extended object as an identified nodule.
28. Apparatus for identifying a nodule in a computed tomography scan image of a lung, comprising:
(a) means for calculating a sphericity index map of the image, wherein the sphericity index indicates the variation in curvature of an iso-intensity surface around each point of the image;
(b) means for identifying a connected region of high sphericity index within the image;
(c) means for performing three-dimensional region growing from said region to add connected points of lower sphericity; and
(d) means for outputting the extended object as an identified nodule.
29. A computer program product readable by a machine and embodying program code arranged to perform the following method steps for identifying a nodule in a computed tomography scan image of a lung, when executed by the machine:
(a) calculating a sphericity index map of the image, wherein the sphericity index indicates the variation in curvature of an iso-intensity surface around each point of the image;
(b) identifying a connected region of high sphericity index within the image;
(c) performing three-dimensional region growing from said region to add connected points of lower sphericity; and
(d) outputting the extended object as an identified nodule.
30. An article comprising a medium for storing instructions to enable a processor-based system to:
(a) identify a region of high sphericity based on a computed tomography scan image of a lung;
(b) extend the region to include connected points of lower sphericity; and
(c) output the extended object as an identified nodule.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/260,651 US20090123049A1 (en) | 2004-05-20 | 2008-10-29 | Nodule Detection |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0411284A GB2414295B (en) | 2004-05-20 | 2004-05-20 | Nodule detection |
GB0411284.3 | 2004-05-20 | ||
US10/868,892 US7460701B2 (en) | 2004-05-20 | 2004-06-17 | Nodule detection |
US12/260,651 US20090123049A1 (en) | 2004-05-20 | 2008-10-29 | Nodule Detection |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/868,892 Continuation US7460701B2 (en) | 2004-05-20 | 2004-06-17 | Nodule detection |
Publications (1)
Publication Number | Publication Date |
---|---|
US20090123049A1 true US20090123049A1 (en) | 2009-05-14 |
Family
ID=32607668
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/868,892 Expired - Fee Related US7460701B2 (en) | 2004-05-20 | 2004-06-17 | Nodule detection |
US12/260,651 Abandoned US20090123049A1 (en) | 2004-05-20 | 2008-10-29 | Nodule Detection |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/868,892 Expired - Fee Related US7460701B2 (en) | 2004-05-20 | 2004-06-17 | Nodule detection |
Country Status (9)
Country | Link |
---|---|
US (2) | US7460701B2 (en) |
EP (1) | EP1755457A1 (en) |
JP (1) | JP2007537811A (en) |
KR (1) | KR20070083388A (en) |
CN (1) | CN101001572A (en) |
AU (1) | AU2005244641A1 (en) |
CA (1) | CA2567184A1 (en) |
GB (2) | GB2451367B (en) |
WO (1) | WO2005112769A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU194720U1 (en) * | 2019-10-15 | 2019-12-19 | Закрытое акционерное общество "Научно-исследовательский институт интроскопии МНПО "СПЕКТР" | X-RAY FILTER |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2451367B (en) | 2004-05-20 | 2009-05-27 | Medicsight Plc | Nodule Detection |
US7627173B2 (en) * | 2004-08-02 | 2009-12-01 | Siemens Medical Solutions Usa, Inc. | GGN segmentation in pulmonary images for accuracy and consistency |
US8229200B2 (en) * | 2005-03-14 | 2012-07-24 | General Electric Company | Methods and systems for monitoring tumor burden |
US20110044526A1 (en) * | 2008-04-18 | 2011-02-24 | Chao Liu | Process and apparatus for lung nodule segmentation in a chest radiograph |
US10083515B2 (en) * | 2008-11-25 | 2018-09-25 | Algotec Systems Ltd. | Method and system for segmenting medical imaging data according to a skeletal atlas |
EP2189942A3 (en) * | 2008-11-25 | 2010-12-15 | Algotec Systems Ltd. | Method and system for registering a medical image |
US8478007B2 (en) | 2008-12-12 | 2013-07-02 | Electronics And Telecommunications Research Institute | Method for detecting ground glass opacity using chest computed tomography |
KR101144964B1 (en) * | 2010-10-21 | 2012-05-11 | 전남대학교산학협력단 | System for Detection of Interstitial Lung Diseases and Method Therefor |
KR101805619B1 (en) | 2011-01-25 | 2017-12-07 | 삼성전자주식회사 | Apparatus and method for creating optimal 2-dimensional medical image automatically from 3-dimensional medical image |
JP5980490B2 (en) * | 2011-10-18 | 2016-08-31 | オリンパス株式会社 | Image processing apparatus, operation method of image processing apparatus, and image processing program |
CA2884167C (en) * | 2012-09-13 | 2020-05-12 | The Regents Of The University Of California | System and method for automated detection of lung nodules in medical images |
CN107180431B (en) * | 2017-04-13 | 2020-07-14 | 辽宁工业大学 | Effective semi-automatic blood vessel segmentation method in CT image |
CN109242839B (en) * | 2018-08-29 | 2022-04-22 | 上海市肺科医院 | CT image pulmonary nodule benign and malignant classification method based on novel neural network model |
KR102152385B1 (en) * | 2019-08-08 | 2020-09-04 | 주식회사 딥노이드 | Apparatus and method for detecting singularity |
JP7462188B2 (en) | 2021-03-12 | 2024-04-05 | 株式会社スペック・システム | Medical image processing device, medical image processing method, and program |
CN114187252B (en) * | 2021-12-03 | 2022-09-20 | 推想医疗科技股份有限公司 | Image processing method and device, and method and device for adjusting detection frame |
CN115035021A (en) * | 2022-04-20 | 2022-09-09 | 什维新智医疗科技(上海)有限公司 | Mammary nodule side echo analysis device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6125194A (en) * | 1996-02-06 | 2000-09-26 | Caelum Research Corporation | Method and system for re-screening nodules in radiological images using multi-resolution processing, neural network, and image processing |
US6404908B1 (en) * | 1998-05-28 | 2002-06-11 | R2 Technology, Inc. | Method and system for fast detection of lines in medical images |
US20030167001A1 (en) * | 2001-11-23 | 2003-09-04 | Allain Pascal Raymond | Method for the detection and automatic characterization of nodules in a tomographic image and a system of medical imaging by tomodensimetry |
US20040086161A1 (en) * | 2002-11-05 | 2004-05-06 | Radhika Sivaramakrishna | Automated detection of lung nodules from multi-slice CT image data |
US6882743B2 (en) * | 2001-11-29 | 2005-04-19 | Siemens Corporate Research, Inc. | Automated lung nodule segmentation using dynamic programming and EM based classification |
US6909797B2 (en) * | 1996-07-10 | 2005-06-21 | R2 Technology, Inc. | Density nodule detection in 3-D digital images |
US20050207630A1 (en) * | 2002-02-15 | 2005-09-22 | The Regents Of The University Of Michigan Technology Management Office | Lung nodule detection and classification |
US20070140541A1 (en) * | 2002-12-04 | 2007-06-21 | Bae Kyongtae T | Method and apparatus for automated detection of target structures from medical images using a 3d morphological matching algorithm |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS63163680A (en) * | 1986-12-26 | 1988-07-07 | Toshiba Corp | Picture contour extracting method |
JP3783116B2 (en) * | 1996-09-25 | 2006-06-07 | 富士写真フイルム株式会社 | Radiation image enhancement processing method and apparatus |
JP2001511374A (en) * | 1997-07-25 | 2001-08-14 | アーチ・デベロップメント・コーポレーション | Method and system for segmenting lung region of lateral chest radiograph |
US6898303B2 (en) | 2000-01-18 | 2005-05-24 | Arch Development Corporation | Method, system and computer readable medium for the two-dimensional and three-dimensional detection of lesions in computed tomography scans |
US6944330B2 (en) * | 2000-09-07 | 2005-09-13 | Siemens Corporate Research, Inc. | Interactive computer-aided diagnosis method and system for assisting diagnosis of lung nodules in digital volumetric medical images |
US20040175034A1 (en) * | 2001-06-20 | 2004-09-09 | Rafael Wiemker | Method for segmentation of digital images |
WO2003024184A2 (en) * | 2001-09-14 | 2003-03-27 | Cornell Research Foundation, Inc. | System, method and apparatus for small pulmonary nodule computer aided diagnosis from computed tomography scans |
US7336809B2 (en) * | 2001-11-23 | 2008-02-26 | R2 Technology, Inc. | Segmentation in medical images |
JP2004065800A (en) * | 2002-08-09 | 2004-03-04 | Ge Medical Systems Global Technology Co Llc | Sector differentiating method and sector differentiating treatment apparatus |
GB2451367B (en) | 2004-05-20 | 2009-05-27 | Medicsight Plc | Nodule Detection |
- 2004
  - 2004-05-20 GB GB0819225A patent/GB2451367B/en not_active Expired - Fee Related
  - 2004-05-20 GB GB0411284A patent/GB2414295B/en not_active Expired - Fee Related
  - 2004-06-17 US US10/868,892 patent/US7460701B2/en not_active Expired - Fee Related
- 2005
  - 2005-05-13 CN CNA2005800208210A patent/CN101001572A/en active Pending
  - 2005-05-13 KR KR1020067026790A patent/KR20070083388A/en not_active Application Discontinuation
  - 2005-05-13 AU AU2005244641A patent/AU2005244641A1/en not_active Abandoned
  - 2005-05-13 JP JP2007517398A patent/JP2007537811A/en not_active Ceased
  - 2005-05-13 CA CA002567184A patent/CA2567184A1/en not_active Abandoned
  - 2005-05-13 WO PCT/GB2005/001837 patent/WO2005112769A1/en active Application Filing
  - 2005-05-13 EP EP05746470A patent/EP1755457A1/en not_active Withdrawn
- 2008
  - 2008-10-29 US US12/260,651 patent/US20090123049A1/en not_active Abandoned
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU194720U1 (en) * | 2019-10-15 | 2019-12-19 | ZAO "Research Institute of Introscopy MNPO SPEKTR" | X-ray filter |
Also Published As
Publication number | Publication date |
---|---|
GB0411284D0 (en) | 2004-06-23 |
JP2007537811A (en) | 2007-12-27 |
GB2414295B (en) | 2009-05-20 |
US7460701B2 (en) | 2008-12-02 |
GB2414295A (en) | 2005-11-23 |
AU2005244641A1 (en) | 2005-12-01 |
CN101001572A (en) | 2007-07-18 |
WO2005112769A1 (en) | 2005-12-01 |
GB2451367B (en) | 2009-05-27 |
GB0819225D0 (en) | 2008-11-26 |
EP1755457A1 (en) | 2007-02-28 |
CA2567184A1 (en) | 2005-12-01 |
US20050259856A1 (en) | 2005-11-24 |
GB2451367A (en) | 2009-01-28 |
KR20070083388A (en) | 2007-08-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20090123049A1 (en) | Nodule Detection | |
US7574031B2 (en) | Nodule boundary detection | |
US8111896B2 (en) | Method and system for automatic recognition of preneoplastic anomalies in anatomic structures based on an improved region-growing segmentation, and computer program therefor | |
EP2916737B1 (en) | System and method for automated detection of lung nodules in medical images | |
US7058210B2 (en) | Method and system for lung disease detection | |
NL2010613C2 (en) | Systems, apparatus and processes for automated medical image segmentation using a statistical model | |
EP2116973B1 (en) | Method for interactively determining a bounding surface for segmenting a lesion in a medical image | |
JP2006521118A (en) | Method, system, and computer program product for computer-aided detection of nodules with a three-dimensional shape enhancement filter | |
CN110992377B (en) | Image segmentation method, device, computer-readable storage medium and equipment | |
US20080187204A1 (en) | System, method and apparatus for small pulmonary nodule computer aided diagnosis from computed tomography scans | |
US20020006216A1 (en) | Method, system and computer readable medium for the two-dimensional and three-dimensional detection of lesions in computed tomography scans | |
US8107702B2 (en) | Process and system for automatically recognising preneoplastic abnormalities in anatomical structures, and corresponding computer program | |
EP1934940B1 (en) | Image processing method for boundary extraction between at least two tissues, the boundary being found as the cost minimum path connecting a start point and an end point using the fast marching algorithm with the cost function decreases the more likely a pixel is not on the boundary | |
CN107633514B (en) | Pulmonary nodule peripheral blood vessel quantitative evaluation system and method | |
US9014447B2 (en) | System and method for detection of lesions in three-dimensional digital medical image | |
JP2011526508A (en) | Segmentation of medical images | |
EP2998929B1 (en) | Transformation of 3-d object segmentation in 3-d medical image | |
JP2004520923A (en) | How to segment digital images | |
US8036442B2 (en) | Method for the processing of radiological images for a detection of radiological signs | |
GB2463906A (en) | Identification of medical image objects using local dispersion and Hessian matrix parameters | |
US8165375B2 (en) | Method and system for registering CT data sets | |
Retico et al. | A voxel-based neural approach (VBNA) to identify lung nodules in the ANODE09 study | |
Thomaz et al. | Liver segmentation from MDCT using regiongrowing based on t location-scale distribution | |
Hanzawa et al. | Development of a computer aided diagnosis system for three dimensional breast CT images | |
Hara et al. | Development of automated detection and classification methods of masses on 3D breast ultrasound images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MEDICSIGHT PLC, UNITED KINGDOM Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DEHMESHKI, JAMSHID, DR.;REEL/FRAME:021758/0012 Effective date: 20050404 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |