CN1947151B - A system and method for toboggan based object segmentation using divergent gradient field response in images - Google Patents


Info

Publication number
CN1947151B
CN1947151B, CN2005800127887A
Authority
CN
China
Prior art keywords
image
masking
response
pictorial element
gradient field
Prior art date
Legal status
Expired - Fee Related
Application number
CN2005800127887A
Other languages
Chinese (zh)
Other versions
CN1947151A (en)
Inventor
L. Bogoni
J. Liang
S. Periaswamy
Current Assignee
Siemens Healthcare GmbH
Original Assignee
Siemens Medical Solutions USA Inc
Priority date
Filing date
Publication date
Application filed by Siemens Medical Solutions USA Inc
Publication of CN1947151A
Application granted
Publication of CN1947151B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/34 Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20152 Watershed segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30028 Colon; Small intestine
    • G06T2207/30032 Colon polyp

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

A method and device for segmenting one or more candidates in an image having image elements is disclosed. The method includes identifying a location for one of the candidates in the image, where the location is based at a given image element, and computing one or more response values at neighboring image elements that are in a neighborhood of the location. Image element clusters are created from the computed response values and one or more of the image element clusters are selected as object segmentations for one or more of the candidates.

Description

Method and apparatus for toboggan-based object segmentation using divergent gradient field response
Cross-reference to related applications
This application claims the benefit of U.S. Provisional Application No. 60/547,002, filed on February 23, 2004, entitled "Toboggan Based Object Segmentation Using Divergent Gradient Field Response In Images", which is hereby incorporated by reference in its entirety.
Technical field
The present invention relates generally to the analysis of multidimensional images, and more particularly to the use of tobogganing and the divergent gradient field response (DGFR) in 3-D image analysis.
Discussion of related art
The field of medical imaging has seen great advances since X-rays were first used to determine anatomical abnormalities. Medical imaging hardware has progressed in the form of newer machines, such as magnetic resonance imaging (MRI) scanners and computed axial tomography (CAT) scanners. Because these modern medical scanners produce large amounts of image data, there is a need to develop image processing techniques that automatically determine the presence of anatomical abnormalities in scanned medical images.
Recognizing anatomical structures in digital medical images presents multiple difficulties. One aspect is the accuracy of recognition; another is its speed. Because medical images aid in diagnosing a disease or condition, the speed of recognition is extremely important in helping a doctor reach an early diagnosis. Hence, there is a need for improved recognition techniques that identify anatomical structures in medical images accurately and rapidly.
Digital medical images are constructed from raw image data obtained from a scanner, for example a CAT scanner or an MRI scanner. A digital medical image is typically a 2-D image made of pixel elements or a 3-D image made of volume elements ("voxels"). Such 2-D or 3-D images are processed using medical image recognition techniques to determine the presence of anatomical structures such as tumors and polyps. Given the amount of image data produced by any given image scan, however, it is preferable that an automatic technique point out anatomical features in selected regions of an image to a doctor for use in further diagnosing any disease or condition.
Feature-based recognition techniques are used to determine the presence of anatomical structures in medical images, but such techniques suffer from accuracy problems. Hence, there is a need for recognition techniques that are not based on feature recognition and that provide improved identification of anatomical features in medical images.
Medical image analysis techniques such as tobogganing and DGFR provide improved image analysis. However, the advantages obtained by using these techniques can be enhanced further if the techniques are used in combination. Hence, there is a need to determine combinations of image analysis techniques that provide improved results compared to conventional image analysis techniques.
The use of the DGFR technique in medical image analysis is disclosed in the U.S. patent application of Senthil Periaswamy and Luca Bogoni entitled "A SYSTEM AND METHOD FOR FILTERING AND AUTOMATIC DETECTION OF CANDIDATE ANATOMICAL STRUCTURES IN MEDICAL IMAGES", filed on November 10, 2004, with Serial No. 10/985,548, which is hereby incorporated by reference in its entirety.
The use of the tobogganing technique in medical image analysis is disclosed in the U.S. patent application of Luca Bogoni and Jianming Liang entitled "TOBOGGAN BASED SHAPE CHARACTERIZATION", filed on December 7, 2004, with Serial No. 11/006,282, which is hereby incorporated by reference in its entirety.
Summary
One aspect of the present invention relates to a method and apparatus for segmenting one or more candidate objects in an image having image elements. The method includes identifying a location of one of the candidate objects in the image, where the location is based at a given image element, and computing one or more response values at neighboring image elements that are in a neighborhood of the location. Image element clusters are created from the computed response values, and one or more of the image element clusters are selected as object segmentations for one or more of the candidate objects.
Another aspect of the present invention relates to a method for analyzing a candidate object in an image by extracting a sub-image volume from the image, where the sub-image volume includes image elements, and performing tobogganing using these image elements to produce one or more toboggan clusters. The toboggan clusters are combined so that at least one toboggan cluster corresponding to one of the candidate objects is determined as a final toboggan cluster, and the final toboggan cluster is used to segment the sub-image volume in order to analyze the candidate object.
Brief description of the drawings
Exemplary embodiments of the present invention are described with reference to the accompanying drawings, wherein:
Fig. 1 shows a flowchart of candidate object detection using DGFR and tobogganing in an embodiment of the present invention;
Fig. 2 shows a 3-D orthogonal view of an exemplary polyp within a sub-volume in an exemplary embodiment of the present invention;
Fig. 3 shows the normalized gradient field of an exemplary polyp in an exemplary embodiment of the present invention;
Fig. 4 shows an exemplary template vector mask of size 11 in an exemplary embodiment of the present invention;
Fig. 5 shows the DGFR response image generated for the exemplary mask of size 11 shown in Fig. 4 in an exemplary embodiment of the present invention;
Fig. 6 shows the DGFR response image generated for an exemplary mask of size 9 in an exemplary embodiment of the present invention;
Fig. 7 shows the DGFR response image generated for an exemplary mask of size 7 in an exemplary embodiment of the present invention;
Fig. 8 shows the DGFR response image generated for an exemplary mask of size 5 in an exemplary embodiment of the present invention;
Fig. 9 illustrates the tobogganing technique in an exemplary embodiment of the present invention;
Fig. 10 shows the toboggan clusters formed in the DGFR response for a mask of size 11 in an exemplary embodiment;
Fig. 11 shows the toboggan clusters formed using the DGFR response for a mask of size 11, overlaid on the sub-volume of the original image, in an exemplary embodiment;
Fig. 12 shows a flowchart 42 of the process of combining toboggan clusters in an exemplary embodiment of the present invention;
Fig. 13 shows an axial view of toboggan clusters, including expanded toboggan clusters, in an exemplary embodiment of the present invention;
Fig. 14 shows another axial view of toboggan clusters, including expanded toboggan clusters, in an exemplary embodiment of the present invention;
Fig. 15 shows the clusters formed after performing a cluster merging process in an exemplary embodiment of the present invention;
Fig. 16 shows the final toboggan cluster obtained after performing morphological operations in an exemplary embodiment of the present invention; and
Fig. 17 shows an exemplary computer used in an exemplary embodiment of the present invention.
Detailed description of exemplary embodiments
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings.
Fig. 1 shows a flowchart of candidate object detection using DGFR and tobogganing in an embodiment of the present invention. Flowchart 10 starts at step 12, in which a sub-image volume I(x, y, z) is extracted from the original image volume according to a position. The sub-volume can be either isotropic or anisotropic. The sub-image volume fully covers the candidate object (or objects) whose presence in the image volume needs to be detected. The original sub-volume is described in the context of Fig. 2 below. Toboggan-based object segmentation (TBOS) using the divergent gradient field response (DGFR), hereinafter referred to as TBOS-DGFR, assumes that a candidate object of interest has been located in the image volume using a manual or automatic process. The region of the image around the located position is a sub-volume, within which the characteristics of the candidate object need to be determined.
Fig. 2 shows a 3-D orthogonal view of an exemplary polyp within a sub-volume in an exemplary embodiment of the present invention. In virtual colonoscopy, which is used to detect colon cancer, polyps in the colon are treated as candidate objects, as an example. Those skilled in the art will appreciate that this exemplary polyp is only an example, and that any other candidate object (in medical or non-medical images) can be detected. Images of different forms and of any dimensionality can be processed to detect candidate objects, as long as a gradient field can be computed and tobogganing can be performed on them. Here, I(x, y, z) is an example of a sub-volume that has a gray-intensity image and contains a polyp.
The sub-volume I(x, y, z) can be determined by a user clicking on a polyp candidate displayed on a screen using a mouse or other similar pointing device (not shown). Alternatively, the candidate position can be located automatically by a detection module. Polyp segmentation is essential for performing measurements automatically. The polyp segmentation process poses the difficult problem of determining an interpolating surface (flat or of higher order) that separates the polyp from the colon wall.
The DGFR technique produces the best response when the mask size fits the size of the given polyp. However, before the polyp is segmented and measured, its size is typically unknown. Therefore, DGFR responses need to be computed for multiple mask sizes, which leads to multi-scale DGFR responses, where the different mask sizes provide the basis for the multiple scales.
Axial view window 24_1 shows an orthogonal view of the exemplary polyp in the image sub-volume I(x, y, z). Axial view 26_1 shows the XZ axial plane view, which shows the polyp in the original image sub-volume. Cross-hair 28 is positioned to indicate the presence and position of the polyp in the sub-volume. Axial view 30_1 shows the XY plane view of the polyp. Axial view 32_1 shows the YZ plane view of the polyp.
Returning to Fig. 1, in step 14 the normalized gradient field of the sub-volume is computed for use in further calculations. The normalized gradient field represents the direction of the gradient. It is estimated by dividing the gradient field by its magnitude. The normalized gradient field needs to be computed so that it is independent of the intensities in the original image. An example of a normalized gradient field is described in the context of Fig. 3 below.
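As a concrete illustration of step 14, the following sketch (an illustrative assumption, not code from the patent; the function name and the small `eps` guard against zero-magnitude gradients are mine) computes a normalized gradient field with NumPy:

```python
import numpy as np

def normalized_gradient_field(volume, eps=1e-8):
    """Per-voxel unit gradient vectors of a 3-D volume.

    Dividing each gradient vector by its magnitude keeps only the
    direction, making the field independent of image intensity.
    The eps term (a hypothetical guard) avoids division by zero.
    """
    gx, gy, gz = np.gradient(volume.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + gz**2) + eps
    return gx / mag, gy / mag, gz / mag
```

The three returned arrays play the role of (I_x, I_y, I_z) in the discussion that follows.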
Fig. 3 shows the normalized gradient field of an exemplary polyp in an exemplary embodiment of the present invention. Axial view window 24_2 shows an orthogonal view of the normalized gradient field of the exemplary polyp in the image sub-volume I(x, y, z). The gradient field shown is computed from the image sub-volume I(x, y, z). Axial view 26_2 shows the XZ axial plane view of the normalized gradient field of the polyp in the original image sub-volume. Cross-hair 28 is positioned to indicate the presence and position of the polyp in the sub-volume. Axial view 30_2 shows the XY plane view of the normalized gradient field of the polyp. Axial view 32_2 shows the YZ plane view of the normalized gradient field of the polyp. The normalized gradient field is denoted (I_x(x, y, z), I_y(x, y, z), I_z(x, y, z)) and is depicted as small arrows in Fig. 3.
Returning to Fig. 1, in step 16 the computed normalized gradient field is used to calculate the multi-scale DGFR (divergent gradient field response). The DGFR response DGFR(x, y, z) is defined as the convolution of the gradient field (I_x, I_y, I_z) with a template vector mask of size S. The template vector field mask is discussed in the context of Fig. 4 below. Expressed in equation form, the convolution is:
DGFR(x, y, z) = Σ_{k∈Ω} Σ_{j∈Ω} Σ_{i∈Ω} M_x(i, j, k) I_x(x−i, y−j, z−k)
              + Σ_{k∈Ω} Σ_{j∈Ω} Σ_{i∈Ω} M_y(i, j, k) I_y(x−i, y−j, z−k)
              + Σ_{k∈Ω} Σ_{j∈Ω} Σ_{i∈Ω} M_z(i, j, k) I_z(x−i, y−j, z−k)        (1)
where the template vector field mask M = (M_x(x, y, z), M_y(x, y, z), M_z(x, y, z)) of mask size S is defined as:
M_x(i, j, k) = i / √(i² + j² + k²)        (2)
M_y(i, j, k) = j / √(i² + j² + k²)        (3)
M_z(i, j, k) = k / √(i² + j² + k²)        (4)
where Ω = [−floor(S/2), floor(S/2)].
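For illustration only, equations (1) to (4) could be evaluated along the following lines. This is a hypothetical sketch, not the patent's implementation: it assumes SciPy's `ndimage.convolve` (which performs a true convolution, matching the x−i indexing) and leaves the mask's central vector at zero.

```python
import numpy as np
from scipy.ndimage import convolve

def template_vector_mask(size):
    # Radial unit vectors of Eqs. (2)-(4); the centre voxel is left at zero.
    half = size // 2
    i, j, k = np.mgrid[-half:half + 1, -half:half + 1, -half:half + 1]
    r = np.sqrt((i**2 + j**2 + k**2).astype(float))
    r[half, half, half] = 1.0  # avoid 0/0; the numerator is 0 there anyway
    return i / r, j / r, k / r

def dgfr_response(ix, iy, iz, size):
    # Vector convolution of Eq. (1): the sum of three component convolutions.
    mx, my, mz = template_vector_mask(size)
    return (convolve(ix, mx, mode='constant')
            + convolve(iy, my, mode='constant')
            + convolve(iz, mz, mode='constant'))
```

A gradient field converging on a point, as the normalized gradients around a bright spherical polyp do, yields its strongest response when the mask is centred on that point.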
The above convolution is a vector convolution. Although the mask M as defined is not separable, it can be approximated using a singular value decomposition, and a fast implementation of the convolution can therefore be achieved.
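The SVD-based separable approximation mentioned above can be illustrated, under simplifying assumptions of my own (a 2-D kernel and a rank-1 truncation; the function name is hypothetical), as:

```python
import numpy as np

def rank1_separable(kernel2d):
    """Best rank-1 approximation K ~ col @ row via SVD.

    A rank-1 kernel lets one expensive 2-D convolution be replaced by
    two cheap 1-D convolutions; for a non-separable kernel such as the
    DGFR mask, several rank-1 terms would be summed.
    """
    u, s, vt = np.linalg.svd(kernel2d)
    col = u[:, 0] * np.sqrt(s[0])
    row = vt[0, :] * np.sqrt(s[0])
    return col[:, None], row[None, :]
```

A truly separable kernel, such as an outer product of 1-D Gaussians, is reproduced exactly by the rank-1 term.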
DGFR is both a filtering method (in its simplest form) and an algorithm for performing sophisticated automatic detection of candidate anatomical structures. For example, DGFR can be used to perform automatic detection of colon polyps for detecting colon cancer, of lung nodules for detecting lung cancer, of aneurysms, and so on. DGFR can also be used to obtain other descriptive characteristics of a candidate lesion that aid in its identification and classification.
The DGFR technique is described below. Assume that I(x, y, z) is a gray-intensity image volume containing the polyp example whose three axial views are shown in Fig. 3.
Fig. 4 shows an exemplary template vector mask of size 11 in an exemplary embodiment of the present invention. The exemplary three-dimensional vector mask of size 11 is illustrated by views along its axes. Axial view window 24_3 shows an orthogonal view of the vector mask of size 11. Axial view 26_3 shows the XZ axial plane view of the mask. Cross-hair 28 is positioned to indicate the presence and position of the polyp in the sub-volume. Axial view 30_3 shows the XY plane view of the vector mask. Axial view 32_3 shows the YZ plane view of the vector mask.
The template vector mask contains the filter coefficients of the DGFR. The template vector mask is convolved with the gradient vector field to produce the gradient field response.
Masks of different sizes, that is, different convolution kernels, are used to produce DGFR image responses that emphasize the underlying structures for which the convolution provides the highest response. Thus, in this example, small spheres and hemispheres will produce responses for masks of smaller sizes (that is, 5, 7 and 9), while larger structures will produce higher responses for masks of larger sizes (that is, 21, 23 and 25). However, a larger structure may also produce a high response with a smaller mask because of local symmetries of the structure. In the discussion below, the positional aspects of the responses produced by smaller masks are used to generate/incorporate the high-frequency details of the segmented polyp.
Figs. 5, 6, 7 and 8 show the DGFR response images for masks of sizes 11, 9, 7 and 5. Multiple mask sizes are needed because the size of the polyp is unknown at this stage, and hence multiple DGFR responses must be produced at multiple scales according to the different mask sizes.
Fig. 5 shows the DGFR response image produced for the exemplary mask of size 11 shown in Fig. 4 in an exemplary embodiment of the present invention. Axial view window 24_4 shows an orthogonal view of the DGFR response image produced using the vector mask of size 11. Axial view 26_4 shows the XZ axial plane view of the DGFR response for the mask of size 11. Cross-hair 28 is positioned to indicate the presence and position of the polyp in the sub-volume. Axial view 30_4 shows the XY plane view of the DGFR response for the mask of size 11. Axial view 32_4 shows the YZ plane view of the DGFR response for the mask of size 11. The DGFR response is produced by the vector convolution operation in which a mask of size 11 (in this example) is applied to the normalized gradient field (I_x(x, y, z), I_y(x, y, z), I_z(x, y, z)) as shown in equations 1 to 4 above.
Fig. 6 shows the DGFR response image produced for an exemplary mask of size 9 in an exemplary embodiment of the present invention. Axial view window 24_5 shows an orthogonal view of the DGFR response image produced using the vector mask of size 9. Axial view 26_5 shows the XZ axial plane view of the DGFR response for the mask of size 9. Cross-hair 28 is positioned to indicate the presence and position of the polyp in the sub-volume. Axial view 30_5 shows the XY plane view of the DGFR response for mask size 9. Axial view 32_5 shows the YZ plane view of the DGFR response for mask size 9.
Fig. 7 shows the DGFR response image produced for an exemplary mask of size 7 in an exemplary embodiment of the present invention. Axial view window 24_6 shows an orthogonal view of the DGFR response image produced using the vector mask of size 7. Axial view 26_6 shows the XZ axial plane view of the DGFR response for mask size 7. Cross-hair 28 is positioned to indicate the presence and position of the polyp in the sub-volume. Axial view 30_6 shows the XY plane view of the DGFR response for mask size 7. Axial view 32_6 shows the YZ plane view of the DGFR response for mask size 7.
Fig. 8 shows the DGFR response image produced for an exemplary mask of size 5 in an exemplary embodiment of the present invention. Axial view window 24_7 shows an orthogonal view of the DGFR response image produced using the vector mask of size 5. Axial view 26_7 shows the XZ axial plane view of the DGFR response for mask size 5. Cross-hair 28 is positioned to indicate the presence and position of the polyp in the sub-volume. Axial view 30_7 shows the XY plane view of the DGFR response for mask size 5. Axial view 32_7 shows the YZ plane view of the DGFR response for mask size 5.
Returning to Fig. 1, in step 18 tobogganing is performed using the DGFR response as the toboggan potential, so that image pixels or voxels virtually slide within their neighborhoods to form clusters. Tobogganing is illustrated below using the example of the image sub-volume.
Fig. 9 illustrates the tobogganing technique in an exemplary embodiment of the present invention. For illustrative purposes, tobogganing is discussed in a two-dimensional image space. Tobogganing is a non-iterative, single-parameter technique that runs in linear time. Tobogganing achieves linear running time because it processes each image pixel/voxel only once. In at least one embodiment of the present invention, the toboggan potential is computed from the original image volume, and it depends on the application and on the object to be segmented in the image. The toboggan potential is thus used to determine the sliding direction at each pixel. By contrast, the sole input to conventional tobogganing is usually a measure of discontinuity or local contrast in the image.
In at least one embodiment of the present invention, polyps are extracted using virtual colonoscopy. The response image used for extracting polyps is produced by applying DGFR to the original image volume. As shown in image section 34, all pixels that slide to the same location are grouped together, thereby partitioning the image volume into a collection of voxel clusters.
Image section 34 shows the tobogganing process on a 5 × 5 toboggan potential in a 2-D image space. The circled number associated with each arrow is the potential value at each of the pixels P1-P25. The potential values are produced by applying DGFR (divergent gradient field response) to the image volume to generate the DGFR response. Each pixel "slides" to its neighbor having the minimal potential. In this example, all pixels slide to the same location, called the concentration location P1, which has zero potential. This sliding of pixels to the concentration location forms a single cluster.
A pixel's slide is determined by selecting the neighbor having the minimal potential. For example, pixel P2 has potential 27, and its neighbors P3, P8 and P7 have potentials 20, 12 and 14, respectively. Because each pixel slides to the neighbor with the minimal potential, pixel P2 slides to pixel P8, which has the minimal potential, 12, among the three neighbors P3, P8 and P7.
Another example, for pixel P4, is described below. The pixels P3, P8, P9, P10 and P5 adjacent to pixel P4 have potentials 20, 12, 6, 6 and 8, respectively. Pixels P9 and P10 are the neighbors having the minimal potential, 6, among the neighbors of pixel P4. According to a predetermined selection criterion used to choose between neighbors having the same minimal potential, pixel P4 slides to pixel P9.
Pixel P1 has the minimal potential of zero, so all of its neighbors P13, P14, P15, P19, P24, P23, P22 and P18 slide to pixel P1, thereby forming a single cluster whose "concentration location" is P1. In this way, the image volume can be partitioned into a collection of voxel clusters.
During tobogganing, each voxel (in 3-D) or pixel (in 2-D) slides/climbs to one of its neighbors according to the computed potential. Although the example above shows pixels sliding to the neighbor with the minimal potential, this is only an example; the choice of neighbor depends on the application and on how the toboggan potential is computed. For example, a pixel may slide or climb to the position with the maximal or the minimal potential.
In at least one exemplary embodiment of polyp segmentation, when the DGFR response is used as the toboggan potential, the neighbor with the maximal potential is selected, so that each voxel climbs to its neighbor having the highest potential. If a given voxel has a potential higher than that of any of its neighbors, it climbs no further and itself becomes a concentration location. This process generates a toboggan direction and a toboggan label for each voxel of a given DGFR response. All voxels that climb to the same concentration location are associated with one unique cluster label and are grouped into one toboggan cluster.
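The sliding/climbing and labelling procedure just described might be sketched in 2-D as follows. This is an illustrative implementation of mine, not the patent's code; ties are broken by keeping the first best neighbor found, one possible predetermined criterion, and `climb=True` selects the maximal-potential neighbor used with DGFR responses.

```python
import numpy as np

def toboggan(potential, climb=False):
    """Label pixels by where they slide (minimal-potential neighbor) or,
    with climb=True, climb (maximal-potential neighbor). Each pixel's
    direction is computed exactly once; pixels reaching the same
    concentration location share one cluster label."""
    h, w = potential.shape
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    better = (lambda a, b: a > b) if climb else (lambda a, b: a < b)
    nxt = {}
    for r in range(h):                      # one pass: direction per pixel
        for c in range(w):
            best = (r, c)
            for dr, dc in nbrs:
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and \
                        better(potential[rr, cc], potential[best]):
                    best = (rr, cc)
            nxt[(r, c)] = best
    labels = np.full((h, w), -1, dtype=int)
    n_labels = 0
    for start in nxt:                       # follow paths to concentration points
        path, p = [], start
        while labels[p] == -1 and nxt[p] != p:
            path.append(p)
            p = nxt[p]
        if labels[p] == -1:                 # p is an unlabelled concentration point
            labels[p] = n_labels
            n_labels += 1
        for q in path:
            labels[q] = labels[p]
    return labels
```

Because the potential strictly improves along each path, every path terminates at a concentration location, giving the linear running time noted above.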
Fig. 10 shows the toboggan clusters formed in the DGFR response for a mask of size 11 in an exemplary embodiment. The toboggan clusters are represented as small circles (36_1, 38_1 and 40_1) in axial view window 24_8. Axial view window 24_8 shows an orthogonal view of the toboggan clusters on the DGFR response image produced using the vector mask of size 11. Axial view 26_8 shows the axial plane view of the toboggan clusters, including the XZ toboggan cluster view 36_1. Cross-hair 28 is positioned to indicate the presence and position of the polyp in the sub-volume. Axial view 30_8 shows the axial plane view of the toboggan clusters, including the XY toboggan cluster view 38_1. Axial view 32_8 shows the axial plane view of the toboggan clusters, including the YZ toboggan cluster view 40_1.
Fig. 11 shows the toboggan clusters formed using the DGFR response for a mask of size 11, overlaid on the sub-volume of the original image, in an exemplary embodiment. The toboggan clusters are represented as small circles in axial view window 24_9. Axial view window 24_9 shows an orthogonal view of the toboggan clusters on the DGFR response image produced using the vector mask of size 11. Axial view 26_9 shows the axial plane view of the toboggan clusters, including the XZ toboggan cluster view 36_2. Cross-hair 28 is positioned to indicate the presence and position of the polyp in the sub-volume. Axial view 30_9 shows the axial plane view of the toboggan clusters, including the XY toboggan cluster view 38_2. Axial view 32_9 shows the axial plane view of the toboggan clusters, including the YZ toboggan cluster view 40_2.
An example technique for optimizing the tobogganing process is discussed below. In some applications, the tobogganing process can be applied to only a small region; that is, not all voxels in the sub-volume have to slide/climb. For example, in the case of polyp segmentation, only the region along the polyp wall is of interest, and the voxels representing air or bone need not climb/slide. These non-essential voxels can be pre-thresholded based on the known intensity levels, i.e., the Hounsfield units (HU), associated with air and bone.
The DGFR response can also be thresholded so that any voxel whose response is lower than a selected value is not processed. Thresholding thus better focuses the regions to be processed, which in turn removes unnecessary computation and speeds up the tobogganing process. Tobogganing is performed at every scale, and hence toboggan clusters exist at every scale. However, with the intensity and DGFR response thresholding described above, a particular scale (among the multiple scales) may yield no final toboggan cluster.
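The two pre-thresholding steps above can be sketched as a single boolean mask. The HU cut-offs and the response threshold below are illustrative assumptions of mine, not values from the patent:

```python
import numpy as np

def toboggan_candidates(volume_hu, dgfr, hu_air=-800.0, hu_bone=300.0,
                        resp_min=0.0):
    # Keep only voxels that are neither air nor bone (by Hounsfield value)
    # and whose DGFR response exceeds the chosen threshold; tobogganing is
    # then run only where this mask is True.
    return (volume_hu > hu_air) & (volume_hu < hu_bone) & (dgfr > resp_min)
```

Restricting the climb/slide to this mask removes the unnecessary computation mentioned above.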
Consider now the DGFR response. The support of the DGFR response is proportional to the symmetry of the gradient field: the more symmetric the gradient field, the larger the response. The divergent field yields the highest response when the point of convergence of the gradient field coincides with the center of the mask capturing the response. For an ideal pedunculated polyp (spherical in shape and connected to the colon wall via a stalk) or a sessile polyp (hemispherical), the strongest gradients concentrate at the center and their magnitudes are supported by strong edge transitions. A template mask whose size matches the diameter of the structure produces the strongest response; this mask is referred to as the "capturing mask".
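A minimal sketch of a divergent-gradient-field response of this kind, under the assumption that the template mask holds unit vectors pointing radially outward from its centre and that the response is the vector convolution of the gradient field with that template (helper names and the toy field are illustrative):

```python
import numpy as np
from scipy.ndimage import convolve

def radial_mask(size):
    """Unit vectors pointing radially outward from the mask centre."""
    r = size // 2
    i, j, k = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1].astype(float)
    norm = np.sqrt(i ** 2 + j ** 2 + k ** 2)
    norm[norm == 0] = 1.0            # leave the centre vector at zero
    return i / norm, j / norm, k / norm

def dgfr(gx, gy, gz, size):
    """Vector convolution of a gradient field with the radial template."""
    mx, my, mz = radial_mask(size)
    return (convolve(gx, mx, mode='constant')
            + convolve(gy, my, mode='constant')
            + convolve(gz, mz, mode='constant'))

# A perfectly converging field (every vector pointing at the centre) is the
# negated template; its response peaks exactly at the centre voxel.
mx, my, mz = radial_mask(5)
resp = dgfr(-mx, -my, -mz, 5)
```

This reproduces the "capturing mask" intuition: when the field's point of convergence sits at the mask centre, every mask vector aligns with a field vector and the response is maximal there.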
The DGFR response thus depends partly on the symmetry of the polypoid structure and partly on the point of convergence of the divergent field. When processing the same polypoid structure, masks larger than the capturing mask see a field that is increasingly misaligned with the ideal field, so the response decreases. A smaller mask still has support, and may even yield a higher response than the capturing mask; this can be understood by imagining the smaller mask sliding radially from the center toward the edge. In the limiting case of a mask of size 5, for example for a sessile polyp, more than half a hemisphere supports the response produced by that mask. For smaller masks, therefore, the response can be higher even though the gradient field is not perfectly aligned.
Given the observations above and the anatomical variability of the structures being segmented, a single toboggan cluster representing the complete polyp segmentation may not form. It is therefore necessary to combine the toboggan clusters formed from the DGFR responses at multiple polyp segmentation scales.
As can be seen from Figures 10 and 11, the toboggan clusters collect over the polyp region: the regions reached by tobogganing are "concentrated" in the polyp region.
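The tobogganing that produces these clusters can be sketched on a 2-D response image: every pixel repeatedly moves to its neighbour with the highest response until it reaches a local maximum, and pixels arriving at the same maximum form one toboggan cluster. This is a simplified 2-D, 8-connected sketch, not the patent's 3-D implementation; the function name and the tie-breaking rule are assumptions.

```python
import numpy as np

def toboggan_labels(response):
    """Label each pixel by the local maximum it climbs to."""
    h, w = response.shape

    def step(p):
        # one climbing step: the 8-neighbour with the strictly highest response
        y, x = p
        best = p
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and response[ny, nx] > response[best]:
                    best = (ny, nx)
        return best

    summits = {}
    grid = np.full((h, w), -1, dtype=int)
    for y in range(h):
        for x in range(w):
            p, path = (y, x), []
            while True:
                nxt = step(p)
                if nxt == p:          # reached a concentration point
                    break
                path.append(p)
                p = nxt
            lab = summits.setdefault(p, len(summits))
            grid[y, x] = lab
            for q in path:
                grid[q] = lab
    return grid

resp = np.array([[0.0, 1, 0], [1, 5, 1], [0, 1, 0]])
labels = toboggan_labels(resp)
# every pixel climbs to the single peak, so one cluster covers the image
```

With the DGFR response as the tobogganing potential, clusters of this kind concentrate around the polyp, as in Figures 10 and 11.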
Returning to Fig. 1, in step 20 the toboggan clusters are combined. The process of combining toboggan clusters is explained below in the context of Figure 12.
In step 22, the processed sub-volume is segmented to detect one or more candidate objects.
Figure 12 shows flowchart 42 of the process of combining toboggan clusters in an exemplary embodiment of the present invention. The combining process is described with reference to the examples shown in Figs. 9-11 and 13-16 (hereinafter the "illustrated example"). In step 44, an initial cluster is selected. As described above, toboggan clusters at multiple scales are obtained by applying masks of different sizes to the response image. Because thresholding is used, there may be no toboggan clusters at the larger scales (i.e., larger masks), so an initial cluster must be determined. The search procedure starts from the largest available mask and proceeds to successively smaller masks until a toboggan cluster containing the detection position is found. This initial toboggan cluster is referred to as the "base cluster".
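The base-cluster search in step 44 can be sketched as follows. The data layout (a mapping from mask size to lists of clusters, each cluster a set of voxel coordinates) and the function name are assumptions for illustration; the mask sizes echo the example below.

```python
def find_base_cluster(clusters_by_size, detection):
    """Largest-mask-first search for a cluster containing the detection position."""
    for size in sorted(clusters_by_size, reverse=True):
        for cluster in clusters_by_size[size]:
            if detection in cluster:
                return size, cluster
    return None, None

clusters = {
    23: [], 13: [],                     # thresholding left no clusters here
    11: [{(1, 1, 1), (1, 1, 2)}],       # first scale with a cluster at the hit
    5: [{(1, 1, 1)}],
}
size, base = find_base_cluster(clusters, (1, 1, 1))
# size == 11: the largest mask whose clusters contain the detection position
```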
In the illustrated example, with the threshold (0.3) applied, the DGFR responses from scale 23 down to scale 13 (largest to smallest) form no cluster at the detection position. The initial cluster (i.e., the base cluster) is found at scale 11, as shown in Figure 10, where the base cluster is overlaid on the DGFR response and the same cluster is overlaid on the original sub-volume in Figure 10.
In step 46, the base cluster is expanded. The expansion of the base cluster is accomplished by selectively including voxels from the toboggan clusters produced by the DGFR response at the next smaller mask size. In step 48, all clusters that cover voxels of the base cluster are found.
In step 50, the following iterative process is performed for each cluster found in step 48. Suppose the base cluster has B voxels, the new cluster being considered for inclusion has C voxels, and the total number of voxels in either the base cluster or the new cluster is D. Define p1 = (B+C-D)/B and p2 = (B+C-D)/C. The base cluster is expanded with the voxels of a cluster only if that cluster satisfies the conditions (p1 > inclusion threshold 1) and (p2 > inclusion threshold 2). In the illustrated example, inclusion threshold 1 is set to 0.6 and inclusion threshold 2 is set to 0.3.
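The inclusion test of step 50, written out with sets: since D is the size of the union, B + C - D is the size of the overlap, so p1 and p2 are the fractions of the base cluster and of the candidate cluster covered by that overlap. The function name is an assumption; the threshold values follow the illustrated example.

```python
def should_merge(base, cand, t1=0.6, t2=0.3):
    """Step-50 criterion: overlap must be large relative to both clusters."""
    overlap = len(base & cand)      # equals B + C - D
    p1 = overlap / len(base)        # fraction of the base cluster covered
    p2 = overlap / len(cand)        # fraction of the candidate covered
    return p1 > t1 and p2 > t2

base = {1, 2, 3, 4}
cand = {2, 3, 4, 5, 6}
# overlap = 3, p1 = 0.75 > 0.6 and p2 = 0.6 > 0.3, so the clusters merge
```

The p2 condition keeps a huge cluster from being swallowed just because it grazes the base cluster: a candidate of 20 voxels sharing the same 3-voxel overlap fails, since p2 = 0.15.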
In step 52, the expanded cluster becomes the base cluster. In step 54, the expansion process is repeated until the DGFR response at the smallest mask size is reached. In the present example, the smallest mask size is 5. The resulting cluster is shown in Figure 15.
Figures 13 and 14 show axial views of the toboggan clusters used in the cluster combining process described above, in an exemplary embodiment of the present invention. In the illustrated example there is only one such cluster; its voxels are marked with "+" symbols in Figures 13 and 14 (56_1-2, 58_1-2, 60_1-2). The newly expanded cluster covers all, or at least a large percentage, of the voxels in the base cluster (36_3-4, 38_3-4, 40_3-4), which are marked with "o" in Figures 13 and 14. In general, multiple clusters are needed to cover all the voxels of the base cluster. If multiple clusters are available to ensure the progressive expansion of the base cluster, each of them is evaluated to verify that it satisfies the inclusion conditions above.
Figure 13 shows axial views of the toboggan clusters, including the expanded toboggan cluster, in an exemplary embodiment of the present invention. In Figure 13, the toboggan clusters are represented as small circles in axial view window 24_10, which shows orthogonal views of the toboggan clusters. Axial view 26_10 shows the axial plane view of the toboggan clusters including XZ toboggan cluster view 36_3 and XZ expanded cluster view 56_1. Cross-hair 28 is positioned to indicate the presence and position of the polyp in the sub-volume. Axial view 30_10 shows the axial plane view including XY toboggan cluster view 38_3 and XY expanded cluster view 58_1. Axial view 32_10 shows the axial plane view including YZ toboggan cluster view 40_3 and YZ expanded cluster view 60_1.
Figure 14 shows axial views of the toboggan clusters, including the expanded toboggan cluster, in an exemplary embodiment of the present invention. In Figure 14, the toboggan clusters are represented as small circles in axial view window 24_11, which shows orthogonal views of the toboggan clusters. Axial view 26_11 shows the axial plane view of the toboggan clusters including XZ toboggan cluster view 36_4 and XZ expanded cluster view 56_2. Cross-hair 28 is positioned to indicate the presence and position of the polyp in the sub-volume. Axial view 30_11 shows the axial plane view including XY toboggan cluster view 38_4 and XY expanded cluster view 58_2. Axial view 32_11 shows the axial plane view including YZ toboggan cluster view 40_4 and YZ expanded cluster view 60_2.
Figure 15 shows the cluster formed after the cluster combining process in an exemplary embodiment of the present invention. In Figure 15, the formed toboggan cluster is represented as small circles in axial view window 24_12, which shows orthogonal views of the formed toboggan cluster. Axial view 26_12 shows the axial plane view of the formed toboggan cluster including XZ toboggan cluster view 36_5. Cross-hair 28 is positioned to indicate the presence and position of the polyp in the sub-volume. Axial view 30_12 shows the axial plane view including XY toboggan cluster view 38_5. Axial view 32_12 shows the axial plane view including YZ toboggan cluster view 40_5.
Figure 16 shows the final toboggan cluster obtained after performing the morphological operations in an exemplary embodiment of the present invention. In Figure 16, the final toboggan cluster is represented as small circles in axial view window 24_13, which shows orthogonal views of the final toboggan cluster. Axial view 26_13 shows the axial plane view of the final toboggan cluster including XZ toboggan cluster view 36_6. Cross-hair 28 is positioned to indicate the presence and position of the polyp in the sub-volume. Axial view 30_13 shows the axial plane view including XY toboggan cluster view 38_6. Axial view 32_13 shows the axial plane view including YZ toboggan cluster view 40_6.
The morphological operations are described next. The final cluster from the process above may not cover the polyp surface completely, so morphological operations (e.g., dilation, closing) are applied to complete the final toboggan cluster as the polyp segmentation. Because the cluster concentrates on the voxels forming the polyp surface, dilation is applied at a voxel only when the dilation mask includes more than n1 air voxels and more than n2 voxels of the cluster. By requiring that part of the dilation mask include a certain percentage of air voxels, the dilation is limited to the part of the cluster that extends toward the lumen (air) inside the colon. In the illustrated example, n1 = 12 and n2 = 3, and the resulting toboggan cluster is shown in Figure 15.
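The constrained dilation described above can be sketched as below. The 3x3x3 neighbourhood, the counting approach, and the function name are assumptions; n1 = 12 and n2 = 3 are taken from the illustrated example. Growth is accepted only where the neighbourhood contains both enough air voxels and enough cluster voxels, which restricts it to the lumen-facing side of the cluster.

```python
import numpy as np
from scipy.ndimage import binary_dilation, correlate

def constrained_dilate(cluster, air, n1=12, n2=3):
    """Dilate only where the 3x3x3 neighbourhood has >n1 air and >n2 cluster voxels."""
    kernel = np.ones((3, 3, 3), dtype=int)
    air_count = correlate(air.astype(int), kernel, mode='constant')
    clu_count = correlate(cluster.astype(int), kernel, mode='constant')
    candidate = (air_count > n1) & (clu_count > n2)
    return cluster | (binary_dilation(cluster) & candidate)

# Toy sub-volume: two planes of lumen air above a one-plane surface cluster.
air = np.zeros((3, 3, 3), dtype=bool)
air[:2] = True                     # lumen (air) above the polyp surface
cluster = np.zeros((3, 3, 3), dtype=bool)
cluster[2] = True                  # toboggan cluster along the surface
grown = constrained_dilate(cluster, air)
# the centre voxel facing the lumen is added; edge voxels with too few
# air neighbours are not
```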
Combining clusters across the various masks may produce a final cluster containing holes, because each cluster grows in a manner dictated and constrained by the DGFR response of the mask from which it was extracted. To fill these gaps and smooth the outer shape, a morphological closing is applied to the final cluster. In the illustrated example, the morphological closing fills the existing gaps without including other voxels. The toboggan cluster in Figure 16 provides the final polyp segmentation.
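The gap-filling step can be sketched with a standard binary closing; the 2-D toy cluster and the 3x3 structuring element are illustrative assumptions, not the patent's parameters.

```python
import numpy as np
from scipy.ndimage import binary_closing

# A 3x3 blob with an internal hole, as can result from combining clusters
# extracted at different mask sizes.
cluster = np.zeros((7, 7), dtype=bool)
cluster[2:5, 2:5] = True
cluster[3, 3] = False              # hole left by the combination step
closed = binary_closing(cluster, structure=np.ones((3, 3), dtype=bool))
# closing fills the hole without adding voxels outside the blob
```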
The final segmented image can be further refined by comparing the segmented candidate object (in this example, a polyp) with known classes of template shapes used to refine the initial segmentation. This ensures that the final segmented candidate object is consistent with the prototypical topology of known candidate objects. In medical images, such prototype candidate objects may be tumors, polyps, nodules, and the like.
Referring to Figure 17, according to an exemplary embodiment of the present invention, a computer system 101 for implementing the present invention can comprise, inter alia, a central processing unit (CPU) 102, a memory 103, and an input/output (I/O) interface 104. The computer system 101 is generally coupled through the I/O interface 104 to a display 105 and various input devices 106 such as a mouse and keyboard. The support circuits can include circuits such as cache, power supplies, clock circuits, and a communication bus. The memory 103 can include random access memory (RAM), read-only memory (ROM), disk drives, tape drives, etc., or a combination thereof. Exemplary embodiments of the present invention can be implemented as a routine 107 stored in the memory 103 and executed by the CPU 102 to process a signal from a signal source 108. As such, the computer system 101 is a general-purpose computer system that becomes a specific-purpose computer system when executing the routine 107 of the present invention.
The computer platform 101 also includes an operating system and microinstruction code. The various processes and functions described herein can be either part of the microinstruction code or part of application programs executed via the operating system, or a combination thereof. In addition, various other peripheral devices can be connected to the computer platform, such as additional data storage devices and printing devices.
It is to be further understood that, because some of the constituent system components and method steps depicted in the accompanying figures can be implemented in software, the actual connections between the system components (or the process steps) may differ depending upon the manner in which exemplary embodiments of the present invention are programmed. Given the teachings of the present invention provided herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present invention.
Although the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present invention as defined by the appended claims.

Claims (13)

1. A method for segmenting one or more candidate objects in an image having image elements, the method comprising the steps of:
identifying a position of one of said candidate objects in said image, wherein the position is based on a given image element among said image elements;
computing one or more response values at one or more neighboring image elements located near said position;
creating one or more clusters of image elements based on the computed response values; and
selecting one or more of the clusters of image elements as the object segmentation of the one or more candidate objects,
wherein the step of computing response values further comprises:
computing one or more divergent gradient field responses from said image using one or more template masks, and wherein the step of creating clusters of image elements comprises the step of:
performing tobogganing using said one or more divergent gradient field responses to produce one or more toboggan clusters.
2. The method according to claim 1, wherein the step of creating clusters of image elements comprises:
forming a base cluster from said image elements based on the divergent gradient field response computed for at least one first template mask; and
refining said base cluster based on the divergent gradient field response computed for at least one second template mask.
3. The method according to claim 2, further comprising the step of:
computing a normalized gradient field of said image.
4. The method according to claim 3, wherein the step of computing response values further comprises the step of:
performing a vector convolution of said normalized gradient field with one or more template masks.
5. The method according to claim 4, wherein said template masks comprise:
M_x(i, j, k) = i / sqrt(i^2 + j^2 + k^2)
M_y(i, j, k) = j / sqrt(i^2 + j^2 + k^2)
M_z(i, j, k) = k / sqrt(i^2 + j^2 + k^2).
6. The method according to claim 5, wherein said vector convolution operation comprises:
DGFR(x, y, z) = Σ_{k∈Ω} Σ_{j∈Ω} Σ_{i∈Ω} M_x(i, j, k) I_x(x - i, y - j, z - k)
+ Σ_{k∈Ω} Σ_{j∈Ω} Σ_{i∈Ω} M_y(i, j, k) I_y(x - i, y - j, z - k)
+ Σ_{k∈Ω} Σ_{j∈Ω} Σ_{i∈Ω} M_z(i, j, k) I_z(x - i, y - j, z - k),
where Ω = [-floor(S/2), floor(S/2)] and S is the mask size.
7. The method according to claim 2, further comprising the steps of:
selecting, from said toboggan clusters, a base cluster corresponding to the largest of the template masks at which at least one candidate object is detected; and
iteratively expanding said base cluster by including the image elements of a selected toboggan cluster obtained from a template mask smaller than said largest mask, if the selected toboggan cluster satisfies a predetermined inclusion threshold criterion.
8. The method according to claim 1, wherein the step of creating image clusters comprises:
using said one or more divergent gradient field responses as one or more tobogganing potentials for each image element; and
performing a tobogganing operation on said image elements using said one or more divergent gradient field responses to determine toboggan clusters.
9. The method according to claim 8, wherein the step of performing said tobogganing operation further comprises:
sliding from a given image element among said image elements to a neighboring image element toward a concentration location among said image elements.
10. The method according to claim 8, wherein the sliding is performed toward a concentration location having one of a minimum and a maximum tobogganing potential value.
11. The method according to claim 8, wherein the step of performing said tobogganing operation further comprises:
climbing from a given image element among said image elements to a neighboring image element toward a concentration location among said image elements.
12. The method according to claim 11, wherein said climbing is performed toward a concentration location having one of a minimum and a maximum tobogganing potential value.
13. An apparatus for segmenting one or more candidate objects in an image having image elements, the apparatus comprising:
means for identifying a position of one of the candidate objects in said image, wherein the position is based on a given image element among said image elements;
means for computing one or more response values at one or more neighboring image elements located near said position;
means for creating one or more clusters of image elements based on the computed response values; and
means for selecting one or more clusters of image elements as the object segmentation of the one or more candidate objects,
wherein the means for computing response values further comprises:
means for computing one or more divergent gradient field responses from said image using one or more template masks, and
wherein the means for creating clusters of image elements comprises means for performing tobogganing using said one or more divergent gradient field responses to produce one or more toboggan clusters.
CN2005800127887A 2004-02-23 2005-02-23 A system and method for toboggan based object segmentation using divergent gradient field response in images Expired - Fee Related CN1947151B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US54700204P 2004-02-23 2004-02-23
US60/547,002 2004-02-23
US11/062,411 2005-02-22
US11/062,411 US7526115B2 (en) 2004-02-23 2005-02-22 System and method for toboggan based object segmentation using divergent gradient field response in images
PCT/US2005/005694 WO2005083633A2 (en) 2004-02-23 2005-02-23 A system and method for toboggan based object segmentation using divergent gradient field response in images

Publications (2)

Publication Number Publication Date
CN1947151A CN1947151A (en) 2007-04-11
CN1947151B true CN1947151B (en) 2011-01-26

Family

ID=34864027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2005800127887A Expired - Fee Related CN1947151B (en) 2004-02-23 2005-02-23 A system and method for toboggan based object segmentation using divergent gradient field response in images

Country Status (8)

Country Link
US (1) US7526115B2 (en)
EP (1) EP1719080B1 (en)
JP (1) JP4879028B2 (en)
CN (1) CN1947151B (en)
AU (1) AU2005216314A1 (en)
CA (1) CA2557122C (en)
DE (1) DE602005009923D1 (en)
WO (1) WO2005083633A2 (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7480412B2 (en) * 2003-12-16 2009-01-20 Siemens Medical Solutions Usa, Inc. Toboggan-based shape characterization
US20060209063A1 (en) * 2004-10-12 2006-09-21 Jianming Liang Toboggan-based method for automatic detection and segmentation of objects in image data
US7912294B2 (en) * 2005-05-27 2011-03-22 Siemens Medical Solutions Usa, Inc. System and method for toboggan-based object detection in cutting planes
JP2007272466A (en) * 2006-03-30 2007-10-18 National Institute Of Advanced Industrial & Technology Multi-peak function segmentation method employing pixel based gradient clustering
JP4894369B2 (en) * 2006-06-19 2012-03-14 富士通株式会社 3D model image processing device
US8023703B2 (en) * 2006-07-06 2011-09-20 The United States of America as represented by the Secretary of the Department of Health and Human Services, National Institues of Health Hybrid segmentation of anatomical structure
US7961923B2 (en) * 2006-08-22 2011-06-14 Siemens Medical Solutions Usa, Inc. Method for detection and visional enhancement of blood vessels and pulmonary emboli
US7925065B2 (en) * 2006-08-22 2011-04-12 Siemens Medical Solutions Usa, Inc. Finding blob-like structures using diverging gradient field response
US7940977B2 (en) 2006-10-25 2011-05-10 Rcadia Medical Imaging Ltd. Method and system for automatic analysis of blood vessel structures to identify calcium or soft plaque pathologies
US7860283B2 (en) 2006-10-25 2010-12-28 Rcadia Medical Imaging Ltd. Method and system for the presentation of blood vessel structures and identified pathologies
US7873194B2 (en) 2006-10-25 2011-01-18 Rcadia Medical Imaging Ltd. Method and system for automatic analysis of blood vessel structures and pathologies in support of a triple rule-out procedure
US7983459B2 (en) 2006-10-25 2011-07-19 Rcadia Medical Imaging Ltd. Creating a blood vessel tree from imaging data
US7940970B2 (en) 2006-10-25 2011-05-10 Rcadia Medical Imaging, Ltd Method and system for automatic quality control used in computerized analysis of CT angiography
US8036440B2 (en) * 2007-02-05 2011-10-11 Siemens Medical Solutions Usa, Inc. System and method for computer aided detection of pulmonary embolism in tobogganing in CT angiography
US8494235B2 (en) * 2007-06-04 2013-07-23 Siemens Medical Solutions Usa, Inc. Automatic detection of lymph nodes
US20090067494A1 (en) * 2007-09-06 2009-03-12 Sony Corporation, A Japanese Corporation Enhancing the coding of video by post multi-modal coding
US8200466B2 (en) 2008-07-21 2012-06-12 The Board Of Trustees Of The Leland Stanford Junior University Method for tuning patient-specific cardiovascular simulations
US9405886B2 (en) 2009-03-17 2016-08-02 The Board Of Trustees Of The Leland Stanford Junior University Method for determining cardiovascular information
US8379985B2 (en) * 2009-07-03 2013-02-19 Sony Corporation Dominant gradient method for finding focused objects
US8157742B2 (en) 2010-08-12 2012-04-17 Heartflow, Inc. Method and system for patient-specific modeling of blood flow
US8315812B2 (en) 2010-08-12 2012-11-20 Heartflow, Inc. Method and system for patient-specific modeling of blood flow
US8548778B1 (en) 2012-05-14 2013-10-01 Heartflow, Inc. Method and system for providing information from a patient-specific model of blood flow
US10157467B2 (en) 2015-08-07 2018-12-18 Arizona Board Of Regents On Behalf Of Arizona State University System and method for detecting central pulmonary embolism in CT pulmonary angiography images
CN107169487B (en) * 2017-04-19 2020-02-07 西安电子科技大学 Salient object detection method based on superpixel segmentation and depth feature positioning
CN113223016B (en) * 2021-05-13 2024-08-20 上海西虹桥导航技术有限公司 Image segmentation method and device for plant seedlings, electronic equipment and medium

Citations (4)

Publication number Priority date Publication date Assignee Title
US5608812A (en) * 1990-09-14 1997-03-04 Fuji Photo Film Co., Ltd. Abnormal pattern detecting apparatus, pattern finding apparatus, and linear pattern width calculating apparatus
CN1168056A (en) * 1996-04-19 1997-12-17 菲利浦电子有限公司 Method of image segmentation
US6125215A (en) * 1995-03-29 2000-09-26 Fuji Photo Film Co., Ltd. Image processing method and apparatus
CN1402191A (en) * 2002-09-19 2003-03-12 上海交通大学 Multiple focussing image fusion method based on block dividing

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
JP2582665B2 (en) * 1990-09-14 1997-02-19 富士写真フイルム株式会社 Abnormal shadow detector
JP2582667B2 (en) * 1990-09-14 1997-02-19 富士写真フイルム株式会社 Linear pattern width calculator
JP2582664B2 (en) * 1990-09-14 1997-02-19 富士写真フイルム株式会社 Pattern recognition device
JP2582666B2 (en) * 1990-09-14 1997-02-19 富士写真フイルム株式会社 Abnormal shadow detector
JP2003515368A (en) 1999-11-24 2003-05-07 コンファーマ インコーポレイテッド Convolutional filtering of similar data to get an improved visual representation of the image.
US7043064B2 (en) * 2001-05-04 2006-05-09 The Board Of Trustees Of The Leland Stanford Junior University Method for characterizing shapes in medical images
JP2005506140A (en) * 2001-10-16 2005-03-03 ザ・ユニバーシティー・オブ・シカゴ Computer-aided 3D lesion detection method

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US5608812A (en) * 1990-09-14 1997-03-04 Fuji Photo Film Co., Ltd. Abnormal pattern detecting apparatus, pattern finding apparatus, and linear pattern width calculating apparatus
US6125215A (en) * 1995-03-29 2000-09-26 Fuji Photo Film Co., Ltd. Image processing method and apparatus
CN1168056A (en) * 1996-04-19 1997-12-17 菲利浦电子有限公司 Method of image segmentation
CN1402191A (en) * 2002-09-19 2003-03-12 上海交通大学 Multiple focussing image fusion method based on block dividing

Non-Patent Citations (5)

Title
Eric Mortensen, Bryan Morse, William Barrett, Jayaram Udupa. Adaptive Boundary Detection Using "Live-Wire" Two-Dimensional Dynamic Programming. IEEE, 1992, pp. 635-638. *
Hidefumi Kobatake, Yukiyasu Yoshinaga and Masayuki Murakami. Automatic Detection of Malignant Tumors on Mammogram. IEEE, 1994, pp. 407-410. *
John Fairfield. Toboggan Contrast Enhancement for Contrast Segmentation. IEEE, 1990, pp. 712-716. *
Mortensen E N et al. Toboggan-Based Intelligent Scissors with a Four-Parameter Edge Model. IEEE, 1999, pp. 452-458. *
Ibid.

Also Published As

Publication number Publication date
DE602005009923D1 (en) 2008-11-06
US7526115B2 (en) 2009-04-28
AU2005216314A1 (en) 2005-09-09
EP1719080A2 (en) 2006-11-08
US20050185838A1 (en) 2005-08-25
WO2005083633A2 (en) 2005-09-09
WO2005083633A3 (en) 2006-01-26
JP2007524488A (en) 2007-08-30
CN1947151A (en) 2007-04-11
EP1719080B1 (en) 2008-09-24
CA2557122C (en) 2011-04-19
JP4879028B2 (en) 2012-02-15
CA2557122A1 (en) 2005-09-09

Similar Documents

Publication Publication Date Title
CN1947151B (en) A system and method for toboggan based object segmentation using divergent gradient field response in images
JP4999163B2 (en) Image processing method, apparatus, and program
US7876938B2 (en) System and method for whole body landmark detection, segmentation and change quantification in digital images
US7602965B2 (en) Object detection using cross-section analysis
Alilou et al. A comprehensive framework for automatic detection of pulmonary nodules in lung CT images
JP4660546B2 (en) Method for characterizing objects in digitized images and computer-readable program storage
CN103164852B (en) Image processing apparatus and image processing method
EP1975877B1 (en) Method for point-of-interest attraction in digital images
CN101014977A (en) Lesion boundary detection
US9042618B2 (en) Method and system for detection 3D spinal geometry using iterated marginal space learning
El-Baz et al. Three-dimensional shape analysis using spherical harmonics for early assessment of detected lung nodules
JP2005506140A5 (en)
US20090016583A1 (en) System and Method for Detecting Spherical and Ellipsoidal Objects Using Cutting Planes
US11783476B2 (en) System and method for analyzing three-dimensional image data
WO2021183765A1 (en) Automated detection of tumors based on image processing
Osman et al. Lung nodule diagnosis using 3D template matching
US7460701B2 (en) Nodule detection
Liu et al. Accurate and robust pulmonary nodule detection by 3D feature pyramid network with self-supervised feature learning
AU2005220588A1 (en) Using corner pixels as seeds for detection of convex objects
CN112686932B (en) Image registration method for medical image, image processing method and medium
WO2010034968A1 (en) Computer-implemented lesion detection method and apparatus
GB2457022A (en) Creating a fuzzy inference model for medical image analysis
US8165375B2 (en) Method and system for registering CT data sets
Bruntha et al. Application of YOLOv3 in Lung Nodule Detection
Peskin et al. An Automated Method for Locating Phantom Nodules in Anthropomorphic Thoracic Phantom CT Studies

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200303

Address after: Erlangen

Patentee after: Siemens Healthcare GmbH

Address before: Pennsylvania, USA

Patentee before: Siemens Medical Solutions USA, Inc.

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110126

Termination date: 20210223

CF01 Termination of patent right due to non-payment of annual fee