WO2016086744A1 - A method and system for image processing - Google Patents

A method and system for image processing

Info

Publication number
WO2016086744A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
region
segmentation
present disclosure
data
Prior art date
Application number
PCT/CN2015/093506
Other languages
French (fr)
Inventor
Lilong WANG
Jieyan MA
Jiyong Wang
Ce Wang
Lianghai JIN
Zhao Zhang
Fei Wang
Cheng Li
Qi DUAN
Wenqing Liu
Chuanfeng LU
Jie QU
Yong Luo
Liangliang Pan
Rui Wang
Shili CAO
Yufei MAO
Bin Tan
Wenhua Wang
Shuai WANG
Fanfan ZHANG
Original Assignee
Shanghai United Imaging Healthcare Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201410720069.1A external-priority patent/CN105719333B/en
Priority claimed from CN201410840692.0A external-priority patent/CN105809656B/en
Priority claimed from CN201510216037.2A external-priority patent/CN104766340B/en
Priority claimed from CN201510224781.7A external-priority patent/CN104809730B/en
Priority claimed from CN201510310371.4A external-priority patent/CN104851107B/en
Priority claimed from CN201510313660.XA external-priority patent/CN104851108B/en
Priority claimed from CN201510390970.1A external-priority patent/CN104899926B/en
Priority claimed from CN201510526053.1A external-priority patent/CN105096332B/en
Priority to US15/323,035 priority Critical patent/US10181191B2/en
Priority to GB1709225.5A priority patent/GB2547399B/en
Priority to EP15865201.6A priority patent/EP3213296B1/en
Application filed by Shanghai United Imaging Healthcare Co., Ltd. filed Critical Shanghai United Imaging Healthcare Co., Ltd.
Publication of WO2016086744A1 publication Critical patent/WO2016086744A1/en
Priority to US16/247,080 priority patent/US11094067B2/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/60Memory management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone
    • G06T2207/30012Spine; Backbone
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30081Prostate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular
    • G06T2207/30104Vascular flow; Blood flow; Perfusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • G06V2201/033Recognition of patterns in medical or anatomical images of skeletal patterns

Definitions

  • the present disclosure generally relates to image processing, and more particularly, to a method and system for image segmentation.
  • Image processing has a significant impact in the medical field, such as Digital Subtraction Angiography (DSA), Magnetic Resonance Imaging (MRI), Magnetic Resonance Angiography (MRA), Computed Tomography (CT), Computed Tomography Angiography (CTA), Ultrasound Scanning (US), Positron Emission Tomography (PET), Single-Photon Emission Computerized Tomography (SPECT), CT-MR, CT-PET, CE-SPECT, DSA-MR, PET-MR, PET-US, SPECT-US, TMS (transcranial magnetic stimulation)-MR, US-CT, US-MR, X-ray-CT, X-ray-MR, X-ray-portal, X-ray-US, Video-CT, Video-US, or the like, or any combination thereof.
  • An image segmentation (or “recognition,” “classification,” “extraction,” “identification,” etc.) may be performed to establish a realistic subject model by dividing or partitioning a medical image into one or more sub-regions. Segmentation is a procedure of image processing and a means to extract quantitative information relating to a target subject from a medical image. Different subjects may be segmented in different ways to improve the accuracy of the segmentation. The accuracy of an image segmentation may affect the accuracy of a disease diagnosis. Thus, it is desirable to improve the accuracy of image segmentation.
  • an image segmentation method and apparatus are provided.
  • the image segmentation method may include one or more of the following operations.
  • Volume data of a medical image may be acquired.
  • Seed points for bone growing may be acquired.
  • One or more parameters for bone growing may be initialized or updated.
  • the parameters may include, for example, a first threshold, a second threshold, and a boundary parameter.
  • the first threshold may be fixed.
  • the second threshold and the boundary parameter may be variable.
  • Bone growing may be performed based on, for example, the first threshold, the second threshold, and the boundary parameter to acquire the pixels that may belong to or are associated with a bone tissue.
  • the first threshold may be used to perform a preliminary determination of a bone tissue.
  • the second threshold may be used to determine a suspicious bone tissue.
  • the boundary parameter may be used to determine the boundary of a bone tissue.
  • the increment of pixels between the nth bone growing and the (n-1)th bone growing in a bone tissue may be assessed. If the increment exceeds a predefined threshold, the parameters used in the (n-1)th bone growing may be used to perform a new bone growing; otherwise, the parameters may be updated.
  • the n may be an integer greater than or equal to 2.
  • the bone growing performed based on the first threshold, the second threshold, and the boundary parameter to acquire the pixels that may belong to the bone tissue may include one or more of the following operations.
  • a seed point may be put into a growing point sequence.
  • a growing point may be removed from the growing point sequence and the growing point may be determined as a bone tissue.
  • the pixels in the vicinity of the growing point may be identified based on, for example, the first threshold, the second threshold, and/or the boundary parameter.
  • the pixels that satisfy all the three parameters may be put into the growing point sequence.
  • the pixels that satisfy only the first threshold may be determined as a bone tissue. It may be determined whether the growing points in the growing point sequence are exhausted. If the growing points are not exhausted, the bone may continue to grow. If the growing points are exhausted, the bone may stop growing and all the pixels belonging to the bone tissue may be acquired.
  • the pixels that meet the first threshold may be determined as a bone tissue, and the pixels that meet only the first threshold may be determined as a suspicious bone tissue.
  • the pixels that meet the first threshold and the second threshold may be determined as a decided bone tissue.
  • the pixels that meet the first threshold and the boundary parameter, but not the second threshold, may be determined as a boundary of a bone tissue.
  • the second threshold may be greater than the first threshold, and the second threshold may decrease as the iteration proceeds.
  • the boundary parameter may decrease as the iteration proceeds.
  • the parameters used in the nth bone growing may be used to perform a new bone growing.
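  • A minimal sketch of the bone-growing loop described above is given below, assuming the volume is a 3D NumPy array of CT values in Hounsfield units and that the boundary parameter is evaluated with a Laplacian operator; the threshold values, neighbor connectivity, and the way the three parameters are combined are illustrative readings, not the patented implementation.

```python
# Illustrative sketch of the bone growing described above (not the patented implementation).
from collections import deque

import numpy as np
from scipy import ndimage


def grow_bone(volume, seeds, first_thr=80, second_thr=305, boundary_thr=120):
    """Grow bone tissue from seed voxels.

    first_thr    -- fixed preliminary bone threshold (HU), assumed value
    second_thr   -- threshold for "decided" bone tissue, assumed value
    boundary_thr -- threshold on a Laplacian response marking bone boundaries
    """
    laplace = ndimage.laplace(volume.astype(np.float32))
    bone = np.zeros(volume.shape, dtype=bool)
    queue = deque(seeds)                      # the "growing point sequence"
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

    while queue:                              # stop when the growing points are exhausted
        z, y, x = queue.popleft()
        bone[z, y, x] = True
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if not (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]) or bone[nz, ny, nx]:
                continue
            value = volume[nz, ny, nx]
            if value < first_thr:             # fails the preliminary bone test
                continue
            is_decided = value >= second_thr                  # decided bone tissue
            is_boundary = abs(laplace[nz, ny, nx]) >= boundary_thr
            bone[nz, ny, nx] = True           # at least suspicious bone tissue
            if is_decided and not is_boundary:
                queue.append((nz, ny, nx))    # keep growing from decided, non-boundary voxels
    return bone
```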
  • an image segmentation apparatus may include a processing unit, a storage unit and a display unit.
  • the storage unit may be configured to store computer executable commands.
  • the processing unit may be configured to perform one or more of the following operations based on the commands stored in the storage unit.
  • Volume data of a medical image may be acquired. Seed points for bone growing may be acquired.
  • the parameters for bone growing may be initialized or updated.
  • the parameters may include a first threshold, a second threshold, and a boundary parameter.
  • the first threshold may be fixed.
  • the second threshold and the boundary parameter may be variable.
  • a bone growing may be performed based on the first threshold, the second threshold, and the boundary parameter to acquire all pixels that belong to a bone tissue.
  • the first threshold may be used to perform a preliminary determination of a bone tissue.
  • the second threshold may be used to determine a suspicious bone tissue.
  • the boundary parameter may be used to determine the boundary of a bone tissue.
  • the increment of pixels between the nth bone growing and the (n-1)th bone growing in a bone tissue may be assessed. If the increment exceeds a predefined threshold, the parameters used in the (n-1)th bone growing may be used to perform a new bone growing; otherwise, the parameters may be updated.
  • n may be an integer greater than or equal to 2.
  • a bronchus extracting method is provided.
  • the bronchial tree may be labeled as or divided into different levels from the trachea and primary bronchi to the lobar bronchi and segmental bronchi, etc.
  • the trachea may be 0-level
  • the primary bronchi may be 1-level
  • and so on. Firstly, a low-level bronchus may be extracted from a chest CT image. Then a terminal bronchiole may be extracted based on the low-level bronchus by an energy-based 3D reconstruction method. The low-level bronchus and the terminal bronchiole may be fused together to generate a bronchus.
  • a bronchus extracting system may include a first module, a second module, and a third module.
  • the first module may be configured to extract a low-level bronchus.
  • the second module may be configured to extract a terminal bronchiole based on the extracted low-level bronchus by an energy-based 3D reconstruction method.
  • the third module may be configured to fuse the low-level bronchus and the terminal bronchiole together to generate a bronchus.
  • the method of extracting a low-level bronchus may include a region growing based method and a morphology based method.
  • the energy-based 3D reconstruction method may include assessing whether a voxel belongs to a terminal bronchiole according to its growing potential energy in different states.
  • the energy-based 3D reconstruction method may include adjusting a weighted value controlling the growing within a bronchus to guarantee a convergent iteration in each growing potential energy computing process.
  • the energy-based 3D reconstruction method may include comparing the volume or radius of a connected region in the terminal bronchiole with a threshold. In some embodiments, if the volume or radius exceeds the threshold, the connected region may be removed from the result of the present iterative process.
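  • A minimal sketch of the leakage check described above is given below: connected regions in a candidate terminal-bronchiole mask whose volume or sphere-equivalent radius exceeds a limit are dropped from the current iteration. The limits and the sphere-equivalent radius are illustrative assumptions.

```python
# Illustrative sketch of the connected-region check described above.
import numpy as np
from scipy import ndimage


def remove_leaked_regions(candidate_mask, max_volume=2000, max_radius=8.0):
    """Drop connected regions that are too large to be terminal bronchioles.

    max_volume and max_radius are illustrative limits in voxels.
    """
    labels, n = ndimage.label(candidate_mask)
    kept = np.zeros_like(candidate_mask, dtype=bool)
    for region_id in range(1, n + 1):
        region = labels == region_id
        volume = int(region.sum())
        radius = (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)  # sphere-equivalent radius
        if volume <= max_volume and radius <= max_radius:
            kept |= region                                      # keep plausible bronchiole regions
    return kept
```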
  • a method for image segmentation of tumors may include one or more of the following operations.
  • a region of interest (ROI) where the tumor may be located may be determined.
  • a coarse segmentation of a tumor based on a feature classification method may be performed.
  • a fine segmentation of the tumor may be performed on the result of the coarse segmentation using a level set method (LSM) .
  • a positive seed point may be sampled randomly among the voxels in the ROI, where a tumor may be located.
  • a negative seed point may also be sampled randomly among the voxels in the ROI, where the tumor may be absent. Training may be performed using the positive seed point and/or the negative seed point, and a linear or nonlinear classifier may be acquired. The classification of each voxel may be performed to acquire the result of the coarse segmentation.
  • the method of determining where a tumor may be located may include the following operations: a line segment of length d1 on a slice of tumor may be drawn by the user, which may traverse the tumor.
  • the line segment may be drawn on the slice which has a relatively larger cross-section compared with other slices.
  • a line segment may be a part of a line that is bounded by two distinct end points.
  • a cube may be determined based on the line segment, and the region within the cube may be deemed as the ROI, where the tumor may be located.
  • the region where the tumor may be located may be completely within the ROI defined by the cube.
  • the result of the coarse segmentation may be used to initialize a distance function. Multiple iterations of the distance function may be performed to acquire the fine segmentation of the tumor.
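  • A minimal sketch of the coarse step described above is given below: positive and negative seed voxels are sampled at random in the ROI, a linear classifier is trained on them, and every ROI voxel is classified. Using raw intensity as the only feature and scikit-learn's LogisticRegression are assumptions; the fine level-set refinement is not shown.

```python
# Illustrative sketch of the coarse tumor segmentation described above (not the patented method).
import numpy as np
from sklearn.linear_model import LogisticRegression


def coarse_tumor_segmentation(roi, tumor_prior, n_seeds=200, rng=None):
    """roi: 3D intensity array; tumor_prior: boolean mask of where the tumor is likely located."""
    rng = np.random.default_rng(rng)
    pos = np.argwhere(tumor_prior)                 # candidate positive seed locations
    neg = np.argwhere(~tumor_prior)                # candidate negative seed locations
    pos = pos[rng.choice(len(pos), size=min(n_seeds, len(pos)), replace=False)]
    neg = neg[rng.choice(len(neg), size=min(n_seeds, len(neg)), replace=False)]

    def features(idx):
        return roi[idx[:, 0], idx[:, 1], idx[:, 2]].reshape(-1, 1)   # intensity-only features (assumed)

    X = np.vstack([features(pos), features(neg)])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])

    clf = LogisticRegression().fit(X, y)           # linear classifier trained on the seed samples
    labels = clf.predict(roi.reshape(-1, 1))       # classify every voxel in the ROI
    return labels.reshape(roi.shape).astype(bool)  # coarse result, later refined by a level set
```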
  • a method for image segmentation of lung may include one or more of the following operations.
  • a coarse segmentation of the lung may be performed, and a target region may be acquired.
  • the target region may be divided into a central region, and a boundary region surrounding the central region.
  • the voxels located in the central region may be identified as part of one or more nodules; the voxels located in the boundary region may need to be further examined to determine whether they are part of one or more nodules.
  • the voxels located outside of the target region may not be identified as part of a nodule.
  • a fine segmentation of the lung may be performed using a classifier to classify and label each voxel, acquiring a parting segmentation region, which may be separated from the boundary region.
  • the target region and the parting segmentation region may be fused together.
  • the shape of the boundary region may be an annulus.
  • the coarse segmentation of lung may be based on a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof.
  • the coarse segmentation of the lung may be performed based on region growing segmentation, which may include the following operations.
  • a long axis may be determined by a user on the slice with the maximum cross-section of a ground-glass nodule (GGN) in the lung image.
  • An ROI may be determined based on the long axis. Filtering may be performed in an ROI to acquire a filtered image. A first region growing on the filtered image may be performed. A dynamic segmentation region may be acquired. Gray-scale histogram in the ROI may be calculated to acquire a histogram vector image. A second region growing on the histogram vector image may be performed to acquire a static segmentation region. The dynamic segmentation region and the static segmentation region may be fused together to acquire the target region.
  • the fine segmentation of the lung may be based on the result of the coarse segmentation of lung.
  • the fine segmentation of the lung may be performed using a classifier to classify and label the voxels in the boundary region, which may include the following operations.
  • An ROI may be determined. Filtering may be performed on the images within the ROI to acquire a feature vector image. The feature vector image may be combined with the weights of a feature vector, which may have been trained offline.
  • a linear discriminant analysis (LDA) probability field image may be acquired. Region growing on the LDA probability field image may be performed. Parting segmentation regions with labels of the classifier may be acquired. In addition, the parting segmentation region may be fused together with the target region.
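  • A minimal sketch of this fine-segmentation step is given below: a per-voxel feature vector is combined with offline-trained LDA weights to give a probability-field image, and a connected-component step stands in for the region growing on that field. The feature set (intensity plus Gaussian-smoothed intensity), the weights, and the threshold are placeholders.

```python
# Illustrative sketch of the LDA probability field and region growing described above.
import numpy as np
from scipy import ndimage


def lda_probability_field(roi, weights, bias):
    """roi: 3D intensity array; weights, bias: offline-trained LDA parameters (assumed given)."""
    smoothed = ndimage.gaussian_filter(roi.astype(np.float32), sigma=1.0)
    features = np.stack([roi.astype(np.float32), smoothed], axis=-1)   # per-voxel feature vectors
    score = features @ np.asarray(weights, dtype=np.float32) + bias    # linear discriminant score
    return 1.0 / (1.0 + np.exp(-score))                                # map scores to [0, 1]


def grow_on_probability_field(prob, seed, threshold=0.5):
    """Keep the connected component of high-probability voxels that contains the seed voxel."""
    labels, _ = ndimage.label(prob >= threshold)
    return labels == labels[seed]        # assumes the seed lies inside the thresholded field
```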
  • an image data processing method may include one or more of the following operations.
  • a data set of a three-dimensional image may be acquired. Different components in the data set may be identified. The different components may be stored in different storages.
  • a rendering instruction may be provided by a user. The data in different storages may be rendered by corresponding image processors. The rendered data may be displayed.
  • the data set may include a plurality of voxels.
  • the identification of the different components in the data set may be based on at least one of one-dimensional feature information or multi-dimensional feature information of the voxels.
  • the data set of the three-dimensional image may include medical scanning data of an object.
  • the different components may include a removable region and a reserved region based on an instruction, for example, a rendering instruction, provided by a user.
  • based on the rendering instruction, the data may be rendered accordingly.
  • the rendering of the data may include establishing thread controls corresponding to the CPU task queue and the GPU task queue, respectively, parsing the rendering instruction of the user, and rendering after distributing the rendering task to the task queue of the CPU or the GPU.
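  • A minimal sketch of these thread controls is given below: one control thread per task queue (CPU and GPU), a parser that routes each rendering instruction to one of the queues, and workers that execute the queued tasks. The routing rule and the render callbacks are placeholders, not the patented implementation.

```python
# Illustrative sketch of per-queue thread controls for CPU and GPU rendering tasks.
import queue
import threading

cpu_tasks: "queue.Queue[dict]" = queue.Queue()
gpu_tasks: "queue.Queue[dict]" = queue.Queue()


def distribute(instruction: dict) -> None:
    """Parse a user rendering instruction and route it to the CPU or GPU task queue."""
    target = gpu_tasks if instruction.get("use_gpu", True) else cpu_tasks
    target.put(instruction)


def worker(tasks: "queue.Queue[dict]", render) -> None:
    """Control thread bound to one task queue; renders tasks until a None sentinel arrives."""
    while True:
        task = tasks.get()
        if task is None:
            break
        render(task)
        tasks.task_done()


cpu_thread = threading.Thread(target=worker, args=(cpu_tasks, lambda t: print("CPU render", t)))
gpu_thread = threading.Thread(target=worker, args=(gpu_tasks, lambda t: print("GPU render", t)))
cpu_thread.start()
gpu_thread.start()

distribute({"component": "removable region", "use_gpu": True})
distribute({"component": "reserved region", "use_gpu": False})

for q in (cpu_tasks, gpu_tasks):
    q.put(None)   # stop the control threads
```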
  • an image data processing system may include an acquisition module, a segmentation module, a storage control module, a rendering control module, and a display module.
  • the acquisition module may be configured to acquire data set of the three-dimensional image.
  • the data set may be comprised of voxels.
  • the segmentation module may be configured to identify different components in the data set.
  • the storage control module may be configured to control the storing of the data of different components into different storages.
  • the rendering control module may be configured to receive the rendering instruction input by the user, and to control the rendering of the data in the corresponding storages by the corresponding image processors.
  • the display module may be configured to display the data after rendering.
  • the segmentation module may include an identification block.
  • the identification block may be configured to identify at least one of the one dimensional feature information and multi-dimensional feature information of the voxels.
  • the data set of the three-dimensional image may include medical scanning data of an object; the different components may include a removable region and a reserved region based on requirement of the user.
  • the rendering control module may include a thread control block, a distribution block.
  • the thread control block may be configured to establish control threads corresponding to the task queues of the CPU and the GPU, respectively.
  • the distribution block may be configured to parse the rendering instructions of the user, distribute the rendering task to the task queue of the CPU or the GPU according to the instructions of the user, and execute the image rendering.
  • an image data processing method may include providing volume data and segmentation information of the volume data, rendering the volume data.
  • the segmentation information of the volume data may be pre-processed before the rendering.
  • the pre-processing of the volume data may include generating boundary data corresponding to the spatial boundary structure of each tissue, loading the boundary data and the volume data into an image processing unit, and rendering iteratively.
  • the boundary structure may be a two-dimensional boundary line, or a three-dimensional boundary surface.
  • the boundary structure may be a mesh grid structure.
  • the iterative rendering may be based on depth information.
  • Each iterative rendering may be based on the depth information of last iterative rendering.
  • the image data processing method may further include providing an observation point, and observation light of different directions from the observation point.
  • the observation light may intersect the boundary structure at several intersection points.
  • the intersection points in a same depth may be located in a same depth surface.
  • the depth may be the distance between the intersection point and the observation point.
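  • A minimal sketch of this depth bookkeeping is given below: for each observation ray, the intersections with the boundary structure are sorted by their distance from the observation point, so that the k-th intersections across all rays form one depth surface. The list-of-intersections data layout is an assumption.

```python
# Illustrative sketch of grouping ray/boundary intersections into depth surfaces.
import numpy as np


def depth_surfaces(observation_point, intersections_per_ray):
    """observation_point: (3,) array; intersections_per_ray: list of (n_i, 3) arrays, one per ray."""
    obs = np.asarray(observation_point, dtype=np.float64)
    sorted_rays = []
    for points in intersections_per_ray:
        points = np.asarray(points, dtype=np.float64)
        depths = np.linalg.norm(points - obs, axis=1)     # depth = distance to the observation point
        sorted_rays.append(points[np.argsort(depths)])    # intersections ordered by depth along the ray
    # the k-th intersection of every ray (where it exists) belongs to the k-th depth surface
    max_depth = max(len(p) for p in sorted_rays)
    return [[p[k] for p in sorted_rays if len(p) > k] for k in range(max_depth)]
```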
  • an image processing method may include one or more of the following operations: partitioning a region of interest into three regions, namely a first region, a second region, and a third region.
  • the characteristic value T_i of the pixels of the third region may be calculated based on a first method. The characteristic value T_i of the pixels of the third region may be assessed based on a second method to generate a filled image.
  • the first region may be the region whose pixel values are 1.
  • the characteristic value of the first region may be set as T_high.
  • the second region may be the boundary region of the ROI, and the characteristic value of the second region may be set as T_low.
  • T_high - T_low > 100.
  • the second method may include one or more of the following operations: acquiring the characteristic value T_i of a pixel i in the third region. If B ≤ T_i ≤ T_high, the pixel may be extracted. Otherwise, the value of the pixel may be set as the background color.
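  • A minimal sketch of the second method described above is given below: third-region pixels whose characteristic value T_i lies between the lower bound B and T_high are extracted, and all other third-region pixels are set to the background color. B, T_high, and the background value are assumed inputs.

```python
# Illustrative sketch of the characteristic-value extraction described above.
import numpy as np


def fill_third_region(image, third_region_mask, characteristic, lower_b, t_high, background=0):
    """image: 2D array; third_region_mask: boolean mask; characteristic: per-pixel T_i values."""
    filled = image.copy()
    keep = third_region_mask & (characteristic >= lower_b) & (characteristic <= t_high)
    drop = third_region_mask & ~keep
    filled[drop] = background      # non-extracted third-region pixels become background
    return filled
```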
  • an image processing apparatus may include a distribution unit, a first determination unit, and a second determination unit.
  • the distribution unit may be configured to partition a region of interest into three regions, a first region, a second region, and a third region.
  • the first determination unit may be configured to calculate the characteristic value T_i of a pixel i in the third region.
  • the second determination unit may be configured to assess the characteristic value T_i of a pixel i in the third region to generate a filled image.
  • a spine positioning method may include one or more of the following operations.
  • a CT image sequence may be loaded and the CT image sequence may include a plurality of CT images.
  • the bone regions of every CT image in the CT image sequence may be segmented.
  • the bone region with the maximum area may be determined as a spine region.
  • An anchor point of the spine region in every CT image may be determined, and the anchor point may be validated based on some known information.
  • the anchor point may be calibrated.
  • the loading of the CT image sequence may include one or more of the following operations.
  • a preliminary spine region may be selected after the CT image sequence is loaded.
  • the preliminary region may be preprocessed to segment one or more bone regions.
  • the validation of the anchor point may include one or more of the following operations. If the y coordinate of the anchor point is greater than H/4, and the shift of the x coordinate from the central position is less than H/4, the anchor point may be valid. Otherwise, the anchor point may be invalid.
  • the calibration of the anchor point may include one or more of the following operations: calibrating the anchor points whose coordinates are abnormal, and calibrating the anchor points that lack coordinates.
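  • A minimal sketch of the validation rule given above is shown below: an anchor point is valid when its y coordinate exceeds H/4 and its x coordinate deviates from the central position by less than H/4. Interpreting H as the image height and the central position as half the image width are assumptions.

```python
# Illustrative sketch of the anchor-point validation rule described above.
def anchor_point_is_valid(x, y, image_height, image_width):
    h = image_height                      # H is taken to be the image height (assumption)
    center_x = image_width / 2.0          # central position taken as half the image width (assumption)
    return y > h / 4.0 and abs(x - center_x) < h / 4.0
```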
  • an image segmentation method may include one or more of the following operations.
  • a first image may be transformed into a first 2D image.
  • the pixel values of the 2D image may correspond to those of the first image.
  • the first image may be a binary image that is generated based on an ROI of a first CT image.
  • a first region may be acquired based on the 2D image.
  • the first region may be the highlighted part of the first 2D image.
  • a second region may be acquired based on the first CT image.
  • the second region may be the highlighted part of each 2D slice of the first CT image corresponding to the first region spatially.
  • a second CT image may be calibrated based on the second region to generate a calibrated second CT image.
  • the first CT image and the second CT image are both 3D CT images, and have the same size and number of 2D slices.
  • the first CT image may be an original CT image of a lung
  • the second CT image may be a segmented CT image based on the original CT image of a lung.
  • the transformation of the first image into the first 2D image may include one or more of the following operations.
  • the first image may be projected along the human body to generate the 2D image.
  • the pixel values of the 2D image may correspond to those of the first image.
  • the highlighted part of the 2D image may be acquired based on clustering, or threshold segmentation.
  • the first region may be the highlighted part that has the most pixels among multiple highlighted parts.
  • the regions corresponding to the first region in the slices of the first image may have the same spatial position with the first region.
  • the second region may be acquired by performing threshold segmentation on the region of every 2D slice corresponding to the first region of the first CT image.
  • the calibration of the second CT image may include setting the pixels of the regions corresponding to the second region in every 2D slice of the second CT image as the background color of the second CT image.
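  • A minimal sketch of this workflow is given below: the binary first image is projected along one axis into a 2D image, its largest highlighted component becomes the first region, the spatially corresponding part of every slice of the first CT image is thresholded to give the second region, and those voxels are reset to the background color in the second CT image. The projection axis and the highlight threshold are assumptions.

```python
# Illustrative sketch of the projection-based calibration described above.
import numpy as np
from scipy import ndimage


def calibrate_second_ct(first_binary, first_ct, second_ct, highlight_thr, background=0):
    """first_binary, first_ct, second_ct: 3D arrays of the same shape (slices, rows, cols)."""
    projection = first_binary.sum(axis=0) > 0                 # project along the body (slice) axis
    labels, n = ndimage.label(projection)
    if n == 0:
        return second_ct.copy()
    sizes = ndimage.sum(projection, labels, index=range(1, n + 1))
    first_region = labels == (int(np.argmax(sizes)) + 1)      # highlighted part with the most pixels

    # second region: highlighted voxels of the first CT image in the columns under the first region
    second_region = (first_ct > highlight_thr) & first_region[np.newaxis, :, :]
    calibrated = second_ct.copy()
    calibrated[second_region] = background                    # reset calibrated voxels to background
    return calibrated
```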
  • an image segmentation apparatus may include a transformation unit, a first acquisition unit, a second acquisition unit, a calibration unit.
  • the transformation unit may be configured to transform the first image into a first 2D image.
  • the pixel values of the 2D image may correspond to those of the first image.
  • the first acquisition unit may be configured to acquire a first region, the first region may be the highlighted part of the 2D image.
  • the second acquisition unit may be configured to acquire a second region, the second region may be the highlighted part of each 2D slice of the first CT image corresponding to the first region spatially.
  • the calibration unit may be configured to calibrate the second CT image based on the second region to generate a calibrated second CT image.
  • the first CT image and the second CT image are both 3D CT images, and have the same size and number of 2D slices.
  • the image segmentation apparatus may further include a determination unit.
  • the determination unit may be configured to determine the highlighted part that has the most pixels among multiple highlighted parts as the first region.
  • the calibration unit may further include a pixel value reset unit.
  • the pixel value reset unit may be configured to set the pixels of the regions corresponding to the second region in every 2D slice of the second CT image as the background color of the second CT image.
  • a hepatic artery segmentation method is provided.
  • CT data including a CT image and a corresponding liver mask may be acquired. Then a seed point of the hepatic artery may be selected.
  • a first binary volume data and a second binary volume data may be generated by a segmentation method. The segmentation may be performed by segmenting the CT image data based on the CT value of the seed point.
  • the first binary volume data may include a backbone and a rib connected therewith.
  • the second binary volume data may include a backbone, a rib connected therewith and a hepatic artery.
  • a third binary volume data may be generated by removing the part identical to the first binary volume data from the second binary volume data.
  • the hepatic artery may be generated through a 3D region growing in the third binary volume data based on the seed point selected before.
  • the first binary volume data may be generated by performing a threshold segmentation for backbone on the CT image data.
  • the second binary volume data may be generated by performing a threshold segmentation for hepatic artery on the CT image data.
  • the seed point selecting method may further include: S11. Determining the slices to select the seed point of the hepatic artery; S12. Selecting the seed point and computing its CT value.
  • the generating method of the first binary volume data may further include: performing a coarse segmentation on the original CT image data and acquiring a bone binary volume data including a bone and other tissue; subdividing the bone binary volume data into region slices and re-segmenting the region slices by using a superposition strategy to acquire the first binary volume data including a backbone and a rib connected therewith.
  • the hepatic artery segmentation method may further include post-processing the hepatic artery.
  • the post-processing method may include an artery repair.
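  • A minimal sketch of this pipeline is given below: a bone-only binary volume and a bone-plus-artery binary volume are obtained by thresholding around the seed's CT value, the shared part is removed, and a 3D region growing from the seed in the remaining volume yields the artery. The thresholds and the connected-component stand-in for region growing are assumptions.

```python
# Illustrative sketch of the hepatic artery segmentation described above.
import numpy as np
from scipy import ndimage


def segment_hepatic_artery(ct, seed, bone_thr=300, artery_margin=50):
    """ct: 3D CT array; seed: (z, y, x) index of the selected hepatic-artery seed point."""
    seed_value = ct[seed]                                   # CT value of the selected seed point
    first_binary = ct >= bone_thr                           # backbone and connected ribs (assumed threshold)
    second_binary = ct >= (seed_value - artery_margin)      # backbone, ribs and hepatic artery
    third_binary = second_binary & ~first_binary            # remove the part shared with first_binary

    labels, _ = ndimage.label(third_binary)                 # 3D region growing from the seed point
    return labels == labels[seed]                           # assumes the seed lies within third_binary
```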
  • FIG. 1 is a flowchart illustrating a method for image processing according to some embodiments of the present disclosure
  • FIG. 2 is a block diagram depicting an image processing system according to some embodiments of the present disclosure
  • FIG. 3 is a flowchart illustrating a process for image segmentation according to some embodiments of the present disclosure
  • FIG. 4 illustrates a process for region growing according to some embodiments of the present disclosure
  • FIG. 5 is a flowchart illustrating a bronchus extracting method according to some embodiments of the present disclosure
  • FIG. 6 is a flowchart illustrating a terminal bronchiole extracting method according to some embodiments of the present disclosure
  • FIG. 7 is a block diagram depicting an image processing system for extracting a bronchus according to some embodiments in the present disclosure
  • FIG. 8 is a block diagram illustrating a processing unit of image processing according to some embodiments of the present disclosure.
  • FIG. 9 is a flowchart illustrating a process for image segmentation according to some embodiments of the present disclosure.
  • FIG. 10 is a flowchart illustrating a process for coarse segmentation of tumors according to some embodiments of the present disclosure
  • FIG. 11 is a flowchart illustrating a process for lung segmentation according to some embodiments of the present disclosure
  • FIG. 12 is a flowchart illustrating a process for coarse segmentation of lung according to some embodiments of the present disclosure
  • FIG. 13 is a flowchart illustrating a process for fine segmentation of a lung image according to some embodiments of the present disclosure
  • FIG. 14 is a flowchart illustrating a method for image processing according to some embodiments of the present disclosure.
  • FIG. 15 is a block diagram depicting an image processing system according to some embodiments of the present disclosure.
  • FIG. 16 is a flowchart illustrating a method for image processing according to some embodiments of the present disclosure.
  • FIG. 17 is a flowchart illustrating an image identification method according to some embodiments of the present disclosure.
  • FIG. 18 is a block diagram depicting an image processing system according to some embodiments of the present disclosure.
  • FIG. 19 is a block diagram depicting a segmentation module according to some embodiments of the present disclosure.
  • FIG. 20 is a block diagram depicting an identification block according to some embodiments of the present disclosure.
  • FIG. 21 is a block diagram depicting a rendering control module according to some embodiments of the present disclosure.
  • FIG. 22 is a flowchart illustrating a method for image processing according to some embodiments of the present disclosure.
  • FIG. 23 is an illustration of rendering tissues by way of an iterative rendering method according to some embodiments of the present disclosure.
  • FIG. 24 is a flowchart illustrating a process for post segmentation processing according to some embodiments of the present disclosure
  • FIG. 25 is a flowchart illustrating a process for post segmentation processing according to some embodiments of the present disclosure
  • FIG. 26 is a block diagram of an image processing system according to some embodiments of the present disclosure.
  • FIG. 27 is a flowchart illustrating a process for spine location according to some embodiments of the present disclosure.
  • FIG. 28 is a flowchart illustrating a process of 3D image segmentation according to some embodiments of the present disclosure
  • FIG. 29 is a flowchart illustrating a process of 3D image segmentation according to some embodiments of the present disclosure.
  • FIG. 30 illustrates a block diagram of an image processing system according to some embodiments of the present disclosure
  • FIG. 31 is a flowchart illustrating a method for hepatic artery segmentation according to some embodiments of the present disclosure.
  • FIG. 32 is a flowchart illustrating a method for hepatic artery segmentation according to some embodiments of the present disclosure.
  • the terms “system,” “unit,” “module,” and/or “block” used herein are one method to distinguish different components, elements, parts, sections, or assemblies of different levels in ascending order. However, the terms may be replaced by other expressions if they achieve the same purpose.
  • an image segmentation (or “recognition,” “classification,” “extraction,” etc.) may be performed to establish realistic subject models by dividing or partitioning a medical image into one or more constituent sub-regions.
  • the medical imaging system may be of various modalities including Digital Subtraction Angiography (DSA), Magnetic Resonance Imaging (MRI), Magnetic Resonance Angiography (MRA), Computed Tomography (CT), Computed Tomography Angiography (CTA), Ultrasound Scanning (US), Positron Emission Tomography (PET), Single-Photon Emission Computerized Tomography (SPECT), CT-MR, CT-PET, CE-SPECT, DSA-MR, PET-MR, PET-US, SPECT-US, TMS (transcranial magnetic stimulation)-MR, US-CT, US-MR, X-ray-CT, X-ray-MR, X-ray-portal, X-ray-US, Video-CT, Video-US, or the like, or any combination thereof.
  • the subject may be an organ, a texture, a region, an object, a lesion, a tumor, or the like, or any combination thereof.
  • the subject may include a head, a breast, a lung, a trachea, a pleura, a mediastinum, an abdomen, a large intestine, a small intestine, a bladder, a gallbladder, a triple warmer, a pelvic cavity, a backbone, extremities, a skeleton, a blood vessel, or the like, or any combination thereof.
  • the medical image may include a 2D image and/or a 3D image.
  • for a 2D image, its smallest distinguishable element may be termed a pixel.
  • for a 3D image, its smallest distinguishable element may be termed a voxel (“a volumetric pixel” or “a volume pixel”).
  • the 3D image may also be seen as a series of 2D slices or 2D layers.
  • the segmentation process is usually performed by recognizing pixels and/or voxels in the image that share similar information.
  • the similar information may be characteristics or features including gray level, mean gray level, intensity, texture, color, contrast, brightness, or the like, or any combination thereof.
  • some spatial properties of the pixel and/or voxel may also be considered in a segmentation process.
  • FIG. 1 is a flowchart illustrating a method for image processing according to some embodiments of the present disclosure. As described in the figure, the image processing method may include steps such as acquiring data, pre-processing the data, segmenting the data, and/or post-processing the data.
  • data may be acquired.
  • the data may be information including data, a number, a text, an image, a voice, a force, a model, an algorithm, a software, a program, or the like, or any combination thereof.
  • the data acquired in step 101 may include image data, an image mask, and a parameter.
  • the image data may be initial image data or image data after treatment.
  • the treatment may be a pre-processing, an image segmentation, or a post-processing.
  • the image may be a two dimensional (2D) image or a three dimensional (3D) image.
  • the image may be acquired from any medical imaging system as described elsewhere in the present disclosure.
  • the image mask may be a binary mask to shade a specific area of an image.
  • the image mask may be used to supply a screen, extract an area of interest, and/or make a specific image.
  • the parameter may be some information input by a user or an external source.
  • the parameter may include a threshold, a characteristic, a CT value, a region of interest (ROI) , a number of iteration times, a seed point, an equation, an algorithm, or the like, or any combination thereof.
  • the data acquired in step 110 may be variable, changeable, or adjustable based on the spirits of the present disclosure.
  • the data may also be a mode, a priori information, an expert-defined rule, or the like, or any combination thereof for a specific subject. This information may be trained or self-learned before or during an image segmentation. However, those variations and modifications do not depart from the scope of the present disclosure.
  • the data acquired may be pre-processed according to some embodiments of the present disclosure.
  • the pre-processing step 102 may be used to make the data suitable for segmentation.
  • the pre-processing of data may include suppressing, weakening and/or removing a detail, a mutation, a noise, or the like, or any combination thereof.
  • the pre-processing of data may include performing image smoothing and image enhancement.
  • the smoothing process may be in a spatial domain and/or a frequency domain.
  • the spatial domain smoothing method may process the image pixel and/or voxel directly.
  • the frequency domain smoothing method may process a transformation value firstly acquired from the image and then inverse transform the transformation value into a space domain.
  • the image smoothing method may include a median smoothing, a Gaussian smoothing, a mean smoothing, a normalized smoothing, a bilateral smoothing, or the like, or any combination thereof.
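  • A minimal sketch of several of the spatial-domain smoothing options listed above is given below, using SciPy; the placeholder image and the kernel sizes are illustrative.

```python
# Illustrative sketch of spatial-domain image smoothing with SciPy.
import numpy as np
from scipy import ndimage

image = np.random.rand(256, 256).astype(np.float32)    # placeholder for an acquired image

median_smoothed = ndimage.median_filter(image, size=3)       # median smoothing
gaussian_smoothed = ndimage.gaussian_filter(image, sigma=1.5)  # Gaussian smoothing
mean_smoothed = ndimage.uniform_filter(image, size=3)         # mean / normalized box smoothing
```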
  • the pre-processing step 102 may be variable, changeable, or adjustable based on the spirits of the present disclosure.
  • the pre-processing step 102 may be removed or omitted in some embodiments.
  • the pre-processing method may also include some other image processing techniques including, e.g., an image sharpening process, an image segmentation, a rigid registration, a non-rigid registration, or the like, or any combination thereof.
  • those variations and modifications do not depart from the scope of the present disclosure.
  • the data with pre-processing may be segmented.
  • the pre-processing step 102 may not be necessary, and the original data may be segmented directly without a pre-processing.
  • the method of a segmentation may include a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof.
  • the segmentation method may be performed in a manual type, a semi-automatic type or an automatic type.
  • the three types may provide different choices to a user or an operator and allow the user or the operator to participate in the image processing to various degrees.
  • a parameter of the segmentation may be determined by the user or the operator.
  • the parameter may be a threshold level, a homogeneity criterion, a function, an equation, an algorithm, a model, or the like, or any combination thereof.
  • the segmentation may be incorporated with some information about a desired subject including, e.g., a priori information, an optimized method, an expert-defined rule, a model, or the like, or any combination thereof.
  • the information may also be updated by training or self-learning.
  • in the semi-automatic type, the user or the operator may supervise the segmentation process to a certain level.
  • each model may include some characteristics that may be used to determine whether it is adaptive to a patient.
  • the characteristics may include a patient information, an operator information, an instrument information, or the like, or any combination thereof.
  • the patient information may include ethnicity, citizenship, religion, gender, age, matrimony, height, weight, medical history, job, personal habits, organ, tissue, or the like, or any combination thereof.
  • the operator information may include a department, a title, an experience, an operating history, or the like, or any combination thereof.
  • the instrument information may include the model, running status, etc. of the imaging system. When an image segmentation needs to be performed, one or more of the characteristics described above may be checked and fitted to a proper pre-existing model.
  • the segmentation step 103 may be variable, changeable, or adjustable based on the spirits of the present disclosure.
  • the segmentation step may have more than one segmenting operation.
  • the segmenting operation may include a coarse segmentation and a fine segmentation in one segmentation process.
  • two or more segmentation methods may be used in one segmentation process.
  • those variations, changes and modifications do not depart from the scope of the present disclosure.
  • the data segmented may be post-processed in some embodiments. It should be noted that the post-processing step 104 may not be necessary and the segmented data may be used directly without a post-processing.
  • the post-processing technique may include a 2D post-processing technique, a 3D post-processing technique and other technique.
  • the 2D post-processing technique may include a multi-planar reformation (MPR) , a curved planar reformation (CPR) , a computed volume reconstruction (CVR) , a volume rendering (VR) , or the like, or any combination thereof.
  • the 3D post-processing technique may include a 3D surface reconstruction, a 3D volume reconstruction, a volume intensity projection (VIP) , a maximum intensity projection (MIP) , a minimum intensity projection (Min-IP) , an average intensity projection (AIP) , an X-ray simulation projection, a volume rendering (VR) , or the like, or any combination thereof.
  • the other technique may include a repair process, a render process, a filling process, or the like, or any combination thereof.
  • the post-processing step 104 may be variable, changeable, or adjustable based on the spirits of the present disclosure. Merely by way of example, the post-processing step may be omitted. For another example, the post-processed data may be returned to the step 103 for a further segmentation. For still another example, the post-processing step may be divided into more than one step. However, those variations, changes and modifications do not depart from the scope of the present disclosure.
  • the image processing method may be variable, changeable, or adjustable based on the spirits of the present disclosure.
  • the steps may be added, deleted, exchanged, replaced, modified, etc.
  • the pre-processing and/or post-processing may be deleted.
  • the order of the steps in the image processing method may be changed.
  • some steps may be executed repeatedly.
  • FIG. 2 is a block diagram illustrating an image processing system according to some embodiments of the present disclosure.
  • the image processing system 202 may include a processing unit 202-1 and a storage unit 202-2.
  • the image processing system 202 may also be integrated, connected, or coupled to an input/output unit 201.
  • the image processing system 202 may be a single component or integrated with the imaging system as described elsewhere in the present disclosure.
  • the image processing system 202 may also be configured to process different kinds of data acquired from the imaging system as described elsewhere in the present disclosure. It should be noted that the assembly or the components of the image processing system 202 may be variable in other embodiments.
  • the input/output unit 201 may be configured to perform an input and/or an output function in the image processing system 202. Some information may be input into or output from the processing unit 202-1 and/or the storage unit 202-2. In some embodiments, the information may include data, a number, a text, an image, a voice, a force, a model, an algorithm, a software, a program, or the like, or any combination thereof.
  • the model may be a pre-existing model of a region of interest (ROI) as described elsewhere in the present disclosure.
  • the input/output unit 201 may include a display, a keyboard, a mouse, a touch screen, an interface, a microphone, a sensor, a wireless communication, a cloud, or the like, or any combination thereof.
  • the wireless communication may be a Bluetooth, a Near Field Communication (NFC), a wireless local area network (WLAN), a WiFi, a Wireless Wide Area Network (WWAN), or the like, or any combination thereof.
  • the information may be input from or output to an external subject.
  • the external subject may be a user, an operator, a patient, a robot, a floppy disk, a hard disk, a wireless terminal, a processing unit, a storage unit, or the like, or any combination thereof.
  • the input/output unit may be variable, changeable, or adjustable based on the spirits of the present disclosure.
  • the input/output unit 201 may be subdivided into an input module and an output module.
  • the input/output unit 201 may be integrated in one module.
  • there may be more than one input/output unit 201.
  • the input/output unit 201 may be omitted.
  • those variations, changes and modifications do not depart from the scope of the present disclosure.
  • the image processing system may include a processing unit 202-1 and a storage unit 202-2.
  • the processing unit 202-1 may be configured or used to process the information as described elsewhere in the present disclosure from the input/output 201 and/or the storage unit 202-2.
  • the processing unit 202-1 may be a Central Processing Unit (CPU) , an Application-Specific Integrated Circuit (ASIC) , an Application-Specific Instruction-Set Processor (ASIP) , a Graphics Processing Unit (GPU) , a Physics Processing Unit (PPU) , a Digital Signal Processor (DSP) , a Field Programmable Gate Array (FPGA) , a Programmable Logic Device (PLD) , a Controller, a Microcontroller unit, a Processor, a Microprocessor, an ARM, or the like, or any combination thereof.
  • the processing unit 202-1 may perform a pre-processing operation, a co-processing operation, a post-processing operation.
  • the pre-processing operation may include suppressing, weakening and/or removing a detail, a mutation, a noise, or the like, or any combination thereof.
  • the co-processing operation may include an image reconstruction, an image segmentation, etc.
  • the post-processing operation may include a 2D post-processing technique, a 3D post-processing technique and other technique. The apparatus used in different process may be different or the same.
  • the image smoothing in the pre-processing step may be executed by an apparatus including, e.g., a median filter, a Gaussian filter, a mean-value filter, a normalized filter, a bilateral filter, or the like, or any combination thereof.
  • the processing unit 202-1 may include a parameter, an algorithm, a software, a program, or the like, or any combination thereof.
  • the parameters may include a threshold, a CT value, a region of interest, a number of iteration times, a seed point, or the like, or any combination thereof.
  • the algorithm may include some algorithms used to perform a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof.
  • the software for image segmentation may include Brain Imaging Center (BIC) software toolbox, BrainSuite, Statistical Parametric Mapping (SPM) , FMRIB Software Library (FSL) , Eikona3D, FreeSurfer, Insight Segmentation and Registration Toolkit (ITK) , Analyze, 3D Slicer, Cavass, Medical Image Processing, Analysis and Visualization (MIPAV) , or the like, or any combination thereof.
  • the storage unit 202-2 may be connected or coupled with the input/output unit 201 and/or the processing unit 202-1. In some embodiments, the storage unit 202-2 may acquire information from or output information to the input/output unit 201 and/or the processing unit 202-1.
  • the information may include data, a number, a text, an image, a voice, a force, a model, an algorithm, a software, a program, or the like, or any combination thereof.
  • the number acquired from or output to the processing unit 202-1 may include a threshold, a CT value, a number of iteration times, or the like, or any combination thereof.
  • the algorithm acquired from or output to the processing unit 202-1 may include a series of image processing methods.
  • the image processing method for segmentation may include a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof.
  • the image acquired from or output to the processing unit 202-1 may include a pre-processing image, a co-processing image, a post-processing image, or the like, or any combination thereof.
  • the model acquired from or output to the processing unit 202-1 may include pre-existing model corresponding to different ethnicity, citizenship, religion, gender, age, matrimony, height, weight, medical history, job, personal habits, organ, tissue, or the like, or any combination thereof.
  • the storage unit 202-2 may also be an external storage or an internal storage.
  • the external storage may include a magnetic storage, an optical storage, a solid state storage, a smart cloud storage, a hard disk, a soft disk, or the like, or any combination thereof.
  • the magnetic storage may be a cassette tape, a floppy disk, etc.
  • the optical storage may be a CD, a DVD, etc.
  • the solid state storage may be a flash memory including a memory card, a memory stick, etc.
  • the internal storage may be a core memory, a Random Access Memory (RAM) , a Read Only Memory (ROM) , a Complementary Metal Oxide Semiconductor Memory (CMOS) , or the like, or any combination thereof.
• the image processing system may be variable, changeable, or adjustable based on the spirit of the present disclosure.
  • the processing unit 202-1 and/or the storage unit 202-2 may be a single unit or a combination with two or more sub-units.
  • one or more components of the image processing system may be simplified or integrated.
  • the storage unit 202-2 may be omitted or integrated with the processing unit 202-1, i.e., the storage unit and the processing unit may be a same device.
  • those variations, changes and modifications do not depart from the scope of the present disclosure.
  • FIG. 3 is a flowchart illustrating a process of image segmentation according to some embodiments of the present disclosure.
  • volume data may be acquired.
• the volume data may correspond to medical images, and may comprise a group of 2D slices that may be acquired by techniques including DSA (digital subtraction angiography), CT (computed tomography), CTA (computed tomography angiography), PET (positron emission tomography), X-ray, MRI (magnetic resonance imaging), MRA (magnetic resonance angiography), SPECT (single-photon emission computerized tomography), US (ultrasound scanning), or the like, or a combination thereof.
• the multimodal techniques may include CT-MR, CT-PET, CE-SPECT, DSA-MR, PET-MR, PET-US, SPECT-US, TMS (transcranial magnetic stimulation)-MR, US-CT, US-MR, X-ray-CT, X-ray-MR, X-ray-portal, X-ray-US, Video-CT, Video-US, or the like, or any combination thereof.
• in step 302, at least one seed point may be acquired, and at least one parameter may be initialized or updated.
• the seed point may be acquired based on criteria including pixels in a certain grayscale range, pixels evenly spaced on a grid, etc.
• the parameters may include a first threshold, a second threshold, and a boundary threshold.
• the parameters may be iteratively updated until a certain criterion is met, for example, a threshold.
  • the first threshold may be configured to determine a bone tissue in a CT image
  • the second threshold may be configured to determine a suspicious bone tissue in a CT image
  • the boundary threshold may be configured to determine a boundary tissue of a bone in the CT image.
  • the first threshold may be fixed during an iteration
  • the second threshold and the boundary threshold may be variable during the iteration.
  • the second threshold and the boundary threshold may be initialized so that a relatively stable restriction may be achieved. They may be optimized in the iteration.
  • the first threshold and the second threshold may both refer to CT value.
  • the first threshold may be set as 80 HU and the second threshold may be set as 305 HU.
  • the boundary threshold may be calculated based on the threshold of a mathematical operator, for example, Laplace operator.
  • the Laplace threshold may be set as 120.
• a region growing may be performed on the seed point iteratively based on the parameters described in step 302.
• the region that meets the first threshold may be determined to be bone tissue.
• the region that meets the first threshold may be further assessed based on the second threshold as well.
• the region that meets both the first threshold and the second threshold may be determined to be definite bone tissue.
• the region that meets the first threshold solely but not the second threshold may be determined to be suspicious bone tissue.
• the region that meets the first threshold but not the second threshold may be further assessed based on the boundary threshold as well. If the boundary threshold is met, the region may be determined to be boundary tissue.
• the parameters may be used to determine a growing point. Merely by way of example, the region that meets all three parameters may be determined to be a growing point.
• the region that meets the first threshold solely may be determined to be bone tissue.
  • the increment of pixels in two adjacent iterations may be assessed based on a threshold.
• the number of pixels in the nth iteration may be compared with the number of pixels in the (n-1) th iteration. If the increment of pixels surpasses a threshold, the iteratively performed region growing may be terminated and step 305 may be performed. If the increment of pixels is within the threshold, step 302 may be performed and the parameters may be updated.
  • n may be an integer that is greater than or equal to 2.
  • the increment of the pixels may be used to eliminate the influence of adjacent tissues of the bone that is under region growing, and determine the boundary of the bone.
• the number of pixels in the nth iteration may be denoted as Sum(n)
• the number of pixels in the (n-1)th iteration may be denoted as Sum(n-1)
• the increment of pixels may be denoted as [Sum(n) - Sum(n-1)] / Sum(n-1).
• if the increment of pixels surpasses the threshold, step 305 may be performed. Otherwise, step 302 may be performed and the parameters may be updated.
  • a new region growing may be performed based on the parameters generated in step 304.
• the parameters of the (n-1) th (n may be greater than or equal to 2) iteration may be used to perform the new region growing in step 305.
  • the parameters may include a first threshold, a second threshold, a boundary threshold, etc.
  • the first threshold may be fixed.
  • the second threshold and the boundary threshold may keep decreasing during the iteration.
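• a minimal Python sketch of the iteration described above for FIG. 3 is given below; it assumes that grow_region() implements a single pass of the seeded region growing of FIG. 4 (a sketch of grow_region() follows the description of FIG. 4 below), and the increment limit, the threshold step sizes, and the maximum number of iterations are illustrative assumptions rather than values prescribed by the present disclosure.

    def iterative_bone_growing(volume, seeds, laplacian,
                               first_threshold=80.0,      # HU, fixed during the iteration
                               second_threshold=305.0,    # HU, lowered in each iteration
                               boundary_threshold=120.0,  # Laplace threshold, lowered in each iteration
                               increment_limit=0.1,       # assumed stop criterion on the pixel increment
                               second_step=10.0, boundary_step=5.0, max_iterations=50):
        prev_params = (first_threshold, second_threshold, boundary_threshold)
        prev_count = None
        region = set()
        for _ in range(max_iterations):
            region = grow_region(volume, seeds, first_threshold,
                                 second_threshold, boundary_threshold, laplacian)
            count = len(region)
            if prev_count is not None:
                increment = (count - prev_count) / prev_count   # [Sum(n) - Sum(n-1)] / Sum(n-1)
                if increment > increment_limit:
                    # The increment surpasses the threshold: adjacent tissue is being included,
                    # so redo the growing with the (n-1)th parameters (step 305).
                    return grow_region(volume, seeds, *prev_params, laplacian=laplacian)
            prev_params = (first_threshold, second_threshold, boundary_threshold)
            prev_count = count
            # Otherwise return to step 302 and update (relax) the variable thresholds.
            second_threshold -= second_step
            boundary_threshold -= boundary_step
        return region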
  • FIG. 4 illustrates a process of region growing according to some embodiments of the present disclosure.
  • a seed point may be put into a growing point sequence. Taking the bone growing as an example, the seed point may be put into a bone growing point sequence.
• the seed point may be acquired based on criteria including pixels in a certain grayscale range, pixels evenly spaced on a grid, etc.
  • a growing point may be removed from the growing point sequence, for example, a bone growing point sequence, and determined as a bone tissue.
  • the pixels in the neighborhoods of the growing point may be assessed based on the parameters mentioned in FIG. 3.
• the parameters may include a first threshold, a second threshold, a boundary threshold, etc. If the pixels in the neighborhood of the growing point meet the parameters, step 404 may be performed and the neighborhood pixels that meet the parameters may be put into the growing point sequence. If a neighborhood pixel meets the first threshold solely, step 405 may be performed and the neighborhood pixel may be determined to be bone tissue.
• Step 406 may be performed to determine if the growing point sequence has been exhausted. If the growing point sequence is exhausted, step 407 may be performed and the region growing may be terminated. The number of pixels of the tissue may be acquired. The tissue may be a bone tissue. If the growing point sequence is not exhausted, step 402 may be performed again.
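• a corresponding sketch of the single-pass, queue-based growing of FIG. 4 is given below; the 6-connected neighborhood, the NumPy-style volume indexed in HU, the optional per-voxel Laplacian image, and the direction of the threshold comparisons are assumptions of this sketch rather than requirements of the present disclosure.

    from collections import deque

    def grow_region(volume, seeds, first_threshold, second_threshold,
                    boundary_threshold, laplacian=None):
        region = set()                      # voxels determined to be bone tissue
        queue = deque(seeds)                # the growing point sequence (step 401)
        offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                   (0, -1, 0), (0, 0, 1), (0, 0, -1)]
        while queue:                        # step 406: stop when the sequence is exhausted
            point = queue.popleft()         # step 402: remove a growing point ...
            region.add(point)               # ... and determine it to be bone tissue
            for dz, dy, dx in offsets:      # step 403: assess the neighborhood
                nb = (point[0] + dz, point[1] + dy, point[2] + dx)
                if nb in region or not all(0 <= c < s for c, s in zip(nb, volume.shape)):
                    continue
                value = volume[nb]
                meets_first = value >= first_threshold
                meets_second = value >= second_threshold
                meets_boundary = laplacian is None or laplacian[nb] >= boundary_threshold
                if meets_first and meets_second and meets_boundary:
                    region.add(nb)
                    queue.append(nb)        # step 404: a new growing point
                elif meets_first:
                    region.add(nb)          # step 405: bone tissue, but not a growing point
        return region                       # step 407: the number of pixels is len(region)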
• For illustration purposes, some image processing embodiments relating to a bronchus extracting method will be described below. It should be noted that the bronchus extracting embodiments are merely for better understanding and are not intended to limit the scope of the present disclosure.
  • FIG. 5 is a flowchart illustrating a bronchus extracting method according to some embodiments of the present disclosure.
  • the bronchus extracting process may include acquiring a low-level bronchus, extracting a terminal bronchiole and acquiring a trachea.
  • some steps may be removed, displaced or added in the bronchus extracting process as described elsewhere in the present disclosure.
• a bronchial tree may provide a system by which a trachea (or windpipe) serves a lung, transporting inhaled oxygen and exhaled carbon dioxide.
• the bronchial tree may include a bronchus, a bronchiole, and alveoli.
  • the bronchus may be one of the two main bronchi of the trachea.
• the trachea may have a long tube structure and descend from a throat down to a thoracic cavity. Each main bronchus may subdivide into a secondary bronchus, and the secondary bronchus may divide further into a tertiary bronchus.
  • the tertiary bronchus may divide into a terminal bronchiole.
• the alveoli, including ducts and air sacs, may allow exchange of the gas in the blood.
• the bronchial tree may be divided into several levels according to its successive divisions. Merely by way of example, the number of levels may be any value chosen from 10 to 24. For example, in an image segmentation, the number of levels may be set according to some specific demands.
• the bronchial tree may also be divided into low-level bronchi and terminal bronchioles. For illustration purposes, an exemplary embodiment of a 13-level bronchial tree may be described below.
• the low-level bronchus may include one or more relatively primary levels, e.g., levels 1-3, levels 1-4, levels 1-5, or other variations.
• the terminal bronchiole may include one or more relatively terminal levels, e.g., levels 4-13, levels 5-13, levels 6-13, or other variations. It should be noted that the division of the low-level bronchus and the terminal bronchiole is not limited here and the above embodiments are merely for illustration purposes and not intended to limit the scope of the present disclosure.
  • the bronchial tree with low-level bronchi may be acquired according to some embodiments of the present disclosure.
  • the low-level bronchus may be extracted by an image segmentation.
  • the method of an image segmentation may include a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof.
• a region growing based method and a morphology based method will be described below; it should be noted that they are not intended to limit the scope of the present disclosure.
• one or more seed points may be placed in a large airway by an automatic method or an interactive method to perform an initialization.
  • the airway may be a trachea.
• the region growing may include judging whether an adjacent point (a pixel or a voxel) meets a merging criterion under a fixed or a self-adaptive threshold.
  • the region growing method may be based on a characteristic or a feature including a gray level, a mean gray level, an intensity, a texture, a color, a contrast, a brightness, or the like, or any combination thereof.
  • the region growing method based on the gray characteristic will be described in detail below.
  • the method may include searching a seed point according to the tube-like structure property of the trachea and performing region growing.
• a voxel may be determined as belonging to a bronchus if it meets one or more thresholds.
  • the thresholds may include a first threshold and a second threshold.
• the first threshold may be applied to judge whether a voxel belongs to a bronchus.
• the first threshold may be an upper limit of the HU value for a voxel in a bronchus and may be termed the MaxAirwayHUThreshold.
  • the second threshold may be used to distinguish a voxel in a bronchus and a voxel in a bronchus wall.
• the second threshold may be an upper limit of the difference between the HU value of a voxel in a bronchus and that of a voxel outside the bronchus.
• the second threshold may be termed the MaxLunmenWallDiffThreshold. For example, if the two thresholds are satisfied simultaneously, i.e., the HU value of a voxel V is lower than the first threshold and the difference between the HU value of each of the 26 voxels adjacent to the voxel V and the HU value of the voxel V is lower than the second threshold, then the voxel V may be determined as belonging to a bronchus.
• the 26 voxels are merely for exemplary purposes; the number of the adjacent voxels may be any natural number, e.g., 1, 2, 3, 4, etc.
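• a minimal sketch of the two-threshold test described above is given below, assuming a NumPy CT volume in HU; the default threshold values are illustrative assumptions, the cubic 3x3x3 neighborhood (which includes the center voxel itself, whose difference is zero) stands in for the 26 adjacent voxels, and image borders are not handled.

    import numpy as np

    def is_bronchus_voxel(volume, voxel, max_airway_hu=-900.0, max_lumen_wall_diff=50.0):
        z, y, x = voxel
        if volume[z, y, x] >= max_airway_hu:               # first threshold: upper HU limit for a lumen voxel
            return False
        neighborhood = volume[z - 1:z + 2, y - 1:y + 2, x - 1:x + 2]
        diffs = neighborhood - volume[z, y, x]             # differences to the adjacent voxels
        return bool(np.all(diffs < max_lumen_wall_diff))   # second threshold on the lumen-wall difference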
• the bronchus region may be extracted by a series of filters. See C. Pisupati, L. Wolff, W. Mitzner, and E. Zerhouni, “Segmentation of 3-D pulmonary trees using mathematical morphology,” in Proc. Math. Morphol. Appl. Image Signal Process, May 1996, pp. 409-416, which is hereby incorporated by reference in its entirety.
• the image segmentation method may be variable, changeable, or adjustable based on the spirit of the present disclosure.
  • the low-level bronchus may be extracted by any image segmentation method as described elsewhere in the present disclosure.
  • the image segmentation method may be a single method or any combination of the known methods as described elsewhere in the present disclosure. However, those variations, changes and modifications do not depart from the scope of the present disclosure.
  • a terminal bronchiole may be extracted based on the low-level bronchus.
  • the extracting method may be an energy-based 3D reconstruction method.
• the energy-based 3D reconstruction is a growing method based on a constraint condition of local energy minimization. According to the Markov Random Field Theory, the energy function may be related to a growing potential energy. For each voxel, the growing potential energy of two states may be calculated. The two states may include a state in which the voxel belongs to a terminal bronchiole and a state in which the voxel does not belong to a terminal bronchiole. If the growing potential energy of the former state is greater than that of the latter, the voxel may be determined as belonging to a terminal bronchiole, and vice versa.
  • the energy-based 3D reconstruction method will be described in detail later.
  • the low-level bronchus and the terminal bronchiole may be fused together through an image fusion method to generate a bronchus extraction result.
  • the fusion method may be divided into a spatial domain fusion and a transform domain fusion.
  • the fusion method may include a high pass filtering technique, a HIS transform based image fusion, a PCA based image fusion, a wavelet transform image fusion, a pair-wise spatial frequency matching, or the like, or any combination thereof.
  • the bronchus extracting method may be variable, changeable, or adjustable based on the spirits of the present disclosure.
  • a data acquiring step may be added to FIG. 5.
• the bronchi acquired after step 503 may also be post-processed by a method as described elsewhere in the present disclosure. However, those variations, changes and modifications do not depart from the scope of the present disclosure.
  • the energy-based 3D reconstruction is a growing method based on a constraint condition of local energy minimization.
• the energy function may be related to the growing potential energy V_prop, which may be calculated by:
• kT is a fixed value.
• the growing potential energy V_prop in different states st_x may be calculated for a voxel x. For example, if the voxel does not belong to a terminal bronchus, st_x is assigned 0. If the voxel belongs to a terminal bronchus, st_x is assigned 1. In some embodiments, if V_prop (x, 1) - V_prop (x, 0) > 0, the voxel x may be determined as belonging to a terminal bronchiole.
• V_radial may be used to support a growth in a radial direction
• V_distal may be used to support a growth in a distal direction
• V_control may be used to limit the growth in a bronchus tube.
• the three weighted terms above may be calculated by the following equations:
• v may be a voxel set
• v_26 (x) may represent the 26 adjacent voxels of the voxel x
• v_n may be a set of voxels adjacent to the voxel x
• v_f may be a set of voxels apart from the voxel x
• st represents a state
• K may be a normalizing factor that guarantees the growing anisotropy in a terminal bronchiole.
• F is the HU value of the voxel x.
  • FIG. 6 illustrates a terminal bronchiole extracting method based on an energy-based 3D reconstruction process according to some embodiments of the present disclosure.
• the parameters may include a first voxel set, a number of iterations, a weight assigned to each weighted term, or the like, or any combination thereof.
• the weights of V_radial, V_distal, and V_control may be assigned by w_1, w_2, and w_3, respectively.
• in step 604, the growing potential energy in the two states may be compared. If V_prop (p, 1) > V_prop (p, 0), it will enter into step 605, otherwise it will enter into step 606.
• in step 605, the voxel p may be thrust into the second voxel set S_2.
• in step 606, whether all voxels p adjacent to every voxel x in the first voxel set S_1 have been traversed or not may be judged. If the answer is yes, it will enter into step 607, otherwise it will return to step 603.
• in step 607, the number of the voxels in the first voxel set S_1 and in the second voxel set S_2 may be compared. If the number of the voxels in S_1 is bigger than the number of the voxels in S_2, it will enter into step 608, otherwise it will enter into step 609.
• in step 608, the second voxel set S_2 will be cleared, i.e., the condition that the growing terminal bronchus is bigger than the original bronchus may be determined as incorrect and abandoned.
• in step 609, all the connected regions D in the second voxel set S_2 may be searched.
  • the searching method may be through a calculation.
  • a radius r of each connected region d may be calculated.
• the voxel amount n_d and/or the radius r of the connected region d may be compared to a threshold respectively.
• n_T may be used to represent a threshold of the voxel amount n_d
• r_T may be used to represent a threshold of the radius r.
• if the voxel amount n_d is bigger than n_T, or the radius r is bigger than r_T, it will enter into step 611, otherwise it will enter into step 612.
• in step 611, the voxels x in a connected region that does not satisfy the demand in step 610 may be removed from the second voxel set S_2.
• in step 612, the voxels in the second voxel set S_2 satisfying the demand in step 610 may be added into the result I_f of the terminal bronchiole.
• in step 614, if the first voxel set S_1 is a non-null set and the number of iterations is no bigger than N_T, it will return to step 602, otherwise the iteration process will end.
• a terminal bronchiole segmentation result I_f may be acquired after the iteration process.
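• a minimal Python skeleton of the FIG. 6 iteration is sketched below; v_prop (the growing potential energy, whose weighted terms are not reproduced here), connected_regions, and region_radius are assumed callables, and taking the surviving voxels of S_2 as the first voxel set of the next iteration is an assumption of this sketch.

    def extract_terminal_bronchioles(volume, s1, v_prop, connected_regions,
                                     region_radius, n_t, r_t, n_iterations):
        result = set()                                    # I_f, the terminal bronchiole voxels
        for _ in range(n_iterations):                     # step 614: at most N_T iterations
            if not s1:                                    # step 614: stop when S_1 is empty
                break
            s2 = set()                                    # the second voxel set S_2
            for x in s1:                                  # steps 602-606: traverse the neighbors
                for p in neighbors_26(x):
                    if p in s1 or p in result or p in s2:
                        continue
                    if v_prop(volume, p, 1) > v_prop(volume, p, 0):   # step 604
                        s2.add(p)                         # step 605
            if len(s1) > len(s2):                         # step 607
                s2.clear()                                # step 608: the growth is abandoned
            accepted = set()
            for d in connected_regions(s2):               # step 609
                if len(d) > n_t or region_radius(d) > r_t:
                    continue                              # steps 610-611: too large, removed from S_2
                accepted |= set(d)                        # step 612: added to the result I_f
            result |= accepted
            s1 = accepted                                 # assumed: grow from the newly accepted voxels
        return result

    def neighbors_26(voxel):
        z, y, x = voxel                                   # the 26 voxels adjacent to the voxel
        return [(z + dz, y + dy, x + dx)
                for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dz, dy, dx) != (0, 0, 0)]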
  • a weighted terms self-adjusting method may be added in the energy-based 3D reconstruction process.
• the weight value chosen to self-adjust may be w_1, w_2, w_3, or any combination thereof.
• the weight value w_3 of V_control may be self-adjusted to control the growing of the terminal bronchus within a bronchus tube.
• the weight value w_3 of V_control may be initialized at the same time.
• the energy-based 3D reconstruction process may be variable, changeable, or adjustable based on the spirit of the present disclosure.
  • one or more steps in FIG. 6 may be exchanged, added or removed.
• the step of extracting a bronchial tree may be added before the energy-based 3D reconstruction process, and/or an image fusion process may be added after the energy-based 3D reconstruction process.
• the parameters in some steps may be changed or replaced.
• thresholds such as n_T, r_T, or N_T and parameters such as w_1, w_2, or w_3 may be varied or changed according to specific implementation scenarios. However, those variations, changes and modifications do not depart from the scope of the present disclosure.
  • FIG. 7 illustrates a diagram of an image processing system for extracting a bronchus according to some embodiments of the present disclosure.
  • an image processing system 202 may include a processing unit 202-1, and a storage unit 202-2.
  • the imaging processing system 202 may also be integrated, connected or coupled to an input/output unit 201 as described elsewhere in the present disclosure. It should be noted that the image system is merely for exemplary purposes and not intended to limit the scope of the present disclosure.
  • the processing unit 202-1 may be configured to process some information.
  • the information may include data, a number, a text, an image, a voice, a force, a model, an algorithm, a software, a program, or the like, or any combination thereof.
  • the processing unit 202-1 may be a Central Processing Unit (CPU) , an Application-Specific Integrated Circuit (ASIC) , an Application-Specific Instruction-Set Processor (ASIP) , a Graphics Processing Unit (GPU) , a Physics Processing Unit (PPU) , a Digital Signal Processor (DSP) , a Field Programmable Gate Array (FPGA) , a Programmable Logic Device (PLD) , a Controller, a Microcontroller unit, a Processor, a Microprocessor, an ARM, or the like, or any combination thereof.
• the processing unit 202-1 may further include a first module 701, a second module 702 and a third module 703.
  • the three modules may work cooperatively or independently.
  • the first module 701 may be configured to extract a low-level bronchus
  • the second module 702 may be configured to extract a terminal bronchiole
  • the third module may be configured to fuse the low-level bronchus and the terminal bronchiole.
  • one single module chosen from the above three modules may also perform all the functions independently without help from the other modules.
• each of the modules 701, 702 and 703 may include a processor, a hardware, a software, an algorithm, a memory, or the like, or any combination thereof.
• modules 701, 702 and 703 may also share a same component such as a processor, a hardware, a software, an algorithm, a memory, or the like, or any combination thereof.
• modules 701, 702 and 703 may share the same input/output unit 201 and the storage unit 202-2 in FIG. 6.
• the image processing system may be variable, changeable, or adjustable based on the spirit of the present disclosure.
  • some components may be exchanged, added or removed in the image processing.
  • the input/output unit may be integrated into the image processing system.
  • the three modules in the processing unit may be integrated into one module or divided into more than three modules.
  • FIG. 8 is a block diagram illustrating an exemplary processing unit of image processing system according to some embodiments of the present disclosure.
• the processing unit 202-1 may be configured or used to process the information, as described elsewhere in the present disclosure, from the input/output unit 201 and/or the storage unit 202-2.
• the processing unit 202-1 may be configured to perform a pre-processing operation, a co-processing operation, and/or a post-processing operation.
  • the pre-processing operation may include suppressing, weakening and/or removing a detail, a mutation, a noise, or the like, or any combination thereof.
  • the co-processing operation may include an image reconstruction, an image segmentation, etc.
• the post-processing operation may include a 2D post-processing technique, a 3D post-processing technique, and other techniques.
  • the processing unit 202-1 may be a Central Processing Unit (CPU) , an Application-Specific Integrated Circuit (ASIC) , an Application-Specific Instruction-Set Processor (ASIP) , a Graphics Processing Unit (GPU) , a Physics Processing Unit (PPU) , a Digital Signal Processor (DSP) , a Field Programmable Gate Array (FPGA) , a Programmable Logic Device (PLD) , a Controller, a Microcontroller unit, a Processor, a Microprocessor, an ARM, or the like, or any combination thereof.
  • the processing unit 202-1 may include a segmentation module 801.
  • the segmentation module 801 may include a pre-process block, a recognition block, a coarse segmentation block, a fine segmentation block, a fusion block and a post-process block.
  • the pre-process block may be configured or used to perform one or more pre-processing operations.
  • the recognition block may be configured or used to recognize or/and determine one or more regions of interest (ROI) .
  • the coarse segmentation block may be configured or used to perform a coarse segmentation.
  • the fine segmentation block may be configured or used to perform a fine segmentation.
  • the fusion segmentation block may be configured or used to fuse the results of the coarse segmentation block and the fine segmentation block or process the result of fine segmentation block.
  • the post-process block may be configured or used to perform one or more post-processing operations.
• the recognition block may be configured or used to recognize and/or determine an ROI wherein a tumor is located; the coarse segmentation block may be configured or used to perform a coarse segmentation of the tumor based on a feature classification method; the fine segmentation block may be configured or used to perform a fine segmentation of the tumor based on a level set method (LSM) ; the fusion segmentation block may be configured or used to process the result of the fine segmentation block, in which the result of the coarse segmentation block is utilized.
• the recognition block may be configured or used to recognize or/and determine one or more ROIs wherein a lung may be located;
  • the coarse segmentation block may be configured or used to perform a coarse segmentation of the lung, acquiring one or more target regions, which is/are divided into one or more central regions and one or more boundary regions;
  • the fine segmentation block may be configured or used to perform a fine segmentation of the lung by making use of one or more classifiers, acquiring one or more parting segmentation regions, which is/are separated from the boundary regions;
  • the fusion segmentation block may be configured or used to fuse the result of the fine segmentation block and the result of the coarse segmentation block, merely by way of example, to fuse the target regions and the parting segmentation regions.
• the post-process block may be configured or used to perform image smoothing; the methods of image smoothing may include a median smoothing, a Gaussian smoothing, a mean smoothing, a normalized smoothing, a bilateral smoothing, a smoothing with mathematical morphology, or the like, or any combination thereof.
  • processing unit 202-1 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure.
  • multiple variations and modifications may be made under the teachings of the present disclosure.
  • those variations and modifications do not depart from the scope of the present disclosure.
  • the assembly and/or function of the processing unit 202-1 may be varied or changed.
  • the assembly and/or function of the segmentation module 801 may be varied or changed.
• the pre-process block, the recognition block and the post-process block may not exist in the segmentation module 801; the function of recognizing or/and determining one or more ROIs may be performed in other module(s) or blocks; the coarse segmentation and the fine segmentation may be performed in one module/block or may be performed in more blocks; the fusion segmentation block may be configured or used to process the result of the coarse segmentation block; the function of each block described in the segmentation module 801 may be performed one or more times.
  • FIG. 9 is a flowchart illustrating an exemplary process of image segmentation according to some embodiments of the present disclosure.
  • the data may be image data, image mask data, parameter, or the like, or any combination thereof.
  • the image data may be initial image data or processed data.
  • the processing may be an image segmentation.
  • the image may be a 2D image or a 3D image.
  • the image may be a CT image, an MR image, an Ultrasound image, a PET image, or the like, or any combination thereof.
  • the image mask may be a binary mask extracted from a specific area of an image.
  • the image mask may be used to supply a screen, extract a region of interest, and/or make a specific image.
  • the parameter may be some information input by a user or an external source.
  • the parameter may include a threshold, a CT value, a region of interest (ROI) , a number of iterations, a seed point, an equation, an algorithm, or the like, or any combination thereof.
• the data acquired in step 901 may be variable, changeable, or adjustable based on the spirit of the present disclosure.
• the data may also be a model, prior information, an expert-defined rule, or the like, or any combination thereof for a specific subject. Such information may be trained or self-learned before or during an image segmentation. However, those variations and modifications do not depart from the scope of the present disclosure.
• one or more ROIs may be determined by means including an automatic identification, a semi-automatic identification, or a manual marking by a user.
  • the automatic identification may refer to an automatic identification of different tissues, organs, normal regions, abnormal regions, or the like, or any combination thereof.
  • the semi-automatic identification may refer to a semi-automatic identification of different tissues, organs, normal regions, abnormal regions, or the like, or any combination thereof.
  • the user may perform one or more operations, in addition to some processing performed automatically by the system, before one or more ROIs are determined.
  • the manual marking may refer to one or more ROIs determined or located by the user based on manual marking.
  • the coarse segmentation may be performed in step 903.
  • the fine segmentation may be performed in step 904.
  • the coarse segmentation and the fine segmentation may be performed in a single step.
  • the methods of segmentation may include a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof.
  • the post-process may be performed in step 905.
• the post-processing techniques may include a 2D post-processing technique, a 3D post-processing technique, and other techniques.
  • the 2D post-processing technique may include a multi-planar reformation (MPR) , a curved planar reformation (CPR) , a computed volume reconstruction (CVR) , a volume rendering (VR) , or the like, or any combination thereof.
  • the 3D post-processing technique may include a 3D surface reconstruction, a 3D volume reconstruction, a volume intensity projection (VIP) , a maximum intensity projection (MIP) , a minimum intensity projection (Min-IP) , an average intensity projection (AIP) , an X-ray simulation projection, a volume rendering (VR) , or the like, or any combination thereof.
• the other techniques may include a repair process, an image filling process, etc.
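• as a small illustration of some of the projection techniques listed above, the sketch below computes maximum, minimum, and average intensity projections of a NumPy volume along one axis; the choice of axis and the use of NumPy are implementation assumptions of this sketch.

    def intensity_projections(volume, axis=0):
        mip = volume.max(axis=axis)       # maximum intensity projection (MIP)
        min_ip = volume.min(axis=axis)    # minimum intensity projection (Min-IP)
        aip = volume.mean(axis=axis)      # average intensity projection (AIP)
        return mip, min_ip, aip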
• one or more ROIs, wherein a tumor is located, may be detected or recognized in step 902.
• the detection or the recognition may be based on an automatic identification, a semi-automatic identification, or a manual marking; the coarse segmentation of the tumor based on a feature classification method may be performed in step 903; the fine segmentation of the tumor based on a level set method may be performed in step 904; the post-process, which may be performed in step 905, may include the processing of the result of the fine segmentation.
• an ROI wherein a lung is located may be recognized or detected in step 902.
  • the detection or the recognition may be based on an automatic identification, a semi-automatic identification, or a manual marking;
  • the coarse segmentation of lung may be performed in step 903 to acquire one or more target regions, which is/are divided into one or more central regions and one or more boundary regions;
  • the fine segmentation of lung may be performed in step 904, which may make use of one or more classifiers to acquire one or more parting segmentation regions that is/are separated from the boundary region;
  • the post-process which may be performed in step 905, may include the processing of fusing the result of step 903 and the result of step 904, more specifically, fusing the target region and the parting segmentation region.
• step 901, step 902, step 903, step 904 and step 905 may be performed sequentially in an order other than that described above in FIG. 9.
  • Step 901, step 902, step 903, step 904 and step 905 may be performed concurrently or selectively.
  • Step 901, step 902, step 903, step 904 and step 905 may be merged into a single step or divided into a number of steps.
  • a pre-process may be performed after step 901 and before step 902; the pre-process may also be performed after step 902 and before step 903.
  • FIG. 10 is a flowchart illustrating an exemplary process of a coarse segmentation of tumors according to some embodiments of the present disclosure.
• an ROI wherein a tumor is located may be determined in step 1001.
• a number of 2D slices corresponding to a tumor may be acquired, and each 2D slice may indicate a cross section of the tumor.
• a tumor region in which a tumor is located may be recognized by a user in each 2D slice.
• a line segment of length d_1 may be drawn by the user on a slice of the tumor, and the line segment may traverse the tumor.
  • the line segment may be drawn on the slice which has a relatively larger cross-section compared with other slices.
  • a line segment may be a part of a line that is bounded by two distinct end points.
• the region within the cube may be deemed as the ROI wherein the tumor is located.
• the region wherein the tumor is located may be completely within the ROI, which is determined by the cube.
  • a pre-processing may be performed on the ROI.
• the pre-processing may include normalizing the resolution of the image in each direction, such as the X axis, Y axis and Z axis, enhancing the contrast of the image, reducing the noise of the image, acquiring a new image, etc.
  • one or more seed points may be sampled.
  • one or more positive seed points may be sampled from the ROI.
• the positive seed points may be sampled randomly among the voxels in the ROI determined in step 1002, wherein the tumor is located.
• the positive seed points may be taken as sampling seed points of the target tumor region.
• one or more negative seed points may also be sampled from the ROI.
• the negative seed points may be sampled randomly among the voxels in the ROI determined in step 1002, wherein the tumor is not located.
• the negative seed points may be taken as sampling seed points of the normal region.
  • the positive seed points may be part of the tumor.
• the internal region may be completely restricted to the tumor region.
• an external region may be centered at the midpoint of the line segment d_1, with a radius ranging from d_1 × t to d_1 × r (1 < t < r).
  • n negative seed points may be sampled in the external region.
  • the negative seed points may be part of normal tissues surrounding the tumor.
• in step 1004, features of the gray-scale histograms of the seed points may be calculated.
• the gray-scale histogram of an image represents the distribution of the pixels in the image over the gray-level scale.
• the features of the gray-scale histogram may be used for training and/or classifying.
• the grayscale of each image corresponding to each seed point may be normalized to a fixed value, denoted by S.
• a normalized S-dimensional gray-scale histogram feature may be calculated in a spherical neighborhood centered at each seed point with a radius of j.
• training and classifying based on the S-dimensional gray-scale histogram features may be performed to acquire the parameters of the classifier.
  • j may be an integer greater than 1, such as 7.
  • a linear discriminant analysis (LDA) training may be performed to acquire one or more linear or nonlinear classifiers.
  • the LDA training may be based on the gray-scale histograms of the positive seed points and the negative seed points acquired in the step 1004.
• the linear classifier C may be shown in Equation 6:
• T is the threshold of the classifier
• y is the weighted feature value
• w_1, w_2, ..., w_S are weighted values
• v_1, v_2, ..., v_S are features of the gray-scale histograms
• S may denote the grayscale
• T and w_1, w_2, ..., w_S may be generated based on the LDA training.
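• a minimal sketch of obtaining the weighted values w_1 ... w_S and the threshold T from the seed-point histogram features is given below; the use of scikit-learn's LinearDiscriminantAnalysis is an implementation choice of this sketch, not a requirement of the present disclosure.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def train_linear_classifier(positive_features, negative_features):
        # Each row of the inputs is the S-dimensional gray-scale histogram feature
        # of one positive (tumor) or negative (normal tissue) seed point.
        X = np.vstack([positive_features, negative_features])
        labels = np.concatenate([np.ones(len(positive_features)),
                                 np.zeros(len(negative_features))])
        lda = LinearDiscriminantAnalysis().fit(X, labels)
        w = lda.coef_[0]          # weighted values w_1 ... w_S
        T = -lda.intercept_[0]    # threshold: classify as tumor when w . v > T
        return w, T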
• in step 1006, features of the gray-scale histograms of one or more voxels in the image I may be calculated.
• the linear classifier C may be used to traverse every voxel in the image I.
• in step 1007, the linear classifier C, which is trained in step 1005, may be used to assess each voxel of the image I to determine whether the voxel is part of the tumor or not.
• in step 1008, the result of the coarse segmentation may be acquired.
  • the result of the coarse segmentation may be shown in Equation 7:
  • x may denote one voxel in the image I.
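• the sketch below illustrates applying the trained linear classifier voxel by voxel in the spirit of Equations 6 and 7; the cubic neighborhood (used here in place of the spherical one for brevity), the histogram range, the default radius, and the direction of the decision y > T are illustrative assumptions of this sketch.

    import numpy as np

    def histogram_feature(image, center, radius, bins):
        z, y, x = center
        patch = image[max(z - radius, 0):z + radius + 1,
                      max(y - radius, 0):y + radius + 1,
                      max(x - radius, 0):x + radius + 1]
        hist, _ = np.histogram(patch, bins=bins, range=(image.min(), image.max()))
        return hist / max(hist.sum(), 1)                 # normalized S-dimensional feature

    def coarse_segmentation(image, w, T, radius=7):
        bins = len(w)                                    # S histogram bins
        mask = np.zeros(image.shape, dtype=bool)
        for index in np.ndindex(*image.shape):           # traverse every voxel of the image I
            v = histogram_feature(image, index, radius, bins)
            y = float(np.dot(w, v))                      # assumed form of Equation 6: y = w_1*v_1 + ... + w_S*v_S
            mask[index] = y > T                          # assumed decision of Equation 7: tumor if y > T
        return mask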
• step 1001, step 1002, step 1003, step 1004, step 1005, step 1006, step 1007, and step 1008 may be performed sequentially in an order other than that described in FIG. 10.
• Step 1001, step 1002, step 1003, step 1004, step 1005, step 1006, step 1007, and step 1008 may be performed concurrently or selectively.
• Step 1001, step 1002, step 1003, step 1004, step 1005, step 1006, step 1007, and step 1008 may be merged as a single step or divided into a number of steps.
• step 1003 and step 1004 may be merged as a single step.
• one or more other operations may be performed before/after or in step 1001, step 1002, step 1003, step 1004, step 1005, step 1006, step 1007, and step 1008.
  • a fine segmentation may be performed after acquiring the result of the coarse segmentation.
  • the fine segmentation of the tumor may be performed based on level set method (LSM) .
• LSM utilizes dynamic variational boundaries for an image segmentation, embedding active contours into a time-dependent partial differential equation (PDE) function φ. It is then possible to approximate the evolution of the active contours implicitly by tracking the zero level set Γ (t).
• the implicit interface Γ may be comprised of a single or a series of zero isocontours. The evolution of φ is totally determined by the numerical level set equation, which is shown in Equation 8.
• φ_0 (x) represents the initial contour
• F represents the comprehensive forces, including the internal force from the interface geometry (e.g., mean curvature, contour length and area) and the external force from the image gradient and/or artificial momentums.
• the level set function φ converts the 2D image segmentation problem into a 3D problem.
• the time step and the grid space should comply with the Courant–Friedrichs–Lewy (CFL) condition, and the level set function φ should be re-initialized periodically as a signed distance function.
• P (φ) is a penalty term that penalizes the deviation of φ from a signed distance function, ε (g, φ) incorporates the image gradient information, δ (φ) denotes the Dirac function, and the constants μ, λ and ν control the individual contributions of P (φ) and ε (g, φ).
• the result of the coarse segmentation, shown in Equation 7, may be used as the initial value of φ_0 (x).
• the initial contour of φ_0 (x) may be taken as the contour acquired after performing the coarse segmentation of the tumor. Then φ_0 (x) may be shown in Equation 10:
  • d may denote a predefined distance
• if a voxel is judged to be a part of the tumor after the coarse segmentation, the voxel may be located inside of the initial contour, and the initial value of φ_0 (x) may be +d. If the voxel is judged to be a part of the normal tissues after the coarse segmentation, the voxel may be located outside of the initial contour, and the initial value of φ_0 (x) may be -d. Iterative calculations may be performed based on Equation 11 until the number of iterations K is met.
• the result of the fine segmentation based on the level set method, more specifically, the final result of the segmentation of the tumor, may be shown in Equation 12.
• φ_K (x) may be used to indicate a distance function; when φ_K (x) is greater than or equal to 0, it may indicate that the voxel x is located on the contour or inside the contour, and the voxel x may be deemed to be a part of the tumor; when φ_K (x) is smaller than 0, it may indicate that the voxel x is located outside of the contour, and the voxel x may be deemed not to be a part of the tumor.
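• a minimal sketch of this initialization and of the final decision is given below; the level-set update itself (Equation 11) is not reproduced here and is passed in as an assumed callable, and the values of d and of the number of iterations are illustrative assumptions.

    import numpy as np

    def fine_segmentation(coarse_mask, level_set_step, d=2.0, iterations=10):
        # Equation 10: phi_0(x) = +d inside the initial contour (coarse tumor), -d outside.
        phi = np.where(coarse_mask, d, -d).astype(float)
        for _ in range(iterations):          # iterate the (assumed) update of Equation 11 K times
            phi = level_set_step(phi)
        return phi >= 0                      # Equation 12: on or inside the contour -> tumor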
  • the segmentation method provided herein may be used for the segmentation of liver tumor.
  • the segmentation method provided herein may be used for tumors located in other parts of human body.
  • the method provided herein may also be used to segment a tissue or an organ, for example, a spherical tissue.
  • the method provided herein may be used for the segmentation of a lung tumor.
  • FIG. 11 is a flowchart illustrating an exemplary process of lung segmentation according to some embodiments of the present disclosure.
• a coarse segmentation of the lung may be performed in step 1101, and a target region may be acquired.
• the target region may be divided into a central region and a boundary region, which may surround the central region.
• the voxels in the central region may be identified as part of nodules; the voxels in the boundary region may need to be further assessed to determine whether they are parts of nodules.
• the voxels located outside of the target region may not be identified as part of one or more nodules.
  • the shape of the boundary region may be a regular annulus.
  • each voxel in the boundary region may be classified and labelled by making use of one or more classifiers, acquiring a parting segmentation region, which may be separated from the boundary region.
  • step 1103 the target region acquired in step 1101 and the parting segmentation region acquired in step 1102 may be fused together.
  • the segmentation result acquired in step 1103 may be comprised of all of the voxels in the central regions and part of voxels in the boundary region.
  • Post-processing may be performed in step 1104 to further improve the result acquired in the step 1103.
  • the post-processing may include performing image smoothing.
  • the methods of image smoothing may include a median smoothing, a Gaussian smoothing, a mean smoothing, a normalized smoothing, a bilateral smoothing, smoothing with mathematical morphology or the like, or any combination thereof.
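• as a small illustration of two of the smoothing options listed above, the sketch below applies a median filter followed by a Gaussian filter to a binary segmentation result using SciPy; the kernel size and sigma are illustrative assumptions.

    from scipy import ndimage

    def smooth_segmentation(mask, median_size=3, gaussian_sigma=1.0):
        smoothed = ndimage.median_filter(mask.astype(float), size=median_size)   # median smoothing
        smoothed = ndimage.gaussian_filter(smoothed, sigma=gaussian_sigma)       # Gaussian smoothing
        return smoothed > 0.5                                                    # back to a binary mask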
• step 1101, step 1102, step 1103, and step 1104 may be performed sequentially in an order other than that described in FIG. 11.
  • Step 1101, step 1102, step 1103, and step 1104 may be performed concurrently or selectively.
  • Step 1101, step 1102, step 1103, and step 1104 may be merged as a single step or divided into a number of steps.
  • One or more other operations may be performed before/after or in step 1101, step 1102, step 1103, and step 1104.
  • pre-process may be performed after step 1101 and before step 1102.
• the shape of the boundary region may be an annulus, other regular or irregular shapes, or the like, or any combination thereof.
  • the boundary regions may completely or partially surround the central region.
  • FIG. 12 is a flowchart illustrating a process of a coarse segmentation of lung according to some embodiments of the present disclosure.
  • the coarse segmentation of lung may be performed based on region growing segmentation.
• a long axis may be determined by a user on the slice with the maximum cross-section of a ground-glass nodule (GGN) in the lung image.
• the long axis may be a straight line segment and may traverse as many abnormal regions as possible.
• An ROI may be determined based on the long axis in step 1202. Merely by way of example, assuming the length of the long axis is l, the ROI may be determined by the diagonals of a cube with an edge length of l.
  • a filtering may be performed on the ROI, acquiring a filtered image.
  • the filtering may be a mean filtering, a median filtering, or the like, or any combination thereof.
• a window, calculated based on the mean filtering and the gray-scale histogram, may be proportional to the length of the long axis.
• the filtering may be automatically adjusted in accordance with the different volumes of the nodules.
  • a first region growing may be performed on the filtered image, acquiring a dynamic segmentation region.
• the operations of the first region growing based on a distance function may be performed as follows: the points of the long axis determined by the user may be taken as seed points, and region growing may be performed with one or more relatively restrictive initial thresholds; a segmentation region grown based on the mean filtered image may cover at least part of the long axis, and the length of the long axis covered by the segmentation region may be assessed based on the initial thresholds; if the assessment fails, the thresholds may be eased and the assessment may continue; if the assessment succeeds, an expansion operation with restricted thresholds may be performed and a dynamic segmentation region may be acquired.
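• a minimal sketch of this ease-the-threshold-until-covered loop is given below; grow, covered_axis_length, and restricted_expansion are assumed callables passed in by the caller, and the required coverage, the easing step (and its direction), and the maximum number of attempts are illustrative assumptions of this sketch.

    def dynamic_segmentation(filtered_image, axis_points, grow, covered_axis_length,
                             restricted_expansion, initial_threshold,
                             required_coverage, easing_step, max_attempts=20):
        threshold = initial_threshold
        region = set()
        for _ in range(max_attempts):
            # The points of the long axis are taken as seed points.
            region = grow(filtered_image, seeds=axis_points, threshold=threshold)
            if covered_axis_length(region, axis_points) >= required_coverage:
                # The assessment succeeds: expand with restricted thresholds.
                return restricted_expansion(filtered_image, region)
            threshold -= easing_step      # the assessment fails: ease the threshold and retry
        return region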
• in step 1205, the gray-scale histogram of the ROI may be calculated, and a histogram vector image may be acquired.
• a second region growing may be performed on the histogram vector image, and a static segmentation region may be acquired.
• the operations of the second region growing based on a distance function may be performed as follows: the points of the long axis determined by the user may be taken as seed points, and region growing may be performed with one or more relatively restrictive initial thresholds; a segmentation region grown based on the histogram vector image may cover at least part of the long axis, and the length of the long axis covered by the segmentation region may be assessed based on the initial thresholds; if the assessment fails, the thresholds may be eased and the assessment may continue; if the assessment succeeds, an expansion operation with restricted thresholds may be performed and a static segmentation region may be acquired.
• the dynamic segmentation region acquired in step 1204 and the static segmentation region acquired in step 1206 may be fused together, and the target region may be acquired.
  • the overlapping region of the dynamic segmentation region with the static segmentation region may be determined as the central region of the target region.
  • the central region may be determined as the nodule region.
  • the region located inside of the dynamic segmentation region but outside of the static segmentation region may be determined as the boundary region, wherein nodules need to be further identified.
  • the region located outside of the dynamic segmentation region may not be determined as the target region.
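• a minimal sketch of this fusion, with the regions represented as boolean masks of the same shape (an assumption of this sketch), is given below.

    def fuse_regions(dynamic_region, static_region):
        central = dynamic_region & static_region      # overlap: the nodule (central) region
        boundary = dynamic_region & ~static_region    # inside the dynamic region only: to be further identified
        target = dynamic_region                       # everything outside the dynamic region is excluded
        return target, central, boundary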
  • the coarse segmentation of lung may be based on a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof.
• step 1201, step 1202, step 1203, step 1204, step 1205, step 1206 and step 1207 may be performed sequentially in an order other than that described above in FIG. 12.
  • Step 1201, step 1202, step 1203, step 1204, step 1205, step 1206 and step 1207 may be performed concurrently or selectively.
  • Step 1201, step 1202, step 1203, step 1204, step 1205, step 1206 and step 1207 may be merged as a single step or divided into a number of steps.
  • One or more other operations may be performed before/after or in performing step 1201, step 1202, step 1203, step 1204, step 1205, step 1206 and step 1207.
  • a pre-process may be performed after the step 1201 and before the step 1202.
  • FIG. 13 is a flowchart illustrating a process of a fine segmentation of lung according to some embodiments of the present disclosure.
  • the fine segmentation of lung may be performed by making use of one or more classifiers to classify and label each voxel in the boundary regions.
  • An ROI may be determined in step 1301.
  • the ROI may have been determined in the processing of the coarse segmentation that is described elsewhere in the present disclosure, or be determined again.
• filtering may be performed on the images in the ROI, and a feature vector image may be acquired.
  • the window of the filter may be proportional to the length of the long axis.
  • the filtering may include Butterworth filter, gradient filter, Gabor filter, Volterra filter, Hessian filter, Chebyshev filter, Bessel filter, Elliptic filter, Constant k filter, m-derived filter, filter based on histogram, or the like, or any combination thereof.
• the feature vector image may be combined with the weights of the feature vector, which may have been trained offline, and an LDA probability field image may be acquired.
  • a region growing on the LDA probability field image may be performed, acquiring a parting segmentation region with labels of classifiers.
• the region growing may be based on the LDA probability field, and may be performed as follows: the points of the long axis determined by the user may be taken as seed points, and region growing may be performed with one or more relatively restrictive initial thresholds; a segmentation region grown based on the LDA probability field image may cover at least part of the long axis, and the length of the long axis covered by the segmentation region may be assessed based on the initial thresholds; if the assessment fails, the thresholds may be eased and the assessment may continue; if the assessment succeeds, an expansion operation with restricted thresholds may be performed and a segmentation region with the labels of the classifier may be acquired.
• step 1301, step 1302, step 1303 and step 1304 may be performed sequentially in an order other than that described above in connection with FIG. 13.
  • Step 1301, step 1302, step 1303 and step 1304 may be performed concurrently or selectively.
  • Step 1301, step 1302, step 1303 and step 1304 may be merged as a single step or divided into a number of steps.
  • One or more other operations may be performed before/after or in step 1301, step 1302, step 1303 and step 1304.
  • the parting segmentation regions acquired in step 1304 may be fused together with the target region, which may be part of process of the fine segmentation of lung.
  • FIG. 14 illustrates a flowchart of an image processing method according to some embodiments of the present disclosure.
  • the processing method may include an acquisition process 1401, a pre-processing process 1402, and a rendering process 1403.
  • one or more steps may be omitted. In some other embodiments, one or more steps may be executed repeatedly.
• the image data may be acquired from the input/output unit 201 as described elsewhere in the present disclosure.
  • the data may be an image data, an image mask, a parameter data, or the like, or any combination thereof.
  • the image data may be an initial image data or an image data after treatment.
  • the treatment may be an image segmentation.
  • the image may be a 2D one or a 3D one.
  • the image may be acquired from any medical imaging system as described elsewhere in the present disclosure.
  • the image mask may be a binary mask to shade a specific area of an image needed to be processed. The image mask may be used to supply a screen, extract an area of interest, and/or make a specific image.
  • the parameter may be some information input by a user or an external source.
  • the parameter may include a threshold, a CT value, a region of interest, a number of iteration times, a seed point, an equation, an algorithm, an instruction, or the like, or any combination thereof.
  • the image data may be pre-processed before rendering in some embodiments.
  • the pre-processing step 1402 may include a segmentation, a storage, a processing after segmentation, or the like, or any combination thereof.
  • the method of a segmentation may include a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof.
  • the segmentation method may be performed in a manual type, a semi-automatic type or an automatic type.
  • different components may be stored in different storage modules, and then be rendered.
  • the storage module may include a CPU memory, a GPU video memory, a RAM, a ROM, a hardware, a software, a tape, a CD, a DVD, or the like, or any combination thereof.
  • the processing after segmentation may include generating a boundary data of the segmentation information.
• the pre-processing step 1402 may not be necessary, and the rendering step 1403 may be executed directly after the acquiring step 1401.
• the image data after pre-processing may be rendered.
  • the rendering strategy may include a direct volume rendering, a volume intensity projection (VIP) , a maximum intensity projection (MIP) , a minimum intensity projection (Min-IP) , a hardware-accelerated volume rendering, an average intensity projection (AIP) , or the like, or any combination thereof.
  • the above description about the image processing method is merely an example according to some embodiments of the present disclosure.
  • the form and details of the image processing method may be modified or varied without departing from the principles of the present disclosure.
  • the image data acquired in step 1401 may be three-dimensional data, two-dimensional data, one-dimensional data, or the like, or any combination thereof.
  • the pre-processing step 1402 may include any processing method to improve the image.
  • different components or tissues may be rendered according to different specific implementation scenarios. The modifications and variations are still within the scope of the present disclosure described above.
  • FIG. 15 illustrates an image processing system according to some embodiments of the present disclosure.
• the image processing system 202 may include a pre-process unit 1501 and a storage/rendering unit 1502. In some other embodiments, the image processing system 202 may also be integrated, connected or coupled to an input/output unit 201, a data processing system 202.
  • the input/output unit 201 may be configured to input an image data, input a user instruction, output an image and/or data, display an image, or the like, or any combination thereof.
  • the input/output unit 201 may include a data acquisition module, a display module, a user input module, or the like, or any combination thereof.
  • the input/output unit 201 may include a mouse, a keyboard, a stylus, a touch screen, a sensor, an interface, a microphone, a wireless communication, a cloud, or the like which may detect input, or any combination thereof.
  • the pre-process unit 1501 may be configured to pre-process the image data before rendering.
  • the pre-process unit 1501 may be a segmentation module, a control module, a storage module, or the like, or any combination thereof.
  • the storage/rendering unit 1502 may be configured to render and/or store the image data.
  • the storage/rendering unit 1502 may further include a rendering module and a storage module. Merely by way of example, the rendering module and the storage module may be a same module to execute rendering and storage simultaneously.
  • the same module may include a Central Processing Unit (CPU) , a Graphics Processing Unit (GPU) , an Application-Specific Integrated Circuit (ASIC) , an Application-Specific Instruction-Set Processor (ASIP) , a Physics Processing Unit (PPU) , a Digital Signal Processor (DSP) , a Field Programmable Gate Array (FPGA) , a Programmable Logic Device (PLD) , a Controller, a Microcontroller unit, a Processor, a Microprocessor, an ARM, or the like, or any combination thereof.
  • CPU Central Processing Unit
  • GPU Graphics Processing Unit
  • ASIC Application-Specific Integrated Circuit
  • ASIP Application-Specific Instruction-Set Processor
  • PPU Physics Processing Unit
  • DSP Digital Signal Processor
  • FPGA Field Programmable Gate Array
  • PLD Programmable Logic Device
  • the input/output unit 201 may be any input and/or output systems such as an image detector.
  • the input/output unit 201 may be omitted for a data processing system.
  • the rendering module and the storage module may be the same module or may also be two different ones. The modifications and variations are still within the scope of the present disclosure described above.
  • FIG. 16 illustrates a flowchart of an image processing method according to some embodiments of the present disclosure.
  • FIG. 16 is an exemplary embodiment to illustrate the FIG. 14.
  • the image processing method may include an acquisition process 1601, a segmentation process 1602, a storage process 1603, a rendering process 1604, and a display process 1605.
  • In step 1601, data may be acquired.
  • the data may be information including data, a number, a text, an image, a voice, a force, a model, an algorithm, a software, a program, or the like, or any combination thereof.
  • the data acquired in step 1601 may include a data set of a three-dimensional image.
  • the data set of the three-dimensional image may include a voxel.
  • the voxel herein may be the smallest distinguishable unit in the three-dimensional segmentation.
  • a target region may be determined during an interaction with a user.
  • the target region may be determined by an input information from the user and may also be determined by a given region of the user.
  • the input information of the user may be imported through an input/output unit.
  • the input/output unit may be a peripheral device including a mouse, a pen, a keyboard, a stylus, a touch screen, a sensor, an interface, a microphone, a wireless communication, a cloud, or the like which may detect input, or any combination thereof.
  • the user may depict a boundary of the target region on a display by a mouse or a pen.
  • the target region may be a removable target region, a reserved target region, or their combination.
  • a corresponding loading content may be selected according to a demand of a user or a type of an executing task.
  • the executing task may be the removable region, the reserved region, or the like, or any combination thereof.
  • a medical visual field is described as an example of the image processing method.
  • the three-dimensional data set may be a medical scanning data of a subject acquired from any medical imaging system as described elsewhere in the present disclosure.
  • a portion of air may occupy more than one half of the whole data when executing the Direct Volume Rendering task. In practical applications, the air portion may be removed by means of a segmentation step since it is not of interest in some embodiments.
  • the different components in the data set may be identified according to a practical application demand in the VR task.
  • the data set may be divided into an air portion and a non-air portion.
  • the air portion may be regarded as a removable region
  • the non-air portion may be regarded as a reserved region.
  • the different components in the data set may not be identified in some embodiments.
  • MPR Multiple Planar Reconstruction
  • CPR Curved Planar Reconstruction
  • the air portion information may be used to perform an auxiliary diagnosis.
  • the air portion herein may be not removed.
  • the above description about the segmentation step is merely an example according to some embodiments of the present disclosure.
  • the form and details of the segmentation method may be modified or varied without departing from the principles of the present disclosure.
  • the two embodiments described above may be used in combination.
  • the different three-dimensional rendering task may be distributed to different hardware according to the characteristics of task and the demands of the user.
  • the air portion and non-air portion described above may be merely an example in the medical visual field.
  • the different components may be other subject including a substance, a tissue, an organ, an object, a specimen, a body, or the like, or any combination thereof.
  • the subjects may be identified and then rendered to reduce resource cost of hardware in the system.
  • the modifications and variations are still within the scope of the present disclosure described above.
  • the identification of the different components in the data set may include an identification of information from a voxel.
  • the information may be a one dimensional characteristic, a multi-dimensional characteristic, or the combination thereof.
  • the characteristic information may be gray level, mean gray level, intensity, texture, color, contrast, brightness, spatial property, or the like, or any combination thereof.
  • the information may further include a toleration range.
  • the voxels from different components may present different one-dimensional and/or multi-dimensional characteristics.
  • the different components may be segmented or identified by judging whether the characteristic information falls within a same toleration range.
  • the toleration range may be a threshold.
  • the voxels whose one-dimensional and/or multi-dimensional characteristics are within the threshold may be regarded as a same component.
  • the identification method may be a segmentation including, e.g., a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof.
  • the segmentation method may be performed in a manual type, a semi-automatic type or an automatic type as described elsewhere in the present disclosure.
  • the three-dimensional image data after the identification may be managed by a corresponding space management algorithm.
  • the image data may be stored through a data structure including an array, a record, a union, a set, a graph, a tree, a class, or the like, or any combination thereof.
  • the space management algorithm may include an Octree technology, a K-Dimensional (KD) tree, a Binary Space Partition (BSP) tree, or the like, or any combination thereof.
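  • Merely by way of example, the Octree technology mentioned above may organize identified three-dimensional image data roughly as in the following minimal sketch; the class, the stopping size, and the label volume are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

class OctreeNode:
    """Recursively subdivide a labeled 3D volume until each block is homogeneous."""
    def __init__(self, labels: np.ndarray, origin=(0, 0, 0), min_size=4):
        self.origin = origin
        self.children = []
        unique = np.unique(labels)
        if len(unique) == 1 or min(labels.shape) <= min_size:
            # Homogeneous block (single label) or block too small to split further.
            self.label = unique[0] if len(unique) == 1 else None
            return
        self.label = None
        zc, yc, xc = (s // 2 for s in labels.shape)
        for z0, z1 in ((0, zc), (zc, labels.shape[0])):
            for y0, y1 in ((0, yc), (yc, labels.shape[1])):
                for x0, x1 in ((0, xc), (xc, labels.shape[2])):
                    sub = labels[z0:z1, y0:y1, x0:x1]
                    child_origin = (origin[0] + z0, origin[1] + y0, origin[2] + x0)
                    self.children.append(OctreeNode(sub, child_origin, min_size))

# Usage: build an octree over a toy label volume (0 = air, 1 = non-air).
labels = np.zeros((32, 32, 32), dtype=np.uint8)
labels[8:24, 8:24, 8:24] = 1
root = OctreeNode(labels)
```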
  • the different components of the subject may be stored in different storage modules.
  • the storage module may include an external storage or an internal storage.
  • the external storage may include a magnetic storage, an optical storage, a solid state storage, a smart cloud storage, a hard disk, a soft disk, or the like, or any combination thereof.
  • the magnetic storage may be a cassette tape, a floppy disk, etc.
  • the optical storage may be a CD, a DVD, etc.
  • the solid state storage may be a flash memory including a memory card, a memory stick, etc.
  • the internal storage may be a core memory including a Central Processing Unit (CPU) , a Graphics Processing Unit (GPU) , a Random Access Memory (RAM) , a Read Only Memory (ROM) , a Complementary Metal Oxide Semiconductor Memory (CMOS) , or the like, or any combination thereof.
  • CPU Central Processing Unit
  • GPU Graphics Processing Unit
  • RAM Random Access Memory
  • ROM Read Only Memory
  • CMOS Complementary Metal Oxide Semiconductor Memory
  • the above description about the storage method is merely an example according to some embodiments of the present disclosure.
  • the form and details of the storage method may be modified or varied without departing from the principles of the present disclosure.
  • the whole volume data and the non-air portion may be stored in a same storage module, such as a CPU memory, a GPU video memory, or the like.
  • the type of storage modules may be changed to any storage as described elsewhere in the present disclosure. The modifications and variations are still within the scope of the present disclosure described above.
  • the data in the storage modules may be rendered by a corresponding data processor after receiving a rendering instruction.
  • a CPU memory and a GPU video memory may be described in detail.
  • the image data stored in a memory may be loaded and/or processed by the CPU and the image data stored in a video memory may be loaded and/or processed by the GPU.
  • the GPU may be configured to perform a computing in a VR task to improve the execution efficiency.
  • the CPU may be configured to perform an MPR task or a CPR task involving a large amount of data, e.g., in the auxiliary diagnosis using the air portion information.
  • the rendering task of the CPU and the GPU may be separated and executed in different control threads in order to balance the load of rendering task and optimize the performance of system.
  • the control threads may be established corresponding to a CPU task queue and a GPU task queue, respectively.
  • the rendering task may be distributed to the CPU task queue or a GPU task queue to execute according to an instruction from the user.
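  • Merely by way of illustration, the separation of CPU and GPU rendering tasks into per-device task queues with their own control threads, as described above, might be organized along the lines of the following sketch; the queue names and the toy tasks are hypothetical placeholders:

```python
import queue
import threading

cpu_queue = queue.Queue()   # e.g., MPR/CPR tasks
gpu_queue = queue.Queue()   # e.g., VR (direct volume rendering) tasks

def worker(task_queue, device):
    """Control thread: take rendering tasks from one device's queue and run them."""
    while True:
        task = task_queue.get()
        if task is None:          # sentinel to stop the thread
            break
        task(device)              # a task is any callable taking the device name
        task_queue.task_done()

threading.Thread(target=worker, args=(cpu_queue, "CPU"), daemon=True).start()
threading.Thread(target=worker, args=(gpu_queue, "GPU"), daemon=True).start()

# Distribute tasks according to the (hypothetical) user instruction / task type.
def make_task(name):
    return lambda device: print(f"rendering {name} on {device}")

gpu_queue.put(make_task("VR"))
cpu_queue.put(make_task("MPR"))
cpu_queue.join()
gpu_queue.join()
```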
  • the rendering task of the CPU and the GPU may share a same control thread.
  • the GPU, when executing the VR task, may execute complex calculations. At the same time, the resource occupancy rate of the CPU may be kept low enough to serve multi-user and multi-task requests simultaneously.
  • for the MPR or the CPR task, in order to ensure execution efficiency and cache consistency, several tasks may be executed in sequence while limiting the occupancy rate of the CPU. The CPU may support requests of other tasks at the same time.
  • the above description about the rendering method is merely an example according to some embodiments of the present disclosure.
  • the form and details of the rendering method may be modified or varied without departing from the principles of the present disclosure.
  • the image rendering task may be executed on some other hardware, not limited to the CPU and the GPU.
  • it may be an ASIC, an ASIP, a PPU, a DSP, or the like, or any combination thereof.
  • the modifications and variations are still within the scope of the present disclosure described above.
  • a rendering result may be collected and/or transmitted through a unified criterion.
  • the rendering result may be collected through a unified framework and the rendering result may be transmitted to an interface through a unified port.
  • the interface herein may be any displayable device and is not limited herein.
  • the rendering result may be collected and/or transmitted through a diverse criterion.
  • the above description about the data processing method is merely an example according to some embodiments of the present disclosure.
  • the form and details of the data processing method may be modified or varied without departing from the principles of the present disclosure.
  • the image data after identification may be rendered immediately without saving in a storage.
  • the storage and processor may be the same apparatus and may also be two different apparatuses.
  • the image processing method may be modified such as adding a pre-processing step before the identification, adding a post-processing step after the rendering task. The modifications and variations are still within the scope of the present disclosure described above.
  • the information for identification may be a one dimensional characteristic, a multi-dimensional characteristic, or the combination thereof.
  • the characteristic information may be gray level, mean gray level, intensity, texture, color, contrast, brightness, spatial property, or the like, or any combination thereof.
  • FIG. 17 illustrates an image identification method according to some embodiments of the present disclosure.
  • an identification process 1602 may further include a characteristic detecting process 1701, a judgment process 1702, and a labeling process 1703.
  • a characteristic detecting process 1701 may be omitted or exchanged.
  • a judgment process 1702 may be omitted or exchanged.
  • one or more steps may be executed repeatedly.
  • a characteristic of a voxel may be detected.
  • the characteristic information in FIG. 17 may be a one-dimensional characteristic, e.g., gray level, mean gray level, intensity, texture, color, contrast, brightness, spatial property, or the like, or any combination thereof.
  • the one dimensional characteristic in FIG. 17 is color and spatial property of a voxel.
  • the color may further be a gray value and the spatial property may further be an arrangement order of the voxel or a position of the voxel in the three dimensional space.
  • a same component may be judged.
  • a tolerance range may be set.
  • the tolerance range may be corresponding to the difference of a color value or a position of the voxel.
  • the voxels that are adjacent in position and whose color value difference is within the tolerance range may be regarded as belonging to a same component.
  • the voxels with a great difference in color value or a great distance in position, which is beyond the tolerance range, may be more likely to be judged as different tissues. So the data in the three-dimensional image may be identified according to a comprehensive comparison of the arrangement orders and color values of the voxels.
  • the different components may be labeled respectively.
  • a same label may be marked on the voxels with adjacent arrangement and color values within a preset tolerance range.
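  • Merely by way of example, the labeling idea above (adjacent voxels whose color/gray values differ by less than a preset tolerance receive the same label) may be sketched as the following flood fill over a 3D array; the tolerance value and the 6-neighborhood are assumptions for illustration:

```python
import numpy as np
from collections import deque

def label_components(volume: np.ndarray, tolerance: float = 10.0) -> np.ndarray:
    """Assign the same label to adjacent voxels whose values differ by < tolerance."""
    labels = np.zeros(volume.shape, dtype=np.int32)
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    current = 0
    for seed in zip(*np.nonzero(labels == 0)):     # visit every voxel once
        if labels[seed]:
            continue
        current += 1
        labels[seed] = current
        q = deque([seed])
        while q:
            z, y, x = q.popleft()
            for dz, dy, dx in neighbors:
                nz, ny, nx = z + dz, y + dy, x + dx
                if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                        and 0 <= nx < volume.shape[2] and labels[nz, ny, nx] == 0
                        and abs(float(volume[nz, ny, nx]) - float(volume[z, y, x])) < tolerance):
                    labels[nz, ny, nx] = current
                    q.append((nz, ny, nx))
    return labels
```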
  • FIG. 17 is merely an example according to some embodiments of the present disclosure.
  • the form and details of the segmentation or identification method may be modified or varied without departing from the principles of the present disclosure.
  • the different components may be judged according to other characteristics including gray level, mean gray level, intensity, texture, contrast, brightness, or the like, or any combination thereof.
  • the identification may be a complex image segmentation as described elsewhere in the present disclosure. The modifications and variations are still within the scope of the present disclosure described above.
  • the different components in the data set may be identified according to a multi-dimensional characteristic information.
  • the multi-dimensional feature information may be obtained by providing a pre-determined neighbor region. Statistics may be performed on a voxel and the voxels within the neighbor region to acquire a multi-dimensional characteristic.
  • the statistics may be based on calculating an information entropy of each voxel.
  • the information entropy may reflect a comprehensive characteristics of a gray information distribution within a certain region near the voxel in a statistical significance.
  • a threshold may be set to compare the difference of the information entropy between a voxel and the voxels within a certain region near it. If the difference of information entropy is less than the pre-determined threshold, the gray value distributions of the different voxels may be considered to be consistent with each other and the two voxels may be regarded as belonging to a same component. On the contrary, the gray value distributions may be considered to be inconsistent if the information entropy difference exceeds the preset threshold, and the two voxels may be regarded as belonging to different components.
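  • Merely by way of example, the entropy-based comparison above may be sketched as follows: a histogram-based information entropy is computed in a neighborhood around each of two voxels, and their difference is compared with a threshold. The neighborhood radius, bin count, and threshold are assumptions for illustration:

```python
import numpy as np

def local_entropy(volume: np.ndarray, center, radius: int = 2, bins: int = 32) -> float:
    """Information entropy of the gray-value distribution in a cubic neighborhood."""
    z, y, x = center
    patch = volume[max(z - radius, 0):z + radius + 1,
                   max(y - radius, 0):y + radius + 1,
                   max(x - radius, 0):x + radius + 1]
    hist, _ = np.histogram(patch, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def same_component_entropy(volume, v1, v2, threshold: float = 0.5) -> bool:
    """Two voxels are regarded as one component if their local entropies are close."""
    return abs(local_entropy(volume, v1) - local_entropy(volume, v2)) < threshold
```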
  • the statistics may be based on a Gaussian mixture model.
  • the feature of each voxel in the three-dimensional data set may be characterized by using several Gaussian models.
  • all the voxels in a same region may be fitted by several Gaussian models and their corresponding Gaussian parameters may be acquired.
  • the number of the Gaussian models may be determined according to the characteristic information of the voxel.
  • the Gaussian parameters may include an expectation, a variance, or their combination.
  • the Gaussian parameters of different voxels may be compared. The voxels whose Gaussian parameters are in a same range may be considered to be a same component.
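  • As a rough illustration of comparing Gaussian parameters (expectation, variance) between voxels, the sketch below fits a single Gaussian per neighborhood; the disclosure mentions several Gaussian models, so the single-Gaussian fit and the tolerance values here are simplifying assumptions:

```python
import numpy as np

def gaussian_params(volume: np.ndarray, center, radius: int = 2):
    """Estimate a single Gaussian (expectation, variance) from a voxel's neighborhood.

    A single Gaussian is used only to keep the sketch short; several Gaussian
    models could be fitted instead, as described in the text.
    """
    z, y, x = center
    patch = volume[max(z - radius, 0):z + radius + 1,
                   max(y - radius, 0):y + radius + 1,
                   max(x - radius, 0):x + radius + 1]
    return float(patch.mean()), float(patch.var())

def same_component_gaussian(volume, v1, v2, mean_tol=15.0, var_tol=50.0) -> bool:
    """Voxels whose Gaussian parameters fall in the same range form one component."""
    m1, s1 = gaussian_params(volume, v1)
    m2, s2 = gaussian_params(volume, v2)
    return abs(m1 - m2) < mean_tol and abs(s1 - s2) < var_tol
```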
  • the statistics may be based on a feature detection algorithm.
  • a Scale-Invariant Feature Transform (SIFT) is described as an example.
  • the voxels within a same region may be calculated by Gaussian mixture model under different resolutions to acquire a vector set.
  • the voxels may be compared to judge whether they belong to a same component.
  • the embodiments may also be used in some complex scenarios such as identifying some tissues with similar gray values (e.g., a lung and a heart) or some tissues coupled to each other while having different functions (e.g., a liver, a pancreas and a gallbladder) .
  • the statistics may be based on speeded up robust features (SURF) , principal components analysis speeded up robust features (PCA-SURF) , gradient location and orientation histogram (GLOH) , histogram of oriented gradients (HOG) , or the like, or any combination thereof.
  • SURF speeded up robust features
  • PCA-SURF principal components analysis speeded up robust features
  • GLOH gradient location and orientation histogram
  • HOG histogram of oriented gradients
  • FIG. 18 illustrates an image processing system according to some embodiments of the present disclosure.
  • the image processing system 202 may include a pre-process unit 1501 and a storage/rendering unit 1502.
  • the input/output unit 201 may be configured to input image data, input user instructions, output image and/or data, display image, or the like, or any combination thereof.
  • the input/output unit 201 may include an acquisition module 1801, a display module 1802, etc.
  • the pre-process unit 1501 may be configured to pre-process the image data.
  • the pre-process unit 1501 may include a segmentation module 1501-1, a storage control module 1501-2, a rendering control module 1501-3, or the like, or any combination thereof.
  • the segmentation module 1501-1 may be configured to perform an identification on the image data.
  • the storage/rendering unit 1502 may be configured to render and/or store the image data.
  • the storage/rendering unit 1502 may include a rendering module, a storage module, etc. Merely by way of example, the rendering module and the storage module may be an integrated module to execute both rendering and storage.
  • the integrated module may include a CPU, a GPU, or other image processors, or any combination thereof.
  • the acquisition module 1801 may be configured to acquire data.
  • the data herein may include a data set, a user instruction, or the like, or any combination thereof.
  • the data set of three-dimensional images may be medical scanning data of a subject as described elsewhere in the present disclosure.
  • the data set herein may include a voxel.
  • the voxel may be the smallest distinguishable unit in the three-dimensional segmentation.
  • the user instruction may include an instruction of rendering, an instruction of control, an instruction of acquisition, an instruction of display, or the like, or any combination thereof.
  • the display module 1802 may be configured to display the data.
  • the data herein may include data after rendering, data after storage, data after segmentation, data after acquisition, or the like, or any combination thereof.
  • the segmentation module 1501-1 may be configured to segment or identify different components in the data set.
  • the different components may include a removable region, a reserved region, or other portion, or the combination thereof according to user demands.
  • the storage control module 1501-2 may be configured to control data of different components to store in different storage modules.
  • the rendering control module 1501-3 may be configured to control the rendering after receiving a rendering instruction.
  • the rendering instruction may include rendering the data in corresponding storage modules.
  • the storage /rendering unit 1502 may be configured to store the data, or render the data, or their combination.
  • the above description about the image processing system is merely an example according to some embodiments of the present disclosure.
  • the form and details of the image processing system may be modified or varied without departing from the principles of the present disclosure.
  • the display module 1802 may be not necessary in some embodiments of the present disclosure.
  • the control modules of storage/the control modules of rendering may be not necessary in some embodiments of the present disclosure. The modifications and variations are still within the scope of the present disclosure described above.
  • the segmentation module 1501-1 may further include an identification block 1901 and a judgment block 1902.
  • the identification block 1901 may be configured to identify a characteristic information of voxel including a one dimensional characteristic, a multi-dimensional characteristic, or their combination.
  • the judgment block 1902 may be configured to provide a threshold and judge the characteristic information of the voxels.
  • the identification block 1901 may include a detection sub-block 2001, a judgment sub-block 2002, a labeling sub-block 2003.
  • the detection sub-block 2001 may be configured to detect the arrangement order and the color value of the voxels in the data set.
  • the judgment sub-block 2002 may be configured to judge the voxels of adjacent arrangement and the color value difference within a preset threshold as a same component.
  • the labeling sub-block 2003 may be configured to label different components respectively.
  • the identification block 1901 may further include an acquisition block.
  • the acquisition block may be configured to acquire the multi-dimensional feature information.
  • the acquisition block may be configured to provide a neighbor region and acquire the multi-dimensional feature information according to statistics based on voxels and voxels within the range of neighborhood region.
  • the statistics herein may include a statistics method based on a Gaussian mixture model, a statistics based on information entropy of the voxel, a statistics method based on time-invariant space-state feature model, or the like, or any combination thereof.
  • the weight in the combination of each statistics method may be varied according to different demands in the combination use in some embodiments of the present disclosure.
  • the storage control module 1501-2 may include a first storage control block and a second storage control block.
  • the first storage control block may be configured to load the data set of a three-dimensional image into a CPU memory.
  • the second storage control block may be configured to load the reserved region of the three-dimensional image into a GPU video memory.
  • storage/rendering unit 1502 may include a CPU memory, a GPU video memory, a RAM, a ROM, a hardware, a software, a tape, a CD, a DVD, an Application-Specific Integrated Circuit (ASIC) , an Application-Specific Instruction-Set Processor (ASIP) , a Physics Processing Unit (PPU) , a Digital Signal Processor (DSP) , a Field Programmable Gate Array (FPGA) , a Programmable Logic Device (PLD) , a Controller, a Microcontroller unit, a Processor, a Microprocessor, an ARM, or the like, or any combination thereof.
  • ASIC Application-Specific Integrated Circuit
  • ASIP Application-Specific Instruction-Set Processor
  • PPU Physics Processing Unit
  • DSP Digital Signal Processor
  • FPGA Field Programmable Gate Array
  • PLD Programmable Logic Device
  • the rendering control module 1501-3 may include a thread control block 2101, and a distribution block 2102.
  • the thread control block 2101 may be configured to establish a thread corresponding to the task queue of the CPU and the GPU, respectively.
  • the distribution block 2102 may be configured to analyze a rendering instruction of the user and distribute the rendering task to the task queue of the CPU or the GPU which execute image rendering.
  • FIG. 22 illustrates a flowchart of an image processing method according to some embodiments of the present disclosure.
  • the image processing method may include a process of acquiring segmentation information of volume data, a process of preprocessing the segmentation information and generating mesh data of the boundary information of each organ, a process of loading the mesh data and the volume data into a GPU, and an iterative rendering process.
  • a segmentation information of a volume data may be acquired.
  • the acquisition of the segmentation information may include segmenting the volume data into several human tissues and choosing a corresponding rendering strategy based on the segmentation information of different tissues.
  • the method of a segmentation may include a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof.
  • the segmentation method may be performed in a manual type, a semi-automatic type or an automatic type.
  • the segmentation information may be preprocessed and boundary data may be generated.
  • the boundary data may be a mesh grid data corresponding to boundary structures of each tissue within the volume data.
  • the boundary structure may be a two-dimensional boundary line, a three-dimensional boundary surface, etc.
  • the boundary structure of the mesh grid data may include a triangle or quadrate, or an irregular shape, or any combination thereof.
  • the triangle or quadrate may be formed by connecting three or four corresponding volume data points.
  • the mesh grid data may include information, e.g., a corresponding coordinate of each point therein, a normal vector of the plane where the grid is located, structure information of adjacent volume data, or the like, or any combination thereof.
  • the boundary structure of the mesh grid data may be comprised of other shapes such as a pentagon, a hexagon, or the like, or any combination thereof.
  • the tissue herein may be not limited to human tissues or organs, other components such as air, water, or the like may be tissues as well. The modifications and variations are still within the scope of the present disclosure described above.
  • the preprocessing of the segmentation information may include providing an observation point, providing observation light of different directions from the observation point.
  • the observation light may intersect the boundary structure at several intersection points.
  • the intersection points in a same depth may be located in a same depth surface.
  • the depth may be the distance between the intersection point and the observation point.
  • a tissue through which light is transmitted may be described as an example.
  • a region of interest may be marked as Mask_ID_1, Mask_ID_2, Mask_ID_3, etc.
  • a region of disinterest may be marked as Mask_ID_Default.
  • the tissues may include a tissue 2301, a tissue 2302 and a tissue 2303 which locates therebetween.
  • the tissue 2301 and tissue 2302 may be regarded as a region of interest and may be marked as Mask_ID_1, Mask_ID_2, respectively.
  • the tissue 2303, which is not of interest, may be marked uniformly as Mask_ID_Default.
  • the light may start from a point O, and passes through the tissues along the directions as shown in the FIG. 23.
  • the light may intersect with the boundaries of the tissue 2301 and the tissue 2302 at the points from number 1 to number 12.
  • Numbers 1-12 may represent the twelve intersections, respectively. In some embodiments, the intersections 1-12 may be acquired by multiple passes based on a Depth Peeling algorithm.
  • a mesh structure may be generated according to the edge of volume data in a Mask_ID.
  • the mesh structure may include three/four point coordinates of the triangular/quadrilateral facets, a normal vector of a plane, a Mask_ID information of an adjacent volume data in a three-dimensional space, or the like, or any combination thereof.
  • the adjacent volume data in the three-dimensional space may be volume data which is on the top, bottom, left and right of the present volume data.
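  • Merely by way of example, the content of the mesh structure described above (facet vertex coordinates, the normal vector of the facet plane, and the Mask_ID of the adjacent volume data on either side of the boundary) could be represented roughly as follows; the field names are illustrative only:

```python
from dataclasses import dataclass
from typing import List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class MeshFacet:
    vertices: List[Point3D]      # three (triangle) or four (quadrate) point coordinates
    normal: Point3D              # normal vector of the plane the facet lies in
    mask_id_front: str           # Mask_ID of the adjacent volume data before the boundary
    mask_id_back: str            # Mask_ID of the adjacent volume data after the boundary

# Example: a facet between the region of disinterest and the region Mask_ID_1.
facet = MeshFacet(
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    normal=(0.0, 0.0, 1.0),
    mask_id_front="Mask_ID_Default",
    mask_id_back="Mask_ID_1",
)
```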
  • the intersection point 1 and 2 as shown in FIG. 23 may be described as an example.
  • the Mask information of a volume data point before the intersection point 1 may be Mask_ID_Default.
  • the Mask information of a volume data point after the intersection point 1 may be Mask_ID_1.
  • the Mask information of a volume data point before the intersection point 2 may be Mask_ID_1.
  • the Mask information of a volume data point after the intersection point 2 may be Mask_ID_Default. The rest may be done in the same manner.
  • FIG. 23 is merely an example according to some embodiments of the present disclosure.
  • the form and details of the segmentation may be modified or varied without departing from the principles of the present disclosure.
  • other preprocessing method may be executed to acquire mesh data of the segmentation information. The modifications and variations are still within the scope of the present disclosure described above.
  • the volume data and the mesh data may be loaded into a GPU to execute a rendering.
  • the volume data and mesh data may be loaded into the GPU.
  • the mesh data may be processed by the Depth Peeling algorithm.
  • the Depth Peeling algorithm may be a multi-slice rendering method based on z-buffer.
  • the rendering of each slice may be based on the depth value of a previous slice.
  • the Depth Peeling algorithm may include multiple Passes.
  • the first Pass may be marked as Pass (1) and the ith Pass may be marked as Pass (i) .
  • a depth information and a position information of the Pass (i) acquired may be marked as Depth_i, Position_i, respectively.
  • the Depth_i and/or the Position_i may be stored in a Texture of a Frame Buffer Object (FBO) .
  • FBO Frame Buffer Object
  • the volume data may be rendered iteratively.
  • the rendering may include acquiring a maximum depth surface and a minimum depth surface.
  • the maximum depth surface may include the intersection points which may be at a farthest boundary surface along the light and in a same curved surface as well.
  • the minimum depth surface may include the intersection points which may be at a nearest boundary surface along the light and in a same curved surface as well.
  • the iterative rendering may be executed from the minimum depth surface to the maximum depth surface.
  • the iterative rendering may end at the maximum depth surface.
  • the iterative rendering may include judging whether a boundary structure acquired from a current iteration and the boundary structure acquired from the last iteration belong to a same boundary structure. If so, the corresponding volume data may be accessed and rendered according to the position information of the current and the last iteration. If not, the end of the current rendering may be regarded as the start of the next rendering sampling, and a boundary structure of the next iteration may be searched for.
  • the access of the position information may include consulting the boundary data, confirming the corresponding tissue type for rendering, choosing a corresponding rendering strategy and a color table.
  • the Depth Peeling Pass (1) may be executed to acquire a maximum depth (marked as Depth_max) and a minimum depth (marked as Depth_min) of the mesh data.
  • the Depth_max and the Depth_min may be stored in the Texture of the FBO (Texture_Depth_Min, Texture_Depth_Max) .
  • the Pass (i) may be executed to acquire Depth_i.
  • the Depth_i may be compared with the Depth_max. If Depth_i>Depth_max, the iteration may be ended. If not, the step 2206 may be executed.
  • the depth information Depth_i-1 and position information Position_i-1 may be acquired from Pass (i-1) .
  • the Pass (i) may be executed to acquire Depth_i and Position_i.
  • the Depth_i and the Position_i may be stored in the Texture (Texture_Depth_i, Texture_Position_i) .
  • In step 2208, the mesh data of the Pass (i) and Pass (i-1) may be judged as to whether they belong to a same tissue. If so, step 2209 may be executed. If not, it may return to step 2205.
  • In step 2209, the voxels of the volume data may be accessed from the Position_i-1 as the start to the Position_i as the end.
  • the rendering result may be fused with the last rendering step of the same tissue. Then it may return to the step 2205.
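  • Merely as a simplified, CPU-side illustration of the fusion logic of steps 2205-2210 (the actual computation runs on the GPU with Depth Peeling passes), the sketch below pairs the intersection points of each Mask_ID and collects the segments to be rendered and fused; the list of per-Pass intersections and the pairing rule are simplifying assumptions:

```python
def iterative_render(intersections, depth_max):
    """Simplified stand-in for steps 2205-2210: each Mask_ID boundary point
    alternately opens and closes a segment of that tissue; the voxels between
    an opening and a closing point are rendered and fused per Mask_ID."""
    results = {}        # Mask_ID -> list of rendered (start, end) position pairs
    open_at = {}        # Mask_ID -> position where the current segment was entered
    for depth, position, mask_id in intersections:    # one entry per Pass(i)
        if depth > depth_max:                         # step 2205: stop past Depth_max
            break
        if mask_id in open_at:                        # closing boundary of this tissue
            segment = (open_at.pop(mask_id), position)        # steps 2208-2209
            results.setdefault(mask_id, []).append(segment)   # step 2210: fuse
        else:                                         # opening boundary of this tissue
            open_at[mask_id] = position
    return results

# Toy example loosely following FIG. 23: (depth, position, Mask_ID) per Pass.
passes = [(1, 10, "Mask_ID_1"), (2, 20, "Mask_ID_1"),
          (3, 30, "Mask_ID_2"), (4, 40, "Mask_ID_2"),
          (5, 50, "Mask_ID_1"), (6, 60, "Mask_ID_1")]
print(iterative_render(passes, depth_max=6))
# {'Mask_ID_1': [(10, 20), (50, 60)], 'Mask_ID_2': [(30, 40)]}
```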
  • the fusion of the rendering result with the last rendering step of the same tissue may be illustrated by combining the two figures of FIG. 22 and FIG. 23.
  • the Depth Peeling algorithm may be used in the GPU.
  • the depth and position information Point_1 of the intersection point 1 may be acquired according to Pass (1) .
  • the depth and position information Point_2 of the intersection point 2 may be acquired according to Pass (2) .
  • the Point_1 and Point_2 may be compared, and the intersection point 1 and 2 may belong to a same tissue, marked as Mask_ID_1. So the different rendering strategies may be used from point 1 to point 2, the rendering result may be marked as Result_Mask_ID_1_1_2.
  • the Point_5 and Point_6 may be acquired according to Pass (5) and Pass (6) .
  • the intersection point 5 and 6 may belong to a same tissue, marked as Mask_ID_1.
  • the rendering result may be marked as Result_Mask_ID_1_5_6.
  • the Result_Mask_ID_1_1_2 and Result_Mask_ID_1_5_6 may be fused to acquire a Result_Mask_ID_1_Blend.
  • Pass (7) and Pass (8) may be executed in a similar way.
  • the Result_Mask_ID_1_7_8 may be fused with the Result_Mask_ID_1_Blend to acquire a new Result_Mask_ID_1_Blend.
  • Pass (11) and Pass (12) may be executed to acquire the Result_Mask_ID_1_11_12.
  • the Result_Mask_ID_1_11_12 may be fused with the Result_Mask_ID_1_Blend to acquire a new Result_Mask_ID_1_Blend.
  • the rendering result may be acquired as Result_Mask_ID_1_Blend.
  • the subsequent Pass (i) may be executed to acquire and store Depth_i and Position_i.
  • the depth value may be compared with the Depth_max. If Depth_i ≤ Depth_max, the Depth_i-1 and the Position_i-1 of the Pass (i-1) in the FBO may be read.
  • the image processing method may not end until the comparison result is Depth_i > Depth_max.
  • the mesh data of Pass (i) and Pass (i-1) may be judged as to whether they belong to a same boundary of a tissue. If so, the voxels of the volume data may be accessed from the Position_i-1 as the start to the Position_i as the end.
  • the rendering result may be acquired by different rendering strategies. If not, it may return to the step 2205.
  • the rendering strategy may include a direct volume rendering, a volume intensity projection (VIP) , a maximum intensity projection (MIP) , a minimum intensity projection (Min-IP) , a hardware-accelerated volume rendering, an average intensity projection (AIP) , or the like, or any combination thereof.
  • VIP volume intensity projection
  • MIP maximum intensity projection
  • Min-IP minimum intensity projection
  • AIP average intensity projection
  • the iterative rendering is merely an example according to some embodiments of the present disclosure.
  • the form and details of the iterative rendering may be modified or varied without departing from the principles of the present disclosure.
  • the iterative rendering of tissues may be executed without the fusion step.
  • one or more steps may be omitted.
  • one or more steps may be executed repeatedly. The modifications and variations are still within the scope of the present disclosure described above.
  • FIG. 24 is a flowchart illustrating a process of post segmentation processing according to some embodiments of the present disclosure.
  • an image may be acquired.
  • the image may be a binary image (mask) after a 2D segmentation or a 3D segmentation.
  • the image may be acquired based on techniques including DSA (digital subtraction angiography) , CT (computed tomography) , CTA (computed tomography angiography) , PET (positron emission tomography) , X-ray, MRI (magnetic resonance imaging) , MRA (magnetic resonance angiography) , SPECT (single-photon emission computerized tomography) , US (ultrasound scanning) , or the like, or a combination thereof.
  • DSA digital subtraction angiography
  • CT computed tomography
  • CTA computed tomography angiography
  • PET positron emission tomography
  • X-ray X-ray
  • MRI magnetic resonance imaging
  • MRA magnetic resonance angiography
  • SPECT single-photon emission computerized tomography
  • US ultrasound scanning
  • the multimodal techniques may include CT-MR, CT-PET, CE-SPECT, DSA-MR, PET-MR, PET-US, SPECT-US, TMS (transcranial magnetic stimulation) -MR, US-CT, US-MR, X-ray-CT, X-ray-MR, X-ray-portal, X-ray-US, Video-CT, Video-US, or the like, or any combination thereof.
  • a region of interest may be extracted from the image, and the ROI may be partitioned into three regions, a first region, a second region, and a third region.
  • the first region may be determined as a target tissue (e.g., a bone tissue) . Merely by way of example, for a binary image, the first region may be constituted by all pixels whose value is 1.
  • the characteristic value of the first region may be denoted by T_high.
  • the second region may be the boundary region of the target tissue.
  • the characteristic value of the second region may be denoted as T_low, which is relatively small compared with T_high.
  • the characteristic value of the third region may be denoted as T_i.
  • a first method may be used to calculate the characteristic value T_i of a pixel in the third region.
  • the pixel value may fluctuate in a range.
  • the pixel value of a medical image may correlate to the CT value of the tissue.
  • the CT value may be used to indicate the extent of attenuation when the X-ray transverses a tissue.
  • different tissues may have different CT values that may fluctuate in a range.
  • the CT value of bones may be equal to or less than 1000 HU
  • the CT value of soft tissues may range from 20 to 70 HU
  • the CT value of water may be 0, or ranging from -10 HU to 10 HU. Therefore, the CT value may be used to determine the type of tissues.
  • the first region may be treated as analogous to a high-temperature field, and may have a fixed characteristic value T_high.
  • the second region may be treated as analogous to a low-temperature field, and may have a fixed characteristic value T_low.
  • T_low may be less than T_high.
  • T_high - T_low > 100.
  • the characteristic value may be the brightness of a pixel.
  • the characteristic value T_i of each pixel in the third region may be calculated to determine whether the pixel belongs to the target tissue so that the target tissue may be filled.
  • the pixels of the third region may be assessed based on a second method. Specifically, the characteristic value T_i of the pixels may be assessed. If the assessment result is positive, the pixels may be extracted for filling the target tissue. If the assessment result is negative, the pixels may be assigned the value of the background.
  • FIG. 25 is a flowchart illustrating a process of post segmentation processing according to some embodiments of the present disclosure.
  • an image may be acquired.
  • the image may be a binary image that is generated based on segmentation methods including a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof.
  • the segmentation method may be performed in a manual type, a semi-automatic type or an automatic type.
  • an ROI may be selected, and the ROI may be partitioned into three regions, a first region, a second region, a third region.
  • the ROI may be selected either manually or automatically.
  • the ROI may be a region of an image that lacks some information, for example, boundary information.
  • the first region may be the region with high brightness, and the pixel values of the first region may be 1.
  • the second region may be the boundary region that is acquired automatically.
  • the rest may be the third region.
  • In step 2503, the characteristic value T_i of each pixel in the third region may be calculated based on a diffusion-type equation (Equation 13) , in which:
  • T_low is a constant; as diffusion effects are neglected, λ may be 1 in some embodiments.
  • T_i^(n+1) (t) is the characteristic value of pixel i in the (n+1) th iteration at moment t.
  • T_i may be the characteristic value of pixel i
  • i may be a natural number.
  • x_i may be the space coordinate of pixel i in the third region.
  • T_low may be the characteristic value of the second region.
  • T_high of the first region may be defined as 200, and T_low of the second region may be defined as 0.
  • the image may be filled based on the calculation of the characteristic value of the third region.
  • T_high - T_low > 100.
  • T_i may be iteratively calculated based on an initial value of T_low.
  • In step 2504, the difference between the characteristic value in the (n+1) th iteration and the characteristic value in the nth iteration may be assessed, i.e., whether T^(n+1) - T^n < A. In some embodiments, A ≤ 10^-6. It should be noted that A may be set based on experience. If the difference is less than A, step 2507 may be performed, and the characteristic value of pixel i may be assigned as T_i^(n+1) . Otherwise, step 2505 may be performed, T_i^n may be substituted by T_i^(n+1) , and the iterative calculation may continue.
  • the methods for solving Equation 13 may include the conjugate gradient method, the incomplete Cholesky conjugate gradient algorithm, the strong implicit method, the Gauss method, or the like, or any combination thereof.
  • In step 2505, the number of iterations may be assessed. If the number of iterations exceeds 200, the iteration may be terminated and step 2507 may be performed, in which the characteristic value T_i may be acquired. It should be noted that step 2504 and step 2505 are utilized to determine the convergence of Equation 13. In some embodiments of the present disclosure, if the condition in either step 2504 or step 2505 is satisfied, the iteration may be terminated. In some embodiments, the iteration may be terminated when the conditions in both step 2504 and step 2505 are satisfied.
  • the characteristic value T_i may be assessed. If T_i meets B < T_i < T_high, the pixel corresponding to T_i may be extracted and the region to be filled may be determined. Otherwise, the pixel corresponding to T_i may be assigned the value of the background of the image in step 2509. Taking the binary image as an example, the pixel may be assigned a value of 0. It should be noted that the threshold B is adjustable. B may be adjusted under different conditions.
  • In step 2510, a filled image based on the characteristic value T_i may be generated.
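  • Since Equation 13 is not reproduced in this text, the following is only a hedged sketch of the temperature-field analogy of steps 2503-2510: it assumes a standard iterative heat-diffusion (Jacobi) relaxation with fixed values T_high in the first region and T_low in the second region; the update rule, the 4-neighborhood, and the default threshold B are assumptions, while the limits of 200 iterations and B < T_i < T_high follow the steps described above:

```python
import numpy as np

def fill_region(first, second, third, t_high=200.0, t_low=0.0,
                a_tol=1e-6, max_iter=200, b_thresh=100.0):
    """Diffusion-style filling sketch; `first`, `second`, `third` are boolean
    2D masks of equal shape (first/second regions act as fixed boundaries)."""
    t = np.zeros(first.shape, dtype=float)
    t[first] = t_high                        # first region: fixed high "temperature"
    t[second] = t_low                        # second region: fixed low "temperature"
    t[third] = t_low                         # initial value of the third region
    for _ in range(max_iter):                # step 2505: at most 200 iterations
        prev = t.copy()
        # 4-neighbor average as the assumed diffusion update for the third region.
        avg = 0.25 * (np.roll(t, 1, 0) + np.roll(t, -1, 0) +
                      np.roll(t, 1, 1) + np.roll(t, -1, 1))
        t[third] = avg[third]
        t[first], t[second] = t_high, t_low  # keep the boundary conditions fixed
        if np.abs(t - prev).max() < a_tol:   # step 2504: |T^(n+1) - T^n| < A
            break
    # Step 2508: keep pixels of the third region whose value satisfies B < T_i < T_high.
    filled = first | (third & (t > b_thresh) & (t < t_high))
    return filled, t
```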
  • FIG. 26 is a block diagram of an image processing system according to some embodiments of the present disclosure.
  • the image processing system 202 may include an image acquisition module 2601, a distribution module 2602, a first determination module 2603, and a second determination module 2604.
  • the input/output unit 201 may communicate bidirectionally with the image processing system. The communication is described elsewhere in the present disclosure.
  • the image acquisition module 2601 may be configured to acquire images including 2D images, 3D images, and binary images generated based on the 2D images or 3D images.
  • the distribution module 2602 may be configured to partition an ROI into three regions, a first region, a second region, and a third region.
  • the first determination module 2603 may be configured to calculate the characteristic value T i of a pixel i in the third region.
  • the first determination module may further include a convergence block and an iteration block (not shown in the figure) .
  • the convergence block may be configured to determine the convergence of the characteristic value T_i of the third region. The determination may be based on a diffusion-type equation (Equation 13) , in which:
  • T_low is a constant; as diffusion effects are neglected, λ may be 1 in some embodiments.
  • T_i^(n+1) (t) is the characteristic value of pixel i in the (n+1) th iteration at moment t.
  • T_i may be the characteristic value of pixel i, and i may be a natural number.
  • x_i may be the space coordinate of pixel i in the third region.
  • T_i may be initialized as T_low. If T^(n+1) - T^n < A, the iteration may be terminated, and the characteristic value T_i of pixel i may be determined. Otherwise, the iteration may continue to proceed.
  • A ≤ 10^-6. It should be noted that A may be set based on experience.
  • the iteration block may be configured to perform an iteration. In some embodiments, T_i may be iteratively calculated based on an initial value of T_low. If the number of iterations exceeds 200, the iteration may be terminated and the characteristic value T_i may be acquired. The characteristic value T_i of all the pixels in the third region may be acquired based on the method provided herein.
  • the first determination module 2603 may terminate the iteration. In some embodiments, the iteration may be terminated when both the convergence criterion A of the convergence block and the iteration limit of the iteration block are met.
  • a threshold B may be set by the second determination module 2604. It should be noted that B may be either fixed or variable. B may be adjusted to adjust the boundary of a target tissue. In some embodiments, T_low < B < T_high.
  • the distribution module 2602 may partition an ROI into more than three regions.
  • FIG. 27 is a flowchart illustrating a process of spine location according to some embodiments of the present disclosure.
  • a CT image sequence may be loaded.
  • the CT image sequence may comprise a plurality of 2D slices.
  • the CT image sequence may be generated based on multi-slice spiral CT (MSCT) .
  • MSCT multi-slice spiral CT
  • the CT image sequence may be a portal venous phase image sequence, an arterial phase image sequence, etc.
  • the CT image sequence may be acquired by techniques including DSA (digital subtraction angiography) , CT (computed tomography) , CTA (computed tomography angiography) , PET (positron emission tomography) , X-ray, MRI (magnetic resonance imaging) , MRA (magnetic resonance angiography) , SPECT (single-photon emission computerized tomography) , US (ultrasound scanning) , or the like, or a combination thereof.
  • DSA digital subtraction angiography
  • CT computed tomography
  • CTA computed tomography angiography
  • PET positron emission tomography
  • X-ray X-ray
  • MRI magnetic resonance imaging
  • MRA magnetic resonance angiography
  • SPECT single-photon emission computerized tomography
  • US ultrasound scanning
  • the multimodal techniques may include CT-MR, CT-PET, CE-SPECT, DSA-MR, PET-MR, PET-US, SPECT-US, TMS (transcranial magnetic stimulation) -MR, US-CT, US-MR, X-ray-CT, X-ray-MR, X-ray-portal, X-ray-US, Video-CT, Video-US, or the like, or any combination thereof.
  • a preliminary spine region may be segmented based on the CT image sequence. Specifically, a preliminary spine region may be segmented in each 2D slice of the CT image sequence. The preliminary spine region may be segmented based on some known information, for example, the direction of a CT scan, the anatomical position of the spine in a human body, etc. In some embodiments of the present disclosure, the preliminary spine region may be segmented based on an x coordinate and a y coordinate.
  • the x coordinate of the preliminary spine region may be [H/4, 7H/8]
  • the y coordinate of the preliminary spine region may be [W/3, 3W/4] . W may denote the width of the CT image
  • H may denote the height of the CT image.
  • a preprocessing may be performed on the preliminary spine region to segment bone regions in step 2703.
  • the preprocessing may include threshold segmentation, region connection, or the like, or any combination thereof.
  • the threshold segmentation may be performed to segment the bone regions.
  • the grey value of a bone region may be greater than that of surrounding regions.
  • the region connection may be performed to connect the bone regions that are not continuous.
  • a dilation operation may be performed on small regions to connect the small regions that may constitute a big region.
  • a morphology algorithm may be used to eliminate small holes and disconnections.
  • a spine region may be determined. Specifically, a spine region may be determined in each 2D slice of the CT image sequence. In some embodiments of the present disclosure, among the bone regions segmented in step 2703, the bone region having the maximum area may be determined as the spine region.
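  • Merely by way of example, steps 2702-2704 in a single 2D slice might be sketched as follows using SciPy; the HU threshold, the structuring iterations, and the mapping of the [H/4, 7H/8] and [W/3, 3W/4] windows to rows and columns are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def spine_region_2d(slice_hu: np.ndarray, bone_threshold: float = 200.0) -> np.ndarray:
    """Threshold bones inside the preliminary window, connect nearby regions,
    and keep the connected bone region with the maximum area as the spine."""
    h, w = slice_hu.shape
    window = np.zeros_like(slice_hu, dtype=bool)
    window[h // 4: 7 * h // 8, w // 3: 3 * w // 4] = True   # preliminary spine region
    bones = (slice_hu > bone_threshold) & window             # threshold segmentation
    bones = ndimage.binary_dilation(bones, iterations=2)     # connect small fragments
    bones = ndimage.binary_fill_holes(bones)                 # eliminate small holes
    labeled, n = ndimage.label(bones)                        # connected bone regions
    if n == 0:
        return np.zeros_like(bones)
    areas = ndimage.sum(bones, labeled, index=range(1, n + 1))
    return labeled == (int(np.argmax(areas)) + 1)            # largest region = spine
```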
  • an anchor point may be determined and validated in every 2D slice of the CT image sequence.
  • the midpoint in the topmost row of the spine region determined in step 2704 may be determined as the anchor point.
  • the coordinate of the anchor point may be denoted as (x_med, y_top) , where y_top may denote the y coordinate of the topmost row, and x_med may denote the x coordinate of the midpoint.
  • the anchor point may be validated by checking whether y_top > H/4 and abs (x_med - W/2) < H/4.
  • y_top > H/4 may ensure that the y coordinate of the anchor point is located in the middle or lower portion of the image.
  • abs (x_med - W/2) may be used to control the extent of the shift of the x coordinate from the central position.
  • the width of the CT image may be denoted by W, while the height of the CT image may be denoted by H. Therefore, if the y coordinate of the anchor point is greater than H/4, and the shift of the x coordinate from the central position is less than H/4, the anchor point may be valid. Otherwise, the anchor point may be determined invalid.
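  • Merely as a small sketch of step 2705, the anchor point may be determined and validated roughly as follows; the row-index convention (row index increasing downward) is an assumption:

```python
import numpy as np

def anchor_point(spine_mask: np.ndarray):
    """Midpoint of the topmost row of the spine region, validated against
    y_top > H/4 and abs(x_med - W/2) < H/4 as described above."""
    h, w = spine_mask.shape
    rows, cols = np.nonzero(spine_mask)
    if rows.size == 0:
        return None
    y_top = rows.min()                          # topmost row containing spine pixels
    xs = cols[rows == y_top]
    x_med = int(np.median(xs))                  # midpoint of that row
    valid = (y_top > h / 4) and (abs(x_med - w / 2) < h / 4)
    return (x_med, y_top) if valid else None
```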
  • the anchor point of every 2D slice determined in step 2705 may be calibrated.
  • the calibration may include calibrating the anchor points whose coordinate is abnormal, calibrating the anchor points that lack coordinate, etc.
  • the anchor points may be arranged based on the slice number of the 2D slices.
  • the slice numbers may be denoted as [0, 1, ..., N] .
  • the number of anchor points may be denoted as M.
  • the z coordinate of the kth anchor point may be t_k ∈ [0, N] , k = 1, 2, ..., M, with t_i < t_j for i < j.
  • a sliding window may be defined, and the anchor points to be calibrated may be positioned in the sliding window sequentially.
  • the position mean value of the anchor points P medium may be calculated.
  • the size of the sliding window may be denoted as S_window.
  • the subscript of the starting point in the sliding window may be calculated by:
  • N_1 = max (k - S_window/2, 1) (Equation 16)
  • the subscript of the ending point in the sliding window may be calculated by:
  • N_2 = min (N_1 + S_window - 1, M) (Equation 17)
  • the coordinate sequence in the sliding window may be denoted as:
  • the mean value P_medium may be calculated as follows:
  • d_i may denote the Euclidean distance sum of the ith anchor point in P and the other anchor points.
  • d_min may denote the minimum Euclidean distance sum among all the Euclidean distance sums.
  • the coordinate corresponding to d_min may be assigned to P_medium.
  • D_M and D_P may be calculated to calibrate the anchor points.
  • D_M may denote the Euclidean distance between the current coordinate and the position mean value P_medium.
  • D_P may denote the Euclidean distance between the current coordinate and the previous coordinate.
  • a threshold TH may be defined to assess D_M and D_P. If D_M < TH and D_P < TH, then the current coordinate may be valid. Otherwise, if D_M is less than D_P, the current coordinate may be substituted by the coordinate corresponding to D_M, and vice versa.
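  • Merely by way of example, the sliding-window calibration described above might be sketched as follows; the window size and the value of TH are illustrative assumptions:

```python
import numpy as np

def calibrate_anchor(k, anchors, window_size=5, th=20.0):
    """Calibrate the k-th anchor point (1-based) following Equations 16-17,
    the P_medium selection, and the D_M/D_P check described above."""
    m = len(anchors)
    n1 = max(k - window_size // 2, 1)                 # Equation 16
    n2 = min(n1 + window_size - 1, m)                 # Equation 17
    window = np.asarray(anchors[n1 - 1:n2], dtype=float)
    # P_medium: the in-window anchor with the minimum sum of Euclidean distances.
    dist_sums = np.array([np.linalg.norm(window - p, axis=1).sum() for p in window])
    p_medium = window[int(np.argmin(dist_sums))]
    current = np.asarray(anchors[k - 1], dtype=float)
    previous = np.asarray(anchors[k - 2], dtype=float) if k > 1 else current
    d_m = np.linalg.norm(current - p_medium)          # distance to the position mean value
    d_p = np.linalg.norm(current - previous)          # distance to the previous coordinate
    if d_m < th and d_p < th:
        return current                                # the current coordinate is valid
    return p_medium if d_m < d_p else previous        # substitute the abnormal coordinate
```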
  • the coordinate may be calculated based on adjacent coordinates.
  • the fixed anchor point sequence may be denoted as (x (k) , y (k) , t_k) , k = 1, 2, ..., M.
  • the slices that lack an anchor point may be denoted as t ∈ [0, N] with t ∉ {t_1, t_2, ..., t_M} . If the missing anchor point is located at the beginning, i.e., t < t_1, the coordinate may be fixed as (x (1) , y (1) , t) ; if t > t_M, the coordinate may be fixed as (x (M) , y (M) , t) .
  • the coordinate may be fixed based on adjacent coordinates.
  • the adjacent anchor points of t may be t_k and t_k+1.
  • the coordinates corresponding to t_k and t_k+1 may be (x (t_k) , y (t_k) , z (t_k) ) and (x (t_k+1) , y (t_k+1) , z (t_k+1) ) .
  • the coordinate may be fixed by:
  • the fixed coordinate may be denoted as (x_t, y_t, t) .
  • FIG. 28 is a flowchart illustrating a process of 3D image segmentation according to some embodiments of the present disclosure.
  • a first image may be transformed into a first 2D image.
  • the pixel values of the 2D image may correspond to those of the first image, for example, correspond to the pixels that have the same spatial position in every 2D slice of the first image.
  • the first image may be a binary image that is generated based on segmenting an ROI out of a first CT image.
  • the ROI may be segmented from the first CT image by performing a threshold segmentation.
  • the first CT image may be the original CT image that is generated for diagnosis.
  • the first image may comprise a group of 2D slices. Therefore, the pixel values of the 2D image may correspond to those of each 2D slice of the first image spatially.
  • the first CT image is provided for the purposes of illustration, and not intended to limit the scope of the present disclosure.
  • the first CT image may be acquired based on techniques including digital subtraction angiography (DSA) , computed tomography (CT) , computed tomography angiography (CTA) , positron emission tomography (PET) , X-ray, magnetic resonance imaging (MRI) , magnetic resonance angiography (MRA) , single-photon emission computerized tomography (SPECT) , ultrasound scanning (US) , or the like, or any combination thereof.
  • the first CT image may be generated based on multimodal techniques.
  • the multimodal techniques may include CT-MR, CT-PET, CE-SPECT, DSA-MR, PET-MR, PET-US, SPECT-US, TMS (transcranial magnetic stimulation) -MR, US-CT, US-MR, X-ray-CT, X-ray-MR, X-ray-portal, X-ray-US, Video-CT, Video-US, or the like, or any combination thereof.
  • the first 2D image may be generated by projecting the first image onto an anatomical plane.
  • the anatomical plane may include a coronal plane, a sagittal plane, a transverse plane, etc.
  • the pixel values of the 2D image may correspond to those of each 2D slice of the first image.
  • a pixel value of the 2D image may be a sum of the spatially corresponding pixel values (e.g., the pixel values that are not 0) of all the 2D slices of the first image.
  • the projection of the first image may be based on maximum intensity projection (MIP) , temporal maximum intensity projection (tMIP) , minimum intensity projection (MiniP) , virtual endoscopic display (VED) , or the like, or any combination thereof.
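As a rough illustration of the projection described above, the following numpy sketch collapses the binary first image into a 2D image either by summing the spatially corresponding pixels of all slices or by a maximum intensity projection; the choice of projection axis (a coronal-like projection of an axial slice stack) is an assumption of this sketch.

import numpy as np

def project_sum(first_image, axis=1):
    """Sum the spatially corresponding pixels of every 2D slice of the
    binary first image (shape: slices x rows x cols) into one 2D image."""
    return np.asarray(first_image).astype(np.int32).sum(axis=axis)

def project_mip(volume, axis=1):
    """Maximum intensity projection (MIP) alternative mentioned in the text."""
    return np.asarray(volume).max(axis=axis)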
  • a first region may be acquired based on the 2D image.
  • the first region may be a high brightness region of the 2D image.
  • the first region may be acquired based on techniques including clustering, variable thresholding, etc.
  • a second region may be acquired.
  • the second region may be acquired based on the region of each 2D slice of the first CT image that corresponds to the first region spatially; the high brightness region within that region may be determined as the second region.
  • each 2D slice of the first CT image may have a high brightness region that may be determined as a second region; accordingly, each 2D slice may have its own second region.
  • a second CT image may be calibrated based on the second region.
  • the regions corresponding to the second region of the second CT image may be removed.
  • the second CT image may be acquired after a preliminary segmentation, for example, a lung image after preliminary segmentation.
  • the pixel values of the second CT image corresponding to the second region may be set as the pixel values of the background of the second CT image.
  • the second region of the first CT image may be determined as at least part of a skeleton that may lead to a false positive of a diagnosis.
  • the false-positive lesion areas may be reduced effectively.
  • the first CT image may be the original lung scan CT image
  • the second CT image may be acquired by performing segmentation on the first CT image.
  • the chest comprises bone tissues such as the spine and ribs
  • the second CT image may comprise a skeleton area such as a spine area.
  • the skeleton area may cause false positives in diagnosis.
  • the skeleton area of the second CT image may be removed so that diagnosis accuracy may be improved effectively.
  • FIG. 29 is a flowchart illustrating a process of 3D image segmentation according to some embodiments of the present disclosure.
  • a first CT image may be acquired.
  • the first CT image may be acquired by scanning a lung. The scan may be based on a regular CT scan, an enhanced CT scan, etc.
  • the first CT image is provided for the purposes of illustration, and not intended to limit the scope of the present disclosure.
  • the first CT image may be acquired based on techniques including digital subtraction angiography (DSA) , computed tomography (CT) , computed tomography angiography (CTA) , positron emission tomography (PET) , X-ray, magnetic resonance imaging (MRI) , magnetic resonance angiography (MRA) , single-photon emission computerized tomography (SPECT) , ultrasound scanning (US) , or the like, or any combination thereof.
  • the first CT image may be generated based on multimodal techniques.
  • the multimodal techniques may include CT-MR, CT-PET, CE-SPECT, DSA-MR, PET-MR, PET-US, SPECT-US, TMS (transcranial magnetic stimulation) -MR, US-CT, US-MR, X-ray-CT, X-ray-MR, X-ray-portal, X-ray-US, Video-CT, Video-US, or the like, or any combination thereof.
  • the first CT image may be segmented to generate a second CT image.
  • a preliminary segmentation may be performed on the first CT image to generate the second CT image.
  • a threshold segmentation may be performed on the first CT image to generate a first image.
  • the first image may be a binary image.
  • a high brightness region of the CT image may be extracted, and the high brightness region may be determined as a skeleton area.
  • the first image may be a binary image after threshold segmentation.
  • the threshold segmentation may be based on the grey value difference between the ROI (e.g., a lung) and the background. When the threshold is determined, the pixels having values greater than the threshold may be determined as the ROI, while the pixels having values less than the threshold may be determined as the background. It should be noted that the threshold may be selected based on experimental data.
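A minimal sketch of such a threshold segmentation is shown below; the threshold value is a placeholder that would, as noted above, be chosen from experimental data.

import numpy as np

def threshold_segmentation(ct_image, threshold):
    """Pixels with values greater than the threshold are treated as the ROI
    (value 1); the remaining pixels are treated as background (value 0)."""
    return (np.asarray(ct_image) > threshold).astype(np.uint8)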
  • a projection may be performed on the first image to generate a 2D image.
  • the first 2D image may be generated by projecting the first image onto an anatomical plane.
  • the anatomical plane may include a coronal plane, a sagittal plane, a transverse plane, etc.
  • the pixel values of the 2D image may correspond to those of each 2D slice of the first image.
  • a pixel value of the 2D image may be a sum of the spatially corresponding pixel values of all the 2D slices of the first image.
  • the projection of the first image may be based on maximum intensity projection (MIP) , temporal maximum intensity projection (tMIP) , minimum intensity projection (MiniP) , virtual endoscopic display (VED) , or the like, or any combination thereof.
  • a pixel value of the 2D image may be a sum of the spatially corresponding pixel values of the binary image
  • the spatially corresponding pixel values may be restricted to 1 or 0, or any combination thereof.
  • the pixel values of the binary image (the first image) may range from 0 to a value greater than the threshold (e.g., 255)
  • a pixel value of the 2D image may be a sum of the spatially corresponding pixel values of the binary image (e.g., CT values) .
  • a first region may be acquired based on the 2D image.
  • the first region may be the region of the 2D image that has the maximum brightness.
  • the first region may be acquired based on techniques including clustering (e.g., a Gaussian mixture model), variable thresholding, etc.
  • in some embodiments, a number of regions with high brightness (e.g., a spine area, a pelvic area) may be acquired by using the techniques mentioned above. Under this circumstance, the region among the multiple regions that has the most pixels may be determined as the first region.
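The following sketch illustrates one way to pick the first region under those circumstances: threshold the 2D projection at a brightness level (a placeholder here for the value a clustering or variable-threshold step would produce), label the connected high-brightness regions, and keep the one with the most pixels. scipy.ndimage is assumed to be available.

import numpy as np
from scipy import ndimage

def largest_bright_region(image_2d, brightness_threshold):
    """Return a boolean mask of the high-brightness connected region that
    contains the most pixels (e.g. the spine area in a coronal projection)."""
    bright = np.asarray(image_2d) >= brightness_threshold
    labels, n = ndimage.label(bright)
    if n == 0:
        return np.zeros_like(bright, dtype=bool)
    sizes = ndimage.sum(bright, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)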
  • a threshold segmentation may be performed on the first CT image to generate a second region.
  • the threshold segmentation may be performed on each 2D slice of the first CT image, specifically, on the region corresponding to the first region.
  • a second region may be acquired in each 2D slice of the first CT image.
  • the second region in each 2D slice may be a high brightness region.
  • the first region may be determined as a rough spine area.
  • the region corresponding to the first region of every 2D slice of the first CT image may be determined as the rough spine area.
  • the threshold segmentation may be performed on the region to determine the real spine area, namely, the second region. It should be noted that the correspondence between the region of every 2D slice of the first CT image and the first region may be a spatial correspondence; merely by way of example, they correspond in spatial position. The threshold segmentation is described elsewhere in the present disclosure.
  • the second CT image may be calibrated based on the second region to acquire a calibrated second CT image.
  • the second CT image may have the same size and number of 2D slices as the first CT image.
  • the region of a 2D slice of the second CT image corresponding to the second region may be removed.
  • the pixel values of the region may be configured as those of the background of the second CT image.
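A minimal sketch of this calibration step, assuming the second region is available as a boolean mask with the same shape as the second CT image:

import numpy as np

def calibrate_second_ct(second_ct, second_region_mask, background_value=0):
    """Set the pixel values of the second CT image that correspond spatially
    to the second region (e.g. the spine) to the background value."""
    calibrated = np.asarray(second_ct).copy()
    calibrated[np.asarray(second_region_mask, dtype=bool)] = background_value
    return calibrated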
  • the method described in the present disclosure is compatible with methods known in the art.
  • a method known in the art may be used to perform a preliminary determination of a bone tissue, and the method described in the present disclosure may be used to perform a fine determination of the bone tissue to improve the accuracy of lung segmentation and the diagnosis of pulmonary nodules.
  • the method described herein may be used in the diagnosis of kidney stones so that the false positive may be reduced during the diagnosis.
  • FIG. 30 illustrates a block diagram of an image processing system according to some embodiments of the present disclosure.
  • the image processing system 202 may include a transformation module 3001, a first acquisition module 3002, a second acquisition module 3003, and a calibration module 3004.
  • the input/output unit 201 may communicate bidirectionally with the image processing system. The communication is described elsewhere in the present disclosure.
  • the transformation module 3001 may be configured to transform a first image into a 2D image.
  • the pixel values of the 2D image may correspond to those of the first image, for example, correspond to the pixels that have the same spatial position in every 2D slice of the first image.
  • the first image may be a binary image that is generated based on segmenting an ROI out of a first CT image.
  • the ROI may be segmented from the first CT image by performing a threshold segmentation.
  • the first CT image may be the original CT image that is generated for diagnosis.
  • the first image may comprise a group of 2D slices. Therefore, the pixel values of the 2D image may correspond to those of each 2D slice of the first image spatially.
  • the first acquisition module 3002 may be configured to acquire a first region.
  • the first region may be acquired based on the 2D image.
  • the first region may be a high brightness region of the 2D image.
  • the first region may be acquired based on techniques including clustering, variable thresholding, etc.
  • the second acquisition module 3003 may be configured to acquire a second region.
  • the second region may be acquired based on the first CT image.
  • the second region may be acquired based on the region of each 2D slice of the first CT image that corresponds to the first region spatially; the high brightness region within that region may be determined as the second region.
  • each 2D slice of the first CT image may have a high brightness region that may be determined as a second region; accordingly, each 2D slice may have its own second region.
  • the calibration module 3004 may be configured to calibrate a second CT image based on the second region.
  • the regions corresponding to the second region of the second CT image may be removed.
  • the second CT image may be acquired after a preliminary segmentation, for example, a lung image after preliminary segmentation.
  • the pixel values of the second CT image corresponding to the second region may be set as the pixel values of the background of the second CT image.
  • a calibrated second CT image may be acquired.
  • the calibration module 3004 may further include a pixel reset block (not shown in the figure) that may be configured to set the pixel values of the second CT image corresponding to the second region as the pixel values of the background of the second CT image.
  • the image processing system 202 may further include a determination module that may be configured to determine a high brightness region that has the most pixels as the first region.
  • For illustration purposes, some image processing embodiments relating to hepatic artery segmentation will be described below. It should be noted that the hepatic artery segmentation embodiments are merely for better understanding and not intended to limit the scope of the present disclosure.
  • FIG. 31 is a flowchart illustrating a hepatic artery segmentation method according to some embodiments of the present disclosure.
  • a CT data may be entered.
  • the CT data may be information including data, a number, a text, an image, a voice, a force, a model, an algorithm, a software, a program, or the like, or any combination thereof.
  • the CT data may be a CT image and a corresponding liver mask. In other embodiments, the CT data may be other information. It should be noted that the CT data is merely for illustration purposes; the data from other medical imaging systems as described elsewhere in the present disclosure may also be applicable to the hepatic artery segmentation method.
  • a seed point of the hepatic artery may be selected, searched or positioned.
  • the seed point selecting method may be manual or automatic.
  • the seed point may be selected by a criterion.
  • the selecting criterion may include judging whether a point (a pixel or a voxel) is in a certain grayscale range or evenly spaced on a grid or a square.
  • a first and second binary volume data may be generated based on the seed point selected in step 3102.
  • the first binary volume data may include a backbone and a rib connected therewith.
  • the second binary volume data may include a backbone, a rib connected therewith and a hepatic artery.
  • the first and second binary volume data may be generated by an image segmentation method.
  • the segmentation method may be any one as described elsewhere in the present disclosure.
  • a third binary volume data may be generated.
  • the third binary volume data may be acquired by removing the part that is the same as the first binary volume data from the second binary volume data.
  • the hepatic artery segmentation result may be generated through a 3D region growing in the third binary volume data based on the seed point selected in step 3102.
  • the image processing system may be variable, changeable, or adjustable based on the spirits of the present disclosure.
  • the hepatic artery may also be post-processed after the 3D region growing.
  • the post-processing method may be any one as described elsewhere in the present disclosure. However, those variations, changes and modifications do not depart from the scope of the present disclosure.
  • FIG. 32 will be described in detail as an exemplary embodiment of a hepatic artery segmentation based on a CT image.
  • the other medical systems may include Digital Subtraction Angiography (DSA), Magnetic Resonance Imaging (MRI), Magnetic Resonance Angiography (MRA), Computed Tomography Angiography (CTA), Ultrasound Scanning (US), Positron Emission Tomography (PET), Single-Photon Emission Computerized Tomography (SPECT), CT-MR, CT-PET, CE-SPECT, DSA-MR, PET-MR, PET-US, SPECT-US, TMS (transcranial magnetic stimulation) -MR, US-CT, US-MR, X-ray-CT, X-ray-MR, X-ray-portal, X-ray-US, Video-CT, Video-US, or the like, or any combination thereof.
  • the image processing method is not limited to a hepatic artery and may also be applicable to other subjects including an organ, a texture, a region, an object, a lesion, a tumor, etc.
  • the subject may include a head, a breast, a lung, a trachea, a pleura, a mediastinum, an abdomen, a large intestine, a small intestine, a bladder, a gallbladder, a triple warmer, a pelvic cavity, a backbone, extremities, a skeleton, a blood vessel, or the like, or any combination thereof.
  • a hepatic artery CT image data and its corresponding liver mask may be acquired in step 3201. Other information may also be input in this step as described elsewhere in the present disclosure. Then a seed point of the hepatic artery may be selected.
  • the seed point selecting process may be divided into step 3202 and step 3203. For illustration purposes, the seed point selecting process will be described in detail by an example of processing a hepatic artery CT image of N slices and its corresponding liver mask.
  • a slice for seed point selecting may be determined based on the hepatic sequence data of the image.
  • the number of such slices may be one or more.
  • an example of determining a slice for selecting the seed point may be described in detail below:
  • a starting slice and an ending slice of a liver may be searched from the liver mask and written as Ls and Le, respectively.
  • the maximum area MaxArea of a single liver slice may also be searched.
  • Vs is the first slice whose liver area is no less than MaxArea/2 when comparing a liver area from Ls to Le in the liver mask slice by slice.
  • Ve is the last slice whose liver area is no less than MaxArea/2 when comparing a liver area from Ls to Le in the liver mask slice by slice.
  • Area (i) may be the liver area of slice i in the liver mask.
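For illustration, the slice range [Vs, Ve] described above may be computed from the liver mask as in the following sketch (0-based slice indices; the mask layout slices x rows x cols is an assumption):

import numpy as np

def select_seed_slices(liver_mask):
    """Return (Vs, Ve): the first and last slices whose liver area is no less
    than half of the maximum single-slice liver area (MaxArea/2)."""
    mask = np.asarray(liver_mask, dtype=bool)
    areas = mask.reshape(mask.shape[0], -1).sum(axis=1)     # Area(i) per slice
    liver_slices = np.flatnonzero(areas > 0)
    if liver_slices.size == 0:
        return None
    ls, le = liver_slices[0], liver_slices[-1]              # starting/ending liver slices Ls, Le
    max_area = areas[ls:le + 1].max()                       # MaxArea
    candidates = np.flatnonzero(areas >= max_area / 2.0)
    return int(candidates[0]), int(candidates[-1])          # Vs, Ve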
  • the hepatic artery CT data may be selected and segmented slice by slice from Vs to Ve.
  • a connected region whose circular degree is no less than 0.7 may be searched for.
  • the connected region with the minimum area among the connected regions above may be determined as an artery, and the centroid thereof may be determined as a seed point.
  • an example of determining a seed point may be described in detail below:
  • the CT image data of slice i (i ∈ [Vs, Ve]) may be selected and processed by the following steps in turn until a seed point is selected and its CT value is calculated.
  • a segmentation may be performed based on an empirical value Tev and a coarse binary image including the hepatic artery target may be generated.
  • Tev is a CT value chosen from 70 to 110, i.e., 70 ≤ Tev ≤ 110.
  • in step c), an opening operation in morphology may be performed on the result from step b), and tiny connections in the image may be broken.
  • Ta may be a value chosen from 200 to 300 and Tb may be a value chosen from 1000 to 1200, i.e., 200 ≤ Ta ≤ 300 and 1000 ≤ Tb ≤ 1200.
  • a target region for selecting a seed point may be determined.
  • the starting row and the ending row of the target region may be written as Rs and Re, respectively.
  • the starting column and the ending column of the target region may be written as Cs and Ce, respectively.
  • the starting row and the ending row of the selecting region may be written as Rs’ and Re’ , respectively.
  • the starting column and the ending column of the selecting region may be written as Cs’ and Ce’ , respectively.
  • the starting row and the ending row of the target region may be searched vertically and horizontally in the binary image.
  • the center of the target region P (cX, cY) may be determined according to the equations below:
  • the starting row and the ending row of the selecting region may be determined by using an offset.
  • the offset may be used to expand the selecting scope, and it may increase as the number of slices for which seed selection has failed increases.
  • the equations to calculate the selecting region may be below:
  • a connected region whose circular degree is no less than 0.7 (i.e., Cd ≥ 0.7) in the selecting region may be searched out.
  • the connected region with the minimum area among the connected regions above may be determined as an artery, and the centroid thereof may be determined as a seed point. If no such seed point is found, the process may proceed to the next search.
  • a CT value of the seed point may be calculated after steps a) to f) .
  • the calculating method may include choosing a mean value of an n ⁇ n square whose center is the seed point.
  • n may be any natural number such as 3, 5, etc.
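The following sketch shows a simplified version of one seed-selection pass on a single slice. The empirical threshold Tev, the area bounds Ta and Tb, and the neighbourhood size n are illustrative values within the ranges stated above; the morphological opening and the restriction to the target/selecting region are omitted for brevity, and the circular degree is computed as 4*pi*area/perimeter**2, which is an assumption about how it is defined. scikit-image is assumed to be available.

import numpy as np
from skimage import measure

def select_seed(slice_hu, tev=90, ta=250, tb=1100, cd_min=0.7, n=5):
    """Return ((row, col), seed CT value) or None if no candidate is found."""
    coarse = np.asarray(slice_hu) >= tev                 # coarse binary image at Tev
    labels = measure.label(coarse)
    candidates = []
    for region in measure.regionprops(labels):
        if not (ta <= region.area <= tb) or region.perimeter == 0:
            continue
        circularity = 4.0 * np.pi * region.area / region.perimeter ** 2
        if circularity >= cd_min:                        # circular degree >= 0.7
            candidates.append(region)
    if not candidates:
        return None
    artery = min(candidates, key=lambda r: r.area)       # smallest qualifying region
    r, c = (int(round(v)) for v in artery.centroid)      # centroid becomes the seed point
    half = n // 2
    patch = np.asarray(slice_hu)[max(r - half, 0):r + half + 1,
                                 max(c - half, 0):c + half + 1]
    return (r, c), float(patch.mean())                   # CT value: mean of the n x n square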
  • Step 3204 and step 3205 may be used to generate a first binary volume data.
  • the first binary volume data may include a backbone and a rib connected therewith.
  • the first binary volume data may be generated by a threshold segmentation based on the CT value of the seed point.
  • the threshold segmentation may use a piecewise function below:
  • t is the CT value of the seed point.
  • Symbols a and b are an upper limit and a lower limit of the CT value in the segmentation, respectively.
  • T1, T2 and T3 may be pre-determined segmentation thresholds. Note that the piecewise function type and its parameters may vary according to specific implementation scenarios.
  • a coarse segmentation may be performed on the original CT image data.
  • the result after step 3204 may be a bone binary volume data including a bone and other tissue.
  • the bone binary volume data may be subdivided and re-segmented through a sequence slice superposition strategy.
  • the result after step 3205 may be a first binary volume data including a backbone and a rib connected therewith. For illustration purposes, an example of generating the first binary volume data of step 3205 may be described in detail below:
  • a starting row and an ending row in perpendicular direction in the target region of each slice within the bone binary volume data may be searched.
  • the sequence slices of the bone binary volume data may be subdivided into several regions. All parts including a backbone and a rib connected therewith in each slice may be segmented by using a sequence slice superposition strategy. Merely by way of example, the sequence slices may be subdivided into four region slices, written as [1, Ls], [Ls+1, Sz], [Sz+1, Le], and [Le+1, N], respectively, wherein Ls and Le are the starting slice and the ending slice of a liver, respectively, Sz is the slice that includes the seed point, and N is the number of slices of the original CT image data. Each region slice may be processed by the following steps:
  • the starting slice and the ending slice of the region slice are Lm and Ln, respectively.
  • the superposition starting slice dL may be assigned as Lm.
  • the region that has already been superimposed may be written as preA, and its area may be written as preSA.
  • the slice i may be chosen in turn to determine its selecting scope for a backbone area.
  • the starting row and the ending row of this target region may be written as Rps and Rpe, respectively.
  • the height and width of the binary image may be written as Height and Width, respectively.
  • starting row Rps’ , ending row Rpe’ , starting column Cps’ , and ending column Cpe’ of the backbone area may be calculated by the equations below:
  • a connected region in the backbone selecting scope may be searched and the area SA of the connected region may be calculated. If SA is no less than a pre-determined value AreaSize, the count value may be increased by 1, and the process may then proceed to search the next slice. Otherwise, the process may proceed to search the next slice directly.
  • AreaSize may be any value chosen from 300 to 600, i.e., 300 ≤ AreaSize ≤ 600.
  • if the count value is no less than a pre-determined slice number SliceSize, the data from the superposition starting slice dL to slice i may be superimposed.
  • the mean value of the starting row and ending row in the target region of the superimposing slices may be calculated.
  • the row range of the backbone in the superposition image may be calculated according to step b2) .
  • a sequence slice that has not been processed may then be superimposed directly.
  • the connected region in the superposition image may be regarded as the region of the backbone and a rib connected therewith.
  • the result from step b) may be expanded by one or more pixels or voxels in the perpendicular direction to reduce the interference in the 3D region growing process.
  • the number of the pixels or voxels may be any natural number, such as 2 or 3.
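A small sketch of this expansion step, assuming the backbone/rib result is a 3D boolean mask and that the perpendicular direction refers to the row (vertical) axis of each slice:

import numpy as np
from scipy import ndimage

def expand_backbone(backbone_mask, voxels=2):
    """Dilate the backbone/rib mask by the given number of voxels along the
    vertical (row) axis only, to reduce interference in later 3D region growing."""
    structure = np.zeros((1, 2 * voxels + 1, 1), dtype=bool)
    structure[0, :, 0] = True                      # a vertical rod of length 2*voxels+1
    return ndimage.binary_dilation(np.asarray(backbone_mask, dtype=bool),
                                   structure=structure)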
  • a second binary volume data including a backbone, a rib connected therewith and a hepatic artery may be generated.
  • the generating method may be a threshold segmentation for hepatic artery based on the CT value of the seed point.
  • the threshold segmentation may use a piecewise function below:
  • t is the CT value of the seed point.
  • Symbols c and d are an upper limit and a lower limit of the CT value in the segmentation, respectively.
  • T4, T5 and T6 may be pre-determined segmentation thresholds. Note that the piecewise function type and its parameters may vary according to specific implementation scenarios.
  • the image processing method may be variable, changeable, or adjustable based on the spirits of the present disclosure.
  • the order of generating the first binary volume data and the second binary volume data may be changed.
  • the second binary volume data may be generated before the first binary volume data or they may be at the same time.
  • a third binary volume data may be generated.
  • the third binary volume data may be acquired by removing the part that is the same as the first binary volume data from the second binary volume data.
  • the hepatic artery segmentation result may be generated through a 3D region growing in the third binary volume data based on the seed point.
  • the region growing may refer to any embodiments as described elsewhere in the present disclosure. Merely by way of example, some embodiments of region growing may be performed by judging whether a point (a pixel or a voxel) belongs to the same connected region as the seed point.
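A minimal sketch of such a 3D region growing is given below; the third binary volume can be obtained as the second volume minus the voxels it shares with the first one (e.g. second & ~first), and 6-connectivity is an assumption of this sketch.

import numpy as np
from collections import deque

def region_grow_3d(binary_volume, seed):
    """Collect every voxel of the binary third volume that is 6-connected to
    the seed point. binary_volume is indexed as (slice, row, col)."""
    vol = np.asarray(binary_volume, dtype=bool)
    grown = np.zeros(vol.shape, dtype=bool)
    seed = tuple(seed)
    if not vol[seed]:
        return grown
    grown[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            p = (z + dz, y + dy, x + dx)
            if all(0 <= p[i] < vol.shape[i] for i in range(3)) and vol[p] and not grown[p]:
                grown[p] = True
                queue.append(p)
    return grown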
  • a hepatic artery image may be generated.
  • the image may be output or saved according to specific implementation scenarios.
  • Step 3208 is a post-processing for the hepatic artery image generated from steps 3201 to 3207.
  • the post-processing method may be any one as described elsewhere in the present disclosure.
  • an exemplary embodiment of repairing the artery may be described in detail below and not intended to limit the scope of the present disclosure.
  • the mean value AM of CT value in the hepatic artery image may be calculated.
  • a connected region which is marked as an artery in each slice may be regarded as a seed point set.
  • a 3D region growing may be performed.
  • an example of step 2 may be described in detail below:
  • a connected region which is marked as an artery in each slice may be searched row by row and the mean value M of CT value in the connected region may be calculated.
  • Each point in the connected region may be regarded as a seed point and a region growing may be performed based on the seed point.
  • when the growing reaches a point that has not been marked as a target point, the growing may stop at that point unless its CT value satisfies a requirement.
  • the requirement may be that the CT value is no less than α·M and no greater than β·M.
  • Symbol α is a lower limit control coefficient whose value may satisfy a requirement, e.g., 0.9 ≤ α ≤ 1.
  • Symbol β is an upper limit control coefficient whose value may satisfy a requirement, e.g., 1 ≤ β ≤ 1.05.
  • a threshold segmentation may be performed on an edge of the connected region which is marked as an artery, to repair the artery.
  • Each slice in which an artery is marked may be traversed in units of an n×n square.
  • Each square may include a point which is marked as an artery, or a point without the mark, or their combination.
  • a mean value T of the CT values of the points with and without the mark may be calculated.
  • the points without the mark may be segmented by a threshold MaxT, and a point whose CT value is no less than MaxT may be marked as an artery.
  • MaxT may be the maximum of T and γ·AM.
  • Symbol γ is an edge expanding control coefficient which is no bigger than 1.
  • steps a) and b) may be repeated until the artery becomes replete.
  • the number of repetition may be set according to specific implementation scenarios. For example, the number of repetition may be 3, 4, 5, etc.
  • the tiny breaches may be connected by a closing operation in morphology.
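For illustration, the edge-repair loop for a single slice might look like the sketch below; gamma corresponds to the edge expanding control coefficient above, and the values of n, gamma, and the number of repetitions are placeholders. scipy.ndimage is assumed to be available for the final morphological closing.

import numpy as np
from scipy import ndimage

def repair_artery_slice(slice_hu, artery_mask, am, n=5, gamma=0.95, repeats=3):
    """Re-threshold every n x n square that mixes marked and unmarked points
    at MaxT = max(T, gamma * AM); unmarked points whose CT value is no less
    than MaxT are added to the artery. A closing connects tiny breaches."""
    hu = np.asarray(slice_hu, dtype=float)
    mask = np.asarray(artery_mask, dtype=bool).copy()
    rows, cols = hu.shape
    for _ in range(repeats):
        for r in range(0, rows, n):
            for c in range(0, cols, n):
                sq_hu = hu[r:r + n, c:c + n]
                sq_mask = mask[r:r + n, c:c + n]
                if sq_mask.any() and not sq_mask.all():   # square with and without marks
                    max_t = max(sq_hu.mean(), gamma * am) # MaxT = max(T, gamma * AM)
                    sq_mask |= sq_hu >= max_t             # mark new artery points in place
    return ndimage.binary_closing(mask)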
  • the image processing method may be variable, changeable, or adjustable based on the spirits of the present disclosure.
  • the steps may be added, deleted, exchanged, replaced, modified, etc.
  • the sub-steps in each step may have no such specific boundaries and they may be integrated.
  • the order of the steps in the image processing may be changed, e.g., the generation of the first and second binary volume data may be in a random order.
  • some steps such as the post-processing may be deleted.
  • those variations, changes and modifications do not depart from the scope of the present disclosure.
  • aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in an implementation combining software and hardware that may all generally be referred to herein as a "block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB. NET, Python or the like, conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN) , or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS) .

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

A method and system for image processing is provided. The technique includes acquiring data related to the processing, performing a pre-processing of the data, performing a segmentation of a subject, performing a post-processing of the result of the segmentation, and managing storage of the data. The post-processing includes a calibrating and a rendering of the data.

Description

A METHOD AND SYSTEM FOR IMAGE PROCESSING
CROSS-REFERENCE TO RELATED APPLICATIONS
 This application claims priority of Chinese Patent Application No. 201510526053.1 filed on August 25, 2015, Chinese Patent Application No. 201510224781.7 filed on May 5, 2015, Chinese Patent Application No. 201510216037.2 filed on April 30, 2015, Chinese Patent Application No. 201510390970.1 filed on July 6, 2015, Chinese Patent Application No. 201410720069.1 filed on December 2, 2014, Chinese Patent Application No. 201410840692.0 filed on December 29, 2014, Chinese Patent Application No. 201510310371.4 filed on June 8, 2015, and Chinese Patent Application No. 201510313660. X filed on June 9, 2015, the entire contents of each of which are hereby incorporated by reference.
TECHNICAL FIELD
 The present disclosure generally relates to image processing, and more particularly, to a method and system for image segmentation.
BACKGROUND
 Image processing has a significant impact in the medical field, such as Digital Subtraction Angiography (DSA), Magnetic Resonance Imaging (MRI), Magnetic Resonance Angiography (MRA), Computed Tomography (CT), Computed Tomography Angiography (CTA), Ultrasound Scanning (US), Positron Emission Tomography (PET), Single-Photon Emission Computerized Tomography (SPECT), CT-MR, CT-PET, CE-SPECT, DSA-MR, PET-MR, PET-US, SPECT-US, TMS (transcranial magnetic stimulation) -MR, US-CT, US-MR, X-ray-CT, X-ray-MR, X-ray-portal, X-ray-US, Video-CT, Video-US, or the like, or any combination thereof. An image segmentation (or "recognition," "classification," "extraction," "identification," etc.) may be performed to establish a realistic subject model by dividing or partitioning a medical image into one or more sub-regions. Segmentation is a procedure of image processing and a means to extract quantitative information relating to a target subject from a medical image. Different subjects may be segmented in different ways to improve the accuracy of the segmentation. The accuracy of an image segmentation may affect the accuracy of a disease diagnosis. Thus, it is desirable to improve the accuracy of image segmentation.
SUMMARY
 In one aspect of the present disclosure, an image segmentation method and apparatus are provided. The image segmentation method may include one or more of the following operations. Volume data of a medical image may be acquired. Seed points for bone growing may be acquired. One or more parameters for bone growing may be initialized or updated. The parameters may include, for example, a first threshold, a second threshold, and a boundary parameter. The first threshold may be fixed. The second threshold and the boundary parameter may be variable. Bone growing may be performed based on, for example, the first threshold, the second threshold, and the boundary parameter to acquire the pixels that may belong to or are associated with a bone tissue. The first threshold may be used to perform a preliminary determination of a bone tissue. The second threshold may be used to determine a suspicious bone tissue. The boundary parameter may be used to determine the boundary of a bone tissue. The increment of pixels between the nth bone growing and the (n-1) th bone growing in a bone tissue may be assessed. If the increment exceeds a predefined threshold, the parameters used in the (n-1) th bone growing may be used to perform a new bone growing. Otherwise, the parameters may be updated. The n may be an integer greater than or equal to 2.
 In some embodiments of the image segmentation, the bone growing performed based on the first threshold, the second threshold, and the boundary parameter to acquire the pixels that may belong to the bone tissue may include one or more of the following operations. A seed point may be put into a growing point sequence. A growing point may be removed from the growing point sequence and the growing point may be determined as a bone tissue. The pixels in the vicinity of the growing point may be identified based on, for example, the first threshold, the second threshold, and/or the boundary parameter. The pixels that satisfy all the three parameters may be put into the growing point sequence. The pixels that satisfy only the first threshold may be determined as a bone tissue. It may be determined whether the growing point sequence is exhausted. If the growing point sequence is not exhausted, the bone may continue to grow. If the growing point sequence is exhausted, the bone may stop growing and all the pixels belonging to the bone tissue may be acquired.
 In some embodiments of the image segmentation, the pixels that meet the first threshold may be determined as a bone tissue, and the pixels that only meet the first threshold may be determined as a suspicious bone tissue. The pixels that meet the first threshold and the second threshold may be determined as a decided bone tissue. The pixels that meet the first threshold and the boundary parameter, but not the second threshold, may be determined as a boundary of a bone tissue.
 In some embodiments, the second threshold may be greater than the first threshold, and the second threshold may decrease as the iteration proceeds.
 In some embodiments, the boundary parameter may decrease as the iteration proceeds.
 In some embodiments, the parameters used in the nth bone growing may be used to perform a new bone growing.
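For illustration only, the sketch below shows one way the bone-growing loop described above could be organized. It is not the exact method of the disclosure: the neighbourhood (6-connectivity) and the way the boundary parameter is tested are assumptions, and the thresholds are placeholders supplied by the caller.

import numpy as np
from collections import deque

def grow_bone(volume, seeds, t1, t2, boundary):
    """Queue-based bone growing. A neighbour that satisfies the first and
    second thresholds and the boundary parameter is queued for further
    growing; one that satisfies only the first threshold is kept as
    (suspicious) bone but not grown from."""
    vol = np.asarray(volume)
    bone = np.zeros(vol.shape, dtype=bool)
    visited = np.zeros(vol.shape, dtype=bool)
    seeds = [tuple(s) for s in seeds]
    queue = deque(seeds)
    for s in seeds:
        visited[s] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:                                  # stop when the sequence is exhausted
        z, y, x = queue.popleft()
        bone[z, y, x] = True                      # growing point is determined as bone
        for dz, dy, dx in offsets:
            p = (z + dz, y + dy, x + dx)
            if not all(0 <= p[i] < vol.shape[i] for i in range(3)) or visited[p]:
                continue
            v = vol[p]
            if v >= t1 and v >= t2 and v >= boundary:
                visited[p] = True
                queue.append(p)                   # satisfies all three parameters
            elif v >= t1:
                visited[p] = True
                bone[p] = True                    # satisfies only the first threshold
    return bone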
 In another aspect of the present disclosure, an image segmentation apparatus is provided. In some embodiments, the image segmentation apparatus may include a processing unit, a storage unit and a display unit. The storage unit may be configured to store computer executable commands. The processing unit may be configured to perform one or more of the following operations based on the commands stored in the storage unit. Volume data of a medical image may be acquired. Seed points for bone growing may be acquired. The parameters for bone growing may be initialized or updated. The parameters may include a first threshold, a second threshold, and a boundary parameter. The first threshold may be fixed. The second threshold and the boundary parameter may be variable. A bone growing may be performed based on the first threshold, the second threshold and the boundary parameter to acquire all pixels that belong to a bone tissue. The first threshold may be used to perform a preliminary determination of a bone tissue. The second threshold may be used to determine a suspicious bone tissue. The boundary parameter may be used to determine the boundary of a bone tissue. The increment of pixels between the nth bone growing and the (n-1) th bone growing in a bone tissue may be assessed. If the increment exceeds a predefined threshold, the parameters used in the (n-1) th bone growing may be used to perform a new bone growing. Otherwise, the parameters may be updated. n may be an integer greater than or equal to 2.
 In still another aspect of the present disclosure, a bronchus extracting method is provided. The bronchial tree may be labeled as or divided into different levels from the trachea and primary bronchi to lobar bronchi and segmental bronchi, etc. For example, the trachea may be 0-level, the primary bronchi may be 1-level, and so on. Firstly, a low-level bronchus may be extracted from a chest CT image. Then a terminal bronchiole may be extracted based on the low-level bronchus by an energy-based 3D reconstruction method. The low-level bronchus and the terminal bronchiole may be fused together to generate a bronchus.
 In still another aspect of the present disclosure, a bronchus extracting system is provided. The bronchus extracting system may include a first module, a second module, and a third module. The first module may be configured to extract a low-level bronchus. The second module may be configured to extract a terminal bronchiole based on the extracted low-level bronchus by an energy-based 3D reconstruction method. The third module may be configured to fuse the low-level bronchus and the terminal bronchiole together to generate a bronchus.
 In some embodiments, the method of extracting a low-level bronchus may include a region growing based method and a morphology based method.
 In some embodiments, the energy-based 3D reconstruction method may include assessing whether a voxel belongs to a terminal bronchiole according to its growing potential energy in different states.
 In some embodiments, the energy-based 3D reconstruction method may include adjusting a weighted value controlling the growing within a bronchus to guarantee a convergent iteration in each growing potential energy computing process.
 In some embodiments, the energy-based 3D reconstruction method may include comparing the volume or radius of a connected region in the terminal bronchiole with a threshold. In some embodiments, if the volume or radius exceeds the threshold, the connected region may be removed from the result of the present iterative process.
 In still another aspect of the present disclosure, a method for image segmentation of tumors is provided. The method for image segmentation of tumors may include one or more of the following operations. A region of interest (ROI) where the tumor may be located may be determined. A coarse segmentation of a tumor based on a feature classification method may be performed. A fine segmentation of the tumor may be performed on the result of the coarse segmentation using a level set method (LSM) .
 In some embodiments of the present disclosure, a positive seed point may be sampled randomly among the voxels in the ROI, where a tumor may be located. A negative seed point may also be sampled randomly among the voxels in the ROI, where the tumor may be absent. Training may be performed using the positive seed point and/or the negative seed point, and a linear or nonlinear classifier may be acquired. The classification of each voxel may be performed to acquire the result of the coarse segmentation.
 In some embodiments of the present disclosure, the method of determining where a tumor may be located may include the following operations: a line segment of length d1 on a slice of tumor may be drawn by the user, which may traverse the tumor. In some embodiments, the line segment may be drawn on the slice which has a relatively larger cross-section compared with other slices. A line segment may be a part of a line that is bounded by two distinct end points. A cube of edge length d2 (d2=d1·r, 1<r<2) , centered at the midpoint of the line segment may be formed. The region within the cube may be deemed as the ROI, where the tumor may be located. Merely by way of example, the region where the tumor may be located may be completely within the ROI defined by the cube.
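A small numeric sketch of the cube construction described above (r = 1.5 is just one value in the stated range 1 < r < 2):

import numpy as np

def roi_cube(p0, p1, r=1.5):
    """Given the two end points of the user-drawn line segment (length d1),
    return the cube of edge length d2 = d1 * r centered at the midpoint,
    as (center, lower corner, upper corner)."""
    p0, p1 = np.asarray(p0, dtype=float), np.asarray(p1, dtype=float)
    d1 = np.linalg.norm(p1 - p0)        # length of the drawn segment
    d2 = d1 * r                         # edge length of the cube
    center = (p0 + p1) / 2.0            # midpoint of the segment
    half = d2 / 2.0
    return center, center - half, center + half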
 In some embodiments of the present disclosure, the result of the coarse segmentation may be used to initialize a distance function. Multiple iterations of the distance function may be performed to acquire the fine segmentation of the tumor.
 In still another aspect of the present disclosure, a method for image segmentation of lung is provided. The method for image segmentation of lung may include one or more of the following operations. A coarse segmentation of the lung may be performed, and a target region may be acquired. The target region may be divided into a central region, and a boundary region surrounding the central region. The voxels located in the central region may be identified as part of one or more nodules; the voxels located in the boundary region may need to be further examined to determine whether they are part of one or more nodules. The voxels located outside of the target region may not be identified as part of a nodule. A fine segmentation of the lung may be performed using a classifier to classify and label each voxel, acquiring a parting segmentation region, which may be separated from the boundary region. The target region and the parting segmentation region may be fused together.
 In some embodiments of the present disclosure, the shape of the boundary region may be an annulus.
 In some embodiments of the present disclosure, the coarse segmentation of lung may be based on a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network  segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof. Merely by way of example, the coarse segmentation of the lung may be performed based on region growing segmentation, which may include the following operations. A long axis may be determined by a user, which may be on the slice with maximum cross-section of ground-glass nodules (GGN) of the lung image. An ROI may be determined based on the long axis. Filtering may be performed in an ROI to acquire a filtered image. A first region growing on the filtered image may be performed. A dynamic segmentation region may be acquired. Gray-scale histogram in the ROI may be calculated to acquire a histogram vector image. A second region growing on the histogram vector image may be performed to acquire a static segmentation region. The dynamic segmentation region and the static segmentation region may be fused together to acquire the target region.
 In some embodiments of the present disclosure, the fine segmentation of the lung may be based on the result of the coarse segmentation of lung. Merely by way of example, the fine segmentation of the lung may be performed using a classifier to classify and label the voxels in the boundary region, which may include the following operations. An ROI may be determined. Filtering may be performed to the images within the ROI to acquire a feature vector image. The feature vector image may be combined with the weights of a feature vector, which may have been trained offline. A linear discriminant analysis (LDA) probability field image may be acquired. Region growing on the LDA probability field image may be performed. Parting segmentation regions with labels of the classifier may be acquired. In addition, the parting segmentation region may be fused together with the target region.
 In still another aspect of the present disclosure, an image data processing method is provided. In some embodiments, the image data processing method may include one or more of the following operations. A data set of a three-dimensional image may be acquired. Different components in the data set may be identified. The different components may be stored in different storages. A rendering instruction may be provided by a user. The data in different storages may be rendered by corresponding image processors. The rendered data may be displayed. The data set may include a plurality of voxels.
 In some embodiments, the identification of the different components in the data set may include at least one of the one dimensional feature information or multi-dimensional feature information of the voxels.
 In some embodiments, the data set of the three-dimensional image may include medical scanning data of an object. The different components may include a removable region and a reserved region based on an instruction, for example, a rendering instruction, provided by a user.
 In some embodiments, after the rendering instruction is received, the data may be rendered accordingly. The rendering may include establishing thread controls corresponding to the CPU task queue and the GPU task queue respectively, parsing the rendering instruction of the user, and rendering after distributing the rendering task to the task queue of the CPU or GPU.
 In another aspect of the present disclosure, an image data processing system is provided. The image data processing system may include an acquisition module, a segmentation module, a storage control module, a rendering control module, and a display module. The acquisition module may be configured to acquire the data set of the three-dimensional image. The data set may be comprised of voxels. The segmentation module may be configured to identify different components in the data set. The storage control module may be configured to control the storing of the data of different components into different storages. The rendering control module may be configured to receive the rendering instruction input by the user, and to control the rendering of the data in the corresponding storages by the corresponding image processors. The display module may be configured to display the data after rendering.
 In some embodiments, the segmentation module may include an identification block. The identification block may be configured to identify at least one of the one dimensional feature information and multi-dimensional feature information of the voxels.
 In some embodiments, the data set of the three-dimensional image may include medical scanning data of an object; the different components may include a removable region and a reserved region based on requirement of the user.
 In some embodiments, the rendering control module may include a thread control block and a distribution block. The thread control block may be configured to establish control threads corresponding to the task queues of the CPU and the GPU, respectively. The distribution block may be configured to parse rendering instructions of the user, distribute the rendering task according to the instructions of the user to the task queue of the CPU or GPU, and execute image rendering.
 In still another aspect of the present disclosure, an image data processing method is provided. The image data processing method may include providing volume data and segmentation information of the volume data, rendering the volume data. In some embodiments, the segmentation information of the volume data may be pre-processed before the rendering. The pre-process of the volume data may include generating boundary data corresponding to the spatial boundary structure of each tissue, loading the boundary data and the volume data into an image processing unit, and rendering iteratively.
 In some embodiments, the boundary structure may be a two-dimensional boundary line, or a three-dimensional boundary surface.
 In some embodiments, the boundary structure may be mesh grid structure.
 In some embodiments, the iterative rendering may be based on depth information. Each iterative rendering may be based on the depth information of last iterative rendering.
 In some embodiments, the image data processing method may further include providing an observation point, and observation light of different directions from the observation point. The observation light may intersect the boundary structure at several intersection points. The intersection points in a same depth may be located in a same depth surface. The depth may be the distance between the intersection point and the observation point.
 In still another aspect of the present disclosure, an image processing method is provided. The image processing method may include one or more of the following operations. Partitioning a region of interest into three regions: a first region, a second region, and a third region. The characteristic value Ti of the pixels of the third region may be calculated based on a first method. Assessing the characteristic value Ti of the pixels of the third region based on a second method to generate a filled image.
 In some embodiments, the first region may be the region whose pixel values are 1. The characteristic value of the first region may be set as Thigh. The second region may be the boundary region of the ROI, the characteristic value of the second region may be set as Tlow. Thigh-Tlow>100.
 In some embodiments, the second method may include one or more of the following operations. Acquiring the characteristic value Ti of a pixel i in the third region. If B < Ti < Thigh, the pixel may be extracted. Otherwise, the value of the pixel may be set as the background color.
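A minimal sketch of the second method, assuming the characteristic values Ti are available as an image and treating the lower bound B as a parameter (this excerpt does not define B):

import numpy as np

def assess_third_region(characteristic, third_mask, b, t_high, background=0):
    """Keep the pixels of the third region whose characteristic value Ti
    satisfies B < Ti < Thigh; set the other pixels to the background color."""
    char = np.asarray(characteristic, dtype=float)
    keep = np.asarray(third_mask, dtype=bool) & (char > b) & (char < t_high)
    return np.where(keep, char, background), keep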
 In a second aspect of the present disclosure, an image processing apparatus is provided. The image processing apparatus may include a distribution unit, a first determination unit, and a second determination unit. The distribution unit may be configured to partition a region of interest into three regions: a first region, a second region, and a third region. The first determination unit may be configured to calculate the characteristic value Ti of a pixel i in the third region. The second determination unit may be configured to assess the characteristic value Ti of a pixel i in the third region to generate a filled image.
 In still another aspect of the present disclosure, a spine positioning method is provided. The spine positioning method may include one or more of the following operations. A CT image sequence may be loaded, and the CT image sequence may include a plurality of CT images. The bone regions of every CT image in the CT image sequence may be segmented. In every CT image, the bone region with the maximum area may be determined as a spine region. An anchor point of the spine region in every CT image may be determined, and the anchor point may be validated based on some known information. The anchor point may be calibrated.
 In some embodiments, the loading of the CT image sequence may include one or more of the following operations. A preliminary spine region may be selected after the CT image sequence is loaded. The preliminary region may be preprocessed to segment one or more bone regions.
 In some embodiments, the validation of the anchor point may include one or more of the following operations. If the y coordinate of the anchor point is greater than H/4, and the shift of the x coordinate from the central position is less than H/4, the anchor point may be valid. Otherwise, the anchor point may be invalid.
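A one-line sketch of this validation rule, taking H as the image height and assuming the central position refers to half of the image width:

def anchor_is_valid(x, y, height, width):
    """Valid if the y coordinate exceeds H/4 and the x coordinate lies within
    H/4 of the image centre (width / 2)."""
    return y > height / 4.0 and abs(x - width / 2.0) < height / 4.0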
 In some embodiments, the calibration of the anchor point may include one or more of the following operations. Calibrating the anchor points whose coordinates are abnormal. Calibrating the anchor points that lack a coordinate.
 In still another aspect of the present disclosure, an image segmentation method is provided. The image segmentation method may include one or more of the following operations. A first image may be transformed into a first 2D image. The pixel values of the 2D image may correspond to those of the first image. The first image may be a binary image that is generated based on an ROI of a first CT image. A first region may be acquired based on the 2D image. The first region may be the highlighted part of the first 2D image. A second region may be acquired based on the first CT image. The second region may be the highlighted part of each 2D slice of the first CT image corresponding to the first region spatially. A second CT image may be calibrated based on the second region to generate a calibrated second CT image. The first CT image and the second CT image are both 3D CT images, and have the same size and number of 2D slices.
 In some embodiments, the first CT image may be an original CT image of a lung, and the second CT image may be a segmented CT image based on the original CT image of a lung.
 In some embodiments, the transformation of the first image into the first 2D image may include one or more of the following operations. In some embodiments, the first image may be projected along the human body to generate the 2D image. The pixel values of the 2D image may correspond to those of the first image. In some embodiments, the highlighted part of the 2D image may be acquired based on clustering, or threshold segmentation.
 In some embodiments, the first region may be the highlighted part that has the most pixels among multiple highlighted parts.
 In some embodiments, the regions corresponding to the first region in the slices of the first image may have the same spatial position with the first region.
 In some embodiments, the second region may be acquired by performing threshold segmentation on the region of every 2D slice corresponding to the first region of the first CT image.
 In some embodiments, the calibration of the second CT image may include setting the pixels of the regions corresponding to the second region in every 2D slice of the second CT image as the background color of the second CT image.
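 A minimal sketch of the projection-based calibration described above is given below, assuming NumPy/SciPy arrays, a projection along the body (z) axis, and simple threshold values; the function name and the thresholds are illustrative assumptions only.

    import numpy as np
    from scipy import ndimage

    def calibrate_second_ct(first_image, first_ct, second_ct,
                            proj_threshold=1, hu_threshold=200, background=0):
        # Transform the binary first image into a 2D image by projecting along
        # the body (z) axis; pixel values of the 2D image reflect the first image.
        projection = first_image.sum(axis=0)
        # Highlighted parts of the 2D image obtained by threshold segmentation.
        highlighted = projection > proj_threshold
        labels, n = ndimage.label(highlighted)
        if n == 0:
            return second_ct.copy()
        sizes = ndimage.sum(highlighted, labels, index=range(1, n + 1))
        # First region: the highlighted part with the most pixels.
        first_region = labels == (int(np.argmax(sizes)) + 1)
        calibrated = second_ct.copy()
        for z in range(first_ct.shape[0]):
            # Second region: highlighted pixels of this 2D slice of the first CT
            # image that spatially correspond to the first region.
            second_region = (first_ct[z] > hu_threshold) & first_region
            # Calibration: reset these pixels to the background color.
            calibrated[z][second_region] = background
        return calibrated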
 In still another aspect of the present disclosure, an image segmentation apparatus is provided. The image segmentation apparatus may include a transformation unit, a first acquisition unit, a second acquisition unit, and a calibration unit. The transformation unit may be configured to transform the first image into a first 2D image. The pixel values of the 2D image may correspond to those of the first image. The first acquisition unit may be configured to acquire a first region; the first region may be the highlighted part of the 2D image. The second acquisition unit may be configured to acquire a second region; the second region may be the highlighted part of each 2D slice of the first CT image corresponding to the first region spatially. The calibration unit may be configured to calibrate the second CT image based on the second region to generate a calibrated second CT image. The first CT image and the second CT image are both 3D CT images, and have the same size and number of 2D slices.
 In some embodiments, the image segmentation apparatus may further include a determination unit. The determination unit may be configured to determine the highlighted part that has the most pixels among multiple highlighted parts as the first region.
 In some embodiments, the calibration unit may further include a pixel value reset unit. The pixel value reset unit may be configured to set the pixels of the regions corresponding to the second region in every 2D slice of the second CT image to the background color of the second CT image.
 In still another aspect of the present disclosure, a hepatic artery segmentation method is provided. In some embodiments, CT data including a CT image and a corresponding liver mask may be acquired. Then a seed point of the hepatic artery may be selected. A first binary volume data and a second binary volume data may be generated by a segmentation method. The segmentation may be performed by segmenting the CT image data based on the CT value of the seed point. The first binary volume data may include a backbone and a rib connected therewith. The second binary volume data may include a backbone, a rib connected therewith, and a hepatic artery. A third binary volume data may be generated by removing, from the second binary volume data, the part that is the same as the first binary volume data. The hepatic artery may be generated through a 3D region growing in the third binary volume data based on the previously selected seed point.
 In some embodiments, the first binary volume data may be generated by performing a threshold segmentation for backbone on the CT image data. The second binary volume data may be generated by performing a threshold segmentation for hepatic artery on the CT image data.
 In some embodiments, the seed point selecting method may further include: S11. Determining the slices to select the seed point of the hepatic artery; S12. Selecting the seed point and computing its CT value.
 In some embodiments, the generating method of the first binary volume data may further include: performing a coarse segmentation on the original CT image data and acquiring a bone binary volume data including a bone and other tissue; subdividing the bone binary volume data into region slice and re-segmenting the region slice by using a superposition strategy and acquiring the first binary volume data including a backbone and a rib connected therewith.
 In some embodiments, the hepatic artery segmentation method may further include post-processing the hepatic artery. In some embodiments, the post-processing method may include an artery repair.
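 The subtraction and 3D region growing steps described above may be sketched as follows; the two threshold values and the reduction of the region growing to a connected-component lookup around the seed point are illustrative assumptions, not the exact segmentation of the present disclosure.

    import numpy as np
    from scipy import ndimage

    def segment_hepatic_artery(ct_volume, seed, bone_threshold=250, artery_threshold=150):
        # First binary volume data: backbone and the ribs connected therewith.
        first = ct_volume > bone_threshold
        # Second binary volume data: backbone, connected ribs, and hepatic artery.
        second = ct_volume > artery_threshold
        # Third binary volume data: remove the part shared with the first volume.
        third = second & ~first
        # 3D region growing from the seed point, simplified here to keeping the
        # connected component of the third volume that contains the seed.
        labels, _ = ndimage.label(third)
        seed_label = labels[seed]
        if seed_label == 0:
            return np.zeros_like(third)
        return labels == seed_label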
BRIEF DESCRIPTION OF THE DRAWINGS
 The present disclosure is further described in terms of exemplary embodiments. These exemplary embodiments are described in detail with reference to the drawings. These embodiments are non-limiting exemplary embodiments, in which like reference numerals represent similar structures throughout the several views of the drawings, and wherein:
 FIG. 1 is a flowchart illustrating a method for image processing according to some embodiments of the present disclosure;
 FIG. 2 is a block diagram depicting an image processing system according to some embodiments of the present disclosure;
 FIG. 3 is a flowchart illustrating a process for image segmentation according to some embodiments of the present disclosure;
 FIG. 4 illustrates a process for region growing according to some embodiments of the present disclosure;
 FIG. 5 is a flowchart illustrating a bronchus extracting method according to some embodiments of the present disclosure;
 FIG. 6 is a flowchart illustrating a terminal bronchiole extracting method according to some embodiments of the present disclosure;
 FIG. 7 is a block diagram depicting an image processing system for extracting a bronchus according to some embodiments of the present disclosure;
 FIG. 8 is a block diagram illustrating a processing unit of image processing according to some embodiments of the present disclosure;
 FIG. 9 is a flowchart illustrating a process for image segmentation according to some embodiments of the present disclosure;
 FIG. 10 is a flowchart illustrating a process for coarse segmentation of tumors according to some embodiments of the present disclosure;
 FIG. 11 is a flowchart illustrating a process for lung segmentation according to some embodiments of the present disclosure;
 FIG. 12 is a flowchart illustrating a process for coarse segmentation of lung according to some embodiments of the present disclosure;
 FIG. 13 is a flowchart illustrating a process for fine segmentation of a lung image according to some embodiments of the present disclosure;
 FIG. 14 is a flowchart illustrating a method for image processing according to some embodiments of the present disclosure;
 FIG. 15 is a block diagram depicting an image processing system according to some embodiments of the present disclosure;
 FIG. 16 is a flowchart illustrating a method for image processing according to some embodiments of the present disclosure;
 FIG. 17 is a flowchart illustrating an image identification method according to some embodiments of the present disclosure;
 FIG. 18 is a block diagram depicting an image processing system according to some embodiments of the present disclosure;
 FIG. 19 is a block diagram depicting a segmentation module according to some embodiments of the present disclosure;
 FIG. 20 is a block diagram depicting an identification block according to some embodiments of the present disclosure;
 FIG. 21 is a block diagram depicting a rendering control module according to some embodiments of the present disclosure;
 FIG. 22 is a flowchart illustrating a method for image processing according to some embodiments of the present disclosure;
 FIG. 23 is an illustration of rendering tissues by way of an iterative rendering method according to some embodiments of the present disclosure;
 FIG. 24 is a flowchart illustrating a process for post segmentation processing according to some embodiments of the present disclosure;
 FIG. 25 is a flowchart illustrating a process for post segmentation processing according to some embodiments of the present disclosure;
 FIG. 26 is a block diagram of an image processing system according to some embodiments of the present disclosure;
 FIG. 27 is a flowchart illustrating a process for spine location according to some embodiments of the present disclosure;
 FIG. 28 is a flowchart illustrating a process of 3D image segmentation according to some embodiments of the present disclosure;
 FIG. 29 is a flowchart illustrating a process of 3D image segmentation according to some embodiments of the present disclosure;
 FIG. 30 illustrates a block diagram of an image processing system according to some embodiments of the present disclosure;
 FIG. 31 is a flowchart illustrating a method for hepatic artery segmentation according to some embodiments of the present disclosure; and
 FIG. 32 is a flowchart illustrating a method for hepatic artery segmentation according to some embodiments of the present disclosure.
DETAILED DESCRIPTION
 In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant disclosure. However, it should be apparent to those skilled in the art that the present disclosure may be practiced without such details. In other instances, well known methods, procedures, systems, components, and/or circuitry have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present disclosure. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirits and scope of the present disclosure. Thus, the present disclosure is not limited to the embodiments shown, but to be accorded the widest scope consistent with the claims.
 It will be understood that the terms “system,” “unit,” “module,” and/or “block” used herein are one method to distinguish different components, elements, parts, sections, or assemblies of different levels in ascending order. However, the terms may be replaced by other expressions if they achieve the same purpose.
 It will be understood that when a unit, module or block is referred to as being “on, ” “connected to” or “coupled to” another unit, module, or block, it may be directly on, connected or coupled to the other unit, module, or block, or intervening unit, module, or block may be present, unless the context clearly indicates otherwise. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
 The terminology used herein is for the purposes of describing particular examples and embodiments only, and is not intended to be limiting. As used herein, the singular forms "a, " "an" and "the" may be intended to include the plural forms as well, unless the context clearly indicates  otherwise. It will be further understood that the terms "include, " and/or "comprising, " when used in this disclosure, specify the presence of integers, devices, behaviors, stated features, steps, elements, operations, and/or components, but do not exclude the presence or addition of one or more other integers, devices, behaviors, features, steps, elements, operations, components, and/or groups thereof.
 In a medical imaging process, an image segmentation (or “recognition,” “classification,” “extraction,” etc.) may be performed to establish realistic subject models by dividing or partitioning a medical image into one or more constituent sub-regions. In some embodiments, the medical imaging system may be of various modalities including Digital Subtraction Angiography (DSA), Magnetic Resonance Imaging (MRI), Magnetic Resonance Angiography (MRA), Computed Tomography (CT), Computed Tomography Angiography (CTA), Ultrasound Scanning (US), Positron Emission Tomography (PET), Single-Photon Emission Computerized Tomography (SPECT), CT-MR, CT-PET, CE-SPECT, DSA-MR, PET-MR, PET-US, SPECT-US, TMS (transcranial magnetic stimulation)-MR, US-CT, US-MR, X-ray-CT, X-ray-MR, X-ray-portal, X-ray-US, Video-CT, Video-US, or the like, or any combination thereof. In some embodiments, the subject may be an organ, a texture, a region, an object, a lesion, a tumor, or the like, or any combination thereof. Merely by way of example, the subject may include a head, a breast, a lung, a trachea, a pleura, a mediastinum, an abdomen, a large intestine, a small intestine, a bladder, a gallbladder, a triple warmer, a pelvic cavity, a backbone, extremities, a skeleton, a blood vessel, or the like, or any combination thereof. In some embodiments, the medical image may include a 2D image and/or a 3D image. In the 2D image, the smallest distinguishable element may be termed a pixel. In the 3D image, the smallest distinguishable element may be termed a voxel (“a volumetric pixel” or “a volume pixel”). In some embodiments, the 3D image may also be seen as a series of 2D slices or 2D layers.
 The segmentation process is usually performed by recognizing one or more kinds of similar information of the pixels and/or voxels in the image. In some embodiments, the similar information may be characteristics or features including gray level, mean gray level, intensity, texture, color, contrast, brightness, or the like, or any combination thereof. In some embodiments, some spatial properties of the pixels and/or voxels may also be considered in a segmentation process.
 For illustration purposes, the following description is provided to help better understand a segmentation process. It is understood that this is not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, a certain amount of variations, changes, and/or modifications may be deduced under the guidance of the present disclosure. Those variations, changes, and/or modifications do not depart from the scope of the present disclosure.
 FIG. 1 is a flowchart illustrating a method for image processing according to some embodiments of the present disclosure. As described in the figure, the image processing method may include steps such as acquiring data, pre-processing the data, segmenting the data, and/or post-processing the data.
 In step 101, data may be acquired. In some embodiments, the data may be information including data, a number, a text, an image, a voice, a force, a model, an algorithm, a software, a program, or the like, or any combination thereof. Merely by way of example, the data acquired in step 101 may include image data, an image mask, and a parameter. In some embodiments, the image data may be initial image data or image data after treatment. Merely by way of example, the treatment may be a pre-processing, an image segmentation, or a post-processing. The image may be a two dimensional (2D) image or a three dimensional (3D) image. The image may be acquired from any medical imaging system as described elsewhere in the present disclosure. In some embodiments, the image mask may be a binary mask to shade a specific area of an image. The image mask may be used to supply a screen, extract an area of interest, and/or make a specific image. In some embodiments, the parameter may be some information input by a user or an external source. For example, the parameter may include a threshold, a characteristic, a CT value, a region of interest (ROI), a number of iteration times, a seed point, an equation, an algorithm, or the like, or any combination thereof.
 It should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. The data acquired in step 101 may be variable, changeable, or adjustable based on the spirits of the present disclosure. For example, the data may also be a mode, a priori information, an expert-defined rule, or the like, or any combination thereof for a specific subject. This information may be trained or self-studied before or during an image segmentation. However, those variations and modifications do not depart from the scope of the present disclosure.
 In step 102, the data acquired may be pre-processed according to some embodiments of the present disclosure. The pre-processing step 102 may be used to make the data suitable for segmentation. For example, the pre-processing of data may include suppressing, weakening, and/or removing a detail, a mutation, a noise, or the like, or any combination thereof. In some embodiments, the pre-processing of data may include performing an image smoothing and an image enhancement. The smoothing process may be in a spatial domain and/or a frequency domain. In some embodiments, the spatial domain smoothing method may process the image pixels and/or voxels directly. The frequency domain smoothing method may first process a transformation value acquired from the image and then inverse transform the result into the spatial domain. The image smoothing method may include a median smoothing, a Gaussian smoothing, a mean smoothing, a normalized smoothing, a bilateral smoothing, or the like, or any combination thereof.
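 As a non-limiting example of the spatial domain smoothing mentioned above, the following sketch applies a Gaussian smoothing followed by a median smoothing; the filter parameters are illustrative assumptions only.

    import numpy as np
    from scipy import ndimage

    def smooth_image(image, sigma=1.0, median_size=3):
        # Gaussian smoothing to suppress noise while keeping large structures.
        smoothed = ndimage.gaussian_filter(image.astype(np.float32), sigma=sigma)
        # Median smoothing to remove remaining impulse-like mutations.
        return ndimage.median_filter(smoothed, size=median_size)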
 It should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. The pre-processing step 102 may be variable, changeable, or adjustable based on the spirits of the present disclosure. For example, the pre-processing step 102 may be removed or omitted in some embodiments. For another example, the pre-processing method may also include some other image processing techniques including, e.g., an image sharpening process, an image segmentation, a rigid registration, a non-rigid registration, or the like, or any combination thereof. However, those variations and modifications do not depart from the scope of the present disclosure.
 In step 103, the pre-processed data may be segmented. It should be noted that the pre-processing step 102 may not be necessary, and the original data may be segmented directly without pre-processing. In some embodiments, the segmentation method may include a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof. In some embodiments, the segmentation method may be performed in a manual type, a semi-automatic type, or an automatic type. The three types may provide different choices to a user or an operator and allow the user or the operator to participate in the image processing in various degrees. In some embodiments of the manual type, a parameter of the segmentation may be determined by the user or the operator. The parameter may be a threshold level, a homogeneity criterion, a function, an equation, an algorithm, a model, or the like, or any combination thereof. In some embodiments of the automatic type, the segmentation may be incorporated with some information about a desired subject including, e.g., a priori information, an optimized method, an expert-defined rule, a model, or the like, or any combination thereof. The information may also be updated by training or self-learning. In some embodiments of the semi-automatic type, the user or the operator may supervise the segmentation process to a certain level.
 For illustration purposes, an automatic segmentation will be described below, which is not intended to limit the scope of the present disclosure. In some embodiments, there may be some pre-existing models of a region of interest (ROI) to be selected before, during, and/or after the segmentation in an automatic segmentation process. Each model may include some characteristics that may be used to determine whether it is suited to a patient. The characteristics may include patient information, operator information, instrument information, or the like, or any combination thereof. In some embodiments, the patient information may include ethnicity, citizenship, religion, gender, age, matrimony, height, weight, medical history, job, personal habits, organ, tissue, or the like, or any combination thereof. The operator information may include a department, a title, an experience, an operating history, or the like, or any combination thereof. The instrument information may include the model, running status, etc. of the imaging system. When an image segmentation needs to be performed, one or more of the characteristics described above may be checked and fitted to a proper pre-existing model.
 It should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. The segmentation step 103 may be variable, changeable, or adjustable based on the spirits of the present disclosure. In some embodiments, the segmentation step may have more than one segmenting operation. For example, the segmenting operation may include a coarse segmentation and a fine segmentation in one segmentation process. For another example, two or more segmentation methods may be used in one segmentation process. However, those variations, changes and modifications do not depart from the scope of the present disclosure.
 In step 104, the segmented data may be post-processed in some embodiments. It should be noted that the post-processing step 104 may not be necessary, and the segmented data may be output directly without post-processing. The post-processing technique may include a 2D post-processing technique, a 3D post-processing technique, and other techniques. In some embodiments, the 2D post-processing technique may include a multi-planar reformation (MPR), a curved planar reformation (CPR), a computed volume reconstruction (CVR), a volume rendering (VR), or the like, or any combination thereof. In some embodiments, the 3D post-processing technique may include a 3D surface reconstruction, a 3D volume reconstruction, a volume intensity projection (VIP), a maximum intensity projection (MIP), a minimum intensity projection (Min-IP), an average intensity projection (AIP), an X-ray simulation projection, a volume rendering (VR), or the like, or any combination thereof. In some embodiments, there may be other post-processing techniques. The other techniques may include a repair process, a render process, a filling process, or the like, or any combination thereof.
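 Merely as an illustration of two of the 3D post-processing techniques named above, a maximum intensity projection (MIP) and an average intensity projection (AIP) may be sketched as follows; the choice of the projection axis is an assumption made for this sketch.

    import numpy as np

    def maximum_intensity_projection(volume, axis=0):
        # Keep the brightest voxel along the chosen axis for every projected ray.
        return np.max(volume, axis=axis)

    def average_intensity_projection(volume, axis=0):
        # Average the voxels along the chosen axis for every projected ray.
        return np.mean(volume, axis=axis)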
 It should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. The post-processing step 104 may be variable, changeable, or adjustable based on the spirits of the present disclosure. Merely by way of example, the post-processing step may be omitted. For another example, the post-processed data may be returned to step 103 for a further segmentation. For still another example, the post-processing step may be divided into more than one step. However, those variations, changes, and modifications do not depart from the scope of the present disclosure.
 It also should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. For persons having ordinary skill in the art, the image processing method may be variable, changeable, or adjustable based on the spirits of the present disclosure. In some embodiments, the steps may be added, deleted, exchanged, replaced, modified, etc. For example, the pre-processing and/or post-processing may be deleted. For another example, the order of the steps in the image processing method may be changed. For still another example, some steps may be executed repeatedly. However, those variations, changes and modifications do not depart from the scope of the present disclosure.
 FIG. 2 is a block diagram illustrating an image processing system according to some embodiments of the present disclosure. As described in the figure, the image processing system 202 may include a processing unit 202-1 and a storage unit 202-2. In some embodiments, the image processing system 202 may also be integrated, connected, or coupled to an input/output unit 201. The image processing system 202 may be a single component or integrated with the imaging system as described elsewhere in the present disclosure. The image processing system 202 may also be configured to process different kinds of data acquired from the imaging system as described elsewhere in the present disclosure. It should be noted that the assembly or the components of the image processing system 202 may be variable in other embodiments.
 The input/output unit 201 may be configured to perform an input and/or an output function in the image processing system 202. Some information may be input into or output from the processing unit 202-1 and/or the storage unit 202-2. In some embodiments, the information may include data, a number, a text, an image, a voice, a force, a model, an algorithm, a software, a program, or the like, or any combination thereof. Merely by way of example, the model may be a pre-existing model of a region of interest (ROI) as described elsewhere in the present disclosure. In some embodiments, the input/output unit 201 may include a display, a keyboard, a mouse, a touch screen, an interface, a microphone, a sensor, a wireless communication, a cloud, or the like, or any combination thereof. Merely by way of example, the wireless communication may be a Bluetooth, a Near Field Communication (NFC), a wireless local area network (WLAN), a WiFi, a Wireless Wide Area Network (WWAN), or the like, or any combination thereof. In some embodiments, the information may be input from or output to an external subject. The external subject may be a user, an operator, a patient, a robot, a floppy disk, a hard disk, a wireless terminal, a processing unit, a storage unit, or the like, or any combination thereof.
 It also should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. For persons having ordinary skill in the art, the input/output unit may be variable, changeable, or adjustable based on the spirits of the present disclosure. In some embodiments, the input/output unit 201 may be subdivided into an input module and an output module. In some embodiments, the input/output unit 201 may be integrated in one module. In some embodiments, there may be more than one input/output unit 201. In some embodiments, the input/output unit 201 may be omitted. However, those variations, changes, and modifications do not depart from the scope of the present disclosure.
 The image processing system may include a processing unit 202-1 and a storage unit 202-2. In some embodiments, the processing unit 202-1 may be configured or used to process the information as described elsewhere in the present disclosure from the input/output unit 201 and/or the storage unit 202-2. In some embodiments, the processing unit 202-1 may be a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), an Application-Specific Instruction-Set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a Controller, a Microcontroller unit, a Processor, a Microprocessor, an ARM, or the like, or any combination thereof.
 The processing unit 202-1 may perform a pre-processing operation, a co-processing operation, and/or a post-processing operation. In some embodiments, the pre-processing operation may include suppressing, weakening, and/or removing a detail, a mutation, a noise, or the like, or any combination thereof. In some embodiments, the co-processing operation may include an image reconstruction, an image segmentation, etc. In some embodiments, the post-processing operation may include a 2D post-processing technique, a 3D post-processing technique, and other techniques. The apparatus used in different processes may be different or the same. Merely by way of example, the image smoothing in the pre-processing step may be executed by an apparatus including, e.g., a median filter, a Gaussian filter, a mean-value filter, a normalized filter, a bilateral filter, or the like, or any combination thereof.
 For illustration purposes, an exemplary processing unit performing an image segmentation will be described in detail below. The processing unit 202-1 may include a parameter, an algorithm, a software, a program, or the like, or any combination thereof. In some embodiments, the parameters may include a threshold, a CT value, a region of interest, a number of iteration times, a seed point, or the like, or any combination thereof. In some embodiments, the algorithm may include some algorithms used to perform a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof. In some embodiments, the software for image segmentation may include Brain Imaging Center (BIC) software toolbox, BrainSuite, Statistical Parametric Mapping (SPM) , FMRIB Software Library (FSL) , Eikona3D, FreeSurfer, Insight Segmentation and Registration Toolkit (ITK) , Analyze, 3D Slicer, Cavass, Medical Image Processing, Analysis and Visualization (MIPAV) , or the like, or any combination thereof.
 The storage unit 202-2 may be connected or coupled with the input/output unit 201 and/or the processing unit 202-1. In some embodiments, the storage unit 202-2 may acquire information from or output information to the input/output unit 201 and/or the processing unit 202-1. The information may include data, a number, a text, an image, a voice, a force, a model, an algorithm, a software, a program, or the like, or any combination thereof. Merely by way of example, the number acquired from or output to the processing unit 202-1 may include a threshold, a CT value, a number of iteration times, or the like, or any combination thereof. The algorithm acquired from or output to the processing unit 202-1 may include a series of image processing methods. Merely by way of example, the image processing method for segmentation may include a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof. The image acquired from or output to the processing unit 202-1 may include a pre-processing image, a co-processing image, a post-processing image, or the like, or any combination thereof. The model acquired from or output to the processing unit 202-1 may include pre-existing models corresponding to different ethnicity, citizenship, religion, gender, age, matrimony, height, weight, medical history, job, personal habits, organs, tissues, or the like, or any combination thereof.
 The storage unit 202-2 may also be an external storage or an internal storage. In some embodiments, the external storage may include a magnetic storage, an optical storage, a solid state storage, a smart cloud storage, a hard disk, a floppy disk, or the like, or any combination thereof. The magnetic storage may be a cassette tape, a floppy disk, etc. The optical storage may be a CD, a DVD, etc. The solid state storage may be a flash memory including a memory card, a memory stick, etc. In some embodiments, the internal storage may be a core memory, a Random Access Memory (RAM), a Read Only Memory (ROM), a Complementary Metal Oxide Semiconductor Memory (CMOS), or the like, or any combination thereof.
 It also should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. For persons having ordinary skill in the art, the image processing system may be variable, changeable, or adjustable based on the spirits of the present disclosure. In some embodiments, the processing unit 202-1 and/or the storage unit 202-2 may be a single unit or a combination with two or more sub-units. In some embodiments, one or more components of the image processing system may be simplified or integrated. For  example, the storage unit 202-2 may be omitted or integrated with the processing unit 202-1, i.e., the storage unit and the processing unit may be a same device. However, those variations, changes and modifications do not depart from the scope of the present disclosure.
 FIG. 3 is a flowchart illustrating a process of image segmentation according to some embodiments of the present disclosure.
 In step 301, volume data may be acquired. The volume data may correspond to medical images, and may comprise a group of 2D slices that may be acquired by techniques including DSA (digital subtraction angiography), CT (computed tomography), CTA (computed tomography angiography), PET (positron emission tomography), X-ray, MRI (magnetic resonance imaging), MRA (magnetic resonance angiography), SPECT (single-photon emission computerized tomography), US (ultrasound scanning), or the like, or a combination thereof. In some embodiments of the present disclosure, multimodal techniques may be used to acquire the volume data. The multimodal techniques may include CT-MR, CT-PET, CE-SPECT, DSA-MR, PET-MR, PET-US, SPECT-US, TMS (transcranial magnetic stimulation)-MR, US-CT, US-MR, X-ray-CT, X-ray-MR, X-ray-portal, X-ray-US, Video-CT, Video-US, or the like, or any combination thereof.
 In step 302, at least a seed point may be acquired, and at least a parameter may be initialized or updated. The seed point may be acquired based on criteria including pixels in a certain grayscale range, pixels evenly spaced on a grid, etc.
 In some embodiments of the present disclosure, the parameters may include a first threshold, a second threshold, and a boundary threshold. The parameters may be iteratively updated until a certain criterion is met, for example, a threshold. The first threshold may be configured to determine a bone tissue in a CT image, the second threshold may be configured to determine a suspicious bone tissue in the CT image, and the boundary threshold may be configured to determine a boundary tissue of a bone in the CT image. Merely by way of example, the first threshold may be fixed during an iteration, while the second threshold and the boundary threshold may be variable during the iteration. The second threshold and the boundary threshold may be initialized so that a relatively stable restriction may be achieved. They may be optimized in the iteration.
 In some embodiments of the present disclosure, the first threshold and the second threshold may both refer to CT values. The first threshold may be set as 80 HU and the second threshold may be set as 305 HU. The boundary threshold may be calculated based on the threshold of a mathematical operator, for example, the Laplace operator. The Laplace threshold may be set as 120.
 In step 303, a region growing may be performed on the seed point iteratively based on the parameters described in step 302. In some embodiments of the present disclosure, a region that meets the first threshold may be determined as a bone tissue. The region that meets the first threshold may be further assessed based on the second threshold. A region that meets both the first threshold and the second threshold may be determined as a definite bone tissue. A region that meets the first threshold but not the second threshold may be determined as a suspicious bone tissue, and may be further assessed based on the boundary threshold. If the boundary threshold is met, the region may be determined as a boundary tissue. The parameters may be used to determine a growing point. Merely by way of example, a region that meets all three parameters may be determined as a growing point, and a region that meets the first threshold only may be determined as a bone tissue.
 In step 304, the increment of pixels between two adjacent iterations may be assessed based on a threshold. In some embodiments of the present disclosure, the number of pixels in the nth iteration may be compared with the number of pixels in the (n-1)th iteration. If the increment of pixels surpasses a threshold, the iteratively performed region growing may be terminated and step 305 may be performed. If the increment of pixels is within the threshold, step 302 may be performed and the parameters may be updated. Merely by way of example, n may be an integer that is greater than or equal to 2.
 In some embodiments of the present disclosure, the increment of the pixels may be used to eliminate the influence of tissues adjacent to the bone that is under region growing, and to determine the boundary of the bone. Taking the nth iteration as an example, the number of pixels in the nth iteration may be denoted as Sum(n), the number of pixels in the (n-1)th iteration may be denoted as Sum(n-1), and the increment of pixels may be denoted as [Sum(n) - Sum(n-1)]/Sum(n-1). When the increment of pixels surpasses a threshold, for example, 15%, step 305 may be performed. Otherwise, step 302 may be performed and the parameters may be updated.
 In step 305, a new region growing may be performed based on the parameters generated in step 304. Referring to the example described in step 304, when the increment of pixels surpasses the threshold, the parameters of the (n-1)th iteration (n may be greater than or equal to 2) may be used to perform the new bone growing in step 305. The parameters may include a first threshold, a second threshold, a boundary threshold, etc. In some embodiments of the present disclosure, the first threshold may be fixed. The second threshold and the boundary threshold may keep decreasing during the iteration.
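 A condensed sketch of the iteration of steps 302-305 is given below; the grow() helper, the decrement steps for the second and boundary thresholds, and the maximum number of iterations are illustrative assumptions, while the 15% increment limit and the initial threshold values follow the example described above.

    def iterative_bone_growing(ct_volume, seed, grow, increment_limit=0.15, max_iterations=20):
        # grow(ct_volume, seed, first, second, boundary) is assumed to return a
        # boolean mask of the region grown with the given multi-level parameters.
        first_threshold = 80        # HU, fixed during the iteration
        second_threshold = 305      # HU, decreased in each iteration
        boundary_threshold = 120    # Laplace threshold, decreased in each iteration
        previous_mask = grow(ct_volume, seed, first_threshold, second_threshold, boundary_threshold)
        for _ in range(max_iterations):
            second_threshold -= 10      # assumed decrement step
            boundary_threshold -= 5     # assumed decrement step
            mask = grow(ct_volume, seed, first_threshold, second_threshold, boundary_threshold)
            previous_count = max(int(previous_mask.sum()), 1)
            increment = (int(mask.sum()) - previous_count) / previous_count
            if increment > increment_limit:
                # Step 305: the increment surpasses the limit, so the result obtained
                # with the parameters of the (n-1)th iteration is kept.
                return previous_mask
            previous_mask = mask
        return previous_mask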
 It should be noted that the flowchart depicted above is provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications may not depart from the scope of the present disclosure. A multi-level parameter mechanism, which may include a number of parameters, may be iteratively optimized to perform the region growing.
 FIG. 4 illustrates a process of region growing according to some embodiments of the present disclosure. In step 401, a seed point may be put into a growing point sequence. Taking the bone growing as an example, the seed point may be put into a bone growing point sequence. The seed point may be acquired based on criteria including pixels in a certain grayscale range, pixels evenly spaced on a grid, etc.
 In step 402, a growing point may be removed from the growing point sequence, for example, a bone growing point sequence, and determined as a bone tissue.
 In step 403, the pixels in the neighborhoods of the growing point may be assessed based on the parameters mentioned in FIG. 3. The parameters may include a first threshold, a second threshold, a boundary threshold, etc. If the pixels in the neighborhoods of the growing point meet the parameters, step 404 may be performed and the neighborhoods that meet the parameters may be put into the growing point sequence. If a neighborhood meets the first threshold only, step 405 may be performed and the neighborhood may be determined as a bone tissue. Step 406 may be performed to determine whether the growing points have been exhausted. If the growing points are exhausted, step 407 may be performed and the region growing may be terminated. The number of pixels of the tissue may be acquired. The tissue may be a bone tissue. If the growing points are not exhausted, step 402 may be performed again.
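 The growing-point-sequence procedure of FIG. 4 may be sketched as follows; the 26-neighborhood, the meets_all_parameters() callback, and the first-threshold test are simplified assumptions made for illustration, not the exact implementation of the present disclosure.

    from collections import deque
    import numpy as np

    def grow_from_seed(ct_volume, seed, first_threshold, meets_all_parameters):
        bone = np.zeros(ct_volume.shape, dtype=bool)
        queue = deque([seed])                       # the growing point sequence (step 401)
        while queue:                                # until the growing points are exhausted (step 406)
            point = queue.popleft()                 # remove a growing point (step 402)
            if bone[point]:
                continue
            bone[point] = True                      # the growing point is determined as bone tissue
            z, y, x = point
            for dz in (-1, 0, 1):                   # assess the neighborhoods (step 403)
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        nb = (z + dz, y + dy, x + dx)
                        if nb == point or not all(0 <= c < s for c, s in zip(nb, ct_volume.shape)):
                            continue
                        if meets_all_parameters(ct_volume, nb):
                            queue.append(nb)        # step 404: becomes a new growing point
                        elif ct_volume[nb] > first_threshold:
                            bone[nb] = True         # step 405: first threshold only, bone tissue
        return bone                                 # step 407: bone mask; pixel count is bone.sum()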
 It should be noted that the flowchart depicted above is provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications may not depart from the scope of the present disclosure.
 For illustration purposes, some image processing embodiments about a bronchus extracting method will be described below. It should be noted that the bronchus extracting embodiments are merely for better understanding and not intended to limit the scope of the present disclosure.
 FIG. 5 is a flowchart illustrating a bronchus extracting method according to some embodiments of the present disclosure. As described in the figure, the bronchus extracting process may include acquiring a low-level bronchus, extracting a terminal bronchiole, and acquiring a trachea. In some embodiments, some steps may be removed, replaced, or added in the bronchus extracting process as described elsewhere in the present disclosure.
 In some embodiments, a bronchial tree may provide a system for a trachea (or windpipe) to serve a lung by transporting inhaled oxygen and exhaled carbon dioxide. The bronchial tree may include bronchi, bronchioles, and alveoli. A bronchus may be one of the two main bronchi of the trachea. The trachea may have a long tube structure and descend from the throat down to the thoracic cavity. Each main bronchus may subdivide into a secondary bronchus, and the secondary bronchus may divide further into a tertiary bronchus. The tertiary bronchus may divide into a terminal bronchiole. The alveoli, including ducts and air sacs, may allow exchange of the gas in the blood. In some embodiments, the bronchial tree may be divided into several levels according to its division structure. Merely by way of example, the number of levels may be any value chosen from 10 to 24. For example, in an image segmentation, the number of levels may be set according to some specific demands. In some embodiments, the bronchial tree may also be divided into a low-level bronchus and a terminal bronchiole. For illustration purposes, an exemplary embodiment of a 13-level bronchial tree is described below. In the embodiment, the low-level bronchus may include one or more relatively proximal levels, e.g., levels 1-3, levels 1-4, levels 1-5, or other variations. The terminal bronchiole may include one or more relatively distal levels, e.g., levels 4-13, levels 5-13, levels 6-13, or other variations. It should be noted that the division of the low-level bronchus and the terminal bronchiole is not limited here, and the above embodiments are merely for illustration purposes and not intended to limit the scope of the present disclosure.
 In step 501, the bronchial tree with low-level bronchi may be acquired according to some embodiments of the present disclosure. The low-level bronchus may be extracted by an image segmentation. In some embodiments, the image segmentation method may include a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof. For illustration purposes, a region growing based method and a morphology based method will be described below; it should be noted that this is not intended to limit the scope of the present disclosure.
 In some embodiments of the region growing based method, one or more seed points may be deposited in a great airway by an automatic method or an interactive method to perform an initialization. In some embodiments, the airway may be a trachea. The region growing may include judging whether an adjacent point (a pixel or a voxel) meets a fusion standard under a fixed or a self-adaptive threshold. In some embodiments, the region growing method may be based on a characteristic or a feature including a gray level, a mean gray level, an intensity, a texture, a color, a contrast, a brightness, or the like, or any combination thereof. Merely by way of example, the region growing method based on the gray characteristic will be described in detail below. The method may include searching a seed point according to the tube-like structure property of the trachea and performing region growing. In some embodiments, a voxel may be determined as belonging to a bronchus if it meets one or more thresholds. For example, the thresholds may include a first threshold and a second threshold. The first threshold may be applied to judge whether a voxel belongs to a bronchus. For example, the first threshold may be an upper limit of the HU value for a voxel in a bronchus and termed a MaxAirwayHUThreshold. The second threshold may be used to distinguish a voxel in a bronchus from a voxel in a bronchus wall. For example, the second threshold may be an upper limit of the difference in HU value between a voxel in a bronchus and a voxel outside a bronchus, and may be termed a MaxLunmenWallDiffThreshold. For example, if the two thresholds are satisfied simultaneously, i.e., the HU value of a voxel V is lower than the first threshold and the difference between the HU value of each of the 26 voxels adjacent to the voxel V and the HU value of the voxel V is lower than the second threshold, then the voxel V may be determined as belonging to a bronchus. It should be noted that the 26 voxels are merely for exemplary purposes, and the number of adjacent voxels may be any natural number, e.g., 1, 2, 3, 4, etc.
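 The two-threshold test for a voxel V may be written as the following sketch; the concrete values of the two thresholds are illustrative assumptions, while the 26-neighborhood test follows the description above.

    def belongs_to_bronchus(volume, voxel, max_airway_hu_threshold=-900, max_lumen_wall_diff_threshold=50):
        # First threshold: the HU value of the voxel must be below the upper limit.
        if volume[voxel] >= max_airway_hu_threshold:
            return False
        z, y, x = voxel
        for dz in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dz == dy == dx == 0:
                        continue
                    nb = (z + dz, y + dy, x + dx)
                    if not all(0 <= c < s for c, s in zip(nb, volume.shape)):
                        continue
                    # Second threshold: the HU difference between each adjacent voxel
                    # and the voxel itself must be below the upper limit.
                    if volume[nb] - volume[voxel] >= max_lumen_wall_diff_threshold:
                        return False
        return True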
 In some embodiments of the morphology based method, the bronchus region may be extracted by a series of filters. See C. Pisupati, L. Wolff, W. Mitzner, and E. Zerhouni, “Segmentation of 3-D pulmonary trees using mathematical morphology,” in Proc. Math. Morphol. Appl. Image Signal Process, May 1996, pp. 409-416, which is hereby incorporated by reference in its entirety.
 It should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, the image segmentation method may be variable, changeable, or adjustable based on the spirits of the present disclosure. In some embodiments, the low-level bronchus may be extracted by any image segmentation method as described elsewhere in the present disclosure. The image segmentation method may be a single method or any combination of the known methods as described elsewhere in the present disclosure. However, those variations, changes and modifications do not depart from the scope of the present disclosure.
 In step 502, a terminal bronchiole may be extracted based on the low-level bronchus. The extracting method may be an energy-based 3D reconstruction method. The energy-based 3D reconstruction is a growing method based on a constraint condition of local energy minimization. According to the Markov Random Field Theory, the energy function may be related to a growing potential energy. For each voxel, the growing potential energy in two states may be calculated. The two states may include a state that the voxel belongs to a terminal bronchiole and a state that the voxel does not belong to a terminal bronchiole. If the growing potential energy of the former state is greater than that of the latter state, the voxel may be determined as belonging to a terminal bronchiole, and vice versa. The energy-based 3D reconstruction method will be described in detail later.
 In step 503, the low-level bronchus and the terminal bronchiole may be fused together through an image fusion method to generate a bronchus extraction result. In some embodiments, the fusion method may be divided into a spatial domain fusion and a transform domain fusion. In some embodiments, the fusion method may include a high pass filtering technique, a HIS  transform based image fusion, a PCA based image fusion, a wavelet transform image fusion, a pair-wise spatial frequency matching, or the like, or any combination thereof.
 It should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, the bronchus extracting method may be variable, changeable, or adjustable based on the spirits of the present disclosure. In some embodiments, a data acquiring step may be added to FIG. 5. In some embodiments, the bronchus acquired after step 503 may also be post-processed by a method as described elsewhere in the present disclosure. However, those variations, changes, and modifications do not depart from the scope of the present disclosure.
 For a better understanding of the present disclosure, some embodiments of extracting a terminal bronchiole will be described in detail below. It should be noted that these embodiments are not intended to limit the scope of the present disclosure. The energy-based 3D reconstruction is a growing method based on a constraint condition of local energy minimization. According to the Markov Random Field Theory, the energy function ε may be related to the growing potential energy Vprop, which may be calculated by:
 [Equation not reproduced: the relation between the energy function ε, the growing potential energy Vprop, and the constant kT is given as an equation image in the original filing.]
 Wherein kT is a fixed value.
 The growing potential energy Vprop in different states stx may be calculated for a voxel x. For example, if the voxel does not belong to a terminal bronchiole, stx is assigned 0. If the voxel belongs to a terminal bronchiole, stx is assigned 1. In some embodiments, if Vprop(x, 1) - Vprop(x, 0) > 0, the voxel x may be determined as belonging to a terminal bronchiole. Given the topology and density information of the terminal bronchiole, Vprop may include a first weighted term Vradial, a second weighted term Vdistal, and a third weighted term Vcontrol, i.e., Vprop = Vradial + Vdistal + Vcontrol. In some embodiments, Vradial may be used to support a growth in a radial direction, Vdistal may be used to support a growth in a distal direction, and Vcontrol may be used to limit the growth within a bronchus tube. The three weighted terms may be calculated by the following equations:
 [Equations not reproduced: the expressions for Vradial, Vdistal, and Vcontrol are given as equation images in the original filing.]
 wherein x and y are voxel variables, v may be a voxel set, v26(x) may represent the 26 voxels adjacent to the voxel x, vn,v(x) may be a set of voxels adjacent to the voxel x, vf,v(x) may be a set of voxels apart from the voxel x, and st represents a state. K may be a normalizing factor that guarantees the growing anisotropy in a terminal bronchiole. F is the HU value of the voxel x.
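 Because the weighted terms are given only as equation images in the original filing, the following sketch treats Vradial, Vdistal, and Vcontrol as supplied callables and only illustrates the weighted sum Vprop = w1·Vradial + w2·Vdistal + w3·Vcontrol and the comparison of the two states; everything else is an assumption made for illustration.

    def voxel_belongs_to_terminal_bronchiole(x, v_radial, v_distal, v_control, w1=1.0, w2=1.0, w3=1.0):
        # Each callable returns the value of its weighted term for voxel x in the
        # given state st (1: belongs to a terminal bronchiole, 0: does not).
        def v_prop(st):
            return w1 * v_radial(x, st) + w2 * v_distal(x, st) + w3 * v_control(x, st)
        # The voxel is accepted when Vprop(x, 1) - Vprop(x, 0) > 0.
        return v_prop(1) - v_prop(0) > 0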
 FIG. 6 illustrates a terminal bronchiole extracting method based on an energy-based 3D reconstruction process according to some embodiments of the present disclosure.
 As illustrated in the figure, in step 601, some parameters may be initialized. The parameters may include a first voxel set, a number of iterations, a weight assigned to each weighted term, or the like, or any combination thereof. For example, the first voxel set may be initialized as S1=Is, the iteration count may be initialized as N=0, and the weights of Vradial, Vdistal, and Vcontrol may be assigned as w1, w2, and w3, respectively.
 In step 602, a second voxel set S2 may be initialized as S2=Φ, wherein Φ is an empty set.
 In step 603, the growing potential energy of each voxel p adjacent to each voxel in the first voxel set S1 may be calculated, i.e., Vprop=w1·Vradial+w2·Vdistal+w3·Vcontrol.
 In step 604, the growing potential energy in two states may be compared. If Vprop (p, 1) >Vprop (p, 0) , it will enter into step 605, otherwise it will enter into step 606. Vprop (p, 1) is the growing potential energy in the first state (stx=1) that the voxel p belongs to a terminal bronchiole, and Vprop (p, 0) is the growing potential energy in the second state (stx=0) that the voxel p doesn’t belong to a terminal bronchiole.
 In step 605, the voxel p may be pushed into the second voxel set S2.
 In step 606, whether all voxels p adjacent to every voxel x in the first voxel set S1 have been traversed may be judged. If the answer is yes, it will enter into step 607, otherwise it will return to step 603.
 In step 607, the number of the voxels in the first voxel set S1 and the number of the voxels in the second voxel set S2 may be compared. If the number of the voxels in S1 is bigger than the number of the voxels in S2, it will enter into step 608, otherwise it will enter into step 609.
 In step 608, the second voxel set S2 may be cleared, i.e., the condition that the growing terminal bronchus is bigger than the original bronchus may be determined as not correct and abandoned.
 In step 609, all the connected regions d in the second voxel set S2 may be searched. The search may be performed through a calculation. Then a radius r of each connected region d may be calculated.
 In step 610, the voxel amount nd and/or the radius r of the connected region d may be compared to a respective threshold. For example, nT may be used to represent a threshold of the voxel amount nd, and rT may be used to represent a threshold of the radius r. In some embodiments, if the voxel amount nd is bigger than nT, or the radius r is bigger than rT, it will enter into step 611, otherwise it will enter into step 612.
 In step 611, the voxel x in the connected region that does not satisfy the demand in step 610 may be removed from the second voxel set S2.
 In step 612, the voxels in the second voxel set S2 satisfying the demand in step 610 may be added into the result If of the terminal bronchiole.
 In step 613, the second voxel set S2 may be assigned to the first voxel set S1 and the number of iterations may be increased by 1, i.e., S1=S2 and N+=1.
 In step 614, if the first voxel set S1 is a non-null set and the number of iterations is not bigger than NT, it will return to step 602, otherwise the iteration process will end. A terminal bronchiole segmentation result If may be acquired after the iteration process.
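 For readability, the iteration of steps 601-614 may be condensed into the following sketch; the helpers grow_candidates(), connected_regions(), and region_radius(), as well as the threshold values, are assumptions standing in for the computations described above and do not assert the exact implementation of the present disclosure.

    def extract_terminal_bronchiole(initial_voxels, grow_candidates, connected_regions,
                                    region_radius, n_t=500, r_t=3.0, iteration_limit=50):
        s1 = set(initial_voxels)           # step 601: S1 = Is
        result = set(initial_voxels)       # If, the terminal bronchiole result
        n = 0                              # step 601: N = 0
        while s1 and n <= iteration_limit: # step 614
            # Steps 602-606: collect voxels p adjacent to S1 with Vprop(p, 1) > Vprop(p, 0).
            s2 = set(grow_candidates(s1))
            if len(s1) > len(s2):          # steps 607-608: clean off S2 as described above
                s2 = set()
            for region in connected_regions(s2):   # steps 609-611
                if len(region) > n_t or region_radius(region) > r_t:
                    s2 -= set(region)
            result |= s2                   # step 612
            s1 = s2                        # step 613
            n += 1
        return result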
 In some embodiments, a weighted terms self-adjusting method may be added in the energy-based 3D reconstruction process. The weight value chosen to self-adjust may be w1, w2, w3 or any combination thereof. For example, the weight value w3 of Vcontrol may be self-adjusted to control the terminal bronchus’s growing in a bronchus tube. In step 602, the weight value w3 of Vcontrol may be initialized at the same time. In step 608, the weight value w3 of Vcontrol may be adjusted simultaneously, for example, w3+=Δw. It should be noted that w1 and/or w2 may also be self-adjusted in other embodiments.
 It should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. For persons having ordinary skill in the art, the energy-based 3D reconstruction process method may be variable, changeable, or adjustable based on the spirits of the present disclosure. In some embodiments, one or more steps in FIG. 6  may be exchanged, added or removed. For example, the steps extracting a bronchial tree may be added before an energy-based 3D reconstruction process, and/or an image fusion process may be added after the energy-based 3D reconstruction process. In some embodiments, the parameters in some step may be changed or replaced. For example, thresholds such as nT, rT, or NT and parameters such as w1, w2, or w3 may be varied or changed according to specific implementation scenarios. However, those variations, changes and modifications do not depart from the scope of the present disclosure.
 FIG. 7 illustrates a diagram of an image processing system for extracting a bronchus according to some embodiments of the present disclosure. As shown in FIG. 7, an image processing system 202 may include a processing unit 202-1 and a storage unit 202-2. In some embodiments, the image processing system 202 may also be integrated, connected, or coupled to an input/output unit 201 as described elsewhere in the present disclosure. It should be noted that the image processing system is merely for exemplary purposes and not intended to limit the scope of the present disclosure.
The processing unit 202-1 may be configured to process information. Merely by way of example, the information may include data, a number, a text, an image, a voice, a force, a model, an algorithm, a software, a program, or the like, or any combination thereof. In some embodiments, the processing unit 202-1 may be a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), an Application-Specific Instruction-Set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a processor, a microprocessor, an ARM, or the like, or any combination thereof.
In some embodiments, the processing unit 202-1 may further include a first module 701, a second module 702, and a third module 703. The three modules may work cooperatively or independently. For example, the first module 701 may be configured to extract a low-level bronchus, the second module 702 may be configured to extract a terminal bronchiole, and the third module 703 may be configured to fuse the low-level bronchus and the terminal bronchiole. For another example, a single module chosen from the above three modules may also perform all the functions independently without help from the other modules. In some embodiments, each of the modules 701, 702, and 703 may include a processor, a hardware, a software, an algorithm, a memory, or the like, or any combination thereof. The categories, amount, size, or type of the components in different modules may be different or identical according to specific implementation scenarios. In some embodiments, the modules 701, 702, and 703 may also share a same component such as a processor, a hardware, a software, an algorithm, a memory, or the like, or any combination thereof. For example, the modules 701, 702, and 703 may share the same input/output unit 201 and the storage unit 202-2 in FIG. 7.
It should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. For persons having ordinary skill in the art, the image processing system may be varied, changed, or adjusted based on the spirit of the present disclosure. In some embodiments, some components may be exchanged, added, or removed in the image processing system. For example, the input/output unit may be integrated into the image processing system. For another example, the three modules in the processing unit may be integrated into one module or divided into more than three modules. However, those variations, changes, and modifications do not depart from the scope of the present disclosure.
FIG. 8 is a block diagram illustrating an exemplary processing unit of the image processing system according to some embodiments of the present disclosure. The processing unit 202-1 may be configured or used to process the information, as described elsewhere in the present disclosure, from the input/output unit 201 and/or the storage unit 202-2. The processing unit 202-1 may be configured to perform a pre-processing operation, a co-processing operation, and/or a post-processing operation. In some embodiments, the pre-processing operation may include suppressing, weakening, and/or removing a detail, a mutation, a noise, or the like, or any combination thereof. In some embodiments, the co-processing operation may include an image reconstruction, an image segmentation, etc. In some embodiments, the post-processing operation may include a 2D post-processing technique, a 3D post-processing technique, and other techniques. In some embodiments, the processing unit 202-1 may be a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), an Application-Specific Instruction-Set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a processor, a microprocessor, an ARM, or the like, or any combination thereof.
For illustration purposes, the processing unit 202-1 may include a segmentation module 801. The segmentation module 801 may include a pre-process block, a recognition block, a coarse segmentation block, a fine segmentation block, a fusion segmentation block, and a post-process block. The pre-process block may be configured or used to perform one or more pre-processing operations. The recognition block may be configured or used to recognize and/or determine one or more regions of interest (ROI). The coarse segmentation block may be configured or used to perform a coarse segmentation. The fine segmentation block may be configured or used to perform a fine segmentation. The fusion segmentation block may be configured or used to fuse the results of the coarse segmentation block and the fine segmentation block, or to process the result of the fine segmentation block. The post-process block may be configured or used to perform one or more post-processing operations.
In some embodiments of the present disclosure, the recognition block may be configured or used to recognize and/or determine an ROI wherein a tumor is located; the coarse segmentation block may be configured or used to perform a coarse segmentation of the tumor based on a feature classification method; the fine segmentation block may be configured or used to perform a fine segmentation of the tumor based on a level set method (LSM); and the fusion segmentation block may be configured or used to process the result of the fine segmentation block, in which the result of the coarse segmentation block is utilized.
In some embodiments of the present disclosure, the recognition block may be configured or used to recognize and/or determine one or more ROIs wherein a lung may be located; the coarse segmentation block may be configured or used to perform a coarse segmentation of the lung, acquiring one or more target regions, which is/are divided into one or more central regions and one or more boundary regions; the fine segmentation block may be configured or used to perform a fine segmentation of the lung by making use of one or more classifiers, acquiring one or more parting segmentation regions, which is/are separated from the boundary regions; and the fusion segmentation block may be configured or used to fuse the result of the fine segmentation block and the result of the coarse segmentation block, merely by way of example, to fuse the target regions and the parting segmentation regions. The post-process block may be configured or used to perform image smoothing. The methods of image smoothing may include a median smoothing, a Gaussian smoothing, a mean smoothing, a normalized smoothing, a bilateral smoothing, smoothing with mathematical morphology, or the like, or any combination thereof.
It should be noted that the above description of the processing unit 202-1 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, the assembly and/or function of the processing unit 202-1 may be varied or changed. The assembly and/or function of the segmentation module 801 may be varied or changed. Merely by way of example, the pre-process block, the recognition block, and the post-process block may not exist in the segmentation module 801; the function of recognizing and/or determining one or more ROIs may be performed in other module(s) or block(s); the coarse segmentation and the fine segmentation may be performed in one module/block or in more blocks; the fusion segmentation block may be configured or used to process the result of the coarse segmentation block; and the function of each block described in the segmentation module 801 may be performed one or more times.
 FIG. 9 is a flowchart illustrating an exemplary process of image segmentation according to some embodiments of the present disclosure. In step 901, one or more kinds of data may be acquired. In some embodiments, the data may be image data, image mask data, parameter, or the like, or any combination thereof. In some embodiments, the image data may be initial image data or processed data. Merely by way of example, the processing may be an image segmentation. The image may be a 2D image or a 3D image. The image may be a CT image, an MR image, an Ultrasound image, a PET image, or the like, or any combination thereof. In some embodiments, the image mask may be a binary mask extracted from a specific area of an image. The image mask may be used to supply a screen, extract a region of interest, and/or make a specific image. The parameter may be some information input by a user or an external source. For example, the parameter may include a threshold, a CT value, a region of interest (ROI) , a number of iterations, a seed point, an equation, an algorithm, or the like, or any combination thereof.
It should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. The data acquired in step 901 may be varied, changed, or adjusted based on the spirit of the present disclosure. For example, the data may also be a model, prior information, an expert-defined rule, or the like, or any combination thereof, for a specific subject. This information may be trained or self-learned before or during an image segmentation. However, those variations and modifications do not depart from the scope of the present disclosure.
In step 902, one or more ROIs may be determined by means including an automatic identification, a semi-automatic identification, or a manual marking by a user. The automatic identification may refer to an automatic identification of different tissues, organs, normal regions, abnormal regions, or the like, or any combination thereof. The semi-automatic identification may refer to a semi-automatic identification of different tissues, organs, normal regions, abnormal regions, or the like, or any combination thereof, in which the user may perform one or more operations, in addition to some processing performed automatically by the system, before the one or more ROIs are determined. The manual marking may refer to one or more ROIs determined or located by the user through manual annotation.
 The coarse segmentation may be performed in step 903. The fine segmentation may be performed in step 904. In some embodiments, the coarse segmentation and the fine segmentation may be performed in a single step. The methods of segmentation may include a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof.
The post-process may be performed in step 905. The post-processing techniques may include a 2D post-processing technique, a 3D post-processing technique, and other techniques. In some embodiments, the 2D post-processing technique may include a multi-planar reformation (MPR), a curved planar reformation (CPR), a computed volume reconstruction (CVR), a volume rendering (VR), or the like, or any combination thereof. In some embodiments, the 3D post-processing technique may include a 3D surface reconstruction, a 3D volume reconstruction, a volume intensity projection (VIP), a maximum intensity projection (MIP), a minimum intensity projection (Min-IP), an average intensity projection (AIP), an X-ray simulation projection, a volume rendering (VR), or the like, or any combination thereof. The other techniques may include a repair process, an image filling process, etc.
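Merely for illustration, several of the intensity projections named above may be sketched with NumPy as follows; the projection axis and the function name are assumptions for the example and are not taken from the disclosure.

```python
import numpy as np

def intensity_projections(volume, axis=0):
    """Sketch of simple 3D post-processing projections along one viewing axis."""
    mip = volume.max(axis=axis)      # maximum intensity projection (MIP)
    min_ip = volume.min(axis=axis)   # minimum intensity projection (Min-IP)
    aip = volume.mean(axis=axis)     # average intensity projection (AIP)
    return mip, min_ip, aip
```

Each projection collapses the 3D volume to a 2D image along the chosen viewing direction.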
For illustration purposes, in some embodiments of the present disclosure, one or more ROIs, wherein a tumor is located, may be detected or recognized in step 902. The detection or recognition may be based on an automatic identification, a semi-automatic identification, or a manual marking. The coarse segmentation of the tumor based on a feature classification method may be performed in step 903; the fine segmentation of the tumor based on a level set method may be performed in step 904; and the post-process, which may be performed in step 905, may include the processing of the result of the fine segmentation.
For illustration purposes, in some embodiments of the present disclosure, an ROI wherein a lung is located may be recognized or detected in step 902. The detection or recognition may be based on an automatic identification, a semi-automatic identification, or a manual marking. The coarse segmentation of the lung may be performed in step 903 to acquire one or more target regions, which is/are divided into one or more central regions and one or more boundary regions; the fine segmentation of the lung may be performed in step 904, which may make use of one or more classifiers to acquire one or more parting segmentation regions that is/are separated from the boundary regions; and the post-process, which may be performed in step 905, may include fusing the result of step 903 and the result of step 904, more specifically, fusing the target regions and the parting segmentation regions.
It should be noted that the flowchart described above is provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the guidance of the present disclosure. Those variations and modifications do not depart from the scope of the present disclosure. For example, step 901, step 902, step 903, step 904, and step 905 may be performed sequentially in an order other than that described above in FIG. 9. Step 901, step 902, step 903, step 904, and step 905 may be performed concurrently or selectively. Step 901, step 902, step 903, step 904, and step 905 may be merged into a single step or divided into a number of steps. In addition, one or more other operations may be performed before/after or while performing step 901, step 902, step 903, step 904, and step 905. Merely by way of example, a pre-process may be performed after step 901 and before step 902; the pre-process may also be performed after step 902 and before step 903.
FIG. 10 is a flowchart illustrating an exemplary process of a coarse segmentation of tumors according to some embodiments of the present disclosure. In some embodiments of the present disclosure, an ROI wherein a tumor is located may be determined in step 1001. In some embodiments, a number of 2D slices corresponding to a tumor may be acquired, and each 2D slice may indicate a cross section of the tumor. Merely by way of example, a tumor region in which the tumor is located may be recognized by a user in each 2D slice. A line segment of length d1, which traverses the tumor, may be drawn by the user on a slice of the tumor. In some embodiments, the line segment may be drawn on the slice that has a relatively larger cross-section compared with other slices. A line segment may be a part of a line that is bounded by two distinct end points. A cube of edge length d2 (d2=d1·r, 1<r<2), centered at the midpoint of the line segment, may be constructed. The region within the cube may be deemed as the ROI wherein the tumor is located. Merely by way of example, the region wherein the tumor is located may be completely within the ROI determined by the cube.
In step 1002, a pre-processing may be performed on the ROI. Merely by way of example, the pre-processing may include normalizing the resolution of the image in each direction, such as the X axis, the Y axis, and the Z axis, enhancing the contrast of the image, reducing the noise of the image, acquiring a new image, etc.
In step 1003, one or more seed points may be sampled. Merely by way of example, one or more positive seed points may be sampled from the ROI. The positive seed points may be sampled randomly among the voxels in the ROI determined in step 1002 wherein the tumor is located. The positive seed points may be taken as sampling seed points of the target tumor region. One or more negative seed points may also be sampled from the ROI. The negative seed points may be sampled randomly among the voxels in the ROI determined in step 1002 wherein the tumor is not located. The negative seed points may be taken as sampling seed points of the normal region.
In some embodiments of the present disclosure, given the line segment of length d1 drawn by the user, an internal region centered at the midpoint of the line segment with a radius of d3 (d3=d1/3) may be created, and m positive seed points may be sampled in the internal region. The positive seed points may be part of the tumor. In some embodiments, d3=d1/l, where l may be an integer greater than 1. In some embodiments, the internal region may be completely restricted within the tumor region.
In some embodiments of the present disclosure, given the line segment of length d1 drawn by the user, an external region centered at the midpoint of the line segment with a radius ranging from d1·t to d1·r (1<t<r) may be created, and n negative seed points may be sampled in the external region. The negative seed points may be part of the normal tissues surrounding the tumor.
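Merely for illustration, the sampling of the positive and negative seed points described in steps 1001 through 1003 may be sketched as follows. The seed counts m and n, the parameter values for l, t, and r, and the function name are assumptions for the example; the internal region and the external shell follow the radii d1/l and [d1·t, d1·r] given above.

```python
import numpy as np

def sample_seed_points(p1, p2, voxel_coords, m=50, n=50, l=3, t=1.2, r=1.5, rng=None):
    """Sketch of step 1003: sample positive/negative seed points around the user-drawn segment.

    p1, p2       : end points (length-3 arrays) of the line segment of length d1.
    voxel_coords : (N, 3) array of candidate voxel coordinates inside the ROI.
    """
    rng = rng or np.random.default_rng()
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1 = np.linalg.norm(p2 - p1)                 # length of the user-drawn segment
    center = (p1 + p2) / 2.0                     # midpoint of the segment
    dist = np.linalg.norm(voxel_coords - center, axis=1)

    inner = voxel_coords[dist <= d1 / l]                        # internal region: likely tumor
    outer = voxel_coords[(dist >= d1 * t) & (dist <= d1 * r)]   # external shell: normal tissue

    pos = inner[rng.choice(len(inner), size=min(m, len(inner)), replace=False)]
    neg = outer[rng.choice(len(outer), size=min(n, len(outer)), replace=False)]
    return pos, neg                              # positive and negative seed coordinates
```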
In step 1004, features of the gray-scale histograms of the seed points may be calculated. The gray-scale histogram of an image represents the distribution of the pixels in the image over the gray-level scale. The features of the gray-scale histogram may be used for training and/or classification.
In some embodiments of the present disclosure, the grayscale of each image corresponding to each seed point may be normalized to a fixed value, denoted by S. A normalized S-dimensional gray-scale histogram feature may be calculated in a spherical neighborhood centered at each seed point with a radius of j. Training and classifying based on the S-dimensional gray-scale histogram features may then be performed to acquire the parameters of the classifier. Merely by way of example, j may be an integer greater than 1, such as 7.
In step 1005, a linear discriminant analysis (LDA) training may be performed to acquire one or more linear or nonlinear classifiers. The LDA training may be based on the gray-scale histograms of the positive seed points and the negative seed points acquired in step 1004. Merely by way of example, the linear classifier C may be shown in Equation 6:
y = w1·v1 + w2·v2 + … + wS·vS; C = 1 if y ≥ T, and C = 0 otherwise,  (Equation 6)
wherein T is the threshold of the classifier, y is the weighted feature value, w1, w2…wS are the weighted values, v1, v2…vS are the features of the gray-scale histograms, S denotes the normalized grayscale (i.e., the dimension of the histogram feature), and T and w1, w2…wS may be generated based on the LDA training.
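Merely for illustration, and assuming the linear form of Equation 6 as reconstructed above, the classification of steps 1006 through 1008 may be sketched as follows; the convention that a voxel with y ≥ T is labeled as tumor, as well as the function and variable names, are assumptions for the example.

```python
import numpy as np

def coarse_classify(features, w, T):
    """Sketch of steps 1006-1008: apply the linear classifier of Equation 6 to every voxel.

    features : (N, S) array; row i holds the S-dimensional gray-scale histogram
               feature of voxel i.
    w        : (S,) weight vector from the LDA training of step 1005.
    T        : classifier threshold from the same training.
    """
    y = features @ w      # weighted feature value y = w1*v1 + ... + wS*vS
    return y >= T         # True where a voxel is classified as tumor (assumed convention)
```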
 In step 1006, features of the gray-scale histograms of one or more voxels in the image I may be calculated. In some embodiments, the linear classifier C may be used to traverse every voxel in the image I.
In step 1007, the linear classifier C, which is trained in step 1005, may be used to assess each voxel of the image I to determine whether the voxel is part of the tumor or not.
In step 1008, the result of the coarse segmentation may be acquired. Merely by way of example, the result of the coarse segmentation may be shown in Equation 7:
C(x) = 1, if the voxel x is classified as part of the tumor; C(x) = 0, otherwise,  (Equation 7)
wherein x may denote one voxel in the image I.
 Note that the above exemplary equations for assessment or comparison are merely for illustration and not intended to limit the scope of the present disclosure. Some variations or modifications of the comparison equations may be obvious to persons having ordinary skills in the art.
It should be noted that the flowchart described above is provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, a certain amount of variations and modifications may be made under the guidance of the present disclosure. Those variations do not depart from the scope of the present disclosure. For example, step 1001, step 1002, step 1003, step 1004, step 1005, step 1006, step 1007, and step 1008 may be performed sequentially in an order other than that described in FIG. 10. Step 1001, step 1002, step 1003, step 1004, step 1005, step 1006, step 1007, and step 1008 may be performed concurrently or selectively. Step 1001, step 1002, step 1003, step 1004, step 1005, step 1006, step 1007, and step 1008 may be merged into a single step or divided into a number of steps. Merely by way of example, step 1003 and step 1004 may be merged into a single step. In addition, one or more other operations may be performed before/after or in step 1001, step 1002, step 1003, step 1004, step 1005, step 1006, step 1007, and step 1008.
In some embodiments of the present disclosure, a fine segmentation may be performed after acquiring the result of the coarse segmentation. Merely by way of example, the fine segmentation of the tumor may be performed based on a level set method (LSM). The LSM utilizes dynamic variational boundaries for image segmentation, embedding active contours into a time-dependent partial differential equation (PDE) of a function φ. It is then possible to approximate the evolution of the active contours implicitly by tracking the zero level set Γ(t). The implicit interface Γ may be comprised of a single zero isocontour or a series of zero isocontours. The evolution of φ is fully determined by the numerical level set equation, which is shown in Equation 8:
∂φ/∂t = F|∇φ|, with φ(x, 0) = φ0(x),  (Equation 8)
wherein ∇φ/|∇φ| denotes the normal direction, φ0(x) represents the initial contour, and F represents the comprehensive forces, including the internal force from the interface geometry (e.g., mean curvature, contour length, and area) and the external force from the image gradient and/or artificial momentums.
The level set function φ converts the 2D image segmentation problem into a 3D problem. There are other constraints for a stable level set evolution. For instance, the time step and the grid spacing should comply with the Courant–Friedrichs–Lewy (CFL) condition, and the level set function φ should be re-initialized periodically as a signed distance function. In order to overcome these challenges, a fast level set formulation was proposed, as shown in Equation 9.
∂φ/∂t = μζ(φ) + ξ(g, φ),  (Equation 9)
wherein ζ(φ) is a penalty momentum that penalizes the deviation of φ from a signed distance function, ξ(g, φ) incorporates the image gradient information, δ(φ) denotes the Dirac function, and the constants μ, λ, and ν control the individual contributions of ζ(φ) and ξ(g, φ).
The result of the coarse segmentation, shown in Equation 7, may be used as the initial value of φ0(x). The initial contour of φ0(x) may be seen as the contour acquired after performing the coarse segmentation of the tumor. Then φ0(x) may be shown in Equation 10:
φ0(x) = d, if the voxel x is judged as a part of the tumor; φ0(x) = -d, otherwise,  (Equation 10)
wherein d may denote a predefined distance.
If a voxel is judged as a part of the tumor after the coarse segmentation, the voxel may be located inside the initial contour, and the initial value of φ0(x) may be +d. If the voxel is judged as a part of the normal tissues after the coarse segmentation, the voxel may be located outside the initial contour, and the initial value of φ0(x) may be -d. Iterative calculations may be performed based on Equation 11 until the number of iterations K is reached.
φi+1(x) = φi(x) + τ[μζ(φi) + ξ(g, φi)]  (Equation 11)
The result of the fine segmentation based on the level set method, more specifically, the final result of the segmentation of the tumor, may be shown in Equation 12:
the voxel x is a part of the tumor if φK(x) ≥ 0, and is not a part of the tumor if φK(x) < 0,  (Equation 12)
wherein φK(x) may be used to indicate a distance function. When φK(x) is greater than or equal to 0, the voxel x may be located on the contour or inside the contour, and the voxel x may be deemed to be a part of the tumor; when φK(x) is smaller than 0, the voxel x may be located outside of the contour, and the voxel x may be deemed not to be a part of the tumor.
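Merely for illustration, the fine segmentation of Equations 10 through 12 may be sketched as follows. The penalty term ζ and the gradient term ξ of Equation 9 are left as callables supplied by the caller, and the numeric values of K, τ, μ, and d are placeholders; none of these values are taken from the disclosure.

```python
import numpy as np

def fine_segment_levelset(coarse_mask, zeta, xi, g, K=100, tau=1.0, mu=0.2, d=2.0):
    """Sketch of Equations 10-12: initialize from the coarse result, iterate, threshold.

    coarse_mask : boolean array, result of the coarse segmentation (Equation 7).
    zeta        : callable zeta(phi) -> array, penalty term of Equation 9 (assumed given).
    xi          : callable xi(g, phi) -> array, image-gradient term of Equation 9 (assumed given).
    g           : edge-indicator image used by xi.
    """
    # Equation 10: +d inside the coarse contour, -d outside.
    phi = np.where(coarse_mask, d, -d).astype(float)

    # Equation 11: phi_{i+1}(x) = phi_i(x) + tau * [mu * zeta(phi_i) + xi(g, phi_i)]
    for _ in range(K):
        phi = phi + tau * (mu * zeta(phi) + xi(g, phi))

    # Equation 12: voxels with phi_K(x) >= 0 lie on or inside the contour, i.e. tumor.
    return phi >= 0
```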
Note that the above exemplary equations for assessment or comparison are merely for the purposes of illustration and not intended to limit the scope of the present disclosure. Some variations and/or modifications of the comparison equations may be obvious to persons having ordinary skills in the art.
It should be noted that the segmentation method provided herein may be used for the segmentation of a liver tumor. However, for persons having ordinary skills in the art, the segmentation method provided herein may also be used for tumors located in other parts of the human body. Alternatively, the method provided herein may also be used to segment a tissue or an organ, for example, a spherical tissue. Merely by way of example, the method provided herein may be used for the segmentation of a lung tumor.
 It should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. 
FIG. 11 is a flowchart illustrating an exemplary process of lung segmentation according to some embodiments of the present disclosure. In some embodiments of the present disclosure, a coarse segmentation of the lung may be performed in step 1101, and a target region may be acquired. The target region may be divided into a central region and a boundary region, which may surround the central region. The voxels in the central region may be identified as parts of nodules; the voxels in the boundary region may need to be further identified as to whether they are parts of nodules. The voxels located outside of the target region may not be identified as parts of one or more nodules. Merely by way of example, the shape of the boundary region may be a regular annulus.
 In step 1102, each voxel in the boundary region may be classified and labelled by making use of one or more classifiers, acquiring a parting segmentation region, which may be separated from the boundary region.
In step 1103, the target region acquired in step 1101 and the parting segmentation region acquired in step 1102 may be fused together. The segmentation result acquired in step 1103 may be comprised of all of the voxels in the central regions and part of the voxels in the boundary region.
 Post-processing may be performed in step 1104 to further improve the result acquired in the step 1103. The post-processing may include performing image smoothing. The methods of image smoothing may include a median smoothing, a Gaussian smoothing, a mean smoothing, a normalized smoothing, a bilateral smoothing, smoothing with mathematical morphology or the like, or any combination thereof.
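Merely for illustration, the smoothing mentioned in step 1104 may be sketched with SciPy as follows; the filter sizes are assumptions for the example.

```python
import numpy as np
from scipy import ndimage

def smooth_result(mask, method="median", size=3, sigma=1.0):
    """Sketch of step 1104: smooth a fused binary segmentation result."""
    m = mask.astype(np.float32)
    if method == "median":
        smoothed = ndimage.median_filter(m, size=size)       # median smoothing
    else:
        smoothed = ndimage.gaussian_filter(m, sigma=sigma)   # Gaussian smoothing
    return smoothed > 0.5                                    # back to a binary mask
```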
It should be noted that the flowchart described above is provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, a certain amount of variations and/or modifications may be made under the guidance of the present disclosure. Those variations and/or modifications do not depart from the scope of the present disclosure. For example, step 1101, step 1102, step 1103, and step 1104 may be performed sequentially in an order other than that described in FIG. 11. Step 1101, step 1102, step 1103, and step 1104 may be performed concurrently or selectively. Step 1101, step 1102, step 1103, and step 1104 may be merged into a single step or divided into a number of steps. One or more other operations may be performed before/after or in step 1101, step 1102, step 1103, and step 1104. Merely by way of example, a pre-process may be performed after step 1101 and before step 1102. In addition, the shape of the boundary region may be an annulus, a regular or irregular other shape, or the like, or any combination thereof. The boundary regions may completely or partially surround the central region.
FIG. 12 is a flowchart illustrating a process of a coarse segmentation of the lung according to some embodiments of the present disclosure. In some embodiments of the present disclosure, the coarse segmentation of the lung may be performed based on a region growing segmentation. In step 1201, a long axis may be determined by a user on the slice with the maximum cross-section of a ground-glass nodule (GGN) in the lung image. The long axis may be a straight line segment and may traverse the abnormal regions as much as possible.
 An ROI may be determined based on the long axis in step 1202. Merely by way of example, assuming the length of the long axis is 1, the ROI may be determined by the diagonals of a cube with an edge length of 1.
In step 1203, a filtering may be performed on the ROI, acquiring a filtered image. The filtering may be a mean filtering, a median filtering, or the like, or any combination thereof. Merely by way of example, if a mean filtering is performed, the window, calculated based on the mean filtering and the gray-scale histogram, may be proportional to the length of the long axis. The filtering may be automatically adjusted in accordance with the different volumes of the nodules.
In step 1204, a first region growing may be performed on the filtered image, acquiring a dynamic segmentation region. Merely by way of example, if the first region growing is performed on the filtered image acquired after the mean filtering, the operations of the first region growing based on a distance function may be performed as follows: the points of the long axis determined by the user may be taken as seed points, and region growing may be performed with one or more relatively restrictive initial thresholds; a segmentation region grown based on the mean-filtered image may cover at least part of the long axis, and the length of the long axis covered by the segmentation region may be assessed based on the initial thresholds; if the assessment fails, the thresholds may be eased and the assessment may continue; if the assessment succeeds, an expansion operation with restricted thresholds may be performed, and a dynamic segmentation region may be acquired.
In step 1205, the gray-scale histogram of the ROI may be calculated, and a histogram vector image may be acquired.
In step 1206, a second region growing may be performed on the histogram vector image, and a static segmentation region may be acquired. Merely by way of example, if the second region growing is performed on the histogram vector image, the operations of the second region growing based on a distance function may be performed as follows: the points of the long axis determined by the user may be taken as seed points, and region growing may be performed with one or more relatively restrictive initial thresholds; a segmentation region grown based on the histogram vector image may cover at least part of the long axis, and the length of the long axis covered by the segmentation region may be assessed based on the initial thresholds; if the assessment fails, the thresholds may be eased and the assessment may continue; if the assessment succeeds, an expansion operation with restricted thresholds may be performed, and a static segmentation region may be acquired.
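Merely for illustration, the region growing with threshold easing shared by steps 1204 and 1206 may be sketched as follows. The tolerance test on the intensity difference, the coverage fraction used to assess the long axis, and the parameter values are assumptions for the example and only approximate the assessment described above.

```python
import numpy as np
from scipy import ndimage

def grow_with_easing(image, seeds, axis_mask, init_tol, tol_step, max_tol, coverage_needed=0.8):
    """Sketch of the growing pattern of steps 1204/1206: grow from the long-axis seeds,
    ease the threshold until the grown region covers enough of the long axis, then expand.

    image     : scalar image (mean-filtered image or a per-voxel histogram score).
    seeds     : boolean mask of the seed points taken from the user-drawn long axis.
    axis_mask : boolean mask of all long-axis voxels, used to assess coverage.
    """
    seed_value = image[seeds].mean()          # reference intensity at the seed points
    region = seeds.copy()
    tol = init_tol
    while tol <= max_tol:
        candidate = np.abs(image - seed_value) <= tol    # voxels within the current tolerance
        labels, _ = ndimage.label(candidate)             # connected components of the candidates
        seed_labels = np.unique(labels[seeds])
        region = np.isin(labels, seed_labels[seed_labels > 0])   # keep components touching the seeds
        coverage = (region & axis_mask).sum() / max(int(axis_mask.sum()), 1)
        if coverage >= coverage_needed:
            # assessment succeeds: perform a final restricted expansion
            return ndimage.binary_dilation(region, iterations=1)
        tol += tol_step                                   # assessment fails: ease the threshold
    return region
```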
In step 1207, the dynamic segmentation region acquired in step 1204 and the static segmentation region acquired in step 1206 may be fused together, and the target region may be acquired. The overlapping region of the dynamic segmentation region and the static segmentation region may be determined as the central region of the target region. The central region may be determined as the nodule region. The region located inside the dynamic segmentation region but outside the static segmentation region may be determined as the boundary region, wherein nodules need to be further identified. The region located outside of the dynamic segmentation region may not be determined as part of the target region.
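Merely for illustration, the fusion of step 1207 may be sketched as follows, with the dynamic and static regions given as boolean volumes.

```python
import numpy as np

def fuse_regions(dynamic_region, static_region):
    """Sketch of step 1207: derive the central and boundary regions of the target region."""
    central = dynamic_region & static_region      # overlap: determined nodule (central) region
    boundary = dynamic_region & ~static_region    # inside dynamic but outside static: to be refined
    return central, boundary                      # voxels outside the dynamic region are excluded
```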
It should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, a certain amount of variations and/or modifications may be made under the guidance of the present disclosure. Those variations and/or modifications do not depart from the scope of the present disclosure. For example, the coarse segmentation of the lung may be based on a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof. In addition, step 1201, step 1202, step 1203, step 1204, step 1205, step 1206, and step 1207 may be performed sequentially in an order other than that described above in FIG. 12. Step 1201, step 1202, step 1203, step 1204, step 1205, step 1206, and step 1207 may be performed concurrently or selectively. Step 1201, step 1202, step 1203, step 1204, step 1205, step 1206, and step 1207 may be merged into a single step or divided into a number of steps. One or more other operations may be performed before/after or while performing step 1201, step 1202, step 1203, step 1204, step 1205, step 1206, and step 1207. Merely by way of example, a pre-process may be performed after step 1201 and before step 1202.
FIG. 13 is a flowchart illustrating a process of a fine segmentation of the lung according to some embodiments of the present disclosure. In some embodiments of the present disclosure, the fine segmentation of the lung may be performed by making use of one or more classifiers to classify and label each voxel in the boundary regions. An ROI may be determined in step 1301. In some embodiments, the ROI may have been determined in the process of the coarse segmentation as described elsewhere in the present disclosure, or it may be determined again.
In step 1302, a filtering may be performed on the images in the ROI, and a feature vector image may be acquired. The window of the filter may be proportional to the length of the long axis. The filtering may include a Butterworth filter, a gradient filter, a Gabor filter, a Volterra filter, a Hessian filter, a Chebyshev filter, a Bessel filter, an Elliptic filter, a Constant k filter, an m-derived filter, a filter based on a histogram, or the like, or any combination thereof.
In step 1303, the feature vector image may be combined with the weights of the feature vector, which may have been trained offline, and an LDA probability field image may be acquired.
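Merely for illustration, the combination of the feature vector image with the offline-trained weights in step 1303 may be sketched as a per-voxel linear score; the logistic mapping of the score to a probability, as well as the names, are assumptions for the example.

```python
import numpy as np

def lda_probability_field(feature_image, weights, bias=0.0):
    """Sketch of step 1303: build an LDA probability field image from per-voxel features.

    feature_image : array of shape (X, Y, Z, F), one F-dimensional feature vector per voxel
                    (produced by the filtering of step 1302).
    weights, bias : offline-trained LDA weights.
    """
    score = feature_image @ weights + bias     # per-voxel linear discriminant score
    return 1.0 / (1.0 + np.exp(-score))        # map the score to a [0, 1] probability field
```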
In step 1304, a region growing may be performed on the LDA probability field image, acquiring a parting segmentation region with the labels of the classifiers. Merely by way of example, the region growing based on the LDA probability field may be performed as follows: the points of the long axis determined by the user may be taken as seed points, and region growing may be performed with one or more relatively restrictive initial thresholds; a segmentation region grown based on the LDA probability field image may cover at least part of the long axis, and the length of the long axis covered by the segmentation region may be assessed based on the initial thresholds; if the assessment fails, the thresholds may be eased and the assessment may continue; if the assessment succeeds, an expansion operation with restricted thresholds may be performed, and a segmentation region with the labels of the classifier may be acquired.
It should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, a certain amount of variations and/or modifications may be made under the guidance of the present disclosure. Those variations and/or modifications do not depart from the scope of the present disclosure. For example, step 1301, step 1302, step 1303, and step 1304 may be performed sequentially in an order other than that described above in connection with FIG. 13. Step 1301, step 1302, step 1303, and step 1304 may be performed concurrently or selectively. Step 1301, step 1302, step 1303, and step 1304 may be merged into a single step or divided into a number of steps. One or more other operations may be performed before/after or in step 1301, step 1302, step 1303, and step 1304. Merely by way of example, the parting segmentation regions acquired in step 1304 may be fused with the target region, which may be part of the process of the fine segmentation of the lung.
 FIG. 14 illustrates a flowchart of an image processing method according to some embodiments of the present disclosure. As illustrated in FIG. 14, the processing method may include an acquisition process 1401, a pre-processing process 1402, and a rendering process 1403. In some embodiments of the present disclosure, one or more steps may be omitted. In some other embodiments, one or more steps may be executed repeatedly.
In step 1401, the image data may be acquired from the input/output unit 201 as described elsewhere in the present disclosure. In some embodiments, the data may be an image data, an image mask, a parameter data, or the like, or any combination thereof. In some embodiments, the image data may be an initial image data or an image data after treatment. Merely by way of example, the treatment may be an image segmentation. The image may be a 2D image or a 3D image. The image may be acquired from any medical imaging system as described elsewhere in the present disclosure. In some embodiments, the image mask may be a binary mask to shade a specific area of an image to be processed. The image mask may be used to supply a screen, extract an area of interest, and/or make a specific image. In some embodiments, the parameter may be some information input by a user or an external source. For example, the parameter may include a threshold, a CT value, a region of interest, a number of iterations, a seed point, an equation, an algorithm, an instruction, or the like, or any combination thereof.
In step 1402, the image data may be pre-processed before rendering in some embodiments. The pre-processing step 1402 may include a segmentation, a storage, a processing after segmentation, or the like, or any combination thereof. In some embodiments, the method of segmentation may include a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof. In some embodiments, the segmentation method may be performed in a manual type, a semi-automatic type, or an automatic type. In some embodiments, different components may be stored in different storage modules, and then be rendered. The storage module may include a CPU memory, a GPU video memory, a RAM, a ROM, a hardware, a software, a tape, a CD, a DVD, or the like, or any combination thereof. Merely by way of example, the processing after segmentation may include generating boundary data of the segmentation information. In some embodiments, the pre-processing step 1402 may not be necessary, and the rendering step 1403 may be executed directly after the acquiring step 1401.
In step 1403, the image data after pre-processing may be rendered. In some embodiments of the present disclosure, the rendering strategy may include a direct volume rendering, a volume intensity projection (VIP), a maximum intensity projection (MIP), a minimum intensity projection (Min-IP), a hardware-accelerated volume rendering, an average intensity projection (AIP), or the like, or any combination thereof.
 It should be noted that the above description about the image processing method is merely an example according to some embodiments of the present disclosure. Obviously, to those skilled in the art, after understanding the basic principles of the image processing method, the form and details of the image processing method may be modified or varied without departing from the principles of the present disclosure. For example, the image data acquired in step 1401 may be three-dimensional data, two-dimensional data, one-dimensional data, or the like, or any combination thereof. For another example, the pre-processing step 1402 may include any processing method to improve the image. For still another example, different components or tissues may be rendered according to different specific implementation scenarios. The modifications and variations are still within the scope of the present disclosure described above.
FIG. 15 illustrates an image processing system according to some embodiments of the present disclosure. The image processing system 202 may include a pre-process unit 1501 and a storage/rendering unit 1502. In some other embodiments, the image processing system 202 may also be integrated, connected, or coupled to an input/output unit 201 as described elsewhere in the present disclosure.
The input/output unit 201 may be configured to input an image data, input a user instruction, output an image and/or data, display an image, or the like, or any combination thereof. In some embodiments, the input/output unit 201 may include a data acquisition module, a display module, a user input module, or the like, or any combination thereof. Merely by way of example, the input/output unit 201 may include a mouse, a keyboard, a stylus, a touch screen, a sensor, an interface, a microphone, a wireless communication, a cloud, or the like that may detect an input, or any combination thereof.
 The pre-process unit 1501 may be configured to pre-process the image data before rendering. In some embodiments of the present disclosure, the pre-process unit 1501 may be a segmentation module, a control module, a storage module, or the like, or any combination thereof.
The storage/rendering unit 1502 may be configured to render and/or store the image data. In some embodiments, the storage/rendering unit 1502 may further include a rendering module and a storage module. Merely by way of example, the rendering module and the storage module may be a same module to execute rendering and storage simultaneously. The same module may include a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an Application-Specific Integrated Circuit (ASIC), an Application-Specific Instruction-Set Processor (ASIP), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a processor, a microprocessor, an ARM, or the like, or any combination thereof.
 It should be noted that the above description about the image processing system is merely an example according to some embodiments of the present disclosure. Obviously, to those skilled in the art, after understanding the basic principles of image processing system, the form and details of the image processing system may be modified or varied without departing from the principles of the present disclosure. For example, the input/output unit 201 may be any input and/or output systems such as an image detector. For another example, the input/output unit 201 may be omitted for a data processing system. For still another example, the rendering module and the storage module may be the same module or may also be two different ones. The modifications and variations are still within the scope of the present disclosure described above.
 FIG. 16 illustrates a flowchart of an image processing method according to some embodiments of the present disclosure. FIG. 16 is an exemplary embodiment to illustrate the FIG. 14. As shown in FIG. 16, the image processing method may include an acquisition process 1601, a segmentation process 1602, a storage process 1603, a rendering process 1604, and a display process 1605.
In step 1601, data may be acquired. In some embodiments, the data may be information including data, a number, a text, an image, a voice, a force, a model, an algorithm, a software, a program, or the like, or any combination thereof. Merely by way of example, the data acquired in step 1601 may include a data set of a three-dimensional image. The data set of the three-dimensional image may include voxels. A voxel herein may be the smallest distinguishable unit in the three-dimensional segmentation.
In step 1602, an identification (including a segmentation) of different components in the data set may be performed. In some embodiments, a target region may be determined during an interaction with a user. The target region may be determined based on input information from the user or based on a region given by the user. In some embodiments of step 1602, the input information of the user may be imported through an input/output unit. The input/output unit may be a peripheral device including a mouse, a pen, a keyboard, a stylus, a touch screen, a sensor, an interface, a microphone, a wireless communication, a cloud, or the like that may detect an input, or any combination thereof. Merely by way of example, the user may depict a boundary of the target region on a display with a mouse or a pen. The target region may be a removable target region, a reserved target region, or a combination thereof.
In some embodiments, a corresponding loading content may be selected according to a demand of a user or a type of an executing task. The executing task may relate to the removable region, the reserved region, or the like, or any combination thereof. For a better understanding of the present disclosure, the medical visualization field is described as an example of the image processing method. The three-dimensional data set may be medical scanning data of a subject acquired from any medical imaging system as described elsewhere in the present disclosure. Usually, the air portion may occupy more than one half of the whole data when executing a Direct Volume Rendering task. In practical applications, the air portion may be removed by means of the segmentation step since it is not of interest in some embodiments. In some embodiments, the different components in the data set may be identified according to a practical application demand in the VR task. The data set may be divided into an air portion and a non-air portion. The air portion may be regarded as a removable region, and the non-air portion may be regarded as a reserved region.
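Merely for illustration, the division of a CT data set into an air portion and a non-air portion may be sketched with a simple threshold; the threshold value and the assumption that the volume is expressed in Hounsfield units are not taken from the disclosure.

```python
import numpy as np

def split_air_non_air(volume, air_threshold=-900.0):
    """Sketch of the air / non-air identification used for the VR task."""
    air_mask = volume < air_threshold    # removable region (air)
    non_air_mask = ~air_mask             # reserved region (tissue, bone, etc.)
    return air_mask, non_air_mask
```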
It should be noted that the different components in the data set may not be identified in some embodiments. For example, when executing a Multiple Planar Reconstruction (MPR) task or a Curved Planar Reconstruction (CPR) task, the air portion information may be used to perform an auxiliary diagnosis. The air portion herein may not be removed.
It should be noted that the above description about the segmentation step is merely an example according to some embodiments of the present disclosure. Obviously, to those skilled in the art, after understanding the basic principles of the image processing method, the form and details of the segmentation method may be modified or varied without departing from the principles of the present disclosure. For example, the two embodiments described above may be used in combination. The different three-dimensional rendering tasks may be distributed to different hardware according to the characteristics of the task and the demands of the user. For another example, the air portion and non-air portion described above may be merely an example in the medical visualization field. The different components may be other subjects including a substance, a tissue, an organ, an object, a specimen, a body, or the like, or any combination thereof. The subjects may be identified and then rendered to reduce the hardware resource cost of the system. The modifications and variations are still within the scope of the present disclosure described above.
In some embodiments, the identification of the different components in the data set may include an identification of information from a voxel. The information may be a one-dimensional characteristic, a multi-dimensional characteristic, or a combination thereof. In some embodiments, the characteristic information may be a gray level, a mean gray level, an intensity, a texture, a color, a contrast, a brightness, a spatial property, or the like, or any combination thereof. In some embodiments, the information may further include a toleration range. The voxels from different components may present different one-dimensional and/or multi-dimensional characteristics. Thus, the different components may be segmented or identified by judging whether the characteristic information is within the same toleration range. Merely by way of example, the toleration range may be a threshold. The voxels whose one-dimensional and/or multi-dimensional characteristics are within the threshold may be regarded as belonging to a same component.
 In some embodiments, the identification method may be a segmentation including, e.g., a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof. In some embodiments, the segmentation method may be performed in a manual type, a semi-automatic type or an automatic type as described elsewhere in the present disclosure.
In some embodiments, the three-dimensional image data after the identification may be managed by a corresponding space management algorithm. For example, the image data may be stored through a data structure including an array, a record, a union, a set, a graph, a tree, a class, or the like, or any combination thereof. In some embodiments of a tree data structure, the space management algorithm may include an Octree technology, a K-Dimensional (KD) tree, a Binary Space Partition (BSP) tree, or the like, or any combination thereof.
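Merely for illustration, an octree, one of the space management structures mentioned above, may be sketched as follows; the subdivision criterion and the minimum block size are assumptions for the example.

```python
import numpy as np

class OctreeNode:
    """Minimal octree sketch for managing an identified (e.g., air / non-air) volume."""

    def __init__(self, mask, origin=(0, 0, 0), min_size=8):
        self.origin = origin                      # position of this block in the full volume
        self.shape = mask.shape
        self.children = []
        frac = float(mask.mean()) if mask.size else 0.0
        self.homogeneous = frac in (0.0, 1.0)     # block is entirely air or entirely non-air
        self.occupied = frac > 0.0                # block contains at least one non-air voxel
        if not self.homogeneous and max(mask.shape) > min_size:
            self._subdivide(mask, min_size)       # mixed block: split into up to eight children

    def _subdivide(self, mask, min_size):
        halves = [(0, s // 2, s) for s in mask.shape]
        for z0, z1 in ((halves[0][0], halves[0][1]), (halves[0][1], halves[0][2])):
            for y0, y1 in ((halves[1][0], halves[1][1]), (halves[1][1], halves[1][2])):
                for x0, x1 in ((halves[2][0], halves[2][1]), (halves[2][1], halves[2][2])):
                    sub = mask[z0:z1, y0:y1, x0:x1]
                    if sub.size:
                        origin = (self.origin[0] + z0, self.origin[1] + y0, self.origin[2] + x0)
                        self.children.append(OctreeNode(sub, origin, min_size))
```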
 It should be noted that the above description about the identification method is merely an example according to some embodiments of the present disclosure. Obviously, to those skilled in the art, after understanding the basic principles of the segmentation or identification, the form and details of the segmentation or identification method may be modified or varied without departing from the principles of the present disclosure. For example, the method of identification or data management may be varied but not limited here. The modifications and variations are still within the scope of the present disclosure described above.
In step 1603 illustrated in FIG. 16, the different components of the subject may be stored in different storage modules. In some embodiments, the storage module may include an external storage or an internal storage. In some embodiments, the external storage may include a magnetic storage, an optical storage, a solid state storage, a smart cloud storage, a hard disk, a soft disk, or the like, or any combination thereof. The magnetic storage may be a cassette tape, a floppy disk, etc. The optical storage may be a CD, a DVD, etc. The solid state storage may be a flash memory including a memory card, a memory stick, etc. In some embodiments, the internal storage may be a core memory including a Central Processing Unit (CPU) memory, a Graphics Processing Unit (GPU) video memory, a Random Access Memory (RAM), a Read Only Memory (ROM), a Complementary Metal Oxide Semiconductor Memory (CMOS), or the like, or any combination thereof. For a better understanding of the present disclosure, a CPU memory and a GPU video memory are described as an example. In some embodiments, the whole volume data of the three-dimensional data set may be stored in the CPU memory, and the non-air portion may be stored in the GPU video memory.
 It should be noted that the above description about the storage method is merely an example according to some embodiments of the present disclosure. Obviously, to those skilled in the art, after understanding the basic principles of the storage, the form and details of the storage method may be modified or varied without departing from the principles of the present disclosure. For example, the whole volume data and the non-air portion may be stored in a same storage module, such as a CPU memory, a GPU video memory, or the like. For another example, the type of storage modules may be changed to any storage as described elsewhere in the present disclosure.  The modifications and variations are still within the scope of the present disclosure described above.
In step 1604 illustrated in FIG. 16, the data in the storage modules may be rendered by a corresponding data processor after receiving a rendering instruction. For illustration purposes, a CPU memory and a GPU video memory may be described in detail. In some embodiments, the image data stored in the memory may be loaded and/or processed by the CPU, and the image data stored in the video memory may be loaded and/or processed by the GPU. The GPU may be configured to perform the computing in a VR task to improve the execution efficiency. The CPU may be configured to perform an MPR task or a CPR task on the large amount of data in which the air portion may be used for auxiliary diagnosis.
In some embodiments, the rendering tasks of the CPU and the GPU may be separated and executed in different control threads in order to balance the load of the rendering tasks and optimize the performance of the system. The control threads may be established corresponding to a CPU task queue and a GPU task queue, respectively. A rendering task may be distributed to the CPU task queue or the GPU task queue for execution according to an instruction from the user. In some embodiments, the rendering tasks of the CPU and the GPU may share a same control thread.
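Merely for illustration, separate control threads serving a CPU task queue and a GPU task queue may be sketched as follows; the dispatching rule in the trailing comment and the rendering callables are assumptions for the example.

```python
import queue
import threading

def start_render_workers(cpu_render, gpu_render):
    """Sketch of separate control threads for the CPU and GPU rendering task queues.

    cpu_render / gpu_render : callables that execute one rendering task on the
    corresponding device (assumed to be implemented elsewhere).
    """
    cpu_tasks, gpu_tasks = queue.Queue(), queue.Queue()

    def worker(task_queue, render):
        while True:
            task = task_queue.get()        # block until a task is dispatched to this queue
            if task is None:               # a sentinel value stops the worker thread
                break
            render(task)                   # e.g., an MPR/CPR task (CPU) or a VR task (GPU)
            task_queue.task_done()

    threading.Thread(target=worker, args=(cpu_tasks, cpu_render), daemon=True).start()
    threading.Thread(target=worker, args=(gpu_tasks, gpu_render), daemon=True).start()
    return cpu_tasks, gpu_tasks

# Dispatching according to the task type might look like:
#   (cpu_tasks if task_kind in ("MPR", "CPR") else gpu_tasks).put(task)
```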
In some embodiments, when executing the VR task, the GPU may execute a complex calculation. At the same time, the resource occupancy rate of the CPU may be low enough to serve requests of multiple users and multiple tasks simultaneously. When executing the MPR or CPR task, in order to ensure the execution efficiency and cache consistency, several tasks may be executed in sequence on the premise of limiting the occupancy rate of the CPU. The CPU may support requests of other tasks at the same time.
It should be noted that the above description about the rendering method is merely an example according to some embodiments of the present disclosure. Obviously, to those skilled in the art, after understanding the basic principles of the rendering, the form and details of the rendering method may be modified or varied without departing from the principles of the present disclosure. In some embodiments, the image rendering task may be executed by other hardware not limited to the CPU and the GPU. For example, it may be an ASIC, an ASIP, a PPU, a DSP, or the like, or any combination thereof. The modifications and variations are still within the scope of the present disclosure described above.
In step 1605, the data after rendering may be displayed. In some embodiments, a rendering result may be collected and/or transmitted through a unified criterion. For example, the rendering result may be collected through a unified framework, and the rendering result may be transmitted to an interface through a unified port. The interface herein may be any displayable device and is not limited herein. In some embodiments, the rendering result may be collected and/or transmitted through a diverse criterion.
 It should be noted that the above description about the data processing method is merely an example according to some embodiments of the present disclosure. Obviously, to those skilled in the art, after understanding the basic principles of the data processing method, the form and details of the data processing method may be modified or varied without departing from the principles of the present disclosure. For example, the image data after identification may be rendered immediately without being saved in a storage. For another example, the storage and the processor may be the same apparatus or may be two different apparatuses. For still another example, the image processing method may be modified by adding a pre-processing step before the identification, or adding a post-processing step after the rendering task. The modifications and variations are still within the scope of the present disclosure described above.
 For better understanding the present disclosure, some embodiments of an identification may be described in detail below. It should be noted that these embodiments are not intended to limit the scope of the present disclosure. As described elsewhere in the present disclosure, the information for identification may be a one dimensional characteristic, a multi-dimensional characteristic, or the combination thereof. In some embodiments, the characteristic information may be gray level, mean gray level, intensity, texture, color, contrast, brightness, spatial property, or the like, or any combination thereof.
 FIG. 17 illustrates an image identification method according to some embodiments of the present disclosure. As shown in FIG. 17, an identification process 1602 may further include a characteristic detecting process 1701, a judgment process 1702, and a labeling process 1703. In some embodiments, one or more steps may be omitted or exchanged. In some other embodiments, one or more steps may be executed repeatedly.
 In step 1701, a characteristic of a voxel may be detected. For illustration purposes, the characteristic information in FIG. 17 may be a one-dimensional characteristic, e.g., gray level, mean gray level, intensity, texture, color, contrast, brightness, spatial property, or the like, or any combination thereof. For example, the one-dimensional characteristics in FIG. 17 are the color and the spatial property of a voxel. In some embodiments, the color may be a gray value, and the spatial property may be an arrangement order of the voxel or a position of the voxel in the three-dimensional space.
 In step 1702, whether voxels belong to a same component may be judged. In some embodiments, a tolerance range may be set. The tolerance range may correspond to the difference of the color values or the positions of the voxels. For example, voxels that are adjacent in position and whose color value difference is within the tolerance range may be regarded as belonging to a same component. Voxels with a great difference in color value or a great distance in position beyond the tolerance range may be more likely to be judged as different tissues. Thus, the data in the three-dimensional image may be identified according to a comprehensive comparison of the arrangement orders and the color values of the voxels.
 In step 1703, the different components may be labeled respectively. In some embodiments of the present disclosure, a same label may be marked on voxels that are adjacent in arrangement and whose color value difference is within a preset tolerance range.
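 A minimal sketch of steps 1702-1703, assuming a flood-fill style grouping of 6-connected voxels whose gray (color) values differ by less than a tolerance; the array, tolerance value, and function name are illustrative only and not the claimed method.

```python
import numpy as np
from collections import deque

def label_components(volume, tolerance):
    """Group 6-connected voxels whose gray values differ by less than `tolerance`."""
    labels = np.zeros(volume.shape, dtype=np.int32)   # 0 = unlabeled
    current = 0
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for seed in zip(*np.nonzero(labels == 0)):         # visit every voxel once
        if labels[seed]:
            continue
        current += 1
        labels[seed] = current
        frontier = deque([seed])
        while frontier:
            z, y, x = frontier.popleft()
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= c < s for c, s in zip(n, volume.shape)) and not labels[n]:
                    # Adjacent voxels with a small gray-value difference are
                    # regarded as belonging to the same component.
                    if abs(float(volume[n]) - float(volume[z, y, x])) < tolerance:
                        labels[n] = current
                        frontier.append(n)
    return labels

volume = np.random.randint(0, 255, size=(16, 16, 16))
labels = label_components(volume, tolerance=30)
print("components found:", labels.max())
```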
 It should be noted that the above description about FIG. 17 is merely an example according to some embodiments of the present disclosure. Obviously, to those skilled in the art, after understanding the basic principles of the segmentation or identification, the form and details of the segmentation or identification method may be modified or varied without departing from the principles of the present disclosure. For example, the different components may be judged according to other characteristics including gray level, mean gray level, intensity, texture, contrast, brightness, or the like, or any combination thereof. For another example, the identification may be a complex image segmentation as described elsewhere in the present disclosure. The modifications and variations are still within the scope of the present disclosure described above.
 In some embodiments of the present disclosure, the different components in the data set may be identified according to multi-dimensional characteristic information. Acquiring the multi-dimensional characteristic information may include providing a pre-determined neighbor region and performing statistics on a voxel and the voxels within the neighbor region to acquire the multi-dimensional characteristic.
 In some embodiments, the statistics may be based on calculating an information entropy of each voxel. The information entropy may reflect, in a statistical sense, the comprehensive characteristics of the gray information distribution within a certain region near the voxel. A threshold may be set to compare the difference of the information entropy between a voxel and a voxel within a certain nearby region. If the difference of the information entropy is less than the pre-determined threshold, the gray value distributions of the two voxels may be considered consistent with each other, and the two voxels may be regarded as belonging to a same component. On the contrary, the gray value distributions may be considered inconsistent if the information entropy difference exceeds the preset threshold, and the two voxels may be regarded as belonging to different components.
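 As an illustrative sketch only, the entropy-based comparison may be realized by computing a histogram entropy over a cubic neighborhood around each of two voxels and comparing the difference with a threshold; the neighborhood radius, bin count, threshold value, and function names below are assumptions.

```python
import numpy as np

def local_entropy(volume, center, radius=2, bins=32):
    """Shannon entropy of the gray-level histogram in a cube around `center`."""
    z, y, x = center
    patch = volume[max(z - radius, 0):z + radius + 1,
                   max(y - radius, 0):y + radius + 1,
                   max(x - radius, 0):x + radius + 1]
    hist, _ = np.histogram(patch, bins=bins, range=(volume.min(), volume.max()))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def same_component(volume, voxel_a, voxel_b, threshold=0.5):
    # If the entropy difference is below the threshold, the local gray-value
    # distributions are regarded as consistent, i.e., the same component.
    return abs(local_entropy(volume, voxel_a) - local_entropy(volume, voxel_b)) < threshold

volume = np.random.randint(0, 255, size=(32, 32, 32))
print(same_component(volume, (10, 10, 10), (12, 10, 10)))
```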
 In some embodiments, the statistics may be based on a Gaussian mixture model. The feature of each voxel in the three-dimensional data set may be characterized by several Gaussian models. In some embodiments, all the voxels in a same region may be fitted by several Gaussian models and their corresponding Gaussian parameters may be acquired. The number of the Gaussian models may be determined according to the characteristic information of the voxels. The Gaussian parameters may include an expectation, a variance, or their combination. In some embodiments, the Gaussian parameters of different voxels may be compared. The voxels whose Gaussian parameters are in a same range may be considered to belong to a same component.
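 One possible, non-authoritative sketch of the Gaussian-mixture comparison, using scikit-learn's GaussianMixture on the gray values of two regions; the number of components, the use of gray values as the only feature, and the comparison tolerances are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_region(gray_values, n_components=2):
    """Fit a Gaussian mixture to the gray values of a region and return the
    sorted expectations and variances of its components."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(gray_values.reshape(-1, 1))
    return np.sort(gmm.means_.ravel()), np.sort(gmm.covariances_.ravel())

def same_component(region_a, region_b, mean_tol=10.0, var_tol=50.0):
    # Regions whose Gaussian parameters fall in the same range are
    # considered to belong to the same component.
    mean_a, var_a = fit_region(region_a)
    mean_b, var_b = fit_region(region_b)
    return bool(np.all(np.abs(mean_a - mean_b) < mean_tol)
                and np.all(np.abs(var_a - var_b) < var_tol))

rng = np.random.default_rng(0)
region_a = rng.normal(100, 5, size=500)
region_b = rng.normal(102, 5, size=500)
print(same_component(region_a, region_b))
```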
 In some embodiments, the statistics may be based on a feature detection algorithm. For better understanding the present disclosure, the Scale-Invariant Feature Transform (SIFT) is described as an example. The voxels within a same region may be processed by the Gaussian mixture model under different resolutions to acquire a vector set. The voxels may then be compared to judge whether they belong to a same component. The embodiments may also be applied to some complex scenarios such as identifying tissues with similar gray values (e.g., a lung and a heart) or tissues coupled to each other while having different functions (e.g., a liver, a pancreas, and a gallbladder) . In some embodiments, the statistics may be based on speeded up robust features (SURF) , principal components analysis speeded up robust features (PCA-SURF) , gradient location and orientation histogram (GLOH) , histogram of oriented gradients (HOG) , or the like, or any combination thereof.
 FIG. 18 illustrates an image processing system according to some embodiments of the present disclosure. FIG. 18 may be seen as an embodiment of FIG. 15. The image processing system 202 may include a pre-process unit 1501 and a storage/rendering unit 1502. The input/output unit 201 may be configured to input image data, input user instructions, output image and/or data, display an image, or the like, or any combination thereof. In some embodiments, the input/output unit 201 may include an acquisition module 1801, a display module 1802, etc. The pre-process unit 1501 may be configured to pre-process the image data. In some embodiments, the pre-process unit 1501 may include a segmentation module 1501-1, a storage control module 1501-2, a rendering control module 1501-3, or the like, or any combination thereof. The segmentation module 1501-1 may be configured to perform an identification on the image data. The storage/rendering unit 1502 may be configured to render and/or store the image data. In some embodiments, the storage/rendering unit 1502 may include a rendering module, a storage module, etc. Merely by way of example, the rendering module and the storage module may be an integrated module to execute both rendering and storage. The integrated module may include a CPU, a GPU, or other image processors, or any combination thereof.
 In some embodiments, the acquisition module 1801 may be configured to acquire data. The data herein may include a data set, a user instruction, or the like, or any combination thereof. The data set of three-dimensional images may be medical scanning data of a subject as described elsewhere in the present disclosure. The data set herein may include a voxel. The voxel may be the smallest distinguishable unit in the three-dimensional segmentation. The user instruction may include an instruction of rendering, an instruction of control, an instruction of acquisition, an instruction of display, or the like, or any combination thereof. In some embodiments, the display module 1802 may be configured to display the data. The data herein may include data after rendering, data after storage, data after segmentation, data after acquisition, or the like, or any combination thereof. In some embodiments, the segmentation module 1501-1 may be configured to segment or identify different components in the data set. The different components may include a removable region, a reserved region, or other portion, or the combination thereof according to user demands. In some embodiments, the storage control module 1501-2 may be configured to control data of different components to store in different storage modules. In some embodiments, the rendering control module 1501-3 may be configured to control the rendering after receiving a rendering instruction. The rendering instruction may include rendering the data in corresponding storage modules. In some embodiments, the storage /rendering unit 1502 may be configured to store the data, or render the data, or their combination.
 It should be noted that the above description about the image processing system is merely an example according to some embodiments of the present disclosure. Obviously, to those skilled in the art, after understanding the basic principles of the image processing system, the form and details of the image processing system may be modified or varied without departing from the principles of the present disclosure. For example, the display module 1802 may not be necessary in some embodiments of the present disclosure. For another example, the control modules of storage and/or the control modules of rendering may not be necessary in some embodiments of the present disclosure. The modifications and variations are still within the scope of the present disclosure described above.
 In some embodiments, as shown in FIG. 19, the segmentation module 1501-1 may further include an identification block 1901 and a judgment block 1902. In some embodiments, the identification block 1901 may be configured to identify characteristic information of a voxel including a one-dimensional characteristic, a multi-dimensional characteristic, or their combination. In some embodiments, the judgment block 1902 may be configured to provide a threshold and judge the characteristic information of the voxels.
 For better understanding the present disclosure, the identification of one-dimensional feature information is described as an example for the segmentation module 1501-1. In some embodiments, as shown in FIG. 20, the identification block 1901 may include a detection sub-block 2001, a judgment sub-block 2002, and a labeling sub-block 2003. In some embodiments of the identification block 1901, the detection sub-block 2001 may be configured to detect the arrangement order and the color value of the voxels in the data set. In some embodiments, the judgment sub-block 2002 may be configured to judge voxels of adjacent arrangement whose color value difference is within a preset threshold as a same component. In some embodiments, the labeling sub-block 2003 may be configured to label different components respectively.
 In some embodiments, the identification block 1901 may further include an acquisition block. The acquisition block may be configured to acquire the multi-dimensional feature information. In some embodiments, the acquisition block may be configured to provide a neighbor region and acquire the multi-dimensional feature information according to statistics based on a voxel and the voxels within the neighbor region. The statistics herein may include a statistics method based on a Gaussian mixture model, a statistics method based on the information entropy of the voxel, a statistics method based on a time-invariant space-state feature model, or the like, or any combination thereof. The weight of each statistics method in a combination may be varied according to different demands in some embodiments of the present disclosure.
 In some embodiments, the storage control module 1501-2 may include a first storage control block and a second storage control block. The first storage control block may be configured to load the data set of a three-dimensional image into a CPU memory. The second storage control block may be configured to load the reserved region of the three-dimensional image into a GPU video memory.
 In some embodiments, storage/rendering unit 1502 may include a CPU memory, a GPU video memory, a RAM, a ROM, a hardware, a software, a tape, a CD, a DVD, an Application-Specific Integrated Circuit (ASIC) , an Application-Specific Instruction-Set Processor (ASIP) , a Physics Processing Unit (PPU) , a Digital Signal Processor (DSP) , a Field Programmable Gate Array (FPGA) , a Programmable Logic Device (PLD) , a Controller, a Microcontroller unit, a Processor, a Microprocessor, an ARM, or the like, or any combination thereof.
 As shown in FIG. 21, in some embodiments, the rendering control module 1501-3 may include a thread control block 2101 and a distribution block 2102. The thread control block 2101 may be configured to establish threads corresponding to the task queue of the CPU and that of the GPU, respectively. The distribution block 2102 may be configured to analyze a rendering instruction of the user and distribute the rendering task to the task queue of the CPU or the GPU for executing the image rendering.
 It should be noted that the above description about the image processing system is merely an example according to some embodiments of the present disclosure. Obviously, to those skilled in the art, after understanding the basic principles of the image processing system, the form and details of the image processing system may be modified or varied without departing from the principles of the present disclosure. For example, all or parts of the steps described above may be finished by related hardware controlled by programs. The programs may be stored in any readable storage medium, such as RAM, ROM, disk, CD, or the like, or any combination thereof. For another example, one or more sub-block/block/module/unit may be omitted. The modifications and variations are still within the scope of the present disclosure described above.
 FIG. 22 illustrates a flowchart of an image processing method according to some embodiments of the present disclosure. The image processing method may include a process of acquiring segmentation information of volume data, preprocessing the segmentation information, generating mesh data of each organ's boundary information, loading the mesh data and the volume data into a GPU, and an iterative rendering process.
 In step 2201, segmentation information of volume data may be acquired. In some embodiments, the acquisition of the segmentation information may include segmenting the volume data into several human tissues and choosing a corresponding rendering strategy based on the segmentation information of different tissues. In some embodiments, the segmentation method may include a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof. In some embodiments, the segmentation method may be performed in a manual, a semi-automatic, or an automatic manner.
 In step 2202, the segmentation information may be preprocessed and boundary data may be generated. In some embodiments, the boundary data may be mesh grid data corresponding to the boundary structures of each tissue within the volume data. The boundary structure may be a two-dimensional boundary line, a three-dimensional boundary surface, etc. The boundary structure of the mesh grid data may include a triangle, a quadrate, or an irregular shape, or any combination thereof. The triangle or quadrate may be formed by connecting three or four corresponding volume data points. The mesh grid data may include information such as the corresponding coordinate of each point therein, the normal vector of the plane where the grid is located, structure information of adjacent volume data, or the like, or any combination thereof.
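 For illustration, the per-facet mesh information described above might be held in a structure such as the one sketched below; the field names (vertices, normal, mask_before, mask_after) are assumptions and not the format used by the disclosure.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class BoundaryFacet:
    # Coordinates of the three (triangle) or four (quadrate) corner points,
    # each taken from the volume data.
    vertices: Tuple[Tuple[float, float, float], ...]
    # Normal vector of the plane on which the facet lies.
    normal: Tuple[float, float, float]
    # Mask IDs of the adjacent volume data on either side of the boundary.
    mask_before: str
    mask_after: str

facet = BoundaryFacet(
    vertices=((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)),
    normal=(0.0, 0.0, 1.0),
    mask_before="Mask_ID_Default",
    mask_after="Mask_ID_1",
)
print(facet.normal)
```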
 It should be noted that the above description about the segmentation of the volume data and the generation of the mesh data is merely an example according to some embodiments of the present disclosure. Obviously, to those skilled in the art, after understanding the basic principles of the method, the form and details of segmentation and generation may be modified or varied without departing from the principles of the present disclosure. For example, the boundary structure of the mesh grid data may be comprised of other shapes such as a pentagon, a hexagon, or the like, or any combination thereof. For another example, the tissue herein may be not limited to human tissues or organs, other components such as air, water, or the like may be tissues as well.  The modifications and variations are still within the scope of the present disclosure described above.
 The preprocessing of the segmentation information may include providing an observation point and providing observation light in different directions from the observation point. The observation light may intersect the boundary structure at several intersection points. The intersection points at a same depth may be located in a same depth surface. The depth may be the distance between an intersection point and the observation point.
 For better understanding the present disclosure, light transmitted through different tissues may be described as an example. In some embodiments, a region of interest (ROI) may be marked as Mask_ID_1, Mask_ID_2, Mask_ID_3, etc. A region of disinterest may be marked as Mask_ID_Default. As shown in FIG. 23, the tissues may include a tissue 2301, a tissue 2302, and a tissue 2303 which is located therebetween. In some embodiments, the tissue 2301 and the tissue 2302 may be regarded as regions of interest and may be marked as Mask_ID_1 and Mask_ID_2, respectively. The tissue 2303 of disinterest may be marked uniformly as Mask_ID_Default.
 As shown in FIG. 23, the light may start from a point O and pass through the tissues along the directions shown in FIG. 23. The light may intersect the boundaries of the tissue 2301 and the tissue 2302 at the points number 1 to number 12. Numbers 1-12 may represent the twelve intersections, respectively. In some embodiments, the intersections 1-12 may be acquired by several passes based on a Depth Peeling algorithm.
 A mesh structure may be generated according to the edge of the volume data in a Mask_ID. In some embodiments, the mesh structure may include the three/four point coordinates of the triangular/quadrilateral facets, the normal vector of the plane, the Mask_ID information of the adjacent volume data in the three-dimensional space, or the like, or any combination thereof. The adjacent volume data in the three-dimensional space may be the volume data on the top, bottom, left, and right of the present volume data. For better understanding, the intersection points 1 and 2 as shown in FIG. 23 may be described as an example. In some embodiments, the Mask information of a volume data point before the intersection point 1 may be Mask_ID_Default. The Mask information of a volume data point after the intersection point 1 may be Mask_ID_1. The Mask information of a volume data point before the intersection point 2 may be Mask_ID_1. The Mask information of a volume data point after the intersection point 2 may be Mask_ID_Default. The rest may be done in the same manner.
 It should be noted that the above description about FIG. 23 is merely an example according to some embodiments of the present disclosure. Obviously, to those skilled in the art, after understanding the basic principles of the method, the form and details of the segmentation may be modified or varied without departing from the principles of the present disclosure. For example, other preprocessing method may be executed to acquire mesh data of the segmentation information. The modifications and variations are still within the scope of the present disclosure described above.
 In step 2203, the volume data and the mesh data may be loaded into a GPU to execute a rendering. In some embodiments, the volume data and the mesh data may be loaded into the GPU. The mesh data may be processed by the Depth Peeling algorithm. The Depth Peeling algorithm may be a multi-slice rendering method based on a z-buffer. The rendering of each slice may be based on the depth value of a previous slice. The Depth Peeling algorithm may include multiple passes. In some embodiments, the first pass may be marked as Pass (1) and the ith pass may be marked as Pass (i) . The depth information and the position information acquired in Pass (i) may be marked as Depth_i and Position_i, respectively. The Depth_i and/or the Position_i may be stored in a Texture of a Frame Buffer Object (FBO) .
 In steps 2204-2210, the volume data may be rendered iteratively. The rendering may include acquiring a maximum depth surface and a minimum depth surface. The maximum depth surface may include the intersection points which are at the farthest boundary surface along the light and in a same curved surface. The minimum depth surface may include the intersection points which are at the nearest boundary surface along the light and in a same curved surface. The iterative rendering may be executed from the minimum depth surface to the maximum depth surface. The iterative rendering may end at the maximum depth surface.
 The iterative rendering may include judging whether a boundary structure acquired in the current iteration and the boundary structure acquired in the last iteration belong to a same boundary structure. If so, the corresponding volume data may be accessed and rendered according to the position information of the current and the last intersections. If not, the end of the current rendering may be regarded as the start of the next rendering sampling, and a boundary structure of the next iteration may be searched for. In some embodiments, the access of the position information may include consulting the boundary data, confirming the corresponding tissue type for rendering, and choosing a corresponding rendering strategy and a color table.
 In step 2204, the Depth Peeling Pass (1) may be executed to acquire a maximum depth (marked as Depth_max) and a minimum depth (marked as Depth_min) of the mesh data. The Depth_max and the Depth_min may be stored in the Texture of the FBO (Texture_Depth_Min, Texture_Depth_Max) .
 In step 2205, the Pass (i) may be executed to acquire Depth_i. The Depth_i may be compared with the Depth_max. If Depth_i>Depth_max, the iteration may be ended. If not, the step 2206 may be executed.
 In step 2206, the depth information Depth_i-1 and the position information Position_i-1 may be acquired from Pass (i-1) .
 In step 2207, the Pass (i) may be executed to acquire Depth_i and Position_i. The Depth_i and the Position_i may be stored in the Texture (Texture_Depth_i, Texture_Position_i) .
 In step 2208, whether the mesh data of Pass (i) and Pass (i-1) belong to a same tissue may be judged. If so, the step 2209 may be executed. If not, it may return to step 2205.
 In step 2209, the voxels of the volume data may be accessed from the Position_i-1 as the start to the Position_i as the end.
 In step 2210, the rendering result may be fused with that of the last rendering step of the same tissue. Then it may return to the step 2205. In some embodiments, the fusion of the rendering result with the last rendering step of the same tissue may be illustrated by combining FIG. 22 and FIG. 23. The Depth Peeling algorithm may be used in the GPU. The depth and position information Point_1 of the intersection point 1 may be acquired according to Pass (1) . The depth and position information Point_2 of the intersection point 2 may be acquired according to Pass (2) . The Point_1 and Point_2 may be compared, and the intersection points 1 and 2 may belong to a same tissue, marked as Mask_ID_1. So a corresponding rendering strategy may be used from point 1 to point 2, and the rendering result may be marked as Result_Mask_ID_1_1_2.
 The Point_5 and Point_6 may be acquired according to Pass (5) and Pass (6) . The intersection points 5 and 6 may belong to a same tissue, marked as Mask_ID_1. The rendering result may be marked as Result_Mask_ID_1_5_6. The Result_Mask_ID_1_1_2 and the Result_Mask_ID_1_5_6 may be fused to acquire a Result_Mask_ID_1_Blend.
 Pass (7) and Pass (8) may be executed in a similar way. The Result_Mask_ID_1_7_8 may be fused with the Result_Mask_ID_1_Blend to acquire a new Result_Mask_ID_1_Blend. Similarly, Pass (11) and Pass (12) may be executed to acquire the Result_Mask_ID_1_11_12. The Result_Mask_ID_1_11_12 may be fused with the Result_Mask_ID_1_Blend to acquire a new Result_Mask_ID_1_Blend. Finally, the rendering result may be acquired as Result_Mask_ID_1_Blend.
 The subsequent Pass (i) may be executed to acquire and store Depth_i and Position_i. The depth value may be compared with the Depth_max. If Depth_i<Depth_max, the Depth_i-1 and the Position_i-1 of the Pass (i-1) in the FBO may be read. The image processing method may not end until the comparison result is Depth_i>Depth_max. Then whether the mesh data of Pass (i) and Pass (i-1) belong to a same boundary of a tissue may be judged. If so, the voxels of the volume data may be accessed from the Position_i-1 as the start to the Position_i as the end. The rendering result may be acquired by different rendering strategies. If not, it may return to the step 2205. In some embodiments of the present disclosure, the rendering strategy may include a direct volume rendering, a volume intensity projection (VIP) , a maximum intensity projection (MIP) , a minimum intensity projection (Min-IP) , a hardware-accelerated volume rendering, an average intensity projection (AIP) , or the like, or any combination thereof.
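 The control flow of steps 2204-2210 may be outlined in host-side pseudocode as follows. The peel_pass, render_segment, and fuse callables are hypothetical stand-ins for the GPU-side Depth Peeling passes, per-tissue rendering strategies, and blending; this is a sketch of the iteration logic only, not a GPU implementation.

```python
def iterative_render(peel_pass, render_segment, fuse, depth_max):
    """Host-side outline of the iterative Depth Peeling rendering.

    peel_pass(i)                        -> (depth_i, position_i, mask_id_i)
    render_segment(mask_id, start, end) -> partial rendering result
    fuse(previous_or_None, new)         -> fused result (Result_..._Blend)
    """
    blended = {}                                          # one blend per tissue (Mask_ID)
    prev_depth, prev_position, prev_mask = peel_pass(1)   # Pass(1)
    i = 2
    while True:
        depth_i, position_i, mask_id = peel_pass(i)       # Pass(i)
        if depth_i > depth_max:                           # beyond the maximum depth surface
            break
        if mask_id == prev_mask:                          # same boundary structure as Pass(i-1)
            segment = render_segment(mask_id, prev_position, position_i)
            blended[mask_id] = fuse(blended.get(mask_id), segment)
        # Otherwise the end of this pass becomes the start of the next sampling.
        prev_depth, prev_position, prev_mask = depth_i, position_i, mask_id
        i += 1
    return blended

if __name__ == "__main__":
    # Toy usage with three fabricated passes on a single tissue.
    passes = {1: (0.1, 0.1, "Mask_ID_1"), 2: (0.3, 0.3, "Mask_ID_1"), 3: (9.9, 9.9, "Mask_ID_1")}
    result = iterative_render(
        peel_pass=lambda i: passes[i],
        render_segment=lambda m, a, b: b - a,
        fuse=lambda a, b: (a or 0.0) + b,
        depth_max=1.0,
    )
    print(result)
```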
 It should be noted that the above description about the iterative rendering is merely an example according to some embodiments of the present disclosure. Obviously, to those skilled in the art, after understanding the basic principles of the method, the form and details of the iterative rendering may be modified or varied without departing from the principles of the present disclosure. For example, the iterative rendering of tissues may be executed without the fusion step. In some embodiments of the present disclosure, one or more steps may be omitted. In some other embodiments, one or more steps may be executed repeatedly. The modifications and variations are still within the scope of the present disclosure described above.
 FIG. 24 is a flowchart illustrating a process of post segmentation processing according to some embodiments of the present disclosure. In step 2401, an image may be acquired. In some embodiments, the image may be a binary image (mask) after a 2D segmentation or a 3D segmentation. The image may be acquired based on techniques including DSA (digital subtraction angiography) , CT (computed tomography) , CTA (computed tomography angiography) , PET (positron emission tomography) , X-ray, MRI (magnetic resonance imaging) , MRA (magnetic resonance angiography) , SPECT (single-photon emission computerized tomography) , US (ultrasound scanning) , or the like, or a combination thereof. In some embodiments of the present disclosure, multimodal techniques may be used to acquire the volume data. The multimodal techniques may include CT-MR, CT-PET, CE-SPECT, DSA-MR, PET-MR, PET-US, SPECT-US, TMS (transcranial magnetic stimulation) -MR, US-CT, US-MR, X-ray-CT, X-ray-MR, X-ray-portal, X-ray-US, Video-CT, Video-US, or the like, or any combination thereof.
 In step 2402, a region of interest (ROI) may be extracted from the image, and the ROI may be partitioned into three regions: a first region, a second region, and a third region. In some embodiments, the first region may be determined as a target tissue (e.g., a bone tissue) ; merely by way of example, for a binary image, the first region may be constituted by all pixels whose value is 1. The characteristic value of the first region may be denoted by Thigh. The second region may be the boundary region of the target tissue. The characteristic value of the second region may be denoted as Tlow, which is relatively small compared with Thigh. The characteristic value of the third region may be denoted as Ti.
 It should be noted that the method described herein may be used to fill multiple ROIs that are spaced apart and need to be filled separately.
 In step 2403, a first method may be used to calculate the characteristic value Ti of a pixel in the third region.
 It should be noted that in medical image processing, even for the same tissue, the pixel value may fluctuate in a range. Merely by way of example, the pixel value of a medical image may correlate to the CT value of the tissue. The CT value may be used to indicate the extent of attenuation when the X-ray traverses a tissue. Different tissues may have different CT values that may fluctuate in a range. For example, the CT value of bones may be equal to or less than 1000 HU, the CT value of soft tissues may range from 20 HU to 70 HU, and the CT value of water may be 0, or range from -10 HU to 10 HU. Therefore, the CT value may be used to determine the type of tissues.
 In physics, heat flows from a high temperature field to a low temperature field so that thermal equilibrium may be reached. Temperature diffuses fast within the high temperature field, and diffuses slowly between the high temperature field and a low temperature field. Also, temperature diffuses fast within the same object, and diffuses slowly between different objects. Therefore, by determining the variation of temperature, whether a region belongs to the same object or different objects may be determined.
 Concerning the first method mentioned in step 2403, the first region may be used as an analogy to the high temperature field, and has a fixed characteristic value Thigh. The second region may be used as an analogy to the low temperature field, and has a fixed characteristic value Tlow. Tlow may be less than Thigh. In some embodiments of the present disclosure, Thigh-Tlow>100. Merely by way of example, the characteristic value may be the brightness of a pixel. The characteristic value Ti of each pixel in the third region may be calculated to determine whether the pixel belongs to the target tissue so that the target tissue may be filled.
 In step 2404, the pixels of the third region may be assessed based on a second method. Specifically, the characteristic values Ti of the pixels may be assessed. If the assessment result is positive, the pixels may be extracted for filling the target tissue. If the assessment result is negative, the pixels may be assigned the value of the background.
 It should be noted that the flowchart depicted above is provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications may not depart from the scope of the present disclosure.
 FIG. 25 is a flowchart illustrating a process of post segmentation processing according to some embodiments of the present disclosure. In step 2501, an image may be acquired. In some embodiments of the present disclosure, the image may be a binary image that is generated based on segmentation methods including a threshold segmentation, a region growing segmentation, a region split and/or merge segmentation, an edge tracing segmentation, a statistical pattern recognition, a C-means clustering segmentation, a deformable model segmentation, a graph search segmentation, a neural network segmentation, a geodesic minimal path segmentation, a target tracking segmentation, an atlas-based segmentation, a rule-based segmentation, a coupled surface segmentation, a model-based segmentation, a deformable organism segmentation, or the like, or any combination thereof. In some embodiments, the segmentation method may be performed in a manual, a semi-automatic, or an automatic manner.
 It should be noted that the accuracy requirement for the image in step 2501 may not be high. Coarsely segmented images may be filled based on the method provided herein as well.
 In step 2502, an ROI may be selected, and the ROI may be partitioned into three regions: a first region, a second region, and a third region. The ROI may be selected either manually or automatically. In some embodiments, the ROI may be a region of an image that lacks some information, for example, boundary information.
 In some embodiments of the present disclosure, the first region may be the region with high brightness, and the pixel values of the first region may be 1. The second region may be the boundary region that is acquired automatically. The rest may be the third region.
 In step 2503, the characteristic value Tn+1 (xi, t) of each pixel in the third region may be calculated by:
Tn+1 (xi, t) =Tn (xi, t) +α·∇2Tn (xi, t)  (Equation 13)
 Where the initial value of Tn (xi, t) may be Tlow. α is a constant; as diffusion effects are neglected, in some embodiments, α may be 1. Tn+1 (xi, t) is the characteristic value of pixel i in the (n+1) th iteration at moment t. For example, when n=0, the characteristic value of pixel i may be Tlow at moment t1. Ti may be the characteristic value of pixel i, and i may be a natural number. Tn (xi, t) may be the characteristic value of pixel i after the nth iteration. xi may be the space coordinate of pixel i in the third region. Preferably, Tn (xi, t) may be initialized as Tlow.
 In some embodiments of the present disclosure, Thigh of the first region may be defined as 200, and Tlow of the second region may be defined as 0. The image may be filled based on the calculation of the characteristic value of the third region. In some embodiments of the present disclosure, Thigh-Tlow>100.
 Tn (xi, t) may be iteratively calculated based on an initial value of Tlow.
 In step 2504, the difference between the characteristic value in the (n+1) th iteration and the characteristic value in the nth iteration may be assessed, i.e., whether Tn+1-Tn<A. In some embodiments, A≤10^-6. It should be noted that A may be set based on experience. If the difference is less than A, step 2507 may be performed, and the characteristic value of pixel i may be assigned as Tn+1 (xi, t) . Otherwise, step 2505 may be performed, Tn (xi, t) may be substituted by Tn+1 (xi, t) , and the iterative calculation may continue. The methods for solving Equation 13 may include the conjugate gradient method, the incomplete Cholesky conjugate gradient algorithm, the strong implicit method, the Gauss method, or the like, or any combination thereof.
 In step 2505, the number of iterations may be assessed. If the number of iterations exceeds 200, the iteration may be terminated, step 2507 may be performed, and the characteristic value Ti may be acquired. It should be noted that step 2504 and step 2505 are utilized to determine the convergence of Equation 13. In some embodiments of the present disclosure, if either the condition of step 2504 or that of step 2505 is satisfied, the iteration may be terminated. In some embodiments, the iteration may be terminated when both the conditions of step 2504 and step 2505 are satisfied.
 In step 2508, the characteristic value Ti may be assessed. If Ti meets B<Ti<Thigh, the pixel corresponding to Ti may be extracted and the region to be filled may be determined. Otherwise, the pixel corresponding to Ti may be assigned the value of the background of the image in step 2509. Taking the binary image as an example, the pixel may be assigned a value of 0. It should be noted that the threshold B is adjustable. B may be adjusted under different conditions.
 In step 2510, a filled image based on the characteristic value Ti may be generated.
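 A minimal sketch of the filling procedure of FIG. 25, assuming an explicit Laplacian diffusion update as a stand-in for the exact discretization of Equation 13; the region masks, Thigh=200, Tlow=0, the threshold B, the iteration limit of 200, and the 0.25 step factor (a stability choice of this sketch, not the α of the disclosure) are illustrative.

```python
import numpy as np

def fill_region(first_region, second_region, t_high=200.0, t_low=0.0,
                threshold_b=60.0, max_iter=200, eps=1e-6):
    """Diffuse a characteristic value from the first (high) region toward the
    second (boundary, low) region; keep third-region pixels above B."""
    third_region = ~(first_region | second_region)
    t = np.full(first_region.shape, t_low, dtype=float)
    t[first_region] = t_high                      # fixed characteristic value Thigh
    t[second_region] = t_low                      # fixed characteristic value Tlow
    for _ in range(max_iter):
        # 4-neighborhood Laplacian as a simple stand-in for the diffusion step.
        lap = (np.roll(t, 1, 0) + np.roll(t, -1, 0) +
               np.roll(t, 1, 1) + np.roll(t, -1, 1) - 4.0 * t)
        t_next = t + 0.25 * lap
        t_next[first_region] = t_high             # boundary values stay fixed
        t_next[second_region] = t_low
        if np.max(np.abs(t_next[third_region] - t[third_region])) < eps:
            t = t_next
            break
        t = t_next
    # Pixels whose characteristic value lies in (B, Thigh) belong to the target;
    # the rest are assigned to the background.
    filled = first_region | (third_region & (t > threshold_b) & (t < t_high))
    return filled, t

mask = np.zeros((64, 64), dtype=bool)
mask[20:44, 20:30] = True                               # first region (target tissue)
boundary = np.zeros_like(mask); boundary[:, 45:] = True  # second region (boundary)
filled, t = fill_region(mask, boundary)
print(filled.sum(), "pixels kept")
```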
 It should be noted that the flowchart depicted above is provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications may not depart from the scope of the present disclosure.
 FIG. 26 is a block diagram of an image processing system according to some embodiments of the present disclosure. As shown in the figure, the image processing system 202 may include an image acquisition module 2601, a distribution module 2602, a first determination module 2603, and a second determination module 2604. The input/output unit 201 may communicate bidirectionally with the image processing system. The communication is described elsewhere in the present disclosure. The image acquisition module 2601 may be configured to acquire images including 2D images, 3D images, and binary images generated based on the 2D images or 3D images. The distribution module 2602 may be configured to partition an ROI into three regions: a first region, a second region, and a third region. The first determination module 2603 may be configured to calculate the characteristic value Ti of a pixel i in the third region.
 In some embodiments of the present disclosure, the first determination module may further include a convergence block and an iteration block (not shown in the figure) . The convergence block may be configured to determine the convergence of the characteristic value Ti of the third region. The determination may be based on an equation:
Tn+1 (xi, t) =Tn (xi, t) +α·∇2Tn (xi, t)
 Where the initial value of Tn (xi, t) may be Tlow. α is a constant; as diffusion effects are neglected, in some embodiments, α may be 1. Tn+1 (xi, t) is the characteristic value of pixel i in the (n+1) th iteration at moment t. For example, when n=0, the characteristic value of pixel i may be Tlow at moment t1. Ti may be the characteristic value of pixel i, and i may be a natural number. Tn (xi, t) may be the characteristic value of pixel i after the nth iteration. xi may be the space coordinate of pixel i in the third region. Preferably, Tn (xi, t) may be initialized as Tlow. If Tn+1-Tn<A, the iteration may be terminated, and the characteristic value Ti of pixel i may be determined. Otherwise, the iteration may continue to proceed. In some embodiments, A≤10^-6. It should be noted that A may be set based on experience.
 The iteration block may be configured to perform an iteration. In some embodiments, Tn (xi, t) may be iteratively calculated based on an initial value of Tlow. If the number of iterations exceeds 200, the iteration may be terminated and the characteristic value Ti may be acquired. The characteristic values Ti of all the pixels in the third region may be acquired based on the method provided herein.
 It should be noted that if either the condition on A of the convergence block or the condition on the number of iterations of the iteration block is met, the first determination module 2603 may terminate the iteration. In some embodiments, the iteration may be terminated when both the condition on A of the convergence block and the condition on the number of iterations of the iteration block are met.
 A threshold B may be set by the second determination module 2604. It should be noted that B may be either fixed or variable. B may be adjusted to adjust the boundary of a target tissue. In some embodiments, Tlow<B<Thigh.
 It should be noted that the blocks described above are provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications may not depart from the scope of the present disclosure. For example, the distribution module 2602 may partition an ROI into more than three regions.
 FIG. 27 is a flowchart illustrating a process of spine location according to some embodiments of the present disclosure. In step 2701, a CT image sequence may be loaded. The CT image sequence may comprise a plurality of 2D slices. In some embodiments, the CT image sequence may be generated based on multi-slice spiral CT (MSCT) . Merely by way of example, the CT image sequence may be a portal venous phase image sequence, an arterial phase image sequence, etc.
 Optionally, the CT image sequence may be acquired by techniques including DSA (digital subtraction angiography) , CT (computed tomography) , CTA (computed tomography angiography) , PET (positron emission tomography) , X-ray, MRI (magnetic resonance imaging) , MRA (magnetic resonance angiography) , SPECT (single-photon emission computerized tomography) , US (ultrasound scanning) , or the like, or a combination thereof. In some embodiments of the present disclosure, multimodal techniques may be used to acquire the volume data. The multimodal techniques may include CT-MR, CT-PET, CE-SPECT, DSA-MR, PET-MR, PET-US, SPECT-US, TMS (transcranial magnetic stimulation) -MR, US-CT, US-MR, X-ray-CT, X-ray-MR, X-ray-portal, X-ray-US, Video-CT, Video-US, or the like, or any combination thereof.
 In step 2702, a preliminary spine region may be segmented based on the CT image sequence. Specifically, a preliminary spine region may be segmented in each 2D slice of the CT image sequence. The preliminary spine region may be segmented based on some known information, for example, the direction of a CT scan, the anatomical position of the spine in a human body, etc. In some embodiments of the present disclosure, the preliminary spine region may be segmented based on an x coordinate and a y coordinate. The x coordinate of the preliminary spine region may be [H/4, 7H/8] , and the y coordinate of the preliminary spine region may be [W/3, 3W/4] . W may denote the width of the CT image, and H may denote the height of the CT image.
 A preprocessing may be performed on the preliminary spine region to segment bone regions in step 2703. In each 2D slice of the CT image sequence, one or more bone regions may be segmented. In some embodiments of the present disclosure, the preprocessing may include a threshold segmentation, a region connection, or the like, or any combination thereof. The threshold segmentation may be performed to segment the bone regions, as in a CT image the grey value of a bone region may be greater than that of the surrounding regions. By setting a threshold of the grey value, the bone regions may be segmented. The region connection may be performed to connect the bone regions that are not continuous. In some embodiments, a dilation operation may be performed on small regions to connect the small regions that may constitute a big region. Merely by way of example, a morphology algorithm may be used to eliminate small holes and disconnections.
 In step 2704, a spine region may be determined. Specifically, a spine region may be determined in each 2D slice of the CT image sequence. In some embodiments of the present  disclosure, among the bone regions segmented in step 2703, the bone region having the maximum area may be determined as the spine region.
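 A hedged sketch of steps 2702-2704 is given below, assuming a grey-value threshold of 200, a preliminary region derived from the stated coordinate ranges, and SciPy morphology for connecting nearby bone fragments; the actual threshold, structuring element, and array layout of the disclosure are not specified here.

```python
import numpy as np
from scipy import ndimage

def segment_spine(slice_2d, threshold=200):
    """Threshold a 2D CT slice, connect nearby bone fragments, and return the
    bone region with the maximum area as the spine region."""
    h, w = slice_2d.shape
    # Preliminary spine region based on the anatomical position of the spine.
    preliminary = np.zeros_like(slice_2d, dtype=bool)
    preliminary[h // 4 : 7 * h // 8, w // 3 : 3 * w // 4] = True
    bone = (slice_2d > threshold) & preliminary          # threshold segmentation
    bone = ndimage.binary_dilation(bone, iterations=2)   # connect small regions
    bone = ndimage.binary_fill_holes(bone)               # eliminate small holes
    labels, n = ndimage.label(bone)
    if n == 0:
        return np.zeros_like(bone)
    areas = ndimage.sum(bone, labels, index=range(1, n + 1))
    spine_label = int(np.argmax(areas)) + 1
    return labels == spine_label                          # largest bone region

slice_2d = np.random.randint(0, 300, size=(512, 512))
spine = segment_spine(slice_2d)
print("spine pixels:", int(spine.sum()))
```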
 In step 2705, an anchor point may be determined and validated in every 2D slice of the CT image sequence. In some embodiments of the present disclosure, the midpoint of the topmost row of the spine region determined in step 2704 may be determined as the anchor point. In some embodiments, the coordinate of the anchor point may be denoted as (xmed, ytop) , where ytop may denote the y coordinate of the topmost row, and xmed may denote the x coordinate of the midpoint. As the spine is located in the central position of a human body, the anchor point may be validated by:
ytop>H/4 and abs (xmed-W/2) <H/4  (Equation 15)
 Where ytop>H/4 may ensure that the y coordinate of the anchor point is located in the middle or the lower position, and abs (xmed-W/2) may be used to control the shift extent of the x coordinate from the central position. The width of the CT image may be denoted by W, while the height of the CT image may be denoted by H. Therefore, if the y coordinate of the anchor point is greater than H/4, and the shift of the x coordinate from the central position is less than H/4, the anchor point may be valid. Otherwise, the anchor point may be determined invalid.
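 The determination and validation of step 2705 may be sketched as follows; the function names and the toy mask are illustrative.

```python
import numpy as np

def find_anchor(spine_mask):
    """Anchor point: midpoint of the topmost row of the spine region."""
    rows, cols = np.nonzero(spine_mask)
    if rows.size == 0:
        return None
    y_top = int(rows.min())                 # topmost row of the spine region
    x_med = int(np.median(cols[rows == y_top]))
    return x_med, y_top

def anchor_is_valid(x_med, y_top, width, height):
    # Valid when the anchor lies in the middle/lower part of the image and its
    # horizontal shift from the central position is limited (cf. Equation 15).
    return y_top > height / 4 and abs(x_med - width / 2) < height / 4

mask = np.zeros((512, 512), dtype=bool)
mask[300:360, 240:280] = True
anchor = find_anchor(mask)
print(anchor, anchor_is_valid(*anchor, width=512, height=512))
```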
 In step 2706, the anchor point of every 2D slice determined in step 2705 may be calibrated. The calibration may include calibrating the anchor points whose coordinates are abnormal, calibrating the anchor points that lack a coordinate, etc.
 As for calibrating the anchor points whose coordinates are abnormal, two measurements may be defined, DM and DP. Firstly, the anchor points may be arranged based on the slice numbers of the 2D slices. In some embodiments, the slice numbers may be denoted as [0, 1…, N] , the number of anchor points may be denoted as M, the coordinate sequence of the anchor points may be denoted as sp= { (x1, y1, t1) , (x2, y2, t2) , …, (xm, ym, tm) } , the z coordinate tk ∈ [0, N] , k=1, 2, …M, and when i<j, ti<tj. Secondly, a sliding window may be defined, and the anchor points to be calibrated may be positioned in the sliding window sequentially. The position mean value Pmedium of the anchor points may be calculated.
 Merely by way of example, the current coordinate may be denoted as spcurrent= (xk, yk, tk) , the size of the sliding window may be denoted as Swindow. The subscript of the starting point in the sliding window may be calculated by:
N1=max (k-Swindow/2, 1)  (Equation 16)
 The subscript of the ending point in the sliding window may be calculated by:
N2=min (N1+Swindow-1, M)  (Equation 17)
The coordinate sequence in the sliding window may be denoted as:
P= { (xN1, yN1, tN1) , (xN1+1, yN1+1, tN1+1) , … (xN2, yN2, tN2) }  (Equation 18)
The mean value Pmedium may be calculated by:
di=Σj sqrt ( (xi-xj) ^2+ (yi-yj) ^2) , j=N1, …, N2, j≠i  (Equation 19)
dmin=min {di} , i=N1, …, N2  (Equation 20)
 Where di may denote the Euclidean distance sum of the ith anchor point in P and other anchor points, dmin may denote the minimum Euclidean distance sum among all the Euclidean distance sums. The coordinate corresponding to dmin may be assigned to Pmedium,
Pmedium= (x (min) , y (min) )  (Equation 21)
 (x (min) , y (min) ) may denote the coordinate corresponding to dmin.
 Thirdly, DM and DP may be calculated to calibrate the anchor points. DM and DP may be calculated by:
DM=sqrt ( (xk-x (min) ) ^2+ (yk-y (min) ) ^2)  (Equation 22)
DP=sqrt ( (xk-xk-1) ^2+ (yk-yk-1) ^2)  (Equation 23)
 Where DM may denote the Euclidean distance between the current coordinate and the position mean value, and DP may denote the Euclidean distance between the current coordinate and the previous coordinate. A threshold TH may be defined to assess DM and DP. If DM<TH and DP<TH, the current coordinate may be valid. Otherwise, the current coordinate may be substituted by the coordinate corresponding to DM if DM is less than DP, and vice versa.
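 The abnormal-coordinate calibration of step 2706 may be sketched as below, following Equations 16-23 with 0-based indexing; the window size, the threshold TH, and the toy coordinates are illustrative values.

```python
import numpy as np

def calibrate(points, window=5, th=20.0):
    """points: list of (x, y, t) anchor coordinates ordered by slice number."""
    fixed = np.asarray(points, dtype=float)
    m = len(fixed)
    for k in range(1, m):
        n1 = max(k - window // 2, 0)                       # window start (cf. Equation 16)
        n2 = min(n1 + window, m)                           # window end (cf. Equation 17)
        p = fixed[n1:n2, :2]                               # coordinates in the window (Equation 18)
        # P_medium: the window point whose sum of Euclidean distances to the
        # other window points is minimal (cf. Equations 19-21).
        diff = p[:, None, :] - p[None, :, :]
        dist_sums = np.sqrt((diff ** 2).sum(-1)).sum(axis=1)
        p_medium = p[int(np.argmin(dist_sums))]
        d_m = np.linalg.norm(fixed[k, :2] - p_medium)          # cf. Equation 22
        d_p = np.linalg.norm(fixed[k, :2] - fixed[k - 1, :2])  # cf. Equation 23
        if d_m < th and d_p < th:
            continue                                        # current coordinate is valid
        # Otherwise substitute by the closer of P_medium and the previous point.
        fixed[k, :2] = p_medium if d_m < d_p else fixed[k - 1, :2]
    return fixed

points = [(100, 200, 0), (101, 201, 1), (300, 400, 2), (103, 203, 3)]
print(calibrate(points))    # the outlier at slice 2 is pulled back toward its neighbors
```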
 As for calibrating the anchor points that lack a coordinate, the coordinate may be calculated based on adjacent coordinates. Merely by way of example, the fixed anchor point sequence may be denoted as:
sp_fixed= { (x (1) , y (1) , t1) , (x (2) , y (2) , t2) , …, (x (M) , y (M) , tM) }  (Equation 24)
 The anchor points that lack a coordinate may be denoted as t∈ [0, N] , and t≠t1, t2, …tM. If the anchor point that lacks a coordinate is located at the beginning, i.e., t<t1, the coordinate may be fixed as (x (1) , y (1) , t) ; if t>tM, the coordinate may be fixed as (x (M) , y (M) , t) .
 In some embodiments, if an anchor point that lacks a coordinate is neither at the beginning nor at the end, the coordinate may be fixed based on adjacent coordinates. The adjacent anchor points of t are tk and tk+1, and the coordinates corresponding to tk and tk+1 are (x (tk) , y (tk) , z (tk) ) and (x (tk+1) , y (tk+1) , z (tk+1) ) . The coordinate may be fixed by:
xt=x (tk) + (x (tk+1) -x (tk) ) · (t-tk) / (tk+1-tk)  (Equation 25)
yt=y (tk) + (y (tk+1) -y (tk) ) · (t-tk) / (tk+1-tk)  (Equation 26)
 The fixed coordinate may be denoted as (xt, yt, t) .
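 Filling in anchor points for slices that lack one may be sketched as a linear interpolation between the adjacent calibrated anchors, which is the form assumed for Equations 25-26 above; the dictionary layout and values are illustrative.

```python
def fill_missing_anchors(anchors, num_slices):
    """anchors: dict mapping slice index t -> (x, y); returns one (x, y) per slice."""
    known = sorted(anchors)
    filled = {}
    for t in range(num_slices):
        if t in anchors:
            filled[t] = anchors[t]
        elif t < known[0]:                    # before the first known anchor
            filled[t] = anchors[known[0]]
        elif t > known[-1]:                   # after the last known anchor
            filled[t] = anchors[known[-1]]
        else:
            # Linear interpolation between the adjacent known anchors tk and tk+1.
            tk = max(k for k in known if k < t)
            tk1 = min(k for k in known if k > t)
            w = (t - tk) / (tk1 - tk)
            x = anchors[tk][0] + (anchors[tk1][0] - anchors[tk][0]) * w
            y = anchors[tk][1] + (anchors[tk1][1] - anchors[tk][1]) * w
            filled[t] = (x, y)
    return filled

print(fill_missing_anchors({0: (100, 200), 3: (106, 206)}, num_slices=5))
```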
 It should be noted that the flowchart depicted above is provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications may not depart from the scope of the present disclosure.
 FIG. 28 is a flowchart illustrating a process of 3D image segmentation according to some embodiments of the present disclosure. In step 2801, a first image may be transformed into a first 2D image. The pixel values of the 2D image may correspond to those of the first image, for example, correspond to the pixels that have the same spatial position in every 2D slice of the first image. In some embodiments of the present disclosure, the first image may be a binary image that is generated based on segmenting an ROI out of a first CT image. The ROI may be segmented from the first CT image by performing a threshold segmentation. The first CT image may be the original CT image that is generated for diagnosis. Merely by way of example, the first image may comprise a group of 2D slices. Therefore, the pixel values of the 2D image may correspond to those of each 2D slice of the first image spatially.
 It should be noted that the first CT image is provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. Optionally, the first CT image may be acquired based on techniques including digital subtraction angiography (DSA) , computed tomography (CT) , computed tomography angiography (CTA) , positron emission tomography (PET) , X-ray, magnetic resonance imaging (MRI) , magnetic resonance angiography (MRA) , single-photon emission computerized tomography (SPECT) , ultrasound scanning (US) , or the like, or any combination thereof. In some embodiments of the present disclosure, the first CT image may be generated based on multimodal techniques. The multimodal techniques may include CT-MR, CT-PET, CE-SPECT, DSA-MR, PET-MR, PET-US, SPECT-US, TMS (transcranial magnetic stimulation) -MR, US-CT, US-MR, X-ray-CT, X-ray-MR, X-ray-portal, X-ray-US, Video-CT, Video-US, or the like, or any combination thereof.
 The first 2D image may be generated by projecting the first image onto an anatomical plane. The anatomical plane may include a coronal plane, a sagittal plane, a transverse plane, etc. The pixel values of the 2D image may correspond to those of each 2D slice of the first image. Merely by way of example, a pixel value of the 2D image may be a sum of spatially corresponded pixels (e.g., the pixel values that are not 0) of all the 2D slices of the first image. Optionally, the projection of the first image may be based on maximum intensity projection (MIP) , temporal maximum intensity projection (tMIP) , minimum intensity projection (MiniP) , virtual endoscopic display (VED) , or the like, or any combination thereof.
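 One way to realize the projection of step 2801 is sketched below: the non-zero pixels of the binary slices are summed along the slice axis to form a single 2D image. The axis choice, array layout, and toy data are assumptions of the sketch.

```python
import numpy as np

def project_binary_volume(first_image, axis=0):
    """Sum the spatially corresponding pixels of all 2D slices of a binary
    volume to obtain a single 2D projection image."""
    return (first_image > 0).sum(axis=axis)

# Toy binary volume laid out as slices (z) x rows (y) x columns (x).
first_image = np.zeros((10, 8, 8), dtype=np.uint8)
first_image[:, 3:5, 2:6] = 1           # a bright structure present in every slice
projection = project_binary_volume(first_image)
print(projection)                       # pixels present in all slices accumulate a value of 10
```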
 In step 2802, a first region may be acquired based on the 2D image. In some embodiments, the first region may be a high brightness region of the 2D image. The first region may be acquired based on techniques including clustering, variable thresholding, etc.
 In step 2803, a second region may be acquired. In some embodiments of the present disclosure, the second region may be acquired based on the region of each 2D slice of the first CT image corresponding to the first region spatially; the high brightness region of that region may be determined as the second region. Merely by way of example, each 2D slice of the first CT image may have a high brightness region that may be determined as a second region, such that each 2D slice may have a second region.
 In step 2804, a second CT image may be calibrated based on the second region. The regions of the second CT image corresponding to the second region may be removed. The second CT image may be acquired after a preliminary segmentation, for example, a lung image after preliminary segmentation. Merely by way of example, the pixel values of the second CT image corresponding to the second region may be set as the pixel values of the background of the second CT image.
 It should be noted that in some embodiments of the present disclosure, the second region of the first CT image may be determined as at least part of a skeleton that may lead to a false positive of a diagnosis. By removing the regions of the second CT image corresponding to the second region of the first CT image, the lesion areas of false positive may be reduced effectively.
 In some embodiments of the present disclosure, the first CT image may be the original lung scan CT image, and the second CT image may be acquired by performing segmentation on the first CT image. As the chest comprises bone tissues such as spine and rib, the second CT image may comprise skeleton area such as spine area. The skeleton area may cause false positives in diagnosis. By using the method provided herein, the skeleton area of the second CT image may be removed so that diagnosis accuracy may be improved effectively.
 It should be noted that the flowchart depicted above is provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications may not depart from the scope of the present disclosure.
 FIG. 29 is a flowchart illustrating a process of 3D image segmentation according to some embodiments of the present disclosure. In step 2901, a first CT image may be acquired. In some embodiments, in order to diagnose pulmonary nodules, the first CT image may be acquired by scanning a lung. The scan may be based on a regular CT scan, an enhanced CT scan, etc.
 It should be noted that the first CT image is provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. Optionally, the first CT image may be acquired based on techniques including digital subtraction angiography (DSA) , computed tomography (CT) , computed tomography angiography (CTA) , positron emission tomography (PET) , X-ray, magnetic resonance imaging (MRI) , magnetic resonance angiography (MRA) , single-photon emission computerized tomography (SPECT) , ultrasound scanning (US) , or the like, or any combination thereof. In some embodiments of the present disclosure, the first CT image may be generated based on multimodal techniques. The multimodal techniques may include CT-MR, CT-PET, CE-SPECT, DSA-MR, PET-MR, PET-US, SPECT-US, TMS (transcranial magnetic stimulation) -MR, US-CT, US-MR, X-ray-CT, X-ray-MR, X-ray-portal, X-ray-US, Video-CT, Video-US, or the like, or any combination thereof.
 In step 2902, the first CT image may be segmented to generate a second CT image. Merely by way of example, a preliminary segmentation may be performed on the first CT image to generate the second CT image.
In step 2903, a threshold segmentation may be performed on the first CT image to generate a first image. The first image may be a binary image. After the threshold segmentation, a high brightness region of the CT image may be extracted, and the high brightness region may be determined as a skeleton area. In some embodiments of the present disclosure, the first image may be a binary image generated after the threshold segmentation. The threshold segmentation may be based on the grey value difference between the ROI (e.g., a lung) and the background. When the threshold is determined, the pixels having values greater than the threshold may be determined as the ROI, while the pixels having values less than the threshold may be determined as the background. It should be noted that the threshold may be selected based on experimental data.
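Merely for illustration, a minimal sketch of such a threshold segmentation in Python (using NumPy) is given below; the threshold value of 200 is only an assumed example, as the disclosure selects the threshold based on experimental data.

import numpy as np

def threshold_segment(ct_volume, threshold=200):
    # Voxels brighter than the threshold are treated as the high brightness
    # region (e.g., a skeleton area); the rest are treated as background.
    return (ct_volume > threshold).astype(np.uint8)

# first_image = threshold_segment(first_ct_image)  # binary volume of 0s and 1s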
In step 2904, a projection may be performed on the first image to generate a 2D image. The 2D image may be generated by projecting the first image onto an anatomical plane. The anatomical plane may include a coronal plane, a sagittal plane, a transverse plane, etc. The pixel values of the 2D image may correspond to those of each 2D slice of the first image. In some embodiments, a pixel value of the 2D image may be a sum of the spatially corresponding pixels of all the 2D slices of the first image. Optionally, the projection of the first image may be based on maximum intensity projection (MIP) , temporal maximum intensity projection (tMIP) , minimum intensity projection (MinIP) , virtual endoscopic display (VED) , or the like, or any combination thereof.
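Merely for illustration, a sketch of such a projection by summation is given below; the assumption that axis 0 of the volume is the slice direction (i.e., a transverse-plane stack) is made for the example only.

import numpy as np

def project_sum(binary_volume, axis=0):
    # Each pixel of the 2D projection is the sum of the spatially
    # corresponding pixels over all 2D slices of the binary volume.
    return binary_volume.sum(axis=axis)

# projection = project_sum(first_image)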
Merely by way of example, if the pixel values of the binary image (the first image) are 0 or 1, a pixel value of the 2D image may be a sum of the spatially corresponding pixel values of the binary image, each of which is restricted to 0 or 1. As another example, if the pixel values of the binary image (the first image) range from 0 to a value greater than the threshold (e.g., 255) , a pixel value of the 2D image may be a sum of the spatially corresponding pixel values of the binary image (e.g., CT values) .
In step 2905, a first region may be acquired based on the 2D image. The first region may be the region of the 2D image that has the maximum brightness. The first region may be acquired based on techniques including clustering (e.g., a Gaussian mixture model) , a variable threshold, etc. In some embodiments of the present disclosure, a number of regions with high brightness (e.g., a spine area, a pelvic area) may be acquired by using the techniques mentioned above. Under this circumstance, the region among the multiple regions that has the most pixels may be determined as the first region.
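Merely for illustration, a sketch of selecting, among the high brightness candidates of the 2D projection, the connected region that has the most pixels is given below; the brightness threshold and the use of scikit-image connected component labeling are assumptions of the example.

import numpy as np
from skimage import measure

def largest_bright_region(projection, brightness_threshold):
    # Keep the bright pixels, label their connected components, and return
    # the component containing the most pixels as the first region.
    candidates = projection >= brightness_threshold
    labels = measure.label(candidates, connectivity=2)
    if labels.max() == 0:
        return np.zeros_like(candidates, dtype=bool)
    counts = np.bincount(labels.ravel())
    counts[0] = 0  # ignore the background label
    return labels == counts.argmax()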
In step 2906, a threshold segmentation may be performed on the first CT image to generate a second region. The threshold segmentation may be performed on each 2D slice of the first CT image, specifically, on the region corresponding to the first region. After the threshold segmentation, a second region may be acquired in each 2D slice of the first CT image. The second region in each 2D slice may be a high brightness region.
It should be noted that in some embodiments of the present disclosure, the first region may be determined as a rough spine area. The region corresponding to the first region of every 2D slice of the first CT image may be determined as the rough spine area. The threshold segmentation may be performed on the region to determine the real spine area, namely, the second region. It should be noted that the correspondence between the region of every 2D slice of the first CT image and the first region may be a spatial correspondence; merely by way of example, they correspond in spatial position. The threshold segmentation is described elsewhere in the present disclosure.
In step 2907, the second CT image may be calibrated based on the second region to acquire a calibrated second CT image. In some embodiments of the present disclosure, the second CT image may have the same size and the same number of 2D slices as the first CT image. As every 2D slice of the first CT image may have a second region, the region of a 2D slice of the second CT image corresponding to the second region may be removed. Merely by way of example, to remove the region of the second CT image corresponding to the second region, the pixel values of the region may be set to those of the background of the second CT image.
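Merely for illustration, a sketch of the calibration is given below; the background value of -1024 (the approximate CT value of air) is an assumption of the example.

import numpy as np

def calibrate(second_ct_image, second_region_mask, background_value=-1024):
    # Reset the voxels of the second CT image that fall inside the second
    # region (e.g., the refined spine area) to the background value.
    calibrated = second_ct_image.copy()
    calibrated[second_region_mask] = background_value
    return calibrated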
It should be noted that the method described in the present disclosure is compatible with methods known in the art. In some embodiments, a method of the art may be used to perform a preliminary determination of a bone tissue, and the method described in the present disclosure may be used to perform a fine determination of the bone tissue to improve the accuracy of lung segmentation and the diagnosis of pulmonary nodules.
It should also be noted that, apart from lung segmentation, the method described herein may be used in the diagnosis of kidney stones so that false positives may be reduced during the diagnosis.
It should be noted that the flowchart depicted above is provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications may not depart from the scope of the present disclosure.
FIG. 30 illustrates a block diagram of an image processing system according to some embodiments of the present disclosure. As shown in the figure, the image processing system 202 may include a transformation module 3001, a first acquisition module 3002, a second acquisition module 3003, and a calibration module 3004. The input/output unit 201 may communicate bidirectionally with the image processing system. The communication is described elsewhere in the present disclosure. The transformation module 3001 may be configured to transform a first image into a 2D image. The pixel values of the 2D image may correspond to those of the first image, for example, correspond to the pixels that have the same spatial position in every 2D slice of the first image. In some embodiments of the present disclosure, the first image may be a binary image that is generated based on segmenting an ROI out of a first CT image. The ROI may be segmented from the first CT image by performing a threshold segmentation. The first CT image may be the original CT image that is generated for diagnosis. Merely by way of example, the first image may comprise a group of 2D slices. Therefore, the pixel values of the 2D image may correspond to those of each 2D slice of the first image spatially.
The first acquisition module 3002 may be configured to acquire a first region. The first region may be acquired based on the 2D image. In some embodiments, the first region may be a high brightness region of the 2D image. The first region may be acquired based on techniques including clustering, a variable threshold, etc.
The second acquisition module 3003 may be configured to acquire a second region. The second region may be acquired based on the first CT image. In some embodiments of the present disclosure, the second region may be acquired based on the region of each 2D slice of the first CT image that spatially corresponds to the first region; the high brightness region within that region may be determined as the second region. Merely by way of example, each 2D slice of the first CT image may have a high brightness region that may be determined as a second region, so that each 2D slice may have a second region.
The calibration module 3004 may be configured to calibrate a second CT image based on the second region. The regions of the second CT image that correspond to the second region may be removed. The second CT image may be acquired after a preliminary segmentation, for example, a lung image after preliminary segmentation. Merely by way of example, the pixel values of the second CT image corresponding to the second region may be set as the pixel values of the background of the second CT image. After the calibration, a calibrated second CT image may be acquired. The calibration module 3004 may further include a pixel reset block (not shown in the figure) that may be configured to set the pixel values of the second CT image corresponding to the second region as the pixel values of the background of the second CT image.
It should be noted that the blocks described above are provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, various variations and modifications may be conducted under the teaching of the present disclosure. However, those variations and modifications may not depart from the scope of the present disclosure. For example, the image processing system 202 may further include a determination module that may be configured to determine a high brightness region that has the most pixels as the first region.
For illustration purposes, some image processing embodiments relating to hepatic artery segmentation will be described below. It should be noted that the hepatic artery segmentation embodiments are merely for better understanding and not intended to limit the scope of the present disclosure.
FIG. 31 is a flowchart illustrating a hepatic artery segmentation method according to some embodiments of the present disclosure. In step 3101, CT data may be entered. The CT data may be information including data, a number, a text, an image, a voice, a force, a model, an algorithm, a software, a program, or the like, or any combination thereof. In an exemplary embodiment, the CT data may be a CT image and a corresponding liver mask. In other embodiments, the CT data may be other information. It should be noted that the CT data is merely for illustration purposes; data from other medical imaging systems as described elsewhere in the present disclosure may also be applicable to the hepatic artery segmentation method.
In step 3102, a seed point of the hepatic artery may be selected, searched, or positioned. The seed point selecting method may be manual or automatic. The seed point may be selected according to a criterion. The selecting criterion may include judging whether a point (a pixel or a voxel) is in a certain grayscale range or evenly spaced on a grid or a square.
 In step 3103, a first and second binary volume data may be generated based on the seed point selected in step 3102. In some embodiments, the first binary volume data may include a backbone and a rib connected therewith. The second binary volume data may include a backbone, a rib connected therewith and a hepatic artery. The first and second binary volume data may be  generated by an image segmentation method. The segmentation method may be any one as described elsewhere in the present disclosure.
In step 3104, a third binary volume data may be generated. In some embodiments, the third binary volume data may be acquired by removing, from the second binary volume data, the part that is the same as the first binary volume data.
 In step 3105, the hepatic artery segmentation result may be generated through a 3D region growing in the third binary volume data based on the seed point selected in step 3102.
It should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. For persons having ordinary skill in the art, the image processing system may be variable, changeable, or adjustable based on the spirit of the present disclosure. In some embodiments, the hepatic artery may also be post-processed after the 3D region growing. The post-processing method may be any one as described elsewhere in the present disclosure. However, those variations, changes and modifications do not depart from the scope of the present disclosure.
For better illustration, FIG. 32 will be described in detail as an exemplary embodiment of a hepatic artery segmentation based on a CT image. It should be noted that the image is not limited to a CT image but may also be applicable to other medical systems. The other medical systems may include Digital Subtraction Angiography (DSA) , Magnetic Resonance Imaging (MRI) , Magnetic Resonance Angiography (MRA) , Computed Tomography Angiography (CTA) , Ultrasound Scanning (US) , Positron Emission Tomography (PET) , Single-Photon Emission Computerized Tomography (SPECT) , CT-MR, CT-PET, CE-SPECT, DSA-MR, PET-MR, PET-US, SPECT-US, TMS (transcranial magnetic stimulation) -MR, US-CT, US-MR, X-ray-CT, X-ray-MR, X-ray-portal, X-ray-US, Video-CT, Video-US, or the like, or any combination thereof. It also should be noted that the image processing method is not limited to a hepatic artery and may also be applicable to other subjects including an organ, a tissue, a region, an object, a lesion, a tumor, etc. Merely by way of example, the subject may include a head, a breast, a lung, a trachea, a pleura, a mediastinum, an abdomen, a large intestine, a small intestine, a bladder, a gallbladder, a triple warmer, a pelvic cavity, a backbone, extremities, a skeleton, a blood vessel, or the like, or any combination thereof.
As described in FIG. 32, a hepatic artery CT image data and its corresponding liver mask may be acquired in step 3201. Other information may also be input in this step as described elsewhere in the present disclosure. Then a seed point of the hepatic artery may be selected. The seed point selecting process may be divided into step 3202 and step 3203. For illustration purposes, the seed point selecting process will be described in detail using an example of processing a hepatic artery CT image of N slices and its corresponding liver mask.
In step 3202, a slice for seed point selection may be determined based on the hepatic sequence data of the image. The number of slices may be one or more. For illustration purposes, an example of determining a slice for selecting the seed point is described in detail below:
 a) A starting slice and an ending slice of a liver may be searched from the liver mask and written as Ls and Le, respectively. In this step, the maximum area MaxArea of a single liver slice may also be searched.
 b) A starting slice and an ending slice of a seed point of the hepatic artery may be searched and written as Vs and Ve, respectively. Vs is the first slice whose liver area is no less than MaxArea/2 when comparing a liver area from Ls to Le in the liver mask slice by slice. Ve is the last slice whose liver area is no less than MaxArea/2 when comparing a liver area from Ls to Le in the liver mask slice by slice.
 When Area (i) ≥MaxArea/2 (i=Ls, Ls+1, …, Le) , Vs=i;  (Equation 27)
When Area (i) ≥MaxArea/2 (i=Le, Le-1, …, Ls) , Ve=i.  (Equation 28)
 Wherein Area (i) may be the liver area of slice i in the liver mask.
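Merely for illustration, a sketch of steps a) and b) is given below; it assumes the liver mask is stored as a binary volume ordered as (slice, row, column) , so that slices outside [Ls, Le] have zero liver area and searching over all slices is equivalent to searching from Ls to Le.

import numpy as np

def seed_slice_range(liver_mask):
    # Area (i) is the number of liver pixels of slice i in the liver mask.
    areas = liver_mask.reshape(liver_mask.shape[0], -1).sum(axis=1)
    max_area = areas.max()  # MaxArea, the maximum single-slice liver area
    valid = np.nonzero(areas >= max_area / 2)[0]
    return int(valid[0]), int(valid[-1])  # (Vs, Ve)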
 In step 3203, the hepatic artery CT data may be selected and segmented slice by slice from Vs to Ve. A connected region whose circular degree is no less than 0.7 may be searched. A connected region with the minimax area among the connected regions above may be determined as an artery, and the centroid thereof may be determined as a seed point. For illustration purposes, an example of determining a seed point may be described in detail below:
a) The CT image data of slice i (i∈ [Vs, Ve] ) may be selected and processed by the following steps in turn until a seed point is selected and its CT value is calculated.
 b) A segmentation may be performed based on an empirical value Tev and a coarse binary image including the hepatic artery target may be generated. In some embodiments, the Tev is a CT value chosen from 70 to 110, i.e., 70≤Tev≤110.
c) An opening operation in morphology may be performed on the result from step b) so that tiny connections in the image may be broken.
d) A connected region whose area is smaller than Ta or bigger than Tb may be removed from the target region. In some embodiments, Ta may be a value chosen from 200 to 300 and Tb may be a value chosen from 1000 to 1200, i.e., 200≤Ta≤300 and 1000≤Tb≤1200.
 e) A target region for selecting a seed point may be determined. The starting row and the ending row of the target region may be written as Rs and Re, respectively. The starting column and the ending column of the target region may be written as Cs and Ce, respectively. The starting row and the ending row of the selecting region may be written as Rs’ and Re’ , respectively. The starting column and the ending column of the selecting region may be written as Cs’ and Ce’ , respectively. For illustration purposes, an exemplary embodiment of determining a seed point selecting region will be described in detail below:
e1) The starting row and the ending row of the target region may be searched perpendicularly and horizontally in the binary image.
e2) The center of the target region P (cX, cY) may be determined according to Equation 29 and Equation 30 (reproduced as images in the original) , e.g., as the midpoints of the column range [Cs, Ce] and the row range [Rs, Re] of the target region.
e3) The boundaries of the selecting region (the starting row Rs’ , the ending row Re’ , the starting column Cs’ , and the ending column Ce’ ) may be determined by using an offset. The offset may be used to expand the selecting scope, and it may increase with the number of slices for which seed point selection has failed. The equations to calculate the selecting region (Equation 31 through Equation 34) are reproduced as images in the original.
f) A connected region whose circular degree is no less than 0.7 (i.e., Cd≥0.7) in the selecting region may be searched out. The connected region with the minimax area among the connected regions above may be determined as an artery, and the centroid thereof may be determined as a seed point. If no such seed point is found, the process may proceed to the next slice’s searching.
A CT value of the seed point may be calculated after steps a) to f) . The calculating method may include choosing a mean value of an n×n square whose center is the seed point. In some embodiments, n may be any natural number, such as 3, 5, etc.
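Merely for illustration, a sketch of step f) is given below; it assumes the circular degree is computed as 4πA/P² from the area A and perimeter P of a connected region, and it interprets the connected region with the "minimax" area as the qualifying region with the smallest area.

import numpy as np
from skimage import measure

def select_artery_seed(binary_slice, min_circularity=0.7):
    # Among connected regions whose circular degree is no less than
    # min_circularity, keep the one with the smallest area and return its
    # centroid (row, column) as the seed point, or None if none qualifies.
    labels = measure.label(binary_slice, connectivity=2)
    best = None
    for region in measure.regionprops(labels):
        if region.perimeter == 0:
            continue
        circularity = 4.0 * np.pi * region.area / (region.perimeter ** 2)
        if circularity >= min_circularity and (best is None or region.area < best.area):
            best = region
    return None if best is None else best.centroid

The CT value of the seed point may then be taken as the mean CT value of an n×n square centered at the returned centroid.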
 Step 3204 and step 3205 may be used to generate a first binary volume data. The first binary volume data may include a backbone and a rib connected therewith. The first binary volume data may be generated by a threshold segmentation based on the CT value of the seed point. The threshold segmentation may use a piecewise function below:
 
(Equation 35, a piecewise function reproduced as an image in the original)
Wherein t is the CT value of the seed point. Symbols a and b are an upper limit and a lower limit of a CT value in the segmentation, respectively. T1, T2 and T3 may be the pre-determined segmentation thresholds. Note that the piecewise function type and its parameters may vary according to specific implementation scenarios.
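Merely for illustration, a sketch of how a piecewise function may map the CT value t of the seed point to the limits a and b is given below; the breakpoints and the returned limits are placeholders, since the actual piecewise function of the disclosure is reproduced as an image and is not restated here.

def backbone_limits(t, T1=200.0, T2=350.0, T3=500.0):
    # Placeholder piecewise rule: the segmentation limits shift depending on
    # where the seed CT value t falls relative to the thresholds T1, T2, T3.
    if t < T1:
        a, b = T2, T1
    elif t < T2:
        a, b = T3, T1
    else:
        a, b = T3, T2
    return a, b  # upper limit a and lower limit b of the CT value

# backbone_mask = (ct_volume >= b) & (ct_volume <= a)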
 In step 3204, a coarse segmentation may be performed on the original CT image data. The result after step 3204 may be a bone binary volume data including a bone and other tissue. In step 3205, the bone binary volume data may be subdivided and re-segmented through a sequence slice superposition strategy. The result after step 3205 may be a first binary volume data including a backbone and a rib connected therewith. For illustration purposes, an example of generating the first binary volume data of step 3205 may be described in detail below:
 a) A starting row and an ending row in perpendicular direction in the target region of each slice within the bone binary volume data may be searched.
b) The sequence slices of the bone binary volume data may be subdivided into several regions. All parts including a backbone and a rib connected therewith in each slice may be segmented by using a sequence slice superposition strategy. Merely by way of example, the sequence slices may be subdivided into four region slices, written as [1, Ls] , [Ls+1, Sz] , [Sz+1, Le] , and [Le+1, N] , respectively. Wherein Ls and Le are the starting slice and the ending slice of a liver, respectively. Sz is the slice that includes the seed point. N is the number of slices of the original CT image data. Each region slice may be processed by the following steps:
b1) Assume that the starting slice and the ending slice of the region slice are Lm and Ln, respectively. The superposition starting slice dL may be assigned as Lm. The initial count value may be Count=0. The region slice that has already been superimposed may be written as preA and its area may be written as preSA.
b2) Starting with Lm, the slice i may be chosen in turn to determine its selecting scope for a backbone area. The starting row and the ending row of this target region may be written as Rps and Rpe, respectively. The height and width of the binary image may be written as Height and Width, respectively. Then the starting row Rps’ , ending row Rpe’ , starting column Cps’ , and ending column Cpe’ of the backbone area may be calculated by the equations below:
 
(Equation 36 for Rps’ , reproduced as an image in the original)
Rpe′ = Height,  (Equation 37)
(Equation 38 for Cps’ , reproduced as an image in the original)
(Equation 39 for Cpe’ , reproduced as an image in the original)
b3) A connected region in the backbone selecting scope may be searched and the area SA of the connected region may be calculated. If SA is no less than a pre-determined area value AreaSize, the count value may be increased by 1 and the process may then proceed to the next slice’s searching. Otherwise, the process may proceed to the next slice’s searching directly. In some embodiments, AreaSize may be any value chosen from 300 to 600, i.e., 300≤AreaSize≤600.
b4) If the count value is no less than a pre-determined slice number SliceSize, the data from the superposition slice dL to slice i may be superimposed. The mean values of the starting row and the ending row in the target region of the superimposed slices may be calculated. Then the row range of the backbone in the superposition image may be calculated according to step b2) .
b5) A connected region of the backbone within the superimposed image may be searched and the area SB of the connected region may be calculated. If PreSA-SB>DifSize (500≤DifSize≤800) , the previously superimposed region preA may be regarded as the region of the backbone and a rib connected therewith from dL to the slice i. Otherwise, the present superposition image may be regarded as the region of the backbone and a rib connected therewith from dL to the slice i. In these embodiments, PreSA may be updated by SB, i.e., PreSA=SB, dL=i+1, and Count=0.
b6) Any remaining sequence slices that have not been processed may then be superimposed directly. The connected region in the superposition image may be regarded as the region of the backbone and a rib connected therewith.
 c) The result from step b) may be expanded by one or more pixels or voxels in the perpendicular direction to reduce the interference in 3D region growing process. In some embodiments, the number of the pixels or voxels may be any natural number, such as 2 or 3.
 In step 3206, a second binary volume data including a backbone, a rib connected therewith and a hepatic artery may be generated. The generating method may be a threshold segmentation for hepatic artery based on the CT value of the seed point. The threshold segmentation may use a piecewise function below:
 
(Equation 40, a piecewise function reproduced as an image in the original)
Wherein t is the CT value of the seed point. Symbols c and d are an upper limit and a lower limit of a CT value in the segmentation, respectively. T4, T5 and T6 may be the pre-determined segmentation thresholds. Note that the piecewise function type and its parameters may vary according to specific implementation scenarios.
It should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. For persons having ordinary skill in the art, the image processing method may be variable, changeable, or adjustable based on the spirit of the present disclosure. In some embodiments, the order of generating the first binary volume data and the second binary volume data may be changed. For example, the second binary volume data may be generated before the first binary volume data, or they may be generated at the same time. However, those variations, changes and modifications do not depart from the scope of the present disclosure.
Then a third binary volume data may be generated. In some embodiments, the third binary volume data may be acquired by removing, from the second binary volume data, the part that is the same as the first binary volume data.
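Merely for illustration, a sketch of removing the shared part is given below, treating the binary volume data as Boolean NumPy arrays.

import numpy as np

def third_volume(first_binary, second_binary):
    # Remove from the second binary volume the voxels it shares with the
    # first binary volume (the backbone and the ribs connected therewith),
    # leaving mainly the hepatic artery.
    return np.logical_and(second_binary, np.logical_not(first_binary))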
In step 3207, the hepatic artery segmentation result may be generated through a 3D region growing in the third binary volume data based on the seed point. The region growing may refer to any embodiment as described elsewhere in the present disclosure. Merely by way of example, some embodiments of region growing may be performed by judging whether a point (a pixel or a voxel) belongs to the same connected region as the seed point.
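Merely for illustration, a sketch of a 3D region growing by breadth-first search is given below; 6-connectivity and a (slice, row, column) seed tuple are assumptions of the example.

from collections import deque
import numpy as np

def region_grow_3d(binary_volume, seed):
    # Grow a 6-connected region from the seed through foreground voxels of
    # the binary volume; the grown region is returned as a Boolean mask.
    grown = np.zeros(binary_volume.shape, dtype=bool)
    if not binary_volume[seed]:
        return grown
    grown[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < binary_volume.shape[0]
                    and 0 <= ny < binary_volume.shape[1]
                    and 0 <= nx < binary_volume.shape[2]
                    and binary_volume[nz, ny, nx]
                    and not grown[nz, ny, nx]):
                grown[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return grown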
 After step 3207, a hepatic artery image may be generated. The image may be output or saved according to specific implementation scenarios.
Step 3208 is a post-processing of the hepatic artery image generated from steps 3201 to 3207. The post-processing method may be any one as described elsewhere in the present disclosure. For illustration purposes, an exemplary embodiment of repairing the artery is described in detail below; it is not intended to limit the scope of the present disclosure.
 1. According to the original CT image data, the mean value AM of CT value in the hepatic artery image may be calculated.
 2. A connected region which is marked as an artery in each slice may be regarded as a seed point set. According to the property of local CT value, a 3D region growing may be performed. For illustration purposes, an example of step 2 may be described in detail below:
 a) A connected region which is marked as an artery in each slice may be searched row by row and the mean value M of CT value in the connected region may be calculated.
b) Each point in the connected region may be regarded as a seed point and a region growing may be performed based on the seed point. A point may be added to the growing region when it has not been marked as a target point and its CT value satisfies a requirement; otherwise, the growth may stop at that point. In some embodiments, the requirement on a CT value may be that it is no less than α·M and no bigger than β·M. Symbol α is a lower limit control coefficient whose value may satisfy a requirement, e.g., 0.9≤α≤1. Symbol β is an upper limit control coefficient whose value may satisfy a requirement, e.g., 1≤β≤1.05.
3. According to the properties of local and overall CT values, a threshold segmentation may be performed on the edge of the connected region which is marked as an artery to repair the artery. For illustration purposes, an example of step 3 is described in detail below:
a) Each slice which is marked as an artery may be traversed in units of n×n squares. Each square may include points which are marked as an artery, points without the mark, or their combination. When the numbers of the marked points and the unmarked points in one square are both bigger than n·n/3, a mean value T of the CT values of the marked and unmarked points may be calculated.
b) The unmarked points may be segmented by a threshold of MaxT, and a point whose CT value is no less than MaxT may be marked as an artery. In some embodiments, MaxT may be the maximum of T and γ·AM. Symbol γ is an edge expanding control coefficient which is no bigger than 1.
c) Step a) and step b) may be repeated until the artery becomes replete. The number of repetitions may be set according to specific implementation scenarios. For example, the number of repetitions may be 3, 4, 5, etc.
4. Tiny breaches may be connected by a closing operation in morphology, as sketched below.
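Merely for illustration, a sketch of such a morphological closing using SciPy is given below; the 6-connected structuring element and a single iteration are assumptions of the example.

from scipy import ndimage

def close_breaches(artery_mask, iterations=1):
    # A closing (dilation followed by erosion) connects tiny breaches in the
    # repaired artery mask without enlarging it overall.
    structure = ndimage.generate_binary_structure(3, 1)
    return ndimage.binary_closing(artery_mask, structure=structure, iterations=iterations)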
It should be noted that the above embodiments are for illustration purposes and not intended to limit the scope of the present disclosure. For persons having ordinary skill in the art, the image processing method may be variable, changeable, or adjustable based on the spirit of the present disclosure. In some embodiments, steps may be added, deleted, exchanged, replaced, modified, etc. For example, the sub-steps in each step may have no such specific boundaries and may be integrated. For another example, the order of the steps in the image processing may be changed, e.g., the first and second binary volume data may be generated in a random order. For still another example, some steps such as the post-processing may be deleted. However, those variations, changes and modifications do not depart from the scope of the present disclosure.
 Having thus described the basic concepts, it may be rather apparent to those skilled in the art after reading this detailed disclosure that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various alterations, improvements, and modifications may occur and are intended to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested by this disclosure, and are within the spirit and scope of the exemplary embodiments of this disclosure.
 Moreover, certain terminology has been used to describe embodiments of the present disclosure. For example, the terms “one embodiment, ” “an embodiment, ” and/or “some embodiments” mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the present disclosure.
Further, it will be appreciated by one skilled in the art that aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in an implementation combining software and hardware that may all generally be referred to herein as a "block, " "module, " "engine, " "unit, " "component, " or "system. " Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
 A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
 Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB. NET, Python or the like, conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN) , or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS) .
 Furthermore, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefore, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims. Although the above disclosure  discusses through various examples what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose, and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software only solution—e.g., an installation on an existing server or mobile device.
 Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure aiding in the understanding of one or more of the various inventive embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, inventive embodiments lie in less than all features of a single foregoing disclosed embodiment.

Claims (49)

  1. A method of image segmentation comprising:
    determining a region of interest (ROI) wherein a tumor is located;
    performing a coarse segmentation of the tumor based on a feature classification method; and
    performing a fine segmentation of the tumor based on a level set method.
  2. The method of claim 1, the performing a coarse segmentation comprising:
    sampling a seed point randomly among voxels in the region with a tumor and in the region without a tumor of the ROI respectively;
    performing a classification training on the seed point; and
    performing a classification of each voxel based on the result of the classification training to acquire a coarse segmentation result.
  3. The method of claim 1, the determination of the ROI comprising:
    drawing a line with a length d1 on the cross-section of a tumor and traversing the tumor; and
    selecting a square with a length d2 and centering at the midpoint of the line, wherein the square comprises the ROI, and d2=d1·r, 1<r<2.
  4. The method of claim 3, the sampling a seed point comprising:
    creating an internal region centering at the midpoint of the line with a radius of d3, wherein a seed point is sampled and d3=d1/l, l>1; and
    creating an external region centering at the midpoint of the line with a radius ranging from d1·t to d1·r (1<t<r) , wherein a seed point is sampled.
  5. The method of claim 2, the performing a classification training on the seed point comprising:
    normalizing a gray level of the image as a fixed value S;
    calculating a normalized S-dimensional feature of gray-scale histogram in a spherical neighborhood centering of each seed point with a radius of j; and
    training and classifying based on the S-dimensional feature of gray-scale histogram to acquire a parameter of a classifier.
  6. The method of claim 2, performing a classification of each voxel based on the result of the classification training to acquire a coarse segmentation result comprising:
    calculating a feature of the gray-scale histogram of voxels in the image; and
    judging whether a voxel is part of the tumor by the classifier.
  7. The method of claim 1, the performing a fine segmentation of the tumor based on a level set method comprising:
    initializing a distance function based on the coarse segmentation result;
    performing multiple iterations on the distance function; and
    acquiring a fine segmentation result.
  8. The method of claim 2, the sampling a seed point further comprising pre-processing the ROI.
  9. The method of claim 8, the pre-processing of the ROI comprising:
    performing a normalization processing on a resolution in an X axis, a Y axis and a Z axis of the image;
    enhancing a contrast of the image; and
    reducing a noise of the image.
  10. A method of image data processing comprising:
    acquiring a data set of a three-dimensional image, wherein the data set comprises a voxel;
    identifying a component in the data set;
    storing the component in a storage; and
    receiving a rendering input instruction, rendering the data by an image processor.
  11. The method of claim 10, the identifying a component in the data set comprising identifying based on a one dimensional characteristic information, a multi-dimensional characteristic information of a voxel or their combination.
  12. The method of Claim 11, the identifying a component in the data set further comprising providing a tolerance range to determine voxels with a one dimensional or a multi-dimensional characteristic information within the tolerance range as a same component.
  13. The method of Claim 11, the one dimensional characteristic information comprising a gray value and a one dimensional spatial information of the voxel.
  14. The method of Claim 13, the identification based on a gray value and a one dimensional spatial information of a voxel comprising:
    detecting an arrangement order and a color value of the voxel in the data set;
    determining voxels that are adjacent in the arrangement order and whose color value difference is within a pre-determined threshold as a same component; and
    labeling the component.
  15. The method of Claim 11, the multi-dimensional characteristic information comprising:
    providing a pre-determined neighbor region of a voxel;
    performing statistics based on the voxel and voxels within the neighbor region; and
    acquiring the multi-dimensional characteristic information based on the statistics.
  16. The method of Claim 15, the statistics comprising statistics based on a Gaussian mixture model, statistics based on an information entropy, and statistics based on a time-invariant space-state feature model.
  17. The method of Claim 10, wherein the data set of the three-dimensional image comprising medical scanning data of a subject; wherein the component comprising a removable region and a reserved region.
  18. The method of Claim 17, wherein the storing a component in a storage comprising:
    loading the data set of the three-dimensional image into a CPU memory; and
    loading a reserved region into a GPU video memory.
  19. The method of any one of Claims 10 to 17, wherein the rendering the data after receiving the rendering instruction comprising:
    establishing control threads corresponding to a CPU task queue and a GPU task queue respectively; and
    analyzing the rendering instruction and performing a rendering after distributing the rendering task to the task queue of the CPU or the GPU.
  20. A system of image data processing, comprising:
    an acquisition module configured to acquire a data set of a three-dimensional image, wherein the data set comprises a voxel;
    a segmentation module configured to identify a component in the data set;
    a storage control module configured to control storing the data of the component into a storage; and
    a rendering control module configured to receive a rendering instruction, and control the rendering of the data by an image processor.
  21. The system of Claim 20, the segmentation module comprising:
    an identification block configured to identify based on a one dimensional characteristic information, a multi-dimensional characteristic information or their combination.
  22. The system of Claim 21, the segmentation module further comprising:
    a judgment block configured to provide a tolerance range and judge the voxel with a one dimensional or a multi-dimensional characteristic information within the tolerance range as a same component.
  23. The system of Claim 21, the one dimensional characteristic information comprising a gray value and a one dimensional spatial information of the voxel.
  24. The system of Claim 23, the identification block comprising:
    a detection sub-block configured to detect an arrangement order and a color value of the voxel in the data set;
    a judgment sub-block configured to determine voxels that are adjacent in the arrangement order and whose color value difference is within a pre-determined threshold as a same component; and
    a labeling sub-block configured to label a component.
  25. The system of Claim 21, the identification block further comprising:
    an acquisition block configured to acquire the multi-dimensional characteristic information, comprising:
    providing a neighbor region;
    performing statistics based on the voxel and voxels within the neighbor region; and
    acquiring the multi-dimensional characteristic information based on the statistics.
  26. The system of Claim 25, the statistics comprising statistics based on a Gaussian mixture model, statistics based on an information entropy, and statistics based on a time-invariant space-state model.
  27. The system of Claim 20, wherein the data set of the three-dimensional image comprising medical scanning data of a subject; wherein the component comprising a removable region and a reserved region.
  28. The system of Claim 27, the storage control module comprising:
    a first storage control block configured to load the data set of the three-dimensional image into a CPU memory; and
    a second storage control block configured to load a reserved region into a GPU video memory.
  29. The system of any one of Claims 20 to 28, the rendering control module comprising:
    a thread control block configured to establish control threads corresponding to a CPU task queue and a GPU task queue respectively; and
    a distribution block configured to analyze the rendering instruction, perform a rendering after distributing the rendering task to the task queue of the CPU or the GPU.
  30. A method comprising:
    a) retrieving a CT image sequence and segmenting a bone region of the CT image sequence;
    b) in each CT image of the CT image sequence, determining the bone region with the maximum area among a plurality of bone regions as a spine region;
    c) in each CT image of the CT image sequence, determining an anchor point in the topmost row of the spine region, validating the anchor point based on some known information;
    d) calibrating the anchor point.
  31. The method of Claim 30, the step a) further comprising:
    retrieving the CT image sequence, selecting a preliminary spine region;
    pre-processing the preliminary bone region to segment one or more bone regions.
  32. The method of Claim 31, the selecting a preliminary bone region comprising selecting and segmenting the CT image sequence based on the known information.
  33. The method of Claim 31, the pre-processing the preliminary spine region comprising:
    performing a threshold segmentation on the preliminary spine region to generate one or more bone regions;
    eliminating deficiency and disconnections of the bone regions based on morphology algorithms.
  34. The method of Claim 30, the validating the anchor point comprising:
    if the y coordinate of the anchor point is greater than 1/4 of the height of the CT image, and the shift of the x coordinate from the central position is less than 1/4 of the width of the CT image, the anchor point is valid. Otherwise, the anchor point is invalid.
  35. The method of Claim 30, the calibrating the anchor point comprising:
    calibrating an anchor point whose coordinate is abnormal; and
    calibrating an anchor point that lacks a coordinate.
  36. The method of Claim 35, the calibrating an anchor point whose coordinate is abnormal comprising:
    positioning the anchor points sequentially;
    defining a sliding window, positioning the anchor points to be calibrated in the sliding window sequentially, calculating a position mean value of the anchor points in the sliding window; and
    defining a first Euclidean distance between the current anchor point and the position mean value as DM, and a second Euclidean distance between the current anchor point and the previous anchor point as DP. If either DM or DP is less than a threshold TH, the coordinate of the current anchor point may be substituted by the coordinate corresponding to the smaller of DM and DP.
  37. The method of Claim 35, the calibration of the anchor points that lack coordinate comprising fixing the anchor points that lack coordinate based on adjacent anchor points.
  38. A method of hepatic artery segmentation comprising:
    entering a CT image data and a corresponding liver mask and selecting a seed point of the hepatic artery;
    generating a first binary volume data and a second binary volume data through a segmentation based on the seed point, wherein the first binary volume data comprises a backbone and a rib connected therewith and the second binary volume data comprises a backbone, a rib connected therewith and a hepatic artery;
    generating a third binary volume data by removing a part that is the same as the first binary volume data from the second binary volume data; and
    generating a hepatic artery through a 3D region growing in the third binary volume data based on the seed point.
  39. The method of Claim 38, the generating of the first binary volume data further comprising performing a threshold segmentation for backbone on the CT image data and the generating of the second binary volume data further comprising performing a threshold segmentation for hepatic artery on the CT image data.
  40. The method of Claim 38, the selecting of the seed point further comprising:
    S11. determining the slices to select the seed point of the hepatic artery; and
    S12. selecting the seed point and computing its CT value.
  41. The method of Claim 40, the S11 further comprising:
    searching a starting slice Ls and an ending slice Le of a liver from the liver mask and searching the maximum area MaxArea of a single liver slice; and
    comparing the area of the liver from Ls to Le. The first slice whose liver area is no less than MaxArea/2 is written as Vs. The last slice whose liver area is no less than MaxArea/2 is written as Ve.
  42. The method of Claim 41, the S12 further comprising:
    selecting and segmenting the CT image slice by slice from Ve to Vs. Selecting a connected region whose circular degree is no less than 0.7. The connected region with the minimax area is determined as an artery, and the centroid thereof is determined as a seed point; and
    selecting an n×n square whose center is the seed point and calculating a mean CT value thereof. The mean CT value is the CT value of the seed point. Wherein n is 3 or 5.
  43. The method of Claim 38, the generating of the first binary volume data further comprising:
    performing a coarse segmentation on the original CT image data and acquiring a bone binary volume data comprising a bone and other tissue; and
    subdividing the bone binary volume data into region slices and re-segmenting the region slices by using a superposition strategy and acquiring the first binary volume data comprising a backbone and a rib connected therewith.
  44. The method of Claim 43, the subdividing the bone binary volume data into region slices and re-segmenting the region slices by using a superposition strategy further comprising:
    S21. searching a starting row and an ending row in perpendicular direction in the target region of each slice within the bone binary volume data;
    S22. subdividing the sequence slices of the bone binary volume data into several regions. Segmenting all parts comprising a backbone and a rib connected therewith in each slice by using a sequence slice superposition strategy; and
    S23. performing an expansion on the result after S22 by one or more pixels or voxels in the perpendicular direction to reduce the interference in 3D region growing process.
  45. The method of Claim 39, the threshold segmentation for backbone may comprise using a piecewise function:
    (a piecewise function reproduced as an image in the original; cf. Equation 35)
    wherein t is the CT value of the seed point, symbols a and b are an upper limit and a lower limit of a CT value in the segmentation, respectively, and T1, T2 and T3 are the pre-determined segmentation thresholds;
    the threshold segmentation for hepatic artery may comprise using a piecewise function:
    (a piecewise function reproduced as an image in the original; cf. Equation 40)
    wherein t is the CT value of the seed point, symbols c and d are an upper limit and a lower limit of a CT value in the segmentation, respectively, and T4, T5 and T6 are the pre-determined segmentation thresholds.
  46. The method of Claim 38, the hepatic artery segmentation method further comprising post-processing the hepatic artery through an artery repair.
  47. The method of Claim 46, the post-processing method further comprising:
    S31. calculating a mean value AM of the CT value in the hepatic artery image according to the original CT image data;
    S32. regarding a connected region which is marked as an artery in each slice as a seed point and performing a 3D region growing based on the seed point according to property of local CT value;
    S33. performing an artery repair through a threshold segmentation of an edge of the connected region which is marked as an artery according to the property of local and overall CT value;
    S34. connecting a tiny breach by a closed computation in morphology.
  48. The method of Claim 47, the S32 further comprising:
    searching a connected region which is marked as an artery in each slice row by row and calculating the mean value M of CT value in the connected region; and
    regarding each point in the connected region as a seed point and performing a region growing based on the seed point. A point may be added to the growing region when it is not marked as a target point and its CT value satisfies a requirement; otherwise, the growth may stop at that point. The requirement is that the CT value of a point is no less than α·M and no bigger than β·M. Wherein α is a lower limit control coefficient whose value satisfies a requirement of 0.9≤α≤1. β is an upper limit control coefficient whose value satisfies a requirement of 1≤β≤1.05.
  49. The method of Claim 47, the S33 further comprising:
    S331. traversing each slice which is marked as an artery in units of n×n squares. Calculating a mean value T of the CT values of the points with mark and the points without mark when the numbers of the points with mark and the points without mark in one square are both bigger than n·n/3;
    S332. segmenting the point without mark by a threshold of MaxT and marking the point whose CT value is no less than MaxT as an artery. Wherein MaxT is the maximum of T and γ·AM, γ is an edge expanding control coefficient which is no bigger than 1; and
    S333. repeating S331 and S332 by three or five times.
PCT/CN2015/093506 2014-12-02 2015-10-31 A method and system for image processing WO2016086744A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP15865201.6A EP3213296B1 (en) 2014-12-02 2015-10-31 A method and system for image processing
GB1709225.5A GB2547399B (en) 2014-12-02 2015-10-31 A method and system for image processing
US15/323,035 US10181191B2 (en) 2014-12-02 2015-10-31 Methods and systems for identifying spine or bone regions in computed tomography image sequence
US16/247,080 US11094067B2 (en) 2014-12-02 2019-01-14 Method and system for image processing

Applications Claiming Priority (16)

Application Number Priority Date Filing Date Title
CN201410720069.1A CN105719333B (en) 2014-12-02 2014-12-02 Three-dimensional image data processing method and device
CN201410720069.1 2014-12-02
CN201410840692.0A CN105809656B (en) 2014-12-29 2014-12-29 Medical image processing method and device
CN201410840692.0 2014-12-29
CN201510216037.2A CN104766340B (en) 2015-04-30 2015-04-30 A kind of image partition method
CN201510216037.2 2015-04-30
CN201510224781.7 2015-05-05
CN201510224781.7A CN104809730B (en) 2015-05-05 2015-05-05 The method and apparatus that tracheae is extracted from chest CT image
CN201510310371.4A CN104851107B (en) 2015-06-08 2015-06-08 Vertebra localization method based on CT sequence images
CN201510310371.4 2015-06-08
CN201510313660.X 2015-06-09
CN201510313660.XA CN104851108B (en) 2015-06-09 2015-06-09 Arteria hepatica dividing method based on CT images
CN201510390970.1 2015-07-06
CN201510390970.1A CN104899926B (en) 2015-07-06 2015-07-06 Medical image cutting method and device
CN201510526053.1 2015-08-25
CN201510526053.1A CN105096332B (en) 2015-08-25 2015-08-25 Medical image cutting method and device

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US15/323,035 A-371-Of-International US10181191B2 (en) 2014-12-02 2015-10-31 Methods and systems for identifying spine or bone regions in computed tomography image sequence
US16/247,080 Continuation US11094067B2 (en) 2014-12-02 2019-01-14 Method and system for image processing

Publications (1)

Publication Number Publication Date
WO2016086744A1 true WO2016086744A1 (en) 2016-06-09

Family

ID=56090984

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/093506 WO2016086744A1 (en) 2014-12-02 2015-10-31 A method and system for image processing

Country Status (4)

Country Link
US (2) US10181191B2 (en)
EP (1) EP3213296B1 (en)
GB (2) GB2559013B (en)
WO (1) WO2016086744A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108447044A (en) * 2017-11-21 2018-08-24 四川大学 A kind of osteomyelitis lesions analysis method based on medical figure registration
CN108597589A (en) * 2018-04-27 2018-09-28 上海联影医疗科技有限公司 Model generating method, object detection method and medical image system
US10282844B2 (en) 2015-05-05 2019-05-07 Shanghai United Imaging Healthcare Co., Ltd. System and method for image segmentation
CN109801270A (en) * 2018-12-29 2019-05-24 北京市商汤科技开发有限公司 Anchor point determines method and device, electronic equipment and storage medium
WO2020003036A1 (en) * 2018-06-26 2020-01-02 Sony Corporation Internal organ localization in computed tomography (ct) images
CN112036503A (en) * 2020-09-11 2020-12-04 浙江大华技术股份有限公司 Image processing method and device based on step-by-step threads and storage medium

Families Citing this family (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9877699B2 (en) 2012-03-26 2018-01-30 Teratech Corporation Tablet ultrasound system
WO2017092615A1 (en) 2015-11-30 2017-06-08 上海联影医疗科技有限公司 Computer aided diagnosis system and method
CN106022221B (en) * 2016-05-09 2021-11-30 腾讯科技(深圳)有限公司 Image processing method and system
CN106529555B (en) * 2016-11-04 2019-12-06 四川大学 DR (digital radiography) sheet lung contour extraction method based on full convolution network
WO2018111940A1 (en) * 2016-12-12 2018-06-21 Danny Ziyi Chen Segmenting ultrasound images
CN108470331B (en) * 2017-02-23 2021-12-21 富士通株式会社 Image processing apparatus, image processing method, and program
TW201903708A (en) * 2017-06-06 2019-01-16 國立陽明大學 Method and system for analyzing digital subtraction angiography images
US10762636B2 (en) 2017-06-29 2020-09-01 HealthMyne, Inc. Systems and methods for volumetric segmentation of structures in planar medical images
WO2019041262A1 (en) * 2017-08-31 2019-03-07 Shenzhen United Imaging Healthcare Co., Ltd. System and method for image segmentation
US10580195B2 (en) * 2017-11-20 2020-03-03 Microsoft Technology Licensing, Llc Ray-triangle intersection testing with tetrahedral planes
US20210007806A1 (en) * 2018-03-21 2021-01-14 Vikas KARADE A method for obtaining 3-d deformity correction for bones
JP7067240B2 (en) * 2018-04-24 2022-05-16 富士通株式会社 Optimization calculation method, optimization calculation program and optimization calculation device
US11064902B2 (en) * 2018-06-29 2021-07-20 Mayo Foundation For Medical Education And Research Systems, methods, and media for automatically diagnosing intraductal papillary mucinous neoplasms using multi-modal magnetic resonance imaging data
EP3836827A4 (en) * 2018-08-17 2022-10-19 ChemImage Corporation Discrimination of calculi and tissues with molecular chemical imaging
CN109410220B (en) * 2018-10-16 2019-12-24 腾讯科技(深圳)有限公司 Image segmentation method and device, computer equipment and storage medium
US11043296B2 (en) 2018-11-05 2021-06-22 HealthMyne, Inc. Systems and methods for semi-automatic tumor segmentation
JP7137074B2 (en) * 2018-12-26 2022-09-14 富士通株式会社 Optimization calculation method, optimization calculation device, and optimization calculation program
CN109919960B (en) * 2019-02-22 2023-04-07 西安工程大学 Image continuous edge detection method based on multi-scale Gabor filter
US11210554B2 (en) * 2019-03-21 2021-12-28 Illumina, Inc. Artificial intelligence-based generation of sequencing metadata
US11347965B2 (en) 2019-03-21 2022-05-31 Illumina, Inc. Training data generation for artificial intelligence-based sequencing
EP3959650A4 (en) * 2019-04-23 2023-05-17 The Johns Hopkins University Abdominal multi-organ segmentation with organ-attention networks
US11423306B2 (en) 2019-05-16 2022-08-23 Illumina, Inc. Systems and devices for characterization and performance analysis of pixel-based sequencing
US11593649B2 (en) 2019-05-16 2023-02-28 Illumina, Inc. Base calling using convolutions
WO2020251958A1 (en) * 2019-06-11 2020-12-17 Intuitive Surgical Operations, Inc. Detecting and representing anatomical features of an anatomical structure
US11308623B2 (en) * 2019-07-09 2022-04-19 The Johns Hopkins University System and method for multi-scale coarse-to-fine segmentation of images to detect pancreatic ductal adenocarcinoma
US11854281B2 (en) 2019-08-16 2023-12-26 The Research Foundation For The State University Of New York System, method, and computer-accessible medium for processing brain images and extracting neuronal structures
CN110717417B (en) * 2019-09-25 2022-06-07 福建天泉教育科技有限公司 Depth map human body foreground extraction method and computer readable storage medium
CN110739049A (en) * 2019-10-10 2020-01-31 上海联影智能医疗科技有限公司 Image delineation method and device, storage medium and computer equipment
US11521314B2 (en) * 2019-12-31 2022-12-06 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for image processing
CN115136244A 2020-02-20 2022-09-30 因美纳有限公司 Artificial intelligence-based many-to-many base calling
CN111311583B (en) * 2020-02-24 2021-03-12 广州柏视医疗科技有限公司 Method for segmental naming of pulmonary airways and blood vessels
CN111539930B (en) * 2020-04-21 2022-06-21 浙江德尚韵兴医疗科技有限公司 Dynamic ultrasonic breast nodule real-time segmentation and identification method based on deep learning
US20210334975A1 (en) * 2020-04-23 2021-10-28 Nvidia Corporation Image segmentation using one or more neural networks
CN111729283B (en) * 2020-06-19 2021-07-06 杭州赛鲁班网络科技有限公司 Training system and method based on mixed reality technology
CN112132837B (en) * 2020-08-19 2024-07-26 心医国际数字医疗系统(大连)有限公司 Chest bone automatic extraction method, system, electronic equipment and storage medium
CN114255272A (en) * 2020-09-25 2022-03-29 广东博智林机器人有限公司 Positioning method and device based on target image
CN112329796B (en) * 2020-11-12 2023-05-23 北京环境特性研究所 Infrared imaging cloud detection method and device based on visual saliency
JP7121818B1 (en) 2021-02-19 2022-08-18 Pdrファーマ株式会社 Program, image processing device and image processing method
US11854192B2 (en) 2021-03-03 2023-12-26 International Business Machines Corporation Multi-phase object contour refinement
US11923071B2 (en) * 2021-03-03 2024-03-05 International Business Machines Corporation Multi-phase object contour refinement
US20220336054A1 (en) 2021-04-15 2022-10-20 Illumina, Inc. Deep Convolutional Neural Networks to Predict Variant Pathogenicity using Three-Dimensional (3D) Protein Structures
WO2022261006A1 (en) * 2021-06-07 2022-12-15 Raytheon Company Organ segmentation in image
US12026934B2 (en) * 2021-11-20 2024-07-02 Xue Feng Systems and methods for training a convolutional neural network that is robust to missing input information
CN115546154B (en) * 2022-10-11 2024-02-06 数坤科技股份有限公司 Image processing method, device, computing equipment and storage medium
CN116681701B (en) * 2023-08-02 2023-11-03 青岛市妇女儿童医院(青岛市妇幼保健院、青岛市残疾儿童医疗康复中心、青岛市新生儿疾病筛查中心) Pediatric lung ultrasound image processing method
CN117576124B (en) * 2024-01-15 2024-04-30 福建智康云医疗科技有限公司 Abdominal CT image liver segmentation method and system based on artificial intelligence
CN118038064B (en) * 2024-04-11 2024-06-11 青岛宝迈得生物科技有限公司 Image segmentation method for hepatic duct and biliary tract calculus
CN118096988B (en) * 2024-04-24 2024-08-13 中国科学院宁波材料技术与工程研究所 Indoor complex scene high-fidelity real-time rendering method based on three-dimensional Gaussian representation

Family Cites Families (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6882990B1 (en) * 1999-05-01 2005-04-19 Biowulf Technologies, Llc Methods of identifying biological patterns using multiple data sets
WO2001078005A2 (en) * 2000-04-11 2001-10-18 Cornell Research Foundation, Inc. System and method for three-dimensional image rendering and analysis
US7336809B2 (en) * 2001-11-23 2008-02-26 R2 Technology, Inc. Segmentation in medical images
US7020316B2 (en) * 2001-12-05 2006-03-28 Siemens Corporate Research, Inc. Vessel-feeding pulmonary nodule detection by volume projection analysis
US6908972B2 (en) * 2002-04-16 2005-06-21 Equistar Chemicals, Lp Method for making polyolefins
US6756455B2 (en) * 2002-05-31 2004-06-29 Equistar Chemicals, Lp High-temperature solution process for polyolefin manufacture
CN100370952C (en) 2004-12-30 2008-02-27 中国医学科学院北京协和医院 Method for processing lung images
AU2005329247A1 (en) * 2005-03-17 2006-09-21 Algotec Systems Ltd. Bone segmentation
US20080292194A1 (en) 2005-04-27 2008-11-27 Mark Schmidt Method and System for Automatic Detection and Segmentation of Tumors and Associated Edema (Swelling) in Magnetic Resonance (Mri) Images
WO2007062135A2 (en) * 2005-11-23 2007-05-31 Junji Shiraishi Computer-aided method for detection of interval changes in successive whole-body bone scans and related computer program product and system
CN100573581C (en) 2006-08-25 2009-12-23 西安理工大学 Semi-automatic partition method of lung CT image focus
US20080118136A1 (en) 2006-11-20 2008-05-22 The General Hospital Corporation Propagating Shell for Segmenting Objects with Fuzzy Boundaries, Automatic Volume Determination and Tumor Detection Using Computer Tomography
US8989468B2 (en) 2007-05-25 2015-03-24 Definiens Ag Generating an anatomical model using a rule-based segmentation and classification process
CN100586371C (en) 2007-10-17 2010-02-03 北京大学 Image processing process based on magnetic resonance three-dimensional renogram
US8526697B2 (en) 2008-02-15 2013-09-03 Koninklijke Philips N.V. Apparatus for segmenting an object comprising sub-objects
CN101393644B (en) 2008-08-15 2010-08-04 华中科技大学 Hepatic portal vein tree modeling method and system thereof
US8385688B2 (en) * 2008-08-27 2013-02-26 International Business Machines Corporation System and method for automatic recognition and labeling of anatomical structures and vessels in medical imaging scans
CN101373541B (en) 2008-10-17 2010-09-15 东软集团股份有限公司 Method and apparatus for drafting medical image body
US10083515B2 (en) * 2008-11-25 2018-09-25 Algotec Systems Ltd. Method and system for segmenting medical imaging data according to a skeletal atlas
CN102034232B (en) 2009-09-25 2013-09-04 上海电机学院 Method for segmenting medical image graph
KR101175426B1 (en) 2010-01-26 2012-08-20 삼성메디슨 주식회사 Ultrasound system and method for providing three-dimensional ultrasound image
CN101814193A (en) 2010-03-09 2010-08-25 哈尔滨工业大学 Real-time volume rendering method of three-dimensional heart data based on GPU (Graphic Processing Unit) acceleration
CN102243759B (en) 2010-05-10 2014-05-07 东北大学 Three-dimensional lung vessel image segmentation method based on geometric deformation model
CN101853509B (en) 2010-06-11 2012-05-09 西安电子科技大学 SAR (Synthetic Aperture Radar) image segmentation method based on Treelets and fuzzy C-means clustering
KR101503002B1 (en) * 2011-01-28 2015-04-01 주식회사 엘지화학 Metallocene compounds and olefin based polymer prepared by using the same
KR101090375B1 (en) 2011-03-14 2011-12-07 동국대학교 산학협력단 Ct image auto analysis method, recordable medium and apparatus for automatically calculating quantitative assessment index of chest-wall deformity based on automized initialization
US9269139B2 (en) * 2011-10-28 2016-02-23 Carestream Health, Inc. Rib suppression in radiographic images
CN102521833B (en) 2011-12-08 2014-01-29 东软集团股份有限公司 Method for obtaining tracheae tree from chest CT image and apparatus thereof
CN102622750A (en) 2012-02-24 2012-08-01 西安电子科技大学 Stomach computed tomography (CT) sequence image segmentation method based on interactive region growth
CN104507392B (en) 2012-09-07 2017-03-08 株式会社日立制作所 Image processing apparatus and image processing method
CN102903103B (en) 2012-09-11 2014-12-17 西安电子科技大学 Migratory active contour model based stomach CT (computerized tomography) sequence image segmentation method
CN102982531B (en) 2012-10-30 2015-09-16 深圳市旭东数字医学影像技术有限公司 Bronchial dividing method and system
CN103021008A (en) 2012-12-11 2013-04-03 湖南师范大学 Bone animation processing method based on programmable graphics processing unit (GPU)
DE102013206415A1 (en) 2013-04-11 2014-10-16 Siemens Aktiengesellschaft Automatic extraction of optimized output data
CN104156935B (en) 2013-05-14 2018-01-02 东芝医疗系统株式会社 Image segmenting device, image partition method and medical image equipment
KR101650092B1 (en) * 2013-08-01 2016-08-22 주식회사 엘지화학 Metallocene compound, catalyst composition comprising the same, and method for preparation of olefin-based polymer using the same
CN103617626A (en) 2013-12-16 2014-03-05 武汉狮图空间信息技术有限公司 Central processing unit (CPU) and ground power unit (GPU)-based remote-sensing image multi-scale heterogeneous parallel segmentation method
CN103700123B (en) 2013-12-19 2018-05-08 北京唯迈医疗设备有限公司 GPU based on CUDA frameworks accelerates x-ray image method for reconstructing and device
CN103679810B (en) 2013-12-26 2017-03-08 海信集团有限公司 The three-dimensional rebuilding method of liver's CT image
CN104063876B (en) 2014-01-10 2017-02-01 北京理工大学 Interactive image segmentation method
CN103871057A (en) 2014-03-11 2014-06-18 深圳市旭东数字医学影像技术有限公司 Magnetic resonance image-based bone segmentation method and system thereof
CN104091331B (en) 2014-06-27 2015-07-29 深圳开立生物医疗科技股份有限公司 A kind of ultrasonic focus image partition method, Apparatus and system
CN104143101A (en) 2014-07-01 2014-11-12 华南理工大学 Method for automatically identifying breast tumor area based on ultrasound image
CN104504737B (en) 2015-01-08 2018-01-12 深圳大学 A kind of method that three-dimensional tracheae tree is obtained from lung CT image
CN104809740B (en) 2015-05-26 2017-12-08 重庆大学 Knee cartilage image automatic segmentation method based on SVM and Hookean region growth
WO2017092615A1 (en) * 2015-11-30 2017-06-08 上海联影医疗科技有限公司 Computer aided diagnosis system and method
EP3654244A4 (en) * 2017-07-31 2020-11-04 Shanghai United Imaging Healthcare Co., Ltd. Method and system for extracting region of interest from volume data

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101558999A (en) * 2009-05-25 2009-10-21 张俊华 Scoliosis X-ray image-assisted diagnostic system and method thereof
CN102573638A (en) * 2009-10-13 2012-07-11 新加坡科技研究局 A method and system for segmenting a liver object in an image
WO2012130132A1 (en) * 2011-03-31 2012-10-04 深圳迈瑞生物医疗电子股份有限公司 Vertebral body and intervertebral disc segmentation methods and devices, and magnetic resonance imaging system
CN102411795A (en) * 2011-08-16 2012-04-11 中国人民解放军第三军医大学第一附属医院 Combined display method of multimode image of coronary artery
CN103810363A (en) * 2012-11-09 2014-05-21 上海联影医疗科技有限公司 Blood vessel seed point selecting method and blood vessel extracting method in angiography
CN103489198A (en) * 2013-10-21 2014-01-01 钟映春 Method for partitioning brainstem areas automatically from MR (magnetic resonance) sequence images
CN104766340A (en) * 2015-04-30 2015-07-08 上海联影医疗科技有限公司 Image segmentation method
CN104851107A (en) * 2015-06-08 2015-08-19 武汉联影医疗科技有限公司 Vertebra positioning method based on CT sequence image
CN104851108A (en) * 2015-06-09 2015-08-19 武汉联影医疗科技有限公司 Hepatic artery segmentation method based on CT image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3213296A4 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10282844B2 (en) 2015-05-05 2019-05-07 Shanghai United Imaging Healthcare Co., Ltd. System and method for image segmentation
US10482602B2 (en) 2015-05-05 2019-11-19 Shanghai United Imaging Healthcare Co., Ltd. System and method for image segmentation
CN108447044A (en) * 2017-11-21 2018-08-24 四川大学 Osteomyelitis lesion analysis method based on medical image registration
CN108597589A (en) * 2018-04-27 2018-09-28 上海联影医疗科技有限公司 Model generation method, target detection method and medical imaging system
CN108597589B (en) * 2018-04-27 2022-07-05 上海联影医疗科技股份有限公司 Model generation method, target detection method and medical imaging system
WO2020003036A1 (en) * 2018-06-26 2020-01-02 Sony Corporation Internal organ localization in computed tomography (ct) images
US10621728B2 (en) 2018-06-26 2020-04-14 Sony Corporation Internal organ localization in computed tomography (CT) images
CN109801270A (en) * 2018-12-29 2019-05-24 北京市商汤科技开发有限公司 Anchor point determines method and device, electronic equipment and storage medium
CN109801270B (en) * 2018-12-29 2021-07-16 北京市商汤科技开发有限公司 Anchor point determining method and device, electronic equipment and storage medium
US11301726B2 (en) 2018-12-29 2022-04-12 Beijing Sensetime Technology Development Co., Ltd. Anchor determination method and apparatus, electronic device, and storage medium
CN112036503A (en) * 2020-09-11 2020-12-04 浙江大华技术股份有限公司 Image processing method and device based on step-by-step threads and storage medium

Also Published As

Publication number Publication date
EP3213296A4 (en) 2017-11-08
GB201719333D0 (en) 2018-01-03
US20190164291A1 (en) 2019-05-30
US10181191B2 (en) 2019-01-15
GB2559013B (en) 2019-07-17
GB201709225D0 (en) 2017-07-26
GB2547399A (en) 2017-08-16
US20170249744A1 (en) 2017-08-31
US11094067B2 (en) 2021-08-17
EP3213296A1 (en) 2017-09-06
EP3213296B1 (en) 2020-02-19
GB2559013A (en) 2018-07-25
GB2547399B (en) 2018-05-02

Similar Documents

Publication Publication Date Title
US11094067B2 (en) Method and system for image processing
US10482602B2 (en) System and method for image segmentation
JP6514325B2 (en) System and method for segmenting medical images based on anatomical landmark-based features
Vandemeulebroucke et al. Automated segmentation of a motion mask to preserve sliding motion in deformable registration of thoracic CT
EP2916738B1 (en) Lung, lobe, and fissure imaging systems and methods
Erdt et al. Regmentation: A new view of image segmentation and registration
Cobzas et al. 3D variational brain tumor segmentation using a high dimensional feature set
Mesejo et al. Biomedical image segmentation using geometric deformable models and metaheuristics
WO2017067127A1 (en) System and method for image registration in medical imaging system
Campadelli et al. A segmentation framework for abdominal organs from CT scans
US20170178307A1 (en) System and method for image registration in medical imaging system
US8150119B2 (en) Method and system for left ventricle endocardium surface segmentation using constrained optimal mesh smoothing
WO2009138202A1 (en) Method and system for lesion segmentation
US20220301224A1 (en) Systems and methods for image segmentation
CN117649400B (en) Image histology analysis method and system under abnormality detection framework
Kurugol et al. Centerline extraction with principal curve tracing to improve 3D level set esophagus segmentation in CT images
Valsecchi et al. Automatic evolutionary medical image segmentation using deformable models
Reska et al. Fast 3D segmentation of hepatic images combining region and boundary criteria
El-Baz et al. Fast, accurate unsupervised segmentation of 3d magnetic resonance angiography
US20240055124A1 (en) Image analysis method for improved clinical decision making
Hatt et al. A Convolutional Neural Network Approach to Automated Lung Bounding Box Estimation from Computed Tomography Scans
Militzer et al. Learning a prior model for automatic liver lesion segmentation in follow-up CT images
Qazi et al. Feature-driven model-based segmentation
Altarawneh 3D liver segmentation from abdominal computed tomography scans based on a novel level set model
Preim Model-based visualization for intervention planning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15865201

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15323035

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2015865201

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 201709225

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20151031