CN112598634A - CT image organ positioning method based on 3D CNN and iterative search - Google Patents
CT image organ positioning method based on 3D CNN and iterative search
- Publication number
- CN112598634A (Application CN202011499386.7A)
- Authority
- CN
- China
- Prior art keywords
- target
- image
- organ
- cnn
- block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Geometry (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the technical field of medical image processing, and in particular to a CT image organ positioning method based on 3D CNN and iterative search, which comprises the following steps: extracting all target organs and determining the position of each corresponding target organ in the target CT image; performing distance-divided multi-density block sampling based on the target organ; and setting a loss function and performing an iterative search in combination with the trained network model to obtain positioning information for each organ. The invention can realize 3D bounding box prediction for all organs with only a single network, and is flexible and easy to reproduce. Meanwhile, the influence of image volume can be ignored: an original CT image of any size can be predicted accurately, yielding a precise positioning result and greatly improving algorithm performance.
Description
Technical Field
The invention relates to the technical field of medical image processing, in particular to a CT image organ positioning method based on 3D CNN and iterative search.
Background
Currently, CT organ localization is a commonly used preprocessing step in medical image processing. Its aim is to detect the presence or absence of an organ and, if present, to predict the organ's 3D bounding box. Through CT organ positioning, a region of interest can be extracted before other medical image processing is carried out; for subsequent algorithms this reduces storage consumption while also improving their performance.
In recent years, CNN-based deep learning methods have shown strong performance in CT organ localization; networks that automatically extract features from the original images tend to achieve better results than traditional methods based on hand-crafted features. These deep learning methods can be broadly divided into two categories: 2D CNN-based and 3D CNN-based. The 2D CNN approach builds three 2D CNN networks for each organ, whose inputs are three orthogonal 2D slices; the algorithm integrates the organ-presence classification results on the orthogonal slices and finally obtains the 3D bounding box of each organ. However, slice-based methods usually need to forward-propagate a large number of similar 2D slices, which is time-consuming, and they largely ignore the spatial information between slices, so the predictions are not accurate enough. The 3D CNN approach performs multi-organ localization with a 3D CNN network, augments the input into a three-channel image (original image, gradient image, and enhanced similarity map), and adopts a three-branch network (axial, coronal, and sagittal branches); it is the state-of-the-art method in this field. However, these 3D ConvNets do not consider the prior spatial relationships between the organs and tissues within each CT image, information that can help predict organ positions more accurately. Meanwhile, due to GPU memory limitations, these methods may need to down-sample the original CT image, which further reduces prediction accuracy.
Therefore, it is necessary to provide a technical solution to solve the above technical problems.
Disclosure of Invention
In view of this, the embodiment of the present invention provides a CT image organ positioning method based on 3D CNN and iterative search.
The first aspect of the embodiments of the present invention provides a CT image organ positioning method based on 3D CNN and iterative search, comprising the following steps:
S01: extracting all target organs and determining the positions of the corresponding target organs in the target CT image;
S03: based on the target organ, performing distance-divided multi-density block sampling;
S05: setting a loss function, and performing iterative search in combination with the training network model to obtain positioning information of various organs;
S07: judging the presence information of various organs according to the positioning information.
Preferably, in the present invention, before the step S01, the method further includes a step S00: performing trilinear interpolation on the original CT image and normalizing it to obtain the target CT image.
Preferably, in the present invention, the HU values of the target image are limited to [-1000, 1600];
the normalization processing is specifically:
Value_i = (value_i - data.min()) / (data.max() - data.min())
wherein data.min() represents the minimum of the HU values of the target CT image; data.max() represents the maximum of the HU values of the target CT image; value_i represents the i-th HU value in the target CT image; and Value_i represents the i-th HU value in the target CT image after normalization.
Preferably, in the present invention, performing the distance-divided multi-density block sampling based on the target organ specifically includes: the distance bands are 0-8 voxels, 8-16 voxels, 16-32 voxels, 32-128 voxels, and more than 128 voxels.
Preferably, in the present invention, the loss function is L = L_IOU + L_reg + L_cls,
wherein L_IOU represents the B-BOX IOU loss, L_reg represents the center-distance loss, and L_cls represents the binary classification loss.
Preferably, in the present invention, L_IOU is specifically:
L_IOU = 1 - IOU + d²/c² + αv
wherein 1 - IOU corresponds to the overlap term of the block feature parameters; d is the Euclidean distance between the center points of the prediction block of another organ and the target block of the target organ; c is the diagonal length of the smallest rectangular box that can contain both the prediction block and the target block of the target organ, so d²/c² corresponds to the center-point distance term; αv denotes the aspect-ratio term of the block feature parameters, the aspect ratio including the ratio of lengths along the third coordinate axis in three dimensions, with v = (4/π²)(arctan(w_gt/h_gt) - arctan(w/h))² and α = v/((1 - IOU) + v), wherein w_gt and h_gt represent the width and height of the target block of the target organ, and w and h represent the width and height of the prediction block.
Preferably, in the present invention, the positioning information includes one 8-dimensional vector for each of 11 organs, wherein each 8-dimensional vector comprises (center_x, center_y, center_z, h, w, l, ind, ind_cls).
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
the invention is an end-to-end multi-organ positioning method based on 3D ConvNet, which can realize 3D bounding box prediction of all organs only by a single network, and is flexible and easy to reproduce. Meanwhile, the influence of the volume can be ignored, the original CT image with any size can be accurately predicted, an accurate positioning result is obtained, and the algorithm performance is greatly improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flow chart of a CT image organ positioning method based on 3D CNN and iterative search according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of the training of the CT image organ positioning method based on 3D CNN and iterative search according to the second embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
As shown in fig. 1, a first aspect of the embodiment of the present invention provides a CT image organ positioning method based on 3D CNN and iterative search, which includes the following steps:
S01: extracting all target organs and determining the positions of the corresponding target organs in the target CT image;
the targeted training set uses 201 CT images in the LiTS database as the training set and the test set. The training and test sets shown included 11 abdominal organs, respectively: head, left Lung, right Lung, lift, spleens, pancrea, left kidney, right kidney, bladder, left ferromagnetic head, right ferromagnetic head. The slice size in the dataset was 512 x 512, the number of slices varied from 42 to 1026. The resolution within the slice is not very different, 0.56mm to 1.0mm, but the slice thickness varies very much, 0.45mm to 5.0 mm.
ResNet50-3D is used as the backbone network for the target CT image.
S03: based on the target organ, performing distance-divided multi-density block sampling;
the invention adopts a distance-division multi-density block sampling mode to sample in different distance areas from a target organ: for the region far away from the target organ, low density sampling is performed, and it is expected that the organ position can be predicted in a fuzzy manner through the patch. For regions of intermediate distance, intermediate density sampling is performed, which is expected to allow more accurate prediction of other organ positions. For a patch in a short distance, high-density sampling is carried out, so that the organ position is required to be accurately predicted, and the length, width and height of the organ are also required to be predicted. Because in the 3D CT image, along with the increase of the distance from a certain point, the quantity of the samplable batch is exponentially increased, so that different sampling densities can be realized by simply sampling different distance ranges in an equal amount.
A model of the connections among the parts of the human body is established by using the prior relative positions of human tissues; this prior can be obtained from the training set, a data set, or other databases, or set manually. A fully connected layer is then added to the backbone, combined with the block sampling around the target organ on the target CT image, to realize the training network model.
S05: setting a loss function, and performing iterative search by combining the training network model to obtain positioning information of various organs;
because the organs are 11, the finally obtained and output various organ positioning information are 88-dimensional vectors, wherein each group of vectors can be used for expressing position information, error information and the like corresponding to the target organ in detail.
S07: and judging the existence information of various organs according to the positioning information.
By setting the valid parameters of the positioning information and the final valid parameters obtained through loop iteration, a binary classifier is trained with an MSE loss to judge whether the center points of other organ blocks lie within 128 voxels of the target organ, thereby judging the presence information of each organ.
Preferably, in the present invention, before the step S01, the method further includes a step S00: performing trilinear interpolation on the original CT image and normalizing it to obtain the target CT image.
Preferably, in the present invention, the HU values of the target image are limited to [-1000, 1600];
the normalization processing is specifically:
Value_i = (value_i - data.min()) / (data.max() - data.min())
wherein data.min() represents the minimum of the HU values of the target CT image; data.max() represents the maximum of the HU values of the target CT image; value_i represents the i-th HU value in the target CT image; and Value_i represents the i-th HU value in the target CT image after normalization.
Preferably, a resampling method is used to unify the resolution, because the resolution of the CT images in the LiTS data set varies greatly and using them directly gives a very poor training effect. The method adopted by the invention is not limited by the size of the original image, so no downsampling is required; trilinear interpolation is used directly to adjust the resolution uniformly to 0.8mm by 1.0mm. In addition to adjusting the resolution, the data set is normalized by first limiting the HU values to [-1000, 1600] and then normalizing to [0, 1] using the formula.
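The preprocessing step S00 (trilinear resampling followed by HU clipping and min-max normalization) might be sketched as below. The target spacing tuple is an assumption added for illustration, since the exact spacing is stated ambiguously in the text.

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_ct(volume, spacing, target_spacing=(0.8, 0.8, 1.0)):
    """Resample `volume` (HU values) from `spacing` to `target_spacing`,
    then clip HU to [-1000, 1600] and min-max normalize to [0, 1]."""
    volume = np.asarray(volume, dtype=np.float32)
    # Zoom factor per axis: old spacing / new spacing.
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    resampled = zoom(volume, factors, order=1)   # order=1 -> trilinear
    clipped = np.clip(resampled, -1000.0, 1600.0)
    lo, hi = clipped.min(), clipped.max()
    return (clipped - lo) / (hi - lo + 1e-8)     # min-max normalization
```

Because the interpolation only up-samples toward the common spacing, no information-destroying down-sampling of large volumes is forced by the preprocessing itself.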
Thus, low-resolution images are up-sampled toward high resolution by the interpolation method, which can greatly improve the prediction ability of the algorithm of the invention on low-resolution images.
Preferably, in the present invention, performing the distance-divided multi-density block sampling based on the target organ specifically includes: the distance bands are 0-8 voxels, 8-16 voxels, 16-32 voxels, 32-128 voxels, and more than 128 voxels.
This ensures that the algorithm provided by the invention has prediction capabilities at different levels for patches at different distances from each organ, and, combined with the loop iteration strategy, enables continuous coarse-to-fine optimization.
Preferably, in the present invention, the loss function is L = L_IOU + L_reg + L_cls,
wherein L_IOU represents the B-BOX IOU loss, L_reg represents the center-distance loss, and L_cls represents the binary classification loss.
Preferably, in the present invention, L_IOU is specifically:
L_IOU = 1 - IOU + d²/c² + αv
wherein 1 - IOU corresponds to the overlap term of the block feature parameters; d is the Euclidean distance between the center points of the prediction block of another organ and the target block of the target organ; c is the diagonal length of the smallest rectangular box that can contain both the prediction block and the target block of the target organ, so d²/c² corresponds to the center-point distance term; αv denotes the aspect-ratio term of the block feature parameters, the aspect ratio including the ratio of lengths along the third coordinate axis in three dimensions, with v = (4/π²)(arctan(w_gt/h_gt) - arctan(w/h))² and α = v/((1 - IOU) + v), wherein w_gt and h_gt represent the width and height of the target block of the target organ, w and h represent the width and height of the prediction block, and gt is an abbreviation of ground truth, i.e., the true width and height of the target block.
Here a 3D B-BOX calculation is adopted, extending the 2D B-BOX formulation by adding the length ratio along the third coordinate axis.
If the patch center is 16-128 voxels away from the center of an organ, the center-offset prediction (center_x, center_y, center_z) for that organ participates in the L_reg calculation; otherwise L_reg is not computed. Here the mean square error (MSE) loss is used directly. Samples both within and beyond 128 voxels from an organ center are included in L_cls, which also uses the MSE loss; the binary classifier is trained to judge whether the patch center point lies within 128 voxels of the organ. It should be noted that the numbers of samples in the two classes should be balanced.
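A minimal numerical sketch of the composite loss L = L_IOU + L_reg + L_cls for a single axis-aligned 3D box pair is given below. It follows the CIoU-style decomposition named in the text (overlap, center-distance, and aspect-ratio terms); the equal weighting of the three terms and the use of only width/height in the aspect term mirror the formulas above, while everything else (box encoding, epsilon terms) is an illustrative assumption.

```python
import numpy as np

def iou_3d(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, z1, x2, y2, z2)."""
    lo = np.maximum(box_a[:3], box_b[:3])
    hi = np.minimum(box_a[3:], box_b[3:])
    inter = np.prod(np.clip(hi - lo, 0, None))
    vol_a = np.prod(box_a[3:] - box_a[:3])
    vol_b = np.prod(box_b[3:] - box_b[:3])
    return inter / (vol_a + vol_b - inter + 1e-8)

def l_iou(pred, target):
    """L_IOU = 1 - IoU + d^2/c^2 + alpha*v for 3D boxes."""
    iou = iou_3d(pred, target)
    cp = (pred[:3] + pred[3:]) / 2
    ct = (target[:3] + target[3:]) / 2
    d2 = np.sum((cp - ct) ** 2)            # squared center-point distance
    lo = np.minimum(pred[:3], target[:3])
    hi = np.maximum(pred[3:], target[3:])
    c2 = np.sum((hi - lo) ** 2)            # squared enclosing-box diagonal
    w, h = pred[3] - pred[0], pred[4] - pred[1]
    wgt, hgt = target[3] - target[0], target[4] - target[1]
    v = (4 / np.pi**2) * (np.arctan(wgt / hgt) - np.arctan(w / h)) ** 2
    alpha = v / ((1 - iou) + v + 1e-8)
    return (1 - iou) + d2 / (c2 + 1e-8) + alpha * v

def total_loss(pred_box, target_box, pred_center, target_center, pred_cls, target_cls):
    """L = L_IOU + L_reg + L_cls, with MSE for the regression and class terms."""
    l_reg = np.mean((pred_center - target_center) ** 2)
    l_cls = np.mean((pred_cls - target_cls) ** 2)
    return l_iou(pred_box, target_box) + l_reg + l_cls
```

A perfect prediction drives every term to zero, and each term penalizes a different failure mode: overlap, center offset, and box shape.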
Due to the multi-density sampling, patches closer to an organ predict its position and size more accurately. Therefore, a loop-prediction scheme is adopted to refine the result continuously: patches are re-sampled at the current organ prediction and the organ is predicted again, gradually approaching the optimal length, width, height, and position of the organ B-BOX. In actual prediction, non-overlapping sampled patches are fed into the network for judgment, and for each organ the prediction with the highest classification score is selected to start that organ's loop prediction. Patches are then re-sampled at each organ's predicted position and fed into the network again; in each round, only the prediction for that organ is kept, ignoring the predictions for other organs. Generally, about 3 iterations of loop prediction yield a good result. A simple rule judges whether an organ is present in the original CT: if all binary classification results are False, the organ is judged absent; if the organ's final predicted position lies outside the original CT extent, it is also judged absent; otherwise the organ is judged present.
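The loop-prediction scheme above can be sketched as follows. Here `predict_organ` is a hypothetical stand-in for the trained 3D CNN: it is assumed to re-sample patches at the given center and return a refined organ-center estimate plus a classification score. Three rounds follow the observation in the text that about 3 iterations suffice.

```python
def iterative_search(initial_center, predict_organ, image_shape, n_iters=3):
    """Refine an organ-center estimate by re-sampling patches at each prediction."""
    center = initial_center
    score = 0.0
    for _ in range(n_iters):
        # Re-sample around the current estimate and predict again (coarse-to-fine).
        center, score = predict_organ(center)
    # Presence rule from the text: absent if the classifier rejects the patch
    # or the final prediction falls outside the original CT extent.
    inside = all(0 <= c < s for c, s in zip(center, image_shape))
    present = (score >= 0.5) and inside
    return center, present
```

Because patches near an organ are sampled densely during training, each round starts from a better estimate and therefore lands in a regime where the network is more accurate.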
Preferably, in the present invention, the positioning information includes one 8-dimensional vector for each of 11 organs, wherein each 8-dimensional vector comprises (center_x, center_y, center_z, h, w, l, ind, ind_cls).
Wherein center_x, center_y, and center_z are the offsets along the 3 coordinate axes between the center point of a given organ's label B-BOX and the center point of the current sampling block; h, w, and l are the length, width, and height parameters of that organ's B-BOX; ind indicates whether the organ exists in the target CT image (1 if present, 0 if absent); ind does not participate in network training and is used only to decide whether the organ's predictions participate in the loss calculation. ind_cls is a binary variable: 0 if the distance between the organ and the center of the sampling block is greater than 128 voxels, and 1 otherwise. By setting this variable, a binary classifier can be trained to easily screen out patches sampled more than 128 voxels from the organ (i.e., the target organ), regardless of their predictions for that organ. This binary classification is a simple task, equivalent to judging whether the center of the target organ lies within a sphere of diameter 256 voxels, so extremely high prediction accuracy can be reached quickly.
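The 88-dimensional output (11 organs x 8 values) described above might be parsed into per-organ records as follows. The field names mirror the vector layout in the text; the record class itself is an assumption added for clarity, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class OrganPrediction:
    center_x: float  # offsets of the label B-BOX center from the patch center
    center_y: float
    center_z: float
    h: float         # B-BOX length, width, height parameters
    w: float
    l: float
    ind: float       # organ presence flag (not trained; gates loss participation)
    ind_cls: float   # 1 if the patch center is within 128 voxels of the organ

def split_output(vector):
    """Split an 88-dimensional output vector into 11 OrganPrediction records."""
    assert len(vector) == 88, "expected 11 organs x 8 values"
    return [OrganPrediction(*vector[i * 8:(i + 1) * 8]) for i in range(11)]
```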
As shown in fig. 2, a schematic diagram of a training method for CT image organ positioning based on 3D CNN and iterative search according to a second embodiment of the present invention is provided.
Local patches are sampled from the original CT data to predict the offset of the center of each other organ's circumscribed cube relative to the target block center determined for the target organ. Because human tissue varies greatly across CT images, it is impractical to require prediction blocks at every local position to predict an organ's position, length, width, and height accurately; the training model is therefore given different prediction capabilities for prediction blocks at different positions in the CT image. The invention samples in regions at different distances from the target organ using distance-divided multi-density block sampling: for regions far from the target organ, low-density sampling is performed, and such patches are only expected to predict the organ position roughly; for regions at intermediate distances, intermediate-density sampling is performed, and these patches are expected to predict the positions of other organs more accurately; for patches at short distances, high-density sampling is performed, and these are required not only to predict the organ position accurately but also the organ's length, width, and height. Because in a 3D CT image the number of samplable patches grows rapidly with the distance from a given point, different sampling densities can be realized simply by sampling equal numbers of patches from the different distance ranges.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
Claims (7)
1. The CT image organ positioning method based on 3D CNN and iterative search is characterized by comprising the following steps:
S01: extracting all target organs and determining the positions of the corresponding target organs in the target CT image;
S03: based on the target organ, performing distance-divided multi-density block sampling;
S05: setting a loss function, and performing iterative search in combination with the training network model to obtain positioning information of various organs;
S07: judging the presence information of various organs according to the positioning information.
2. The 3D CNN and iterative search based CT image organ location method of claim 1, wherein: before the step S01, the method further includes a step S00: performing trilinear interpolation on the original CT image and normalizing it to obtain the target CT image.
3. The 3D CNN and iterative search based CT image organ location method of claim 2, wherein: the HU values of the target image are limited to [-1000, 1600];
the normalization processing is specifically:
Value_i = (value_i - data.min()) / (data.max() - data.min())
wherein data.min() represents the minimum of the HU values of the target CT image; data.max() represents the maximum of the HU values of the target CT image; value_i represents the i-th HU value in the target CT image; and Value_i represents the i-th HU value in the target CT image after normalization.
4. The 3D CNN and iterative search based CT image organ location method of claim 1, wherein: performing the distance-divided multi-density block sampling based on the target organ specifically includes: the distance bands are 0-8 voxels, 8-16 voxels, 16-32 voxels, 32-128 voxels, and more than 128 voxels.
5. The 3D CNN and iterative search based CT image organ location method of claim 1, wherein: the loss function is L = L_IOU + L_reg + L_cls,
wherein L_IOU represents the B-BOX IOU loss, L_reg represents the center-distance loss, and L_cls represents the binary classification loss.
6. The 3D CNN and iterative search based CT image organ location method of claim 5, wherein L_IOU is specifically:
L_IOU = 1 - IOU + d²/c² + αv
wherein 1 - IOU corresponds to the overlap term of the block feature parameters; d is the Euclidean distance between the center points of the prediction block of another organ and the target block of the target organ; c is the diagonal length of the smallest rectangular box that can contain both the prediction block and the target block of the target organ, so d²/c² corresponds to the center-point distance term; αv denotes the aspect-ratio term of the block feature parameters, the aspect ratio including the ratio of lengths along the third coordinate axis in three dimensions, with v = (4/π²)(arctan(w_gt/h_gt) - arctan(w/h))² and α = v/((1 - IOU) + v), wherein w_gt and h_gt represent the width and height of the target block of the target organ, and w and h represent the width and height of the prediction block.
7. The 3D CNN and iterative search based CT image organ location method of claim 1, wherein: the positioning information includes one 8-dimensional vector for each of 11 organs; wherein each 8-dimensional vector comprises (center_x, center_y, center_z, h, w, l, ind, ind_cls).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011499386.7A CN112598634B (en) | 2020-12-18 | 2020-12-18 | CT image organ positioning method based on 3D CNN and iterative search |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011499386.7A CN112598634B (en) | 2020-12-18 | 2020-12-18 | CT image organ positioning method based on 3D CNN and iterative search |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112598634A true CN112598634A (en) | 2021-04-02 |
CN112598634B CN112598634B (en) | 2022-11-25 |
Family
ID=75199130
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011499386.7A Active CN112598634B (en) | 2020-12-18 | 2020-12-18 | CT image organ positioning method based on 3D CNN and iterative search |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112598634B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130223704A1 (en) * | 2012-02-28 | 2013-08-29 | Siemens Aktiengesellschaft | Method and System for Joint Multi-Organ Segmentation in Medical Image Data Using Local and Global Context |
WO2017062453A1 (en) * | 2015-10-05 | 2017-04-13 | The University Of North Carolina At Chapel Hill | Image segmentation of organs depicted in computed tomography images |
WO2017210690A1 (en) * | 2016-06-03 | 2017-12-07 | Lu Le | Spatial aggregation of holistically-nested convolutional neural networks for automated organ localization and segmentation in 3d medical scans |
CN108364294A (en) * | 2018-02-05 | 2018-08-03 | 西北大学 | Abdominal CT images multiple organ dividing method based on super-pixel |
CN109785306A (en) * | 2019-01-09 | 2019-05-21 | 上海联影医疗科技有限公司 | Organ delineation method, device, computer equipment and storage medium |
CN109934235A (en) * | 2019-03-20 | 2019-06-25 | 中南大学 | A kind of unsupervised abdominal CT sequence image multiple organ automatic division method simultaneously |
CN110223352A (en) * | 2019-06-14 | 2019-09-10 | 浙江明峰智能医疗科技有限公司 | A kind of medical image scanning automatic positioning method based on deep learning |
CN110378910A (en) * | 2019-06-28 | 2019-10-25 | 艾瑞迈迪科技石家庄有限公司 | Abdominal cavity multiple organ dividing method and device based on map fusion |
CN110415252A (en) * | 2018-04-26 | 2019-11-05 | 北京连心医疗科技有限公司 | A kind of eye circumference organ segmentation method, equipment and storage medium based on CNN |
CN111415359A (en) * | 2020-03-24 | 2020-07-14 | 浙江明峰智能医疗科技有限公司 | Method for automatically segmenting multiple organs of medical image |
CN111583204A (en) * | 2020-04-27 | 2020-08-25 | 天津大学 | Organ positioning method of two-dimensional sequence magnetic resonance image based on network model |
CN111640120A (en) * | 2020-04-09 | 2020-09-08 | 之江实验室 | Pancreas CT automatic segmentation method based on significance dense connection expansion convolution network |
2020-12-18: Application CN202011499386.7A filed in China; granted as patent CN112598634B (status: Active)
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130223704A1 (en) * | 2012-02-28 | 2013-08-29 | Siemens Aktiengesellschaft | Method and System for Joint Multi-Organ Segmentation in Medical Image Data Using Local and Global Context |
WO2017062453A1 (en) * | 2015-10-05 | 2017-04-13 | The University Of North Carolina At Chapel Hill | Image segmentation of organs depicted in computed tomography images |
WO2017210690A1 (en) * | 2016-06-03 | 2017-12-07 | Lu Le | Spatial aggregation of holistically-nested convolutional neural networks for automated organ localization and segmentation in 3d medical scans |
CN108364294A (en) * | 2018-02-05 | 2018-08-03 | 西北大学 | Superpixel-based multi-organ segmentation method for abdominal CT images |
CN110415252A (en) * | 2018-04-26 | 2019-11-05 | 北京连心医疗科技有限公司 | CNN-based periocular organ segmentation method, device and storage medium |
CN109785306A (en) * | 2019-01-09 | 2019-05-21 | 上海联影医疗科技有限公司 | Organ delineation method, device, computer equipment and storage medium |
CN109934235A (en) * | 2019-03-20 | 2019-06-25 | 中南大学 | Unsupervised simultaneous automatic multi-organ segmentation method for abdominal CT sequence images |
CN110223352A (en) * | 2019-06-14 | 2019-09-10 | 浙江明峰智能医疗科技有限公司 | Deep-learning-based automatic positioning method for medical image scanning |
CN110378910A (en) * | 2019-06-28 | 2019-10-25 | 艾瑞迈迪科技石家庄有限公司 | Atlas-fusion-based multi-organ segmentation method and device for the abdominal cavity |
CN111415359A (en) * | 2020-03-24 | 2020-07-14 | 浙江明峰智能医疗科技有限公司 | Method for automatically segmenting multiple organs of medical image |
CN111640120A (en) * | 2020-04-09 | 2020-09-08 | 之江实验室 | Automatic pancreas CT segmentation method based on a saliency densely-connected dilated convolutional network |
CN111583204A (en) * | 2020-04-27 | 2020-08-25 | 天津大学 | Organ positioning method of two-dimensional sequence magnetic resonance image based on network model |
Non-Patent Citations (2)
Title |
---|
YAN WANG et al.: "Abdominal multi-organ segmentation with organ-attention networks and statistical fusion", Medical Image Analysis * |
CHEN Zhonghua: "Multi-organ segmentation of medical images based on prior knowledge", China Master's Theses Full-text Database (Information Science and Technology) * |
Also Published As
Publication number | Publication date |
---|---|
CN112598634B (en) | 2022-11-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106780442B (en) | Stereo matching method and system | |
WO2021203795A1 (en) | Automatic pancreas CT segmentation method based on a saliency densely-connected dilated convolutional network | |
US6975755B1 (en) | Image processing method and apparatus | |
CN111583204B (en) | Organ positioning method of two-dimensional sequence magnetic resonance image based on network model | |
CN111415316A (en) | Defect data synthesis algorithm based on generation of countermeasure network | |
CN110705555A (en) | Abdomen multi-organ nuclear magnetic resonance image segmentation method, system and medium based on FCN | |
CN111368769B (en) | Ship multi-target detection method based on improved anchor point frame generation model | |
CN109886929B (en) | MRI tumor voxel detection method based on convolutional neural network | |
CN109035196B (en) | Saliency-based image local blur detection method | |
CN111968138B (en) | Medical image segmentation method based on 3D dynamic edge insensitivity loss function | |
CN110188763B (en) | Image significance detection method based on improved graph model | |
CN110853011A (en) | Method for constructing convolutional neural network model for pulmonary nodule detection | |
CN111369615B (en) | Nuclear central point detection method based on multitasking convolutional neural network | |
CN111583276A (en) | CGAN-based space target ISAR image component segmentation method | |
CN111199245A (en) | Rape pest identification method | |
CN111709430A (en) | Ground extraction method of outdoor scene three-dimensional point cloud based on Gaussian process regression | |
CN115393734A (en) | SAR image ship contour extraction method based on fast R-CNN and CV model combined method | |
CN115564782A (en) | 3D blood vessel and trachea segmentation method and system | |
CN115063435A (en) | Multi-scale inter-class based tumor and peripheral organ segmentation method | |
CN114492619A (en) | Point cloud data set construction method and device based on statistics and concave-convex property | |
CN112598634B (en) | CT image organ positioning method based on 3D CNN and iterative search | |
CN114022526B (en) | SAC-IA point cloud registration method based on three-dimensional shape context | |
Hao et al. | An improved cervical cell segmentation method based on deep convolutional network | |
CN115131628A (en) | Mammary gland image classification method and equipment based on typing auxiliary information | |
CN115797378A (en) | Prostate contour segmentation method based on geometric intersection ratio loss |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||