CN111080657A - CT image organ segmentation method based on convolutional neural network multi-dimensional fusion - Google Patents

CT image organ segmentation method based on convolutional neural network multi-dimensional fusion

Info

Publication number
CN111080657A
Authority
CN
China
Prior art keywords
model
data
image
segmentation
dimensional
Prior art date
Legal status
Pending
Application number
CN201911281983.XA
Other languages
Chinese (zh)
Inventor
杜强
黄丹
匡铭
郭雨晨
聂方兴
Current Assignee
Beijing Xbentury Network Technology Co ltd
Original Assignee
Beijing Xbentury Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xbentury Network Technology Co ltd filed Critical Beijing Xbentury Network Technology Co ltd
Priority to CN201911281983.XA
Publication of CN111080657A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/149 Segmentation; Edge detection involving deformable models, e.g. active contour models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion

Abstract

The invention relates to a CT image organ segmentation method based on convolutional neural network multi-dimensional fusion. The method comprises the following steps: for the 2.5D and 3D models, applying different data processing to the raw data respectively, so that the processed data can be input into the corresponding model for feature extraction and training; setting a loss function and training the 2.5D and 3D models; obtaining different segmentation results from the 2.5D and 3D models; and fusing the different segmentation results using a model fusion technique to obtain the final, accurate result. The method addresses the problems that, when segmenting organ-at-risk images in medical CT images, a 2D model loses spatial information and a 3D model loses detail information; an accurate segmentation model is constructed using a multi-dimensionally fused convolutional neural network, thereby improving the segmentation accuracy of the model.

Description

CT image organ segmentation method based on convolutional neural network multi-dimensional fusion
Technical Field
The invention relates to the technical field of medical image processing, in particular to a CT image organ segmentation method based on convolutional neural network multi-dimensional fusion.
Background
At present, image segmentation technology is widely applied in many practical scenarios, and constructing an accurate and fast segmentation model is an important step in image segmentation, especially in medical imaging applications with extremely high accuracy requirements. Cancer is a major cause of death worldwide, and the number of deaths keeps increasing as the population grows and ages. In cancer therapy, radiation therapy kills cancer cells with high doses of radiation. Before a radiation treatment can be planned, the targeted tumor and its adjacent healthy organs need to be carefully contoured in the CT images; these adjacent healthy organs are called organs at risk (OARs). This contouring is usually done manually by a physician, which takes hours and places high demands on the physician's experience because of the large anatomical variation between patients and the noise introduced during scanning. This creates the conditions for automatically delineating organs with computer image segmentation techniques.
Earlier techniques for OAR segmentation mostly used feature extractors such as the Hough transform, the scale-invariant feature transform (SIFT), the histogram of oriented gradients (HOG) and local binary patterns (LBP) to extract features before segmenting. However, because the final result of such methods depends closely on the extracted features, and these features are all designed by hand, they sometimes fail to describe all the information in an image, especially a medical image. As a result, these conventional methods have gradually been replaced by deep-learning convolutional neural networks (CNNs). Deep learning is a general term for a class of pattern-analysis methods, of which convolutional neural networks are one type. The basic principle is to use a neural network to automatically learn image features that are useful for segmentation: the feature representation of a sample in the original space is transformed, layer by layer, into a new feature space, and the segmentation task is performed after repeated feature extraction and transformation.
The existing deep-learning image segmentation methods applied in the medical field fall mainly into two categories: one works on single images, namely the traditional 2D U-Net; the other works on the entire CT volume, i.e. in 3D space, namely the V-Net.
Because a CT scan produces a three-dimensional (3D) image, segmenting it slice by slice with U-Net loses some of the original spatial information of the 3D volume, so the segmented result may lack continuity. With V-Net, on the other hand, the CT volume is large and must be downsampled before segmentation; this step discards some of the detail contained in the original data and costs the result some fine-grained accuracy. How to retain the detail of the original data to the greatest extent while still exploiting the 3D spatial information contained in the CT volume therefore still needs further research.
Disclosure of Invention
The invention aims to solve the technical problem that, when segmenting organ-at-risk (OAR) images in medical CT images, a 2D model loses spatial information and a 3D model loses detail information; it constructs an accurate segmentation model using a multi-dimensionally fused convolutional neural network, thereby improving the segmentation accuracy of the model.
Technical objects that can be achieved by the present invention are not limited to what has been particularly described above, and other technical objects that are not described herein will be more clearly understood by those skilled in the art from the following detailed description.
The technical scheme for solving the technical problems is as follows:
according to an aspect of the present disclosure, the present invention provides a method comprising:
for the 2.5D and 3D models, applying different data processing to the raw data respectively, so that the processed data can be input into the corresponding model for feature extraction and training;
setting a loss function and training the 2.5D and 3D models;
obtaining different segmentation results from the 2.5D and 3D models; and
fusing the different segmentation results using a model fusion technique to obtain the final, accurate result.
Optionally, in the method as described above, for the 2.5D model, the raw data are first uniformly resized to 512 × 512 × 128, and each single slice is then stacked with its preceding and following slices in order to add spatial information on the 2D basis.
Optionally, in the method as described above, for the 3D model branch, coarse-resolution data and fine-resolution data are prepared: the raw data are uniformly resized to 96 × 96 × 96 as the coarse-resolution data, and, according to the segmentation labels of the existing raw data, patches of size 96 × 96 × 96 are sampled multiple times from the part to be segmented at the original resolution as the fine-resolution data.
Optionally, in the method as described above, for the resized raw data and for the nth slice, the first two dimensions are fixed and the (n-1)th and (n+1)th slices are stacked with the nth slice along the third dimension; then the first and third dimensions are fixed and the neighbouring slices are stacked along the second dimension; finally the second and third dimensions are fixed and the neighbouring slices are stacked along the first dimension, producing 2.5D data in three different orientations.
Optionally, in the method as described above, the main framework of the 2.5D model part is a conventional UNet.
Optionally, in the method as described above, the coarse-resolution data are used to train a coarse-resolution model that roughly locates the region containing the organ to be segmented as a volume of interest; the fine-resolution data are used to train a fine-resolution model; the two trained models are then combined, and the fine-resolution model performs fine segmentation within the volume of interest found by the coarse-resolution model.
Optionally, in the method as described above, the loss function is a weighted Loss,
Loss=α×Dice loss+β×Focal loss
wherein:
Dice loss = 1 - (2 × Σ_n y_n y'_n + ε) / (Σ_n y_n + Σ_n y'_n + ε)

Focal loss = -Σ_n [ y_n (1 - y'_n)^γ log(y'_n) + (1 - y_n) (y'_n)^γ log(1 - y'_n) ]
wherein: y_n is the label, y'_n is the model output, α and β are hyperparameters with α + β = 1.0 and ε = 1.0, and γ is the focusing factor of the Focal loss.
Optionally, in the method as described above, in order to integrate the results obtained by the 2.5D model and the 3D model, the results of the 2.5D model in the three directions and the result output by the 3D model are fused by voting to obtain the final segmentation result.
The above-described embodiments are only some of the embodiments of the present invention, and those skilled in the art can derive and understand various embodiments including technical features of the present invention from the following detailed description of the present invention.
According to the invention, by fusing the results of the 2.5D and 3D models and handling certain details in the two branches, part of the spatial information and detail information that would otherwise be lost can be retained to the greatest extent; combining the two methods lets each contribute its advantages and effectively improves the accuracy of the segmentation result. Experimental results show that the CT image multi-organ segmentation method based on convolutional neural network multi-dimensional fusion generalizes well and achieves high accuracy, and it has strong prospects for practical application.
It will be appreciated by persons skilled in the art that the effects that can be achieved by the present invention are not limited to what has been particularly described hereinabove and other advantages of the present invention will be more clearly understood from the following detailed description.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention, illustrate embodiments of the invention and together with the description serve to explain the principle of the invention.
Fig. 1 is a schematic diagram of a method for segmenting an organ in a CT image based on convolutional neural network multi-dimensional fusion according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a UNet model architecture of a CT image organ segmentation method based on convolutional neural network multi-dimensional fusion according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a 3D model flow of a CT image organ segmentation method based on convolutional neural network multi-dimensional fusion according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a VNet model architecture of a CT image organ segmentation method based on convolutional neural network multi-dimensional fusion according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments of the invention, examples of which are illustrated in the accompanying drawings. The detailed description, which will be given below with reference to the accompanying drawings, is intended to explain exemplary embodiments of the present invention, rather than to show the only embodiments that can be implemented according to the present invention. The following detailed description includes specific details in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these specific details.
In some instances, well-known structures and devices are omitted or shown in block diagram form, focusing on important features of the structures and devices so as not to obscure the concept of the present invention. The same reference numbers will be used throughout the specification to refer to the same or like parts.
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
In the description of the present invention, it is to be understood that the terms "upper", "lower", "center", "inner", "outer", "top", "bottom", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the device or element referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Fig. 1 shows a schematic diagram of the method for segmenting organs in CT images based on convolutional neural network multi-dimensional fusion according to an embodiment of the present invention. The original 3D CT data are processed in different ways and fed into the separate 2.5D and 3D models, which learn and extract features and each produce a segmentation result. Finally, the two results are fused by model fusion to obtain the final result.
The specific process of the present invention comprises the following steps. First, for the 2.5D branch and the 3D branch, the raw data are processed differently so that they can be input into the corresponding model for feature extraction and training. Second, a loss function is set and the models of the two branches are trained. Third, different segmentation results are obtained from the different branches. Fourth, the results of the two branches are fused using a model fusion technique to obtain the final, accurate result. The specific implementation is as follows:
1 data processing
Because the CT images in the original data set differ in size, and because the processing flow has two branches that require different data, the raw data must first be processed:
A. For the 2.5D model branch:
the image formats of the original data sets are of varying sizes and are denoted herein by M x N x H, i.e., M is long, N is wide, and H is high. Firstly, uniformly changing (resize) original data into 512 × 128 size, and then, in order to increase spatial information on the basis of 2D, performing front-back superposition operation on a single data sheet: assuming that the third dimension is operated on, an example of the original CT image can generate 128 pieces of 2.5D training data, for example, 2 nd piece of [ x1, x2, x3] connected images with the size of 512 x3, and for example, 3 rd piece of [ x2, x3, x4] connected images with the size of 512 x3 (xn is the first two-dimensional invariant in the original image, and the nth image in the third dimension is taken). In order to make better use of the information in three dimensions, one three dimension is fixed and the second dimension is superimposed as in the above method; and fixing two dimensions and three dimensions, and overlapping the first dimension to manufacture 2.5D data with three different dimensions.
B. For the 3D model branch:
In order to address the 3D model's lack of detail, the invention provides a coarse-resolution plus fine-resolution processing scheme, described in detail in the next part, so two sets of data need to be prepared: coarse-resolution data and fine-resolution data. The coarse-resolution data are obtained by uniformly resizing the raw data to 96 × 96 × 96; the fine-resolution data are obtained by sampling patches of size 96 × 96 × 96 multiple times from the part to be segmented at the original resolution, according to the segmentation labels of the existing raw data. This completes all the data preparation.
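The following sketch illustrates the two kinds of data preparation just described, assuming NumPy volumes and a SciPy resize. The patch count, the centring of patches on random labelled voxels, and the function names are illustrative assumptions rather than details fixed by the patent.

```python
import numpy as np
from scipy.ndimage import zoom

def make_coarse_volume(volume, size=96):
    """Resize the whole CT volume to size^3 for the coarse-resolution branch."""
    return zoom(volume, [size / s for s in volume.shape], order=1)

def sample_fine_patches(volume, label, size=96, n_patches=4, rng=None):
    """Sample size^3 patches at the original resolution around labelled voxels
    (the number of patches and the centring strategy are assumptions)."""
    rng = rng or np.random.default_rng()
    coords = np.argwhere(label > 0)          # voxels belonging to the organ label
    patches, masks = [], []
    for _ in range(n_patches):
        c = coords[rng.integers(len(coords))]
        # Clamp the patch so it stays inside the volume (volume assumed >= size^3).
        start = [int(np.clip(c[d] - size // 2, 0, volume.shape[d] - size))
                 for d in range(3)]
        sl = tuple(slice(s, s + size) for s in start)
        patches.append(volume[sl])
        masks.append(label[sl])
    return np.stack(patches), np.stack(masks)
```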
2 model construction
A. 2.5D model branch:
for the 2.5D model part, its main framework inherits the traditional UNet. UNet is established on the framework of a full convolution neural network, and the network framework is modified and expanded, so that a very accurate segmentation result can be obtained by using few training images, an encoder-decoder structure is adopted, the down sampling of an encoder (encoder) is 16 times, the up sampling of a decoder (decoder) is 16 times, the number of channels is doubled every down sampling, and the size of a feature map is reduced by one time; each time the channel number is reduced by one time and the feature size is enlarged by one time, the increase of the channel number allows more information of the original image texture to be transmitted in the high resolution layer, as shown in fig. 2. On the basis of the original UNet, an attention mechanism is introduced in the up-sampling process, the attention mechanism is simply to focus attention on important points and ignore unimportant points, and according to different application scenes, the attention mechanism has a space attention and a time attention which are respectively used for image processing and natural language processing. The attention mechanism can be generally described by Query and Key-Value pair, the process is that given a Query, the correlation weight of the Query and each Key-Value is obtained and normalized by calculating the similarity of the Query and the Key, and the larger the weight is, the more important it is. The attention mechanism calculation process is as follows:
a_i = softmax(f(Q, K_i)) = exp(f(Q, K_i)) / Σ_j exp(f(Q, K_j))

Attention(Q, K, V) = Σ_i a_i × V_i
q, K, V are Query, Key, Value vector or matrix, f is similarity, and the following calculation methods are common:
Dot product: f(Q, K_i) = Q^T K_i
Weighted dot product: f(Q, K_i) = Q^T W K_i
Concatenation with weights: f(Q, K_i) = W[Q^T; K_i]
Neural network: f(Q, K_i) = sigmoid(WQ + UK_i)
Addition: f(Q, K_i) = Q + K_i
where W and U are parameters obtained by learning.
In the attention step of the UNet, the feature map before each downsampling in the encoder is used as the Query, and the feature map before the corresponding upsampling in the decoder is used as the Key and the Value. The computation is as follows:
f(Q,K)=Q+K
Attention feature map=sigmoid(f(Q,K))×Value
where the Attention feature map is the attention-weighted feature map passed on to the decoder.
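A minimal PyTorch sketch of this attention step follows, assuming 2D feature maps. The 1×1 convolution used to align channel counts, the channel widths and the class name are illustrative assumptions; the additive similarity f(Q, K) = Q + K and the sigmoid gating follow the formulas above.

```python
import torch
import torch.nn as nn

class AdditiveAttentionGate(nn.Module):
    """Sketch of the additive attention gate: f(Q, K) = Q + K and the gated
    output is sigmoid(f(Q, K)) * Value. The 1x1 projection is an assumption
    added so the sketch runs on tensors with different channel widths."""
    def __init__(self, enc_channels, dec_channels):
        super().__init__()
        self.proj_q = nn.Conv2d(enc_channels, dec_channels, kernel_size=1)

    def forward(self, enc_feat, dec_feat):
        q = self.proj_q(enc_feat)      # Query: encoder map before downsampling
        k = v = dec_feat               # Key and Value: decoder map before upsampling
        gate = torch.sigmoid(q + k)    # f(Q, K) = Q + K, then sigmoid
        return gate * v                # attention-weighted feature map

# Example with hypothetical channel widths and spatial sizes:
enc = torch.randn(1, 64, 64, 64)
dec = torch.randn(1, 128, 64, 64)
out = AdditiveAttentionGate(64, 128)(enc, dec)   # shape (1, 128, 64, 64)
```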
When a CNN is used for image segmentation, a convolution is usually followed by pooling to downsample, which reduces the image size and also enlarges the receptive field. Dilated (atrous) convolution can enlarge the receptive field without reducing the image size, so the convolution output contains information over a large range; however, because it does not shrink the image, its computation and memory cost are much higher than pooling, and the medical images input to the UNet are often large. Therefore a dilated convolution with a dilation rate of 5 is used at the bottom of the UNet (the block highlighted in red in Fig. 2), which enlarges the receptive field without losing resolution while keeping the amount of computation and memory occupation under control. As shown in Fig. 2.
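The sketch below shows what such a bottleneck block could look like in PyTorch, assuming 2D feature maps. The dilation rate of 5 follows the description above; the channel width, kernel size and padding are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Dilated-convolution bottleneck sketch: dilation=5 enlarges the receptive
# field while padding keeps the spatial size unchanged.
bottleneck = nn.Sequential(
    nn.Conv2d(512, 512, kernel_size=3, dilation=5, padding=5),
    nn.BatchNorm2d(512),
    nn.ReLU(inplace=True),
)

x = torch.randn(1, 512, 32, 32)     # hypothetical bottleneck feature map
y = bottleneck(x)                   # still (1, 512, 32, 32): receptive field grows,
                                    # resolution is preserved
```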
Because the original CT image has three dimensions, in order to better retain and exploit the spatial information, three 2.5D segmentation models are built, one for each slicing direction (length, width and height), using the correspondingly prepared data described in the data-processing part.
B. 3D model branch:
for the 3D model part, coarse resolution as well as fine resolution data are prepared according to the foregoing. The specific process is as follows: first, using coarse resolution data, a first model is trained to roughly locate the region of the organ to be segmented, referred to as the volume of interest (VOI). The fine resolution data is then used to train a finely segmented model. And then combining the two trained models, and finely segmenting the fine resolution model in the VOI found by the coarse resolution model, as shown in FIG. 3, so as to solve the problem that certain detailed information is lost due to the limitation of the size of the image in the conventional 3D segmentation method. While both coarse and fine resolution models use vnets. VNet is a three-dimensional image segmentation method based on volumetric full convolution neural network, which employs an encode-decoder structure, the encoder (encoder) using convolution operations, in order to extract features from the input image, and after each stage has ended to reduce its resolution and downsample with appropriate steps to reduce the size of the signal presented as input, and increase the receptive field of the subsequent nets, the decoder (decoder) extracts features and extends the spatial support of the lower resolution feature maps, in order to collect and combine the necessary information to output a two-channel volumetric segmentation. As shown in fig. 4.
3 Constructing the loss function
Common loss functions for segmentation include cross entropy, Dice loss, Focal loss and mean IoU; the invention adopts a weighted loss,
Loss=α×Dice loss+β×Focal loss
wherein:
Dice loss = 1 - (2 × Σ_n y_n y'_n + ε) / (Σ_n y_n + Σ_n y'_n + ε)

Focal loss = -Σ_n [ y_n (1 - y'_n)^γ log(y'_n) + (1 - y_n) (y'_n)^γ log(1 - y'_n) ]
wherein:
y_n is the label, y'_n is the model output, α and β are hyperparameters with α + β = 1.0 and ε = 1.0, and γ is the focusing factor of the Focal loss.
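A PyTorch sketch of this weighted loss is given below for the binary (per-organ) case. The α·Dice + β·Focal combination and ε = 1 follow the description above; the focusing factor γ = 2 and the exact focal formulation are standard choices assumed here, not values taken from the patent.

```python
import torch

def weighted_loss(pred, target, alpha=0.5, beta=0.5, gamma=2.0, eps=1.0):
    """Sketch of Loss = alpha * Dice loss + beta * Focal loss.
    `pred` holds foreground probabilities in [0, 1]; `target` is binary.
    gamma and the focal form are assumed standard choices."""
    pred = pred.reshape(-1)
    target = target.reshape(-1)

    # Soft Dice loss with smoothing term eps.
    inter = (pred * target).sum()
    dice_loss = 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

    # Binary focal loss (standard form).
    p_t = torch.where(target > 0.5, pred, 1.0 - pred).clamp(min=1e-7)
    focal_loss = (-(1.0 - p_t) ** gamma * torch.log(p_t)).mean()

    return alpha * dice_loss + beta * focal_loss

# Example with random tensors:
pred = torch.rand(2, 1, 96, 96, 96)
target = (torch.rand(2, 1, 96, 96, 96) > 0.5).float()
loss = weighted_loss(pred, target)
```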
4 Model fusion
In order to integrate the results obtained by the 2.5D and 3D models and combine the advantages of both, the results of the 2.5D model in the three directions and the result output by the 3D model are fused by voting to obtain the final segmentation result.
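A minimal sketch of this voting fusion for binary masks follows. The tie-breaking rule and the function name are assumptions; the majority vote over the three 2.5D results and the 3D result follows the description above.

```python
import numpy as np

def vote_fusion(masks):
    """Majority-vote fusion of binary segmentation masks (sketch).
    `masks` is a list of arrays of identical shape, e.g. the three 2.5D
    results plus the 3D result; ties counting as foreground is an assumption."""
    stacked = np.stack([m.astype(np.uint8) for m in masks], axis=0)
    votes = stacked.sum(axis=0)
    return (votes * 2 >= len(masks)).astype(np.uint8)

# Example with four hypothetical predictions on a small grid:
preds = [np.random.randint(0, 2, (8, 8, 8)) for _ in range(4)]
fused = vote_fusion(preds)
```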
With the above technical scheme, on top of the original segmentation methods, the problems of losing spatial information when segmenting 2D images and losing detail information when segmenting 3D images are addressed, and the segmentation accuracy of the model is improved. The invention provides a network architecture built on UNet with an attention mechanism and dilated convolution: the network uses the feature map before each downsampling in the encoder as the Query and the feature map before upsampling in the decoder as the Key and Value, and applies a dilated convolution with a dilation rate of 5 at the bottom of the UNet, enlarging the receptive field without increasing the amount of computation or memory use.
According to the results of the open competition SegTHOR Challenge 2019 (Segmentation of THoracic Organs at Risk in CT images), the CT image organ segmentation method based on convolutional neural network multi-dimensional fusion disclosed by the invention shows its superiority. Specifically, in the challenge, 50 CT images are used as the training set and 20 as the test set, and the evaluation metric is the mean Dice coefficient, where Dice = 1 - Dice loss. In the challenge, the invention achieves an excellent mean Dice of 0.9067.
From the above description of the embodiments, it is obvious for those skilled in the art that the present application can be implemented by software and necessary general hardware, and of course, can also be implemented by hardware. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments of the present application.
As mentioned above, a detailed description of the preferred embodiments of the invention has been given to enable those skilled in the art to make and practice the invention. Although the present invention has been described with reference to exemplary embodiments, those skilled in the art will appreciate that various modifications and changes can be made in the present invention without departing from the spirit or scope of the invention described in the appended claims. Thus, the present invention is not intended to be limited to the particular embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A CT image organ segmentation method based on convolutional neural network multi-dimensional fusion is characterized by comprising the following steps:
S1: for the 2.5D and 3D models, respectively applying different data processing to the raw data so that the processed data can be input into the corresponding model for feature extraction and training;
S2: setting a loss function and training the 2.5D and 3D models;
S3: obtaining different segmentation results from the 2.5D and 3D models; and
S4: fusing the different segmentation results using a model fusion technique to obtain the final, accurate result.
2. The method of claim 1,
in S1, for the 2.5D model, the raw data are first uniformly resized to 512 × 512 × 128, and each single slice is then stacked with its preceding and following slices in order to add spatial information on a 2D basis.
3. The method of claim 1,
in S1, for the 3D model branch, coarse-resolution data and fine-resolution data are prepared: the raw data are uniformly resized to 96 × 96 × 96 as the coarse-resolution data, and patches of size 96 × 96 × 96 are sampled multiple times from the part to be segmented at the original resolution as the fine-resolution data, according to the segmentation labels of the existing raw data.
4. The method of claim 2,
for the resized raw data and for the nth slice, the first two dimensions are fixed and the (n-1)th and (n+1)th slices are stacked with the nth slice along the third dimension; then the first and third dimensions are fixed and the neighbouring slices are stacked along the second dimension; finally the second and third dimensions are fixed and the neighbouring slices are stacked along the first dimension, producing 2.5D data in three different orientations.
5. The method of claim 1,
in S1, for the 2.5D model part, its main framework employs a conventional UNet.
6. The method of claim 3,
firstly, the coarse-resolution data are used to train a coarse-resolution model that roughly locates the region containing the organ to be segmented as the volume of interest; the fine-resolution data are then used to train a fine-resolution model; the two trained models are then combined, and the fine-resolution model performs fine segmentation within the volume of interest found by the coarse-resolution model.
7. The method of claim 1,
at S2, the Loss function takes a weighted Loss,
Loss=α×Dice loss+β×Focal loss
wherein:
Dice loss = 1 - (2 × Σ_n y_n y'_n + ε) / (Σ_n y_n + Σ_n y'_n + ε)

Focal loss = -Σ_n [ y_n (1 - y'_n)^γ log(y'_n) + (1 - y_n) (y'_n)^γ log(1 - y'_n) ]
wherein: y_n is the label, y'_n is the model output, α and β are hyperparameters with α + β = 1.0 and ε = 1.0, and γ is the focusing factor of the Focal loss.
8. The method of claim 1,
in S4, in order to integrate the results obtained from the 2.5D model and the 3D model, the results of the 2.5D model in the three directions and the result output by the 3D model are fused by voting to obtain the final segmentation result.
CN201911281983.XA 2019-12-13 2019-12-13 CT image organ segmentation method based on convolutional neural network multi-dimensional fusion Pending CN111080657A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911281983.XA CN111080657A (en) 2019-12-13 2019-12-13 CT image organ segmentation method based on convolutional neural network multi-dimensional fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911281983.XA CN111080657A (en) 2019-12-13 2019-12-13 CT image organ segmentation method based on convolutional neural network multi-dimensional fusion

Publications (1)

Publication Number Publication Date
CN111080657A true CN111080657A (en) 2020-04-28

Family

ID=70314350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911281983.XA Pending CN111080657A (en) 2019-12-13 2019-12-13 CT image organ segmentation method based on convolutional neural network multi-dimensional fusion

Country Status (1)

Country Link
CN (1) CN111080657A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798462A (en) * 2020-06-30 2020-10-20 电子科技大学 Automatic delineation method for nasopharyngeal carcinoma radiotherapy target area based on CT image
CN111882535A (en) * 2020-07-21 2020-11-03 中国计量大学 Resistance welding shear strength identification method based on improved Unet network
CN112132917A (en) * 2020-08-27 2020-12-25 盐城工学院 Intelligent diagnosis method for rectal cancer lymph node metastasis
CN112508900A (en) * 2020-11-30 2021-03-16 上海交通大学 Cytopathology image segmentation method and device
CN112561877A (en) * 2020-12-14 2021-03-26 中国科学院深圳先进技术研究院 Multi-scale double-channel convolution model training method, image processing method and device
CN113139627A (en) * 2021-06-22 2021-07-20 北京小白世纪网络科技有限公司 Mediastinal lump identification method, system and device
CN116912214A (en) * 2023-07-19 2023-10-20 首都医科大学宣武医院 Method, apparatus and storage medium for segmenting aneurysm detection image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064473A (en) * 2018-07-26 2018-12-21 华南理工大学 A kind of 2.5D ultrasonic wide-scene image partition method
CN109523560A (en) * 2018-11-09 2019-03-26 成都大学 A kind of three-dimensional image segmentation method based on deep learning
CN110047080A (en) * 2019-03-12 2019-07-23 天津大学 A method of the multi-modal brain tumor image fine segmentation based on V-Net
CN110458142A (en) * 2019-08-21 2019-11-15 青岛根尖智能科技有限公司 A kind of face identification method and system merging 2D and 3D

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064473A (en) * 2018-07-26 2018-12-21 华南理工大学 A kind of 2.5D ultrasonic wide-scene image partition method
CN109523560A (en) * 2018-11-09 2019-03-26 成都大学 A kind of three-dimensional image segmentation method based on deep learning
CN110047080A (en) * 2019-03-12 2019-07-23 天津大学 A method of the multi-modal brain tumor image fine segmentation based on V-Net
CN110458142A (en) * 2019-08-21 2019-11-15 青岛根尖智能科技有限公司 A kind of face identification method and system merging 2D and 3D

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KE HU et al.: "A 2.5D Cancer Segmentation for MRI Images Based on U-Net", 2018 5th International Conference on Information Science and Control Engineering *
WENTAO ZHU et al.: "AnatomyNet: Deep Learning for Fast and Fully Automated Whole-volume Segmentation of Head and Neck Anatomy", https://arxiv.org/abs/1808.05238v2 *
孙锦峰 et al.: "Fully automatic segmentation of the hepatic veins and portal vein based on W-Net", Chinese Journal of Biomedical Engineering *
邢成颜 et al.: "Clinical Medicine Diagnosis and Treatment Series: Medical Imaging Volume", 31 July 2008 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798462A (en) * 2020-06-30 2020-10-20 电子科技大学 Automatic delineation method for nasopharyngeal carcinoma radiotherapy target area based on CT image
CN111882535A (en) * 2020-07-21 2020-11-03 中国计量大学 Resistance welding shear strength identification method based on improved Unet network
CN111882535B (en) * 2020-07-21 2023-06-27 中国计量大学 Resistance welding shear strength identification method based on improved Unet network
CN112132917A (en) * 2020-08-27 2020-12-25 盐城工学院 Intelligent diagnosis method for rectal cancer lymph node metastasis
CN112508900A (en) * 2020-11-30 2021-03-16 上海交通大学 Cytopathology image segmentation method and device
CN112508900B (en) * 2020-11-30 2022-11-01 上海交通大学 Cytopathology image segmentation method and device
CN112561877A (en) * 2020-12-14 2021-03-26 中国科学院深圳先进技术研究院 Multi-scale double-channel convolution model training method, image processing method and device
CN112561877B (en) * 2020-12-14 2024-03-29 中国科学院深圳先进技术研究院 Multi-scale double-channel convolution model training method, image processing method and device
CN113139627A (en) * 2021-06-22 2021-07-20 北京小白世纪网络科技有限公司 Mediastinal lump identification method, system and device
CN113139627B (en) * 2021-06-22 2021-11-05 北京小白世纪网络科技有限公司 Mediastinal lump identification method, system and device
CN116912214A (en) * 2023-07-19 2023-10-20 首都医科大学宣武医院 Method, apparatus and storage medium for segmenting aneurysm detection image
CN116912214B (en) * 2023-07-19 2024-03-22 首都医科大学宣武医院 Method, apparatus and storage medium for segmenting aneurysm detection image

Similar Documents

Publication Publication Date Title
CN111080657A (en) CT image organ segmentation method based on convolutional neural network multi-dimensional fusion
Liu et al. An encoder-decoder neural network with 3D squeeze-and-excitation and deep supervision for brain tumor segmentation
CN113012172B (en) AS-UNet-based medical image segmentation method and system
CN112184748B (en) Deformable context coding network model and method for segmenting liver and liver tumor
CN107492071A (en) Medical image processing method and equipment
CN116012344B (en) Cardiac magnetic resonance image registration method based on mask self-encoder CNN-transducer
CN112529909A (en) Tumor image brain region segmentation method and system based on image completion
CN110648331B (en) Detection method for medical image segmentation, medical image segmentation method and device
Zhu et al. Arbitrary scale super-resolution for medical images
CN113052856A (en) Hippocampus three-dimensional semantic network segmentation method based on multi-scale feature multi-path attention fusion mechanism
CN116309650A (en) Medical image segmentation method and system based on double-branch embedded attention mechanism
CN114066913B (en) Heart image segmentation method and system
CN114972248A (en) Attention mechanism-based improved U-net liver tumor segmentation method
CN112288749A (en) Skull image segmentation method based on depth iterative fusion depth learning model
CN115375711A (en) Image segmentation method of global context attention network based on multi-scale fusion
CN116612174A (en) Three-dimensional reconstruction method and system for soft tissue and computer storage medium
CN115661165A (en) Glioma fusion segmentation system and method based on attention enhancement coding and decoding network
Jiang et al. ALA-Net: Adaptive lesion-aware attention network for 3D colorectal tumor segmentation
CN115375897A (en) Image processing method, apparatus, device and medium
CN113744284B (en) Brain tumor image region segmentation method and device, neural network and electronic equipment
CN115496732A (en) Semi-supervised heart semantic segmentation algorithm
Yuan et al. FM-Unet: Biomedical image segmentation based on feedback mechanism Unet
CN114882135A (en) CT image synthesis method, device, equipment and medium based on MR image
CN114782532A (en) Spatial attention method and device for PET-CT (positron emission tomography-computed tomography) multi-modal tumor segmentation
CN112967295A (en) Image processing method and system based on residual error network and attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200428)