CN108596884B - Esophagus cancer segmentation method in chest CT image - Google Patents

Esophagus cancer segmentation method in chest CT image

Info

Publication number
CN108596884B
CN108596884B (application CN201810335223.1A)
Authority
CN
China
Prior art keywords
esophageal cancer
image
segmentation
layer
chest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810335223.1A
Other languages
Chinese (zh)
Other versions
CN108596884A (en)
Inventor
陈树超
陈洪波
刘立志
黎浩江
徐绍凯
傅嘉文
朱志华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Sun Yat Sen University
Original Assignee
Guilin University of Electronic Technology
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology and Sun Yat Sen University
Priority to CN201810335223.1A
Publication of CN108596884A
Application granted
Publication of CN108596884B
Legal status: Expired - Fee Related (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10081: Computed X-ray tomography [CT]
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20112: Image segmentation details
    • G06T 2207/20132: Image cropping
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30096: Tumor; Lesion

Abstract

The invention discloses a method for segmenting esophageal cancer in chest CT images. Several groups of CT images containing esophageal cancer are first selected as training samples. The selected CT images are preprocessed to obtain esophageal cancer features, and after feature description the resulting images serve as training data. An esophageal cancer semantic segmentation model based on a fully convolutional neural network is established, and the described esophageal cancer features are fed into the network as learning samples for training, yielding an esophageal cancer segmentation network model. The segmentation results produced by this model are then reconstructed and analyzed in three dimensions to obtain esophageal cancer radiomics (imaging omics) parameters in three-dimensional space, and these parameters are finally displayed visually. The method offers a small model size, high speed, and high accuracy.

Description

Esophagus cancer segmentation method in chest CT image
Technical Field
The invention relates to the technical field of medical image processing, in particular to a method for segmenting esophageal cancer in a chest CT image.
Background
With the continuous development of computer technology, medical imaging has advanced rapidly, so that more and more medical data can be processed and assessed by computer, improving the success rate of disease diagnosis and treatment. In the diagnosis and detection of esophageal cancer, computed tomography (CT) is the most common modality apart from endoscopy: CT provides gray-scale images of contiguous slices through a region of the body and shows internal physiology in detail, so radiomics methods based on CT image processing are widely used in esophageal cancer diagnosis and prognosis research. In a computer-aided diagnosis system, accurate segmentation of esophageal cancer in CT scans is the key technology underlying subsequent tumor case analysis and three-dimensional reconstruction. Effective, stable esophageal cancer segmentation not only increases diagnostic accuracy but also provides radiomics information for prognosis. However, the esophagus occupies only a small portion of a CT slice, lies close against neighboring organs, accounts for a small proportion relative to other tissues and organs, and has low contrast with them, so conventional image segmentation methods struggle to extract the esophageal cancer region, hindering radiomics analysis and prognosis prediction for esophageal cancer.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a method for segmenting esophageal cancer in a chest CT image.
The technical scheme for realizing the purpose of the invention is as follows:
A method for segmenting esophageal cancer in a chest CT image comprises the following steps:
1) selecting several groups of CT images containing esophageal cancer as training samples, which define the data range for step 3);
2) preprocessing the CT images selected in step 1) to obtain esophageal cancer features and, after feature description, using the resulting images as the training data for step 3);
3) establishing an esophageal cancer semantic segmentation model based on a fully convolutional neural network, feeding the esophageal cancer features described in step 2) into the network as learning samples, and training to obtain an esophageal cancer segmentation network model;
4) three-dimensional reconstruction of esophageal cancer: reconstructing and analyzing, in three dimensions, the segmentation results produced by the model obtained in step 3) to obtain esophageal cancer radiomics parameters in three-dimensional space, ready for step 5);
5) visually displaying the esophageal cancer radiomics parameters obtained in step 4).
In step 2), specifically, the chest-CT DICOM image of each slice is converted into a bitmap according to the window width and window level, and the CT image is cropped to an 80 × 80 pixel image that contains the esophagus; the cropped image serves as the feature input of the fully convolutional neural network.
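By way of illustration, a minimal Python sketch of this preprocessing step follows. It assumes the pydicom and numpy libraries; the window settings, file name, and crop center are illustrative assumptions, not values fixed by the invention.

    import numpy as np
    import pydicom

    def dicom_to_bitmap(path, window_center=40.0, window_width=400.0):
        """Convert one chest-CT DICOM slice to an 8-bit bitmap via window width/level."""
        ds = pydicom.dcmread(path)
        hu = ds.pixel_array.astype(np.float32)
        # Map raw pixel values to Hounsfield units using the DICOM header.
        hu = hu * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
        lo = window_center - window_width / 2.0
        hi = window_center + window_width / 2.0
        windowed = np.clip(hu, lo, hi)
        return ((windowed - lo) / (hi - lo) * 255.0).astype(np.uint8)

    def crop_esophagus_patch(bitmap, center_row, center_col, size=80):
        """Crop the 80 x 80 patch that contains the esophagus region."""
        half = size // 2
        return bitmap[center_row - half:center_row + half,
                      center_col - half:center_col + half]

    # Example usage (placeholder path and coordinates):
    # patch = crop_esophagus_patch(dicom_to_bitmap("slice_001.dcm"), 260, 256)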
In step 3), specifically, after the feature data are obtained in step 2) and the fully convolutional neural network model is constructed, the data from step 2) are fed into the model for training to obtain semantic segmentation images. During training, the semantic image produced by the model is compared with the actual label image; cross entropy is used as the loss function measuring the loss between the two, and the loss value serves as the basis for the back-propagated error gradient.
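A minimal PyTorch-style sketch of this training step is shown below. The stand-in model is a placeholder for the full segmentation network described later, and the learning rate is an illustrative assumption.

    import torch
    import torch.nn as nn

    model = nn.Conv2d(1, 2, 3, padding=1)    # stand-in for the full segmentation network
    criterion = nn.CrossEntropyLoss()        # pixel-wise cross entropy
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(images, labels):
        """images: (N,1,80,80) float tensor; labels: (N,80,80) long tensor, classes {0,1}."""
        optimizer.zero_grad()
        logits = model(images)               # semantic score map, shape (N,2,80,80)
        loss = criterion(logits, labels)     # compare semantic image with label image
        loss.backward()                      # loss value drives the back-propagated gradient
        optimizer.step()
        return loss.item()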
In step 4), specifically, the consecutive semantic segmentation images obtained in step 3) are arranged in slice order, the sequentially arranged two-dimensional image data are stacked into three-dimensional data, and a three-dimensional surface is extracted from these data by a triangular-patch-based surface search to perform the three-dimensional reconstruction.
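As a sketch of this reconstruction step, scikit-image's marching-cubes routine is used here as one concrete triangular-patch surface search; the patent does not name a specific implementation, and the voxel spacing is an assumption.

    import numpy as np
    from skimage import measure

    def reconstruct_surface(masks, spacing=(3.0, 1.0, 1.0)):
        """masks: per-slice binary segmentation arrays, in slice order."""
        volume = np.stack(masks, axis=0).astype(np.float32)   # 2-D slices -> 3-D data
        verts, faces, _normals, _values = measure.marching_cubes(
            volume, level=0.5, spacing=spacing)               # triangular-patch surface
        return verts, faces

    # Radiomics parameters can then be estimated from the mesh, e.g. the tumor
    # surface area via measure.mesh_surface_area(verts, faces).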
In step 5), specifically, the esophageal cancer radiomics parameters are displayed using QT programming, providing a degree of human-computer interaction.
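A minimal PyQt5 sketch of such a display follows; the parameter names and values are placeholders, since the patent only states that QT programming is used.

    import sys
    from PyQt5.QtWidgets import QApplication, QLabel, QVBoxLayout, QWidget

    def show_parameters(params):
        """Display name/value pairs of radiomics parameters in a simple window."""
        app = QApplication(sys.argv)
        win = QWidget()
        win.setWindowTitle("Esophageal cancer radiomics parameters")
        layout = QVBoxLayout(win)
        for name, value in params.items():
            layout.addWidget(QLabel(f"{name}: {value}"))
        win.show()
        sys.exit(app.exec_())

    # show_parameters({"volume (mm^3)": 1234.5, "surface area (mm^2)": 678.9})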
Beneficial effects: the method for segmenting esophageal cancer in a chest CT image provided by the invention offers a small model size, fast segmentation, and high accuracy, and provides technical support for esophageal cancer radiomics analysis and prognosis prediction.
Drawings
FIG. 1 is a flowchart (technical route) of the method for segmenting esophageal cancer in a chest CT image;
FIG. 2 is a schematic diagram of the fully convolutional neural network for esophageal cancer semantic segmentation.
Detailed Description
The invention is further illustrated, but not limited, by the following figures and example.
Example:
As shown in FIG. 1, the method for segmenting esophageal cancer in a chest CT image comprises the following steps:
1) Select several groups of CT images containing esophageal cancer as training samples.
2) Preprocess the CT images selected in step 1) to obtain esophageal cancer features and perform feature description. Specifically, the chest-CT DICOM image of each slice is converted into a bitmap according to the window width and window level, the CT image is cropped to an 80 × 80 pixel image that contains the esophagus, and the cropped image is used as the feature input of the fully convolutional neural network.
3) Establish an esophageal cancer semantic segmentation model based on a fully convolutional neural network, feed the esophageal cancer features described in step 2) into the network as learning samples, and train to obtain an esophageal cancer segmentation network model.
The model's conv part draws on the pyramid structure of AlexNet and the residual-passing structure of ResNet; its deconv part draws on the upsampling scheme of SegNet and the skip-connection structure of U-Net. The prepared data are fed into the network for training, and once a network model meeting the segmentation requirements is obtained, the model is solidified.
The specific structure of the convolutional network model is as follows (see FIG. 2):
Layer 1 is a 1 × 1 convolution kernel with 3 filters; it serves as a preprocessing layer for the network while expanding the depth of the feature network.
Layer 2 is a 5 × 5 convolution kernel; the 5 × 5 receptive field captures as much of the image's detailed feature structure as possible and extracts image features.
Layer 3 is a 5 × 5 convolution kernel; consecutive convolution distills the features of the feature map and strengthens the image's feature-map expression.
Layer 4 is a 2 × 2 pooling layer serving as a downsampling layer; it highlights local features, reduces overfitting, reduces the data volume, and enlarges the receptive field of subsequent same-sized convolution kernels.
Layer 5 is a 3 × 3 convolution kernel; after pooling, its receptive field exceeds that of the earlier 5 × 5 kernels; it extracts feature-map features and strengthens feature expression.
Layer 6 is a 3 × 3 convolution kernel that extracts feature-map features and strengthens feature expression.
Layer 7 is a 2 × 2 pooling layer serving as a downsampling layer, with the same effects as layer 4; its feature map is also passed forward to layer 10, forming a residual connection.
Layers 8 and 9 are 3 × 3 convolution kernels that enlarge the receptive field, extract feature-map features, and strengthen feature expression.
Layer 10 is a 3 × 3 convolution kernel that extracts feature-map features and strengthens feature expression; the layer-7 feature map is added to this layer's feature map, completing the residual connection.
Layer 11 is a 2 × 2 pooling layer serving as a downsampling layer, with the same effects as layer 4; its feature map is passed forward to layer 13, forming a residual connection.
Layer 12 is a 3 × 3 convolution kernel that enlarges the receptive field, extracts feature-map features, and strengthens feature expression.
Layer 13 is a 3 × 3 convolution kernel that extracts feature-map features and strengthens feature expression; the layer-11 feature map is added to this layer's feature map, completing the residual connection.
Layer 14 is a 3 × 3 convolution kernel that extracts feature-map features and strengthens feature expression.
Layer 15 is a 2 × 2 pooling layer serving as a downsampling layer, with the same effects as layer 4.
Layer 16 is a 3 × 3 convolution kernel that enlarges the receptive field, extracts feature-map features, and strengthens feature expression.
Layer 17 is a 1 × 1 convolution kernel with 128 filters; the 1 × 1 convolution acts as a preprocessing layer, increasing the number of features in preparation for the next convolution layer.
Layer 18 is a 3 × 3 convolution kernel with 128 filters; it increases the number of features and strengthens the extraction and expression of high-level features.
Layer 19 is a 1 × 1 convolution kernel with 64 filters; the 1 × 1 convolution acts as a preprocessing layer that compresses the number of feature channels, producing a more effective feature map and reducing redundant parameters; the layer-15 feature map is added to this layer's feature map, forming a residual connection.
Layer 20 is a 3 × 3 convolution kernel that extracts feature-map features and strengthens feature expression.
Layer 21 is a 3 × 3 deconvolution kernel that restores the feature map to the corresponding size to express high-level features; the restored feature map is joined with the layer-14 feature map, forming a skip connection.
Layer 22 is a 3 × 3 convolution kernel that extracts features from the joined original and high-level feature maps, interprets the new high-level feature map, and strengthens feature expression.
Layers 23 and 24 are 3 × 3 convolution kernels that extract high-level feature-map features, interpret the high-level features, and strengthen feature expression.
Layer 25 is a 3 × 3 deconvolution kernel that restores the feature map to the corresponding size to express high-level features; the restored feature map is joined with the layer-10 feature map, forming a skip connection.
Layers 26 to 28 are 3 × 3 convolution kernels that extract high-level feature-map features, interpret the high-level features, and strengthen feature expression.
Layer 29 is a 3 × 3 deconvolution kernel that restores the feature map to the corresponding size to express high-level features; the restored feature map is joined with the layer-6 feature map, forming a skip connection.
Layers 30 and 31 are 3 × 3 convolution kernels that extract high-level feature-map features, interpret the high-level features, and strengthen feature expression.
Layer 32 is a 3 × 3 deconvolution kernel that restores the feature map to the corresponding size and expresses high-level features.
Layer 33 is a 3 × 3 convolution kernel joined with the layer-3 feature map to form a skip connection; it extracts high-level feature-map features, interprets the high-level features, and strengthens feature expression.
Layer 34 is a 3 × 3 convolution kernel that extracts high-level feature-map features, interprets the high-level features, and strengthens feature expression.
Layer 35 is a 1 × 1 deconvolution kernel with 2 filters and stride 1; it interprets the target features and serves as the score map.
Unless stated otherwise above, each layer uses the following defaults: convolutions have stride 1, padding, 64 filters, and ReLU activation; deconvolutions have stride 2, padding, 64 filters, and ReLU activation; pooling uses max pooling with stride 2 and padding throughout. A condensed code sketch of this topology follows below.
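For illustration, the following condensed PyTorch sketch reflects these conventions: 64 filters nearly everywhere, 2 × 2 max pooling with stride 2, stride-2 deconvolution, deep residual additions after pooling, and skip connections on the upsampling path. It does not reproduce all 35 layers one-for-one and should be read as an illustration under those assumptions, not as the patented network itself.

    import torch
    import torch.nn as nn

    def conv(cin, cout, k=3):
        return nn.Sequential(
            nn.Conv2d(cin, cout, k, stride=1, padding=k // 2),
            nn.ReLU(inplace=True))

    class EsophagusFCN(nn.Module):
        def __init__(self):
            super().__init__()
            self.pre = conv(1, 3, 1)                                    # layer 1: 1x1 preprocessing
            self.enc1 = nn.Sequential(conv(3, 64, 5), conv(64, 64, 5))  # layers 2-3
            self.enc2 = nn.Sequential(conv(64, 64), conv(64, 64))       # layers 5-6
            self.enc3 = nn.Sequential(conv(64, 64), conv(64, 64), conv(64, 64))  # layers 8-10
            self.enc4 = nn.Sequential(conv(64, 64), conv(64, 64), conv(64, 64))  # layers 12-14
            self.bott = nn.Sequential(conv(64, 64), conv(64, 128, 1),   # layers 16-20 bottleneck
                                      conv(128, 128), conv(128, 64, 1), conv(64, 64))
            self.pool = nn.MaxPool2d(2, stride=2)                       # layers 4, 7, 11, 15
            self.ups = nn.ModuleList([nn.ConvTranspose2d(64, 64, 3, stride=2,
                                      padding=1, output_padding=1) for _ in range(4)])
            self.decs = nn.ModuleList([nn.Sequential(conv(128, 64), conv(64, 64))
                                       for _ in range(4)])              # decode after each skip
            self.score = nn.Conv2d(64, 2, 1)                            # layer 35: 2-class score map

        def forward(self, x):
            s1 = self.enc1(self.pre(x))      # skip source (layer 3)
            s2 = self.enc2(self.pool(s1))    # skip source (layer 6)
            p2 = self.pool(s2)
            s3 = p2 + self.enc3(p2)          # deep residual join (layer 7 -> layer 10)
            p3 = self.pool(s3)
            s4 = p3 + self.enc4(p3)          # deep residual join (layer 11 onward)
            y = self.bott(self.pool(s4))
            for up, dec, skip in zip(self.ups, self.decs, (s4, s3, s2, s1)):
                y = dec(torch.cat([up(y), skip], dim=1))   # deconvolution + skip connection
            return self.score(y)

    # Sanity check: an 80 x 80 input yields an (N, 2, 80, 80) score map.
    # print(EsophagusFCN()(torch.zeros(1, 1, 80, 80)).shape)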
The model is trained by self-learning, with cross entropy as the loss function and Adam as the back-propagation optimizer. To prevent overfitting during training, early stopping is used: the training set is divided into several groups, the fitting degree of every group is computed on each pass, and training on a group stops once its fitting degree meets the requirement, as in the sketch below.
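A hedged sketch of this group-wise early stopping follows, reusing train_step from the earlier training sketch; the threshold and epoch limit are illustrative assumptions.

    def train_with_group_early_stopping(groups, fit_threshold=0.05, max_epochs=100):
        """groups: list of (images, labels) tensors; each group trains until it fits."""
        active = list(range(len(groups)))
        for _epoch in range(max_epochs):
            for g in list(active):
                images, labels = groups[g]
                loss = train_step(images, labels)   # fitting degree of this group
                if loss < fit_threshold:
                    active.remove(g)                # this group fits: stop its training
            if not active:                          # every group fits: stop early
                break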
The network model then outputs its semantic segmentation result: a softmax classifier computes the final score of every pixel in the layer-35 score map, the class with the highest score is selected as the best class for each pixel, and rendering different classes with different pixel values yields the esophageal cancer semantic segmentation map, as in the brief sketch below.
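A brief sketch of this classification step, assuming PyTorch for consistency with the earlier sketches:

    import torch

    def score_map_to_segmentation(logits):
        """logits: (N,2,H,W) layer-35 score map -> (N,H,W) per-pixel class indices."""
        probs = torch.softmax(logits, dim=1)   # softmax classifier over the two classes
        return probs.argmax(dim=1)             # highest score = best class for each pixel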
4) Three-dimensional reconstruction of esophageal cancer: the segmentation results produced by the esophageal cancer segmentation network model obtained in step 3) are reconstructed in three dimensions and analyzed to obtain esophageal cancer radiomics parameters in three-dimensional space.
5) Visually display the esophageal cancer radiomics parameters obtained in step 4).
The fully convolutional neural network model is trained on chest esophageal cancer CT images and can distinguish tumor regions from non-tumor regions.
The innovations of the invention are as follows. Only deep residual connections are adopted: full residuals are not used, and no residual connections are placed on the upsampling layers. Each residual spans two convolution layers; the residual is regarded as playing a marking role that assists feature extraction, and pooling is never applied immediately after a residual join, which avoids dragging the residual into the pooling process. Except for layers 1, 17, and 18, every layer uses 64 filters, unlike other networks that double the filter count after each pooling layer. Layer 1 and the intermediate layers 17, 19, and 35 use 1 × 1 convolution kernels; although a 1 × 1 kernel is not strong at feature extraction, it is generally regarded as a preprocessing layer, and placing 1 × 1 kernels at these three key positions effectively improves transmission efficiency within the network. Practice has shown this network structure to be effective, performing well on esophageal cancer segmentation; keeping the filter count constant with depth substantially reduces the size of the network model.
Most of the above describes the model training process; in actual operation, only an original image needs to be input, after which the semantic segmentation image is obtained directly.
The foregoing details the technical principle of the invention, including the hierarchy of the fully convolutional neural network and the relationships between its layers.

Claims (5)

1. A method for segmenting esophageal cancer in a chest CT image, characterized by comprising the following steps:
1) selecting several groups of CT images containing esophageal cancer as training samples, which define the data range for step 3);
2) preprocessing the CT images selected in step 1) to obtain esophageal cancer features and, after feature description, using the resulting images as the training data for step 3);
3) establishing an esophageal cancer semantic segmentation model based on a fully convolutional neural network, feeding the esophageal cancer features described in step 2) into the network as learning samples, and training to obtain an esophageal cancer segmentation network model;
4) three-dimensional reconstruction of esophageal cancer: reconstructing and analyzing, in three dimensions, the segmentation results produced by the model obtained in step 3) to obtain esophageal cancer radiomics parameters in three-dimensional space, ready for step 5); and
5) visually displaying the esophageal cancer radiomics parameters obtained in step 4).
2. The method according to claim 1, wherein in step 2) the chest-CT DICOM image of each slice is converted into a bitmap according to the window width and window level, the CT image is cropped to an 80 × 80 pixel image that contains the esophagus, and the cropped image serves as the feature input of the fully convolutional neural network.
3. The method for segmenting esophageal cancer in a chest CT image according to claim 1, wherein in step 3), after the feature data are obtained in step 2) and the fully convolutional neural network model is constructed, the data from step 2) are fed into the model for training to obtain semantic segmentation images; during training, the semantic image produced by the model is compared with the actual label image, cross entropy is used as the loss function measuring the loss between the two, and the loss value serves as the basis for the back-propagated error gradient.
4. The method for segmenting esophageal cancer in a chest CT image according to claim 1, wherein in step 4) the consecutive semantic segmentation images obtained in step 3) are arranged in slice order, the sequentially arranged two-dimensional image data are stacked into three-dimensional data, and a three-dimensional surface is extracted from these data by a triangular-patch-based surface search to perform the three-dimensional reconstruction.
5. The method for segmenting esophageal cancer in a chest CT image according to claim 1, wherein in step 5) the esophageal cancer radiomics parameters are displayed using QT programming, providing a degree of human-computer interaction.
CN201810335223.1A 2018-04-15 2018-04-15 Esophagus cancer segmentation method in chest CT image Expired - Fee Related CN108596884B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810335223.1A CN108596884B (en) 2018-04-15 2018-04-15 Esophagus cancer segmentation method in chest CT image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810335223.1A CN108596884B (en) 2018-04-15 2018-04-15 Esophagus cancer segmentation method in chest CT image

Publications (2)

Publication Number Publication Date
CN108596884A CN108596884A (en) 2018-09-28
CN108596884B true CN108596884B (en) 2021-05-18

Family

ID=63622637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810335223.1A Expired - Fee Related CN108596884B (en) 2018-04-15 2018-04-15 Esophagus cancer segmentation method in chest CT image

Country Status (1)

Country Link
CN (1) CN108596884B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109711409A (en) * 2018-11-15 2019-05-03 天津大学 Handwritten musical score staff-line deletion method combining U-net and ResNet
CN109598727B (en) * 2018-11-28 2021-09-14 北京工业大学 Three-dimensional semantic segmentation method for lung parenchyma in CT images based on a deep neural network
CN109754403A (en) * 2018-11-29 2019-05-14 中国科学院深圳先进技术研究院 Automatic tumor segmentation method and system for CT images
CN109902755B (en) * 2019-03-05 2019-10-11 南京航空航天大学 Multi-layer information sharing and correction method for XCT slices
CN109949352A (en) * 2019-03-22 2019-06-28 邃蓝智能科技(上海)有限公司 Deep-learning-based target delineation method and system for radiotherapy images
CN111127430A (en) * 2019-12-24 2020-05-08 北京推想科技有限公司 Method and device for determining medical image display parameters
CN111127444B (en) * 2019-12-26 2021-06-04 广州柏视医疗科技有限公司 Method for automatically identifying organs at risk for radiotherapy in CT images based on a deep semantic network
CN111275720B (en) * 2020-01-20 2022-05-17 浙江大学 Fully end-to-end small-organ image identification method based on deep learning
CN111275041B (en) * 2020-01-20 2022-12-13 腾讯科技(深圳)有限公司 Endoscope image display method and device, computer equipment, and storage medium
CN111476778A (en) * 2020-04-07 2020-07-31 西华大学 Image detection and segmentation method, system, storage medium, computer program, and terminal
CN111784721B (en) * 2020-07-01 2022-12-13 华南师范大学 Deep-learning-based intelligent segmentation and quantification method and system for endoscopic ultrasound images
CN111862090B (en) * 2020-08-05 2023-10-10 武汉楚精灵医疗科技有限公司 Method and system for preoperative management of esophageal cancer based on artificial intelligence
CN112508001A (en) * 2020-12-03 2021-03-16 安徽理工大学 Coal-gangue localization method based on multispectral band screening and an improved U-Net
CN113160124B (en) * 2021-02-25 2022-12-16 广东工业大学 Method for reconstructing esophageal cancer images in the joint feature space of spectral CT and conventional CT
CN113052930A (en) * 2021-03-12 2021-06-29 北京医准智能科技有限公司 Chest DR dual-energy digital subtraction image generation method
CN115082402A (en) * 2022-06-22 2022-09-20 济南大学 Esophageal squamous carcinoma image segmentation method and system based on an attention mechanism

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106886801A (en) * 2017-04-14 2017-06-23 北京图森未来科技有限公司 Image semantic segmentation method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003070102A2 (en) * 2002-02-15 2003-08-28 The Regents Of The University Of Michigan Lung nodule detection and classification

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106886801A (en) * 2017-04-14 2017-06-23 北京图森未来科技有限公司 Image semantic segmentation method and device

Also Published As

Publication number Publication date
CN108596884A (en) 2018-09-28

Similar Documents

Publication Publication Date Title
CN108596884B (en) Esophagus cancer segmentation method in chest CT image
CN111192245B (en) Brain tumor segmentation network and method based on U-Net network
CN111709953B (en) Output method and device in lung lobe segment segmentation of CT (computed tomography) image
CN112489061B (en) Deep learning intestinal polyp segmentation method based on multi-scale information and parallel attention mechanism
EP4345746A2 (en) Method and system for image segmentation and identification
CN113506310B (en) Medical image processing method and device, electronic equipment and storage medium
CN113223005B (en) Thyroid nodule automatic segmentation and grading intelligent system
CN113344951A (en) Liver segment segmentation method based on boundary perception and dual attention guidance
CN112446892A (en) Cell nucleus segmentation method based on attention learning
CN112734755A (en) Lung lobe segmentation method based on 3D full convolution neural network and multitask learning
CN114494296A (en) Brain glioma segmentation method and system based on fusion of Unet and Transformer
CN107767362A (en) A kind of early screening of lung cancer device based on deep learning
CN109919915A (en) Retinal fundus images abnormal area detection method and equipment based on deep learning
CN112819831B (en) Segmentation model generation method and device based on convolution Lstm and multi-model fusion
CN113034507A (en) CCTA image-based coronary artery three-dimensional segmentation method
CN113436173A (en) Abdomen multi-organ segmentation modeling and segmentation method and system based on edge perception
CN110827283B (en) Head and neck blood vessel segmentation method and device based on convolutional neural network
CN114066883A (en) Liver tumor segmentation method based on feature selection and residual fusion
CN114119515A (en) Brain tumor detection method based on attention mechanism and MRI multi-mode fusion
CN116486156A (en) Full-view digital slice image classification method integrating multi-scale feature context
CN116645380A (en) Automatic segmentation method for esophageal cancer CT image tumor area based on two-stage progressive information fusion
Khaledyan et al. Enhancing breast ultrasound segmentation through fine-tuning and optimization techniques: Sharp attention UNet
CN113362350A (en) Segmentation method and device for cancer medical record image, terminal device and storage medium
CN112541909A (en) Lung nodule detection method and system based on three-dimensional neural network of slice perception
Zhao et al. Automated breast lesion segmentation from ultrasound images based on ppu-net

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
CF01: Termination of patent right due to non-payment of annual fee (granted publication date: 20210518)