CN115063393A - Liver and liver tumor automatic segmentation method based on edge compensation attention - Google Patents

Liver and liver tumor automatic segmentation method based on edge compensation attention

Info

Publication number
CN115063393A
CN115063393A
Authority
CN
China
Prior art keywords
liver
edge
attention
eca
net network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210785138.1A
Other languages
Chinese (zh)
Inventor
陈丽芳
罗世勇
谢振平
詹千熠
刘渊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangnan University
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University filed Critical Jiangnan University
Priority to CN202210785138.1A priority Critical patent/CN115063393A/en
Publication of CN115063393A publication Critical patent/CN115063393A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30056Liver; Hepatic

Abstract

The invention discloses an automatic liver and liver tumor segmentation method based on edge compensation attention, which comprises: collecting raw data, dividing it into a training set and a test set in proportion, and preprocessing both; generating an auxiliary supervision label by transforming the supervision information; performing rapid coarse localization of the liver through a Unet network, followed by cropping and interpolation after the liver region is obtained; establishing an ECA-Net network; training the ECA-Net network on the training set data; and performing segmentation based on the ECA-Net network. The method requires few downsampling operations, and the generated global edge information compensation map is supervised with edge-sensitive information, which effectively compensates for the loss of detail information caused by repeated strided convolution or pooling; the segmentation effect on the boundary regions of the liver and the tumor is also notable.

Description

Liver and liver tumor automatic segmentation method based on edge compensation attention
Technical Field
The invention relates to the technical field of liver and liver tumor segmentation, in particular to an automatic liver and liver tumor segmentation method based on edge compensation attention.
Background
According to the latest global cancer data released in 2020 by the International Agency for Research on Cancer of the World Health Organization (WHO IARC), new liver cancer cases in China reached 411,300, accounting for 45.3% of liver cancer cases worldwide and ranking first globally. Among global liver cancer deaths, deaths in China reached 390,000, second only to lung cancer among cancers in China. Because the compensatory capacity of the liver is strong (liver function is not affected as long as 30-50% of the liver remains undamaged), the liver lacks surrounding pain nerves, and the condition is easily masked by other background diseases, liver cancer is difficult to detect in early clinical diagnosis; even in developed countries, fewer than 30% of liver cancer patients are found early enough to be treated surgically. Therefore, for populations at high risk of liver disease, regular targeted examination of the liver, together with detection and treatment as soon as lesions occur, can effectively reduce the mortality of liver diseases such as liver cancer and improve the survival probability and quality of life of patients.
With the development and application of computer vision technology, computer-assisted imaging analysis has become a mainstream research direction. The core of assisting doctors in diagnosing liver CT is to help them clearly segment and identify the liver region: computer technology is used to assist doctors in identifying and segmenting liver diseases (such as liver cancer) from whole abdominal CT, followed by targeted analysis and diagnosis. Traditional segmentation algorithms lose part of the detail information through repeated downsampling in the encoding stage, their segmentation of the boundary regions of the liver and the tumor is not ideal, and resource requirements, training speed and edge segmentation quality are not well balanced.
Disclosure of Invention
This section is for the purpose of summarizing some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. In this section, as well as in the abstract and title of the application, simplifications or omissions may be made to avoid obscuring the purpose of the section, the abstract and the title, and such simplifications or omissions are not intended to limit the scope of the invention.
The present invention has been made in view of the above-mentioned and/or other problems with existing liver and liver tumor segmentation methods.
Therefore, the problem to be solved by the present invention is how to provide an automatic segmentation method for liver and liver tumor based on edge compensation attention.
In order to solve the above technical problems, the invention provides the following technical scheme: a liver and liver tumor automatic segmentation method based on edge compensation attention, comprising: collecting raw data, dividing it into a training set and a test set in proportion, and preprocessing both; generating an auxiliary supervision label by transforming the supervision information; performing rapid coarse localization of the liver through a Unet network, followed by cropping and interpolation after the liver region is obtained; establishing an ECA-Net network; training the ECA-Net network on the training set data; and performing segmentation based on the ECA-Net network.
As a preferable scheme of the method for automatically segmenting the liver and the liver tumor based on the edge compensation attention, the method comprises the following steps: the preprocessing step includes adjusting the HU values to [-100, 240] and then normalizing to [0, 1].
As a preferable scheme of the method for automatically segmenting the liver and the liver tumor based on the edge compensation attention, the method comprises the following steps: transforming the supervision information to generate the auxiliary supervision label comprises setting the tumor pixels in the original annotation to 0 while keeping the liver and background pixels unchanged to obtain GT1; applying a distance transform to GT1 and normalizing the result to [0, 1]; and subtracting the normalized result from 1 to obtain the edge-sensitive auxiliary supervision label.
As a preferable scheme of the method for automatically segmenting the liver and the liver tumor based on the edge compensation attention, the method comprises the following steps: the cropping and interpolation process for the liver region includes unifying the input size to 336 x 336 and excluding empty slices that do not contain the liver region.
As a preferable scheme of the method for automatically segmenting the liver and the liver tumor based on the edge compensation attention, the method comprises the following steps: the ECA-Net network comprises 3 edge attention compensation modules, 1 decoding module, 4 encoding modules, 1 local-global integration module and 1 multi-level feature integration module, and the encoding and decoding features are connected via an intermediate conversion layer comprising 3 × 3 and 1 × 1 convolutions.
As a preferable scheme of the method for automatically segmenting the liver and the liver tumor based on the edge compensation attention, the method comprises the following steps: in the encoding stage, a global edge guide map containing detail information is generated by aggregating multi-level features under the supervision of the generated edge-sensitive auxiliary supervision label.
As a preferable scheme of the method for automatically segmenting the liver and the liver tumor based on the edge compensation attention, the method comprises the following steps: in the decoding stage, the upsampled features are supplemented by using the global edge guide map to obtain more accurate segmentation results.
As a preferable scheme of the method for automatically segmenting the liver and the liver tumor based on the edge compensation attention, the method comprises the following steps: when the ECA-Net network is trained, image flipping and image rotation are used as data augmentation to alleviate overfitting; training ends when the learning rate has decayed to 10% of the initial learning rate and stabilized, after which the ECA-Net network is tested on the test set data.
As a preferable scheme of the method for automatically segmenting the liver and the liver tumor based on the edge compensation attention, the method comprises the following steps: after segmentation based on the ECA-Net network, maximum connected-component labeling is applied to the liver segmentation result, small holes inside the liver are filled, and finally tumors outside the liver region are removed.
As a preferable scheme of the method for automatically segmenting the liver and the liver tumor based on the edge compensation attention, the method comprises the following steps: the generation process of the global edge guide map is expressed by the following formula,
f_3^1 = Concat(f_3^* × U_2(f_4), U_2(f_4))
f_2^1 = Concat(f_2^* × U_4(f_4) × U_2(f_3^*), f_3^1)
E_g = Sigmoid(f_2^1)
where f_i (i = 1, 2, 3, 4, 5) denotes the feature at each level of the backbone network, f_i^* denotes f_i after a 1 × 1 convolution, U_2 and U_4 denote 2× and 4× upsampling respectively, Sigmoid is the activation function, Concat denotes concatenation along the channel dimension, and E_g is the generated global edge information compensation map.
The invention has the following beneficial effects: the segmentation algorithm requires few downsampling operations, and the generated global edge information compensation map is supervised with edge-sensitive information, which effectively compensates for the loss of detail information caused by repeated strided convolution or pooling; the segmentation effect on the boundary regions of the liver and the tumor is also notable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise. Wherein:
fig. 1 is a schematic diagram of a segmentation flow of an automatic liver and liver tumor segmentation method based on edge compensation attention.
Fig. 2 is a schematic diagram of the ECA-Net network structure of the automatic liver and liver tumor segmentation method based on edge compensation attention.
Fig. 3 is a schematic structural diagram of a multi-level feature integration module of an automatic liver and liver tumor segmentation method based on edge compensation attention.
Fig. 4 is a schematic structural diagram of an edge attention compensation module of an automatic liver and liver tumor segmentation method based on edge compensation attention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
Furthermore, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
Example 1
Referring to fig. 1 to 4, a first embodiment of the present invention provides an automatic liver and liver tumor segmentation method based on edge compensation attention, which includes the following steps:
and S1, collecting original data, dividing the original data into a training set and a testing set according to a proportion, and preprocessing the training set and the testing set.
In this embodiment, the raw data are abdominal CT slices; the dataset used in the invention is derived from the LiTS2017 challenge dataset. The collected raw data are divided into a training set and a test set at a ratio of 8:2 and then preprocessed.
And S2, generating an auxiliary supervision label by transforming the supervision information.
And S3, performing rapid coarse localization of the liver through a Unet network; after the liver region is obtained, cropping and interpolation are performed, the CT slices are unified to an input size of 336 × 336, and empty slices that do not contain the liver region are excluded to reduce the computational cost.
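The slice filtering and resizing of step S3 can be sketched as follows. This is a minimal sketch: the coarse Unet localization and liver-region cropping are omitted, and the use of `scipy.ndimage.zoom` with bilinear interpolation is an assumption — the patent only specifies the 336 × 336 input size and the exclusion of empty slices.

```python
import numpy as np
from scipy.ndimage import zoom

def filter_and_resize(slices, liver_masks, size=336):
    """Drop slices without any liver region, then resize the rest to size x size.

    `liver_masks` are assumed to come from the coarse Unet localization step;
    the function name and signature are illustrative.
    """
    kept = []
    for img, mask in zip(slices, liver_masks):
        if not mask.any():                  # empty slice: no liver present
            continue
        factors = (size / img.shape[0], size / img.shape[1])
        kept.append(zoom(img, factors, order=1))  # order=1: bilinear interpolation
    return kept
```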
And S4, establishing the ECA-Net network.
It should be noted that the ECA-Net network of the present invention is mainly based on a U-shaped encoding-decoding structure built on DenseNet169, but the present invention removes the fully connected layer and the last DenseBlock module from the existing U-shaped encoding-decoding structure, and also removes the skip connection of the highest layer, so the highest-layer decoding module receives no input from a skip connection. As shown in fig. 2, the ECA-Net network of the present invention includes 3 edge attention compensation modules (FCAM), 1 decoding module (decoder block), 4 encoding modules (dense block), 1 local-global integration module (LGIM) and 1 multi-level feature integration module (MLAM); the encoding and decoding features are connected through a conversion layer comprising 3 × 3 and 1 × 1 convolutions, thereby reducing the number of channels.
And S5, training the ECA-Net network based on the training set data. During training, image flipping and image rotation are used as data augmentation to alleviate overfitting; training ends when the learning rate has decayed to 10% of the initial learning rate and stabilized, after which the ECA-Net network is tested on the test set data.
And S6, performing segmentation based on the ECA-Net network; to prevent holes in the liver, maximum connected-component labeling is applied to the liver segmentation result, small holes inside the liver are filled, and finally tumors outside the liver are removed.
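The post-processing of step S6 can be sketched with `scipy.ndimage`. The patent only names the operations (largest connected component, hole filling, discarding tumors outside the liver); the specific routines below are an assumed realization.

```python
import numpy as np
from scipy.ndimage import label, binary_fill_holes

def postprocess(liver_mask, tumor_mask):
    """Keep the largest connected liver component, fill its internal holes,
    and discard tumor predictions that fall outside the resulting liver."""
    labeled, n = label(liver_mask)
    if n > 0:
        sizes = np.bincount(labeled.ravel())[1:]     # size of each component
        liver = labeled == (np.argmax(sizes) + 1)    # largest component only
        liver = binary_fill_holes(liver)             # fill small internal holes
    else:
        liver = np.zeros_like(liver_mask, dtype=bool)
    tumor = np.logical_and(tumor_mask.astype(bool), liver)
    return liver, tumor
```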
In the preferred embodiment of this section, the preprocessing operation in step S1 includes adjusting the HU values to [-100, 240] and then normalizing to [0, 1].
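The HU windowing and normalization above can be sketched in a few lines of NumPy. The window [-100, 240] is from the text; the function name is illustrative.

```python
import numpy as np

def preprocess_ct(slice_hu):
    """Clip a CT slice to the HU window [-100, 240], then rescale to [0, 1]."""
    clipped = np.clip(slice_hu.astype(np.float32), -100.0, 240.0)
    return (clipped + 100.0) / 340.0   # 340 = 240 - (-100)
```

For example, an HU value of 70 (soft tissue) maps to (70 + 100) / 340 = 0.5.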
In step S2, the generating of the auxiliary supervision tag by transforming the supervision information includes the following steps:
S21, setting the tumor pixels in the original annotation to 0 while keeping the liver and background pixels unchanged, obtaining GT1;
S22, applying a distance transform to GT1 and normalizing the result to [0, 1];
and S23, subtracting the normalized result from 1 to obtain the edge-sensitive auxiliary supervision label.
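Steps S21-S23 can be sketched with SciPy's Euclidean distance transform. The exact transform used by the patent is not specified, so `distance_transform_edt` and LiTS-style label values (0 = background, 1 = liver, 2 = tumor) are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_sensitive_label(annotation):
    """Build the edge-sensitive auxiliary supervision label (steps S21-S23)."""
    # S21: zero out tumor pixels, keep liver/background unchanged -> GT1
    gt1 = np.where(annotation == 2, 0, annotation).astype(np.float32)
    # S22: distance of each liver pixel to the nearest non-liver pixel,
    # normalized to [0, 1]
    dist = distance_transform_edt(gt1 > 0)
    if dist.max() > 0:
        dist = dist / dist.max()
    # S23: invert, so pixels near the liver boundary receive weights close to 1
    return 1.0 - dist
```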
Preferably, in the encoding stage a global edge guide map containing detail information is generated by aggregating multi-level features under the supervision of the generated edge-sensitive auxiliary supervision label. In the decoding stage, the upsampled features are supplemented with the global edge guide map to obtain a more accurate segmentation result.
It should be noted that, in the field of deep learning, shallow feature maps contain more detail information but lack high-level semantic information, while deep feature maps carry rich high-level semantic information but lose part of the content information and positional relations through repeated downsampling; the fusion and utilization of multi-level features is therefore particularly important for fine semantic segmentation. Meanwhile, edge information can provide detail constraints for the whole segmentation process, yet it exists only in the low-level features, so three levels of features are used to generate a boundary-sensitive global edge guide map that assists the subsequent segmentation. Specifically, a CT slice input with resolution h × w is denoted I; from the backbone network, features at five levels are obtained and denoted f_i, i ∈ {1, 2, 3, 4, 5}. As shown in fig. 3, the multi-level feature integration module uses three levels of features to generate the global edge guide map. As shown in fig. 2, f_2 and f_3 each pass through a 1 × 1 convolution to reduce the number of channels, denoted f_2^* and f_3^* respectively. The feature fusion process can be expressed as:
f_3^1 = Concat(f_3^* × U_2(f_4), U_2(f_4))
f_2^1 = Concat(f_2^* × U_4(f_4) × U_2(f_3^*), f_3^1)
E_g = Sigmoid(f_2^1)
where U_2 and U_4 denote 2× and 4× upsampling respectively, Sigmoid is the activation function, Concat denotes concatenation along the channel dimension, and E_g is the generated global edge information compensation map.
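The fusion above can be illustrated with a small NumPy sketch. Nearest-neighbour upsampling and the extra 2× upsampling of f_3^1 before the final concatenation (needed so the spatial shapes match at the f_2 resolution) are assumptions not stated explicitly in the formulas; channel counts are assumed equal so the element-wise products are defined.

```python
import numpy as np

def upsample(x, k):
    """Nearest-neighbour k-times upsampling of a (C, H, W) feature map."""
    return x.repeat(k, axis=1).repeat(k, axis=2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def edge_guide_map(f2s, f3s, f4):
    """Compute E_g from f_2^* (f2s), f_3^* (f3s) and f_4, per the formulas."""
    # f_3^1 = Concat(f_3^* x U_2(f_4), U_2(f_4))  along the channel axis
    f3_1 = np.concatenate([f3s * upsample(f4, 2), upsample(f4, 2)], axis=0)
    # f_2^1 = Concat(f_2^* x U_4(f_4) x U_2(f_3^*), f_3^1); f_3^1 is upsampled
    # 2x here so both operands share the f_2 spatial resolution (assumption)
    f2_1 = np.concatenate([f2s * upsample(f4, 4) * upsample(f3s, 2),
                           upsample(f3_1, 2)], axis=0)
    return sigmoid(f2_1)   # E_g
```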
In conclusion, the segmentation algorithm of the invention adopts a two-dimensional segmentation network that is easy to train and has few parameters and few downsampling operations; the generated global edge information compensation map is supervised with edge-sensitive information, which effectively compensates for the loss of detail information caused by repeated strided convolution or pooling, and the segmentation effect on the boundary regions of the liver and the tumor is notable.
Example 2
Referring to Table 1, as a second embodiment of the present invention, comparative experiments are provided to verify the beneficial effects of the automatic liver and liver tumor segmentation method based on edge compensation attention.
TABLE 1 comparison of the recent method dice index on Lits2017
As can be seen from Table 1, with the proposed automatic liver and liver tumor segmentation method based on edge compensation attention, both the Dice per case and Dice global scores are clearly superior to prior-art segmentation methods such as FEDNet, MANet, Deeplabv3+, RAUNet and PolyUNet, and the segmentation performance is effectively improved.
It should be noted that the above-mentioned embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.

Claims (10)

1. An automatic segmentation method of liver and liver tumor based on edge compensation attention, characterized in that it comprises the steps of:
collecting original data, dividing the data into a training set and a testing set according to a proportion, and preprocessing the training set and the testing set;
generating an auxiliary supervision label by transforming the supervision information;
carrying out rapid rough positioning on the liver through a Unet network, and carrying out clipping and interpolation processing after obtaining a liver region;
establishing an ECA-Net network;
training the ECA-Net network based on the training set data;
performing segmentation based on the ECA-Net network.
2. The edge-compensated attention-based liver and liver tumor automatic segmentation method of claim 1, wherein: the preprocessing step includes adjusting the HU values to [-100, 240] and then normalizing to [0, 1].
3. The edge-compensated attention-based liver and liver tumor automatic segmentation method of claim 1 or 2, wherein: transforming the supervision information to generate the auxiliary supervision label comprises the following steps,
setting the tumor pixels in the original annotation to 0 while keeping the liver and background pixels unchanged to obtain GT1;
applying a distance transform to GT1 and normalizing the result to [0, 1];
and subtracting the normalized result from 1 to obtain the edge-sensitive auxiliary supervision label.
4. The edge-compensated attention-based liver and liver tumor automatic segmentation method of claim 3, wherein: the cropping and interpolation processing of the liver region includes unifying the input size to 336 × 336 and excluding empty slices that do not contain the liver region.
5. The method for automatic segmentation of liver and liver tumors based on edge compensated attention according to any one of claims 1, 2 and 4, wherein: the ECA-Net network comprises 3 edge attention compensation modules, 1 decoding module, 4 encoding modules, 1 local-global integration module and 1 multi-level feature integration module, and the encoding and decoding features are connected via an intermediate conversion layer comprising 3 × 3 and 1 × 1 convolutions.
6. The edge-compensated attention-based liver and liver tumor automatic segmentation method of claim 5, wherein: in the encoding stage, a global edge guide map containing detail information is generated by aggregating multi-level features under the supervision of the generated edge-sensitive auxiliary supervision label.
7. The edge-compensated attention-based liver and liver tumor automatic segmentation method of claim 6, wherein: in the decoding stage, the upsampled features are supplemented by a global edge guide map to obtain a more accurate segmentation result.
8. The edge-compensated attention-based liver and liver tumor automatic segmentation method of claim 7, wherein: when the ECA-Net network is trained, image flipping and image rotation are used as data augmentation to alleviate overfitting; training ends when the learning rate has decayed to 10% of the initial learning rate and stabilized, after which the ECA-Net network is tested on the test set data.
9. The method for automatically segmenting liver and liver tumors based on edge compensated attention according to any one of claims 1, 6 or 8, characterized in that: after segmentation based on the ECA-Net network, maximum connected-component labeling is applied to the liver segmentation result, small holes inside the liver are filled, and finally tumors outside the liver region are removed.
10. The method for automatically segmenting liver and liver tumors based on edge compensated attention according to any one of claims 1, 6 or 8, characterized by: the generation process of the global edge guide map is expressed by the following formula,
f_3^1 = Concat(f_3^* × U_2(f_4), U_2(f_4))
f_2^1 = Concat(f_2^* × U_4(f_4) × U_2(f_3^*), f_3^1)
E_g = Sigmoid(f_2^1)
where f_i (i = 1, 2, 3, 4, 5) denotes the feature at each level of the backbone network, f_i^* denotes f_i after a 1 × 1 convolution, U_2 and U_4 denote 2× and 4× upsampling respectively, Sigmoid is the activation function, Concat denotes concatenation along the channel dimension, and E_g is the generated global edge information compensation map.
CN202210785138.1A 2022-06-29 2022-06-29 Liver and liver tumor automatic segmentation method based on edge compensation attention Pending CN115063393A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210785138.1A CN115063393A (en) 2022-06-29 2022-06-29 Liver and liver tumor automatic segmentation method based on edge compensation attention


Publications (1)

Publication Number Publication Date
CN115063393A true CN115063393A (en) 2022-09-16

Family

ID=83204430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210785138.1A Pending CN115063393A (en) 2022-06-29 2022-06-29 Liver and liver tumor automatic segmentation method based on edge compensation attention

Country Status (1)

Country Link
CN (1) CN115063393A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401480A (en) * 2020-04-27 2020-07-10 上海市同济医院 Novel breast MRI (magnetic resonance imaging) automatic auxiliary diagnosis method based on fusion attention mechanism
CN113436173A (en) * 2021-06-30 2021-09-24 陕西大智慧医疗科技股份有限公司 Abdomen multi-organ segmentation modeling and segmentation method and system based on edge perception
CN114677511A (en) * 2022-03-23 2022-06-28 三峡大学 Lung nodule segmentation method combining residual ECA channel attention UNet with TRW-S

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401480A (en) * 2020-04-27 2020-07-10 上海市同济医院 Novel breast MRI (magnetic resonance imaging) automatic auxiliary diagnosis method based on fusion attention mechanism
CN113436173A (en) * 2021-06-30 2021-09-24 陕西大智慧医疗科技股份有限公司 Abdomen multi-organ segmentation modeling and segmentation method and system based on edge perception
CN114677511A (en) * 2022-03-23 2022-06-28 三峡大学 Lung nodule segmentation method combining residual ECA channel attention UNet with TRW-S

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIANNING CHI ET AL.: "X-Net: Multi-branch UNet-like network for liver and tumor segmentation from 3D abdominal CT scans", Neurocomputing, pages 81-96 *
Liu Yiming et al.: "Automatic liver tumor segmentation method based on feature fusion", Laser & Optoelectronics Progress, vol. 58, no. 14, page 1417001 *
Lu Qianjie et al.: "Multi-scale encoder-decoder segmentation of COVID-19 lung CT images", Journal of Image and Graphics, vol. 27, no. 3, pages 827-837 *
Ma Jinlin et al.: "A survey of deep learning segmentation methods for liver tumor CT images", Journal of Image and Graphics, no. 10, 16 October 2020 *

Similar Documents

Publication Publication Date Title
CN109949309B (en) Liver CT image segmentation method based on deep learning
CN111047605B (en) Construction method and segmentation method of vertebra CT segmentation network model
CN112102321A (en) Focal image segmentation method and system based on deep convolutional neural network
CN110428427B (en) Semi-supervised renal artery segmentation method based on dense bias network and self-encoder
CN113674253A Rectal cancer CT image automatic segmentation method based on U-Transformer
CN112529909A (en) Tumor image brain region segmentation method and system based on image completion
CN113724206B (en) Fundus image blood vessel segmentation method and system based on self-supervision learning
CN110991254B (en) Ultrasonic image video classification prediction method and system
WO2023207820A1 (en) Pancreatic postoperative diabetes prediction system based on supervised deep subspace learning
CN114723669A (en) Liver tumor two-point five-dimensional deep learning segmentation algorithm based on context information perception
CN112381846A (en) Ultrasonic thyroid nodule segmentation method based on asymmetric network
CN113744271A (en) Neural network-based automatic optic nerve segmentation and compression degree measurement and calculation method
CN114972248A (en) Attention mechanism-based improved U-net liver tumor segmentation method
CN112150470A (en) Image segmentation method, image segmentation device, image segmentation medium, and electronic device
CN116563533A (en) Medical image segmentation method and system based on target position priori information
CN114565601A (en) Improved liver CT image segmentation algorithm based on DeepLabV3+
CN114187181A (en) Double-path lung CT image super-resolution method based on residual information refining
CN115063393A (en) Liver and liver tumor automatic segmentation method based on edge compensation attention
CN114677389A (en) Depth semi-supervised segmentation children brain MRI demyelinating lesion positioning method
CN116258685A (en) Multi-organ segmentation method and device for simultaneous extraction and fusion of global and local features
CN115409857A (en) Three-dimensional hydrocephalus CT image segmentation method based on deep learning
CN115294023A (en) Liver tumor automatic segmentation method and device
CN113689353A (en) Three-dimensional image enhancement method and device and training method and device of image enhancement model
Xu et al. A Multi-scale Attention-based Convolutional Network for Identification of Alzheimer's Disease based on Hippocampal Subfields
CN112419267A (en) Brain glioma segmentation model and method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination