CN115063393B - Liver and liver tumor automatic segmentation method based on edge compensation attention - Google Patents

Liver and liver tumor automatic segmentation method based on edge compensation attention

Info

Publication number
CN115063393B
CN115063393B (application CN202210785138A)
Authority
CN
China
Prior art keywords
liver
edge
eca
segmentation
attention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210785138.1A
Other languages
Chinese (zh)
Other versions
CN115063393A (en)
Inventor
陈丽芳
罗世勇
谢振平
詹千熠
刘渊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangnan University
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University
Priority to CN202210785138.1A
Publication of CN115063393A
Application granted
Publication of CN115063393B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30056 - Liver; Hepatic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an automatic liver and liver tumor segmentation method based on edge compensation attention, which comprises: collecting original data, dividing the data into a training set and a test set in proportion, and preprocessing both sets; generating an auxiliary supervision label by transforming the supervision information; quickly and coarsely locating the liver with a Unet network, and cropping and interpolating the resulting liver region; building an ECA-Net network; training the ECA-Net network on the preprocessed training set data; and segmenting with the ECA-Net network. The segmentation algorithm of the method requires few sampling operations, and the generated global edge guide map is supervised with edge-sensitive information, so the loss of detail information caused by repeated strided convolutions or pooling is effectively mitigated, and the segmentation of the liver and tumor boundary regions is markedly improved.

Description

Liver and liver tumor automatic segmentation method based on edge compensation attention
Technical Field
The invention relates to the technical field of liver tumor segmentation, and in particular to an automatic liver and liver tumor segmentation method based on edge compensation attention.
Background
The liver is the main metabolic organ of the human body, but it is also the second most common site of tumor lesions after the lung. According to the latest global cancer statistics released by the International Agency for Research on Cancer of the World Health Organization (WHO IARC), in 2020 there were about 411,300 new liver cancer cases in China, accounting for 45.3% of new liver cancer cases worldwide and ranking first in the world. Among global liver cancer deaths, more than 390,000 occurred in China, second only to deaths from lung cancer. Because the liver has a strong compensatory capacity (liver function is unaffected as long as 30%-50% of the liver remains undamaged), there are no pain nerves around the liver, and the symptoms are easily masked by other underlying diseases, liver cancer is difficult to detect in early clinical diagnosis; even in developed countries, fewer than 30% of liver cancer patients are currently found early enough to undergo surgical treatment. Therefore, regular targeted examination of the liver for high-risk populations, with early detection and treatment once disease appears, can effectively reduce mortality from liver diseases such as liver cancer and improve the survival rate and quality of life of patients.
With the development and application of computer vision technology, using computers to assist doctors in image analysis has become a mainstream research direction. The core of computer-assisted liver CT diagnosis is to help doctors clearly segment and identify the liver region: the most important step in assisting the diagnosis of liver diseases (such as liver cancer) with computer technology is to identify and segment the liver region from the whole abdominal CT scan, and then perform targeted analysis and diagnosis. Conventional segmentation algorithms have the drawbacks that part of the detail information is lost through repeated downsampling in the network encoding stage, the segmentation of the liver and tumor boundary regions is unsatisfactory, and resource requirements, training speed, and edge segmentation quality are not well balanced.
Disclosure of Invention
This section is intended to outline some aspects of embodiments of the application and to briefly introduce some preferred embodiments. Some simplifications or omissions may be made in this section, as well as in the abstract and the title of the application; such simplifications or omissions may not be used to limit the scope of the application.
The present invention has been made in view of the above problems and/or the problems associated with existing methods for automatic segmentation of the liver and liver tumors.
Therefore, the problem to be solved by the present invention is how to provide an automatic liver and liver tumor segmentation method based on edge compensation attention.
In order to solve this technical problem, the invention provides the following technical scheme: an automatic liver and liver tumor segmentation method based on edge compensation attention, which comprises: collecting original data, dividing the data into a training set and a test set in proportion, and preprocessing both sets; generating an auxiliary supervision label by transforming the supervision information; quickly and coarsely locating the liver with a Unet network, and cropping and interpolating the resulting liver region; building an ECA-Net network; training the ECA-Net network on the preprocessed training set data; and segmenting with the ECA-Net network.
As a preferred embodiment of the automatic liver and liver tumor segmentation method based on edge compensation attention of the present invention: the preprocessing step includes clipping the HU values to [-100, 240] and then normalizing them to [0, 1].
As a preferred embodiment of the automatic liver and liver tumor segmentation method based on edge compensation attention of the present invention: generating the auxiliary supervision label by transforming the supervision information includes the following steps: the pixels of the tumor region in the original annotation are set to 0 while the pixels of the liver and background regions are left unchanged, yielding GT1; a distance transform is applied to GT1 and the result is normalized to [0, 1]; the distance-transformed and normalized GT1 is subtracted from 1 to obtain the edge-sensitive auxiliary supervision label.
As a preferred embodiment of the automatic liver and liver tumor segmentation method based on edge compensation attention of the present invention: cropping and interpolating the liver region includes unifying the input size to 336×336 and excluding empty slices that do not contain the liver region.
As a preferred embodiment of the automatic liver and liver tumor segmentation method based on edge compensation attention of the present invention: the ECA-Net network includes 3 edge attention compensation modules, 1 decoding module, 4 encoding modules, 1 local-global integration module and 1 multi-level feature integration module; the encoding and decoding features are connected in between through a conversion layer consisting of 3×3 and 1×1 convolutions.
As a preferred embodiment of the automatic liver and liver tumor segmentation method based on edge compensation attention of the present invention: in the ECA-Net network encoding stage, the generated auxiliary supervision labels provide supervision, and a global edge guide map containing detail information is generated by aggregating multi-level features.
As a preferred embodiment of the automatic liver and liver tumor segmentation method based on edge compensation attention of the present invention: in the ECA-Net network decoding stage, the features obtained by up-sampling are supplemented with the global edge guide map to obtain a more accurate segmentation result.
As a preferred embodiment of the automatic liver and liver tumor segmentation method based on edge compensation attention of the present invention: when training the ECA-Net network, image flipping and image rotation are used as data augmentation to alleviate the overfitting problem; training ends once the learning rate has decayed to 10% of its initial value and remains stable, and the ECA-Net network is then tested on the preprocessed test set data.
As a preferred embodiment of the automatic liver and liver tumor segmentation method based on edge compensation attention of the present invention: after segmentation based on the ECA-Net network, maximum connected component labelling is applied to the liver segmentation result, tiny holes inside the liver are filled, and finally tumors outside the liver are removed.
As a preferred embodiment of the automatic liver and liver tumor segmentation method based on edge compensation attention of the present invention: the generation of the global edge guide map is formulated as follows,
f_3^1 = Concat(f_3^* × U_2(f_4), U_2(f_4))
f_2^1 = Concat(f_2^* × U_4(f_4) × U_2(f_3^*), f_3^1)
E_g = Sigmoid(f_2^1)
where f_i is the feature of the i-th level of the backbone network, i = 1, 2, 3, 4, 5; f_i^* denotes f_i after a 1×1 convolution; U_2 and U_4 denote 2× and 4× up-sampling respectively; Sigmoid is the activation function; Concat is the channel-wise concatenation operation; and E_g is the generated global edge guide map.
The invention has the following beneficial effects: the segmentation algorithm requires few sampling operations, and the generated global edge guide map is supervised with edge-sensitive information, so the loss of detail information caused by repeated strided convolutions or pooling is effectively mitigated, and the segmentation of the liver and tumor boundary regions is markedly improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present invention, and that a person skilled in the art may obtain other drawings from them without inventive effort. In the drawings:
Fig. 1 is a schematic diagram of a segmentation flow of an automatic segmentation method for liver and liver tumor based on edge compensated attention.
Fig. 2 is a schematic diagram of the ECA-Net network structure of the automatic liver and liver tumor segmentation method based on edge compensation attention.
Fig. 3 is a schematic diagram of a multi-level feature integration module structure of an automatic liver and liver tumor segmentation method based on edge compensation attention.
Fig. 4 is a schematic structural diagram of an edge-compensated attention module of an automatic liver and liver tumor segmentation method based on edge-compensated attention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may also be practiced in ways other than those described herein, and persons skilled in the art will readily appreciate that the present invention is not limited to the specific embodiments disclosed below.
Further, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic can be included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
Example 1
Referring to figs. 1 to 4, a first embodiment of the present invention provides an automatic liver and liver tumor segmentation method based on edge compensation attention, which includes the following steps:
S1, collecting original data, dividing the original data into a training set and a testing set according to a proportion, and preprocessing the training set and the testing set.
In this embodiment, the original data are abdominal CT scans, and the dataset used in the present invention is the LiTS2017 challenge dataset. The collected original data are divided into a training set and a test set at a ratio of 8:2, and the preprocessing operation is applied to both sets.
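A minimal sketch of this 8:2 split, under the assumption that the split is performed at the volume (patient) level; the identifiers and function name are illustrative, not from the patent:

    import random

    def split_dataset(volume_ids, train_ratio=0.8, seed=42):
        """Shuffle volume identifiers and split them 8:2 into training and test sets."""
        ids = list(volume_ids)
        random.Random(seed).shuffle(ids)
        cut = int(len(ids) * train_ratio)
        return ids[:cut], ids[cut:]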
S2, generating an auxiliary supervision label by transforming the supervision information.
S3, quickly and coarsely locating the liver with a Unet network; after the liver region is obtained, cropping and interpolation are performed to unify the CT slices to a 336×336 input size, and empty slices that do not contain the liver region are removed to reduce the amount of computation.
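A minimal sketch of this cropping and interpolation step, assuming the coarse Unet output is available as a binary liver mask per slice; the bounding-box crop and bilinear zoom are one plausible realisation, and the names are illustrative:

    import numpy as np
    from scipy.ndimage import zoom

    def crop_and_resize(slice_img: np.ndarray, liver_mask: np.ndarray, size: int = 336):
        """Crop a slice to the coarse liver bounding box and resize it to size x size."""
        ys, xs = np.nonzero(liver_mask)
        if ys.size == 0:
            return None  # empty slice with no liver region: excluded
        roi = slice_img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        factors = (size / roi.shape[0], size / roi.shape[1])
        return zoom(roi, factors, order=1)  # bilinear interpolation to 336 x 336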
S4, establishing an ECA-Net network.
It should be noted that the ECA-Net network of the present invention is mainly based on a U-shaped encoding-decoding structure. The encoder is based on DenseNet, but the invention removes the fully connected layer and the last DenseBlock module of the existing U-shaped encoding-decoding structure, and also removes the skip connection of the highest layer, so the highest-level decoding module has no input from a skip connection. As shown in fig. 2, the ECA-Net network of the present invention includes 3 edge attention compensation modules (FCAM), 1 decoding module (decoder block), 4 encoding modules (dense block), 1 local-global integration module (LGIM) and 1 multi-level feature integration module (MLAM); in between, the encoding and decoding features are connected through a conversion layer consisting of 3×3 and 1×1 convolutions, which reduces the number of channels.
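A minimal PyTorch sketch of the conversion layer just described: a 3×3 convolution followed by a 1×1 convolution that reduces the channel count before encoder features are handed to the decoder. The normalization and activation choices are assumptions, as the patent does not specify them:

    import torch.nn as nn

    class ConversionLayer(nn.Module):
        """3x3 convolution for local context, then 1x1 convolution to reduce channels."""
        def __init__(self, in_ch: int, out_ch: int):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(in_ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            return self.block(x)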
S5, training the ECA-Net network based on the preprocessed training set data. When training the ECA-Net network, image flipping and image rotation are used as data augmentation to alleviate the overfitting problem; training ends once the learning rate has decayed to 10% of its initial value and remains stable, and the ECA-Net network is then tested on the preprocessed test set data.
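A minimal sketch of this augmentation and of the stopping check on the learning rate; the flip axes, rotation angles, optimizer access and threshold handling are assumptions:

    import random
    import torch
    import torchvision.transforms.functional as TF

    def augment(image: torch.Tensor, label: torch.Tensor):
        """Apply the same random flip / rotation to an image-label pair."""
        if random.random() < 0.5:
            image, label = TF.hflip(image), TF.hflip(label)
        if random.random() < 0.5:
            image, label = TF.vflip(image), TF.vflip(label)
        angle = random.choice([0, 90, 180, 270])
        if angle:
            image, label = TF.rotate(image, angle), TF.rotate(label, angle)
        return image, label

    def should_stop(optimizer, initial_lr: float) -> bool:
        """Stop training once the learning rate has decayed to 10% of its initial value."""
        return optimizer.param_groups[0]["lr"] <= 0.1 * initial_lr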
S6, segmentation is performed based on the ECA-Net network; in order to prevent holes in the liver, maximum connected component labelling is applied to the liver segmentation result, tiny holes inside the liver are filled, and finally tumors outside the liver region are removed.
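A minimal sketch of this post-processing, assuming binary liver and tumor masks as NumPy arrays and using scipy's connected-component labelling and hole filling; the exact connectivity and structuring elements are assumptions:

    import numpy as np
    from scipy import ndimage

    def postprocess(liver_mask: np.ndarray, tumor_mask: np.ndarray):
        # Keep only the largest connected component of the liver prediction.
        labeled, num = ndimage.label(liver_mask)
        if num > 0:
            sizes = ndimage.sum(liver_mask, labeled, range(1, num + 1))
            liver_mask = labeled == (np.argmax(sizes) + 1)
        # Fill tiny holes inside the liver.
        liver_mask = ndimage.binary_fill_holes(liver_mask)
        # Remove tumor predictions that fall outside the liver region.
        tumor_mask = np.logical_and(tumor_mask, liver_mask)
        return liver_mask.astype(np.uint8), tumor_mask.astype(np.uint8)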
In the preferred embodiment of this section, the preprocessing operation in step S1 involves clipping the HU values to [-100, 240] and then normalizing them to [0, 1].
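A minimal sketch of this windowing step, assuming the CT volume is available as a NumPy array in Hounsfield units; the function name is illustrative:

    import numpy as np

    def preprocess_ct(volume_hu: np.ndarray) -> np.ndarray:
        """Clip HU values to [-100, 240] and rescale the result to [0, 1]."""
        clipped = np.clip(volume_hu, -100.0, 240.0)
        return (clipped + 100.0) / 340.0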
In step S2, generating the auxiliary supervision label by transforming the supervision information comprises the following steps (a code sketch is given after the list):
S21, setting the pixels of the tumor region in the original annotation to 0 while leaving the pixels of the liver and background regions unchanged, obtaining GT1;
S22, applying a distance transform to GT1 and normalizing the result to [0, 1];
S23, subtracting the distance-transformed and normalized GT1 from 1 to obtain the edge-sensitive auxiliary supervision label.
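A minimal sketch of steps S21-S23, assuming the original annotation uses the LiTS convention (0 = background, 1 = liver, 2 = tumor) and a Euclidean distance transform; both the label values and the distance metric are assumptions:

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def edge_sensitive_label(gt: np.ndarray) -> np.ndarray:
        # S21: set tumor pixels (label 2) to 0, keep liver (1) and background (0).
        gt1 = np.where(gt == 2, 0, gt).astype(np.float32)
        # S22: distance transform of the liver foreground, normalized to [0, 1].
        dist = distance_transform_edt(gt1)
        if dist.max() > 0:
            dist = dist / dist.max()
        # S23: subtract from 1 so pixels near liver and tumor boundaries get values close to 1.
        return 1.0 - dist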
Preferably, in the ECA-Net network encoding stage, a global edge guide map containing detail information is generated by aggregating multi-level features under the supervision of the generated auxiliary supervision labels; in the ECA-Net network decoding stage, the features obtained by up-sampling are supplemented with the global edge guide map to obtain a more accurate segmentation result.
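One plausible reading of the decoding-stage compensation, sketched below: the edge guide map is resized to the decoder feature's resolution and used as an attention weight that re-emphasises boundary responses. The residual-attention form and the channel averaging are assumptions; the patent does not specify the exact combination:

    import torch
    import torch.nn.functional as F

    def compensate_with_edges(decoder_feat: torch.Tensor, edge_map: torch.Tensor) -> torch.Tensor:
        """Supplement upsampled decoder features with the global edge guide map."""
        # Resize the edge guide map to the decoder feature's spatial size.
        eg = F.interpolate(edge_map, size=decoder_feat.shape[-2:],
                           mode="bilinear", align_corners=False)
        # Assumption: average over channels to obtain a single attention map.
        eg = eg.mean(dim=1, keepdim=True)
        # Residual attention: keep the original features and boost boundary regions.
        return decoder_feat + decoder_feat * eg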
It should be noted that, in the field of deep learning, shallow feature maps contain more detail information but lack high-level semantic information, while deep feature maps have rich high-level semantics but lose part of the content information and positional relations through repeated downsampling, so the fused use of multi-level features is particularly important for a fine semantic segmentation task. Meanwhile, edge information can provide detail constraints for the whole segmentation process, but edge information exists only in low-level features; therefore, the invention uses three levels of multi-level features to generate a boundary-sensitive global edge guide map that assists the subsequent segmentation process. Specifically, taking a CT slice input of resolution h × w, denoted I, five levels of features can be obtained from the backbone network, expressed as {f_k}, where k takes the value 1, 2, 3, 4 or 5, i.e. {f_i, i = 1, 2, 3, 4, 5}. As shown in fig. 3, the multi-level feature integration module uses three levels of features to generate the global edge guide map. As shown in fig. 2, f_2 and f_3 are each passed through a 1×1 convolution to reduce the number of channels, denoted f_2^* and f_3^* respectively, and the feature fusion process can be expressed as (a code sketch is given after the formulas):
f_3^1 = Concat(f_3^* × U_2(f_4), U_2(f_4))
f_2^1 = Concat(f_2^* × U_4(f_4) × U_2(f_3^*), f_3^1)
E_g = Sigmoid(f_2^1)
where U_2 and U_4 denote 2× and 4× up-sampling respectively, Sigmoid is the activation function, Concat is the channel-wise concatenation operation, and E_g is the generated global edge guide map.
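A minimal PyTorch sketch of this fusion, with f2, f3, f4 taken as the backbone features; the common channel count after the 1×1 convolutions, the projection of f4, and the upsampling of f_3^1 before the second concatenation are assumptions needed to make the shapes consistent:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiLevelFeatureIntegration(nn.Module):
        """Sketch of the multi-level feature integration module producing E_g."""
        def __init__(self, c2: int, c3: int, c4: int, mid_ch: int = 64):
            super().__init__()
            # 1x1 convolutions reducing f2 and f3 to a common channel count.
            self.conv2 = nn.Conv2d(c2, mid_ch, kernel_size=1)
            self.conv3 = nn.Conv2d(c3, mid_ch, kernel_size=1)
            # Assumption: f4 is also projected so that the element-wise products are defined.
            self.conv4 = nn.Conv2d(c4, mid_ch, kernel_size=1)

        def forward(self, f2, f3, f4):
            def up(x, s):
                return F.interpolate(x, scale_factor=s, mode="bilinear", align_corners=False)
            f2s, f3s, f4p = self.conv2(f2), self.conv3(f3), self.conv4(f4)
            # f_3^1 = Concat(f_3^* x U_2(f_4), U_2(f_4))   -- at f3's resolution
            f3_1 = torch.cat([f3s * up(f4p, 2), up(f4p, 2)], dim=1)
            # f_2^1 = Concat(f_2^* x U_4(f_4) x U_2(f_3^*), f_3^1); f_3^1 is upsampled
            # to f2's resolution here, which the formula leaves implicit (assumption).
            f2_1 = torch.cat([f2s * up(f4p, 4) * up(f3s, 2), up(f3_1, 2)], dim=1)
            return torch.sigmoid(f2_1)  # E_g, the global edge guide map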
In summary, the segmentation algorithm of the invention uses a two-dimensional segmentation network that is easy to train, has few parameters and requires few sampling operations; by supervising the generated global edge guide map with edge-sensitive information, the loss of detail information caused by repeated strided convolutions or pooling is effectively mitigated, and the segmentation of the liver and tumor boundary regions is markedly improved.
Example 2
Referring to Table 1, in a second embodiment of the present invention, the automatic liver and liver tumor segmentation method based on edge compensation attention is evaluated; in order to verify the beneficial effects of the invention, a comparative experiment provides scientific demonstration.
Table 1. Comparison of Dice indices between the proposed method and recent methods on LiTS2017
As can be seen from Table 1, with the automatic liver and liver tumor segmentation method based on edge compensation attention, the Dice per case and Dice global scores are clearly better than those of existing segmentation methods such as FEDNet, MANet, DeepLabv3+, RAUNet and PolyUNet, and the segmentation performance is effectively improved.
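For reference, a minimal sketch of the Dice coefficient underlying Table 1 ("Dice per case" averages the per-volume scores, "Dice global" pools all voxels); the smoothing constant is an assumption:

    import numpy as np

    def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
        """Dice similarity coefficient between two binary masks."""
        pred, target = pred.astype(bool), target.astype(bool)
        inter = np.logical_and(pred, target).sum()
        return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

    def dice_per_case(preds, targets) -> float:
        """Average Dice over individual volumes (cases)."""
        return float(np.mean([dice(p, t) for p, t in zip(preds, targets)]))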
It should be noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that the technical solution of the present invention may be modified or substituted without departing from the spirit and scope of the technical solution of the present invention, which is intended to be covered in the scope of the claims of the present invention.

Claims (6)

1. An automatic liver and liver tumor segmentation method based on edge compensation attention, characterized in that it comprises the steps of:
Collecting original data, dividing the original data into a training set and a testing set according to a proportion, and preprocessing the training set and the testing set;
Generating an auxiliary supervision tag by transforming the supervision information;
performing rapid rough positioning on the liver through Unet network, and performing clipping and interpolation treatment after obtaining a liver region;
establishing an ECA-Net network;
training the ECA-Net network based on the preprocessed training set data;
Segmentation is performed based on an ECA-Net network;
in the ECA-Net network encoding stage, a global edge guide map containing detail information is generated by aggregating multi-level features under the supervision of the generated auxiliary supervision labels, including:
a CT slice input with resolution h × w is denoted as I, and five levels of features are obtained from the backbone network, expressed as {f_k}, where k takes the value 1, 2, 3, 4 or 5, i.e. {f_i, i = 1, 2, 3, 4, 5}; the multi-level feature integration module generates the global edge guide map using three levels of features; f_2 and f_3 are each passed through a 1×1 convolution to reduce the number of channels, denoted f_2^* and f_3^* respectively, and the feature fusion process is expressed as:
f_3^1 = Concat(f_3^* × U_2(f_4), U_2(f_4))
f_2^1 = Concat(f_2^* × U_4(f_4) × U_2(f_3^*), f_3^1)
E_g = Sigmoid(f_2^1)
where f_i is the feature of the i-th level of the backbone network, i = 1, 2, 3, 4, 5; f_i^* denotes f_i after a 1×1 convolution; U_2 and U_4 denote 2× and 4× up-sampling respectively; Sigmoid is the activation function; Concat is the channel-wise concatenation operation; and E_g is the generated global edge guide map;
the preprocessing step includes clipping the HU values to [-100, 240] and then normalizing them to [0, 1];
generating the auxiliary supervision label by transforming the supervision information comprises the following steps:
setting the pixels of the tumor region in the original annotation to 0 while leaving the pixels of the liver and background regions unchanged, obtaining GT1;
applying a distance transform to GT1 and normalizing the result to [0, 1];
and subtracting the distance-transformed and normalized GT1 from 1 to obtain the edge-sensitive auxiliary supervision label.
2. The method for automatic segmentation of liver and liver tumors based on edge-compensated attention as claimed in claim 1, wherein: cropping and interpolating the liver region includes unifying the input size to 336×336 and excluding empty slices that do not contain the liver region.
3. The method for automatic segmentation of liver and liver tumors based on edge-compensated attention as claimed in claim 2, wherein: the ECA-Net network includes 3 edge attention compensation modules, 1 decoding module, 4 encoding modules, 1 local-global integration module, and 1 multi-level feature integration module, with the encoding and decoding features connected in between by a conversion layer comprising 3×3 and 1×1 convolutions.
4. The method for automatic segmentation of liver and liver tumors based on edge-compensated attention as claimed in claim 3, wherein: in the ECA-Net network decoding stage, the features obtained by up-sampling are supplemented with the global edge guide map to obtain a more accurate segmentation result.
5. The method for automatic segmentation of liver and liver tumors based on edge-compensated attention as claimed in claim 4, wherein: when training the ECA-Net network, image flipping and image rotation are used as data augmentation to alleviate the overfitting problem; training ends once the learning rate has decayed to 10% of its initial value and remains stable, and the ECA-Net network is then tested on the preprocessed test set data.
6. The method for automatic segmentation of liver and liver tumors based on edge-compensated attention as claimed in claim 5, wherein: after segmentation based on the ECA-Net network, maximum connected component labelling is applied to the liver segmentation result, tiny holes inside the liver are filled, and finally tumors outside the liver are removed.
CN202210785138.1A 2022-06-29 2022-06-29 Liver and liver tumor automatic segmentation method based on edge compensation attention Active CN115063393B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210785138.1A CN115063393B (en) 2022-06-29 2022-06-29 Liver and liver tumor automatic segmentation method based on edge compensation attention

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210785138.1A CN115063393B (en) 2022-06-29 2022-06-29 Liver and liver tumor automatic segmentation method based on edge compensation attention

Publications (2)

Publication Number Publication Date
CN115063393A CN115063393A (en) 2022-09-16
CN115063393B (en) 2024-06-07

Family

ID=83204430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210785138.1A Active CN115063393B (en) 2022-06-29 2022-06-29 Liver and liver tumor automatic segmentation method based on edge compensation attention

Country Status (1)

Country Link
CN (1) CN115063393B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401480A (en) * 2020-04-27 2020-07-10 上海市同济医院 Novel breast MRI (magnetic resonance imaging) automatic auxiliary diagnosis method based on fusion attention mechanism
CN113436173A (en) * 2021-06-30 2021-09-24 陕西大智慧医疗科技股份有限公司 Abdomen multi-organ segmentation modeling and segmentation method and system based on edge perception
CN114677511A (en) * 2022-03-23 2022-06-28 三峡大学 Lung nodule segmentation method combining residual ECA channel attention UNet with TRW-S

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401480A (en) * 2020-04-27 2020-07-10 上海市同济医院 Novel breast MRI (magnetic resonance imaging) automatic auxiliary diagnosis method based on fusion attention mechanism
CN113436173A (en) * 2021-06-30 2021-09-24 陕西大智慧医疗科技股份有限公司 Abdomen multi-organ segmentation modeling and segmentation method and system based on edge perception
CN114677511A (en) * 2022-03-23 2022-06-28 三峡大学 Lung nodule segmentation method combining residual ECA channel attention UNet with TRW-S

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Multi-scale encoder-decoder segmentation of COVID-19 lung CT images; Lu Qianjie et al.; Journal of Image and Graphics; Vol. 27, No. 3; pp. 827-837 *
Jianning Chi et al.; X-Net: Multi-branch UNet-like network for liver and tumor segmentation from 3D abdominal CT scans; Neurocomputing; 2021; pp. 81-96 *
X-Net: Multi-branch UNet-like network for liver and tumor segmentation from 3D abdominal CT scans; Jianning Chi et al.; Neurocomputing; pp. 81-96 *
Automatic liver tumor segmentation method based on feature fusion; Liu Yiming et al.; Laser & Optoelectronics Progress; Vol. 58, No. 14; pp. 1417001-1 to 1417001-9 *
A review of deep learning segmentation methods for liver tumor CT images; Ma Jinlin et al.; Journal of Image and Graphics; 2020-10-16, No. 10; full text *

Also Published As

Publication number Publication date
CN115063393A (en) 2022-09-16

Similar Documents

Publication Publication Date Title
Borovsky et al. Lesion correlates of conversational speech production deficits
CN109949309A (en) A kind of CT image for liver dividing method based on deep learning
CN111260705B (en) Prostate MR image multi-task registration method based on deep convolutional neural network
WO2022121100A1 (en) Darts network-based multi-modal medical image fusion method
CN113724206B (en) Fundus image blood vessel segmentation method and system based on self-supervision learning
CN112862805B (en) Automatic auditory neuroma image segmentation method and system
CN111047605A (en) Construction method and segmentation method of vertebra CT segmentation network model
CN112365980A (en) Brain tumor multi-target point auxiliary diagnosis and prospective treatment evolution visualization method and system
CN112651929B (en) Medical image organ segmentation method and system based on three-dimensional full-convolution neural network and region growing
CN115564712B (en) Capsule endoscope video image redundant frame removing method based on twin network
CN116883341A (en) Liver tumor CT image automatic segmentation method based on deep learning
Ruan et al. An efficient tongue segmentation model based on u-net framework
CN109949299A (en) A kind of cardiologic medical image automatic segmentation method
CN115063393B (en) Liver and liver tumor automatic segmentation method based on edge compensation attention
CN116258685A (en) Multi-organ segmentation method and device for simultaneous extraction and fusion of global and local features
CN111292289A (en) CT lung tumor segmentation method, device, equipment and medium based on segmentation network
CN112669327B (en) Magnetic resonance image segmentation system and segmentation method thereof
CN114677389A (en) Depth semi-supervised segmentation children brain MRI demyelinating lesion positioning method
CN114400086A (en) Articular disc forward movement auxiliary diagnosis system and method based on deep learning
CN112967269A (en) Pulmonary nodule identification method based on CT image
CN116778157B (en) Cross-domain segmentation method and system for moment-invariant contrast cyclic consistency countermeasure network
CN113781636B (en) Pelvic bone modeling method and system, storage medium, and computer program product
Shen et al. The network algorithm for polyp image segmentation with fused attention mechanism
CN116843619A (en) Prostate cancer diagnosis model based on multiparameter ultrasonic image and training method thereof
Meng et al. An Efficient Spine Segmentation Method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant