CN116309621A - Liver tumor segmentation method and device based on signed distance - Google Patents


Info

Publication number
CN116309621A
CN116309621A (application CN202310269462.2A)
Authority
CN
China
Prior art keywords
segmentation
liver tumor
distance
liver
loss function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310269462.2A
Other languages
Chinese (zh)
Inventor
包锐钻
卜佳俊
顾静军
王旭敏
王叶超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202310269462.2A priority Critical patent/CN116309621A/en
Publication of CN116309621A publication Critical patent/CN116309621A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30056Liver; Hepatic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a device for liver tumor segmentation based on signed distance. The method comprises: acquiring liver nuclear magnetic resonance image data and performing manual segmentation annotation to obtain a training set; inputting the training set into a deep learning model, respectively performing image segmentation prediction and signed distance prediction on the training set, and training the deep learning model with a loss function computed against the ground truth to obtain a liver tumor segmentation model; and inputting the liver nuclear magnetic resonance image data to be segmented into the liver tumor segmentation model to obtain a liver tumor segmentation result. The method applies the signed distance to a deep-learning-based segmentation model so that the model performs multi-task learning: pixel-wise classification is learned through the conventional segmentation task, while the shape, position, and other information of the liver tumor are learned through the signed distance map regression task. The two tasks complement and promote each other, so that the performance of the segmentation model is significantly improved.

Description

Liver tumor segmentation method and device based on signed distance
Technical Field
The invention relates to the technical field of computers, in particular to a liver tumor segmentation method and device based on signed distance.
Background
Liver tumor is the sixth most common tumor in the world and the third leading cause of cancer-related death; cases in China account for almost half of the world's total, and liver tumor has become the third most common tumor among men in China. Accurate diagnosis of liver tumors is important for their treatment and for improving patient survival. Nuclear magnetic resonance imaging is an important tool for liver tumor diagnosis due to its non-invasiveness, high spatial resolution, high anatomical contrast, and excellent soft tissue imaging. Accurate segmentation of liver tumors plays a key role in subsequent accurate diagnosis of disease, quantitative evaluation of lesions, and the formulation of surgical plans.
In recent years, thanks to the rapid development of artificial intelligence technology, segmentation methods based on deep learning have achieved good results on liver tumor segmentation tasks. Many models use modules such as skip connections, dense connections, dilated convolutions, pyramid structures, and attention mechanisms to improve the performance of the segmentation model, but these improvements neglect the geometric prior information in images, such as shape and position. In medical images, different organs, tissues, or lesions often have relatively fixed anatomical positions, structures, and shapes: for example, the liver is located in the right upper abdomen, the two kidneys are generally symmetrical about the central axis, tumor tissue usually presents a round or oval shape, and blood vessels present elongated tubular structures. How to use this prior information effectively has therefore become one approach to improving segmentation algorithms.
Disclosure of Invention
The invention provides a liver tumor segmentation method and device based on signed distance. The signed distance is applied to a deep-learning-based segmentation model so that the model performs multi-task learning: pixel-wise classification is learned through the conventional segmentation task, while the shape, position, and other information of the liver tumor are learned through the signed distance map regression task; the two tasks complement and promote each other, so that the performance of the segmentation model is significantly improved. At the same time, the invention addresses the lack of geometric prior information in existing deep-learning-based segmentation algorithms.
The specific technical scheme is as follows:
A liver tumor segmentation method based on signed distance, comprising:
s1: acquiring liver nuclear magnetic resonance image data, and manually segmenting and labeling an original image according to liver tumors in the image to obtain a training set;
s2: inputting the training set into a deep learning model, respectively carrying out image segmentation prediction and signed distance prediction on the training set, and training the deep learning model according to a loss function computed between the prediction results and the ground truth to obtain a liver tumor segmentation model;
s3: and inputting the liver nuclear magnetic resonance image data to be segmented into the liver tumor segmentation model to obtain a liver tumor segmentation result.
Further, the liver tumor segmentation model mainly comprises:
the encoder, used for preprocessing the liver nuclear magnetic resonance images in the training set and extracting a feature map;
a first decoder, used for receiving the feature map transmitted from the encoder and decoding it to output a predicted segmentation probability map;
a second decoder, used for receiving the feature map transmitted from the encoder and decoding it to output a predicted signed distance map;
and the distance transform module, used for converting the binarized ground truth into the ground truth of the signed distance map.
Further, the step S2 specifically includes:
s2-1: after the training set is input into the deep learning model, the encoder preprocesses the liver nuclear magnetic resonance image to obtain a feature map;
s2-2: the feature map enters the first decoder and the second decoder respectively; the first decoder outputs a predicted segmentation probability map, and the second decoder outputs a predicted signed distance map;
s2-3: calculating the deviation between the predicted segmentation probability map and the ground truth to obtain a segmentation loss function; meanwhile, calculating the deviation between the predicted signed distance map and the ground truth signed distance map to obtain a signed distance map regression loss function;
s2-4: adding the segmentation loss function and the signed distance map regression loss function to obtain a total loss function; training the deep learning model with the total loss function to obtain the liver tumor segmentation model.
Further, in step S2-3, the segmentation loss function is the sum of a cross entropy loss function and a dice loss function;
the formula is as follows:
L_seg = CELoss(p, y) + DiceLoss(p, y)    (1)

In formula (1), L_seg is the segmentation loss function, CELoss(p, y) is the cross-entropy loss function, DiceLoss(p, y) is the dice loss function, N represents the total number of pixel points, i indexes the i-th pixel, p is the predicted probability map, and y is the one-hot label.
the cross entropy loss function is:
CELoss(p, y) = -(1/N)·Σ_{i=1}^{N} Σ_{c=1}^{C} y_{i,c}·log(p_{i,c})    (2)

In formula (2), N represents the total number of pixel points, i indexes the i-th pixel, C represents the number of segmentation categories, p is the predicted probability map, and y is the one-hot label.
The dice loss function is:

DiceLoss(p, y) = 1 - (1/C)·Σ_{c=1}^{C} [ 2·Σ_{i=1}^{N} p_{i,c}·y_{i,c} / ( Σ_{i=1}^{N} p_{i,c} + Σ_{i=1}^{N} y_{i,c} ) ]    (3)

In formula (3), p is the predicted probability map, y is the one-hot label, C is the number of categories, N is the total number of pixel points, and i indexes the i-th pixel.
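As an illustration, formulas (1) to (3) can be sketched in plain Python. This is a minimal sketch, not part of the patent: the function names, the toy example, and the small epsilon/clamp guards added for numerical safety are assumptions of this illustration.

```python
import math

def ce_loss(p, y):
    """Cross-entropy loss, formula (2): mean over pixels of -sum_c y_ic * log(p_ic).
    p and y are lists of per-pixel class vectors (p a softmax output, y one-hot).
    The clamp to 1e-12 avoids log(0) and is an added safeguard."""
    n = len(p)
    total = 0.0
    for pi, yi in zip(p, y):
        total += -sum(yc * math.log(max(pc, 1e-12)) for pc, yc in zip(pi, yi))
    return total / n

def dice_loss(p, y, eps=1e-8):
    """Dice loss, formula (3): one minus the mean per-class dice overlap."""
    n, c = len(p), len(p[0])
    loss = 0.0
    for k in range(c):
        inter = sum(p[i][k] * y[i][k] for i in range(n))
        denom = sum(p[i][k] for i in range(n)) + sum(y[i][k] for i in range(n))
        loss += 1.0 - 2.0 * inter / (denom + eps)
    return loss / c

def seg_loss(p, y):
    """Segmentation loss, formula (1): sum of cross-entropy and dice losses."""
    return ce_loss(p, y) + dice_loss(p, y)

# Example: 4 pixels, 2 classes (background, liver tumor)
p = [[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.2, 0.8]]
y = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
loss = seg_loss(p, y)
```

A perfect prediction (p equal to the one-hot y) drives both terms to essentially zero, which is a quick sanity check on the implementation.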
Further, in step S2-2, the activation function adopted by the second decoder is the tanh function, and the output signed distance map is d_i ∈ [-1, 1]^{H×W}, where d_i denotes the signed distance map, H the height of the image, W the width of the image, and i = 1…N the index of the i-th sample.
Further, in step S2-3, the signed distance map regression loss uses a mean square error loss function, with the following formula:

L_sdm = (1/N)·Σ_{i=1}^{N} (d_i - d̂_i)²    (4)

In formula (4), L_sdm represents the signed distance regression loss function; N represents the total number of pixel points, i indexes the i-th pixel point, d̂_i represents the ground truth signed distance, and d_i represents the predicted signed distance map. Following the convention of formula (5), the signed distance is negative inside the liver tumor and positive outside it.
Further, in step S2-3, the distance transform module converts the binarized ground truth into the ground truth of the signed distance map by the following method: first, the binary ground truth image is input into the distance transform module, and an initial signed distance map is obtained through formula (5); then, the initial signed distance map is normalized to obtain the ground truth of the signed distance map.

The formula is as follows:

SDM(u) = { -inf_{v∈∂G} ‖u - v‖₂,  u ∈ G_in
         { +inf_{v∈∂G} ‖u - v‖₂,  u ∈ G_out
         {  0,                     u ∈ ∂G          (5)

In formula (5), SDM(u) represents the transformed initial signed distance map; inf(·) represents the infimum; u represents a pixel point; v represents a pixel point located on the liver tumor boundary; u ∈ G_in indicates that pixel u lies in the region inside the liver tumor boundary; u ∈ G_out indicates that pixel u lies in the region outside the liver tumor boundary; ∂G represents the boundary of the liver tumor; G_in represents the region inside the liver tumor boundary, and G_out represents the region outside the liver tumor boundary.
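To make formula (5) concrete, the following is a brute-force sketch for a small binary mask. The 4-neighbourhood boundary convention and the function name are assumptions of this illustration (the patent does not fix them), and production code would use an optimized distance transform rather than this quadratic scan:

```python
import math

def signed_distance_map(mask):
    """Brute-force signed distance map per formula (5) on a small binary mask.
    mask[r][c] == 1 inside the tumor, 0 outside. Boundary pixels are taken to
    be inside pixels that touch an outside pixel (4-neighbourhood), which is
    an assumed convention. Returns negative distances inside, positive
    distances outside, and 0 on the boundary itself."""
    h, w = len(mask), len(mask[0])

    def neighbours(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                yield rr, cc

    boundary = [(r, c) for r in range(h) for c in range(w)
                if mask[r][c] == 1
                and any(mask[rr][cc] == 0 for rr, cc in neighbours(r, c))]

    sdm = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # infimum over boundary pixels of the Euclidean distance
            d = min(math.hypot(r - br, c - bc) for br, bc in boundary)
            sdm[r][c] = -d if mask[r][c] == 1 else d
    return sdm
```

On a 5×5 mask with a 3×3 block of ones, the centre pixel gets -1 (one step from the boundary), boundary pixels get 0, and the corners get positive distances.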
Further, the normalization adopts a truncated normalization, with the following specific formulas:

d̂(u) = Normalize(SDM(u))    (6)

In formula (6), d̂(u) represents the ground truth of the signed distance map; Normalize represents the normalization function; u, v, G_in, G_out, and ∂G are defined as in formula (5).

x' = { x / |min(x)|,  x < 0
     { x / max(x),    x ≥ 0          (7)

In formula (7), x' represents the normalized output; x represents the input to be normalized; max(x) represents the maximum value of x and min(x) the minimum value of x, so that the output is scaled into the interval [-1, 1].
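A sketch of the truncated normalization of formulas (6) and (7), under the reconstruction above in which negative values are scaled by |min| and positive values by max so the result lies in [-1, 1]; the guards against all-zero sides are an added assumption:

```python
def normalize_sdm(sdm_flat):
    """Truncated normalization per formula (7) on a flattened signed
    distance map: negative values are divided by |min|, non-negative
    values by max, mapping the whole map into [-1, 1]."""
    lo = min(sdm_flat)
    hi = max(sdm_flat)
    out = []
    for x in sdm_flat:
        if x < 0:
            out.append(x / abs(lo) if lo < 0 else 0.0)
        else:
            out.append(x / hi if hi > 0 else 0.0)
    return out
```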
Further, in step S2-4, the formula of the total loss function is as follows:

L = L_seg + λ·L_sdm    (8)

In formula (8), L represents the total loss function, L_seg represents the segmentation loss function, L_sdm represents the signed distance map regression loss function, and λ represents the weight balancing the segmentation loss and the signed distance map regression loss.
Further, in S1, the data of the training set is:

D = {(x_i, y_i)},  i = 1…N

where x_i ∈ R^{H×W×C} represents an image and R the set of real numbers; y_i ∈ {0,1}^{H×W} represents the manual pixel-wise segmentation annotation of the liver tumor; H represents the height of the image, W the width of the image, and C the number of channels of the image; D represents the training set; and i = 1…N indexes the i-th sample.
The invention also provides a device for liver tumor segmentation based on signed distance, mainly comprising: an encoder, a first decoder, a second decoder, and a distance transform module;

the encoder is used for preprocessing the liver nuclear magnetic resonance images in the training set and extracting a feature map; the first decoder is used for receiving the feature map transmitted by the encoder and decoding it to output a predicted segmentation probability map; the second decoder is used for receiving the feature map transmitted by the encoder and decoding it to output a predicted signed distance map; the distance transform module is used for converting the binarized ground truth into the ground truth of the signed distance map.
The encoder comprises convolution modules and downsampling modules. The input image first passes through a convolution module to obtain a feature map, then passes through a downsampling module to obtain a feature map with width and height halved, and this alternation of convolution and downsampling modules is then repeated. Specifically, the encoder includes five convolution modules and four downsampling modules. Each convolution module contains two convolution operations of size 3×3, each followed by batch normalization and a ReLU activation function. The padding of the convolution operations is set to 1 and the stride to 1, so the width and height of the feature map are unchanged. The first convolution operation doubles the number of channels of the feature map and the second keeps it unchanged. The downsampling module uses a 2×2 max pooling operation with stride 2 to halve the width and height of the feature map, leaving the number of channels unchanged. The number of channels output by the first convolution module is set to 64, and is then continuously increased to enhance the expressive capacity of the encoder: the second convolution module outputs 128 channels, the third 256, the fourth 512, and the fifth 1024.
The two decoders have the same structure but do not share parameters. The feature map is gradually restored to the size of the original input image in the decoders: the first decoder outputs a predicted segmentation mask, using a softmax activation function, and the second decoder outputs a predicted signed distance map, using a tanh activation function. The segmentation model thus performs multi-task learning, in which the first decoder carries out the pixel-wise classification segmentation task and the second decoder carries out the signed distance map regression task.
The decoder comprises upsampling modules and convolution modules; because the decoder needs to continually restore the size of the feature map, skip connections are used. First, the feature map passes through an upsampling module, which uses a transposed convolution with kernel size 2×2, stride 2, and output channel count half the input channel count, so that the width and height of the feature map are doubled and the number of channels is halved; the upsampled feature map is then concatenated along the channel dimension with the corresponding same-level feature map from the encoder. The resulting feature map passes through a convolution module similar to that of the encoder, also composed of two convolution operations, each followed by batch normalization and a ReLU activation function; the first convolution operation halves the number of channels and the second keeps it unchanged. Preferably, the decoder comprises four upsampling modules and four convolution modules, finally yielding a feature map with the same width and height as the original image. Finally, this feature map undergoes one more convolution operation to reduce its number of channels to the required number.
Further, in the upper branch of the model, since the image segmentation task is performed, the number of categories is 2 (background and liver tumor), so the number of channels output by the convolution operation is 2, followed by the softmax activation function, yielding the output segmentation probability map. In the lower branch of the model, since the signed distance map regression task is performed, the number of channels output by the convolution operation is 1, followed by the tanh activation function to limit the output range to [-1, 1], yielding the output signed distance map.
Compared with the prior art, the invention has the following beneficial effects:
the method applies the symbol distance to the segmentation model based on the deep learning, so that the segmentation model performs multi-task learning, not only can the pixel-by-pixel classification be learned through the conventional segmentation task, but also the shape, the position and other information of the liver tumor can be learned through the symbol distance map regression task, and the two tasks complement and promote each other, so that the performance of the segmentation model is obviously improved.
Drawings
Fig. 1 is a general flow chart of the liver tumor segmentation method based on signed distance.
Fig. 2 is a model structure diagram of the liver tumor segmentation method based on signed distance according to the present invention.
Fig. 3 shows the detailed structure of the encoder and decoder of the present invention.
FIG. 4 is a visualization of a signed distance map;
wherein the light-colored region in the third column represents the inside of the tumor, and the brighter the pixel, the farther it is from the tumor boundary; the gray region represents the outside of the tumor, and the darker the gray, the farther the pixel is from the tumor boundary.
FIG. 5 shows the visual results of the segmentation model of the present invention and the comparative experiment models in application example 1; wherein "Ours" refers to the model provided in embodiment 1 of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The segmentation metrics referred to in the following application examples, namely Dice Score, Jaccard Score, 95% Hausdorff Distance (HD95), and Average Surface Distance (ASD), are defined in references 1 to 5;
reference document:
[1] Jha, D., Smedsrud, P.H., Riegler, M.A., et al. ResUNet++: An Advanced Architecture for Medical Image Segmentation[C/OL] // 2019 IEEE International Symposium on Multimedia (ISM). San Diego, CA, USA: IEEE, 2019: 225-2255. https://ieeexplore.ieee.org/document/8959021/;
[2] Jadon, S. A survey of loss functions for semantic segmentation[J/OL]. 2020 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), 2020: 1-7.
[3] Nai, Y.H., Teo, B.W., Tan, N.L., et al. Comparison of metrics for the evaluation of medical segmentations using prostate MRI dataset[J/OL]. Computers in Biology and Medicine, 2021, 134: 104497.
[4] Taha, A.A. & Hanbury, A. Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool[J/OL]. BMC Medical Imaging, 2015, 15(1): 29.
[5] Ibtehaz, N. & Rahman, M.S. MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation[J/OL]. Neural Networks, 2020, 121: 74-87.
example 1
Fig. 1 and fig. 2 show the overall flowchart (fig. 1) and the model structure diagram (fig. 2) of the liver tumor segmentation method based on signed distance according to an embodiment of the present invention; the specific method is as follows:
s1: acquiring liver nuclear magnetic resonance image data, and manually segmenting and labeling an original image according to liver tumors in the image to obtain a training set;
In order to ensure the accuracy of the annotation information, the liver nuclear magnetic resonance image data were annotated by imaging physicians of a Grade-III A (tertiary) hospital and reviewed by senior physicians.
The training set data is:

D = {(x_i, y_i)},  i = 1…N

where x_i ∈ R^{H×W×C} represents an image and R the set of real numbers; y_i ∈ {0,1}^{H×W} represents the manual pixel-wise segmentation annotation of the liver tumor; H represents the height of the image, W the width of the image, and C the number of channels of the image; D represents the training set; and i = 1…N indexes the i-th sample.
S2: inputting the training set into the deep learning model, respectively carrying out image segmentation prediction and signed distance prediction on the training set, and training the deep learning model according to a loss function computed between the prediction results and the ground truth to obtain a liver tumor segmentation model.
The specific method comprises the following steps:
s2-1: after the training set is input into a deep learning model, preprocessing a liver nuclear magnetic resonance image by an encoder to obtain a feature map;
as shown in fig. 2, the deep learning model of the present invention uses an encoder-decoder structure, and an input image is subjected to an encoder to obtain a feature map, and then the feature map is respectively entered into two decoders (a first decoder and a second decoder).
The encoder receives an image of size H×W×C as input, where H represents the height of the image, W the width, and C the number of channels; the width and height of the feature map are then successively halved by the downsampling modules while the number of channels is doubled by the convolution modules at the different levels.

Specifically, the input image first passes through a convolution module to obtain a feature map, then passes through a downsampling module to obtain a feature map with width and height halved, and this alternation of convolution and downsampling modules is then repeated. So that the encoder can effectively extract image features, it includes four downsampling modules in total. Finally, the feature map passes through one more convolution module to give the output of the encoder. Each convolution module contains two convolution operations of size 3×3, each followed by batch normalization and a ReLU activation function. The padding of the convolution operations is set to 1 and the stride to 1, so the width and height of the feature map are unchanged. The first convolution operation doubles the number of channels of the feature map and the second keeps it unchanged. The downsampling module uses a 2×2 max pooling operation with stride 2 to halve the width and height of the feature map, leaving the number of channels unchanged. The number of channels output by the first convolution module is set to 64, and is then continuously increased to enhance the expressive capacity of the encoder: the second convolution module outputs 128 channels, the third 256, the fourth 512, and the fifth 1024.
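The channel and size progression described above can be traced with a small helper. This is a sketch for checking shapes only; the function name is illustrative:

```python
def encoder_shapes(h, w):
    """Trace feature-map shapes through the described encoder: five
    convolution modules outputting 64, 128, 256, 512, 1024 channels,
    interleaved with four 2x2 max-pool downsamplings (stride 2) that halve
    width and height. Returns (channels, height, width) after each
    convolution module."""
    shapes = []
    for i, ch in enumerate([64, 128, 256, 512, 1024]):
        shapes.append((ch, h, w))   # convolution module preserves spatial size
        if i < 4:                   # downsample after all but the last module
            h, w = h // 2, w // 2
    return shapes
```

For a 256×256 input this yields (64, 256, 256) after the first module and (1024, 16, 16) at the encoder output.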
S2-2: the feature map enters a first decoder and a second decoder respectively; outputting a predicted segmentation probability map through a first decoder, and outputting a predicted symbol distance map through a second decoder;
the two decoders have the same structure but do not share parameters, the feature map is gradually restored to the size of the original input image in the decoders, the first decoder outputs a predicted segmentation mask, the used activation function is a softmax function, the second decoder outputs a predicted symbol distance map, and the used activation function is a tanh function. The segmentation model is actually used for multitasking learning, wherein a first decoder performs a segmentation task of pixel-by-pixel classification, and a second decoder performs a regression task of a symbol distance graph.
The decoder needs to continuously restore the size of the feature map and use a jump connection. Firstly, a feature map passes through an up-sampling module, the up-sampling module uses a transposed convolution operation, the convolution kernel size of the transposed convolution is set to be 2 multiplied by 2, the step length is set to be 2, and the number of channels to be output is half of the number of channels to be input, so that the width and the height of the feature map are doubled and the number of channels is half of the number of the original channels, and then the feature map after up-sampling and the corresponding feature map of the same layer of an encoder part are spliced along the dimension of the channels; the obtained characteristic diagram passes through a convolution module, the convolution module is similar to an encoder, the convolution module is also composed of two convolution operations, the convolution operation is also followed by batch normalization and ReLU activation functions, the number of channels is halved by the first convolution operation, and the number of channels is kept unchanged by the second convolution operation. Thus, the decoder comprises four up-sampling modules and four convolution modules, and finally the feature map with the same width and height as the original image is obtained.
Finally, one more convolution reduces the number of channels of the feature map to the required value. As shown in fig. 2, the upper branch of the model performs the image segmentation task with 2 categories (background and liver tumor), so this convolution outputs 2 channels, followed by a softmax activation to produce the output segmentation probability map. The lower branch performs the signed distance map regression task, so the convolution outputs 1 channel, followed by a tanh activation that limits the output range to [-1, 1], producing the output signed distance map.
FIG. 3 shows the detailed structure of the encoder and decoder. Each rounded rectangle in the figure represents one stage; the width and height of the feature map are kept unchanged within a stage. In the encoder, every stage except the first performs max pooling to halve the width and height of the feature map. Similarly, in each stage of the decoder, upsampling is performed before the convolution module. Each stage in the figure is annotated with the shape of the feature map it outputs.
The segmentation probability map output by the first decoder is p ∈ [0, 1]^{H×W}, where p_i denotes the probability at the i-th pixel, H the height of the image, and W its width. The signed distance map output by the second decoder is d ∈ [-1, 1]^{H×W}, where d_i denotes the signed distance at the i-th pixel.
S2-3: calculating the deviation between the predicted segmentation probability map and the ground truth to obtain a segmentation loss function; meanwhile, calculating the deviation between the predicted signed distance map and the ground truth signed distance map to obtain a signed distance map regression loss function;
the segmentation loss function is the sum of a cross entropy loss function and a dice loss function;
the formula is as follows:

L_seg = CELoss(p, y) + DiceLoss(p, y)    (1)

in formula (1), L_seg is the segmentation loss function, CELoss(p, y) the cross entropy loss function, and DiceLoss(p, y) the dice loss function; N denotes the total number of pixels, i indexes the i-th pixel, p is the probability map, and y is the one-hot label;
the cross entropy loss function is:

CELoss(p, y) = −(1/N) Σ_{i=1}^{N} Σ_{c=1}^{C} y_{i,c} · log(p_{i,c})    (2)

in formula (2), N denotes the total number of pixels, i indexes the i-th pixel, C denotes the number of segmentation categories, p is the probability map, and y is the one-hot label;
the dice loss function is:

DiceLoss(p, y) = 1 − (1/C) Σ_{c=1}^{C} [ 2 Σ_{i=1}^{N} p_{i,c} · y_{i,c} / (Σ_{i=1}^{N} p_{i,c} + Σ_{i=1}^{N} y_{i,c}) ]    (3)

in formula (3), p is the probability map, y the one-hot label, C the number of categories, N the total number of pixels, and i indexes the i-th pixel.
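Formulas (1)–(3) can be reproduced with a short NumPy sketch (our own illustrative implementation with hypothetical function names; a real training loop would use framework-native loss functions):

```python
import numpy as np

def ce_loss(p, y, eps=1e-8):
    # p, y: (N, C) probability map and one-hot labels, flattened over pixels;
    # formula (2): mean over pixels of the negative log-probability of the true class
    return -np.mean(np.sum(y * np.log(p + eps), axis=1))

def dice_loss(p, y, eps=1e-8):
    # formula (3): dice loss averaged over the C classes
    inter = np.sum(p * y, axis=0)
    denom = np.sum(p, axis=0) + np.sum(y, axis=0)
    return 1.0 - np.mean(2.0 * inter / (denom + eps))

def seg_loss(p, y):
    # formula (1): sum of the cross entropy and dice losses
    return ce_loss(p, y) + dice_loss(p, y)

# tiny example: 4 pixels, 2 classes (background / liver tumor)
p = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]])
y = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])
print(round(seg_loss(p, y), 4))
```

The small `eps` terms guard against log(0) and division by zero; they do not appear in formulas (2)–(3) themselves.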
The signed distance map regression loss adopts a mean square error loss function (i.e., an L2 loss), with the formula:

L_sdm = (1/N) Σ_{i=1}^{N} (d_i − d̂_i)²    (4)

In formula (4), L_sdm denotes the signed distance regression loss function; N denotes the total number of pixels, i indexes the i-th pixel, d̂_i denotes the ground truth signed distance, and d_i the predicted signed distance; the signed distance is negative inside the liver tumor and positive outside it.
A negative sign indicates that a pixel lies inside the object and a positive sign that it lies outside. When the sign of the predicted signed distance map is opposite to that of the corresponding ground truth, the resulting mean square error is large; the signed distance map regression task therefore penalizes false segmentation outside the tumor and missed segmentation inside the tumor, assisting the segmentation task.
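As a toy numeric check (our own illustration, not part of the patent), the formula-(4) loss punishes a sign flip far more than a small magnitude error:

```python
import numpy as np

def sdm_mse(d_pred, d_true):
    # formula (4): mean squared error between predicted and ground-truth SDMs
    return np.mean((d_pred - d_true) ** 2)

d_true = np.array([-0.5])      # ground truth: pixel inside the tumor
right_sign = np.array([-0.3])  # inside, magnitude slightly off
wrong_sign = np.array([0.5])   # predicted outside: sign flipped

print(sdm_mse(right_sign, d_true))  # ≈ 0.04
print(sdm_mse(wrong_sign, d_true))  # = 1.0, a 25x larger penalty
```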
By jointly learning the pixel-wise classification (segmentation) task and the signed distance map regression task, the model of the invention makes the two tasks complement each other, improving the ability of the encoder to extract more essential image features and strengthening its representation capacity. However, there is otherwise no connection between the two decoders; to let them learn from and reinforce each other, a loss function is introduced between the two branches that weights the difference between the segmentation probability map p and the signed distance map d output by the model.
The distance transform module uses the signed distance map (SDM) common in conventional image processing: it accepts a binary image and outputs, for each pixel, the distance to the object boundary in the image, with the sign indicating which side of the boundary the pixel is on; the sign inside the object is conventionally negative and the sign outside positive. For example, in the signed distance map of a blood vessel, the sign inside the lumen is negative and the distance is largest at the vessel center, so the vessel skeleton can be extracted from the map. In the signed distance map of a tumor, the sign inside the tumor is negative and the sign outside positive, and the size and boundary of the tumor can be read off the map. The signed distance map therefore encodes a large amount of shape information.
The ground truth of the signed distance map can be obtained from the segmentation ground truth. Given a binary image G ∈ R^{H×W}, the signed distance map is computed from it by formula (5), where ∂G denotes the boundary of the target object — in the present invention, the liver tumor boundary — G_in the region inside the boundary (the liver tumor), and G_out the region outside the boundary (the background); ∂G is the set of all boundary points, and inf(·) denotes the infimum.
The formula is as follows:

SDM(u) = −inf_{v∈∂G} ‖u − v‖₂,  if u ∈ G_in
SDM(u) = 0,                      if u ∈ ∂G
SDM(u) = +inf_{v∈∂G} ‖u − v‖₂,  if u ∈ G_out    (5)

In formula (5), SDM(u) denotes the transformed initial signed distance map; inf(·) denotes the infimum; u denotes a pixel; v denotes a pixel on the liver tumor boundary; u ∈ G_in means the pixel lies inside the liver tumor boundary and u ∈ G_out that it lies outside; ∂G denotes the liver tumor boundary; G_in denotes the region inside the boundary and G_out the region outside it.
When the pixel u lies inside the tumor, i.e. u ∈ G_in, its distance to every point on the boundary is computed (the Euclidean distance is used here), the minimum of these distances is taken, and the sign of points inside the boundary is uniformly negative. When the pixel u lies on the object boundary, the distance is 0. Likewise, when the pixel u lies in the background region, its distances to the boundary are computed, the minimum is taken, and the sign is uniformly positive.
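Formula (5) can be computed directly for small images with a brute-force NumPy sketch (our own illustration; the boundary convention — foreground pixels with at least one background 4-neighbor — is our assumption, and in practice a fast distance transform such as scipy.ndimage.distance_transform_edt would be used):

```python
import numpy as np

def signed_distance_map(mask):
    """Brute-force SDM per formula (5). mask: binary array, 1 = tumor.
    Boundary (assumed convention): foreground pixels with at least one
    4-neighbor in the background."""
    h, w = mask.shape
    padded = np.pad(mask, 1, constant_values=0)
    boundary = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] == 1:
                neigh = [padded[i, j + 1], padded[i + 2, j + 1],
                         padded[i + 1, j], padded[i + 1, j + 2]]
                if min(neigh) == 0:
                    boundary.append((i, j))
    bset = set(boundary)
    boundary = np.array(boundary)
    sdm = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            if (i, j) in bset:
                continue  # distance 0 on the boundary
            # minimum Euclidean distance to any boundary pixel
            d = np.min(np.sqrt(((boundary - (i, j)) ** 2).sum(axis=1)))
            sdm[i, j] = -d if mask[i, j] == 1 else d  # negative inside
    return sdm

mask = np.zeros((7, 7), dtype=int)
mask[2:5, 2:5] = 1  # a 3x3 "tumor"
sdm = signed_distance_map(mask)
print(sdm[3, 3])  # center of the tumor: -1.0
print(sdm[0, 0])  # far corner: ≈ 2.828 (positive, outside)
```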
The signed distance map obtained by formula (5) must be further normalized to [-1, 1] to match the model output passed through the tanh activation. Considering that the resolution of medical images is typically 512 × 512 while a tumor usually occupies only a small part of the image, a truncated (clipped) normalization is used here to blunt the effect of large extreme values.
Specifically, for u ∈ G_in, after the minimum Euclidean distance from every point to the boundary has been computed, the 95th percentile of these distances is taken as the upper bound, and the distances are normalized by formula (7) with max(x) replaced by this 95th percentile. As in formula (5), the normalized distance is then given a negative sign, because these points lie inside the tumor.
The normalization processing method adopts a truncated normalization mode, and the specific formula is as follows:
Figure BDA0004134246460000092
in the formula (6), the amino acid sequence of the compound,
Figure BDA0004134246460000093
group trunk representing a symbol distance graph; normize represents a normalization function; inf (·) represents the take-down function; u represents a pixel point; v represents a pixel point located on the boundary of the liver tumor; u epsilon G in Representing the area of the pixel points inside the liver tumor boundary; u epsilon G out Representing an area representing pixels outside the liver tumor boundary; />
Figure BDA0004134246460000094
Representing the boundary of a liver tumor; g in Is shown inThe area inside the boundary of liver tumor, G out Representing the area outside the boundary of a liver tumor.
x' = (x − min(x)) / (max(x) − min(x))    (7)

In formula (7), x' denotes the normalized output; x denotes the normalization input; max(x) denotes the maximum of x (replaced by a percentile under truncated normalization); min(x) denotes the minimum of x.
Likewise, the distances for u ∈ G_out can be computed; since the tumor occupies only a small part of the image, distances from background pixels are larger, so the 90th percentile is taken as the upper bound there. After normalization, the sign is positive, because these pixels lie outside the tumor. Finally, the results for u ∈ G_in and u ∈ G_out are combined and the distance of points on the object boundary is set to 0, yielding the final normalized signed distance map d̂ shown in formula (6). The loss is then calculated between this ground truth signed distance map and the signed distance map output by the model.
A visualization of the signed distance map is shown in fig. 4: the first column shows the original liver tumor MRI images, the second column the ground truth, and the third column the signed distance maps calculated from the ground truth.
S2-4: adding the segmentation loss function and the signed distance map regression loss function to obtain a total loss function; training the deep learning model with the total loss function to obtain the liver tumor segmentation model.
The formula of the total loss function is as follows:
L = L_seg + λ·L_sdm    (8)

In formula (8), L denotes the total loss function, L_seg the segmentation loss function, and L_sdm the signed distance map regression loss function; λ denotes the weight of the signed distance map regression loss and is set to 1.
Application case 1
Data from a tertiary Grade-A hospital were used: MRI T2-phase images of liver tumor patients, 126 cases in total. The data were split into 83 training cases and 43 test cases, a ratio of about 2:1. The number of slices per MRI image varies, but every slice has a resolution of 512 × 512 and the MRI layer thickness is 6 mm. The MRI scanners include Siemens, GE, and Philips machines, all with approximately the same acquisition settings, and the data are in the DICOM format common in medical imaging.
In this dataset, benign tumors such as hepatic cysts and hepatic hemangiomas were not annotated; hepatocellular carcinoma, cholangiocarcinoma, and metastatic tumors were annotated as the same class without further distinction. All MRI images were annotated by two or more radiologists at the tertiary Grade-A hospital and reviewed by senior physicians, and the annotations were confirmed by pathological diagnosis. Only liver tumors were labeled; comorbidities such as hepatitis, fatty liver, liver cirrhosis, cholecystitis, and portal hypertension were left unannotated.
A 2.5D model is used: for each slice, the slice above and the slice below are also taken, giving 3 slices in total, so the three-dimensional MRI data are processed into 3-slice stacks. The DICOM images of the liver tumor dataset obtained from the hospital are converted to numpy .npy files, preserving the image data as arrays and discarding the header data. Clipped normalization is applied: the 95th percentile of the MRI image is chosen as the upper bound and the 5th percentile as the lower bound, intensities are truncated to this range and finally normalized to [0, 1]. All images are center-cropped to 384 × 448. The images then undergo data augmentation: first affine transformations, including translation, scaling, and rotation; then pixel-level transformations, including brightness and contrast adjustment.
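The intensity normalization and center-crop steps can be sketched as follows (illustrative NumPy only, on fake data; the function names are our own and the affine and pixel-level augmentations are omitted):

```python
import numpy as np

def clip_normalize(img, lo_pct=5, hi_pct=95):
    """Truncate intensities to the [5th, 95th] percentile window,
    then rescale to [0, 1]."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    img = np.clip(img, lo, hi)
    return (img - lo) / (hi - lo + 1e-8)

def center_crop(img, out_h=384, out_w=448):
    """Crop the central out_h x out_w window from the last two axes."""
    h, w = img.shape[-2:]
    top, left = (h - out_h) // 2, (w - out_w) // 2
    return img[..., top:top + out_h, left:left + out_w]

# a fake 2.5D input: 3 adjacent 512x512 slices stacked as channels
volume = np.random.rand(3, 512, 512) * 2000.0
sample = center_crop(clip_normalize(volume))
print(sample.shape)  # (3, 384, 448)
```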
The data are input into the model, and the two branches respectively output the segmentation probability map and the signed distance map. The cross entropy loss and the dice loss are calculated between the segmentation probability map and the segmentation ground truth, and the two are added to obtain the segmentation loss L_seg. The segmentation ground truth is also passed through the distance transform module (Distance Transform in fig. 2) to obtain the ground truth signed distance map, from which the regression loss L_sdm is calculated against the signed distance map predicted by the model. The losses are summed to obtain the final loss, and the model parameters are updated by backpropagation. During the update, each decoder is guided by its own supervisory signal while the encoder is guided by both, which strengthens the encoder's ability to extract image features and promotes better segmentation performance.
Besides the algorithm of the invention, this application example is compared against other mainstream segmentation methods: UNet, UNet++, AttentionUNet, Deeplabv3+, and PSPNet.
The segmentation algorithm of the invention and the comparison methods were each run for 100 epochs; experimental observation confirmed that all models converged. All models used an SGD optimizer with an initial learning rate of 0.1 and momentum of 0.9. The learning rate was adjusted with ReduceLROnPlateau: whenever the loss failed to decrease for 5 epochs, the rate was reduced to 0.5 times its previous value. The number of channels becomes 64 after the input image enters the model and then doubles stage by stage: 64, 128, 256, 512, 1024. The DataLoader's batch_size is set to 32, num_workers to 4, and pin_memory to True. The random seed is set to 1234. The two branches have equal weight, with λ set to 1. The model parameters of the comparison methods were set essentially identically.
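The optimizer and scheduler settings above can be reproduced in PyTorch as follows (a configuration sketch with a stand-in one-layer model, not the full training loop):

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import ReduceLROnPlateau

# tiny stand-in model; the real network is the dual-decoder UNet of fig. 2
model = torch.nn.Conv2d(3, 2, kernel_size=3, padding=1)

optimizer = SGD(model.parameters(), lr=0.1, momentum=0.9)
# halve the LR whenever the monitored loss has not improved for 5 epochs
scheduler = ReduceLROnPlateau(optimizer, mode="min", factor=0.5, patience=5)

for epoch in range(8):
    epoch_loss = 1.0  # pretend the loss has plateaued
    scheduler.step(epoch_loss)

print(optimizer.param_groups[0]["lr"])  # 0.05 after the plateau triggers one halving
```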
All experiments were averaged after 5 runs and the results of the experiments are shown in table 1 and fig. 5.
Table 1 results of comparative experiments with segmentation models
Model Dice Score Jaccard HD95 ASD
UNet 0.75963 0.70852 6.93719 2.05906
UNet++ 0.75910 0.70952 7.86113 1.58129
AttentionUNet 0.75368 0.70247 7.74241 1.58942
Deeplabv3+ 0.71900 0.66280 8.50718 2.02509
PSPNet 0.73302 0.67467 7.27133 1.63418
Example 1 0.77126 0.72046 6.80759 1.78896
As the table shows, the segmentation method proposed by the invention outperforms the common mainstream methods, beating every comparison method on the Dice Score, the Jaccard score, and HD95. Compared with the best of the other methods, the proposed method improves the Dice Score by 1.2%, the Jaccard score by 1.1%, and HD95 by 0.13.
It can also be noted that when the tumor is large or has a clear demarcation from the surrounding tissue (e.g., the gray value of the tumor area is significantly different from that of the surrounding area), these several segmentation methods all perform better, and when the tumor is small or the demarcation of the tumor tissue is not clear, these methods all perform worse, but the method proposed by the present invention is still superior to the comparative method.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (12)

1. A liver tumor segmentation method based on signed distance, characterized by comprising the following steps:
s1: acquiring liver nuclear magnetic resonance image data, and manually segmenting and labeling the liver tumors in the original images to obtain a training set;
s2: inputting the training set into a deep learning model, performing image segmentation prediction and signed distance prediction on the training set respectively, and training the deep learning model according to the loss functions calculated between the prediction results and the ground truth, to obtain a liver tumor segmentation model;
s3: inputting the liver nuclear magnetic resonance image data to be segmented into the liver tumor segmentation model to obtain a liver tumor segmentation result.
2. The liver tumor segmentation method based on signed distance according to claim 1, wherein the deep learning model mainly comprises:
an encoder for preprocessing the liver nuclear magnetic resonance images in the training set and extracting a feature map;
a first decoder for receiving the feature map transmitted from the encoder and decoding it to output a predicted segmentation probability map;
a second decoder for receiving the feature map transmitted from the encoder and decoding it to output a predicted signed distance map;
and a distance transform module for transforming the binary ground truth image into the ground truth of the signed distance map.
3. The liver tumor segmentation method based on signed distance according to claim 2, wherein the step S2 comprises the following steps:
s2-1: after the training set is input into the deep learning model, the encoder preprocesses the liver nuclear magnetic resonance images to obtain a feature map;
s2-2: the feature map enters the first decoder and the second decoder respectively; the first decoder outputs a predicted segmentation probability map, and the second decoder outputs a predicted signed distance map;
s2-3: calculating the deviation between the predicted segmentation probability map and the ground truth to obtain a segmentation loss function; meanwhile, calculating the deviation between the predicted signed distance map and the ground truth signed distance map to obtain a signed distance map regression loss function;
s2-4: adding the segmentation loss function and the signed distance map regression loss function to obtain a total loss function; training the deep learning model with the total loss function to obtain the liver tumor segmentation model.
4. The liver tumor segmentation method based on signed distance according to claim 3, wherein in step S2-2, the activation function adopted by the first decoder is a softmax function, and the output segmentation probability map is p ∈ [0, 1]^{H×W}, where p_i denotes the probability at the i-th pixel, H the height of the image, and W its width.
5. The liver tumor segmentation method based on signed distance according to claim 3, wherein in step S2-3, the segmentation loss function is the sum of a cross entropy loss function and a dice loss function;
the formula is as follows:

L_seg = CELoss(p, y) + DiceLoss(p, y)    (1)

in formula (1), L_seg is the segmentation loss function, CELoss(p, y) the cross entropy loss function, and DiceLoss(p, y) the dice loss function; N denotes the total number of pixels, i indexes the i-th pixel, p is the probability map, and y is the one-hot label;
the cross entropy loss function is:

CELoss(p, y) = −(1/N) Σ_{i=1}^{N} Σ_{c=1}^{C} y_{i,c} · log(p_{i,c})    (2)

in formula (2), N denotes the total number of pixels, i indexes the i-th pixel, C denotes the number of segmentation categories, p is the probability map, and y is the one-hot label;
the dice loss function is:

DiceLoss(p, y) = 1 − (1/C) Σ_{c=1}^{C} [ 2 Σ_{i=1}^{N} p_{i,c} · y_{i,c} / (Σ_{i=1}^{N} p_{i,c} + Σ_{i=1}^{N} y_{i,c}) ]    (3)

in formula (3), p is the probability map, y the one-hot label, C the number of categories, N the total number of pixels, and i indexes the i-th pixel.
6. The liver tumor segmentation method based on signed distance according to claim 3, wherein in step S2-2, the activation function adopted by the second decoder is a tanh function, and the output signed distance map is d ∈ [-1, 1]^{H×W}, where d_i denotes the signed distance at the i-th pixel, H the height of the image, and W its width.
7. The liver tumor segmentation method based on signed distance according to claim 3, wherein in step S2-3, the signed distance map regression loss function adopts a mean square error loss function with the following formula:

L_sdm = (1/N) Σ_{i=1}^{N} (d_i − d̂_i)²    (4)

in formula (4), L_sdm denotes the signed distance regression loss function; N denotes the total number of pixels, i indexes the i-th pixel, d̂_i denotes the ground truth signed distance, and d_i the predicted signed distance; the signed distance is negative inside the liver tumor and positive outside it.
8. The liver tumor segmentation method based on signed distance according to claim 1, wherein in step S2-3, the distance transform module transforms the binary ground truth image into the ground truth of the signed distance map by the following steps: firstly, the binary ground truth image is input into the distance transform module, and the initial signed distance map is computed by formula (5); then, the initial signed distance map is normalized to obtain the ground truth of the signed distance map;
the formula is as follows:

SDM(u) = −inf_{v∈∂G} ‖u − v‖₂,  if u ∈ G_in
SDM(u) = 0,                      if u ∈ ∂G
SDM(u) = +inf_{v∈∂G} ‖u − v‖₂,  if u ∈ G_out    (5)

in formula (5), SDM(u) denotes the transformed initial signed distance map; inf(·) denotes the infimum; u denotes a pixel; v denotes a pixel on the liver tumor boundary; u ∈ G_in means the pixel lies inside the liver tumor boundary and u ∈ G_out that it lies outside; ∂G denotes the liver tumor boundary; G_in denotes the region inside the boundary and G_out the region outside it.
9. The liver tumor segmentation method based on signed distance according to claim 8, wherein the normalization adopts a truncated normalization, with the specific formula:

d̂(u) = −Normalize(inf_{v∈∂G} ‖u − v‖₂),  if u ∈ G_in
d̂(u) = 0,                                 if u ∈ ∂G
d̂(u) = +Normalize(inf_{v∈∂G} ‖u − v‖₂),  if u ∈ G_out    (6)

in formula (6), d̂ denotes the ground truth of the signed distance map; Normalize denotes the normalization function; inf(·) denotes the infimum; u denotes a pixel; v denotes a pixel on the liver tumor boundary; u ∈ G_in means the pixel lies inside the liver tumor boundary and u ∈ G_out that it lies outside; ∂G denotes the liver tumor boundary; G_in denotes the region inside the boundary and G_out the region outside it;
x' = (x − min(x)) / (max(x) − min(x))    (7)

in formula (7), x' denotes the normalized output; x denotes the normalization input; max(x) denotes the maximum of x; min(x) denotes the minimum of x.
10. The liver tumor segmentation method based on signed distance according to claim 8, wherein in step S2-4, the formula of the total loss function is as follows:

L = L_seg + λ·L_sdm    (8)

in formula (8), L denotes the total loss function, L_seg the segmentation loss function, L_sdm the signed distance map regression loss function, and λ the weight of the latter.
11. The liver tumor segmentation method based on signed distance according to claim 1, wherein in S1 the data of the training set are:

D = {(x_i, y_i) | i = 1, …, N}

where x_i ∈ R^{H×W×C} denotes an image and R denotes the real numbers; y_i ∈ {0, 1}^{H×W} denotes the manual pixel-wise liver tumor segmentation label; H denotes the height of the image, W its width, and C its number of channels; D denotes the training set; and i = 1, …, N indexes the i-th sample.
12. A liver tumor segmentation device based on signed distance, characterized by comprising:
an encoder for preprocessing the liver nuclear magnetic resonance images in the training set and extracting a feature map;
a first decoder for receiving the feature map transmitted from the encoder and decoding it to output a predicted segmentation probability map;
a second decoder for receiving the feature map transmitted from the encoder and decoding it to output a predicted signed distance map;
and a distance transform module for transforming the binary ground truth image into the ground truth of the signed distance map.
CN202310269462.2A 2023-03-13 2023-03-13 Liver tumor segmentation method and device based on symbol distance Pending CN116309621A (en)

Publications (1)

Publication Number Publication Date
CN116309621A true CN116309621A (en) 2023-06-23


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190030371A1 (en) * 2017-07-28 2019-01-31 Elekta, Inc. Automated image segmentation using dcnn such as for radiation therapy
CN111709952A (en) * 2020-05-21 2020-09-25 无锡太湖学院 MRI brain tumor automatic segmentation method based on edge feature optimization and double-flow decoding convolutional neural network
CN112802040A (en) * 2021-01-28 2021-05-14 上海藤核智能科技有限公司 X-ray pneumothorax segmentation and evaluation method based on edge perception
CN113657393A (en) * 2021-08-16 2021-11-16 山东建筑大学 Shape prior missing image semi-supervised segmentation method and system
CN114359558A (en) * 2021-12-14 2022-04-15 重庆大学 Roof image segmentation method based on hybrid framework
CN114612479A (en) * 2022-02-09 2022-06-10 苏州大学 Medical image segmentation method based on global and local feature reconstruction network
CN114862800A (en) * 2022-05-10 2022-08-05 浙江大学 Semi-supervised medical image segmentation method based on geometric consistency constraint
CN115082493A (en) * 2022-06-02 2022-09-20 陕西科技大学 3D (three-dimensional) atrial image segmentation method and system based on shape-guided dual consistency
CN115331009A (en) * 2022-08-17 2022-11-11 西安理工大学 Medical image segmentation method based on multitask MeanTeacher

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
He Shanshan (何姗姗): "Research on Liver and Tumor CT Image Segmentation Methods Based on Deep Active Contour Models", China Master's Theses Full-text Database, Medicine & Health Sciences, 15 January 2023 (2023-01-15), pages 21-22 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination