CN113344951A - Liver segment segmentation method based on boundary perception and dual attention guidance - Google Patents
Liver segment segmentation method based on boundary perception and dual attention guidance
- Publication number: CN113344951A
- Application number: CN202110556924.XA
- Authority
- CN
- China
- Prior art keywords
- attention
- feature map
- level
- image
- boundary
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/12—Edge-based segmentation
- G06F18/253—Fusion techniques of extracted features
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G06T2207/10088—Magnetic resonance imaging [MRI]
- G06T2207/30056—Liver; Hepatic
Abstract
The invention discloses a liver segment segmentation method based on boundary perception and dual attention guidance, realized as a boundary-aware, dual-attention-guided symmetric encoder-decoder network for accurately localizing liver tumors. The method strengthens the learning of boundary features in medical images and improves liver segment segmentation accuracy by precisely locating edge positions. The proposed dual attention mechanism consists of a spatial attention module and a semantic attention module arranged in parallel. Low-level feature maps, which are rich in boundary position information, are weighted along both the spatial and channel dimensions and concatenated with the corresponding high-level feature maps in the decoding path. This makes boundary feature expression clearer and more prominent, facilitates the localization of liver segment boundaries, improves segmentation accuracy, and effectively addresses the liver segment segmentation problem.
Description
Technical Field
The invention belongs to the field of image segmentation in computer vision, and particularly relates to a liver segment segmentation method based on boundary perception and dual attention guidance.
Background
According to the Couinaud classification, which is widely used in liver anatomy, the liver lobes are divided into eight segments by planes defined by the branching of the main portal vein. Dividing the liver into individual segments is crucially important in surgical treatment, because the part involved by a tumor can be excised separately without damaging the rest, so that as much liver function as possible is preserved. Radiographic images (such as computed tomography (CT) or magnetic resonance imaging (MRI)) acquired with contrast agents can clearly show anatomical structures such as the hepatic veins, the portal vein, and their vascular branches, which are particularly important for delineating liver segments.
Existing liver segment segmentation works adopt a similar pipeline. First, the blood vessels in the liver are segmented using various conventional image processing methods. Then, taking the vessel segmentation result as a reference, the liver segments are delineated with a nearest-neighbor approximation algorithm. Although conventional segmentation methods can address liver segmentation, they still have shortcomings. In particular, they handle fuzzy edges poorly and cannot distinguish which vessels should serve as references for delineating liver segments. Furthermore, these methods cannot adapt to the varied characteristics of the data.
In recent years, the application of deep-learning-based methods to biomedical image analysis and diagnosis has attracted increasing attention from computer vision researchers. Semantic segmentation, a classical research branch of computer vision, has been applied in many medical scenarios, including organ segmentation, vessel segmentation, lesion segmentation, 3D reconstruction, and visual enhancement. Applying deep networks to the liver segmentation task can effectively address problems such as insufficient robustness, inability to adapt to heterogeneous data characteristics, and low efficiency.
Disclosure of Invention
The invention aims to provide a novel deep learning method for liver segment segmentation, namely a boundary-aware dual-attention-guided U-Net. We add a dual attention module at each level of the decoding path of U-Net that considers boundary characteristics from both spatial and semantic perspectives. The dual attention module concatenates the high-level and low-level feature maps from the corresponding decoding and encoding paths of U-Net and computes attention to weight the feature maps. This concatenation lets the two maps supplement each other's semantic information, highlights the liver region to help localize it, and enhances the representation of vessel features, which facilitates determining the boundary positions of liver segments during image pixel recovery.
The dual attention module consists of a spatial attention module and a semantic attention module. Both modules follow the same principle: the fused feature map is used as a gating signal to reveal boundary attributes related to semantic and spatial information, thereby guiding where the model attends. In the spatial attention module, the fused feature map is compressed into a single-channel map in which each pixel represents a weight that is multiplied with the original feature map. The liver and boundary position information gives pixels in the liver region and along liver segment boundaries greater weight, increasing the contrast between foreground and background. In the semantic attention module, the fused feature map is compressed into a one-dimensional column vector representing the weight of each channel. Multiplying this one-dimensional weight vector with the original feature map enhances informative channels and suppresses uninformative ones, effectively reweighting the different filters of the preceding convolution kernels.
In order to achieve this purpose, the invention adopts the following technical scheme: a boundary-aware dual-attention-guided liver segment segmentation method. First, abdominal MRI case data of the portal venous phase are collected, and each image is annotated with liver segment labels using professional software. Second, the raw data are preprocessed to unify the data format: image sizes are normalized with bilinear interpolation, and each pixel value is normalized to the range [0, 1]. The preprocessed data and liver labels are converted into tensors and fed into the feature extraction module of a convolutional neural network (CNN), i.e., the encoding path of U-Net, where successive convolution and downsampling operations reduce dimensionality and yield deep feature representations of the image. Then, during image resolution recovery (the decoding path of U-Net), pixels are restored step by step; at each resolution level, the low-level feature map from the encoding path is fed into the boundary-aware dual attention module for weighting, which highlights feature expression in the key boundary regions and suppresses interference from irrelevant background regions. Next, the low-level feature map output by the dual attention module is concatenated with the corresponding high-level feature map of the same resolution, improving boundary localization accuracy during resolution recovery. Finally, after four levels of resolution restoration and feature map weighting, a segmentation mask with the same resolution as the input image is obtained, and the accuracy of the segmentation result is markedly improved over previous methods.
A boundary-aware dual-attention-guided liver segment segmentation method, comprising the following steps:
Step 1, acquiring an abdominal MRI enhanced portal-phase scanning sequence.
Step 2, preprocessing the raw MRI data, including size and pixel-value normalization.
Step 3, constructing a deep convolutional neural network (U-Net) with symmetric encoding and decoding paths and a boundary-aware dual attention module.
Step 3.1, encoding the image and reducing its dimensionality four times using the feature extraction module of the convolutional neural network (CNN), i.e., the encoding path of U-Net, to obtain deep-level feature representations of the image.
Step 3.2, designing a parallel boundary-aware dual attention module based on a gated attention mechanism; the module weights the feature map along the spatial dimension and the semantic (channel) dimension.
Step 3.3, adding the designed dual attention module to each of the four resolution levels of the image resolution recovery process (i.e., the decoding path of U-Net) to apply attention weighting to the low-level feature maps.
Step 3.4, concatenating the low-level feature map output by the dual attention module with the corresponding high-level feature map of the same resolution, performing convolution and upsampling operations, and feeding the result to the next resolution level; after all four resolution levels, outputting the segmentation mask.
Step 4, inputting unlabeled abdominal MRI into the trained segmentation model and outputting the liver segment segmentation result.
Compared with the prior art, the invention has the following obvious advantages:
Grounded in clinical practice, the method uses portal-phase sequences of clinical abdominal magnetic resonance images as training samples, proposes a boundary-aware dual-attention-guided liver segment segmentation method, and constructs a deep network for training. Compared with previously proposed conventional methods, it offers higher segmentation efficiency, higher accuracy, and stronger robustness, and is easy to deploy in real clinical scenarios. It can help physicians efficiently and accurately locate the position of a liver tumor before surgery, so that the surgeon can excise the tumor-bearing part alone without damaging the rest, preserving as much liver function as possible.
Drawings
FIG. 1 is a diagram of the overall model architecture of the present invention;
FIG. 2 is a flow chart of a method of the present invention;
FIG. 3 is a block diagram of a spatial attention module in a boundary aware dual attention module;
FIG. 4 is a block diagram of a semantic attention module in a boundary aware dual attention module;
FIG. 5 is a graph of MRI data labeled according to the Couinaud classification;
Detailed Description
The present invention will be described in further detail below with reference to specific embodiments and with reference to the attached drawings.
The overall structure of the invention is shown in FIG. 1 and the method flow in FIG. 2; the spatial and semantic attention modules of the proposed boundary-aware dual attention module are shown in FIG. 3 and FIG. 4, and FIG. 5 shows MRI data annotated according to the Couinaud classification. The method specifically includes the following steps:
Step 1, collecting abdominal MRI enhanced portal-phase scanning sequences of clinical cases.
The clinically collected cases cover MRI portal venous phase scanning sequences of various focal liver lesions, ensuring that the trained model segments liver segments accurately under different lesion conditions and improving the model's robustness. Experienced radiologists were invited to divide the liver in the acquired MRI data into eight segments (numbered I to VIII) along the blood vessels, following the internationally most common Couinaud classification. Segment I is the caudate lobe. Segments II and III lie lateral to the falciform ligament, with segment II above the portal vein supply and segment III below it. Segment IV lies medial to the falciform ligament and is subdivided into segments IVa (superior) and IVb (inferior). Segments V to VIII constitute the right part of the liver: segment V is located at the innermost and lowest part of the liver, segment VI lies more posterior, segment VII lies above segment VI, and segment VIII lies above segment V on the superior-medial side. The liver segment segmentation and 3D reconstruction scheme is illustrated in FIG. 1. Nine different colors are used to label the liver segments. In the data reading phase, the nine labels in the mask are assigned the gray values 1 to 9.
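The nine-label gray-value assignment described in the data-reading phase can be sketched as a simple mapping (the segment-name strings here are illustrative placeholders, since the patent only specifies nine labels mapped to gray values 1 to 9):

```python
# Map the nine Couinaud mask labels to gray values 1..9.
# Segment IV is split into IVa and IVb, giving nine labels in total.
SEGMENT_LABELS = ["I", "II", "III", "IVa", "IVb", "V", "VI", "VII", "VIII"]
LABEL_TO_GRAY = {name: i + 1 for i, name in enumerate(SEGMENT_LABELS)}
```

Background pixels would keep gray value 0 under this scheme.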
Step 2, preprocessing the raw MRI data, including size and pixel-value normalization.
The collected and annotated MRI data are preprocessed. First, the small amount of image data whose length and width differ is removed, keeping the aspect ratio of the data images at 1:1. Because the image gray values vary widely, the pixel values of the screened data are normalized to [0, 1] by min-max scaling:
x' = (x − x_min) / (x_max − x_min)
where x_min and x_max are the minimum and maximum gray values of the image.
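A minimal pure-Python sketch of the min-max pixel normalization described above (the function name is illustrative):

```python
def minmax_normalize(pixels):
    """Scale raw MRI intensities to [0, 1] via min-max normalization."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                       # constant image: map everything to 0
        return [0.0 for _ in pixels]
    return [(p - lo) / (hi - lo) for p in pixels]

slice_pixels = [12, 87, 255, 0, 140]   # toy gray values from one slice
norm = minmax_normalize(slice_pixels)
```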
Step 3, constructing a deep convolutional neural network (U-Net) with symmetric encoding and decoding paths and a boundary-aware dual attention module.
Step 3.1, encoding the image and reducing its dimensionality four times using the feature extraction module of the convolutional neural network (CNN), i.e., the encoding path of U-Net, to obtain deep-level feature representations of the image.
The invention adopts U-Net as the basic backbone network, which is characterized by symmetric encoding and decoding paths. The encoding path of U-Net serves as the image feature extractor. Higher-level feature representations of the image data are obtained through the filtering and feature extraction of multiple convolution kernels, dimensionality reduction via four downsampling operations, and the incorporation of residual modules.
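The four downsampling operations of the encoding path can be illustrated with shape arithmetic; the 256x256 input size and 64 base channels below are assumptions in the style of the original U-Net, not values stated in the patent:

```python
def encoder_shapes(h=256, w=256, base_channels=64, levels=4):
    """Track (channels, height, width) through four 2x downsamplings,
    doubling the channel count at each level as in a U-Net encoder."""
    shapes = [(base_channels, h, w)]
    c = base_channels
    for _ in range(levels):
        c, h, w = c * 2, h // 2, w // 2
        shapes.append((c, h, w))
    return shapes

shapes = encoder_shapes()
```

After four levels, a 256x256 input is reduced to a 16x16 deep feature map.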
Step 3.2, designing a parallel boundary-aware dual attention module based on a gated attention mechanism; the module weights the feature map along the spatial dimension and the semantic (channel) dimension.
The boundary-aware dual attention module based on the gated attention mechanism is formed by a spatial attention module and a semantic channel attention module in parallel, and assigns weights to the feature map along two dimensions of the image.
In the spatial attention module, the low-level feature map F_l from the encoding path and the high-level feature map F_h from the previous level are first fed into two different convolution layers W_l and W_h, producing two new feature maps W_l^T·F_l + b_l and W_h^T·F_h + b_h. The next step is to fuse these two new feature maps. The invention fuses them by concatenation along the channel dimension, generating a fusion map F_concat of low-level and high-level features. The concatenated fusion map is activated by a ReLU function and then fed into a network consisting of a convolution layer W_c, a batch normalization layer (BatchNorm), and a ReLU activation, which normalizes the channel size of the fusion map to C. At this point the low-level and high-level feature maps have been jointly fused while preserving spatial information and reducing dimensionality. The fused feature map is then fed into a convolution layer W_f that compresses it into a single-channel feature map, and a Sigmoid function normalizes it to [0, 1], generating the spatial attention map A_s. Each pixel of this attention map reflects a weight indicating how salient the corresponding position of the original feature map is from the spatial view. Finally, the spatial attention map at the same resolution level in the decoding path and the low-level feature map F_l undergo a pixel-wise product to obtain the final weighted feature map, i.e., the output of the spatial attention module. The whole process is formulated as follows:
F_concat = Concat(W_l^T·F_l + b_l ; W_h^T·F_h + b_h)
A_s = σ2(W_f^T·σ1(W_c^T·F_concat + b_c) + b_f)
F_out = A_s ⊙ F_l
where b_l, b_h, b_c, and b_f are the bias terms of the corresponding convolution layers, and σ1 and σ2 denote the ReLU and Sigmoid activation functions, respectively.
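A toy pure-Python sketch of the spatial gating formulated above: the convolutional channel mixing (W_l, W_h, W_c, W_f) is collapsed into a precomputed single-channel pre-activation map, a simplification, so only the Sigmoid weighting and the pixel-wise product with F_l are shown:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def spatial_attention(low_feat, gate_map):
    """low_feat: C x P feature map (C channels, P flattened pixels);
    gate_map: P pre-activation gating values (single-channel fused map).
    Returns low_feat reweighted pixel-wise by sigmoid(gate)."""
    weights = [sigmoid(g) for g in gate_map]   # attention map, values in (0, 1)
    return [[f * w for f, w in zip(channel, weights)] for channel in low_feat]

low  = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]   # 2 channels, 3 pixels
gate = [10.0, 0.0, -10.0]                   # salient / neutral / suppressed pixel
out = spatial_attention(low, gate)
```

Pixels with large positive gate values keep their magnitude; strongly negative gates suppress the background.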
The semantic attention module employs the same gating approach as the spatial attention module. It takes the same feature maps F_l and F_h as input and shares the convolution layer parameters W_l and W_h. Unlike the spatial attention module, the low-level and high-level feature maps in the semantic attention module are fused by element-wise summation to generate the fusion map. The whole process is formulated as follows:
F_sum = (W_l^T·F_l + b_l) + (W_h^T·F_h + b_h)
A_c = σ2(W_f^T·σ1(W_c^T·F_sum + b_c) + b_f)
F_out = A_c ⊙ F_l
Finally, the two attention-weighted maps output by the spatial attention module and the semantic attention module are fused by element-wise addition to obtain the final attention-weighted feature map.
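The semantic (channel) branch can be sketched analogously: the element-summed fusion map is reduced to one value per channel, squashed with a Sigmoid, and used to scale the channels of the original feature map. The global average pooling used for the reduction is an assumption, since the patent only states that the fused map is compressed into a one-dimensional column vector:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feat, fused):
    """feat, fused: C x P maps. Pool `fused` per channel (mean), gate with a
    Sigmoid, and scale each channel of `feat` by its gate value."""
    gates = [sigmoid(sum(ch) / len(ch)) for ch in fused]   # one weight per channel
    return [[v * g for v in ch] for ch, g in zip(feat, gates)]

feat  = [[1.0, 1.0], [1.0, 1.0]]
fused = [[8.0, 8.0], [-8.0, -8.0]]   # channel 0 informative, channel 1 suppressed
out = channel_attention(feat, fused)
```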
Step 3.3, adding the designed dual attention module to each of the four resolution levels of the image resolution recovery process (i.e., the decoding path of U-Net) to apply attention weighting to the low-level feature maps.
The U-Net backbone adopted by the invention performs four upsampling operations, and the high-level feature map in each upsampling operation is concatenated and fused with the low-level feature map of the same resolution level from the encoding path. The proposed boundary-aware dual attention mechanism is applied at each of these feature map concatenations: the weighted low-level feature map is fused with the high-level feature map to achieve more accurate boundary localization and to avoid losing important information during image resolution recovery.
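One decoder level can be sketched as 2x upsampling of the high-level channels followed by channel-wise concatenation with the low-level channels (nearest-neighbour interpolation is an assumption for the sketch; the patent does not fix the upsampling operator):

```python
def upsample2x(img):
    """Nearest-neighbour 2x upsampling of a 2D map (list of rows)."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]   # duplicate each column
        out.append(wide)
        out.append(list(wide))                    # duplicate each row
    return out

def splice(high_channels, low_channels):
    """Concatenate upsampled high-level channels with low-level channels."""
    return [upsample2x(ch) for ch in high_channels] + low_channels

high = [[[1, 2], [3, 4]]]                 # 1 channel, 2x2 high-level map
low  = [[[0] * 4 for _ in range(4)]]      # 1 channel, 4x4 low-level map
merged = splice(high, low)
```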
Step 3.4, concatenating the low-level feature map output by the dual attention module with the corresponding high-level feature map of the same resolution, performing convolution and upsampling operations, and feeding the result to the next resolution level; after all four resolution levels, outputting the segmentation mask.
At each level, the fused feature map obtained by merging the high-level and low-level features undergoes convolution and upsampling. The resulting feature map serves as the high-level feature map at the next (higher) resolution level and is again concatenated with the low-level feature map from the corresponding encoding path. This iterates until, after four resolution levels, the feature map is restored to the same resolution as the input image and serves as the final output of the network, i.e., the image segmentation mask. The whole network uses a mixed loss function composed of a Dice loss and a cross-entropy loss, as follows:
L_Mixed = λ1·L_CE + λ2·L_Dice
L_CE = −(1/N) Σ_i y_i log(ŷ_i)
L_Dice = 1 − 2|A ∩ B| / (|A| + |B|)
where λ1 and λ2 are hyperparameters, both set to 1; N is the total number of pixels; y and ŷ denote the one-hot encodings of the ground-truth and predicted values, respectively; and A and B denote the ground-truth and predicted regions, respectively.
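A pure-Python toy version of the mixed loss on a binary example, with λ1 = λ2 = 1 as in the patent (the eps smoothing term is an added numerical-stability assumption):

```python
import math

def cross_entropy(y_true, y_pred, eps=1e-7):
    """Pixel-averaged binary cross-entropy; y_true in {0,1}, y_pred in [0,1]."""
    n = len(y_true)
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for t, p in zip(y_true, y_pred)) / n

def dice_loss(y_true, y_pred, eps=1e-7):
    """1 - 2|A ∩ B| / (|A| + |B|), computed softly on predicted probabilities."""
    inter = sum(t * p for t, p in zip(y_true, y_pred))
    return 1.0 - (2.0 * inter + eps) / (sum(y_true) + sum(y_pred) + eps)

def mixed_loss(y_true, y_pred, lam1=1.0, lam2=1.0):
    return lam1 * cross_entropy(y_true, y_pred) + lam2 * dice_loss(y_true, y_pred)

perfect = mixed_loss([1, 0, 1, 0], [1.0, 0.0, 1.0, 0.0])
poor    = mixed_loss([1, 0, 1, 0], [0.1, 0.9, 0.1, 0.9])
```

A perfect prediction drives both terms toward zero, while a poor one is penalized by both the pixel-wise and the region-overlap terms.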
Step 4, inputting unlabeled abdominal MRI into the trained segmentation model and outputting the liver segment segmentation result.
During training, the mixed loss function of step 3.4 is optimized to increase the overlap between the predicted region and the ground-truth label region and to ensure pixel classification accuracy. Once the best-performing model is trained, clinical unlabeled abdominal MRI can be fed into it to accurately generate liver segment segmentation results, supporting clinical diagnosis and preoperative analysis by physicians.
Claims (4)
1. A boundary-aware dual-attention-guided liver segment segmentation method, characterized by comprising the following steps:
step 1, acquiring an MRI enhanced portal scanning sequence;
step 2, carrying out size and pixel value normalization pretreatment on data of an original MRI enhanced portal scanning sequence;
step 3, constructing a deep convolutional neural network U-Net and a boundary perception dual attention module of the symmetric coding and decoding path;
step 3.1, encoding the image and performing four dimensionality reductions using the feature extraction module of the convolutional neural network (CNN), i.e., the encoding path of U-Net, to obtain deep-level feature representations of the image;
step 3.2, designing a parallel boundary perception dual attention module based on a gated attention mechanism, and weighting the feature map from a space dimension and a semantic channel dimension;
step 3.3, adding the designed double attention module into four resolution levels of a decoding path of the U-Net in the process of recovering the image resolution, and carrying out attention weighting on the low-level feature map;
step 3.4, splicing the low-level feature map output from the dual attention module with the corresponding high-level feature map with the same resolution, performing convolution and up-sampling operations, and inputting the operation to the next resolution level until the four resolution levels completely pass through, and outputting a segmentation mask;
and 4, inputting the unlabeled abdominal MRI into the trained segmentation model, and outputting a specific segmentation result of the liver segment.
2. The boundary-aware dual-attention-directed liver segment segmentation method of claim 1, wherein: the method specifically comprises the following steps:
step 1, collecting abdominal MRI enhanced portal scan sequence of clinical cases;
experienced radiologists are invited to divide the liver in the acquired MRI data into eight segments, numbered I to VIII, along the blood vessels according to the internationally most common Couinaud classification; segment I is the caudate lobe; segments II and III lie lateral to the falciform ligament, with segment II above the portal vein supply and segment III below it; segment IV lies medial to the falciform ligament and is subdivided into segments IVa and IVb; segments V to VIII constitute the right part of the liver; segment V is located at the innermost and lowest part of the liver; segment VI lies more posterior; segment VII lies above segment VI; segment VIII lies above segment V on the superior-medial side.
3. The boundary-aware dual-attention-guided liver segment segmentation method of claim 1, characterized in that: in step 2, the acquired and annotated MRI data are preprocessed; first, image data whose length and width differ are removed, keeping the aspect ratio of the data images at 1:1; because the image gray values vary too widely, the pixel values of the screened data are normalized to [0, 1];
step 3, constructing a deep convolutional neural network U-Net and a boundary perception dual attention module of the symmetric coding and decoding path;
step 3.1, encoding the image and performing four dimensionality reductions using the feature extraction module of the convolutional neural network (CNN), i.e., the encoding path of U-Net, to obtain deep-level feature representations of the image;
a U-Net-based backbone network with symmetric encoding and decoding paths is adopted; the encoding path of U-Net is the image feature extractor; higher-level feature representations of the image data are obtained through the filtering and feature extraction of multiple convolution kernels, dimensionality reduction via four downsampling operations, and the incorporation of residual modules;
step 3.2, designing a parallel boundary perception dual attention module based on a gated attention mechanism, wherein the module weights the feature map from a space dimension and a semantic channel dimension;
the boundary-aware dual attention module based on the gated attention mechanism is formed by a spatial attention module and a semantic channel attention module in parallel, and assigns weights to the feature map along two dimensions of the image; the semantic attention module adopts the same gating method as the spatial attention module; it uses the feature maps F_l and F_h as input and shares the convolution layer parameters W_l and W_h; unlike the spatial attention module, the low-level and high-level feature maps in the semantic attention module are fused by element-wise summation to generate the fusion map; finally, the two attention maps output by the spatial and semantic attention modules are fused by element-wise addition to obtain the final attention-weighted feature map;
step 3.3, adding the designed double attention module into the image resolution recovery process, namely four resolution levels of a decoding path of the U-Net, and carrying out attention weighting on the low-layer feature diagram; the adopted U-Net basic network comprises four times of up-sampling operation, and a high-level feature map in each up-sampling operation is spliced and fused with a low-level feature map of the same resolution level on a coding path;
step 3.4, the low-level feature map output by the dual attention module is concatenated with the corresponding high-level feature map of the same resolution, subjected to convolution and upsampling operations, and passed to the next resolution level, until all four resolution levels have been traversed and a segmentation mask is output;
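One decoder step can be sketched in NumPy as follows; nearest-neighbour upsampling stands in for the learned upsampling, and the convolution that follows the concatenation is omitted:

```python
import numpy as np

def upsample_2x(x):
    """Nearest-neighbour upsampling, doubling each spatial dimension."""
    return x.repeat(2, axis=1).repeat(2, axis=2)   # (C, H, W) -> (C, 2H, 2W)

# toy feature maps: coarse high-level map and the attention-weighted
# low-level map from the skip connection at the next resolution level
high = np.random.rand(8, 16, 16)
low = np.random.rand(8, 32, 32)

up = upsample_2x(high)                      # bring high-level map to skip resolution
fused = np.concatenate([low, up], axis=0)   # channel-wise concatenation (splicing)
print(fused.shape)  # (16, 32, 32): channels stack, resolution matches the skip
```

Repeating this step four times restores the input resolution, as described in the next clause.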
in each level, after the high-level and low-level features are fused, the fused feature map is again subjected to convolution and upsampling; the resulting feature map serves as the high-level feature map at the next resolution level, namely a higher-resolution level, where it is again concatenated with the low-level feature map from the corresponding encoding path; this iteration continues until, after four resolution levels, the feature map is restored to the same resolution as the input image and is taken as the final output of the network, namely the image segmentation mask; the whole network adopts a hybrid loss function consisting of a Dice loss function and a cross-entropy loss function.
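A minimal NumPy sketch of the hybrid loss for the binary case; the equal weighting of the two terms is an assumption, since the claim specifies only that the loss combines a Dice component and a cross-entropy component:

```python
import numpy as np

def hybrid_loss(pred, target, eps=1e-6):
    """Hybrid loss: Dice loss + binary cross-entropy.

    pred   : predicted foreground probabilities in (0, 1), any shape
    target : binary ground-truth mask of the same shape
    """
    pred, target = pred.ravel(), target.ravel()
    # the Dice term rewards overlap between prediction and label region
    inter = (pred * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    # the cross-entropy term enforces per-pixel classification accuracy
    ce = -np.mean(target * np.log(pred + eps)
                  + (1 - target) * np.log(1 - pred + eps))
    return dice + ce

mask = np.array([1.0, 1.0, 0.0, 0.0])
good = np.array([0.99, 0.99, 0.01, 0.01])   # high overlap with the mask
bad = np.array([0.01, 0.01, 0.99, 0.99])    # no overlap with the mask
print(hybrid_loss(good, mask) < hybrid_loss(bad, mask))  # True
```

Minimizing this loss increases the overlap between the predicted region and the label region (the Dice term) while keeping per-pixel classification accurate (the cross-entropy term), matching the training objective stated in claim 4.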
4. The boundary-aware dual-attention-guided liver segment segmentation method of claim 1, wherein:
step 4, an unlabeled abdominal MRI is input into the trained segmentation model, which outputs the specific segmentation result of the liver segments;
in the training process, the overlap between the predicted region and the ground-truth label region is increased by optimizing the hybrid loss function of step 3.4, ensuring the accuracy of pixel classification.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110556924.XA CN113344951B (en) | 2021-05-21 | 2021-05-21 | Boundary-aware dual-attention-guided liver segment segmentation method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110556924.XA CN113344951B (en) | 2021-05-21 | 2021-05-21 | Boundary-aware dual-attention-guided liver segment segmentation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113344951A true CN113344951A (en) | 2021-09-03 |
CN113344951B CN113344951B (en) | 2024-05-28 |
Family
ID=77470547
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110556924.XA Active CN113344951B (en) | 2021-05-21 | 2021-05-21 | Boundary-aware dual-attention-guided liver segment segmentation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113344951B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113744284A (en) * | 2021-09-06 | 2021-12-03 | 浙大城市学院 | Brain tumor image region segmentation method and device, neural network and electronic equipment |
CN113902692A (en) * | 2021-09-26 | 2022-01-07 | 北京医准智能科技有限公司 | Blood vessel segmentation method, device and computer readable medium |
CN113936220A (en) * | 2021-12-14 | 2022-01-14 | 深圳致星科技有限公司 | Image processing method, storage medium, electronic device, and image processing apparatus |
CN114066908A (en) * | 2021-10-09 | 2022-02-18 | 山东师范大学 | Method and system for brain tumor image segmentation |
CN114119538A (en) * | 2021-11-24 | 2022-03-01 | 广东工业大学 | Deep learning segmentation system for hepatic vein and hepatic portal vein |
CN114565628A (en) * | 2022-03-23 | 2022-05-31 | 中南大学 | Image segmentation method and system based on boundary perception attention |
CN114926423A (en) * | 2022-05-12 | 2022-08-19 | 深圳大学 | Polyp image segmentation method, device, apparatus and medium based on attention and boundary constraint |
CN116205934A (en) * | 2023-02-13 | 2023-06-02 | 苏州大学 | CNN-based meibomian gland region and meibomian gland atrophy region segmentation model and method |
EP4242682A1 (en) | 2022-03-07 | 2023-09-13 | FUJIFILM Healthcare Corporation | Magnetic resonance imaging apparatus, image processor, and image noise reduction method |
CN117635478A (en) * | 2024-01-23 | 2024-03-01 | 中国科学技术大学 | Low-light image enhancement method based on spatial channel attention |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110163878A (en) * | 2019-05-28 | 2019-08-23 | Sichuan Zhiying Technology Co., Ltd. | Image semantic segmentation method based on a dual multi-scale attention mechanism
CN111598864A (en) * | 2020-05-14 | 2020-08-28 | Beijing University of Technology | Hepatocellular carcinoma differentiation assessment method based on multi-modal image contribution fusion
CN112365496A (en) * | 2020-12-02 | 2021-02-12 | North University of China | Multi-modal MR image brain tumor segmentation method based on deep learning and multi-guidance
CN112418176A (en) * | 2020-12-09 | 2021-02-26 | Jiangxi Normal University | Remote sensing image semantic segmentation method based on pyramid pooling multilevel feature fusion network
US20210089807A1 (en) * | 2019-09-25 | 2021-03-25 | Samsung Electronics Co., Ltd. | System and method for boundary aware semantic segmentation |
2021
- 2021-05-21 CN CN202110556924.XA patent CN113344951B (en) granted, status Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110163878A (en) * | 2019-05-28 | 2019-08-23 | Sichuan Zhiying Technology Co., Ltd. | Image semantic segmentation method based on a dual multi-scale attention mechanism
US20210089807A1 (en) * | 2019-09-25 | 2021-03-25 | Samsung Electronics Co., Ltd. | System and method for boundary aware semantic segmentation
CN111598864A (en) * | 2020-05-14 | 2020-08-28 | Beijing University of Technology | Hepatocellular carcinoma differentiation assessment method based on multi-modal image contribution fusion
CN112365496A (en) * | 2020-12-02 | 2021-02-12 | North University of China | Multi-modal MR image brain tumor segmentation method based on deep learning and multi-guidance
CN112418176A (en) * | 2020-12-09 | 2021-02-26 | Jiangxi Normal University | Remote sensing image semantic segmentation method based on pyramid pooling multilevel feature fusion network
Non-Patent Citations (2)
Title |
---|
Liu Yunpeng; Liu Guangpin; Wang Renfang; Jin Ran; Sun Dechao; Qiu Hong; Dong Chen; Li Jin; Hong Guobin: "Liver tumor CT segmentation combining deep learning with radiomics", Journal of Image and Graphics, no. 10, 16 October 2020 (2020-10-16) *
Zhao Xin; Shi Delai; Wang Hongkai: "White matter lesion segmentation method based on a 3D fully convolutional deep neural network", Computer and Modernization, no. 10, 15 October 2020 (2020-10-15) *
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113744284B (en) * | 2021-09-06 | 2023-08-29 | 浙大城市学院 | Brain tumor image region segmentation method and device, neural network and electronic equipment |
CN113744284A (en) * | 2021-09-06 | 2021-12-03 | 浙大城市学院 | Brain tumor image region segmentation method and device, neural network and electronic equipment |
CN113902692A (en) * | 2021-09-26 | 2022-01-07 | 北京医准智能科技有限公司 | Blood vessel segmentation method, device and computer readable medium |
CN114066908A (en) * | 2021-10-09 | 2022-02-18 | 山东师范大学 | Method and system for brain tumor image segmentation |
CN114119538A (en) * | 2021-11-24 | 2022-03-01 | 广东工业大学 | Deep learning segmentation system for hepatic vein and hepatic portal vein |
CN113936220A (en) * | 2021-12-14 | 2022-01-14 | 深圳致星科技有限公司 | Image processing method, storage medium, electronic device, and image processing apparatus |
CN113936220B (en) * | 2021-12-14 | 2022-03-04 | 深圳致星科技有限公司 | Image processing method, storage medium, electronic device, and image processing apparatus |
EP4242682A1 (en) | 2022-03-07 | 2023-09-13 | FUJIFILM Healthcare Corporation | Magnetic resonance imaging apparatus, image processor, and image noise reduction method |
CN114565628A (en) * | 2022-03-23 | 2022-05-31 | 中南大学 | Image segmentation method and system based on boundary perception attention |
CN114565628B (en) * | 2022-03-23 | 2022-09-13 | 中南大学 | Image segmentation method and system based on boundary perception attention |
CN114926423B (en) * | 2022-05-12 | 2023-02-10 | 深圳大学 | Polyp image segmentation method, device, apparatus and medium based on attention and boundary constraint |
CN114926423A (en) * | 2022-05-12 | 2022-08-19 | 深圳大学 | Polyp image segmentation method, device, apparatus and medium based on attention and boundary constraint |
CN116205934A (en) * | 2023-02-13 | 2023-06-02 | 苏州大学 | CNN-based meibomian gland region and meibomian gland atrophy region segmentation model and method |
CN117635478A (en) * | 2024-01-23 | 2024-03-01 | 中国科学技术大学 | Low-light image enhancement method based on spatial channel attention |
CN117635478B (en) * | 2024-01-23 | 2024-05-17 | 中国科学技术大学 | Low-light image enhancement method based on spatial channel attention |
Also Published As
Publication number | Publication date |
---|---|
CN113344951B (en) | 2024-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113344951B (en) | Boundary-aware dual-attention-guided liver segment segmentation method | |
CN109087327B (en) | Thyroid nodule ultrasonic image segmentation method of cascaded full convolution neural network | |
CN111784671B (en) | Pathological image focus region detection method based on multi-scale deep learning | |
Huang et al. | Coronary artery segmentation by deep learning neural networks on computed tomographic coronary angiographic images | |
CN110930416B (en) | MRI image prostate segmentation method based on U-shaped network | |
CN113947609B (en) | Deep learning network structure and multi-label aortic dissection CT image segmentation method | |
CN112241966B (en) | Method and system for establishing and segmenting multitask and multi-classification chest organ segmentation model | |
Cheng et al. | Contour-aware semantic segmentation network with spatial attention mechanism for medical image | |
US20230281809A1 (en) | Connected machine-learning models with joint training for lesion detection | |
CN113506310B (en) | Medical image processing method and device, electronic equipment and storage medium | |
Liu et al. | Automated cardiac segmentation of cross-modal medical images using unsupervised multi-domain adaptation and spatial neural attention structure | |
CN112446892A (en) | Cell nucleus segmentation method based on attention learning | |
CN115063592B (en) | Multi-scale-based full-scanning pathological feature fusion extraction method and system | |
Chen et al. | Combining edge guidance and feature pyramid for medical image segmentation | |
Liu et al. | Cascaded atrous dual attention U-Net for tumor segmentation | |
CN113450359A (en) | Medical image segmentation, display, model training methods, systems, devices, and media | |
CN115908449A (en) | 2.5D medical CT image segmentation method and device based on improved UNet model | |
CN113012164A (en) | U-Net kidney tumor image segmentation method and device based on inter-polymeric layer information and storage medium | |
Pal et al. | A fully connected reproducible SE-UResNet for multiorgan chest radiographs segmentation | |
Baumgartner et al. | Fully convolutional networks in medical imaging: applications to image enhancement and recognition | |
Das et al. | Attention-UNet architectures with pretrained backbones for multi-class cardiac MR image segmentation | |
CN115471512A (en) | Medical image segmentation method based on self-supervision contrast learning | |
Arulappan et al. | Liver tumor segmentation using a new asymmetrical dilated convolutional semantic segmentation network in CT images | |
Goni et al. | Salient feature extraction using Attention for Brain Tumor segmentation | |
Allgöwer et al. | Liver Tumor Segmentation Using Classical Algorithms & Deep Learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||