CN111145170B - Medical image segmentation method based on deep learning - Google Patents

Medical image segmentation method based on deep learning

Info

Publication number
CN111145170B
CN111145170B (application CN201911416961.XA)
Authority
CN
China
Prior art keywords
convolution
layer
data
input
attention
Prior art date
Legal status
Active
Application number
CN201911416961.XA
Other languages
Chinese (zh)
Other versions
CN111145170A (en)
Inventor
陈俊江
刘宇
贾树开
陈智
方俊
李治熹
Current Assignee
University of Electronic Science and Technology of China
Sichuan Provincial People's Hospital
Original Assignee
University of Electronic Science and Technology of China
Sichuan Provincial People's Hospital
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China and Sichuan Provincial People's Hospital
Priority to CN201911416961.XA
Publication of CN111145170A
Application granted
Publication of CN111145170B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical fields of medical image processing and computer vision, and specifically relates to a medical image segmentation method based on deep learning. On top of a U-Net baseline, the disclosed method integrates a multi-scale framework, dense convolutional networks, an attention mechanism, a pyramid model, and small-sample enhancement. These techniques enable feature reuse, recover lost context information, suppress responses from irrelevant regions, and improve performance on small ROIs, thereby addressing the pain points of ultrasound images, namely scarce samples, low resolution, blurred boundaries, and large inter-sample variation, and achieving the best segmentation results.

Description

Medical image segmentation method based on deep learning
Technical Field
The invention belongs to the technical field of medical image processing and computer vision, and particularly relates to a medical image segmentation method based on deep learning.
Background
With the advance of technology, physicians have come to rely on large volumes of medical image data as the basis for diagnosis and treatment, which in turn has driven the development of many new techniques. At the same time, correctly segmenting medical images has become an important bottleneck restricting these techniques; accurate image segmentation is now the most pressing problem in the medical imaging field and urgently needs to be solved.
In recent years, with growing computing power and data volumes, deep learning has made remarkable progress in the medical imaging field. Convolutional neural networks (CNNs) can capture the nonlinear mapping between input and output and automatically learn local features and high-level abstract features through a multi-layer network structure, outperforming manual feature extraction and prediction. However, a conventional CNN cannot effectively propagate low-level features to higher levels; the U-Net algorithm was therefore proposed, which fuses low-level and high-level features through skip connections and achieves good segmentation results.
Most existing medical image segmentation algorithms are based on U-Net (the U-Net baseline). However, medical images suffer from unbalanced data, large variation among training samples, and small regions of interest (ROIs), which lead to difficulty balancing precision and recall, insufficient feature extraction, redundant computing resources and model parameters, and unsatisfactory segmentation results. Segmenting ultrasound images more accurately therefore remains an urgent problem.
Disclosure of Invention
The purpose of the invention is to provide, based on deep learning, an ultrasound image segmentation method capable of accurately segmenting medical tissue or lesions.
The technical scheme adopted by the invention is as follows:
a medical image segmentation method based on deep learning comprises the following steps:
step 1, preprocessing an original ultrasonic image to obtain training set and verification set data;
step 2, performing data enhancement on the training set and the verification set data, including:
1) increasing the data volume of the training and verification sets by offline enhancement: rotation and horizontal flipping are applied for 10-fold enhancement;
2) enhancing the generalization of the network model by online enhancement: rotation, scale, zoom, translation and color-contrast transformations are applied, and an online iterator is used to enhance data diversity while reducing memory pressure;
step 3, constructing a multi-scale dense-convolution pyramid attention U-shaped network, comprising:
1) a multi-input dense convolution encoder module: the input layer takes samples in N×N×1 format, N being a positive integer; a multi-input module scales the input data into four groups of inputs at the ratio 8:4:2:1, wherein the first group of data forms input 1 through a 3×3 convolution, passes through the 1st 4-layer dense convolution module, and then undergoes the 1st down-sampling; the second group of data forms input 2 through a 3×3 convolution, is fused with the data after the 1st down-sampling, passes through the 2nd 4-layer dense convolution module, and then undergoes the 2nd down-sampling; the third and fourth layers are constructed in the same way; each dense convolution module contains 4 densely connected convolution layers, the input of each layer being the fusion of the feature maps output by all previous layers of the dense block; the encoder module uses the dense convolution layers and pooling layers to complete feature extraction over 4 layers in total, where the number of feature-map channels increases and the spatial size decreases with depth; the numbers of convolution kernel channels from the 1st to the 4th layer are 32, 64, 128 and 256 respectively, each layer's convolution kernel size is 3×3, and the dilation rate of the dilated convolution is r = 2;
2) a feature pyramid attention center module: after the 4th down-sampling, the data passes through a feature pyramid attention center module comprising a main branch, a direct branch and a global branch; in the main branch, the input undergoes a first down-sampling followed by one 7×7 convolution to construct the first-layer network; second and third down-samplings follow, using 5×5 and 3×3 convolution kernels respectively, to construct the second- and third-layer networks; after the third down-sampling and convolution, the data is convolved again at the same level size, up-sampled, and fused with the same-level convolved data of the second layer; the second and first layers perform data fusion in the same way, yielding the output P(X); in the direct branch, the input passes through a 1×1 convolution to obtain X1, which is multiplied by the main-branch output P(X) to obtain X1 ⊗ P(X); in the global branch, the input undergoes global average pooling to obtain X2, which is added to X1 ⊗ P(X) to give the output of the feature pyramid attention center module:

F(X) = (X1 ⊗ P(X)) ⊕ X2

where ⊗ denotes element-wise multiplication and ⊕ element-wise addition;
3) a multi-output attention mechanism decoder module: deconvolution is used for up-sampling, and the attention feature map of each layer undergoes channel feature fusion with the up-sampled feature map; the attention mechanism is as follows: the high-dimensional features are convolved by 1×1 to obtain a gating signal g_i; the low-dimensional feature x_l is sampled by a factor of 2 and added to the gating signal g_i, and the sum then undergoes global average pooling, a 1×1 convolution, a nonlinear transformation and up-sampling to obtain the linear attention coefficient q_att; finally, q_att is multiplied element-wise with the low-dimensional feature x_l, retaining the relevant activations, to obtain the attention coefficient α:

q_att = ψ^T · δ1(W_x^T · x_l + W_g^T · g_i + b_g) + b_ψ
α = δ2(q_att(x_l, g_i; Θ_att))

where x_l denotes the pixel vector, g_i the gating vector, q_att the linear attention coefficient, α the attention coefficient, δ1 the ReLU activation function and δ2 the Sigmoid activation function; Θ_att comprises the linear transformations W_x ∈ R^(F_l×F_int), W_g ∈ R^(F_g×F_int), ψ ∈ R^(F_int×1) and the bias terms b_g ∈ R^(F_int), b_ψ ∈ R;
the constructed U-shaped network adopts Tversky Loss and Focal Loss as the multi-output loss function;
step 4, inputting training set data into the constructed U-shaped network for training to obtain a learned convolutional neural network model, and performing parameter adjustment on the verification set until an optimal model and corresponding parameters thereof are obtained to obtain a trained U-shaped network;
step 5, inputting the preprocessed original ultrasonic image to be segmented into the trained U-shaped network to obtain the segmentation result.
The beneficial effects of the invention are as follows: on top of a U-Net baseline, the method integrates a multi-scale framework, dense convolutional networks, an attention mechanism, a pyramid model, and small-sample enhancement. These techniques enable feature reuse, recover lost context information, suppress responses from irrelevant regions, and improve performance on small ROIs, addressing the pain points of ultrasound images, namely scarce samples, low resolution, blurred boundaries, and large inter-sample variation, and achieving the best segmentation results.
Drawings
FIG. 1 is a schematic diagram of a medical image segmentation process of the present invention;
fig. 2 is a schematic diagram of the overall structure of the DPA-UNet network of the present invention;
FIG. 3 is a schematic diagram of a dense convolutional network module of the present invention;
FIG. 4 is a schematic diagram of a feature pyramid attention module according to the present invention;
FIG. 5 is a schematic view of an attention mechanism module of the present invention;
FIG. 6 is a schematic of Loss and DSC for the training and verification sets: (a) the loss-function curves of the training and verification sets; (b) the accuracy (DSC) curves of the training and verification sets;
FIG. 7 is a schematic diagram of the original label and the segmentation result of the test set of the present invention: (a) for the test label image, (b) for the segmentation result image.
Detailed Description
The invention is described in detail below with reference to the figures and experiments:
the invention provides a thyroid ultrasound image segmentation method based on deep learning, which mainly comprises 5 major modules of data acquisition, data preprocessing, network model construction, data training and parameter adjustment, data testing and evaluation and the like, as shown in figure 1. The specific implementation steps are as follows:
1. preprocessing an original ultrasonic image, and dividing a training set and a verification set;
1) removing patient privacy information and image instrument marks on the ultrasonic image;
2) making a data label (label) by a professional ultrasound imaging physician team;
3) dividing the original data into a training set, a verification set and a test set at the ratio 6:2:2, with the labels divided in the same way;
4) unifying the image resolution to 256 × 256, binarizing the labels and normalizing them to the [0,1] interval; a preprocessing sketch follows below.
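As an illustration of the preprocessing above, here is a minimal Python sketch. The file handling, the OpenCV calls and the function names (preprocess, split_622) are assumptions for illustration, not taken from the patent.

```python
# Hypothetical preprocessing sketch; file layout, OpenCV usage and all
# function names are assumptions, not taken from the patent.
import numpy as np
import cv2

def preprocess(image_paths, label_paths, size=256):
    """Resize to 256x256, scale images to [0,1], binarize labels to {0,1}."""
    images, labels = [], []
    for ip, lp in zip(sorted(image_paths), sorted(label_paths)):
        img = cv2.imread(ip, cv2.IMREAD_GRAYSCALE)
        lbl = cv2.imread(lp, cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (size, size)).astype(np.float32) / 255.0
        lbl = cv2.resize(lbl, (size, size), interpolation=cv2.INTER_NEAREST)
        images.append(img[..., None])                               # N x N x 1 sample format
        labels.append(((lbl > 127).astype(np.float32))[..., None])  # binarized label
    return np.stack(images), np.stack(labels)

def split_622(x, y, seed=0):
    """Divide the data into training/verification/test sets at the 6:2:2 ratio."""
    idx = np.random.RandomState(seed).permutation(len(x))
    n_tr, n_va = int(0.6 * len(x)), int(0.2 * len(x))
    return ((x[idx[:n_tr]], y[idx[:n_tr]]),
            (x[idx[n_tr:n_tr + n_va]], y[idx[n_tr:n_tr + n_va]]),
            (x[idx[n_tr + n_va:]], y[idx[n_tr + n_va:]]))
```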
2. Data enhancement is carried out on training set and verification set of small sample
1) Offline enhancement: rotation and horizontal flipping expand the data sets to 10 times their original size.
2) Online enhancement: a DataGenerator online iterator applies rotation, scale, zoom, translation and color-contrast transformations, enhancing data diversity and the generalization of the network model while reducing memory pressure; a sketch follows below.
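The patent names a "DataGenerator" online iterator without specifying it; the sketch below assumes Keras's ImageDataGenerator as one plausible realization, with illustrative transformation ranges, and takes x_train/y_train from the preprocessing sketch above.

```python
# Assumed realization of the online iterator using Keras's ImageDataGenerator;
# the transformation ranges below are illustrative values, not from the patent.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

aug = dict(rotation_range=15,           # rotation transformation
           zoom_range=0.1,              # scale/zoom transformation
           width_shift_range=0.05,      # translation transformation
           height_shift_range=0.05)

image_gen = ImageDataGenerator(brightness_range=(0.9, 1.1), **aug)  # color/contrast change
mask_gen = ImageDataGenerator(**aug)    # masks get the same geometry, no color change

seed = 1                                # identical seeds keep image/mask pairs aligned
image_iter = image_gen.flow(x_train, batch_size=8, seed=seed)
mask_iter = mask_gen.flow(y_train, batch_size=8, seed=seed)
train_iter = zip(image_iter, mask_iter) # yields augmented batches lazily, saving memory
```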
3. Design a multiscale dense convolution pyramid attention U-Net network (DPA-UNet) (as shown in FIG. 2)
The input layer of the DPA-UNet network takes samples in N×N×1 format (N a positive integer), which a multi-input module divides into four groups of input data. The first group forms input 1 through a 3×3 convolution, passes through the 1st 4-layer dense convolution module, and undergoes the 1st down-sampling; the second group forms input 2 through a 3×3 convolution, is fused (concat) with the data after the 1st down-sampling, passes through the 2nd 4-layer dense convolution module, and undergoes the 2nd down-sampling; the third and fourth layers are built the same way. After the 4th down-sampling, the data passes through the feature pyramid attention center module; the center module forms gated attention with the data after the fourth-layer dense convolution and is fused (concat) with the center module's up-sampled data, followed by two successive convolutions (3×3 convolution, BN operation, ReLU activation function). The third, second and first layers follow the same pattern. Finally, one convolution (1×1 convolution, sigmoid activation function) produces the pixel-level classification, i.e., the segmentation of the image.
1) Multi-input dense convolution encoder module (shown in the left half of FIG. 2)
1.1 Multi-input module: the input data is scaled into four groups at the ratio 8:4:2:1; the three smaller-scale groups are fused with the second, third and fourth sampling layers of the encoder network respectively.
1.2 dense convolution module (as shown in FIG. 3): each dense block contains 4 densely connected convolutional layers, the input of each layer is a feature map fusion of all previous layer outputs of the dense block. The pooled feature maps of each layer of the encoder will go through a dense block (BN operation, ReLU activation function and 3 × 3 convolution).
1.3 The encoder module mainly uses the dense convolution layers and pooling layers to complete feature extraction over 4 layers in total; as the depth increases, the number of feature-map channels increases and the spatial size decreases. The numbers of convolution kernel channels from the 1st to the 4th layer are 32, 64, 128 and 256 respectively, each layer's convolution kernel size is 3 × 3, and the dilation rate of the dilated convolution is r = 2. A sketch of the dense block and one encoder stage follows below.
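A minimal Keras sketch of one 4-layer dense block and one encoder stage follows. The BN, ReLU and 3 × 3 dilated-convolution composition, the 4-layer density and the dilation rate r = 2 come from the description above; the helper names and the max-pooling choice are assumptions.

```python
# Sketch of the dense block and one encoder stage (Keras functional API).
# Composition (BN -> ReLU -> dilated 3x3 conv, 4 layers, r = 2) follows the
# text; helper names and max-pooling are assumptions.
from tensorflow.keras import layers

def dense_block(x, channels, num_layers=4, dilation=2):
    feats = [x]
    for _ in range(num_layers):
        h = feats[0] if len(feats) == 1 else layers.Concatenate()(feats)
        h = layers.BatchNormalization()(h)
        h = layers.Activation("relu")(h)
        h = layers.Conv2D(channels, 3, padding="same",
                          dilation_rate=dilation)(h)  # dilated 3x3 convolution
        feats.append(h)                               # each layer sees all previous outputs
    return layers.Concatenate()(feats)

def encoder_stage(x, scaled_input, channels):
    branch = layers.Conv2D(channels, 3, padding="same")(scaled_input)  # multi-input branch
    x = branch if x is None else layers.Concatenate()([x, branch])     # fuse with prior stage
    x = dense_block(x, channels)
    skip = x                                  # kept for the decoder's attention gate
    return layers.MaxPooling2D(2)(x), skip    # down-sampling
```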
2) Feature pyramid attention center module (as shown in figure 4)
2.1 Main branch: the input is first down-sampled and then convolved (7 × 7 convolution, BN operation, ReLU activation function) to construct the first-layer network; second and third down-samplings follow, using 5 × 5 and 3 × 3 convolution kernels respectively, to construct the second- and third-layer networks; after the third down-sampling and convolution, the data is convolved again at the same level size, up-sampled, and then fused (concat) with the same-level convolved data of the second layer; the second and first layers are treated in the same way, constructing a pyramid structure P(X) with multi-scale receptive fields.
2.2 Direct branch: the input passes through a 1 × 1 convolution to obtain X1, reducing the channel dimension of the feature map while keeping its size unchanged. X1 is then multiplied by the pyramid structure P(X), combining long-range context features of adjacent scales more precisely.
2.3 Global pooling branch: the input passes through a global average pooling branch to obtain X2, which is added to the product of the direct and main branches to give the feature pyramid attention output (a sketch of this module follows below). The formula is as follows:

F(X) = (X1 ⊗ P(X)) ⊕ X2
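The following Keras sketch mirrors this module. The 7 × 7 / 5 × 5 / 3 × 3 kernel sizes, the concat fusions and the multiply-then-add combination of the three branches follow the text; the pooling and up-sampling operators and the channel handling are assumptions (spatial sizes are assumed divisible by 8).

```python
# Sketch of the feature pyramid attention center module (Keras).
# Kernel sizes and branch combination follow the text; pooling/up-sampling
# operators and channel handling are assumptions.
from tensorflow.keras import layers

def feature_pyramid_attention(x, channels):
    # main branch P(X): three down-sampled levels with 7x7, 5x5, 3x3 kernels
    c1 = layers.Conv2D(channels, 7, padding="same")(layers.MaxPooling2D(2)(x))
    c2 = layers.Conv2D(channels, 5, padding="same")(layers.MaxPooling2D(2)(c1))
    c3 = layers.Conv2D(channels, 3, padding="same")(layers.MaxPooling2D(2)(c2))
    c3 = layers.Conv2D(channels, 3, padding="same")(c3)          # same-level convolution
    m2 = layers.Concatenate()([layers.UpSampling2D(2)(c3),
                               layers.Conv2D(channels, 5, padding="same")(c2)])
    m1 = layers.Concatenate()([layers.UpSampling2D(2)(m2),
                               layers.Conv2D(channels, 7, padding="same")(c1)])
    p = layers.Conv2D(channels, 1)(layers.UpSampling2D(2)(m1))   # P(X), back to input size

    x1 = layers.Conv2D(channels, 1)(x)                           # direct branch X1
    x2 = layers.GlobalAveragePooling2D(keepdims=True)(x)         # global branch X2
    # F(X) = (X1 (x) P(X)) (+) X2, with X2 broadcast over the spatial dimensions
    return layers.Lambda(lambda t: t[0] * t[1] + t[2])([x1, p, x2])
```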
3) multi-output attention machine decoder module (as shown on the right half of FIG. 2)
3.1 The decoder module comprises 4 layers in total; deconvolution is used for up-sampling, and the attention feature map of each layer undergoes channel feature fusion with the up-sampled feature map. As the decoder depth increases, the number of feature-map channels decreases and the spatial size increases. The numbers of convolution kernel channels from the 6th to the 9th layer are 256, 128, 64 and 32 respectively, and each layer's convolution kernel size is 3 × 3.
3.2 Attention mechanism module (as shown in FIG. 5): the high-dimensional features are convolved by 1 × 1 to obtain a gating signal g_i; the low-dimensional feature x_l is sampled by a factor of 2 and added to the gating signal g_i, and the sum then undergoes global average pooling, a 1 × 1 convolution, a nonlinear transformation and up-sampling to obtain the linear attention coefficient q_att; finally, q_att is multiplied element-wise with the low-dimensional feature x_l, retaining the relevant activations, to obtain the attention coefficient α. The formulas are as follows:

q_att = ψ^T · δ1(W_x^T · x_l + W_g^T · g_i + b_g) + b_ψ
α = δ2(q_att(x_l, g_i; Θ_att))

A sketch of this attention gate follows below.
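Below is a Keras sketch of the attention gate following these formulas, with W_x realized as a stride-2 1 × 1 convolution and ψ as a 1 × 1 convolution. The global-average-pooling step mentioned above is omitted here, because applying it literally would collapse the spatial attention map; that omission is an assumption of this sketch.

```python
# Sketch of the attention gate (Keras). Follows the q_att / alpha formulas;
# the text's global-average-pooling step is omitted here (see lead-in).
from tensorflow.keras import layers

def attention_gate(x_l, g, inter_channels):
    theta_x = layers.Conv2D(inter_channels, 1, strides=2)(x_l)  # W_x^T x_l, sampled by 2
    phi_g = layers.Conv2D(inter_channels, 1)(g)                 # W_g^T g_i + b_g
    f = layers.Activation("relu")(layers.Add()([theta_x, phi_g]))        # delta_1 (ReLU)
    q_att = layers.Conv2D(1, 1)(f)                              # psi^T f + b_psi
    alpha = layers.UpSampling2D(2)(layers.Activation("sigmoid")(q_att))  # delta_2, resampled
    # multiply element-wise with x_l, retaining only the relevant activations
    return layers.Lambda(lambda t: t[0] * t[1])([x_l, alpha])
```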
4. inputting the training set and the verification set data into a DPA-UNet network for training to obtain an optimal parameter model
The loss and DSC are recorded for each training run. Based on the loss and DSC on the verification set, the parameters are adjusted, the model is retrained, and the best model and its parameters are saved. A sketch of the loss functions and training loop follows below.
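A sketch of the Tversky + Focal multi-output loss, a DSC metric and the train-and-checkpoint loop is given below. The α/β/γ values, the optimizer, the epoch counts and the checkpoint criterion are assumptions, and model stands for the DPA-UNet assembled from the module sketches above.

```python
# Sketch of the Tversky + Focal loss, DSC metric and training loop (TF/Keras).
# Hyperparameter values and the checkpoint criterion are assumptions.
import tensorflow as tf

def tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, eps=1e-6):
    tp = tf.reduce_sum(y_true * y_pred)
    fn = tf.reduce_sum(y_true * (1.0 - y_pred))
    fp = tf.reduce_sum((1.0 - y_true) * y_pred)
    return 1.0 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)

def focal_loss(y_true, y_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    pt = tf.where(y_true > 0.5, y_pred, 1.0 - y_pred)        # p_t per pixel
    w = tf.where(y_true > 0.5, alpha, 1.0 - alpha)
    return -tf.reduce_mean(w * tf.pow(1.0 - pt, gamma) * tf.math.log(pt))

def combined_loss(y_true, y_pred):                           # multi-output loss
    return tversky_loss(y_true, y_pred) + focal_loss(y_true, y_pred)

def dsc(y_true, y_pred, eps=1e-6):                           # Dice similarity coefficient
    y_bin = tf.cast(y_pred > 0.5, tf.float32)
    inter = tf.reduce_sum(y_true * y_bin)
    return (2.0 * inter + eps) / (tf.reduce_sum(y_true) + tf.reduce_sum(y_bin) + eps)

model.compile(optimizer="adam", loss=combined_loss, metrics=[dsc])
best = tf.keras.callbacks.ModelCheckpoint("best_dpa_unet.h5", monitor="val_dsc",
                                          mode="max", save_best_only=True)
model.fit(train_iter, steps_per_epoch=200, epochs=100,
          validation_data=(x_val, y_val), callbacks=[best])
```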
5. Inputting the data to be segmented into the optimal parameter model to obtain the segmentation result (as shown in FIGS. 6 and 7)
FIG. 6 is a schematic diagram of the Loss and DSC of the training and verification sets provided by the embodiment of the present invention: (a) the loss-function curves of the training and verification sets; (b) the accuracy (DSC) curves of the training and verification sets.
Fig. 7 is a schematic diagram of an original label and a segmentation result of a test set according to an embodiment of the present invention: (a) the original label image and (b) the segmentation result image.

Claims (1)

1. A medical image segmentation method based on deep learning is characterized by comprising the following steps:
step 1, preprocessing an original ultrasonic image to obtain training set and verification set data;
step 2, performing data enhancement on the training set and the verification set data, including:
1) increasing the data volume of the training and verification sets by offline enhancement: rotation and horizontal flipping are applied for 10-fold enhancement;
2) enhancing the generalization of the network model by online enhancement: rotation, scale, zoom, translation and color-contrast transformations are applied, and an online iterator is used to enhance data diversity while reducing memory pressure;
step 3, constructing a multi-scale dense-convolution pyramid attention U-shaped network, comprising:
1) a multi-input dense convolution encoder module: the input layer takes samples in N×N×1 format, N being a positive integer; a multi-input module scales the input data into four groups of inputs at the ratio 8:4:2:1, wherein the first group of data forms input 1 through a 3×3 convolution, passes through the 1st 4-layer dense convolution module, and then undergoes the 1st down-sampling; the second group of data forms input 2 through a 3×3 convolution, is fused with the data after the 1st down-sampling, passes through the 2nd 4-layer dense convolution module, and then undergoes the 2nd down-sampling; the third and fourth layers are constructed in the same way; each dense convolution module contains 4 densely connected convolution layers, the input of each layer being the fusion of the feature maps output by all previous layers of the dense convolution module; the encoder module uses the dense convolution layers and pooling layers to complete feature extraction over 4 layers in total, where the number of feature-map channels increases and the spatial size decreases with depth; the numbers of convolution kernel channels from the 1st to the 4th layer are 32, 64, 128 and 256 respectively, each layer's convolution kernel size is 3×3, and the dilation rate of the dilated convolution is r = 2;
2) a feature pyramid attention center module: after the 4th down-sampling, the data passes through a feature pyramid attention center module comprising a main branch, a direct branch and a global branch; in the main branch, the input undergoes a first down-sampling followed by one 7×7 convolution to construct the first-layer network; second and third down-samplings follow, using 5×5 and 3×3 convolution kernels respectively, to construct the second- and third-layer networks; after the third down-sampling and convolution, the data is convolved again at the same level size, up-sampled, and fused with the same-level convolved data of the second layer; the second and first layers perform data fusion in the same way, yielding the output P(X); in the direct branch, the input passes through a 1×1 convolution to obtain X1, which is multiplied by the main-branch output P(X) to obtain X1 ⊗ P(X); in the global branch, the input undergoes global average pooling to obtain X2, which is added to X1 ⊗ P(X) to give the output of the feature pyramid attention center module:

F(X) = (X1 ⊗ P(X)) ⊕ X2

where ⊗ denotes element-wise multiplication and ⊕ element-wise addition;
3) a multi-output attention mechanism decoder module: deconvolution is used for up-sampling, and the attention feature map of each layer undergoes channel feature fusion with the up-sampled feature map; the attention mechanism is as follows: the high-dimensional features are convolved by 1×1 to obtain a gating signal g_i; the low-dimensional feature x_l is sampled by a factor of 2 and added to the gating signal g_i, and the sum then undergoes global average pooling, a 1×1 convolution, a nonlinear transformation and up-sampling to obtain the linear attention coefficient q_att; finally, q_att is multiplied element-wise with the low-dimensional feature x_l, retaining the relevant activations, to obtain the attention coefficient α:

q_att = ψ^T · δ1(W_x^T · x_l + W_g^T · g_i + b_g) + b_ψ
α = δ2(q_att(x_l, g_i; Θ_att))

wherein x_l denotes the pixel vector, g_i the gating vector, q_att the linear attention coefficient, α the attention coefficient, δ1 the ReLU activation function and δ2 the Sigmoid activation function; Θ_att comprises the linear transformations W_x ∈ R^(F_l×F_int), W_g ∈ R^(F_g×F_int), ψ ∈ R^(F_int×1) and the bias terms b_g ∈ R^(F_int), b_ψ ∈ R;
Step 4, inputting training set data into the constructed U-shaped network for training to obtain a learned convolutional neural network model, and performing parameter adjustment on the verification set until an optimal model and corresponding parameters thereof are obtained to obtain a trained U-shaped network;
step 5, inputting the preprocessed original ultrasonic image into the trained U-shaped network to obtain the segmentation result.
Application CN201911416961.XA, priority date 2019-12-31, filing date 2019-12-31: Medical image segmentation method based on deep learning; status: Active; granted as CN111145170B (en).

Priority Applications (1)

CN201911416961.XA (granted as CN111145170B), priority date 2019-12-31, filing date 2019-12-31: Medical image segmentation method based on deep learning

Applications Claiming Priority (1)

CN201911416961.XA (granted as CN111145170B), priority date 2019-12-31, filing date 2019-12-31: Medical image segmentation method based on deep learning

Publications (2)

CN111145170A, published 2020-05-12
CN111145170B, published 2022-04-22

Family

ID=70522857

Family Applications (1)

CN201911416961.XA (granted as CN111145170B, Active), priority date 2019-12-31, filing date 2019-12-31: Medical image segmentation method based on deep learning

Country Status (1)

Country Link
CN (1) CN111145170B (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612750B (en) * 2020-05-13 2023-08-11 中国矿业大学 Overlapping chromosome segmentation network based on multi-scale feature extraction
CN111784701B (en) * 2020-06-10 2024-05-10 深圳市人民医院 Ultrasonic image segmentation method and system combining boundary feature enhancement and multi-scale information
CN112434723B (en) * 2020-07-23 2021-06-01 之江实验室 Day/night image classification and object detection method based on attention network
CN111915626B (en) * 2020-08-14 2024-02-02 东软教育科技集团有限公司 Automatic segmentation method, device and storage medium for heart ultrasonic image ventricular region
CN111986181B (en) * 2020-08-24 2021-07-30 中国科学院自动化研究所 Intravascular stent image segmentation method and system based on double-attention machine system
CN112070690B (en) * 2020-08-25 2023-04-25 西安理工大学 Single image rain removing method based on convolution neural network double-branch attention generation
CN112150428B (en) * 2020-09-18 2022-12-02 青岛大学 Medical image segmentation method based on deep learning
CN112184587B (en) * 2020-09-29 2024-04-09 中科方寸知微(南京)科技有限公司 Edge data enhancement model, and efficient edge data enhancement method and system based on model
CN112330642B (en) * 2020-11-09 2022-11-04 山东师范大学 Pancreas image segmentation method and system based on double-input full convolution network
CN112529042B (en) * 2020-11-18 2024-04-05 南京航空航天大学 Medical image classification method based on dual-attention multi-example deep learning
CN112419267A (en) * 2020-11-23 2021-02-26 齐鲁工业大学 Brain glioma segmentation model and method based on deep learning
CN113781440B (en) * 2020-11-25 2022-07-29 北京医准智能科技有限公司 Ultrasonic video focus detection method and device
CN112232328A (en) * 2020-12-16 2021-01-15 南京邮电大学 Remote sensing image building area extraction method and device based on convolutional neural network
CN112699835B (en) * 2021-01-12 2023-09-26 华侨大学 Road extraction method, device, equipment and storage medium based on reconstruction bias U-Net
CN112890828A (en) * 2021-01-14 2021-06-04 重庆兆琨智医科技有限公司 Electroencephalogram signal identification method and system for densely connecting gating network
CN112906780A (en) * 2021-02-08 2021-06-04 中国科学院计算技术研究所 Fruit and vegetable image classification system and method
CN112906718B (en) * 2021-03-09 2023-08-22 西安电子科技大学 Multi-target detection method based on convolutional neural network
CN113095330A (en) * 2021-04-30 2021-07-09 辽宁工程技术大学 Compressive attention model for semantically segmenting pixel groups
CN113205524B (en) * 2021-05-17 2023-04-07 广州大学 Blood vessel image segmentation method, device and equipment based on U-Net
CN113192087A (en) * 2021-05-19 2021-07-30 北京工业大学 Image segmentation method based on convolutional neural network
CN113256609B (en) * 2021-06-18 2021-09-21 四川大学 CT picture cerebral hemorrhage automatic check out system based on improved generation Unet
CN113591608A (en) * 2021-07-12 2021-11-02 浙江大学 High-resolution remote sensing image impervious surface extraction method based on deep learning
CN113674253B (en) * 2021-08-25 2023-06-30 浙江财经大学 Automatic segmentation method for rectal cancer CT image based on U-transducer
CN114387467B (en) * 2021-12-09 2022-07-29 哈工大(张家口)工业技术研究院 Medical image classification method based on multi-module convolution feature fusion
CN114004836B (en) * 2022-01-04 2022-04-01 中科曙光南京研究院有限公司 Self-adaptive biomedical image segmentation method based on deep learning
CN114419449B (en) * 2022-03-28 2022-06-24 成都信息工程大学 Self-attention multi-scale feature fusion remote sensing image semantic segmentation method
CN114494893B (en) * 2022-04-18 2022-06-14 成都理工大学 Remote sensing image feature extraction method based on semantic reuse context feature pyramid
CN115661820B (en) * 2022-11-15 2023-08-04 广东工业大学 Image semantic segmentation method and system based on dense feature reverse fusion
CN116468619B (en) * 2023-03-01 2024-02-06 山东省人工智能研究院 Medical image denoising method based on multi-feature feedback fusion
CN116503420B (en) * 2023-04-26 2024-05-14 佛山科学技术学院 Image segmentation method based on federal learning and related equipment
CN116543151B (en) * 2023-05-05 2024-05-28 山东省人工智能研究院 3D medical CT image segmentation method based on deep learning

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10565707B2 (en) * 2017-11-02 2020-02-18 Siemens Healthcare Gmbh 3D anisotropic hybrid network: transferring convolutional features from 2D images to 3D anisotropic volumes
CN108986124A (en) * 2018-06-20 2018-12-11 天津大学 In conjunction with Analysis On Multi-scale Features convolutional neural networks retinal vascular images dividing method
CN109584246B (en) * 2018-11-16 2022-12-16 成都信息工程大学 DCM (cardiac muscle diagnosis and treatment) radiological image segmentation method based on multi-scale feature pyramid

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107886510A (en) * 2017-11-27 2018-04-06 杭州电子科技大学 A kind of prostate MRI dividing methods based on three-dimensional full convolutional neural networks
CN109191476A (en) * 2018-09-10 2019-01-11 重庆邮电大学 The automatic segmentation of Biomedical Image based on U-net network structure
CN109410219A (en) * 2018-10-09 2019-03-01 山东大学 A kind of image partition method, device and computer readable storage medium based on pyramid fusion study
CN109829918A (en) * 2019-01-02 2019-05-31 安徽工程大学 A kind of liver image dividing method based on dense feature pyramid network
CN109902693A (en) * 2019-02-16 2019-06-18 太原理工大学 One kind being based on more attention spatial pyramid characteristic image recognition methods
CN110189334A (en) * 2019-05-28 2019-08-30 南京邮电大学 The medical image cutting method of the full convolutional neural networks of residual error type based on attention mechanism
CN110570431A (en) * 2019-09-18 2019-12-13 东北大学 Medical image segmentation method based on improved convolutional neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
U-Next: A Novel Convolution Neural Network With an Aggregation U-Net Architecture for Gallstone Segmentation in CT Images; Tao Song et al.; IEEE Access; 2019-11-18; Vol. 7; pp. 166823-166832 *
Medical Image Segmentation Based on the Snake Model; Ni Yaying; China Master's Theses Full-text Database (Information Science and Technology); 2009-06-15 (No. 06); pp. I138-974 *
Medical Image Detection Algorithm Based on Deep Learning; Chen Yun; China Master's Theses Full-text Database (Information Science and Technology); 2019-09-15 (No. 09); pp. I138-1149 *

Also Published As

Publication number Publication date
CN111145170A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
CN111145170B (en) Medical image segmentation method based on deep learning
CN111161273B (en) Medical ultrasonic image segmentation method based on deep learning
CN110738697B (en) Monocular depth estimation method based on deep learning
CN108492271B (en) Automatic image enhancement system and method fusing multi-scale information
CN111161271A (en) Ultrasonic image segmentation method
CN113674253B (en) Automatic segmentation method for rectal cancer CT image based on U-transducer
CN109886986A (en) A kind of skin lens image dividing method based on multiple-limb convolutional neural networks
CN107492071A (en) Medical image processing method and equipment
Chen et al. PCAT-UNet: UNet-like network fused convolution and transformer for retinal vessel segmentation
CN111179275B (en) Medical ultrasonic image segmentation method
CN115375711A (en) Image segmentation method of global context attention network based on multi-scale fusion
CN111951288A (en) Skin cancer lesion segmentation method based on deep learning
CN112884788B (en) Cup optic disk segmentation method and imaging method based on rich context network
CN114092439A (en) Multi-organ instance segmentation method and system
CN115526829A (en) Honeycomb lung focus segmentation method and network based on ViT and context feature fusion
CN114399510B (en) Skin focus segmentation and classification method and system combining image and clinical metadata
CN116823850A (en) Cardiac MRI segmentation method and system based on U-Net and transducer fusion improvement
CN113538363A (en) Lung medical image segmentation method and device based on improved U-Net
CN116229074A (en) Progressive boundary region optimized medical image small sample segmentation method
CN114119558B (en) Method for automatically generating nasopharyngeal carcinoma image diagnosis structured report
CN115719357A (en) Multi-structure segmentation method for brain medical image
CN115409812A (en) CT image automatic classification method based on fusion time attention mechanism
CN112967295A (en) Image processing method and system based on residual error network and attention mechanism
CN114463339A (en) Medical image segmentation method based on self-attention
CN112242193A (en) Automatic blood vessel puncture method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant