CN112446890A - Melanoma segmentation method based on void convolution and multi-scale fusion - Google Patents

Melanoma segmentation method based on void convolution and multi-scale fusion

Info

Publication number
CN112446890A
Authority
CN
China
Prior art keywords
layer
training
convolution
channel
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011094831.1A
Other languages
Chinese (zh)
Inventor
张聚 (Zhang Ju)
潘伟栋 (Pan Weidong)
俞伦端 (Yu Lunduan)
陈德臣 (Chen Dechen)
牛彦 (Niu Yan)
施超 (Shi Chao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202011094831.1A priority Critical patent/CN112446890A/en
Publication of CN112446890A publication Critical patent/CN112446890A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/41Analysis of texture based on statistical description of texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30088Skin; Dermal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Abstract

A melanoma segmentation method based on void (dilated) convolution and multi-scale fusion comprises the following steps: step 1) preprocessing the medical images; step 2) constructing a multi-scale aggregation network model with a flexible receptive field; step 3) inputting the training-set data into the model for training; and step 4) segmenting the lesion area of the dermatoscope image. The channel attention dilated convolution module can adaptively expand the receptive field according to image characteristics, obtaining more compact context information and alleviating the feature insufficiency caused by a fixed receptive field; the aggregation interaction module aggregates the features output by an encoding layer with those of the adjacent encoding layers to obtain multi-scale information, narrowing the semantic gap between the encoding layer and its corresponding decoding layer and suppressing the noise caused by direct aggregation. The invention can segment dermatoscope images accurately and thus serves as a diagnostic aid.

Description

Melanoma segmentation method based on void convolution and multi-scale fusion
Technical Field
The present invention relates to a method for segmenting melanoma, a form of skin cancer.
Background
Melanoma is one of the most dangerous skin diseases. Early studies found that the 5-year survival rate of melanoma, the deadliest of skin cancers, can be as high as 99% when it is diagnosed early; delayed diagnosis, however, lowers the survival rate to 23%. Dermoscopy, the examination of skin lesions with a dermatoscope, is commonly used to diagnose melanoma. However, even for professional dermatologists, manually examining dermatoscope images is an error-prone and time-consuming task.
Therefore, there is a need for a computer-aided support system that helps dermatologists segment melanoma accurately. The task remains challenging because melanomas vary in size, shape and texture, and some dermatoscope images contain interfering objects such as hair, scale markings and color-calibration patches. Convolutional neural networks are used extensively to solve the semantic segmentation task. Among them, U-Net is widely used in medical image segmentation: it couples an encoder (down-sampling path) with a decoder (up-sampling path) and integrates low-level texture features with the corresponding high-level semantic features through skip connections. However, because the shallow features are passed on unprocessed, merging them directly with deep features introduces information redundancy and degrades the segmentation accuracy.
Because melanoma regions have fuzzy boundaries and varied shapes, they are difficult for a generic segmentation network to segment accurately. Although widely separated pixels in a medical image may be strongly related, a generic segmentation network usually down-samples the image with fixed-size convolution kernels, so the network captures only local context information. The previously proposed atrous spatial pyramid pooling (ASPP) module can extract only part of the context information after down-sampling and cannot generate compact multi-scale features.
Disclosure of Invention
The present invention overcomes the above disadvantages of the prior art and provides a melanoma segmentation method based on void convolution (i.e., dilated or atrous convolution) and multi-scale fusion.
To address the problem that interference in dermatoscope images prevents a neural network from achieving good segmentation accuracy, the down-sampling structure and the plain skip connections of the traditional U-Net are modified accordingly: the receptive field is effectively enlarged, and the noise caused by directly fusing shallow and deep features is suppressed. On this basis, the melanoma segmentation method based on dilated convolution and multi-scale fusion is proposed.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention are further described below. A melanoma segmentation method based on void convolution and multi-scale fusion comprises the following steps:
step 1) preprocessing a medical image;
The acquired dermatoscope image data are divided into a training set, a verification set and a test set in a ratio of …:1:2, and data augmentation is applied to the training-set images used for network training;
step 2) constructing a multi-scale aggregation network model with a flexible receptive field;
2.1 constructing a channel attention dilated convolution module for feature extraction;
The encoding layers of U-Net are replaced with channel attention dilated convolution layers, which take the dermatoscope image from step 1) as input and output extracted feature maps that feed the subsequent network; each layer extracts features with three parallel dilated convolutions, each having a different dilation rate; global average pooling is then applied to each extracted feature map, cross-channel interaction information is captured by considering each channel together with its k neighbouring channels, a new weight is assigned to every channel of the feature map, and the three re-weighted feature maps are summed to give the output feature map of the layer; shallow layers usually learn simple texture information, while deeper layers capture complex abstract information;
2.2 constructing an aggregation interaction module (AIM);
The aggregation interaction module is introduced to bridge the semantic gap between the feature maps of an encoding layer and its corresponding decoding layer and to suppress the noise that skip connections may introduce; U-Net aggregates the two directly, and because their semantic information differs considerably, redundant noisy information is produced that degrades the final segmentation result; the AIM receives the feature maps from adjacent encoding layers, reduces the number of feature channels with a 3 × 3 convolutional layer to cut the amount of computation, and then obtains multi-scale information through further convolutional layers, aggregating it into a final feature map;
2.3 constructing a decoding layer;
The decoding layer up-samples the feature map obtained from the encoding path, adds to it, weighted by a coefficient, the feature map output by the aggregation interaction module, and obtains its output feature map after two 3 × 3 convolutional layers; the last decoding layer is processed by a 1 × 1 convolutional layer and a Sigmoid function to obtain the final segmentation result;
step 3) inputting training set data into a model for training;
The training set processed in step 1) is input into the network model constructed in step 2); random initialization and the Adam optimization method are adopted; the initial learning rate, momentum and number of iterations are set, and training follows the configured strategy: the input training set is first augmented and then used for training, a validation result is obtained on the trained network with the verification set, the weights are updated once according to the gradient, and these steps are repeated until the set number of iterations is reached;
step 4) segmenting the lesion area of the dermatoscope image;
The test-set data are input into the prediction model trained in step 3) to obtain segmentation results; the evaluation indices show that the method can assist in segmenting dermatoscope images;
The invention provides a melanoma segmentation method based on dilated convolution and multi-scale fusion. The receptive field is expanded flexibly by adding three parallel dilated convolutions and assigning a different weight to the feature map produced by each of them; by fusing the feature information obtained from adjacent down-sampling layers, the noise interference caused by information redundancy in the skip connections is suppressed. This improves the adaptability to melanoma regions of widely varying size and raises the accuracy of melanoma segmentation.
The invention has the following advantages:
The invention provides a multi-scale aggregation network with a flexible receptive field for segmenting melanoma. The channel attention dilated convolution module can adaptively expand the receptive field according to image characteristics, acquiring more compact context information and alleviating the feature insufficiency caused by a fixed receptive field; the aggregation interaction module aggregates the features output by an encoding layer with those of the adjacent encoding layers to obtain multi-scale information, narrowing the semantic gap between the encoding layer and its corresponding decoding layer and suppressing the information-redundancy noise introduced by the skip-connection structure.
Drawings
FIG. 1 is a schematic diagram of the overall network framework structure of a system implementing the method of the present invention.
FIG. 2 is a schematic diagram of the channel attention dilated convolution module in a network of a system implementing the method of the present invention.
Fig. 3 is a schematic diagram of a channel attention module in a network of a system implementing the method of the present invention.
Fig. 4 is a schematic diagram of an aggregation interaction module in a network implementing the method of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
The melanoma segmentation method based on dilated convolution and multi-scale fusion comprises the following steps:
step 1) preprocessing a medical image;
The acquired dermatoscope image data are divided into a training set, a verification set and a test set in a ratio of …:1:2, and the image size is set to 128 × 128 pixels; data augmentation is applied to the training-set images used for network training: random rotation within the range of −30° to 30°, random horizontal flipping, and random scaling to 0.8 to 1.2 times the original image;
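As a concrete sketch of this preprocessing step, the pipeline below uses torchvision; the specific transform classes are our assumptions matching the stated parameters (128 × 128 size, ±30° rotation, horizontal flip, 0.8-1.2× scaling), and for segmentation the same geometric transforms would also have to be applied to the ground-truth masks:

```python
# Minimal sketch of the step 1) augmentation, assuming torchvision.
# Only the image side is shown; masks need the same geometric transforms.
import torchvision.transforms as T

train_transform = T.Compose([
    T.Resize((128, 128)),                         # fixed 128 x 128 input size
    T.RandomRotation(degrees=30),                 # random rotation in [-30, 30] degrees
    T.RandomHorizontalFlip(p=0.5),                # random horizontal flip
    T.RandomAffine(degrees=0, scale=(0.8, 1.2)),  # random zoom to 0.8-1.2x the original
    T.ToTensor(),
])

eval_transform = T.Compose([T.Resize((128, 128)), T.ToTensor()])
```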
step 2) constructing a multi-scale aggregation network model with a flexible receptive field;
2.1 constructing a channel attention dilated convolution module for feature extraction;
The encoding layers of U-Net are replaced with channel attention dilated convolution layers, which take the dermatoscope image from step 1) as input and output extracted feature maps that feed the subsequent network; each layer extracts features with three parallel dilated convolutions, each having a different dilation rate; global average pooling is then applied to each extracted feature map, cross-channel interaction information is captured by considering each channel together with its k neighbouring channels, a new weight is assigned to every channel of the feature map, and the three re-weighted feature maps are summed to give the output feature map of the layer; shallow layers usually learn simple texture information, while deeper layers capture complex abstract information;
Five channel attention dilated convolution layers are used in the down-sampling path; each layer's three parallel dilated convolutions have 3 × 3 kernels, dilation rates set to 1, 2 and 3 respectively, stride 1, and padding equal to the respective dilation rate; pooling uses 2 × 2 max pooling. For a 128 × 128 × 3 input picture, the three dilated convolutions, each with 64 kernels, yield three 64-channel feature maps; for each map, the channel attention module applies global average pooling to obtain a vector of size 1 × 1 × C, captures cross-channel information with a one-dimensional convolution of kernel size 3, activates the result with a Sigmoid function, and multiplies the original feature map by the resulting coefficients so that every channel receives its own weight; the three weighted maps are summed into a 128 × 128 × 64 feature map, which after the pooling operation serves as the input of the next encoding layer. Repeating these operations five times, the encoding layers produce feature maps with 64, 128, 256, 512 and 1024 channels respectively; the module can be described as:
F_{out} = \sum_{k=1}^{3} C(D_k(f))    (1)
where D_k denotes the dilated convolution with dilation rate k, C denotes the channel attention module, and f denotes the input features;
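A minimal PyTorch sketch of one such layer follows; it assumes the channel attention takes the ECA-style form the text describes (a one-dimensional convolution over the pooled channel descriptor, kernel size k = 3), and all class names are ours rather than the patent's:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Cross-channel interaction over each channel and its k neighbours."""
    def __init__(self, k: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # global average pooling -> 1 x 1 x C
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x):
        w = self.pool(x).squeeze(-1).transpose(1, 2)   # (B, 1, C): channels as a sequence
        w = self.gate(self.conv(w))                    # 1-D conv + Sigmoid gives channel weights
        return x * w.transpose(1, 2).unsqueeze(-1)     # re-weight every channel

class CADilatedConvLayer(nn.Module):
    """Equation (1): sum over k of C(D_k(f)), with dilation rates 1, 2, 3."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=r, dilation=r)
            for r in (1, 2, 3)                         # padding = dilation keeps the size
        ])
        self.attn = nn.ModuleList([ChannelAttention() for _ in range(3)])

    def forward(self, f):
        return sum(a(b(f)) for a, b in zip(self.attn, self.branches))

# First encoding layer of the configuration above: 128 x 128 x 3 -> 64 channels,
# followed (outside this module) by 2 x 2 max pooling.
layer = CADilatedConvLayer(3, 64)
out = layer(torch.randn(1, 3, 128, 128))               # shape (1, 64, 128, 128)
```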
2.2 constructing an aggregation interaction module (AIM);
The aggregation interaction module is introduced to bridge the semantic gap between the feature maps of an encoding layer and its corresponding decoding layer and to suppress the noise that skip connections may introduce; U-Net aggregates the two directly, and because their semantic information differs considerably, redundant noisy information is produced that degrades the final segmentation result. The AIM receives the feature maps f_{i-1}, f_i and f_{i+1} from adjacent encoding layers and reduces the number of feature channels with a 3 × 3 convolutional layer to cut the amount of computation; each branch B is then rescaled by pooling or interpolation to the feature-map size of its neighbouring branch, and the branches are fused by coefficient-weighted addition; finally, all branches are integrated by one convolutional layer, and a residual module is added to the output so that training is easier to optimize. The whole module can be written as:
\tilde{f}_i = M\big(B_{i-1}(f_{i-1}),\ B_i(f_i),\ B_{i+1}(f_{i+1})\big)    (2)
f_i^{AIM} = I(f_i) + \tilde{f}_i    (3)
where I and M denote the residual mapping and the branch merging respectively, B_i denotes the operation of branch i, and f denotes the input features;
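The data flow of equations (2) and (3) can be sketched as follows in PyTorch; the choice of pooling the shallower branch down and interpolating the deeper branch up to the middle resolution, the unit fusion coefficients, and the 1 × 1 residual mapping are all assumptions:

```python
import torch.nn as nn
import torch.nn.functional as F

class AIM(nn.Module):
    """Sketch of the aggregation interaction module: adjacent encoder features
    are channel-reduced by 3x3 convolutions (B_{i-1}, B_i, B_{i+1}), rescaled
    to the middle branch's resolution, fused by addition (M), integrated by one
    convolution, and combined with a residual path (I), as in equations (2)-(3)."""
    def __init__(self, ch_prev: int, ch_mid: int, ch_next: int, ch_out: int):
        super().__init__()
        self.b_prev = nn.Conv2d(ch_prev, ch_out, 3, padding=1)  # channel reduction
        self.b_mid = nn.Conv2d(ch_mid, ch_out, 3, padding=1)
        self.b_next = nn.Conv2d(ch_next, ch_out, 3, padding=1)
        self.merge = nn.Conv2d(ch_out, ch_out, 3, padding=1)    # integrate merged branches
        self.res = nn.Conv2d(ch_mid, ch_out, 1)                 # I(f): residual mapping

    def forward(self, f_prev, f_mid, f_next):
        size = f_mid.shape[-2:]
        p = F.adaptive_avg_pool2d(self.b_prev(f_prev), size)    # pool shallower branch down
        m = self.b_mid(f_mid)
        n = F.interpolate(self.b_next(f_next), size=size,
                          mode='bilinear', align_corners=False) # interpolate deeper branch up
        return self.res(f_mid) + self.merge(p + m + n)          # equation (3)
```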
2.3 constructing a decoding layer;
The decoding layer up-samples the feature map obtained from the encoding path, adds to it, weighted by a coefficient, the feature map output by the aggregation interaction module, and obtains its output feature map after two 3 × 3 convolutional layers; the last decoding layer is processed by a 1 × 1 convolutional layer and a Sigmoid function to obtain the final segmentation result; the Sigmoid function is defined as follows:
S(x) = \frac{1}{1 + e^{-x}}    (4)
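A sketch of one decoding layer plus the final 1 × 1 convolution and Sigmoid head; the bilinear up-sampling, the ReLU activations and the value of the fusion coefficient are assumptions:

```python
import torch.nn as nn
import torch.nn.functional as F

class DecoderBlock(nn.Module):
    """Step 2.3 sketch: up-sample the deeper feature map, add the AIM output
    weighted by a coefficient, then apply two 3x3 convolutional layers."""
    def __init__(self, in_ch: int, out_ch: int, coeff: float = 1.0):
        super().__init__()
        self.reduce = nn.Conv2d(in_ch, out_ch, 1)    # match channels after up-sampling
        self.coeff = coeff                           # fusion coefficient (assumed value)
        self.convs = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, deep, aim_feat):
        up = F.interpolate(deep, scale_factor=2, mode='bilinear', align_corners=False)
        return self.convs(self.reduce(up) + self.coeff * aim_feat)

# Last decoding layer output -> 1x1 convolution + Sigmoid of equation (4).
head = nn.Sequential(nn.Conv2d(64, 1, kernel_size=1), nn.Sigmoid())
```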
step 3) inputting training set data into a model for training;
The training set processed in step 1) is input into the network model constructed in step 2); random initialization and the Adam optimization method are adopted; the initial learning rate, momentum and number of iterations are set, and training follows the configured strategy: the input training set is first augmented and then used for training, a validation result is obtained on the trained network with the verification set, the weights are updated once according to the gradient, and these steps are repeated until the set number of iterations is reached;
The batch size is 12, the number of epochs is 80, the initial learning rate is 0.0001, and the momentum is 0.9; the Tversky loss plus the consistency-enhanced loss is applied between the prediction obtained in training and the ground truth, and the loss function can be written as:
L_{tver}(p, g, \alpha, \beta) = 1 - \frac{\sum pg}{\sum pg + \alpha \sum p(1-g) + \beta \sum (1-p)g}    (5)
L_{cel}(p, g) = \frac{\sum p(1-g) + \sum (1-p)g}{\sum pg + \sum p(1-g) + \sum (1-p)g}    (6)
L_{total} = L_{tver}(p, g, \alpha, \beta) + L_{cel}(p, g)    (7)
where α is set to 0.3, β is set to 0.7, and p and g denote the prediction map and the calibrated ground-truth map respectively;
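Under these definitions the objective can be sketched as below; the exact form of the consistency-enhanced loss is an assumption (we use the (FP + FN) / (TP + FP + FN) form over soft predictions), and `model` is a placeholder for the network of step 2):

```python
import torch

def tversky_loss(p, g, alpha=0.3, beta=0.7, eps=1e-7):
    """Equation (5): 1 - TP / (TP + alpha*FP + beta*FN) on soft predictions."""
    tp = (p * g).sum()
    fp = (p * (1 - g)).sum()
    fn = ((1 - p) * g).sum()
    return 1 - tp / (tp + alpha * fp + beta * fn + eps)

def consistency_enhanced_loss(p, g, eps=1e-7):
    """Equation (6), assuming the (FP + FN) / (TP + FP + FN) form."""
    tp = (p * g).sum()
    fp = (p * (1 - g)).sum()
    fn = ((1 - p) * g).sum()
    return (fp + fn) / (tp + fp + fn + eps)

def total_loss(p, g):
    """Equation (7): Tversky loss plus consistency-enhanced loss."""
    return tversky_loss(p, g) + consistency_enhanced_loss(p, g)

# Optimizer configured as in step 3); mapping "momentum 0.9" to Adam's beta_1
# is an assumption:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
```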
step 4) segmenting the lesion area of the dermatoscope image;
The test-set data are input into the prediction model trained in step 3) to obtain segmentation results, which are evaluated by the following indices: Accuracy (AC), Dice coefficient (DI), Jaccard index (JA) and Sensitivity (SE), computed as:
AC = \frac{TP + TN}{TP + TN + FP + FN}, \quad DI = \frac{2TP}{2TP + FP + FN}    (8)
JA = \frac{TP}{TP + FP + FN}, \quad SE = \frac{TP}{TP + FN}    (9)
where TP denotes true positives, TN true negatives, FP false positives and FN false negatives; the evaluation indices show that the invention can assist in segmenting dermatoscope images.
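The four indices follow directly from the confusion counts, as in this NumPy sketch (the 0.5 binarization threshold is an assumption):

```python
import numpy as np

def evaluate(pred, gt, thr=0.5):
    """Equations (8)-(9): AC, DI, JA and SE from the binary confusion counts."""
    p = np.asarray(pred) >= thr              # binarize the predicted probability map
    g = np.asarray(gt).astype(bool)
    tp = np.sum(p & g)
    tn = np.sum(~p & ~g)
    fp = np.sum(p & ~g)
    fn = np.sum(~p & g)
    return {
        "AC": (tp + tn) / (tp + tn + fp + fn),   # accuracy
        "DI": 2 * tp / (2 * tp + fp + fn),       # Dice coefficient
        "JA": tp / (tp + fp + fn),               # Jaccard index
        "SE": tp / (tp + fn),                    # sensitivity
    }
```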
The channel attention dilated convolution module can adaptively expand the receptive field according to image characteristics, obtaining more compact context information and alleviating the feature insufficiency caused by a fixed receptive field; the aggregation interaction module aggregates the features output by an encoding layer with those of the adjacent encoding layers to obtain multi-scale information, narrowing the semantic gap between the encoding layer and its corresponding decoding layer and suppressing the noise caused by direct aggregation. The invention can segment dermatoscope images accurately and thus serves as a diagnostic aid.

Claims (1)

1. A melanoma segmentation method based on void convolution and multi-scale fusion, comprising the following steps:
step 1) preprocessing a medical image;
The acquired dermatoscope image data are divided into a training set, a verification set and a test set in a ratio of …:1:2, and the image size is set to 128 × 128 pixels; data augmentation is applied to the training-set images used for network training: random rotation within the range of −30° to 30°, random horizontal flipping, and random scaling to 0.8 to 1.2 times the original image;
step 2) constructing a multi-scale aggregation network model with a flexible receptive field;
2.1 constructing a channel attention dilated convolution module for feature extraction;
The encoding layers of U-Net are replaced with channel attention dilated convolution layers, which take the dermatoscope image from step 1) as input and output extracted feature maps that feed the subsequent network; each layer extracts features with three parallel dilated convolutions, each having a different dilation rate; global average pooling is then applied to each extracted feature map, cross-channel interaction information is captured by considering each channel together with its k neighbouring channels, a new weight is assigned to every channel of the feature map, and the three re-weighted feature maps are summed to give the output feature map of the layer; shallow layers usually learn simple texture information, while deeper layers capture complex abstract information;
Five channel attention dilated convolution layers are used in the down-sampling path; each layer's three parallel dilated convolutions have 3 × 3 kernels, dilation rates set to 1, 2 and 3 respectively, stride 1, and padding equal to the respective dilation rate; pooling uses 2 × 2 max pooling. For a 128 × 128 × 3 input picture, the three dilated convolutions, each with 64 kernels, yield three 64-channel feature maps; for each map, the channel attention module applies global average pooling to obtain a vector of size 1 × 1 × C, captures cross-channel information with a one-dimensional convolution of kernel size 3, activates the result with a Sigmoid function, and multiplies the original feature map by the resulting coefficients so that every channel receives its own weight; the three weighted maps are summed into a 128 × 128 × 64 feature map, which after the pooling operation serves as the input of the next encoding layer. Repeating these operations five times, the encoding layers produce feature maps with 64, 128, 256, 512 and 1024 channels respectively; the module can be described as:
F_{out} = \sum_{k=1}^{3} C(D_k(f))    (1)
where D_k denotes the dilated convolution with dilation rate k, C denotes the channel attention module, and f denotes the input features;
2.2 constructing an aggregation interaction module (AIM);
The aggregation interaction module is introduced to bridge the semantic gap between the feature maps of an encoding layer and its corresponding decoding layer and to suppress the noise that skip connections may introduce; U-Net aggregates the two directly, and because their semantic information differs considerably, redundant noisy information is produced that degrades the final segmentation result. The AIM receives the feature maps f_{i-1}, f_i and f_{i+1} from adjacent encoding layers and reduces the number of feature channels with a 3 × 3 convolutional layer to cut the amount of computation; each branch B is then rescaled by pooling or interpolation to the feature-map size of its neighbouring branch, and the branches are fused by coefficient-weighted addition; finally, all branches are integrated by one convolutional layer, and a residual module is added to the output so that training is easier to optimize. The whole module can be written as:
\tilde{f}_i = M\big(B_{i-1}(f_{i-1}),\ B_i(f_i),\ B_{i+1}(f_{i+1})\big)    (2)
f_i^{AIM} = I(f_i) + \tilde{f}_i    (3)
where I and M denote the residual mapping and the branch merging respectively, B_i denotes the operation of branch i, and f denotes the input features;
2.3 constructing a decoding layer;
The decoding layer up-samples the feature map obtained from the encoding path, adds to it, weighted by a coefficient, the feature map output by the aggregation interaction module, and obtains its output feature map after two 3 × 3 convolutional layers; the last decoding layer is processed by a 1 × 1 convolutional layer and a Sigmoid function to obtain the final segmentation result; the Sigmoid function is defined as follows:
S(x) = \frac{1}{1 + e^{-x}}    (4)
step 3) inputting training set data into a model for training;
The training set processed in step 1) is input into the network model constructed in step 2); random initialization and the Adam optimization method are adopted; the initial learning rate, momentum and number of iterations are set, and training follows the configured strategy: the input training set is first augmented and then used for training, a validation result is obtained on the trained network with the verification set, the weights are updated once according to the gradient, and these steps are repeated until the set number of iterations is reached;
The batch size is 12, the number of epochs is 80, the initial learning rate is 0.0001, and the momentum is 0.9; the Tversky loss plus the consistency-enhanced loss is applied between the prediction obtained in training and the ground truth, and the loss function can be written as:
L_{tver}(p, g, \alpha, \beta) = 1 - \frac{\sum pg}{\sum pg + \alpha \sum p(1-g) + \beta \sum (1-p)g}    (5)
L_{cel}(p, g) = \frac{\sum p(1-g) + \sum (1-p)g}{\sum pg + \sum p(1-g) + \sum (1-p)g}    (6)
L_{total} = L_{tver}(p, g, \alpha, \beta) + L_{cel}(p, g)    (7)
where α is set to 0.3, β is set to 0.7, and p and g denote the prediction map and the calibrated ground-truth map respectively;
step 4) segmenting the lesion area of the dermatoscope image;
The test-set data are input into the prediction model trained in step 3) to obtain segmentation results, which are evaluated by the following indices: Accuracy (AC), Dice coefficient (DI), Jaccard index (JA) and Sensitivity (SE), computed as:
AC = \frac{TP + TN}{TP + TN + FP + FN}, \quad DI = \frac{2TP}{2TP + FP + FN}    (8)
JA = \frac{TP}{TP + FP + FN}, \quad SE = \frac{TP}{TP + FN}    (9)
where TP denotes true positives, TN true negatives, FP false positives and FN false negatives. The evaluation indices show that the invention can assist in segmenting dermatoscope images.
CN202011094831.1A 2020-10-14 2020-10-14 Melanoma segmentation method based on void convolution and multi-scale fusion Pending CN112446890A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011094831.1A CN112446890A (en) 2020-10-14 2020-10-14 Melanoma segmentation method based on void convolution and multi-scale fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011094831.1A CN112446890A (en) 2020-10-14 2020-10-14 Melanoma segmentation method based on void convolution and multi-scale fusion

Publications (1)

Publication Number Publication Date
CN112446890A true CN112446890A (en) 2021-03-05

Family

ID=74736181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011094831.1A Pending CN112446890A (en) 2020-10-14 2020-10-14 Melanoma segmentation method based on void convolution and multi-scale fusion

Country Status (1)

Country Link
CN (1) CN112446890A (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967294A (en) * 2021-03-11 2021-06-15 西安智诊智能科技有限公司 Liver CT image segmentation method and system
CN113077471A (en) * 2021-03-26 2021-07-06 南京邮电大学 Medical image segmentation method based on U-shaped network
CN113077471B (en) * 2021-03-26 2022-10-14 南京邮电大学 Medical image segmentation method based on U-shaped network
CN113033570A (en) * 2021-03-29 2021-06-25 同济大学 Image semantic segmentation method for improving fusion of void volume and multilevel characteristic information
CN113033570B (en) * 2021-03-29 2022-11-11 同济大学 Image semantic segmentation method for improving void convolution and multilevel characteristic information fusion
CN113129295A (en) * 2021-04-28 2021-07-16 桂林电子科技大学 Full-scale connected deep learning phase unwrapping method
CN113298825A (en) * 2021-06-09 2021-08-24 东北大学 Image segmentation method based on MSF-Net network
CN113298825B (en) * 2021-06-09 2023-11-14 东北大学 Image segmentation method based on MSF-Net network
CN113592878A (en) * 2021-06-29 2021-11-02 中国人民解放军陆军工程大学 Compact multi-scale video foreground segmentation method
CN113256641A (en) * 2021-07-08 2021-08-13 湖南大学 Skin lesion image segmentation method based on deep learning
CN113256641B (en) * 2021-07-08 2021-10-01 湖南大学 Skin lesion image segmentation method based on deep learning
CN113436114A (en) * 2021-07-26 2021-09-24 北京富通东方科技有限公司 Data enhancement method for medical image
CN113554668A (en) * 2021-07-27 2021-10-26 深圳大学 Skin mirror image melanoma segmentation method, device and related components
CN113554668B (en) * 2021-07-27 2022-02-22 深圳大学 Skin mirror image melanoma segmentation method, device and related components
CN113781410A (en) * 2021-08-25 2021-12-10 南京邮电大学 Medical image segmentation method and system based on MEDU-Net + network
CN113781410B (en) * 2021-08-25 2023-10-13 南京邮电大学 Medical image segmentation method and system based on MEDU-Net+network
CN113936006A (en) * 2021-10-29 2022-01-14 天津大学 Segmentation method and device for processing high-noise low-quality medical image
CN113940635A (en) * 2021-11-25 2022-01-18 南京邮电大学 Skin lesion segmentation and feature extraction method based on depth residual pyramid
CN113940635B (en) * 2021-11-25 2023-09-26 南京邮电大学 Skin lesion segmentation and feature extraction method based on depth residual pyramid
CN114359202A (en) * 2021-12-29 2022-04-15 电子科技大学 Fetus corpus callosum segmentation system and method based on interactive semi-supervision
CN115019068A (en) * 2022-05-26 2022-09-06 杭州电子科技大学 Progressive salient object identification method based on coding and decoding framework
CN115019068B (en) * 2022-05-26 2024-02-23 杭州电子科技大学 Progressive salient target identification method based on coding and decoding architecture
CN117078692A (en) * 2023-10-13 2023-11-17 山东未来网络研究院(紫金山实验室工业互联网创新应用基地) Medical ultrasonic image segmentation method and system based on self-adaptive feature fusion
CN117078692B (en) * 2023-10-13 2024-02-06 山东未来网络研究院(紫金山实验室工业互联网创新应用基地) Medical ultrasonic image segmentation method and system based on self-adaptive feature fusion
CN117115668A (en) * 2023-10-23 2023-11-24 安徽农业大学 Crop canopy phenotype information extraction method, electronic equipment and storage medium
CN117115668B (en) * 2023-10-23 2024-01-26 安徽农业大学 Crop canopy phenotype information extraction method, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112446890A (en) Melanoma segmentation method based on void convolution and multi-scale fusion
CN113077471B (en) Medical image segmentation method based on U-shaped network
CN111627019B (en) Liver tumor segmentation method and system based on convolutional neural network
CN110033410B (en) Image reconstruction model training method, image super-resolution reconstruction method and device
CN110889852B (en) Liver segmentation method based on residual error-attention deep neural network
CN109978037B (en) Image processing method, model training method, device and storage medium
CN112819910B (en) Hyperspectral image reconstruction method based on double-ghost attention machine mechanism network
CN110889853A (en) Tumor segmentation method based on residual error-attention deep neural network
CN110136122B (en) Brain MR image segmentation method based on attention depth feature reconstruction
CN112862689A (en) Image super-resolution reconstruction method and system
CN112419153A (en) Image super-resolution reconstruction method and device, computer equipment and storage medium
WO2018112137A1 (en) System and method for image segmentation using a joint deep learning model
CN115457021A (en) Skin disease image segmentation method and system based on joint attention convolution neural network
CN114037714A (en) 3D MR and TRUS image segmentation method for prostate system puncture
CN113807361A (en) Neural network, target detection method, neural network training method and related products
CN110570394A (en) medical image segmentation method, device, equipment and storage medium
CN116310219A (en) Three-dimensional foot shape generation method based on conditional diffusion model
CN117392153B (en) Pancreas segmentation method based on local compensation and multi-scale adaptive deformation
CN116825363B (en) Early lung adenocarcinoma pathological type prediction system based on fusion deep learning network
CN113096032A (en) Non-uniform blur removing method based on image area division
CN112529886A (en) Attention DenseUNet-based MRI glioma segmentation method
CN114078149A (en) Image estimation method, electronic equipment and storage medium
CN116708807A (en) Compression reconstruction method and compression reconstruction device for monitoring video
CN116309429A (en) Chip defect detection method based on deep learning
CN113298827B (en) Image segmentation method based on DP-Net network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination