CN111325750B - Medical image segmentation method based on multi-scale fusion U-shaped chain neural network - Google Patents

Medical image segmentation method based on multi-scale fusion U-shaped chain neural network Download PDF

Info

Publication number
CN111325750B
Authority
CN
China
Prior art keywords
scale fusion
neural network
shaped chain
training
shaped
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010117698.0A
Other languages
Chinese (zh)
Other versions
CN111325750A (en)
Inventor
王志
王春
惠维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN202010117698.0A priority Critical patent/CN111325750B/en
Publication of CN111325750A publication Critical patent/CN111325750A/en
Application granted granted Critical
Publication of CN111325750B publication Critical patent/CN111325750B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Abstract

The invention discloses a medical image segmentation method based on a multi-scale fusion U-shaped chain neural network, which integrates a multi-scale feature fusion module into a U-shaped network. Features learned at different scales are fused to generate new features, a residual connection through a 1 × 1 convolution is added, and the two are summed, improving the segmentation performance of the network and yielding more accurate target localization. Meanwhile, the skip-layer connections of the U-shaped chain architecture combine features from different abstraction levels, greatly accelerating the convergence of network training so that a better model is obtained in a shorter time. The invention meets the demands of medical image segmentation tasks for high processing speed and high segmentation accuracy, and achieves excellent performance in medical image segmentation.

Description

Medical image segmentation method based on multi-scale fusion U-shaped chain neural network
Technical Field
The invention relates to the technical field of image processing, in particular to image segmentation, and specifically to a medical image segmentation method based on a multi-scale fusion U-shaped chain neural network.
Background
Medical image segmentation is a primary step in medical image processing and analysis. Its aim is to segment regions of interest (such as tumor regions and organs) in a medical image, extract relevant features, classify them at the pixel level, and group pixels of the same class. This can greatly improve clinical workflows and is critical for disease diagnosis, monitoring of disease progression, and treatment planning. However, accurate medical image segmentation remains a significant challenge: the shapes and sizes of organs and tumor regions vary widely, and images acquired from different organs and different biomedical imaging devices differ greatly in resolution and noise.
Owing to the rapid development of deep learning, deep convolutional neural networks (DCNNs) are widely used for computer vision tasks such as face recognition, image recognition, object localization, and object tracking, where they achieve state-of-the-art performance. Although DCNNs have made huge breakthroughs in image segmentation, they require a large amount of labeled data for training. Medical images, however, are expensive and complicated to obtain, and manual segmentation labels consume a great deal of effort and time and are prone to errors and inter-observer variability. This limitation is particularly acute in the medical field, since segmenting images from a given medical domain requires an expert in that domain. With the advent of fully convolutional networks, this problem has been alleviated.
The most popular medical image segmentation models (such as FCN and U-Net) adopt an encoder-decoder structure: the fully connected layers that follow the convolutional layers in classic CNNs are replaced with convolutional layers; the feature map from the last convolutional layer is upsampled or transposed-convolved; skip connections are added between layers at the same depth; and the local information learned by shallow layers is combined with the more complex information learned by deep layers to obtain new, richer features. However, as models grow deeper, low-level image information is gradually lost during convolution, so the boundary of the region of interest cannot be delineated well and accurate segmentation is not achieved.
Disclosure of Invention
To address the inaccurate segmentation results of traditional medical image segmentation methods, the invention provides a medical image segmentation method based on a multi-scale fusion U-shaped chain neural network.
The invention is realized by the following technical scheme:
a medical image segmentation method based on a multi-scale fusion U-shaped chain neural network comprises the following steps:
s1, preprocessing image data to be segmented to obtain a training data set;
s2, constructing a multi-scale fusion U-shaped chain neural network, wherein the multi-scale fusion U-shaped chain neural network comprises two U-shaped modules, each U-shaped module is composed of a plurality of multi-scale fusion modules, and the two U-shaped modules are in one-to-one correspondence and carry out add operation on the characteristics of the two U-shaped modules;
s3, training the multi-scale fusion U-shaped chain neural network by adopting a training data set, randomly dividing the training data set into two parts, wherein one part is a training set, and the other part is a verification set:
s4, inputting the training set into a multi-scale fusion U-shaped chain neural network, and calculating a loss value between a prediction segmentation result and a real label in the training process by using a cross entropy loss function;
s5, inputting the verification set into a multi-scale fusion U-shaped chain neural network, and calculating a loss value between a prediction segmentation result and a real label in the verification process by using cross entropy loss;
s6, judging whether the loss value in the verification process is smaller than the minimum loss value in the training process, if so, saving the updated network parameters of the currently trained model, and then executing the step S7;
when the loss value in the verification process is greater than the minimum loss value in the training process, performing step S7;
and S7, judging whether the current iteration number reaches a preset epoch value, if not, returning to the step S3 for the next iteration, and if so, finishing the training of the multi-scale fusion U-shaped chain neural network.
Preferably, the image data preprocessing process in step S1 is specifically as follows:
and carrying out format conversion on the image data, then carrying out scaling on the obtained image data and setting pixels, and sequentially carrying out normalization and binarization processing on the scaled image data to obtain a training data set.
Preferably, step S1 further includes enhancing the training data set obtained after the binarization processing, increasing its data volume by performing flip transformation, translation transformation and noise perturbation on the training data set.
Preferably, in step S2, each U-shaped module comprises an encoder, a decoder and a bottom module; the encoder and the decoder each comprise a plurality of multi-scale fusion modules, and the bottom module comprises one multi-scale fusion module; the multi-scale fusion modules of the encoder and the decoder correspond one-to-one and their features are combined by an element-wise add operation, and the multi-scale fusion module of the bottom module is connected by convolution operations to the last multi-scale fusion module of the encoder and of the decoder respectively.
Preferably, the encoder and the decoder have the same structure and each comprise 4 multi-scale fusion modules; between every two adjacent multi-scale fusion modules, the encoder performs a 3 × 3 convolution with a stride of 2 and the decoder performs a 3 × 3 transposed convolution with a stride of 2;
the multi-scale fusion module of the bottom module is connected to the last multi-scale fusion module of the encoder by a 3 × 3 convolution with a stride of 2, and to the last multi-scale fusion module of the decoder by a 3 × 3 transposed convolution with a stride of 2.
Preferably, the multi-scale fusion module comprises three 3 × 3 convolution operations; residual connections are added after the first two convolutions, and a concatenation operation is performed on them together with the features obtained after the third convolution;
the numbers of filters of the three consecutive convolution layers are set to (0.2, 0.3, 0.5) × Input_channel respectively, where Input_channel is the number of input channels; a 1 × 1 convolution operation is added after the original input and summed with the concatenated complex features, after which a ReLU activation function and BN processing produce the input of the next convolution or transposed convolution.
Preferably, the loss value in step S4 or step S5 is calculated as follows:
H(p,q) = -\sum_{x} p(x)\log q(x)
wherein H (p, q) is a loss value between the predicted segmentation result and the true label, p (x) represents a true distribution of the sample, and q (x) represents a distribution predicted by the model.
Preferably, step S7 is followed by the following step:
S8, inputting the image into the trained multi-scale fusion U-shaped chain neural network and outputting the image segmentation result and evaluation indexes.
Compared with the prior art, the invention has the following beneficial technical effects:
the invention provides a medical image segmentation method based on multi-scale fusion U-shaped chain convolution, which well fuses a multi-scale feature fusion module and a U-shaped chain network together, generates new features by fusing learned features of different scales, adds residual connection through 1 multiplied by 1 convolution, adds the features and the new features, improves the segmentation effect of the network and realizes more accurate target positioning; meanwhile, the jump layer connection of the U-shaped chain neural network architecture is utilized to realize feature combination, and features of different abstract levels are stacked, so that the convergence rate of network training is greatly improved, and a better training model is obtained in a shorter time. The invention can meet the requirements of higher processing speed and segmentation accuracy in the medical image segmentation task, and achieves very excellent performance in the medical image segmentation application.
Drawings
Fig. 1 is a flowchart of a medical image segmentation method based on a multi-scale fusion U-chain neural network according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a step S1 according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a multi-scale fusion module according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a multi-scale fusion U-chain neural network model according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating the step S3 according to an embodiment of the present invention;
FIG. 6 is a comparison of the segmentation results of the neural network model provided by the present invention and existing network models.
Detailed Description
The present invention will now be described in further detail with reference to the attached drawings, which are illustrative, but not limiting, of the present invention.
Referring to fig. 1, the medical image segmentation method based on the multi-scale fusion U-chain neural network includes the following steps:
step 1, preprocessing colposcopy image data to be segmented to obtain a training data set and a test data set.
Referring to fig. 2, the specific method of this step is as follows:
and S11, converting the format of the colposcopy image data to be segmented.
S12, the format-converted image is scaled to set its pixel value to 256 × 256.
And S13, normalizing the zoomed image to a [0,1] interval.
And S14, performing binarization processing on the normalized image, setting a threshold value to be 0.5, processing the pixel value larger than the threshold value to be 1, and processing the pixel value smaller than the threshold value to be 0.
And S15, dividing the binarized image into a training data set and a test data set in proportion.
S16, performing data enhancement on the training data set.
Deep learning typically requires a large amount of data for training, and in medicine data acquisition is very expensive and difficult. Labeled data is even harder to obtain, since annotation must be done by experts in the field. To avoid overfitting during training and to improve segmentation accuracy, the training data set must be augmented. In the embodiment of the invention, data augmentation is performed with the following methods (a sketch of the whole preprocessing and augmentation pipeline is given after the list):
flip transform (flip) flipping an image in either a horizontal or vertical direction.
Translation transform (shift) that translates an image in a certain manner on an image plane.
Noise perturbation (noise) each pixel RGB of the image is perturbed randomly.
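A minimal sketch of this preprocessing and augmentation pipeline, assuming Pillow and NumPy; the ±10-pixel shift range, the noise scale, and the choice to binarize only the label masks are illustrative assumptions rather than values prescribed by the patent:

```python
import numpy as np
from PIL import Image

def preprocess_pair(image_path, mask_path, size=256, threshold=0.5):
    """S11-S14: format conversion, scaling to 256 x 256, normalization to
    [0, 1], and binarization (applied here to the label mask)."""
    image = Image.open(image_path).convert("RGB").resize((size, size))
    mask = Image.open(mask_path).convert("L").resize((size, size))
    image = np.asarray(image, dtype=np.float32) / 255.0
    mask = np.asarray(mask, dtype=np.float32) / 255.0
    mask = (mask > threshold).astype(np.float32)   # threshold 0.5: above -> 1
    return image, mask

def augment(image, mask, rng=None):
    """S16: flip, translation, and noise perturbation."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < 0.5:                          # flip transform
        image, mask = np.fliplr(image), np.fliplr(mask)
    dy, dx = rng.integers(-10, 11, size=2)          # translation transform
    image = np.roll(image, (dy, dx), axis=(0, 1))
    mask = np.roll(mask, (dy, dx), axis=(0, 1))
    noise = rng.normal(0.0, 0.01, image.shape)      # perturb each pixel's RGB
    return np.clip(image + noise, 0.0, 1.0), mask
```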
Step 2, constructing the multi-scale fusion U-shaped chain neural network.
Step 2 comprises the following specific steps:
S21, constructing the multi-scale fusion module.
As shown in FIG. 3, the multi-scale fusion module comprises three 3 × 3 convolution operations; residual connections are added after the first two convolutions, and their outputs are concatenated with the features obtained after the third convolution, so that complex features from different scales are obtained. To keep the numbers of input and output channels the same, the numbers of filters of the three consecutive convolution layers are set to (0.2, 0.3, 0.5) × Input_channel respectively, where Input_channel is the number of input channels. A 1 × 1 convolution operation is added after the original input and summed with the complex features obtained before; the result then passes through a ReLU activation function and BN processing to become the input of the next convolution or transposed convolution.
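The following PyTorch sketch shows one plausible reading of this module; the class name and the exact wiring are our assumptions, and FIG. 3 may differ in detail:

```python
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    """Three chained 3x3 convs whose outputs are concatenated, plus a 1x1
    residual branch; channel splits (0.2, 0.3, 0.5) x Input_channel keep the
    output width equal to the input width."""
    def __init__(self, channels):
        super().__init__()
        c1, c2 = int(0.2 * channels), int(0.3 * channels)
        c3 = channels - c1 - c2                       # ~0.5 x channels, rounding-safe
        self.conv1 = nn.Conv2d(channels, c1, 3, padding=1)
        self.conv2 = nn.Conv2d(c1, c2, 3, padding=1)
        self.conv3 = nn.Conv2d(c2, c3, 3, padding=1)
        self.skip = nn.Conv2d(channels, channels, 1)  # 1x1 conv on the original input
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        f1 = self.conv1(x)                            # first-scale features
        f2 = self.conv2(f1)                           # second scale, fed by the first
        f3 = self.conv3(f2)                           # third scale
        fused = torch.cat([f1, f2, f3], dim=1)        # concatenation across scales
        out = fused + self.skip(x)                    # add the 1x1-projected input
        return self.bn(torch.relu(out))               # ReLU then BN, as in the text
```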
And S22, constructing a multi-scale fusion U-shaped chain neural network according to the multi-scale fusion module.
As shown in fig. 4, the multi-scale fusion U-shaped chain neural network includes two U-shaped modules, each comprising an encoder, a decoder and a bottom module:
the encoder comprises 4 multi-scale fusion modules, with a 3 × 3 convolution with a stride of 2 between every two adjacent modules;
the decoder comprises 4 multi-scale fusion modules, with a 3 × 3 transposed convolution with a stride of 2 between every two adjacent modules;
the bottom module comprises 1 multi-scale fusion module, connected to the last multi-scale fusion module of the encoder by a 3 × 3 convolution with a stride of 2 and to the last multi-scale fusion module of the decoder by a 3 × 3 transposed convolution with a stride of 2;
the multi-scale fusion modules in the encoder correspond one-to-one with those in the decoder, and their features are combined by an element-wise add operation;
likewise, the two U-shaped modules correspond one-to-one and their features are combined by an element-wise add operation (a sketch of one U-shaped module follows).
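Assembling the blocks above, one U-shaped module could be sketched as follows; the constant channel width of 64 is an assumption, and the second U-shaped module of the chain (whose blocks are added element-wise to the corresponding blocks of the first) is omitted for brevity:

```python
class UModule(nn.Module):
    """One U-shaped module: 4 encoder blocks, a bottom block, 4 decoder blocks,
    with stride-2 3x3 (transposed) convolutions for down/upsampling and
    add-based skip connections between corresponding encoder/decoder blocks."""
    def __init__(self, channels=64):
        super().__init__()
        self.enc = nn.ModuleList([MultiScaleFusion(channels) for _ in range(4)])
        self.down = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, stride=2, padding=1) for _ in range(4)])
        self.bottom = MultiScaleFusion(channels)
        self.up = nn.ModuleList([
            nn.ConvTranspose2d(channels, channels, 3, stride=2,
                               padding=1, output_padding=1) for _ in range(4)])
        self.dec = nn.ModuleList([MultiScaleFusion(channels) for _ in range(4)])

    def forward(self, x):
        skips = []
        for block, down in zip(self.enc, self.down):
            x = block(x)
            skips.append(x)          # kept for the add-based skip connection
            x = down(x)              # 3x3 conv, stride 2: halve the resolution
        x = self.bottom(x)
        for block, up, skip in zip(self.dec, self.up, reversed(skips)):
            x = up(x)                # 3x3 transposed conv, stride 2: upsample
            x = block(x + skip)      # fuse encoder and decoder features by add
        return x
```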
Step 3, inputting the training data set into the multi-scale fusion U-shaped chain neural network for training to obtain the learned convolutional neural network model.
As shown in FIG. 5, step 3 comprises the following substeps S31-S40 (a condensed sketch of the training loop is given after the substeps):
S31, randomly selecting 10% of the data in the training data set as a validation data set and using the remaining 90% as the training data set;
S32, initializing the convolution kernel weights and the loss function to 0;
S33, inputting the training data set from S31 into the multi-scale fusion U-shaped chain neural network;
S34, computing the training data set from S31 against each node parameter of the multi-scale fusion U-shaped chain neural network to realize the forward propagation of network training;
S35, computing, in the neural network, the difference between the feature map output by forward propagation and the true label, and computing the loss for backpropagation; in the embodiment of the invention, a cross-entropy loss function is used to compute the loss value between the predicted segmentation result and the true label during training, with the following formula:
H(p,q) = -\sum_{x} p(x)\log q(x)
wherein H (p, q) represents the loss value between the predicted segmentation result and the true label, p (x) represents the true distribution of the sample, and q (x) represents the distribution predicted by the model;
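For a binary segmentation mask this is the standard binary cross-entropy; a minimal sketch on PyTorch tensors (equivalent to nn.BCELoss, and the function name is ours):

```python
import torch

def cross_entropy(p, q, eps=1e-7):
    """H(p, q) = -sum_x p(x) log q(x), averaged over pixels; p is the binary
    ground truth, q the predicted foreground probability in (0, 1)."""
    q = q.clamp(eps, 1.0 - eps)   # guard against log(0)
    return -(p * q.log() + (1.0 - p) * (1.0 - q).log()).mean()
```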
s36, inputting the verification data set in the S32 into a multi-scale fusion U-shaped chain neural network;
s37, calculating a loss value between the prediction segmentation result and the real label in the verification process by using cross entropy loss;
s38, judging whether the loss value in the verification process is smaller than the minimum loss value in the previous verification process, if so, saving the updated network parameters of the currently trained model, and entering the step S39, otherwise, directly entering the step S39;
s39, judging whether the current iteration number reaches a preset epoch value, if not, returning to the step S31 for the next iteration, otherwise, entering the step S40;
s40, the learned convolutional neural network model is output, and the process advances to step S4.
Step 4, inputting the test data set into the learned convolutional neural network model and outputting the image segmentation results and evaluation indexes.
In the embodiment of the invention, five indexes, namely Accuracy (AC), Sensitivity (SE), Specificity (SP), Jaccard Similarity (JS) and Area Under Curve (AUC), are used to compare the proposed method with the existing U-Net, LadderNet, and the residual model without the 1 × 1 convolution proposed herein (ResModel w/o Conv). The final segmentation results were evaluated after training for 100 epochs on the same cervical cancer dataset, as shown in Table 1.
TABLE 1
[Table 1 is rendered as an image in the original publication; it lists AC, SE, SP, JS and AUC for U-Net, LadderNet, ResModel w/o Conv and the proposed method.]
As can be seen from Table 1, the method proposed in the present invention achieves the highest values on all five indexes.
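For reference, the five indexes can be computed from a probabilistic prediction and a ground-truth mask as in the sketch below (a standard formulation; scikit-learn is assumed for the AUC, and the function and variable names are ours):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(prob, truth, threshold=0.5):
    """AC, SE, SP, JS from the confusion matrix; AUC from raw probabilities.
    Assumes both classes occur in `truth`."""
    pred = (prob > threshold).astype(np.uint8)
    t = truth.astype(np.uint8)
    tp = int(((pred == 1) & (t == 1)).sum())
    tn = int(((pred == 0) & (t == 0)).sum())
    fp = int(((pred == 1) & (t == 0)).sum())
    fn = int(((pred == 0) & (t == 1)).sum())
    return {
        "AC": (tp + tn) / (tp + tn + fp + fn),   # accuracy
        "SE": tp / (tp + fn),                    # sensitivity (recall)
        "SP": tn / (tn + fp),                    # specificity
        "JS": tp / (tp + fp + fn),               # Jaccard similarity
        "AUC": roc_auc_score(t.ravel(), prob.ravel()),
    }
```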
Inputting the test data set into the learned neural network model yields the segmentation results of U-Net, LadderNet, ResModel w/o Conv and the invention shown in FIG. 6. The invention achieves more accurate segmentation in boundary-sensitive regions.
The invention discloses a medical image segmentation method based on a multi-scale fusion U-shaped chain neural network, which integrates a multi-scale feature fusion module into a U-shaped network. Features learned at different scales are fused to generate new features, a residual connection through a 1 × 1 convolution is added, and the two are summed, improving the segmentation performance of the network and yielding more accurate target localization. Meanwhile, the skip-layer connections of the U-shaped chain architecture combine features from different abstraction levels, greatly accelerating the convergence of network training so that a better model is obtained in a shorter time. The invention meets the demands of medical image segmentation tasks for high processing speed and high segmentation accuracy, and achieves excellent performance in medical image segmentation.
The above content merely illustrates the technical idea of the present invention and does not limit its protection scope; any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (5)

1. A medical image segmentation method based on a multi-scale fusion U-shaped chain neural network, characterized by comprising the following steps:
s1, preprocessing image data to be segmented to obtain a training data set;
s2, constructing a multi-scale fusion U-shaped chain neural network, wherein the multi-scale fusion U-shaped chain neural network comprises two U-shaped modules, each U-shaped module is composed of a plurality of multi-scale fusion modules, and the two U-shaped modules are in one-to-one correspondence and perform add operation on the characteristics of each U-shaped module;
each U-shaped module comprises an encoder, a decoder and a bottom module, the encoder and the decoder respectively comprise a plurality of multi-scale fusion modules, the bottom module comprises a multi-scale fusion module, the multi-scale fusion modules of the encoder and the decoder correspond to each other one by one and perform add operation on the characteristics of the multi-scale fusion modules, and the multi-scale fusion module of the bottom module performs convolution operation with the last multi-scale fusion module of the encoder and the decoder respectively;
the multi-scale fusion module comprises 3 multiplied by 3 convolution operations, residual errors are added after the first two convolution operations and connected with the previous two convolution operations, and the subsequent convolution operations are carried out on the residual errors and the features obtained after the third convolution operation;
the number of filters of three consecutive convolution layers is set to (0.2, 0.3, 0.5) × Input, respectively channel Wherein Input channel For the number of input channels, 1 × 1 convolution operation is added after the original input, the added complex features are added, and then the added complex features are processed by a relu activation function and BN to become the input of the next convolution or transposition convolution;
s3, training the multi-scale fusion U-shaped chain neural network by adopting a training data set, randomly dividing the training data set into two parts, wherein one part is a training set, and the other part is a verification set:
s4, inputting the training set into a multi-scale fusion U-shaped chain neural network, and calculating a loss value between a prediction segmentation result and a real label in the training process by using a cross entropy loss function;
s5, inputting the verification set into a multi-scale fusion U-shaped chain neural network, and calculating a loss value between a prediction segmentation result and a real label in the verification process by using cross entropy loss;
the loss value in step S4 or step S5 is calculated as follows:
H(p,q) = -\sum_{x} p(x)\log q(x)
wherein H (p, q) is a loss value between the predicted segmentation result and the true label, p (x) represents a true distribution of the sample, and q (x) represents a distribution predicted by the model;
s6, judging whether the loss value in the verification process is smaller than the minimum loss value in the training process, if so, saving the updated network parameters of the currently trained model, and then executing the step S7;
when the loss value in the verification process is greater than the minimum loss value in the training process, performing step S7;
and S7, judging whether the current iteration number reaches a preset epoch value, if not, returning to the step S3 for the next iteration, and if so, finishing the training of the multi-scale fusion U-shaped chain neural network.
2. The medical image segmentation method based on the multi-scale fusion U-shaped chain neural network as claimed in claim 1, wherein the image data preprocessing process in step S1 is specifically as follows:
and carrying out format conversion on the image data, then carrying out scaling on the obtained image data and setting pixels, and sequentially carrying out normalization and binarization processing on the scaled image data to obtain a training data set.
3. The medical image segmentation method based on the multi-scale fusion U-shaped chain neural network according to claim 2, wherein step S1 further comprises enhancing the training data set obtained after the binarization processing, increasing its data volume by performing flip transformation, translation transformation and noise perturbation on the training data set.
4. The medical image segmentation method based on the multi-scale fusion U-shaped chain neural network according to claim 1, wherein the encoder and the decoder have the same structure and each comprise 4 multi-scale fusion modules; between every two adjacent multi-scale fusion modules, the encoder performs a 3 × 3 convolution with a stride of 2 and the decoder performs a 3 × 3 transposed convolution with a stride of 2;
the multi-scale fusion module of the bottom module is connected to the last multi-scale fusion module of the encoder by a 3 × 3 convolution with a stride of 2, and to the last multi-scale fusion module of the decoder by a 3 × 3 transposed convolution with a stride of 2.
5. The medical image segmentation method based on the multi-scale fusion U-shaped chain neural network according to claim 1, wherein step S7 is followed by the following step:
S8, inputting the image into the trained multi-scale fusion U-shaped chain neural network and outputting the image segmentation result and evaluation indexes.
CN202010117698.0A 2020-02-25 2020-02-25 Medical image segmentation method based on multi-scale fusion U-shaped chain neural network Active CN111325750B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010117698.0A CN111325750B (en) 2020-02-25 2020-02-25 Medical image segmentation method based on multi-scale fusion U-shaped chain neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010117698.0A CN111325750B (en) 2020-02-25 2020-02-25 Medical image segmentation method based on multi-scale fusion U-shaped chain neural network

Publications (2)

Publication Number Publication Date
CN111325750A CN111325750A (en) 2020-06-23
CN111325750B true CN111325750B (en) 2022-08-16

Family

ID=71172972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010117698.0A Active CN111325750B (en) 2020-02-25 2020-02-25 Medical image segmentation method based on multi-scale fusion U-shaped chain neural network

Country Status (1)

Country Link
CN (1) CN111325750B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112216371B (en) * 2020-11-20 2022-07-12 中国科学院大学 Multi-path multi-scale parallel coding and decoding network image segmentation method, system and medium
CN113177913A (en) * 2021-04-15 2021-07-27 上海工程技术大学 Coke microscopic optical tissue extraction method based on multi-scale U-shaped neural network
CN113191242A (en) * 2021-04-25 2021-07-30 西安交通大学 Embedded lightweight driver leg posture estimation method based on OpenPose improvement
CN113205523A (en) * 2021-04-29 2021-08-03 浙江大学 Medical image segmentation and identification system, terminal and storage medium with multi-scale representation optimization
CN113240691B (en) * 2021-06-10 2023-08-01 南京邮电大学 Medical image segmentation method based on U-shaped network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018200493A1 (en) * 2017-04-25 2018-11-01 The Board Of Trustees Of The Leland Stanford Junior University Dose reduction for medical imaging using deep convolutional neural networks
CN110120033A (en) * 2019-04-12 2019-08-13 天津大学 Based on improved U-Net neural network three-dimensional brain tumor image partition method
US10482603B1 (en) * 2019-06-25 2019-11-19 Artificial Intelligence, Ltd. Medical image segmentation using an integrated edge guidance module and object segmentation network
CN110570431A (en) * 2019-09-18 2019-12-13 东北大学 Medical image segmentation method based on improved convolutional neural network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108492286B (en) * 2018-03-13 2020-05-05 成都大学 Medical image segmentation method based on dual-channel U-shaped convolutional neural network
CN109063710B (en) * 2018-08-09 2022-08-16 成都信息工程大学 3D CNN nasopharyngeal carcinoma segmentation method based on multi-scale feature pyramid
CN109447994B (en) * 2018-11-05 2019-12-17 陕西师范大学 Remote sensing image segmentation method combining complete residual error and feature fusion
CN109886971A (en) * 2019-01-24 2019-06-14 西安交通大学 A kind of image partition method and system based on convolutional neural networks

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018200493A1 (en) * 2017-04-25 2018-11-01 The Board Of Trustees Of The Leland Stanford Junior University Dose reduction for medical imaging using deep convolutional neural networks
CN110120033A (en) * 2019-04-12 2019-08-13 天津大学 Based on improved U-Net neural network three-dimensional brain tumor image partition method
US10482603B1 (en) * 2019-06-25 2019-11-19 Artificial Intelligence, Ltd. Medical image segmentation using an integrated edge guidance module and object segmentation network
CN110570431A (en) * 2019-09-18 2019-12-13 东北大学 Medical image segmentation method based on improved convolutional neural network

Also Published As

Publication number Publication date
CN111325750A (en) 2020-06-23

Similar Documents

Publication Publication Date Title
CN111325750B (en) Medical image segmentation method based on multi-scale fusion U-shaped chain neural network
WO2022041307A1 (en) Method and system for constructing semi-supervised image segmentation framework
Pacal et al. A robust real-time deep learning based automatic polyp detection system
Li et al. Multitask semantic boundary awareness network for remote sensing image segmentation
Bi et al. Multi-label classification of multi-modality skin lesion via hyper-connected convolutional neural network
CN107506761B (en) Brain image segmentation method and system based on significance learning convolutional neural network
WO2023015743A1 (en) Lesion detection model training method, and method for recognizing lesion in image
CN113077471A (en) Medical image segmentation method based on U-shaped network
CN115661144B (en) Adaptive medical image segmentation method based on deformable U-Net
CN110648331B (en) Detection method for medical image segmentation, medical image segmentation method and device
Cheng et al. DDU-Net: A dual dense U-structure network for medical image segmentation
Guo et al. Learning with noise: Mask-guided attention model for weakly supervised nuclei segmentation
CN115496720A (en) Gastrointestinal cancer pathological image segmentation method based on ViT mechanism model and related equipment
Sun et al. FDRN: a fast deformable registration network for medical images
Lu et al. DCACNet: Dual context aggregation and attention-guided cross deconvolution network for medical image segmentation
Lin et al. CSwinDoubleU-Net: A double U-shaped network combined with convolution and Swin Transformer for colorectal polyp segmentation
CN113538363A (en) Lung medical image segmentation method and device based on improved U-Net
Chatterjee et al. A survey on techniques used in medical imaging processing
CN113989269B (en) Traditional Chinese medicine tongue image tooth trace automatic detection method based on convolutional neural network multi-scale feature fusion
Su et al. Physical model and image translation fused network for single-image dehazing
Feng et al. Improved deep fully convolutional network with superpixel-based conditional random fields for building extraction
Wang et al. Self-supervised learning for high-resolution remote sensing images change detection with variational information bottleneck
CN115775252A (en) Magnetic resonance image cervical cancer tumor segmentation method based on global local cascade
CN109871835B (en) Face recognition method based on mutual exclusion regularization technology
CN108154107B (en) Method for determining scene category to which remote sensing image belongs

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant