CN111325750A - Medical image segmentation method based on multi-scale fusion U-shaped chain neural network - Google Patents
- Publication number: CN111325750A
- Application number: CN202010117698.0A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/10—Segmentation; Edge detection
- G06F18/253—Fusion techniques of extracted features
- G06N3/045—Combinations of networks
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
- G06N3/084—Backpropagation, e.g. using gradient descent
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30004—Biomedical image processing
Abstract
The invention discloses a medical image segmentation method based on a multi-scale fusion U-shaped chain neural network. The method integrates a multi-scale feature fusion module into a U-shaped network: new features are generated by fusing features learned at different scales, and a residual connection through a 1 × 1 convolution adds the original input to these new features, which improves the segmentation quality of the network and yields more accurate target localization. Skip connections in the U-shaped chain framework combine and stack features from different abstraction levels, which greatly accelerates the convergence of network training, so a better model is obtained in less time.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to image segmentation, and more specifically to a medical image segmentation method based on a multi-scale fusion U-shaped chain neural network.
Background
Medical image segmentation is a primary step in medical image processing and analysis. Its goal is to segment regions of interest (such as tumor regions and organs) in a medical image, extract relevant features, classify them at the pixel level, and group pixels of the same type. This can greatly improve clinical workflows and is critical for disease diagnosis, monitoring of disease progression, and treatment planning. However, accurate medical image segmentation remains a significant challenge: organs and tumor regions vary greatly in shape and size, and images acquired from different organs and different biomedical imaging devices differ greatly in resolution and noise.
Owing to the rapid development of deep learning, deep convolutional neural networks (DCNNs) are widely used for various computer vision tasks, such as face recognition, image recognition, target localization, and object tracking, achieving state-of-the-art performance. Although DCNNs have made a huge breakthrough in image segmentation, a large amount of labeled data is required for training. Medical images, however, are expensive and complicated to obtain, and manual annotation of segmentation labels consumes a great deal of effort and time and is prone to errors and inter-observer differences. This limitation is particularly acute in the medical field, since segmenting images from a given medical specialty requires an expert in that specialty. With the advent of fully convolutional networks, this problem has been alleviated.
However, the most popular medical image segmentation models (such as FCN and U-Net) adopt an encoder-decoder structure: the fully connected layers of a classic CNN are replaced by convolutional layers, the feature map from the last convolutional layer is upsampled or processed by transposed convolution, skip connections are added between layers of the same depth, and the local information learned by shallow layers is combined with the more complex information learned by deep layers to obtain new, richer features. However, as models grow deeper, low-level image information is gradually lost during convolution, so the boundary of the region of interest cannot be delineated well and accurate segmentation cannot be achieved.
Disclosure of Invention
Aiming at the inaccurate segmentation results of traditional medical image segmentation methods, the invention provides a medical image segmentation method based on a multi-scale fusion U-shaped chain neural network.
The invention is realized by the following technical scheme:
a medical image segmentation method based on a multi-scale fusion U-shaped chain neural network comprises the following steps:
s1, preprocessing image data to be segmented to obtain a training data set;
s2, constructing a multi-scale fusion U-shaped chain neural network, wherein the multi-scale fusion U-shaped chain neural network comprises two U-shaped modules, each U-shaped module is composed of a plurality of multi-scale fusion modules, and the two U-shaped modules are in one-to-one correspondence and carry out add operation on the characteristics of the two U-shaped modules;
s3, training the multi-scale fusion U-shaped chain neural network by adopting a training data set, randomly dividing the training data set into two parts, wherein one part is a training set, and the other part is a verification set:
s4, inputting the training set into a multi-scale fusion U-shaped chain neural network, and calculating a loss value between a prediction segmentation result and a real label in the training process by using a cross entropy loss function;
s5, inputting the verification set into a multi-scale fusion U-shaped chain neural network, and calculating a loss value between a prediction segmentation result and a real label in the verification process by using cross entropy loss;
s6, judging whether the loss value in the verification process is smaller than the minimum loss value in the training process, if so, saving the updated network parameters of the currently trained model, and then executing the step S7;
when the loss value in the verification process is greater than the minimum loss value in the training process, performing step S7;
and S7, judging whether the current iteration number reaches a preset epoch value, if not, returning to the step S3 for the next iteration, and if so, finishing the training of the multi-scale fusion U-shaped chain neural network.
Preferably, the image data preprocessing process in step S1 is specifically as follows:
and carrying out format conversion on the image data, then carrying out scaling on the obtained image data and setting pixels, and sequentially carrying out normalization and binarization processing on the scaled image data to obtain a training data set.
Preferably, step S1 further includes the step of enhancing the training data set obtained after the binarization processing, and increasing the data amount of the training data set by performing inverse transformation, translation transformation and noise disturbance on the training data set.
Preferably, in step S2, each U-shaped module includes an encoder, a decoder, and a bottom module, the encoder and the decoder respectively include a plurality of multi-scale fusion modules, the bottom module includes a multi-scale fusion module, the multi-scale fusion modules of the encoder and the decoder are in one-to-one correspondence, and perform add operation on features of each other, and the multi-scale fusion module of the bottom module performs convolution operation with a last multi-scale fusion module of the encoder and the decoder respectively.
Preferably, the encoder and the decoder have the same structure and each include 4 multi-scale fusion modules, and a convolution operation of 3 × 3 with a step size of 2 is performed between every two adjacent multi-scale fusion modules;
the multi-scale fusion module of the bottom module performs a convolution operation of 3 × 3 with a step size of 2 with the last multi-scale fusion module of the encoder, and performs a convolution operation of 3 × 3 with a step size of 2 with the last multi-scale fusion module of the decoder.
Preferably, the multi-scale fusion module comprises 3 convolution operations of 3 × 3; residual connections are added after the first two convolution operations respectively, and a concatenation operation is performed with the features obtained after the third convolution operation;
the numbers of filters of the three consecutive convolutional layers are set to (0.2, 0.3, 0.5) × Input_channel, where Input_channel is the number of input channels; a 1 × 1 convolution operation is added after the original input and its output is added to the complex features obtained before; the result is then processed by a ReLU activation function and batch normalization (BN) to become the input of the next convolution or transposed convolution.
Preferably, the loss value in step S4 or step S5 is calculated as follows:

H(p, q) = -∑_x p(x) log q(x)

wherein H(p, q) is the loss value between the predicted segmentation result and the true label, p(x) represents the true distribution of the sample, and q(x) represents the distribution predicted by the model.
Preferably, step S7 is further followed by the steps of:
and S8, inputting the image into the trained multi-scale fusion U-shaped chain neural network, and outputting an image segmentation result and an evaluation index.
Compared with the prior art, the invention has the following beneficial technical effects:
the invention provides a medical image segmentation method based on multi-scale fusion U-shaped chain convolution, which well fuses a multi-scale feature fusion module and a U-shaped chain network together, generates new features by fusing learned features of different scales, adds residual connection after 1 × 1 convolution, adds the features and the new features, improves the segmentation effect of the network, realizes more accurate target positioning, realizes feature combination by using skip layer connection of a U-shaped chain neural network framework, stacks the features of different abstract levels, greatly improves the convergence speed of network training, and obtains a better training model in shorter time.
Drawings
Fig. 1 is a flowchart of a medical image segmentation method based on a multi-scale fusion U-chain neural network according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a step S1 according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a multi-scale fusion module according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a multi-scale fusion U-chain neural network model according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a step S3 according to an embodiment of the present invention;
FIG. 6 is a comparison of the effect of the neural network model provided by the present invention with other existing network models.
Detailed Description
The present invention will now be described in further detail with reference to the attached drawings, which are illustrative, but not limiting, of the present invention.
Referring to fig. 1, the medical image segmentation method based on the multi-scale fusion U-chain neural network includes the following steps.

Step 1, preprocessing the image data to be segmented to obtain a training data set.

Referring to fig. 2, the specific method of this step is as follows:
and S11, converting the format of the colposcopy image data to be segmented.
S12, the format-converted image is scaled so that its size is 256 × 256 pixels.
And S13, normalizing the zoomed image to a [0,1] interval.
And S14, performing binarization processing on the normalized image, setting a threshold value to be 0.5, processing the pixel value larger than the threshold value to be 1, and processing the pixel value smaller than the threshold value to be 0.
And S15, dividing the binarized image into a training data set and a test data set in proportion.
And S16, performing data enhancement on the training data set.
Deep learning typically requires a large amount of data for training, and in medicine data acquisition is very expensive and difficult; labeled data is even harder to obtain, since it must be annotated by experts in the field. To avoid overfitting during training and to improve segmentation accuracy, data enhancement of the training data set is needed. In the embodiment of the invention, data enhancement is performed using the following methods:
flip transform (flip) flipping an image in either a horizontal or vertical direction.
Shift transform (shift) an image is shifted in a certain manner on an image plane.
Noise perturbation (noise) each pixel RGB of the image is perturbed randomly.
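The preprocessing and augmentation steps above (S12-S14 and S16) can be sketched in a few lines. This is a minimal, dependency-free sketch, not the patent's exact pipeline: the function names are ours, nearest-neighbour resizing and min-max normalization stand in for the unspecified scaling and normalization, and the augmentation parameters (shift of 10 pixels, Gaussian noise with σ = 0.05) are illustrative assumptions.

```python
import numpy as np

def preprocess(img: np.ndarray, size: int = 256, threshold: float = 0.5) -> np.ndarray:
    """Sketch of S12-S14: resize to size x size, normalize to [0, 1], binarize at 0.5.
    Nearest-neighbour resize keeps the sketch dependency-free (an assumption;
    the patent does not specify the interpolation)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = img[rows][:, cols].astype(np.float64)
    # min-max normalization to the [0, 1] interval (S13)
    normalized = (resized - resized.min()) / (resized.max() - resized.min() + 1e-8)
    # binarization with threshold 0.5 (S14)
    return (normalized > threshold).astype(np.uint8)

def augment(img: np.ndarray, rng: np.random.Generator):
    """Sketch of S16: flip, shift and noise perturbation of one image."""
    flipped = np.flip(img, axis=1)                                # horizontal flip
    shifted = np.roll(img, shift=10, axis=0)                      # translate 10 px
    noisy = np.clip(img + rng.normal(0, 0.05, img.shape), 0, 1)   # noise disturbance
    return flipped, shifted, noisy
```

Each augmented copy is added to the training data set alongside the original, which is how the data volume is increased.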
And 2, constructing a multi-scale fusion U-shaped chain neural network.
The step 2 comprises the following specific steps:
and S21, constructing a multi-scale fusion module.
As shown in FIG. 3, the multi-scale fusion module comprises 3 convolution operations of 3 × 3. Residual connections are added after the first two convolution operations respectively, and a concatenation operation is performed with the features obtained after the third convolution operation, yielding complex features obtained from different scales. To keep the numbers of input and output channels the same, the numbers of filters of the three consecutive convolutional layers are set to (0.2, 0.3, 0.5) × Input_channel respectively, where Input_channel is the number of input channels. A 1 × 1 convolution operation is added after the original input and its output is added to the complex features obtained before; after processing by a ReLU activation function and batch normalization (BN), the result becomes the input of the next convolution or transposed convolution.
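A PyTorch sketch of one reading of this module follows. The class name and the exact wiring (concatenating all three conv outputs, then adding a 1 × 1-conv shortcut from the input before BN and ReLU) are our interpretation of the text; the patent's FIG. 3 may differ in detail.

```python
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    """Sketch of the multi-scale fusion module: three stacked 3x3 convs whose
    filter counts are roughly (0.2, 0.3, 0.5) x in_channels, their outputs
    concatenated so the channel count is preserved, plus a 1x1-conv residual
    branch from the original input, followed by BN and ReLU."""
    def __init__(self, in_channels: int):
        super().__init__()
        c1 = int(0.2 * in_channels)
        c2 = int(0.3 * in_channels)
        c3 = in_channels - c1 - c2   # remainder ensures concat restores in_channels
        self.conv1 = nn.Conv2d(in_channels, c1, 3, padding=1)
        self.conv2 = nn.Conv2d(c1, c2, 3, padding=1)
        self.conv3 = nn.Conv2d(c2, c3, 3, padding=1)
        self.shortcut = nn.Conv2d(in_channels, in_channels, 1)  # 1x1 residual branch
        self.bn = nn.BatchNorm2d(in_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = self.conv1(x)
        f2 = self.conv2(f1)
        f3 = self.conv3(f2)
        fused = torch.cat([f1, f2, f3], dim=1)   # multi-scale features, C channels
        return self.act(self.bn(fused + self.shortcut(x)))
```

Because the concatenated channel count equals the input channel count, the module can be dropped into an encoder or decoder stage without channel bookkeeping.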
And S22, constructing a multi-scale fusion U-shaped chain neural network according to the multi-scale fusion module.
As shown in fig. 4, the multi-scale fusion U-shaped chain neural network includes two U-shaped modules. Wherein each U-shaped module comprises an encoder, a decoder and a bottom module;
the encoder comprises 4 multi-scale fusion modules, and 3 × 3 convolution operation with the step size of 2 is carried out between every two adjacent multi-scale fusion modules;
the decoder comprises 4 multi-scale fusion modules, and 3 × 3 transposition convolution operation with the step size of 2 is carried out between every two adjacent multi-scale fusion modules;
the bottom module comprises 1 multi-scale fusion module, which is connected to the last multi-scale fusion module of the encoder by a 3 × 3 convolution operation with a step size of 2, and to the last multi-scale fusion module of the decoder by a 3 × 3 transposed convolution operation with a step size of 2;
the multi-scale fusion modules in the encoder and the multi-scale fusion modules in the decoder are in one-to-one correspondence, and add operation is carried out on the characteristics of the multi-scale fusion modules and the multi-scale fusion modules;
and the two U-shaped modules are in one-to-one correspondence and perform add operation according to the characteristics of the two U-shaped modules.
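The wiring of one U-shaped module described above can be sketched as follows. This is a structural skeleton only: plain 3 × 3 convolutions stand in for the multi-scale fusion modules to keep the sketch short, and the class name `UModule` is ours.

```python
import torch
import torch.nn as nn

class UModule(nn.Module):
    """Skeleton of one U-shaped module: a 4-stage encoder, a bottom block and a
    4-stage decoder, connected by 3x3 stride-2 convs (down) and 3x3 stride-2
    transposed convs (up), with same-depth encoder/decoder features combined by
    element-wise add, as described in the text."""
    def __init__(self, channels: int = 16, depth: int = 4):
        super().__init__()
        self.enc = nn.ModuleList(nn.Conv2d(channels, channels, 3, padding=1) for _ in range(depth))
        self.down = nn.ModuleList(nn.Conv2d(channels, channels, 3, stride=2, padding=1) for _ in range(depth))
        self.bottom = nn.Conv2d(channels, channels, 3, padding=1)
        self.up = nn.ModuleList(
            nn.ConvTranspose2d(channels, channels, 3, stride=2, padding=1, output_padding=1)
            for _ in range(depth))
        self.dec = nn.ModuleList(nn.Conv2d(channels, channels, 3, padding=1) for _ in range(depth))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skips = []
        for enc, down in zip(self.enc, self.down):
            x = enc(x)
            skips.append(x)          # saved for the add-style skip connection
            x = down(x)              # 3x3 conv, step size 2
        x = self.bottom(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = up(x)                # 3x3 transposed conv, step size 2
            x = dec(x + skip)        # element-wise add of same-depth features
        return x
```

In the full model, two such modules are chained and their corresponding features are likewise combined by element-wise add.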
And 3, inputting the training data set into a multi-scale fusion U-shaped chain neural network for training to obtain a learned convolutional neural network model.
As shown in FIG. 5, step 3 includes the following substeps S31-S40:
s31, randomly selecting 10% of data in the training data set as a verification data set, and using the rest 90% of data as a training data set;
S32, initializing the convolution kernel weights and setting the loss value to 0;
S33, inputting the training data set from S31 into the multi-scale fusion U-shaped chain neural network;
S34, computing the training data with each node parameter of the multi-scale fusion U-shaped chain neural network to realize the forward propagation of the network training;
S35, calculating the difference between the feature map output by the forward propagation of the neural network and the real label, and computing the loss for back propagation. In the implementation of the invention, a cross entropy loss function is used to calculate the loss value between the predicted segmentation result and the real label during training, with the following formula:

H(p, q) = -∑_x p(x) log q(x)

wherein H(p, q) represents the loss value between the predicted segmentation result and the true label, p(x) represents the true distribution of the sample, and q(x) represents the distribution predicted by the model;
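For the binary foreground/background masks used here, the cross entropy expands to the standard two-class form. A numpy sketch (function name ours; the averaging over pixels is an assumption, since the patent does not state the reduction):

```python
import numpy as np

def cross_entropy(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """Binary cross entropy H(p, q) = -sum_x p(x) log q(x) expanded over the
    two classes and averaged over pixels; p is the ground-truth mask in {0, 1},
    q is the predicted foreground probability in (0, 1)."""
    q = np.clip(q, eps, 1.0 - eps)   # avoid log(0)
    return float(np.mean(-(p * np.log(q) + (1 - p) * np.log(1 - q))))
```

In a framework implementation this corresponds to an off-the-shelf cross-entropy loss applied pixel-wise to the network's output map.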
S36, inputting the verification data set obtained in S31 into the multi-scale fusion U-shaped chain neural network;
s37, calculating a loss value between the prediction segmentation result and the real label in the verification process by using cross entropy loss;
s38, judging whether the loss value in the verification process is smaller than the minimum loss value in the previous verification process, if so, saving the updated network parameters of the currently trained model, and entering the step S39, otherwise, directly entering the step S39;
s39, judging whether the current iteration number reaches a preset epoch value, if not, returning to the step S31 for the next iteration, otherwise, entering the step S40;
s40, the learned convolutional neural network model is output, and the process proceeds to step S4.
And 4, inputting the test data set into the learned convolutional neural network model, and outputting an image segmentation result and an evaluation index.
In the embodiment of the invention, five indexes are adopted: Accuracy (AC), Sensitivity (SE), Specificity (SP), Jaccard Similarity (JS), and Area Under Curve (AUC). The proposed method is compared with the existing U-Net, LadderNet, and the residual model without the 1 × 1 convolution proposed in the invention (ResModel W/O Conv); 100 epochs are trained on the same cervical cancer data set, and the final segmentation results are evaluated, as shown in Table 1.
TABLE 1

| Methods | AC | SE | SP | JS | AUC |
|---|---|---|---|---|---|
| U-Net | 0.9496 | 0.9203 | 0.9630 | 0.9496 | 0.9407 |
| LadderNet | 0.9532 | 0.9222 | 0.9725 | 0.9532 | 0.9434 |
| ResModel W/O Conv | 0.9551 | 0.9267 | 0.9729 | 0.9551 | 0.9498 |
| Our method | 0.9572 | 0.9295 | 0.9746 | 0.9572 | 0.9521 |
As can be seen from Table 1, the method proposed in the invention achieves the highest values on all five indexes.
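Four of the five indexes can be computed directly from binary masks using their standard definitions; AUC needs the raw predicted probabilities and is omitted here. The function name is ours, and the metric formulas are the textbook definitions, which we assume the authors used.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Standard mask-level metrics: accuracy (AC), sensitivity (SE),
    specificity (SP) and Jaccard similarity (JS) from a binary prediction
    and a binary ground-truth mask."""
    tp = int(np.sum((pred == 1) & (truth == 1)))
    tn = int(np.sum((pred == 0) & (truth == 0)))
    fp = int(np.sum((pred == 1) & (truth == 0)))
    fn = int(np.sum((pred == 0) & (truth == 1)))
    return {
        "AC": (tp + tn) / (tp + tn + fp + fn),
        "SE": tp / max(tp + fn, 1),       # true-positive rate
        "SP": tn / max(tn + fp, 1),       # true-negative rate
        "JS": tp / max(tp + fp + fn, 1),  # intersection over union
    }
```

Applied to every test image and averaged, this yields numbers comparable to those in Table 1.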
The test data set is input into the learned neural network model; the segmentation results of U-Net, LadderNet, ResModel W/O Conv, and the present invention are shown in FIG. 6. The invention achieves more accurate segmentation in boundary-sensitive regions.
In summary, the invention discloses a medical image segmentation method based on a multi-scale fusion U-shaped chain neural network. The method integrates a multi-scale feature fusion module into a U-shaped network: new features are generated by fusing features learned at different scales, and a residual connection through a 1 × 1 convolution adds the original input to these new features, which improves the segmentation quality of the network and yields more accurate target localization. Skip connections in the U-shaped chain framework combine and stack features from different abstraction levels, which greatly accelerates the convergence of network training, so a better model is obtained in less time.
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.
Claims (8)
1. A medical image segmentation method based on a multi-scale fusion U-shaped chain neural network is characterized by comprising the following steps:
s1, preprocessing image data to be segmented to obtain a training data set;
s2, constructing a multi-scale fusion U-shaped chain neural network, wherein the multi-scale fusion U-shaped chain neural network comprises two U-shaped modules, each U-shaped module is composed of a plurality of multi-scale fusion modules, and the two U-shaped modules are in one-to-one correspondence and carry out add operation on the characteristics of the two U-shaped modules;
s3, training the multi-scale fusion U-shaped chain neural network by adopting a training data set, randomly dividing the training data set into two parts, wherein one part is a training set, and the other part is a verification set:
s4, inputting the training set into a multi-scale fusion U-shaped chain neural network, and calculating a loss value between a prediction segmentation result and a real label in the training process by using a cross entropy loss function;
s5, inputting the verification set into a multi-scale fusion U-shaped chain neural network, and calculating a loss value between a prediction segmentation result and a real label in the verification process by using cross entropy loss;
s6, judging whether the loss value in the verification process is smaller than the minimum loss value in the training process, if so, saving the updated network parameters of the currently trained model, and then executing the step S7;
when the loss value in the verification process is greater than the minimum loss value in the training process, performing step S7;
and S7, judging whether the current iteration number reaches a preset epoch value, if not, returning to the step S3 for the next iteration, and if so, finishing the training of the multi-scale fusion U-shaped chain neural network.
2. The medical image segmentation method based on the multi-scale fusion U-shaped chain neural network as claimed in claim 1, wherein the image data preprocessing process in step S1 is as follows:
and carrying out format conversion on the image data, then carrying out scaling on the obtained image data and setting pixels, and sequentially carrying out normalization and binarization processing on the scaled image data to obtain a training data set.
3. The medical image segmentation method based on the multi-scale fusion U-shaped chain neural network as claimed in claim 2, wherein the step S1 further comprises the steps of enhancing the training data set obtained after the binarization processing, and increasing the data volume of the training data set by performing inversion transformation, translation transformation and noise disturbance on the training data set.
4. The method according to claim 1, wherein each of the U-shaped modules in step S2 includes an encoder, a decoder, and a bottom module, the encoder and the decoder respectively include a plurality of multi-scale fusion modules, the bottom module includes a multi-scale fusion module, the multi-scale fusion modules of the encoder and the decoder respectively perform add operations with features in one-to-one correspondence, and the multi-scale fusion module of the bottom module performs convolution operations with the last multi-scale fusion module of the encoder and the decoder respectively.
5. The medical image segmentation method based on the multi-scale fusion U-shaped chain neural network as claimed in claim 4, wherein the encoder and the decoder are identical in structure and each comprises 4 multi-scale fusion modules, and a convolution operation of 3 × 3 with a step size of 2 is performed between every two adjacent multi-scale fusion modules;
the multi-scale fusion module of the bottom module performs a convolution operation of 3 × 3 with a step size of 2 with the last multi-scale fusion module of the encoder, and performs a convolution operation of 3 × 3 with a step size of 2 with the last multi-scale fusion module of the decoder.
6. The medical image segmentation method based on the multi-scale fusion U-shaped chain neural network as claimed in claim 4, wherein the multi-scale fusion module comprises 3 convolution operations of 3 × 3, residual connections are added after the first two convolution operations respectively, and a concatenation operation is performed with the features obtained after the third convolution operation;
the numbers of filters of the three consecutive convolutional layers are set to (0.2, 0.3, 0.5) × Input_channel, wherein Input_channel is the number of input channels; a 1 × 1 convolution operation is added after the original input, its output is added to the complex features obtained before, and the result is processed by a ReLU activation function and batch normalization (BN) to become the input of the next convolution or transposed convolution.
7. The medical image segmentation method based on the multi-scale fusion U-shaped chain neural network as claimed in claim 1, wherein the loss value in step S4 or step S5 is calculated as follows:

H(p, q) = -∑_x p(x) log q(x)

wherein H(p, q) is the loss value between the predicted segmentation result and the true label, p(x) represents the true distribution of the sample, and q(x) represents the distribution predicted by the model.
8. The medical image segmentation method based on the multi-scale fusion U-shaped chain neural network as claimed in claim 1, wherein the step S7 is further followed by the steps of:
and S8, inputting the image into the trained multi-scale fusion U-shaped chain neural network, and outputting an image segmentation result and an evaluation index.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010117698.0A (CN111325750B) | 2020-02-25 | 2020-02-25 | Medical image segmentation method based on multi-scale fusion U-shaped chain neural network |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010117698.0A (CN111325750B) | 2020-02-25 | 2020-02-25 | Medical image segmentation method based on multi-scale fusion U-shaped chain neural network |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111325750A | 2020-06-23 |
| CN111325750B | 2022-08-16 |
Family
ID=71172972
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010117698.0A (CN111325750B, Active) | Medical image segmentation method based on multi-scale fusion U-shaped chain neural network | 2020-02-25 | 2020-02-25 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN111325750B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112216371A (en) * | 2020-11-20 | 2021-01-12 | University of Chinese Academy of Sciences | Multi-path multi-scale parallel coding and decoding network image segmentation method, system and medium |
CN113177913A (en) * | 2021-04-15 | 2021-07-27 | Shanghai University of Engineering Science | Coke microscopic optical tissue extraction method based on multi-scale U-shaped neural network |
CN113191242A (en) * | 2021-04-25 | 2021-07-30 | Xi'an Jiaotong University | Embedded lightweight driver leg posture estimation method based on OpenPose improvement |
CN113205523A (en) * | 2021-04-29 | 2021-08-03 | Zhejiang University | Medical image segmentation and identification system, terminal and storage medium with multi-scale representation optimization |
CN113822428A (en) * | 2021-08-06 | 2021-12-21 | Industrial and Commercial Bank of China Limited | Neural network training method and device and image segmentation method |
WO2022257408A1 (en) * | 2021-06-10 | 2022-12-15 | Nanjing University of Posts and Telecommunications | Medical image segmentation method based on U-shaped network |
US20240265645A1 (en) * | 2023-02-03 | 2024-08-08 | Rayhan Papar | Live surgical aid for brain tumor resection using augmented reality and deep learning |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108492286A (en) * | 2018-03-13 | 2018-09-04 | Chengdu University | A medical image segmentation method based on dual-channel U-shaped convolutional neural networks |
WO2018200493A1 (en) * | 2017-04-25 | 2018-11-01 | The Board Of Trustees Of The Leland Stanford Junior University | Dose reduction for medical imaging using deep convolutional neural networks |
CN109063710A (en) * | 2018-08-09 | 2018-12-21 | Chengdu University of Information Technology | 3D CNN nasopharyngeal carcinoma segmentation method based on multi-scale feature pyramids |
CN109447994A (en) * | 2018-11-05 | 2019-03-08 | Shaanxi Normal University | Remote sensing image segmentation method combining complete residuals and feature fusion |
CN109886971A (en) * | 2019-01-24 | 2019-06-14 | Xi'an Jiaotong University | An image segmentation method and system based on convolutional neural networks |
CN110120033A (en) * | 2019-04-12 | 2019-08-13 | Tianjin University | Three-dimensional brain tumor image segmentation method based on an improved U-Net neural network |
US10482603B1 (en) * | 2019-06-25 | 2019-11-19 | Artificial Intelligence, Ltd. | Medical image segmentation using an integrated edge guidance module and object segmentation network |
CN110570431A (en) * | 2019-09-18 | 2019-12-13 | Northeastern University | Medical image segmentation method based on improved convolutional neural network |
- 2020-02-25: Application CN202010117698.0A filed in China; subsequently granted as CN111325750B (status: Active)
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018200493A1 (en) * | 2017-04-25 | 2018-11-01 | The Board Of Trustees Of The Leland Stanford Junior University | Dose reduction for medical imaging using deep convolutional neural networks |
CN108492286A (en) * | 2018-03-13 | 2018-09-04 | 成都大学 | Medical image segmentation method based on a dual-channel U-shaped convolutional neural network |
CN109063710A (en) * | 2018-08-09 | 2018-12-21 | 成都信息工程大学 | 3D CNN nasopharyngeal carcinoma segmentation method based on a multi-scale feature pyramid |
CN109447994A (en) * | 2018-11-05 | 2019-03-08 | 陕西师范大学 | Remote sensing image segmentation method combining full residuals and feature fusion |
CN109886971A (en) * | 2019-01-24 | 2019-06-14 | 西安交通大学 | Image segmentation method and system based on convolutional neural networks |
CN110120033A (en) * | 2019-04-12 | 2019-08-13 | 天津大学 | Three-dimensional brain tumor image segmentation method based on an improved U-Net neural network |
US10482603B1 (en) * | 2019-06-25 | 2019-11-19 | Artificial Intelligence, Ltd. | Medical image segmentation using an integrated edge guidance module and object segmentation network |
CN110570431A (en) * | 2019-09-18 | 2019-12-13 | 东北大学 | Medical image segmentation method based on improved convolutional neural network |
Non-Patent Citations (8)
Title |
---|
Adrian Bulat et al.: "Binarized Convolutional Landmark Localizers for Human Pose Estimation and Face Alignment with Limited Resources", 2017 IEEE International Conference on Computer Vision (ICCV) * |
Adrian Bulat et al.: "Hierarchical Binary CNNs for Landmark Localization with Limited Resources", IEEE Transactions on Pattern Analysis and Machine Intelligence * |
Aleksei Tiulpin et al.: "KNEEL: Knee Anatomical Landmark Localization Using Hourglass Networks", https://arxiv.org/pdf/1907.12237v2.pdf * |
Jia Guo et al.: "Stacked Dense U-Nets with Dual Transformers for Robust Face Alignment", https://arxiv.org/pdf/1812.01936v1.pdf * |
Juntang Zhuang: "LadderNet: Multi-path networks based on U-Net for medical image segmentation", https://arxiv.org/pdf/1810.07810v4.pdf * |
Kaiming He et al.: "Deep Residual Learning for Image Recognition", 2016 IEEE Conference on Computer Vision and Pattern Recognition * |
He Yu: "Research and Implementation of Retinal Fundus Image Segmentation Based on Deep Learning", China Masters' Theses Full-text Database, Information Science and Technology Series * |
Peng Bo: "Research on Road Information Extraction Algorithms for Remote Sensing Images Based on Deep Learning", China Masters' Theses Full-text Database, Engineering Science and Technology Series II * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112216371A (en) * | 2020-11-20 | 2021-01-12 | 中国科学院大学 | Multi-path multi-scale parallel coding and decoding network image segmentation method, system and medium |
CN113177913A (en) * | 2021-04-15 | 2021-07-27 | 上海工程技术大学 | Coke optical microstructure extraction method based on a multi-scale U-shaped neural network |
CN113191242A (en) * | 2021-04-25 | 2021-07-30 | 西安交通大学 | Embedded lightweight driver leg posture estimation method based on improved OpenPose |
CN113205523A (en) * | 2021-04-29 | 2021-08-03 | 浙江大学 | Medical image segmentation and identification system, terminal and storage medium with multi-scale representation optimization |
WO2022257408A1 (en) * | 2021-06-10 | 2022-12-15 | 南京邮电大学 | Medical image segmentation method based on u-shaped network |
CN113822428A (en) * | 2021-08-06 | 2021-12-21 | 中国工商银行股份有限公司 | Neural network training method and device and image segmentation method |
US20240265645A1 (en) * | 2023-02-03 | 2024-08-08 | Rayhan Papar | Live surgical aid for brain tumor resection using augmented reality and deep learning |
Also Published As
Publication number | Publication date |
---|---|
CN111325750B (en) | 2022-08-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111325750B (en) | Medical image segmentation method based on multi-scale fusion U-shaped chain neural network | |
WO2022041307A1 (en) | Method and system for constructing semi-supervised image segmentation framework | |
WO2023015743A1 (en) | Lesion detection model training method, and method for recognizing lesion in image | |
CN107506761B (en) | Brain image segmentation method and system based on significance learning convolutional neural network | |
CN109886121B (en) | Human face key point positioning method for shielding robustness | |
CN111738363B (en) | Alzheimer disease classification method based on improved 3D CNN network | |
CN115661144B (en) | Adaptive medical image segmentation method based on deformable U-Net | |
CN111882560B (en) | Lung parenchyma CT image segmentation method based on weighted full convolution neural network | |
CN108062753A | Unsupervised domain-adaptive brain tumor semantic segmentation method based on deep adversarial learning |
CN110889853A (en) | Tumor segmentation method based on residual error-attention deep neural network | |
CN105844669A (en) | Video target real-time tracking method based on partial Hash features | |
Wang et al. | A generalizable and robust deep learning algorithm for mitosis detection in multicenter breast histopathological images | |
CN110705565A (en) | Lymph node tumor region identification method and device | |
Cheng et al. | DDU-Net: A dual dense U-structure network for medical image segmentation | |
Guo et al. | Learning with noise: Mask-guided attention model for weakly supervised nuclei segmentation | |
CN114663426B (en) | Bone age assessment method based on key bone region positioning | |
CN111340816A (en) | Image segmentation method based on double-U-shaped network framework | |
CN115496720A (en) | Gastrointestinal cancer pathological image segmentation method based on ViT mechanism model and related equipment | |
CN114998362B (en) | Medical image segmentation method based on double segmentation models | |
Sun et al. | FDRN: a fast deformable registration network for medical images | |
Lu et al. | DCACNet: Dual context aggregation and attention-guided cross deconvolution network for medical image segmentation | |
CN115546638A (en) | Change detection method based on Siamese cascade differential neural network | |
Zhang et al. | Learning from multiple annotators for medical image segmentation | |
CN118115507A (en) | Image segmentation method based on cross-domain class perception graph convolution alignment | |
Lin et al. | CSwinDoubleU-Net: A double U-shaped network combined with convolution and Swin Transformer for colorectal polyp segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||