CN108492286B - Medical image segmentation method based on dual-channel U-shaped convolutional neural network - Google Patents
- Publication number
- CN108492286B (application CN201810203917.XA)
- Authority
- CN
- China
- Prior art keywords
- data
- channel
- neural network
- dual
- training set
- Prior art date
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Abstract
The invention discloses a medical image segmentation method based on a dual-channel U-shaped convolutional neural network. The method fuses a dual-channel module with a U-shaped network: original features are reused through a stacking operation, while feature addition generates new features, improving the training precision of the network. At the same time, the U-shaped convolutional neural network architecture performs transverse feature stacking between blocks, greatly accelerating the convergence of network training so that a satisfactory model is obtained sooner. The invention meets the demands of medical imaging, where the data volume to be processed is large and the requirements on processing efficiency and precision are high, and achieves very good performance in biomedical detection applications.
Description
Technical Field
The invention belongs to the technical field of medical image segmentation, and in particular relates to the design of a medical image segmentation method based on a dual-channel U-shaped convolutional neural network.
Background
Medical image segmentation comprises manual, semi-automatic and fully automatic segmentation. Manual segmentation is difficult and tedious; although its precision is high, it depends heavily on the experience and knowledge of the operator, and its results are hard to reproduce. Semi-automatic segmentation is an interactive approach combining manual and computer processing: manual interaction supplies useful information, which the computer then processes to perform the segmentation. Semi-automatic methods include Graph Cut, CRF (conditional random field) and level-set methods, all of which require manual input, with a designated point serving as a seed to guide the subsequent segmentation. Fully automatic segmentation overcomes the shortcomings of semi-automatic and manual segmentation and achieves better segmentation results.
Traditional image segmentation, whether manual or semi-automatic, requires manual input; the process is tedious, depends heavily on the operator's experience and knowledge, yields results that are difficult to reproduce, and cannot match the efficiency and precision of fully automatic segmentation. Existing fully automatic methods, however, are often limited to the 2-dimensional level, and existing three-dimensional network structures remain to be optimized. Moreover, no objective criterion for accurately evaluating the success or failure of a segmentation has been established to date.
Disclosure of Invention
The invention aims to provide a medical image segmentation method based on a dual-channel U-shaped convolutional neural network that raises image segmentation accuracy to the pixel level and thereby improves the accuracy of subsequent medical diagnosis.
The technical scheme of the invention is as follows: a medical image segmentation method based on a dual-channel U-shaped convolution neural network comprises the following steps:
S1, preprocessing the functional magnetic resonance image data to be segmented to obtain training set data and test set data.
Step S1 includes the following substeps:
S11, performing format conversion on the functional magnetic resonance image data to be segmented.
S12, normalizing the format-converted images to the [0,1] interval.
S13, proportionally dividing the normalized images into training set data and test set data.
S14, performing data enhancement on the training set data.
The method for enhancing the data comprises the following steps:
Scale transformation: the image is enlarged or reduced by a specified scale factor, or, following the SIFT feature-extraction idea, a scale space is constructed by filtering the image with specified scale factors, changing the size or degree of blur of the image content.
Translation transformation: the image is translated within the image plane.
Rotation transformation: the image is flipped in the horizontal or vertical direction.
Scaling transformation: the image is enlarged or reduced according to a set scale.
S2, constructing a dual-channel U-shaped convolutional neural network.
Step S2 includes the following substeps:
S21, constructing a dual-channel BO module.
The dual-channel BO module comprises a Sum channel and a Dense channel: the Sum channel realizes addition of the feature values of the D1 data and C1 data, and the Dense channel realizes stacking (channel-wise concatenation) of the D2 data and C2 data;
the acquisition method of the D1 data and the D2 data comprises the following steps: after the original characteristic diagram data is subjected to BN, activation function and convolution operation, the original characteristic diagram data is divided into two parts according to channels, namely D1 data and D2 data;
the method for acquiring the C1 data and the C2 data comprises the following steps: synthesizing D1 data and D2 data into new feature map data, processing the new feature map data by BN, an activation function, convolution and dropout, and dividing the new feature map data into two parts according to channels, namely C1 data and C2 data;
dropout processing refers to the operation of randomly dropping neural network elements from the network temporarily at a set probability to prevent overfitting.
S22, constructing a dual-channel U-shaped convolutional neural network from the dual-channel BO modules.
The dual-channel U-shaped convolutional neural network comprises a left side unit and a right side unit;
the left unit comprises N dual-channel BO modules, N ≥ 2, with a pooling operation between every two adjacent dual-channel BO modules in the left unit;
the right unit comprises N dual-channel BO modules, with an up-sampling operation between every two adjacent dual-channel BO modules in the right unit;
the dual-channel BO modules in the left unit correspond one-to-one with those in the right unit and stack each other's features for reuse.
S3, inputting the training set data into the dual-channel U-shaped convolutional neural network for training to obtain a learned convolutional neural network model.
Step S3 includes the following substeps:
S31, dividing the training set data into m batches and initializing the derivatives of the loss function with respect to the convolution kernel weights and bias values to 0, namely:

ΔW^(l) = 0   (1)

Δb^(l) = 0   (2)

where ΔW^(l) denotes the derivative of the loss function with respect to the convolution kernel weights in the l-th convolutional/deconvolutional layer, and Δb^(l) the derivative of the loss function with respect to the convolution kernel bias values in that layer.
S32, randomly selecting a batch of as-yet-untrained training set data and inputting it into the dual-channel U-shaped convolutional neural network.
S33, computing the node parameters of each subsequent network unit in the dual-channel U-shaped convolutional neural network from the training set data, realizing the forward propagation of network training and outputting a prediction probability map.
S34, calculating the error between the prediction probability map and the positive samples in the training set data by the formula:

L = -(1/M) Σ_x [ y ln a + (1 − y) ln(1 − a) ]   (3)

where L denotes the error between the prediction probability map and the positive samples in the training set data, and M denotes the number of positive samples, a positive sample being an artificially marked region obtained by extracting features of the training set data block by block; x indexes the network units in the neural network, y denotes the positive-sample data in the training set data, a denotes the prediction-probability-map data, and

a = W^(l) * x + b^(l)   (4)
S35, calculating by gradient descent the partial derivatives ∂L/∂W^(l) and ∂L/∂b^(l) of the error function L with respect to the convolution kernel weights and bias values, and adding them to ΔW^(l) and Δb^(l) to update ΔW^(l) and Δb^(l):

ΔW^(l) = ΔW^(l)′ + ∂L/∂W^(l)   (5)

Δb^(l) = Δb^(l)′ + ∂L/∂b^(l)   (6)

where ΔW^(l)′ and Δb^(l)′ denote, respectively, the accumulated derivatives of the loss function with respect to the convolution kernel weights and bias values before the update.
S36, judging whether all training set data have been input into the dual-channel U-shaped convolutional neural network for training; if so, one iteration is complete and the method proceeds to step S37, otherwise it returns to step S32.
S37, updating the convolution kernel weights W^(l) and bias values b^(l) from ΔW^(l) and Δb^(l) with the batch gradient descent algorithm:

W^(l) = W^(l)′ − α ( ΔW^(l) / m + λ W^(l)′ )   (7)

b^(l) = b^(l)′ − α Δb^(l) / m   (8)

where W^(l)′ and b^(l)′ denote, respectively, the convolution kernel weights and bias values before the update, m denotes the number of batches of training set data, α denotes the learning rate, and λ denotes the weight-decay (momentum-like) coefficient.
S38, judging whether the current iteration count has reached a preset iteration threshold; if so, the method proceeds to step S39, otherwise it returns to step S32 for the next iteration.
S39, the learned convolutional neural network model is output, and the method proceeds to step S4.
S4, inputting the test set data into the learned convolutional neural network model and outputting the image segmentation result.
The invention has the following beneficial effects: the dual-channel module and the U-shaped network are fused together, original features are reused through the stacking operation, and feature addition generates new features, improving the training precision of the network. At the same time, the U-shaped convolutional neural network architecture performs transverse feature stacking between blocks, greatly accelerating the convergence of network training so that a satisfactory training model is obtained sooner. The invention achieves very good performance in biomedical detection applications.
Drawings
Fig. 1 is a flowchart of a medical image segmentation method based on a dual-channel U-shaped convolutional neural network according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating the step S1 according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a dual-channel BO module according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a dual-channel U-shaped convolutional neural network provided in an embodiment of the present invention.
Fig. 5 is a flowchart illustrating the step S3 according to an embodiment of the present invention.
Fig. 6 is a schematic diagram illustrating a prediction probability map and a segmentation result according to an embodiment of the present invention.
Fig. 7 is a schematic diagram illustrating the accuracy and loss of the training set data and the test set data according to the embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It is to be understood that the embodiments shown and described in the drawings are merely exemplary and are intended to illustrate the principles and spirit of the invention, not to limit the scope of the invention.
The embodiment of the invention provides a medical image segmentation method based on a dual-channel U-shaped convolutional neural network, which comprises the following steps of S1-S4 as shown in FIG. 1:
and S1, preprocessing the functional Magnetic Resonance Image (MRI) data to be segmented to obtain training set data and test set data.
As shown in FIG. 2, step S1 includes the following substeps S11-S14:
and S11, performing format conversion on the functional magnetic resonance image data to be segmented.
And S12, normalizing the image after format conversion to a [0,1] interval.
And S13, proportionally dividing the normalized image into training set data and test set data.
And S14, performing data enhancement on the training set data.
Deep learning usually requires a large amount of data for support; after the training set data and test set data are obtained, data enhancement is applied to increase the data volume, avoid overfitting and improve segmentation precision. In the embodiment of the invention, the amount of input data is increased by geometric transformation of the images, using one or a combination of the following data enhancement transformations:
Scale transformation: the image is enlarged or reduced by a specified scale factor, or, following the SIFT feature-extraction idea, a scale space is constructed by filtering the image with specified scale factors, changing the size or degree of blur of the image content.
Translation transformation: the image is translated within the image plane.
Rotation transformation: the image is flipped in the horizontal or vertical direction.
Scaling transformation: the image is enlarged or reduced according to a set scale.
S2, constructing a dual-channel U-shaped convolutional neural network.
Step S2 includes the following substeps S21-S22:
and S21, constructing a dual-channel BO module.
As shown in fig. 3, the dual channel BO module includes a Sum channel for realizing the eigenvalue addition of the D1 data and the C1 data, and a density channel for realizing the stacking (size addition) of the D2 data and the C2 data.
The D1 data and D2 data are acquired as follows: the original feature map data is processed by BN (batch normalization), a ReLU activation function and a convolution operation, then split along the channel dimension into two parts, namely the D1 data and the D2 data.
The C1 data and C2 data are acquired as follows: the D1 data and D2 data are combined into new feature map data, which is processed by BN, an activation function, convolution and dropout, then split along the channel dimension into two parts, namely the C1 data and the C2 data.
dropout processing refers to the operation of randomly dropping neural network elements from the network temporarily at a set probability to prevent overfitting.
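The dropout operation described above can be sketched as inverted dropout, a common formulation; the rescaling of survivors by 1/(1−p) is our assumption, chosen so that the expected activation is unchanged and no scaling is needed at test time.

```python
import numpy as np

def dropout(x, p, training=True, rng=None):
    """Temporarily drop each unit with probability p during training,
    rescaling the survivors by 1/(1-p) (inverted dropout)."""
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p   # keep each unit with probability 1-p
    return x * mask / (1.0 - p)
```

At inference time `training=False` makes the layer an identity, so the learned weights can be used directly.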
S22, constructing a dual-channel U-shaped convolutional neural network from the dual-channel BO modules.
As shown in fig. 4, the dual-path U-shaped convolutional neural network includes a left side unit and a right side unit.
The left unit comprises N dual-channel BO modules, N ≥ 2, with a pooling operation between every two adjacent dual-channel BO modules in the left unit.
The right unit comprises N dual-channel BO modules, with an up-sampling operation between every two adjacent dual-channel BO modules in the right unit.
The dual-channel BO modules in the left unit correspond one-to-one with those in the right unit and stack each other's features for reuse.
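The data flow through one dual-channel BO module, as described above, can be sketched as follows. This is our simplification: BN, activation, convolution and dropout are abstracted into the two callables `conv1` and `conv2`, and feature maps are plain (channels, H, W) arrays.

```python
import numpy as np

def bo_module(f, conv1, conv2):
    """Dual-channel BO module: a Sum channel (feature-value addition of
    D1 and C1) and a Dense channel (stacking of D2 and C2)."""
    d = conv1(f)                                  # BN + activation + conv
    d1, d2 = np.split(d, 2, axis=0)               # split along channels
    c = conv2(np.concatenate([d1, d2], axis=0))   # BN + act + conv + dropout
    c1, c2 = np.split(c, 2, axis=0)
    summed = d1 + c1                              # Sum channel
    dense = np.concatenate([d2, c2], axis=0)      # Dense channel (stacking)
    return np.concatenate([summed, dense], axis=0)
```

With identity mappings in place of the two processing stages, a (4, H, W) input yields a (6, H, W) output: 2 summed channels plus 4 stacked ones, which illustrates how the module both adds features and reuses them.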
S3, inputting the training set data into the dual-channel U-shaped convolutional neural network for training to obtain a learned convolutional neural network model.
As shown in FIG. 5, step S3 includes the following substeps S31-S39:
S31, dividing the training set data into m batches and initializing the derivatives of the loss function with respect to the convolution kernel weights and bias values to 0, namely:

ΔW^(l) = 0   (1)

Δb^(l) = 0   (2)

where ΔW^(l) denotes the derivative of the loss function with respect to the convolution kernel weights in the l-th convolutional/deconvolutional layer, and Δb^(l) the derivative of the loss function with respect to the convolution kernel bias values in that layer.
S32, randomly selecting a batch of as-yet-untrained training set data and inputting it into the dual-channel U-shaped convolutional neural network.
S33, computing the node parameters of each subsequent network unit in the dual-channel U-shaped convolutional neural network from the training set data, realizing the forward propagation of network training and outputting a prediction probability map.
S34, calculating the error between the prediction probability map and the positive samples in the training set data.
In the neural network computation, the difference between the score P1, computed from the feature map output by forward propagation, and the score P2, computed from the ground-truth labels, must be quantified; the cross-entropy loss function Loss is computed so that backpropagation can be applied. In the embodiment of the invention, the error between the prediction probability map and the positive samples in the training set data is calculated with the cross-entropy loss function (M denoting the number of positive samples):

L = -(1/M) Σ_x [ y ln a + (1 − y) ln(1 − a) ]   (3)

In the embodiment of the invention, after features of the training set data are extracted block by block, the artificially marked regions serve as positive samples and the background regions as negative samples. x indexes the network units in the neural network, y denotes the expected output data (here, the positive-sample data in the training set data), a denotes the actual output data of the network units (here, the prediction-probability-map data), and

a = W^(l) * x + b^(l)   (4)
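The cross-entropy loss of equation (3) can be checked numerically with a small sketch; the epsilon clipping is our own safeguard against log(0), not part of the patent.

```python
import numpy as np

def cross_entropy(a, y):
    """L = -(1/M) * sum(y*ln(a) + (1-y)*ln(1-a))  -- equation (3)."""
    eps = 1e-12
    a = np.clip(a, eps, 1.0 - eps)   # guard against log(0)
    return -np.mean(y * np.log(a) + (1.0 - y) * np.log(1.0 - a))
```

For a confident correct prediction (a close to y) the loss approaches 0, while confident wrong predictions are penalized heavily, which is what drives the backpropagation step that follows.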
S35, calculating by gradient descent the partial derivatives ∂L/∂W^(l) and ∂L/∂b^(l) of the error function L with respect to the convolution kernel weights and bias values, and adding them to ΔW^(l) and Δb^(l) to update ΔW^(l) and Δb^(l):

ΔW^(l) = ΔW^(l)′ + ∂L/∂W^(l)   (5)

Δb^(l) = Δb^(l)′ + ∂L/∂b^(l)   (6)

where ΔW^(l)′ and Δb^(l)′ denote, respectively, the accumulated derivatives of the loss function with respect to the convolution kernel weights and bias values before the update.
S36, judging whether all training set data have been input into the dual-channel U-shaped convolutional neural network for training; if so, one iteration is complete and the method proceeds to step S37, otherwise it returns to step S32.
S37, updating the convolution kernel weights W^(l) and bias values b^(l) from ΔW^(l) and Δb^(l) with the batch gradient descent algorithm:

W^(l) = W^(l)′ − α ( ΔW^(l) / m + λ W^(l)′ )   (7)

b^(l) = b^(l)′ − α Δb^(l) / m   (8)

where W^(l)′ and b^(l)′ denote, respectively, the convolution kernel weights and bias values before the update, m denotes the number of batches of training set data, α denotes the learning rate, and λ denotes the weight-decay (momentum-like) coefficient, which determines the influence of the previous iteration's parameters during the update.
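Equations (7) and (8) correspond to the following update step. This is a sketch; reading λ as a decay coefficient applied to the weights only, mirroring the classic batch-gradient-descent formulation, is our interpretation of the text.

```python
import numpy as np

def batch_update(W, b, dW_acc, db_acc, m, alpha, lam):
    """W <- W' - alpha*(dW_acc/m + lam*W')   -- equation (7)
       b <- b' - alpha*(db_acc/m)            -- equation (8)"""
    W_new = W - alpha * (dW_acc / m + lam * W)   # averaged gradient + decay
    b_new = b - alpha * (db_acc / m)             # bias gets no decay term
    return W_new, b_new
```

Dividing the accumulated derivatives by m averages the gradient over the m batches of one full iteration before the single parameter update is applied.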
S38, judging whether the current iteration count has reached a preset iteration threshold; if so, the method proceeds to step S39, otherwise it returns to step S32 for the next iteration.
S39, the learned convolutional neural network model is output, and the method proceeds to step S4.
S4, inputting the test set data into the learned convolutional neural network model and outputting the image segmentation result.
As shown in fig. 6, the test set data is input into the learned convolutional neural network model, and the resulting image segmentation results are shown in the first column of fig. 6. Fig. 7 reflects the accuracy and loss trends of the training set data and test set data: as the number of iterations increases, the accuracies on the training and test sets rise continuously, finally reaching 0.98 and 0.99 respectively, while the losses fall continuously to 0.04 and 0.03 respectively. A preset iteration threshold enforcing a sufficient number of iterations is therefore necessary.
In the embodiment of the present invention, two indexes, the dice value and the assd value, are used to evaluate the image segmentation predictions on four groups of test data. Table 1 compares the results before and after data enhancement; the predictions with data enhancement are clearly superior to those without it, so step S14 of the invention is necessary.
TABLE 1
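The dice index used in the evaluation above measures volume overlap between the predicted and ground-truth masks; a minimal sketch follows (assd, the average symmetric surface distance, requires surface extraction and is omitted here).

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient 2|A∩B| / (|A|+|B|) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total > 0 else 1.0  # empty masks: perfect
```

A dice value of 1 indicates perfect overlap and 0 indicates none, so higher values in Table 1 correspond to better segmentations.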
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention and are to be construed as being without limitation to such specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.
Claims (4)
1. A medical image segmentation method based on a dual-channel U-shaped convolution neural network is characterized by comprising the following steps:
S1, preprocessing functional magnetic resonance image data to be segmented to obtain training set data and test set data;
S2, constructing a dual-channel U-shaped convolutional neural network;
S3, inputting the training set data into the dual-channel U-shaped convolutional neural network for training to obtain a learned convolutional neural network model;
S4, inputting the test set data into the learned convolutional neural network model, and outputting an image segmentation result;
the step S2 includes the following sub-steps:
S21, constructing a dual-channel BO module;
S22, constructing a dual-channel U-shaped convolutional neural network from the dual-channel BO modules;
the dual-channel U-shaped convolutional neural network comprises a left side unit and a right side unit;
the left unit comprises N dual-channel BO modules, N ≥ 2, with a pooling operation between every two adjacent dual-channel BO modules in the left unit;
the right unit comprises N dual-channel BO modules, with an up-sampling operation between every two adjacent dual-channel BO modules in the right unit;
the dual-channel BO modules in the left unit correspond one-to-one with those in the right unit and stack each other's features for reuse;
the dual-channel BO module comprises a Sum channel for enabling eigenvalue addition of D1 data and C1 data and a Dense channel for enabling stacking of D2 data and C2 data;
the method for acquiring the D1 data and the D2 data comprises the following steps: after the original characteristic diagram data is subjected to BN, activation function and convolution operation, the original characteristic diagram data is divided into two parts according to channels, namely D1 data and D2 data;
the method for acquiring the C1 data and the C2 data comprises the following steps: synthesizing D1 data and D2 data into new feature map data, processing the new feature map data by BN, an activation function, convolution and dropout, and dividing the new feature map data into two parts according to channels, namely C1 data and C2 data;
the dropout process represents an operation of temporarily dropping neural network elements from the network at random with a set probability to prevent overfitting.
2. A medical image segmentation method as claimed in claim 1, characterized in that said step S1 comprises the sub-steps of:
S11, converting the format of the functional magnetic resonance image data to be segmented;
S12, normalizing the format-converted images to the [0,1] interval;
S13, proportionally dividing the normalized images into training set data and test set data;
and S14, performing data enhancement on the training set data.
3. The medical image segmentation method according to claim 2, wherein the data enhancement in step S14 includes:
scale transformation: enlarging or reducing the image by a specified scale factor, or, following the SIFT feature-extraction idea, constructing a scale space by filtering the image with specified scale factors, changing the size or degree of blur of the image content;
translation transformation: translating the image on the image plane;
rotation transformation: flipping the image in either the horizontal or vertical direction;
scaling transformation: the image is enlarged or reduced according to a set scale.
4. A medical image segmentation method as claimed in claim 1, characterized in that said step S3 comprises the sub-steps of:
S31, dividing the training set data into m batches and initializing the derivatives of the loss function with respect to the convolution kernel weights and bias values to 0, namely:

ΔW^(l) = 0   (1)

Δb^(l) = 0   (2)

where ΔW^(l) denotes the derivative of the loss function with respect to the convolution kernel weights in the l-th convolutional/deconvolutional layer, and Δb^(l) the derivative of the loss function with respect to the convolution kernel bias values in that layer;
S32, randomly selecting a batch of as-yet-untrained training set data and inputting it into the dual-channel U-shaped convolutional neural network;
S33, computing the node parameters of each subsequent network unit in the dual-channel U-shaped convolutional neural network from the training set data, realizing the forward propagation of network training, and outputting a prediction probability map;
S34, calculating the error between the prediction probability map and the positive samples in the training set data by the formula:

L = -(1/M) Σ_x [ y ln a + (1 − y) ln(1 − a) ]   (3)

where L denotes the error between the prediction probability map and the positive samples in the training set data, and M denotes the number of positive samples, a positive sample being an artificially marked region obtained by extracting features of the training set data block by block; x indexes the network units in the neural network, y denotes the positive-sample data in the training set data, a denotes the prediction-probability-map data, and

a = W^(l) * x + b^(l)   (4)
S35, calculating by gradient descent the partial derivatives ∂L/∂W^(l) and ∂L/∂b^(l) of the error function L with respect to the convolution kernel weights and bias values, and adding them to ΔW^(l) and Δb^(l) to update ΔW^(l) and Δb^(l):

ΔW^(l) = ΔW^(l)′ + ∂L/∂W^(l)   (5)

Δb^(l) = Δb^(l)′ + ∂L/∂b^(l)   (6)

where ΔW^(l)′ and Δb^(l)′ denote, respectively, the accumulated derivatives of the loss function with respect to the convolution kernel weights and bias values before the update;
S36, judging whether all training set data have been input into the dual-channel U-shaped convolutional neural network for training; if so, one iteration is complete and the method proceeds to step S37, otherwise it returns to step S32;
S37, updating the convolution kernel weights W^(l) and bias values b^(l) from ΔW^(l) and Δb^(l) with the batch gradient descent algorithm:

W^(l) = W^(l)′ − α ( ΔW^(l) / m + λ W^(l)′ )   (7)

b^(l) = b^(l)′ − α Δb^(l) / m   (8)

where W^(l)′ and b^(l)′ denote, respectively, the convolution kernel weights and bias values before the update, m denotes the number of batches of training set data, α denotes the learning rate, and λ denotes the weight-decay (momentum-like) coefficient;
S38, judging whether the current iteration count has reached a preset iteration threshold; if so, the method proceeds to step S39, otherwise it returns to step S32 for the next iteration;
S39, the learned convolutional neural network model is output, and the process proceeds to step S4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810203917.XA CN108492286B (en) | 2018-03-13 | 2018-03-13 | Medical image segmentation method based on dual-channel U-shaped convolutional neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810203917.XA CN108492286B (en) | 2018-03-13 | 2018-03-13 | Medical image segmentation method based on dual-channel U-shaped convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108492286A CN108492286A (en) | 2018-09-04 |
CN108492286B true CN108492286B (en) | 2020-05-05 |
Family
ID=63338633
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810203917.XA Expired - Fee Related CN108492286B (en) | 2018-03-13 | 2018-03-13 | Medical image segmentation method based on dual-channel U-shaped convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108492286B (en) |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109389584A (en) * | 2018-09-17 | 2019-02-26 | 成都信息工程大学 | Multiple dimensioned rhinopharyngeal neoplasm dividing method based on CNN |
CN109377501A (en) * | 2018-09-30 | 2019-02-22 | 上海鹰觉科技有限公司 | Remote sensing images naval vessel dividing method and system based on transfer learning |
CN109523560A (en) * | 2018-11-09 | 2019-03-26 | 成都大学 | A kind of three-dimensional image segmentation method based on deep learning |
CN109754403A (en) * | 2018-11-29 | 2019-05-14 | 中国科学院深圳先进技术研究院 | Tumour automatic division method and system in a kind of CT image |
CN109636812A (en) * | 2018-12-13 | 2019-04-16 | 银河水滴科技(北京)有限公司 | A kind of Rail Surface and contact net surface image dividing method and device |
CN109785344A (en) * | 2019-01-22 | 2019-05-21 | 成都大学 | The remote sensing image segmentation method of binary channel residual error network based on feature recalibration |
CN109919954B (en) * | 2019-03-08 | 2021-06-15 | 广州视源电子科技股份有限公司 | Target object identification method and device |
CN109919098B (en) * | 2019-03-08 | 2021-06-15 | 广州视源电子科技股份有限公司 | Target object identification method and device |
CN109949323B (en) * | 2019-03-19 | 2022-12-20 | 广东省农业科学院农业生物基因研究中心 | Crop seed cleanliness judgment method based on deep learning convolutional neural network |
CN110070175B (en) * | 2019-04-12 | 2021-07-02 | 北京市商汤科技开发有限公司 | Image processing method, model training method and device and electronic equipment |
CN110335276B (en) * | 2019-07-10 | 2021-02-26 | 四川大学 | Medical image segmentation model, method, storage medium and electronic device |
CN110555512B (en) * | 2019-07-30 | 2021-12-03 | 北京航空航天大学 | Data reuse method and device for binary convolution neural network |
CN110473243B (en) * | 2019-08-09 | 2021-11-30 | 重庆邮电大学 | Tooth segmentation method and device based on depth contour perception and computer equipment |
CN112446888A (en) * | 2019-09-02 | 2021-03-05 | 华为技术有限公司 | Processing method and processing device for image segmentation model |
CN111062252B (en) * | 2019-11-15 | 2023-11-10 | 浙江大华技术股份有限公司 | Real-time dangerous goods semantic segmentation method, device and storage device |
CN111127504B (en) * | 2019-12-28 | 2024-02-09 | 中国科学院深圳先进技术研究院 | Method and system for segmenting heart medical image of patient with atrial septal occlusion |
WO2021137756A1 (en) | 2019-12-30 | 2021-07-08 | Medo Dx Pte. Ltd | Apparatus and method for image segmentation using a deep convolutional neural network with a nested u-structure |
CN111325658B (en) * | 2020-02-19 | 2022-08-23 | 成都大学 | Color image self-adaptive decolorizing method |
CN111325750B (en) * | 2020-02-25 | 2022-08-16 | 西安交通大学 | Medical image segmentation method based on multi-scale fusion U-shaped chain neural network |
CN111402268B (en) * | 2020-03-16 | 2023-05-23 | 苏州科技大学 | Liver in medical image and focus segmentation method thereof |
CN112748382A (en) * | 2020-12-15 | 2021-05-04 | 杭州电子科技大学 | SPEED magnetic resonance imaging method based on CUNet artifact positioning |
CN113177981B (en) * | 2021-04-29 | 2022-10-14 | 中国科学院自动化研究所 | Double-channel craniopharyngioma invasiveness classification and focus region segmentation system thereof |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107220980A (en) * | 2017-05-25 | 2017-09-29 | 重庆理工大学 | A kind of MRI image brain tumor automatic division method based on full convolutional network |
CN107633486A (en) * | 2017-08-14 | 2018-01-26 | 成都大学 | Structure Magnetic Resonance Image Denoising based on three-dimensional full convolutional neural networks |
- 2018-03-13: CN CN201810203917.XA patent/CN108492286B/en, not active (Expired - Fee Related)
Non-Patent Citations (1)
Title |
---|
Research on Image Processing and Recognition Algorithms Based on Convolutional Neural Networks; Man Fenghuan; China Excellent Master's Theses Full-text Database, Information Science and Technology; 2018-02-15 (Issue 2); p. 20 * |
Also Published As
Publication number | Publication date |
---|---|
CN108492286A (en) | 2018-09-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108492286B (en) | Medical image segmentation method based on dual-channel U-shaped convolutional neural network | |
CN109584161A (en) | The Remote sensed image super-resolution reconstruction method of convolutional neural networks based on channel attention | |
CN111583285B (en) | Liver image semantic segmentation method based on edge attention strategy | |
CN109005398B (en) | Stereo image parallax matching method based on convolutional neural network | |
CN112233129B (en) | Deep learning-based parallel multi-scale attention mechanism semantic segmentation method and device | |
CN109102498B (en) | Method for segmenting cluster type cell nucleus in cervical smear image | |
CN110349170B (en) | Full-connection CRF cascade FCN and K mean brain tumor segmentation algorithm | |
CN110517272B (en) | Deep learning-based blood cell segmentation method | |
CN111696046A (en) | Watermark removing method and device based on generating type countermeasure network | |
CN111325750A (en) | Medical image segmentation method based on multi-scale fusion U-shaped chain neural network | |
CN115393293A (en) | Electron microscope red blood cell segmentation and positioning method based on UNet network and watershed algorithm | |
CN114972759A (en) | Remote sensing image semantic segmentation method based on hierarchical contour cost function | |
CN114639102B (en) | Cell segmentation method and device based on key point and size regression | |
CN115409846A (en) | Colorectal cancer focus region lightweight segmentation method based on deep learning | |
CN115170801A (en) | FDA-deep Lab semantic segmentation algorithm based on double-attention mechanism fusion | |
CN115953784A (en) | Laser coding character segmentation method based on residual error and feature blocking attention | |
CN114998756A (en) | Yolov 5-based remote sensing image detection method and device and storage medium | |
CN117935289A (en) | Diffusion model graphic symbol anomaly identification and correction method based on classifier | |
CN113436115A (en) | Image shadow detection method based on depth unsupervised learning | |
CN117437423A (en) | Weak supervision medical image segmentation method and device based on SAM collaborative learning and cross-layer feature aggregation enhancement | |
CN116524495A (en) | Traditional Chinese medicine microscopic identification method and system based on multidimensional channel attention mechanism | |
CN116071331A (en) | Workpiece surface defect detection method based on improved SSD algorithm | |
CN110890143B (en) | 2D convolution method introducing spatial information | |
CN114972155A (en) | Polyp image segmentation method based on context information and reverse attention | |
CN114419078A (en) | Surface defect region segmentation method and device based on convolutional neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 2020-05-05; Termination date: 2021-03-13 |