CN113538363A - Lung medical image segmentation method and device based on improved U-Net - Google Patents
- Publication number
- CN113538363A (application CN202110789538.5A)
- Authority
- CN
- China
- Prior art keywords
- net
- network
- improved
- adopting
- image segmentation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
Abstract
The invention discloses a lung medical image segmentation method and device based on an improved U-Net. A pre-acquired original CT image is normalized and binarized by a threshold method; the U-Net downsampling part is optimized with bottleneck residual modules to construct a U-Net optimization network; a Dice loss function is adopted for the U-Net optimization network; and the original U-Net network and the U-Net optimization network are trained with the NADAM optimization algorithm, with segmentation accuracy measured by the mean intersection-over-union (MIoU) index. The invention improves the traditional U-Net structure with residual blocks, which effectively accelerates convergence and raises accuracy, and uses the Dice loss function to judge the discrepancy between predicted values and ground truth. The invention effectively improves image detail quality: the boundary between the two lungs, small cavities, and the peripheral blood vessels of each bronchial level in the lung are segmented more accurately. It can guide image segmentation research and bring this front-line technique into clinical application.
Description
Technical Field
The invention belongs to the technical field of medical image segmentation, and particularly relates to a lung medical image segmentation method and device based on improved U-Net.
Background
The novel coronavirus epidemic has spread worldwide; many patients are asymptomatic or only mildly infected, the incubation period is long, and suspected cases are hard to judge, so analysis of lung images is an important aid. High-resolution medical images clearly present the lung lobes, trachea, and other tissues, and are the best standard for clinical research, disease diagnosis, and functional testing.
Image semantic segmentation, i.e. pixel-level classification and information labeling, interprets the global content of an image. In the medical field it supplies information about human tissue structures for tasks such as tumor and fetal-brain detection, and it has a considerable application prospect in computer vision. Its difficulty spans category, object, and scene levels. To establish a mapping from pixels to semantics, deep-learning structures are applied to image segmentation.
In computer vision, image segmentation assigns a label to each pixel of an image so that the image information is simplified and easier to analyze. Pixels with the same label usually share characteristics and visually delineate object boundaries. The simplest segmentation is threshold-based, converting a grayscale image into a binary image, e.g. histogram and fixed-threshold methods. Other conventional methods include boundary-based segmentation such as differential-operator methods, region-based segmentation such as region growing, graph-based segmentation such as Graph Cut and GrabCut, clustering-based segmentation such as the Mean Shift algorithm, and segmentation driven by external forces acting on a shape, such as geometric deformable models.
Since the FCN was proposed and image segmentation could be completed directly from input to output, CNN-based image segmentation has become the most widely used and best-performing approach. The input image passes through convolution, pooling, and other downsampling operations to extract features, and deconvolution upsampling then produces pixel-level labels. The general segmentation pipeline is data preparation, ROI refinement, segmentation by the network, and post-processing of the segmentation result.
Although the FCN achieves end-to-end pixel labeling, the results of FCN-8s are still not smooth or fine enough. U-Net, published at MICCAI 2015 as an FCN-based improvement, segmented neuronal structures in EM stacks better than previous networks and later won the ISBI cell tracking challenge on light-microscopy image segmentation. U-Net can be trained end to end on few data, laying a good foundation for the many semantic segmentation algorithms put into medical use. The U-shaped network has two parts: a contracting path that captures context and a symmetric expanding path that enables precise localization, and it needs fewer data to produce accurate classification. In its architecture, the many feature channels on the upsampling side let information propagate to higher-resolution layers. Human organ structure is stable and its semantics are relatively simple, and U-Net's combination of high-level semantic information with low-level features plays a key role in medical image semantic segmentation. Furthermore, U-Net is well interpretable, which helps doctors make diagnoses. The current U-Net structure still has shortcomings: although it performs well in medical image segmentation, its sensitivity to image details needs improvement because the network is shallow with few parameters, and gradient vanishing as the layers deepen may reduce accuracy.
Disclosure of Invention
The purpose of the invention is as follows: aiming at problems of existing lung CT image segmentation algorithms such as blurred boundaries and loss of non-connected regions, the invention provides a lung medical image segmentation method and device based on an improved U-Net.
The technical scheme is as follows: the invention provides a lung medical image segmentation method based on improved U-Net, which specifically comprises the following steps:
(1) normalizing a pre-acquired original CT image and binarizing it by a threshold method;
(2) optimizing the U-Net downsampling part with bottleneck residual modules to construct a U-Net optimization network;
(3) adopting a Dice loss function for the U-Net optimization network constructed in the step (2);
(4) training the original U-Net network and the U-Net optimization network with an NADAM optimization algorithm, and measuring segmentation accuracy with the mean intersection-over-union (MIoU) index.
Further, the step (2) is realized as follows:
on the basis of the conventional residual module, a bottleneck residual module is used: a 1×1 convolution is introduced, which reduces the dimensionality of the channel count and allows multiple feature maps to be linearly combined while the feature-map size is preserved; each residual module uses two ReLU functions, introducing more nonlinear mappings; a further 1×1 convolution is then introduced to reduce the computational dimensionality;
in the U-Net optimization structure with bottleneck residuals, the U-shaped structure consists of a downsampling path on the left and an upsampling path on the right; the downsampling contraction path contains five convolution stages, with bottleneck residuals added to the improved network structure:
first stage: a 7×7 convolutional layer, followed by a 3×3 max-pooling operation; then four convolution stages containing 3, 4, 23, and 3 bottleneck residual blocks respectively; the output feature map finally undergoes average pooling and a softmax operation;
the upsampling expansion path performs deconvolution operations while superimposing the corresponding downsampling feature maps four times to supplement detail, yielding a high-resolution feature map; because the feature-map size shrinks step by step during convolution and the two paths are not exactly symmetric, a Crop operation trims the downsampling feature map before each superposition.
Further, the step (3) is realized by the following formula:
L_Dice = 1 - 2|A∩B| / (|A| + |B|)
wherein |A∩B| is the intersection of sets A and B, which can be approximated as the element-wise product of the prediction map and the label, summed over the matrix elements; |A| is the sum of the elements of A; and |B| is likewise computed as the sum of the elements of B.
Further, the MIoU index in step (4) is:
MIoU = (1/K) * sum_i [ n_ii / (t_i + sum_j n_ji - n_ii) ]
wherein K is the number of image pixel classes; t_i is the total number of class-i pixels; n_ij is the number of pixels of true class i predicted as class j; and n_ji is the number of pixels of true class j predicted as class i.
Based on the same inventive concept, the invention also provides a pulmonary medical image segmentation device based on the improved U-Net, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the computer program is loaded into the processor to realize the pulmonary medical image segmentation method based on the improved U-Net.
Beneficial effects: compared with the prior art, the lung image segmentation optimization algorithm provided by the invention effectively improves image detail quality; the boundary between the two lungs, small cavities, and the peripheral blood vessels of each bronchial level in the lung are segmented more accurately; and the method can guide image segmentation research and bring this front-line technique into clinical application.
Drawings
FIG. 1 is a diagram of a U-Net optimization network structure after bottleneck residue is added;
fig. 2 is a diagram of a bottleneck residual block.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The invention provides a lung medical image segmentation method based on an improved U-Net. After normalizing the original medical image and binarizing the labels by a threshold method, the preprocessed image is fed to the network; the U-Net downsampling part is optimized with bottleneck residual modules to build a deeper network structure; the loss function is improved; and the original U-Net network and the improved network are both trained as a comparison group. The invention improves the traditional U-Net structure with residual blocks, effectively accelerating convergence and raising accuracy, and uses the Dice loss function to judge the discrepancy between predicted values and ground truth. The method specifically comprises the following steps:
step 1: and carrying out normalization and threshold value method binarization processing on the pre-acquired original CT image.
CT images suffer from many kinds of noise, including Poisson, Gaussian, and multiplicative noise. Owing to limitations of the imaging equipment, images may also show ghosting and blur. The data are therefore normalized to enhance the distinction between foreground and background. The label set is binarized by a threshold method, which improves precision to some degree and speeds up convergence.
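This preprocessing step can be sketched in NumPy as follows; the function name and the binarization threshold of 0.5 are illustrative assumptions, since the patent does not fix concrete values:

```python
import numpy as np

def preprocess_ct(image, threshold=0.5):
    """Min-max normalize a CT slice to [0, 1], then binarize it with a fixed
    threshold. The 0.5 threshold is a hypothetical choice, not from the patent."""
    img = image.astype(np.float64)
    lo, hi = img.min(), img.max()
    # Guard against a constant image, where min == max.
    normalized = (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    binary = (normalized >= threshold).astype(np.uint8)
    return normalized, binary
```

Normalizing first makes the threshold independent of the raw Hounsfield-unit range of each scan.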
Step 2: optimize the U-Net downsampling part with bottleneck residual modules to construct the U-Net optimization network, as shown in FIG. 1.
To alleviate the gradient-vanishing problem that appears as the network layers deepen, a residual module is introduced. The conventional residual module constructs an identity mapping as follows:
setting an input initial value x, and expecting to output H (x); if so:
H(x)=F(x)+x
By converting the learning target from the output H(x) to the residual F(x), the image features become easier to learn. In a residual block the input is split into two paths whose sum passes through a ReLU function: one path consists of two 3×3 convolution layers computing F(x), so that the block output is H(x) = F(x) + x; the other path is a direct shortcut from the input. During back-propagation the shortcut branch lets the gradient flow from one layer directly to an earlier layer, overcoming gradient vanishing when the network is deep.
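The shortcut arithmetic H(x) = F(x) + x can be sketched as follows; `conv_path` stands in for the two stacked 3×3 convolutions and is an assumption of this sketch, which illustrates the data flow rather than a trained layer:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, conv_path):
    """Residual block output: H(x) = F(x) + x, followed by ReLU.
    conv_path must be shape-preserving so the addition is valid."""
    return relu(conv_path(x) + x)
```

If the convolution path learns the zero mapping, the block degenerates to an identity (up to the ReLU), which is what makes very deep stacks easy to optimize.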
On the basis of the conventional residual module, the bottleneck residual module introduces a 1×1 convolution, which reduces the dimensionality of the channel count and allows multiple feature maps to be linearly combined while the feature-map size is preserved; secondly, whereas a stack of 3×3 convolutions contains only one ReLU function, each residual block of the improved network structure uses two ReLU functions and thus introduces more nonlinear mappings; and compared with any larger convolution kernel, the 1×1 convolution greatly reduces the computational dimensionality, as shown in the bottleneck residual module structure diagram of FIG. 2. For an input with 256 channels, the parameter counts compare as follows. The two 3×3 convolutions of a conventional residual block require
3² × 256² + 3² × 256² = 1179648
parameters, while after dimensionality reduction the bottleneck residual module requires
1² × 256 × 64 + 3² × 64² + 1² × 64 × 256 = 69632
parameters, only about 6% of the former.
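The parameter comparison above can be checked with a few lines of arithmetic (bias terms are ignored, as in the counts in the text):

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k x k convolution with c_in input and c_out output channels."""
    return k * k * c_in * c_out

# Conventional residual block on 256 channels: two stacked 3x3 convolutions.
plain = conv_params(3, 256, 256) + conv_params(3, 256, 256)

# Bottleneck block: 1x1 down to 64 channels, 3x3 at 64, 1x1 back up to 256.
bottleneck = (conv_params(1, 256, 64)
              + conv_params(3, 64, 64)
              + conv_params(1, 64, 256))
```

Evaluating these gives 1179648 and 69632 respectively, a ratio of roughly 6%.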
In the U-Net network optimization structure with bottleneck residuals, the U-shaped structure consists of a downsampling path on the left and an upsampling path on the right; the downsampling contraction path contains five convolution stages, with bottleneck residuals added to the improved network structure:
first stage: a 7×7 convolutional layer, followed by a 3×3 max-pooling operation; then four convolution stages containing 3, 4, 23, and 3 bottleneck residual blocks respectively, whose specific structure corresponds to Con2 to Con5 in FIG. 1; the final output feature map undergoes average pooling and a softmax operation, at a computational cost of 7.6 × 10⁹ FLOPs.
Assuming an original image size of 160 × 160 × 3 and computing the (one-sided) output size of each convolution layer as O = (W − F + 2P)/S + 1:
the first convolution stage yields an 80 × 80 × 64 feature map; the second stage a 40 × 40 × 256 feature map; the third stage a 20 × 20 × 512 feature map; the fourth stage a 10 × 10 × 1024 feature map; and the fifth stage a 5 × 5 × 2048 feature map.
The upsampling expansion path performs deconvolution operations while superimposing the corresponding downsampling feature maps four times to supplement detail, yielding a high-resolution feature map. Because the feature-map size shrinks step by step during convolution and the two paths are not exactly symmetric, a Crop operation trims the downsampling feature map before each superposition.
Step 3: adopt a Dice loss function for the constructed U-Net optimization network.
The improved network uses the Dice loss function:
L_Dice = 1 - 2|A∩B| / (|A| + |B|)
wherein |A∩B| is the intersection of sets A and B, which can be approximated as the element-wise product of the prediction map and the label, summed over the matrix elements; |A| is the sum of the elements of A; and |B| is likewise computed as the sum of the elements of B.
During training, the Dice loss function judges the discrepancy between the predicted values and the ground truth, and the network weights are modified accordingly to bring the predictions closer to the truth.
Step 4: train the original U-Net network and the U-Net optimization network with the NADAM optimization algorithm, and measure segmentation accuracy with the mean intersection-over-union (MIoU) index.
Since a smaller loss indicates predictions closer to the truth, the loss curve of the original U-Net during training is observed to approach 0 gradually; the improved U-Net yields a smaller loss value, reflecting the benefit of the improved network structure. Around epoch 80 of U-Net training the loss decreases more slowly, and a small oscillation appears around epoch 60 but subsides at a learning rate of 5 × 10⁻⁴.
Visual comparison of the segmentation results shows that the improved algorithm is more sensitive to details: the boundary between the two lungs, small cavities, and the peripheral blood vessels of each bronchial level in the lung are segmented more accurately.
The segmentation accuracy is measured by the mean intersection-over-union (MIoU) index:
MIoU = (1/K) * sum_i [ n_ii / (t_i + sum_j n_ji - n_ii) ]
wherein K is the number of image pixel classes; t_i is the total number of class-i pixels; n_ij is the number of pixels of true class i predicted as class j; and n_ji is the number of pixels of true class j predicted as class i.
The mean intersection-over-union averages the overlap between the computed result and the ground truth of the original image. The index is concise, widely used, and representative.
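The MIoU computation can be sketched in NumPy via a confusion matrix n, where n[i, j] counts pixels of true class i predicted as class j; skipping classes absent from both maps is an implementation assumption of this sketch:

```python
import numpy as np

def mean_iou(pred, truth, num_classes):
    """Mean IoU: (1/K) * sum_i n_ii / (t_i + sum_j n_ji - n_ii), t_i = sum_j n_ij."""
    n = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(truth.ravel(), pred.ravel()):
        n[t, p] += 1
    ious = []
    for i in range(num_classes):
        # t_i (row sum) + pixels predicted as i (column sum) - overlap counted twice.
        denom = n[i, :].sum() + n[:, i].sum() - n[i, i]
        if denom > 0:
            ious.append(n[i, i] / denom)
    return float(np.mean(ious))
```

For binary lung masks K = 2, and the index rewards overlap of both foreground and background with the labels.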
In the experiment, computing the mean intersection-over-union gives an MIoU of 0.97672 for the U-Net segmentation result and 0.98216 for the improved network, a modest but measurable improvement in segmentation accuracy.
The experiments use LUNA16, a subset of the NCI LIDC-IDRI lung nodule dataset. The original images are cropped and scaled to 320 × 320 pixels. The dataset contains 311 lung medical images with their corresponding labels; 286 are used as the training set and the remaining 25 as the test set to evaluate convolutional network performance. The original medical images are normalized and the label set is binarized by a threshold method; this preprocessing improves accuracy and convergence speed. The optimizer is NADAM with a learning rate of 1 × 10⁻⁵, a batch size of 4, and 100 epochs, i.e. four cases per step trained for 100 epochs; the augmented dataset is trained on both the U-Net network and the improved structure. The loss curves and segmentation result maps obtained during training form two comparison groups: the original U-Net network and the improved network. Observing the curve characteristics, comparing the detail features of the result maps, and computing the mean intersection-over-union demonstrate the improvement in segmentation accuracy.
Based on the same inventive concept, the invention also provides a pulmonary medical image segmentation device based on the improved U-Net, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the computer program is loaded into the processor to realize the pulmonary medical image segmentation method based on the improved U-Net.
The above embodiments are only intended to illustrate the technical idea of the present invention and do not thereby limit its protection scope; any modification made on the basis of the technical scheme according to the technical idea of the present invention falls within the protection scope of the present invention.
Claims (5)
1. A lung medical image segmentation method based on improved U-Net is characterized by comprising the following steps:
(1) normalizing a pre-acquired original CT image and binarizing it by a threshold method;
(2) optimizing the U-Net downsampling part with bottleneck residual modules to construct a U-Net optimization network;
(3) adopting a Dice loss function for the U-Net optimization network constructed in step (2);
(4) training the original U-Net network and the U-Net optimization network with an NADAM optimization algorithm, and measuring segmentation accuracy with the mean intersection-over-union (MIoU) index.
2. The improved U-Net based lung medical image segmentation method according to claim 1, wherein the step (2) is implemented as follows:
on the basis of the conventional residual module, a bottleneck residual module is used: a 1×1 convolution is introduced, which reduces the dimensionality of the channel count and allows multiple feature maps to be linearly combined while the feature-map size is preserved; each residual module uses two ReLU functions, introducing more nonlinear mappings; a further 1×1 convolution is then introduced to reduce the computational dimensionality;
in the U-Net optimization structure with bottleneck residuals, the U-shaped structure consists of a downsampling path on the left and an upsampling path on the right; the downsampling contraction path contains five convolution stages, with bottleneck residuals added to the improved network structure:
first stage: a 7×7 convolutional layer, followed by a 3×3 max-pooling operation; then four convolution stages containing 3, 4, 23, and 3 bottleneck residual blocks respectively; the output feature map finally undergoes average pooling and a softmax operation;
the upsampling expansion path performs deconvolution operations while superimposing the corresponding downsampling feature maps four times to supplement detail, yielding a high-resolution feature map; because the feature-map size shrinks step by step during convolution and the two paths are not exactly symmetric, a Crop operation trims the downsampling feature map before each superposition.
3. The improved U-Net based lung medical image segmentation method according to claim 1, wherein the step (3) is implemented by the following formula:
L_Dice = 1 - 2|A∩B| / (|A| + |B|)
wherein |A∩B| is the intersection of sets A and B, which can be approximated as the element-wise product of the prediction map and the label, summed over the matrix elements; |A| is the sum of the elements of A; and |B| is likewise computed as the sum of the elements of B.
4. The method for pulmonary medical image segmentation based on improved U-Net as claimed in claim 1, wherein the MIoU index in step (4) is:
MIoU = (1/K) * sum_i [ n_ii / (t_i + sum_j n_ji - n_ii) ]
wherein K is the number of image pixel classes; t_i is the total number of class-i pixels; n_ij is the number of pixels of true class i predicted as class j; and n_ji is the number of pixels of true class j predicted as class i.
5. An improved U-Net based pulmonary medical image segmentation apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the computer program when loaded into the processor implements the improved U-Net based pulmonary medical image segmentation method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110789538.5A CN113538363A (en) | 2021-07-13 | 2021-07-13 | Lung medical image segmentation method and device based on improved U-Net |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110789538.5A CN113538363A (en) | 2021-07-13 | 2021-07-13 | Lung medical image segmentation method and device based on improved U-Net |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113538363A true CN113538363A (en) | 2021-10-22 |
Family
ID=78127682
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110789538.5A Pending CN113538363A (en) | 2021-07-13 | 2021-07-13 | Lung medical image segmentation method and device based on improved U-Net |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113538363A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109509178A (en) * | 2018-10-24 | 2019-03-22 | 苏州大学 | A kind of OCT image choroid dividing method based on improved U-net network |
CN110930416A (en) * | 2019-11-25 | 2020-03-27 | 宁波大学 | MRI image prostate segmentation method based on U-shaped network |
CN112785617A (en) * | 2021-02-23 | 2021-05-11 | 青岛科技大学 | Automatic segmentation method for residual UNet rectal cancer tumor magnetic resonance image |
Non-Patent Citations (1)
Title |
---|
Zhang Xiaopeng: "Abdominal MRI Image Segmentation Based on Multi-task Deep Learning", China Master's Theses Full-text Database, no. 05, 15 May 2021 (2021-05-15), pages 1 - 85 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114937022A (en) * | 2022-05-31 | 2022-08-23 | 天津大学 | Novel coronary pneumonia disease detection and segmentation method |
CN114937022B (en) * | 2022-05-31 | 2023-04-07 | 天津大学 | Novel coronary pneumonia disease detection and segmentation method |
CN114862739A (en) * | 2022-07-06 | 2022-08-05 | 珠海市人民医院 | Intelligent medical image enhancement method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113077471B (en) | Medical image segmentation method based on U-shaped network | |
CN111145170B (en) | Medical image segmentation method based on deep learning | |
CN110930416B (en) | MRI image prostate segmentation method based on U-shaped network | |
CN110176012B (en) | Object segmentation method in image, pooling method, device and storage medium | |
CN111951288B (en) | Skin cancer lesion segmentation method based on deep learning | |
CN111325750B (en) | Medical image segmentation method based on multi-scale fusion U-shaped chain neural network | |
CN113379773B (en) | Segmentation model establishment and segmentation method and device based on dual-attention mechanism | |
CN113223005B (en) | Thyroid nodule automatic segmentation and grading intelligent system | |
CN114092439A (en) | Multi-organ instance segmentation method and system | |
CN113344951A (en) | Liver segment segmentation method based on boundary perception and dual attention guidance | |
CN113538363A (en) | Lung medical image segmentation method and device based on improved U-Net | |
CN117078692B (en) | Medical ultrasonic image segmentation method and system based on self-adaptive feature fusion | |
CN111724401A (en) | Image segmentation method and system based on boundary constraint cascade U-Net | |
CN115375711A (en) | Image segmentation method of global context attention network based on multi-scale fusion | |
Shan et al. | SCA-Net: A spatial and channel attention network for medical image segmentation | |
CN114511502A (en) | Gastrointestinal endoscope image polyp detection system based on artificial intelligence, terminal and storage medium | |
CN117036288A (en) | Tumor subtype diagnosis method for full-slice pathological image | |
CN116229074A (en) | Progressive boundary region optimized medical image small sample segmentation method | |
CN116091458A (en) | Pancreas image segmentation method based on complementary attention | |
CN115222651A (en) | Pulmonary nodule detection system based on improved Mask R-CNN | |
CN114565626A (en) | Lung CT image segmentation algorithm based on PSPNet improvement | |
CN113469962A (en) | Feature extraction and image-text fusion method and system for cancer lesion detection | |
CN116740041B (en) | CTA scanning image analysis system and method based on machine vision | |
CN117710681A (en) | Semi-supervised medical image segmentation method based on data enhancement strategy | |
Cheng et al. | EA-Net: Research on skin lesion segmentation method based on U-Net |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||