NL2024772B1 - Leukocyte localization and segmentation method based on deep neural network - Google Patents
- Publication number
- NL2024772B1 (application NL2024772A)
- Authority
- NL
- Netherlands
- Prior art keywords
- segmentation
- leukocyte
- localization
- feature
- network
- Prior art date
Classifications
- G06V 20/69: Microscopic objects, e.g. biological cells or cellular parts
- G06T 7/11: Region-based segmentation
- G06T 7/187: Segmentation or edge detection involving region growing, region merging, or connected component labelling
- G06T 2207/10056: Microscopic image
- G06T 2207/20016: Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform
- G06T 2207/20081: Training; learning
- G06T 2207/20084: Artificial neural networks [ANN]
- G06T 2207/20112: Image segmentation details
- G06T 2207/20152: Watershed segmentation
Abstract
The invention provides a leukocyte localization and segmentation method based on a deep neural network. The method comprises: step S1, feature extraction stage: designing an improved feature pyramid network (FPN) to extract pyramid leukocyte features to form pyramid feature maps; step S2, region proposal stage: using a region proposal network (RPN) to locate regions where leukocytes may exist in the pyramid feature maps to obtain proposed regions; step S3, prediction stage: first, using an RoI Align layer to perform bilinear interpolation to align the localization results of the region proposal stage and map each proposed region to a fixed-size feature map, and then feeding the fixed-size feature maps into a localization branch and a mask branch for final localization and segmentation, thus realizing leukocyte segmentation. The invention significantly improves segmentation precision and has good robustness for blood cell images acquired under different environments and preparation techniques.
Description
LEUKOCYTE LOCALIZATION AND SEGMENTATION METHOD BASED ON A DEEP NEURAL NETWORK
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a leukocyte localization and segmentation method based on a deep neural network.
Background
The total number of WBCs (White Blood Cells, also known as leukocytes) in the blood, the proportion and morphology of the various types of leukocytes, and related information are important indicators for the diagnosis of human blood diseases such as leukemia. An important part of routine blood tests in hospitals is to classify and count leukocytes and to analyze abnormal morphology. At present, domestic hospitals usually first use a blood cell analyzer based on the electrical impedance method (a physical method) and the flow analysis method (a physical-chemical method) to perform blood cell classification and counting. When the blood cell count is abnormal, or the attending doctor suspects that a patient has a blood disease, the laboratory doctor will prepare a blood smear, stain it, and examine it under a microscope to confirm the classification and count of leukocytes and analyze abnormal morphology. The accuracy of manual microscopy depends on the professional skill of the doctor; it suffers from strong subjectivity and large inter-observer differences, it is time-consuming and labor-intensive, and the doctor's visual fatigue is likely to affect the precision of the test. It is therefore desirable to replace the human eye with a camera and the human brain with a computer to achieve segmentation and classification of leukocytes and assist doctors in microscopic examination. In recent years, the rapid development of deep learning, image processing, pattern recognition, and related technologies has made this possible. Leukocyte images can be obtained by photographing blood smears with a digital imaging device. Unstained leukocytes, which are similar in color to the background, are difficult to recognize due to low contrast.
For this reason, when preparing blood smears, staining is usually performed to enhance the contrast between leukocytes and the background and improve recognizability. Standard blood smear preparation commonly uses Wright's staining or Giemsa staining, which achieve a good and stable staining effect; however, staining usually takes more than ten minutes, which is too slow to meet the needs of wide-scale clinical application. A research team at Huazhong University of Science and Technology, led by professors Liu Jianguo and Wang Guoyou, proposed a method for rapid preparation of blood smears that shortens the staining time to about ten seconds; however, its staining effect is unstable, it easily produces dark impurities and a contaminated background, and it dissolves red blood cells, which have diagnostic value for some blood diseases. The challenges of leukocyte segmentation are therefore: (1) the staining preparation process, individual differences, disease differences, and category differences may cause large variations in the color and morphology of leukocytes; (2) low contrast between the cytoplasm and the background, and interference from cell adhesion and staining impurities; and (3) poor image quality of leukocyte images.
Leukocyte segmentation is intended to extract the region where a single leukocyte is located from an image of stained human peripheral blood cells, and then segment the nucleus and cytoplasm. In recent years, scholars at home and abroad have conducted a series of studies on leukocyte segmentation. Based on the techniques used in existing methods, we classify them as supervised and unsupervised leukocyte segmentation. Unsupervised leukocyte segmentation directly implements segmentation based on features of leukocytes such as color and brightness; the most commonly used technique is threshold segmentation, followed in order of frequency by morphological transformation, fuzzy theory, clustering, deformation models, watershed segmentation, region merging, visual attention models, and edge detection. Supervised leukocyte segmentation treats the image segmentation problem as an image classification problem and is implemented as follows: features (such as color and texture) of the training sample images are extracted first, a classifier is then trained on those features, and finally the trained classifier is used to classify the pixels of a test image to recognize the region where leukocytes are located. The most commonly used supervised technique is the support vector machine, followed in order of frequency by neural networks, nearest neighbor classifiers, extreme learning machines, and random forest classifiers.
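Threshold segmentation, the most commonly used unsupervised technique mentioned above, can be illustrated with a minimal Otsu-style sketch; the toy image and variable names are illustrative, not from the patent:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()  # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2                  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# toy bimodal image: a dark "cell" on a bright background
img = np.full((8, 8), 200, dtype=np.uint8)
img[2:5, 2:5] = 40
t = otsu_threshold(img)
mask = img < t  # foreground = darker-than-threshold pixels
```

This is exactly the kind of color/brightness-only rule that, as the passage notes, struggles with low contrast and staining variation, motivating the learned approach of the invention.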
Recently, CNN (Convolutional Neural Network)-based methods have achieved remarkable success in the fields of computer vision and image processing. In medical image segmentation, due to its powerful feature learning and representation capabilities, CNN-based methods have also been widely used. Among these methods, the fully convolutional network (FCN) has demonstrated good performance in biological cell and organ segmentation.
U-Net is developed from FCN and adds skip connections between an encoder and a decoder.
By extending this symmetric autoencoder design, high-resolution features in the encoding path are combined with the upsampled output to locate targets in the image.
The U-Net network first trains FCN, learns a rough model for implementing pixel-level prediction of nucleus segmentation, and then crops image sub-regions where the cell nuclei lie, from the rough prediction and the original image, and then uses a graph-based method to obtain a refined segmentation.
The U-Net network is used to recognize and segment Drosophila heart regions at different developmental stages.
In addition, the CNN may also be used to construct a focus stack based method for automatically detecting Plasmodium falciparum malaria from blood smears.
However, the above-mentioned CNN-based methods all directly segment cells or organs on the entire image and are easily affected by complex backgrounds.
Summary of the Invention The object of the invention is to improve the precision of leukocyte segmentation in an image, and to provide a leukocyte localization and segmentation method based on a deep neural network.
This method not only can significantly improve the segmentation precision, but also has good robustness for blood cell images under different acquisition environments and preparation techniques.
To achieve the above objective, the technical solution of the invention is: a leukocyte localization and segmentation method based on a deep neural network, including the following steps: step S1, feature extraction stage: designing an improved feature pyramid network (FPN) to extract pyramid leukocyte features to form pyramid feature maps; step S2, region proposal stage: using a region proposal network (RPN) to locate regions where leukocytes may exist in the pyramid feature maps to obtain proposed regions; and step S3, prediction stage: first, using an RoI Align layer to perform bilinear interpolation to align the localization results of the region proposal stage and map each proposed region to a fixed-size feature map, and then feeding the fixed-size feature maps into a localization branch and a mask branch for final localization and segmentation, thus realizing leukocyte segmentation.
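The bilinear interpolation used by the RoI Align layer in step S3 can be sketched for a single sampling point; note that the real layer samples several points per output bin and pools them, so this single-point, single-channel version is a simplification:

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Sample feature map `feat` (H x W) at a fractional location (y, x)."""
    h, w = feat.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)  # clamp at the border
    dy, dx = y - y0, x - x0
    # interpolate along x on the two rows, then along y
    top = feat[y0, x0] * (1 - dx) + feat[y0, x1] * dx
    bot = feat[y1, x0] * (1 - dx) + feat[y1, x1] * dx
    return top * (1 - dy) + bot * dy

feat = np.array([[0.0, 1.0],
                 [2.0, 3.0]])
v = bilinear_sample(feat, 0.5, 0.5)  # midpoint of the four values
```

Because proposed regions have fractional coordinates, this fractional sampling is what lets RoI Align avoid the quantization error of RoI Pooling.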
In an embodiment of the invention, in step S1, the improved FPN consists of three parts: a bottom-up pathway module, a top-down pathway module, and a lateral connection module.
In one embodiment of the invention, the bottom-up pathway module is composed of an improved ResNet50 including 5 building blocks. That is, blood-cell-image-oriented network structure optimization is carried out on the original ResNet50 network used to extract features of natural scene images. The step is specifically implemented as follows: 1) the improved conv_1 module uses two convolutional layers with a convolution kernel size of 3x3; 2) the numbers of building blocks in the original conv3_x and conv4_x modules are reduced to 2 and 3, respectively; and 3) the output of the last layer of each building block constitutes the intermediate results of the pyramid feature maps. In the top-down pathway module, nearest-neighbor upsampling is used to perform a lateral connection operation on the intermediate features extracted by the bottom-up pathway module, i.e., a feature map enlargement with a scale of 2 is performed, and the enlarged results are connected and merged with the corresponding original intermediate feature maps; pyramid feature maps are then constructed to further expand the feature resolution of the target region.
In an embodiment of the invention, step S2 is implemented as follows: first, mapping a feature map extracted from a sliding window to a 2048-dimensional feature vector, where the feature mapping is implemented by a convolutional layer with a convolution kernel size of 3x3; then, passing the feature vector through two convolutional layers with a convolution kernel size of 1x1 to achieve the final box classification and box regression, obtaining a 2k-dimensional score output and a 4k-dimensional position output, respectively, where the score is used to evaluate the probability that a box belongs to the leukocyte region.
In an embodiment of the invention, after step S3, a performance measurement of the leukocyte segmentation result is needed to optimize the overall network. Specifically, the performance measurement is implemented as follows: a multi-task loss function is provided to guide the learning of the network, where the multi-task loss function is the sum of the Box Localization loss $L_{loc}$, the Box Classification loss $L_{cls}$, and the Mask Segmentation loss $L_{mask}$, defined as:

$$L = L_{loc} + L_{cls} + L_{mask} \qquad (1)$$

where $L_{loc}$ and $L_{cls}$ are defined as in reference [3], and $L_{mask}$ is defined by the binary cross-entropy loss:

$$L_{mask} = -\frac{1}{m^2} \sum_{1 \le i,j \le m} \left[ y_{ij} \log \hat{y}_{ij} + (1 - y_{ij}) \log\left(1 - \hat{y}_{ij}\right) \right] \qquad (2)$$

where $y_{ij}$ represents the true category label of pixel $(i, j)$, $\hat{y}_{ij}$ represents the predicted value for pixel $(i, j)$, $m$ is the side length of the mask, and the binary labels 1 and 0 indicate that the current pixel belongs to the leukocyte category and the non-leukocyte category, respectively.
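A minimal numpy sketch of the mask loss of Eq. (2); the tensor shapes and the numerical-stability constant `eps` are illustrative assumptions, not specified in the patent:

```python
import numpy as np

def mask_loss(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy averaged over an m x m mask (Eq. 2)."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    m2 = y_true.size                        # m * m pixels
    return -np.sum(y_true * np.log(y_pred)
                   + (1 - y_true) * np.log(1 - y_pred)) / m2

y_true = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
perfect = mask_loss(y_true, y_true)                   # near 0
uncertain = mask_loss(y_true, np.full((2, 2), 0.5))   # ln 2 per pixel
```

A perfect prediction drives the loss toward zero, while a maximally uncertain prediction of 0.5 everywhere yields ln 2, which is the behavior the training signal relies on.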
Compared with the prior art, the invention has the following beneficial effects: the invention introduces deep learning technology into the field of blood leukocyte segmentation and provides an end-to-end leukocyte localization and segmentation method based on a deep neural network. The method of the invention is implemented as follows: based on the characteristics of leukocyte images, more distinctive leukocyte features are extracted using an improved feature pyramid network composed of ResNet residual blocks; then the classification of the leukocyte proposed regions is achieved through region classification and regression; finally, the leukocytes in the proposed regions after RoI Align are accurately located and classified to achieve leukocyte segmentation. Experimental results on several blood cell image datasets confirm that the method of the invention significantly improves the precision of leukocyte segmentation.
Brief Description of the Drawings
FIG. 1 is a flowchart of the method according to the invention; FIG. 2 is the network model structure according to the invention; FIG. 3 is the improved FPN structure according to the invention; FIG. 4 is the RPN structure for leukocyte localization according to the invention; FIG. 5 is a box plot of the performance comparison of three deep learning methods on four image datasets under six measures; FIG. 6 shows three best segmentation results and three worst segmentation results of the four algorithms on the dataset Dataset1; FIG. 7 shows three best segmentation results and three worst segmentation results of the four algorithms on the dataset Dataset2; FIG. 8 shows three best segmentation results and three worst segmentation results of the four algorithms on the dataset BCISC; and FIG. 9 shows three best segmentation results and three worst segmentation results of the four algorithms on the dataset LISC.
Detailed Description
The technical solution of the invention will be described below in detail with reference to the accompanying drawings. The invention provides a leukocyte localization and segmentation method based on a deep neural network, including the following steps: step S1, feature extraction stage: designing an improved feature pyramid network (FPN) to extract pyramid leukocyte features to form pyramid feature maps; step S2, region proposal stage: using a region proposal network (RPN) to locate regions where leukocytes may exist in the pyramid feature maps to obtain proposed regions; and step S3, prediction stage: first, using an RoI Align layer to perform bilinear interpolation to align the localization results of the region proposal stage and map each proposed region to a fixed-size feature map, and then feeding the fixed-size feature maps into a localization branch and a mask branch for final localization and segmentation, thus realizing leukocyte segmentation.
The specific implementation process of the invention is as follows. FIG. 1 shows a flowchart of the leukocyte localization and segmentation method based on a deep neural network according to the invention. To achieve the segmentation of blood leukocytes, the method of the invention (Leukocyte Mask) treats the leukocyte localization problem as a pixel-level binary classification problem, that is, classifying pixels into target (leukocyte) pixels and background (non-leukocyte) pixels. To make full use of leukocyte features such as shape, color, and texture, as well as information about their spatial positions in the image, the algorithm of the invention provides an improved Mask R-CNN [1] deep neural network model for localization and segmentation of leukocytes. In this network model, a leukocyte-oriented feature pyramid network (FPN) is designed to extract pyramid leukocyte features, paving the way for the subsequent localization and segmentation of leukocytes. The network model structure provided by the invention is shown in FIG. 2. It consists of three stages: Feature Extraction, Region Proposal, and Prediction.
1. Feature Extraction
In the feature extraction stage, an improved feature pyramid network (FPN) [2] is designed to extract discriminative and stable leukocyte features, laying a foundation for leukocyte localization in the next stage. The improved FPN structure, shown in FIG. 3, consists of three parts: a bottom-up pathway module, a top-down pathway module, and a lateral connection module. The bottom-up pathway module is composed of the improved ResNet50, including 5 building blocks; the detailed parameter configuration of each building block is shown on the left side of FIG. 3. The original ResNet50 network, used to extract features of natural scene images, is subjected to blood-cell-image-oriented network structure optimization. First, whereas the original conv_1 module uses a single convolutional layer with a convolution kernel size of 7x7, the improved conv_1 module uses two convolutional layers with a convolution kernel size of 3x3 to extract fine-grained leukocyte features. Second, because different shooting environments and blood smear preparation techniques may produce different cell colors, the numbers of building blocks in the original conv3_x and conv4_x modules are reduced to 2 and 3, respectively, to avoid the overfitting that may occur in network training. Finally, the output of the last layer of each module constitutes the intermediate results of the pyramid feature maps. In the top-down pathway module, nearest-neighbor upsampling [3] is used to perform a lateral connection operation on the intermediate features extracted by the bottom-up pathway module, i.e., a feature map enlargement with a scale of 2 is performed, and the enlarged results are connected and merged with the corresponding original intermediate feature maps, as shown on the right side of FIG. 3. Pyramid feature maps are then formed, denoted as P2, P3, P4, P5, and P6.
Among them, P6 is simply a scale-2 upsampling result of P5 and is used to further expand the feature resolution of the target region and improve the final segmentation precision. Finally, the pyramid feature maps are used to achieve the localization of leukocytes in the Region Proposal stage.
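The top-down pathway's scale-2 nearest-neighbor upsampling and lateral merge can be sketched on a single-channel toy feature map; the real network operates on multi-channel tensors and typically applies a 1x1 convolution on the lateral branch before the merge, which this simplified sketch omits:

```python
import numpy as np

def upsample2_nearest(feat):
    """Nearest-neighbor upsampling with a scale of 2 (top-down pathway)."""
    return feat.repeat(2, axis=0).repeat(2, axis=1)

def merge(top_down, lateral):
    """Lateral connection: combine the upsampled coarse map with the finer map."""
    return upsample2_nearest(top_down) + lateral

c5 = np.array([[1.0, 2.0],
               [3.0, 4.0]])   # coarse intermediate feature (e.g. last-block output)
c4 = np.zeros((4, 4))         # finer intermediate feature at twice the resolution
p4 = merge(c5, c4)            # one level of the pyramid feature maps
```

Each 2x2 upsampling step doubles the spatial resolution so the coarse, semantically strong map can be fused with the finer map of the matching level, which is how the pyramid maps P2-P5 are built level by level.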
2. Region Proposal
In the region proposal stage, all regions where leukocytes may exist in the image are located. Regions where leukocytes may exist in the pyramid feature maps are located using the RPN (Region Proposal Network) [1], as shown in FIG. 4. First, a feature map extracted from a sliding window is mapped to a 2048-dimensional feature vector, where the feature mapping is implemented by a convolutional layer with a convolution kernel size of 3x3; then, the feature vector passes through two convolutional layers with a convolution kernel size of 1x1 to achieve the final box classification and box regression, obtaining a 2k-dimensional score output and a 4k-dimensional position output (x coordinate, y coordinate, box width, box height), respectively, where the score is used to evaluate the probability that a box belongs to the leukocyte region.
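The 4k position outputs of an RPN are conventionally regression deltas decoded against k anchor boxes; the following sketch uses the standard Faster R-CNN parameterization, which is an assumption here since the patent states only the output layout, not the encoding:

```python
import math

def decode_box(anchor, deltas):
    """Decode one (tx, ty, tw, th) regression output against its anchor.

    Uses the standard Faster R-CNN box parameterization (assumed):
    centers are shifted proportionally to the anchor size, and the
    width/height are scaled exponentially.
    """
    xa, ya, wa, ha = anchor   # anchor center and size
    tx, ty, tw, th = deltas
    x = xa + wa * tx          # shift the center
    y = ya + ha * ty
    w = wa * math.exp(tw)     # scale width and height
    h = ha * math.exp(th)
    return x, y, w, h

# a 20x20 anchor centered at (50, 50), shifted right and doubled in height
box = decode_box((50.0, 50.0, 20.0, 20.0), (0.5, 0.0, 0.0, math.log(2.0)))
```

The exponential parameterization keeps predicted widths and heights positive regardless of the raw network output, which is why it is the common choice for box regression heads.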
3. Prediction
In the prediction stage, as shown in FIG. 2, an RoI Align layer [1] is first used to perform bilinear interpolation to align the localization results of the region proposal stage, and each proposed region is mapped to a fixed-size feature map; the fixed-size feature maps are then respectively used as inputs of a localization branch and a mask branch for final localization and pixel-level segmentation.
Models based on a deep neural network are supervised methods and require model training. The network model provided by the invention uses a multi-task loss function to guide the learning of the network during the training of the entire network, where the multi-task loss function is the sum of the Box Localization loss $L_{loc}$, the Box Classification loss $L_{cls}$, and the Mask Segmentation loss $L_{mask}$:

$$L = L_{loc} + L_{cls} + L_{mask} \qquad (1)$$

where $L_{loc}$ and $L_{cls}$ are defined as in reference [3], and $L_{mask}$ is defined by the binary cross-entropy loss:

$$L_{mask} = -\frac{1}{m^2} \sum_{1 \le i,j \le m} \left[ y_{ij} \log \hat{y}_{ij} + (1 - y_{ij}) \log\left(1 - \hat{y}_{ij}\right) \right] \qquad (2)$$

where $y_{ij}$ represents the true category label of pixel $(i, j)$, $\hat{y}_{ij}$ represents the predicted value for pixel $(i, j)$, $m$ is the side length of the mask, and the binary labels 1 and 0 indicate that the current pixel belongs to the leukocyte category and the non-leukocyte category, respectively.
Test results
To evaluate the performance of the leukocyte segmentation algorithm, a 50-fold cross-validation experiment is performed on four datasets: Dataset1 (300 fast-stained images), Dataset2 (100 standard-stained images), BCISC (268 standard-stained images), and LISC (257 standard-stained images). The segmentation performance of the algorithm on the four datasets is measured under six common segmentation measures. The first three measures, i.e., precision, Dice coefficient, and mIoU (mean Intersection over Union), are commonly used to measure the performance of segmentation models based on deep learning; the larger the measurement value, the better the segmentation performance. The last three measures, i.e., false positive rate (FPR), false negative rate (FNR), and misclassification error (ME), are often used to measure the performance of traditional segmentation models; the smaller the measurement value, the better the segmentation performance. These measures are defined as:

$$\mathrm{Precision} = \frac{|F_a \cap F_m|}{|F_a|}, \qquad \mathrm{Dice} = \frac{2\,|F_a \cap F_m|}{|F_a| + |F_m|}, \qquad \mathrm{mIoU} = \frac{1}{2}\left(\frac{|F_a \cap F_m|}{|F_a \cup F_m|} + \frac{|B_a \cap B_m|}{|B_a \cup B_m|}\right)$$

$$\mathrm{FPR} = \frac{|B_m \cap F_a|}{|B_m|}, \qquad \mathrm{FNR} = \frac{|F_m \cap B_a|}{|F_m|}, \qquad \mathrm{ME} = 1 - \frac{|B_m \cap B_a| + |F_m \cap F_a|}{|B_m| + |F_m|}$$

where $B_m$ and $F_m$ represent the background and target of the manual reference segmentation, $B_a$ and $F_a$ represent the background and target in the segmentation result of the automatic segmentation algorithm, and $|\cdot|$ represents the number of elements in a set. The values of the six measures all lie between 0 and 1: lower ME, FPR, and FNR values represent better segmentation results, and conversely, higher Precision, Dice, and mIoU values represent better segmentation effects.
FIG. 6 shows three best segmentation results and three worst segmentation results of the four algorithms on the dataset Dataset1, where the four rows correspond to the segmentation results of the algorithms Watershed, FCN, U-Net, and the method (Leukocyte Mask) of the invention; columns 1-3 are the three best segmentation results, columns 4-6 are the three worst segmentation results; the blue dotted line indicates the manual segmentation result, and the red solid line indicates the algorithm segmentation result.
FIG. 7 shows three best segmentation results and three worst segmentation results of the four algorithms on the dataset Dataset2, where four rows correspond to the segmentation results of the algorithms Watershed, FCN, U-Net, and the method (Leukocyte Mask) of the invention; columns 1-3 are the three best segmentation results, columns 4-6 are three worst segmentation results; the blue dotted line indicates the manual segmentation result, and the red solid line indicates the algorithm segmentation result.
FIG. 8 shows three best segmentation results and three worst segmentation results of the four algorithms on the dataset BCISC, where four rows correspond to the segmentation results of the algorithms Watershed, FCN, U-Net, and the method (Leukocyte Mask) of the invention; columns 1-3 are the three best segmentation results, columns 4-6 are three worst segmentation results, the blue dotted line indicates the manual segmentation result, and the red solid line indicates the algorithm segmentation result.
FIG. 9 shows three best segmentation results and three worst segmentation results of the four algorithms on the dataset LISC, where four rows correspond to the segmentation results of the algorithms Watershed, FCN, U-Net, and the method (Leukocyte Mask) of the invention; columns 1-3 are the three best segmentation results, columns 4-6 are three worst segmentation results; the blue dotted line indicates the manual segmentation result, and the red solid line indicates the algorithm segmentation result.
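The six measures defined above can be computed directly from a pair of binary masks; a small numpy sketch with illustrative toy masks (the mask contents are made up for the example):

```python
import numpy as np

def seg_measures(manual, auto):
    """Compute the six measures from binary masks (True = leukocyte pixel).

    `manual` gives the reference F_m/B_m, `auto` the algorithm result F_a/B_a.
    """
    Fm, Bm = manual, ~manual
    Fa, Ba = auto, ~auto
    tp = np.sum(Fm & Fa)   # |F_m ∩ F_a|
    fp = np.sum(Bm & Fa)   # |B_m ∩ F_a|
    fn = np.sum(Fm & Ba)   # |F_m ∩ B_a|
    tn = np.sum(Bm & Ba)   # |B_m ∩ B_a|
    precision = tp / (tp + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    miou = 0.5 * (tp / (tp + fp + fn) + tn / (tn + fp + fn))
    fpr = fp / (fp + tn)   # background pixels wrongly labeled target
    fnr = fn / (fn + tp)   # target pixels wrongly labeled background
    me = 1 - (tn + tp) / (tn + fn + tp + fp)
    return precision, dice, miou, fpr, fnr, me

manual = np.zeros((4, 4), dtype=bool); manual[1:3, 1:3] = True  # 4 target pixels
auto = np.zeros((4, 4), dtype=bool); auto[1:3, 1:4] = True      # over-segments by 2
p, d, iou, fpr, fnr, me = seg_measures(manual, auto)
```

The toy case over-segments (every manual target pixel is found plus two extras), so FNR is zero while Precision, FPR, and ME all register the two false-positive pixels.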
To verify the effectiveness of the algorithm of the invention in blood leukocyte segmentation, it is compared with the traditional Watershed segmentation algorithm and two deep-learning-based segmentation methods, FCN and U-Net. As shown in FIG. 5 and Table 1, the measurement results of the method (Leukocyte Mask) of the invention under the six measures on the four datasets are almost always the best, and the corresponding Precision, Dice, and mIoU values are significantly higher than those of the other three methods. There are, however, exceptions. For example, on the dataset Dataset1, the Watershed and FCN algorithms are better than Leukocyte Mask under the FPR and FNR measures because the segmentation results of these two methods show obvious under-segmentation and over-segmentation. For the datasets BCISC and LISC, although the U-Net algorithm achieves a lower FNR, as shown in FIGS. 5-9, its segmentation results are not as stable as those of the algorithm of the invention.
Table 1 Quantitative comparison of the four methods in terms of segmentation precision under six measures on the four datasets (the numerical values are illegible in the source text and are omitted here).
FIGS. 6-9 show the manual segmentation results on the four datasets and the best and worst segmentation results of the different algorithms, respectively.
It can be seen from these figures that the watershed segmentation algorithm can in most cases only segment the nucleus and has difficulty segmenting the cytoplasm. The FCN and U-Net algorithms, which perform leukocyte segmentation on the entire image, are susceptible to interference from red blood cells and staining impurities, resulting in reduced segmentation precision. Different from FCN and U-Net, the method (LeukocyteMask) of the invention segments only the ROI located in the prediction stage, which narrows the segmentation range and eliminates the interference of red blood cells and staining impurities with leukocyte segmentation, thus improving the segmentation precision. With reference to Table 1 and FIGS. 6-9, it can be found that the LeukocyteMask model provided by the method of the invention not only significantly improves the precision of leukocyte segmentation, but also shows good robustness on blood smear images acquired under different shooting environments and preparation conditions. References:
[1] K He, G Gkioxari, P Dollar, R Girshick. Mask R-CNN. IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2961-2969.
[2] TY Lin, P Dollar, R Girshick, K He, B Hariharan, S Belongie. Feature pyramid networks for object detection. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2117-2125.
[3] R Girshick. Fast R-CNN. IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1440-1448.
The above are the preferred embodiments of the invention. Any changes made according to the technical solution of the invention that do not exceed the scope of the technical solution of the invention belong to the protection scope of the invention.
Claims (5)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910426658.1A CN110136149A (en) | 2019-05-21 | 2019-05-21 | Leucocyte positioning and dividing method based on deep neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
NL2024772B1 true NL2024772B1 (en) | 2020-12-01 |
Family
ID=67572051
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
NL2024772A NL2024772B1 (en) | 2019-05-21 | 2020-01-28 | Leukocyte localization and segmentation method based on deep neural network |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110136149A (en) |
NL (1) | NL2024772B1 (en) |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110490860A (en) * | 2019-08-21 | 2019-11-22 | 北京大恒普信医疗技术有限公司 | Diabetic retinopathy recognition methods, device and electronic equipment |
CN110532681B (en) * | 2019-08-28 | 2023-01-31 | 哈尔滨工业大学 | Combustion engine abnormity detection method based on NARX network-boxline diagram and normal mode extraction |
CN110729045A (en) * | 2019-10-12 | 2020-01-24 | 闽江学院 | Tongue image segmentation method based on context-aware residual error network |
CN110807465B (en) * | 2019-11-05 | 2020-06-30 | 北京邮电大学 | Fine-grained image identification method based on channel loss function |
CN110837809A (en) * | 2019-11-11 | 2020-02-25 | 湖南伊鸿健康科技有限公司 | Blood automatic analysis method, blood automatic analysis system, blood cell analyzer, and storage medium |
CN111062296B (en) * | 2019-12-11 | 2023-07-18 | 武汉兰丁智能医学股份有限公司 | Automatic white blood cell identification and classification method based on computer |
CN111489327A (en) * | 2020-03-06 | 2020-08-04 | 浙江工业大学 | Cancer cell image detection and segmentation method based on Mask R-CNN algorithm |
CN111666850A (en) * | 2020-05-28 | 2020-09-15 | 浙江工业大学 | Cell image detection and segmentation method for generating candidate anchor frame based on clustering |
CN111882551B (en) * | 2020-07-31 | 2024-04-05 | 北京小白世纪网络科技有限公司 | Pathological image cell counting method, system and device |
CN111968088B (en) * | 2020-08-14 | 2023-09-15 | 西安电子科技大学 | Building detection method based on pixel and region segmentation decision fusion |
CN112070772B (en) * | 2020-08-27 | 2024-01-12 | 闽江学院 | Blood leukocyte image segmentation method based on UNet++ and ResNet |
CN112508931A (en) * | 2020-12-18 | 2021-03-16 | 闽江学院 | Leukocyte segmentation method based on U-Net and ResNet |
CN112784767A (en) * | 2021-01-27 | 2021-05-11 | 天津理工大学 | Cell example segmentation algorithm based on leukocyte microscopic image |
CN112750132A (en) * | 2021-02-01 | 2021-05-04 | 闽江学院 | White blood cell image segmentation method based on dual-path network and channel attention |
CN112907603B (en) * | 2021-02-05 | 2024-04-19 | 杭州电子科技大学 | Cell instance segmentation method based on Unet and watershed algorithm |
CN113159171B (en) * | 2021-04-20 | 2022-07-22 | 复旦大学 | Plant leaf image fine classification method based on counterstudy |
CN113239786B (en) * | 2021-05-11 | 2022-09-30 | 重庆市地理信息和遥感应用中心 | Remote sensing image country villa identification method based on reinforcement learning and feature transformation |
CN117197224B (en) * | 2023-08-16 | 2024-02-06 | 广东工业大学 | Raman spectrometer self-adaptive focusing device and method based on residual error network |
CN117078761B (en) * | 2023-10-07 | 2024-02-27 | 深圳爱博合创医疗机器人有限公司 | Automatic positioning method, device, equipment and medium for slender medical instrument |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106204642B (en) * | 2016-06-29 | 2019-07-09 | 四川大学 | A kind of cell tracker method based on deep neural network |
CN107977671B (en) * | 2017-10-27 | 2021-10-26 | 浙江工业大学 | Tongue picture classification method based on multitask convolutional neural network |
CN108021903B (en) * | 2017-12-19 | 2021-11-16 | 南京大学 | Error calibration method and device for artificially labeling leucocytes based on neural network |
CN109034045A (en) * | 2018-07-20 | 2018-12-18 | 中南大学 | A kind of leucocyte automatic identifying method based on convolutional neural networks |
2019
- 2019-05-21 CN CN201910426658.1A patent/CN110136149A/en active Pending
2020
- 2020-01-28 NL NL2024772A patent/NL2024772B1/en not_active IP Right Cessation
Non-Patent Citations (5)
Title |
---|
ANONYMOUS: "LeukocyteMask: An automated localization and segmentation method for leukocyte in blood smear images using deep neural networks - Fan - 2019 - Journal of Biophotonics - Wiley Online Library", 19 March 2019 (2019-03-19), XP055713369, Retrieved from the Internet <URL:https://onlinelibrary.wiley.com/doi/full/10.1002/jbio.201800488> [retrieved on 20200709] * |
HAOYI FAN ET AL: "LeukocyteMask: An automated localization and segmentation method for leukocyte in blood smear images using deep neural networks", JOURNAL OF BIOPHOTONICS, vol. 12, no. 7, 10 April 2019 (2019-04-10), DE, XP055713367, ISSN: 1864-063X, DOI: 10.1002/jbio.201800488 * |
K HE, G GKIOXARI, P DOLLAR, R GIRSHICK: "Mask R-CNN", IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, pages 2961-2969 |
R GIRSHICK: "Fast R-CNN", IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, pages 1440-1448 |
T Y LIN, P DOLLAR, R GIRSHICK, K HE, B HARIHARAN, S BELONGIE: "Feature pyramid networks for object detection", IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2017, pages 2117-2125 |
Also Published As
Publication number | Publication date |
---|---|
CN110136149A (en) | 2019-08-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
NL2024772B1 (en) | Leukocyte localization and segmentation method based on deep neural network | |
CN112070772B (en) | Blood leukocyte image segmentation method based on UNet++ and ResNet | |
Lu et al. | WBC-Net: A white blood cell segmentation network based on UNet++ and ResNet | |
Aswathy et al. | Detection of breast cancer on digital histopathology images: Present status and future possibilities | |
Panicker et al. | Automatic detection of tuberculosis bacilli from microscopic sputum smear images using deep learning methods | |
Fan et al. | LeukocyteMask: An automated localization and segmentation method for leukocyte in blood smear images using deep neural networks | |
NL2024774B1 (en) | Blood leukocyte segmentation method based on adaptive histogram thresholding and contour detection | |
CN111899229A (en) | Advanced gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology | |
CN111798425B (en) | Intelligent detection method for mitotic image in gastrointestinal stromal tumor based on deep learning | |
Pan et al. | Mitosis detection techniques in H&E stained breast cancer pathological images: A comprehensive review | |
CN110751636A (en) | Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network | |
Ngugi et al. | A new approach to learning and recognizing leaf diseases from individual lesions using convolutional neural networks | |
CN112750132A (en) | White blood cell image segmentation method based on dual-path network and channel attention | |
Albayrak et al. | A hybrid method of superpixel segmentation algorithm and deep learning method in histopathological image segmentation | |
Anari et al. | Computer-aided detection of proliferative cells and mitosis index in immunohistichemically images of meningioma | |
de Souza Oliveira et al. | A new approach for malaria diagnosis in thick blood smear images | |
Narayanan et al. | DeepSDCS: Dissecting cancer proliferation heterogeneity in Ki67 digital whole slide images | |
Yu et al. | Large-scale gastric cancer screening and localization using multi-task deep neural network | |
CN115063592A (en) | Multi-scale-based full-scanning pathological feature fusion extraction method and system | |
Lu et al. | Breast cancer mitotic cell detection using cascade convolutional neural network with U-Net | |
Song et al. | Red blood cell classification based on attention residual feature pyramid network | |
Sunny et al. | Oral epithelial cell segmentation from fluorescent multichannel cytology images using deep learning | |
Zhang et al. | Histopathological image recognition of breast cancer based on three-channel reconstructed color slice feature fusion | |
Benazzouz et al. | Modified U‐Net for cytological medical image segmentation | |
Khoshdeli et al. | Deep learning models delineates multiple nuclear phenotypes in h&e stained histology sections |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
MM | Lapsed because of non-payment of the annual fee |
Effective date: 20240201 |