NL2024772B1 - Leukocyte localization and segmentation method based on deep neural network - Google Patents

Leukocyte localization and segmentation method based on deep neural network

Info

Publication number
NL2024772B1
NL2024772B1
Authority
NL
Netherlands
Prior art keywords
segmentation
leukocyte
localization
feature
network
Prior art date
Application number
NL2024772A
Other languages
Dutch (nl)
Inventor
Li Zuoyong
Fan Haoyi
Liu Weixia
Shen Danying
Zhou Chang'en
Original Assignee
Univ Minjiang
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Minjiang
Application granted
Publication of NL2024772B1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10056 Microscopic image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20112 Image segmentation details
    • G06T2207/20152 Watershed segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Investigating Or Analysing Biological Materials (AREA)

Abstract

The invention provides a leukocyte localization and segmentation method based on a deep neural network. The method comprises: step S1, feature extraction stage: designing an improved feature pyramid network (FPN) to extract pyramid leukocyte features to form pyramid feature maps; step S2, region proposal stage: using a region proposal network (RPN) to locate regions where leukocytes may exist in the pyramid feature maps to obtain proposed regions; step S3, prediction stage: first, using an RoIAlign layer to perform bilinear interpolation to align the localization results of the region proposal stage, and mapping each proposed region to a fixed-size feature map, and then respectively considering the fixed-size feature maps as inputs of a localization branch and a mask branch for final localization and segmentation, thus realizing leukocyte segmentation. The invention significantly improves segmentation precision, and has good robustness for blood cell images under different acquisition environments and preparation techniques.

Description

LEUKOCYTE LOCALIZATION AND SEGMENTATION METHOD BASED ON DEEP NEURAL NETWORK

Technical Field

The invention belongs to the technical field of image processing, and particularly relates to a leukocyte localization and segmentation method based on a deep neural network.

Background

The total number of WBCs (White Blood Cells, also known as leukocytes) in the blood, the proportion and morphology of the various types of leukocytes, and other information are important indicators for the diagnosis of human blood diseases such as leukemia. An important part of routine blood tests in hospitals is to classify and count leukocytes and analyze abnormal morphology. At present, domestic hospitals usually first use a blood cell analyzer based on the electrical impedance method (a physical method) and the flow analysis method (a physical-chemical method) to perform blood cell classification and counting. When the blood cell count is abnormal or the attending doctor suspects that a patient has a blood disease, the laboratory doctor will perform smear pushing, staining, and microscopic examination on the blood to confirm the classification and count of leukocytes and analyze the abnormal morphology. The accuracy of manual microscopy depends on the professional skills of the doctor; it has the problems of strong subjectivity and large individual differences, and is time-consuming and labor-intensive. The precision of the test is also likely to be affected by the doctor's visual fatigue. Therefore, it is necessary to replace the human eye with a camera and the human brain with a computer to achieve segmentation and classification of leukocytes and assist doctors in microscopic examination. In recent years, the rapid development of deep learning, image processing, pattern recognition, and other technologies has made this possible. Leukocyte images can be obtained by photographing blood smears with a digital imaging device. Unstained leukocytes, which are similar in color to the background, are difficult to recognize due to low contrast.
For this reason, when preparing blood smears, staining is usually performed to enhance the contrast between leukocytes and the background and improve recognizability. Standard blood smear preparation commonly uses Wright's staining and Giemsa staining to stain cells, achieving a good and stable staining effect; however, staining usually takes more than ten minutes, and this slow staining speed cannot meet the needs of wide-scale clinical application. A research team at Huazhong University of Science and Technology, led by Professors Liu Jianguo and Wang Guoyou, proposed a method for rapid preparation of blood smears, which has a high staining speed and shortens the staining time of cells to about ten seconds; however, its staining effect is unstable, it easily introduces dark impurities and a contaminated background, and it dissolves red blood cells, which have diagnostic value for some blood diseases. The challenges of leukocyte segmentation are: (1) the staining preparation process, individual differences, disease differences, and category differences may cause large differences in the color and morphology of leukocytes; (2) low contrast between the cytoplasm and the background, and interference from cell adhesion and staining impurities; and (3) poor quality of leukocyte images.
Leukocyte segmentation is intended to extract the region where a single leukocyte is located from an image of stained human peripheral blood cells, and then segment the nucleus and cytoplasm. In recent years, scholars at home and abroad have conducted a series of studies on leukocyte segmentation. Based on the techniques used in existing leukocyte segmentation methods, we classify them as supervised leukocyte segmentation and unsupervised leukocyte segmentation. The unsupervised leukocyte segmentation method directly implements segmentation based on the features of leukocytes, such as color and brightness. The most commonly used unsupervised technique is threshold segmentation; the others, in order, are morphological transformation, fuzzy theory, clustering, deformation models, watershed segmentation, region merging, visual attention models, and edge detection. Supervised leukocyte segmentation, which treats the image segmentation problem as an image classification problem, is implemented as follows: the features (such as color and texture) of the training sample images are extracted first, then a classifier is trained using the features of the training samples, and finally the trained classifier is used to classify the pixels of a test image to recognize the region where leukocytes are located. The most commonly used supervised technique is the support vector machine; the others, in order, are neural networks, nearest neighbor classifiers, extreme learning machines, and random forest classifiers.
Recently, CNN (Convolutional Neural Network)-based methods have achieved remarkable success in the fields of computer vision and image processing. In medical image segmentation, due to its powerful feature learning and representation capabilities, CNN-based methods have also been widely used. Among these methods, the fully convolutional network (FCN) has demonstrated good performance in biological cell and organ segmentation.
U-Net is developed from FCN and introduces skip connections between the encoder and the decoder.
By extending the symmetrical autoencoder design, the high-resolution features in the encoding path are combined with the upsampled output to locate targets in the image.
The U-Net network first trains FCN, learns a rough model for implementing pixel-level prediction of nucleus segmentation, and then crops image sub-regions where the cell nuclei lie, from the rough prediction and the original image, and then uses a graph-based method to obtain a refined segmentation.
The U-Net network is used to recognize and segment Drosophila heart regions at different developmental stages.
In addition, the CNN may also be used to construct a focus stack based method for automatically detecting Plasmodium falciparum malaria from blood smears.
However, the above-mentioned CNN-based methods all directly segment cells or organs on the entire image and are easily affected by complex backgrounds.
Summary of the Invention The object of the invention is to improve the precision of leukocyte segmentation in an image, and to provide a leukocyte localization and segmentation method based on a deep neural network.
This method not only can significantly improve the segmentation precision, but also has good robustness for blood cell images under different acquisition environments and preparation techniques.
To achieve the above objective, the technical solution of the invention is: a leukocyte localization and segmentation method based on a deep neural network, including the following steps: step S1, feature extraction stage: designing an improved feature pyramid network (FPN) to extract pyramid leukocyte features to form pyramid feature maps; step S2, region proposal stage: using a region proposal network (RPN) to locate regions where leukocytes may exist in the pyramid feature maps to obtain proposed regions; and step S3, prediction stage: first, using an RoIAlign layer to perform bilinear interpolation to align the localization results of the region proposal stage, and mapping each proposed region to a fixed-size feature map, and then respectively using the fixed-size feature maps as inputs of a localization branch and a mask branch for final localization and segmentation, thus realizing leukocyte segmentation.
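The RoIAlign step above relies on bilinear interpolation to sample the feature map at real-valued coordinates. A minimal NumPy sketch of one such sample (the function name and single-channel layout are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def bilinear_sample(feature_map, y, x):
    """Sample a (H x W) feature map at real-valued coordinates (y, x)
    using bilinear interpolation, as RoIAlign does at each sampling point."""
    h, w = feature_map.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0
    top = (1 - dx) * feature_map[y0, x0] + dx * feature_map[y0, x1]
    bottom = (1 - dx) * feature_map[y1, x0] + dx * feature_map[y1, x1]
    return (1 - dy) * top + dy * bottom

# Sampling halfway between four pixels averages them.
fm = np.array([[0.0, 1.0],
               [2.0, 3.0]])
print(bilinear_sample(fm, 0.5, 0.5))  # 1.5
```

RoIAlign applies such samples at several points per output bin and averages them, which avoids the coordinate quantization of the earlier RoIPool operation.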
In an embodiment of the invention, in step S1, the improved FPN consists of three parts: a bottom-up pathway module, a top-down pathway module, and a lateral connection module.
In one embodiment of the invention, the bottom-up pathway module is composed of an improved ResNet50, including 5 building blocks. That is, blood cell image oriented network structure optimization is carried out on the original ResNet50 network used to extract features of natural scene images. The step is specifically implemented as follows: 1) an improved conv_1 module uses two convolutional layers with a convolution kernel size of 3x3; 2) the numbers of building blocks in the original conv3_x and conv4_x modules are reduced to 2 and 3, respectively; and 3) the output of the last layer of each building block constitutes intermediate results of the pyramid feature maps. In the top-down pathway module, the nearest-neighbor upsampling method is used to perform a lateral connection operation on the intermediate features extracted by the bottom-up pathway module, i.e., a feature map enlargement with a scale of 2 is performed; the enlarged results are connected and merged with the corresponding original intermediate feature maps; and pyramid feature maps are then constructed to further expand the feature resolution of the target region.
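The scale-2 nearest-neighbor enlargement and lateral merge described in this embodiment can be sketched in NumPy (a simplified single-channel, single-level illustration; in the actual network the merge operates on multi-channel feature tensors after lateral convolutions):

```python
import numpy as np

def upsample_nearest_2x(fm):
    """Enlarge a (H x W) feature map by scale 2 with nearest-neighbor upsampling."""
    return np.repeat(np.repeat(fm, 2, axis=0), 2, axis=1)

def lateral_merge(top_down, bottom_up):
    """Merge an upsampled top-down map with the same-resolution bottom-up map.
    Element-wise addition stands in for the connect-and-merge step."""
    return upsample_nearest_2x(top_down) + bottom_up

coarse = np.array([[1.0, 2.0],
                   [3.0, 4.0]])   # deeper, lower-resolution level
fine = np.ones((4, 4))            # lateral (bottom-up) feature map
merged = lateral_merge(coarse, fine)
print(merged.shape)  # (4, 4)
```

Repeating this merge from the deepest level upward yields the pyramid of feature maps used in the subsequent region proposal stage.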
In an embodiment of the invention, step S2 is implemented as follows: first, mapping a feature map extracted from a sliding window to a 2048-dimensional feature vector, wherein the feature mapping is implemented by a convolutional layer with a convolution kernel size of 3x3; then, passing the feature vector through two convolutional layers with a convolution kernel size of 1x1 to achieve the final box classification and box regression, obtaining 2k scores and 4k position outputs, respectively, wherein the scores evaluate the probability that a box belongs to the leukocyte region.
In an embodiment of the invention, after step S3, performance measurement of the leukocyte segmentation result is needed to optimize the overall network. Specifically, the performance measurement is implemented as follows: a multi-task loss function is provided to guide the learning of the network, wherein the multi-task loss function is the sum of the loss function $L_{box}$ of Box Localization, the loss function $L_{cls}$ of Box Classification, and the loss function $L_{mask}$ of Mask Segmentation, defined as:

$$L = L_{box} + L_{cls} + L_{mask} \quad (1)$$

where $L_{box}$ and $L_{cls}$ are defined as in reference [3], and $L_{mask}$ is defined according to the binary cross-entropy loss function:

$$L_{mask} = -\frac{1}{m^2} \sum_{1 \le i,j \le m} \left[ y_{ij} \log \hat{y}_{ij}^{k} + \left(1 - y_{ij}\right) \log\left(1 - \hat{y}_{ij}^{k}\right) \right] \quad (2)$$

where $y_{ij}$ represents the true category label of pixel $(i, j)$, $\hat{y}_{ij}^{k}$ represents the category predicted value of pixel $(i, j)$, $m \times m$ is the mask resolution, and the binary variable $k = 0$ or $1$ indicates that the current pixel belongs to the leukocyte category or the non-leukocyte category, respectively.
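Equation (2) is the standard binary cross-entropy averaged over an m x m mask; a NumPy sketch follows (illustrative only; the clipping is an added numerical-stability guard that does not appear in the equation):

```python
import numpy as np

def mask_loss(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy over an m x m mask, averaged by 1/m^2,
    following equation (2)."""
    m = y_true.shape[0]
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    ce = y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)
    return -ce.sum() / (m * m)

y_true = np.array([[1.0, 0.0],
                   [0.0, 1.0]])   # ground-truth mask labels
y_pred = np.array([[0.9, 0.1],
                   [0.2, 0.8]])   # predicted foreground probabilities
print(round(mask_loss(y_true, y_pred), 4))
```

As expected for a cross-entropy loss, the value approaches zero as the predictions approach the ground-truth labels.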
Compared with the prior art, the invention has the following beneficial effects: the invention introduces deep learning technology into the field of blood leukocyte segmentation, and provides an end-to-end leukocyte localization and segmentation method based on a deep neural network. The method of the invention is implemented as follows: based on the features of the leukocyte image, more distinctive leukocyte features are extracted using an improved feature pyramid network composed of ResNet residual blocks; then the classification of the leukocyte proposed regions is achieved through region classification and regression; finally, the leukocytes in the proposed regions after RoIAlign are accurately located and classified to achieve leukocyte segmentation. Experimental results on several blood cell image datasets confirm that the method of the invention significantly improves the precision of leukocyte segmentation.
Brief Description of the Drawings

FIG. 1 is a flowchart of the method according to the invention; FIG. 2 is the network model structure according to the invention; FIG. 3 is the improved FPN structure according to the invention; FIG. 4 is the RPN structure for leukocyte localization according to the invention; FIG. 5 is a box plot of the performance comparison of three deep learning methods on four image datasets under six measures; FIG. 6 shows the three best and three worst segmentation results of the four algorithms on the dataset Dataset1; FIG. 7 shows the three best and three worst segmentation results of the four algorithms on the dataset Dataset2; FIG. 8 shows the three best and three worst segmentation results of the four algorithms on the dataset BCISC; and FIG. 9 shows the three best and three worst segmentation results of the four algorithms on the dataset LISC.

Detailed Description

The technical solution of the invention will be described below in detail with reference to the accompanying drawings. The invention provides a leukocyte localization and segmentation method based on a deep neural network, including the following steps: step S1, feature extraction stage: designing an improved feature pyramid network (FPN) to extract pyramid leukocyte features to form pyramid feature maps; step S2, region proposal stage: using a region proposal network (RPN) to locate regions where leukocytes may exist in the pyramid feature maps to obtain proposed regions; and step S3, prediction stage: first, using an RoIAlign layer to perform bilinear interpolation to align the localization results of the region proposal stage, and mapping each proposed region to a fixed-size feature map, and then respectively using the fixed-size feature maps as inputs of a localization branch and a mask branch for final localization and segmentation, thus realizing leukocyte segmentation.
The specific implementation process of the invention is as follows. FIG. 1 shows a flowchart of the leukocyte localization and segmentation method based on a deep neural network according to the invention. In order to achieve the segmentation of blood leukocytes, the method (Leukocyte Mask) of the invention considers the leukocyte localization problem as a pixel-level binary classification problem, that is, classifying pixels into target (leukocyte) pixels and background (non-leukocyte) pixels. In order to make full use of the features of leukocytes, such as shape, color, and texture, as well as information about their spatial positions in the image, the algorithm of the invention provides an improved Mask R-CNN [1] deep neural network model for realizing localization and segmentation of leukocytes. In this network model, a leukocyte-oriented feature pyramid network (FPN) is designed to extract pyramid leukocyte features, paving the way for subsequent localization and segmentation of leukocytes. The network model structure provided by the invention is shown in FIG. 2. It consists of three stages: Feature Extraction, Region Proposal, and Prediction.
1. Feature extraction

In the feature extraction stage, an improved feature pyramid network (FPN) [2] is designed to extract the discriminative and stable features of leukocytes, thus laying a foundation for the leukocyte localization in the next stage. The improved FPN structure, as shown in FIG. 3, consists of three parts: a bottom-up pathway module, a top-down pathway module, and a lateral connection module. Among them, the bottom-up pathway module is composed of the improved ResNet50 [24], including 5 building blocks, and the detailed parameter configuration of each building block is shown on the left side of FIG. 3. The original ResNet50 network used to extract the features of natural scene images is subjected to blood cell image oriented network structure optimization. First, whereas the original conv_1 module uses a single convolutional layer with a convolution kernel size of 7x7, the improved conv_1 module uses two convolutional layers with a convolution kernel size of 3x3 to extract fine-grained leukocyte features. Second, because different shooting environments and blood smear preparation techniques may yield different cell colors, the numbers of building blocks in the original conv3_x and conv4_x modules are reduced to 2 and 3, respectively, to avoid an overfitting problem that may exist in network training. Finally, the output of the last layer of each module constitutes intermediate results of the pyramid feature maps. In the top-down pathway module, the nearest-neighbor upsampling [3] method is used to perform a lateral connection operation on the intermediate features extracted by the bottom-up pathway module, i.e., a feature map enlargement with a scale of 2 is performed; and the enlarged results are connected and merged with the corresponding original intermediate feature maps, as shown on the right side of FIG. 3. Then, pyramid feature maps are formed, which are denoted as P2, P3, P4, P5, and P6.
Among them, P6 is simply a scale-2 upsampling of P5, and is used to further expand the feature resolution of the target region and improve the final segmentation precision. Finally, the pyramid feature maps will be used to achieve the localization of leukocytes in the Region Proposal stage.
2. Region Proposal

In the region proposal stage, all regions where leukocytes may exist in the image are located. Regions where leukocytes may exist in the pyramid feature maps are located using the RPN (Region Proposal Network) [1], as shown in FIG. 4. First, a feature map extracted from a sliding window is mapped to a 2048-dimensional feature vector, wherein the feature mapping is implemented by a convolutional layer with a convolution kernel size of 3x3; then, the feature vector passes through two convolutional layers with a convolution kernel size of 1x1 to achieve the final box classification and box regression, obtaining 2k scores and 4k position outputs (x coordinate, y coordinate, box width, box height), respectively, wherein the scores evaluate the probability that a box belongs to the leukocyte region.
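Since a 1x1 convolution acts as a per-position linear map, the RPN head's output shapes can be sketched with plain matrix products (the anchor count k = 9 and the random weights are illustrative assumptions; only the 2048-dimensional feature vector comes from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 9          # anchors per sliding-window position (assumed, as in [1])
d = 2048       # dimension of the intermediate feature vector (from the text)
features = rng.standard_normal((6, 6, d))   # one vector per window position

# A 1x1 convolution is a per-position linear map: (H, W, d) @ (d, out).
w_cls = rng.standard_normal((d, 2 * k))     # leukocyte / background scores
w_reg = rng.standard_normal((d, 4 * k))     # (x, y, width, height) per anchor

scores = features @ w_cls   # -> (H, W, 2k)
boxes = features @ w_reg    # -> (H, W, 4k)
print(scores.shape, boxes.shape)  # (6, 6, 18) (6, 6, 36)
```

Each spatial position thus proposes k candidate boxes, each with a two-way classification score and a four-number position refinement.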
3. Prediction

In the prediction stage, as shown in FIG. 2, first, an RoIAlign layer [1] is used to perform bilinear interpolation to align the localization results of the region proposal stage, and each proposed region is mapped to a fixed-size feature map; then the fixed-size feature maps are respectively used as inputs of a localization branch and a mask branch for final localization and segmentation (pixel-level segmentation). Models based on a deep neural network are supervised methods and require model training. The network model provided by the invention uses a multi-task loss function to guide the learning of the network during the training of the entire network, where the multi-task loss function is the sum of the loss function $L_{box}$ of Box Localization, the loss function $L_{cls}$ of Box Classification, and the loss function $L_{mask}$ of Mask Segmentation, defined as:

$$L = L_{box} + L_{cls} + L_{mask} \quad (1)$$

where $L_{box}$ and $L_{cls}$ are defined as in reference [3], and $L_{mask}$ is defined according to the binary cross-entropy loss function:

$$L_{mask} = -\frac{1}{m^2} \sum_{1 \le i,j \le m} \left[ y_{ij} \log \hat{y}_{ij}^{k} + \left(1 - y_{ij}\right) \log\left(1 - \hat{y}_{ij}^{k}\right) \right] \quad (2)$$

where $y_{ij}$ represents the true category label of pixel $(i, j)$, $\hat{y}_{ij}^{k}$ represents the category predicted value of pixel $(i, j)$, $m \times m$ is the mask resolution, and the binary variable $k = 0$ or $1$ indicates that the current pixel belongs to the leukocyte category or the non-leukocyte category, respectively.

Test results

In order to evaluate the performance of the leukocyte segmentation algorithm, a 50-fold cross-validation experiment is performed on four data sets: Dataset1 (300 fast stained images), Dataset2 (100 standard stained images), BCISC (268 standard stained images), and LISC (257 standard stained images). The segmentation performance of the algorithm on the four data sets is measured under six common segmentation measures. Among the six common segmentation measures, the first three, i.e., precision, Dice coefficient, and mIoU (mean Intersection over Union),
are commonly used to measure the performance of a segmentation model based on deep learning; the larger the measurement value, the better the segmentation performance. The last three measures, i.e., the false positive rate (FPR), false negative rate (FNR), and misclassification error (ME), are often used to measure the performance of traditional segmentation models; the smaller the measurement value, the better the segmentation performance. These measures are defined as:

$$Precision = \frac{|F_m \cap F_a|}{|F_a|}$$

$$Dice = \frac{2\,|F_m \cap F_a|}{|F_m| + |F_a|}$$

$$mIoU = \frac{1}{2}\left( \frac{|F_m \cap F_a|}{|F_m \cup F_a|} + \frac{|B_m \cap B_a|}{|B_m \cup B_a|} \right)$$

$$FPR = \frac{|B_m \cap F_a|}{|B_m|}$$

$$FNR = \frac{|F_m \cap B_a|}{|F_m|}$$

$$ME = 1 - \frac{|B_m \cap B_a| + |F_m \cap F_a|}{|B_m| + |F_m|}$$

where $B_m$ and $F_m$ represent the background and target of the manual standard segmentation, $B_a$ and $F_a$ represent the background and target in the segmentation result of the automatic segmentation algorithm, and $|\cdot|$ represents the number of elements in a set. The values of the six measures all lie between 0 and 1. Lower ME, FPR, and FNR values represent better segmentation results; conversely, higher Precision, Dice, and mIoU values represent better segmentation effects. FIG. 6 shows three best segmentation results and three worst segmentation results of the four algorithms on the dataset Dataset1, where the four rows correspond to the segmentation results of the algorithms Watershed, FCN, U-Net, and the method (Leukocyte Mask) of the invention; columns 1-3 are the three best segmentation results and columns 4-6 are the three worst segmentation results; the blue dotted line indicates the manual segmentation result, and the red solid line indicates the algorithm segmentation result.
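The six measures translate directly into NumPy on binary masks (the function name and the tiny example masks are illustrative; the formulas follow the definitions in the text, with 1 marking leukocyte pixels):

```python
import numpy as np

def segmentation_measures(gt, seg):
    """Compute the six measures from binary masks.
    gt: manual standard segmentation; seg: automatic algorithm output."""
    Fm, Bm = (gt == 1), (gt == 0)    # target / background, manual
    Fa, Ba = (seg == 1), (seg == 0)  # target / background, algorithm
    tp = np.sum(Fm & Fa)
    precision = tp / np.sum(Fa)
    dice = 2 * tp / (np.sum(Fm) + np.sum(Fa))
    miou = 0.5 * (tp / np.sum(Fm | Fa) + np.sum(Bm & Ba) / np.sum(Bm | Ba))
    fpr = np.sum(Bm & Fa) / np.sum(Bm)
    fnr = np.sum(Fm & Ba) / np.sum(Fm)
    me = 1 - (np.sum(Bm & Ba) + tp) / (np.sum(Bm) + np.sum(Fm))
    return precision, dice, miou, fpr, fnr, me

gt = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
seg = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])  # one missed target pixel
p, d, m, fpr, fnr, me = segmentation_measures(gt, seg)
print(round(p, 3), round(d, 3), round(fnr, 3))  # 1.0 0.857 0.25
```

The missed target pixel leaves precision perfect but raises FNR, matching the intuition that FNR penalizes under-segmentation while FPR penalizes over-segmentation.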
FIG. 7 shows three best segmentation results and three worst segmentation results of the four algorithms on the dataset Dataset2, where four rows correspond to the segmentation results of the algorithms Watershed, FCN, U-Net, and the method (Leukocyte Mask) of the invention; columns 1-3 are the three best segmentation results, columns 4-6 are three worst segmentation results; the blue dotted line indicates the manual segmentation result, and the red solid line indicates the algorithm segmentation result.
FIG. 8 shows three best segmentation results and three worst segmentation results of the four algorithms on the dataset BCISC, where four rows correspond to the segmentation results of the algorithms Watershed, FCN, U-Net, and the method (Leukocyte Mask) of the invention; columns 1-3 are the three best segmentation results, columns 4-6 are three worst segmentation results, the blue dotted line indicates the manual segmentation result, and the red solid line indicates the algorithm segmentation result.
FIG. 9 shows three best segmentation results and three worst segmentation results of the four algorithms on the dataset LISC, where four rows correspond to the segmentation results of the algorithms Watershed, FCN, U-Net, and the method (Leukocyte Mask) of the invention; columns 1-3 are the three best segmentation results, columns 4-6 are three worst segmentation results; the blue dotted line indicates the manual segmentation result, and the red solid line indicates the algorithm segmentation result.
In order to verify the effectiveness of the algorithm of the invention in blood leukocyte segmentation, the algorithm of the invention is compared with the traditional Watershed segmentation algorithm and two deep learning based segmentation methods, FCN and U-Net. As shown in FIG. 5 and Table 1, the measurement results of the method (Leukocyte Mask) of the invention under the six measures on the four data sets are almost always the best, and the corresponding Precision, Dice, and mIoU values are significantly higher than those of the other three methods. However, there are exceptions. For example, on the dataset Dataset1, the Watershed and FCN algorithms are better than Leukocyte Mask under the FPR and FNR measures because the segmentation results of these two methods show obvious under-segmentation and over-segmentation. For the data sets BCISC and LISC, although the U-Net algorithm achieves a lower FNR, as shown in FIGS. 5-9, its segmentation results are not as stable as those of the algorithm of the invention.

Table 1: Quantitative comparison of the methods in terms of segmentation precision under six measures.

FIGS. 6-9 show the manual segmentation results on the four data sets and the best and worst segmentation results of the different algorithms, respectively.
It can be seen from these figures that the watershed segmentation algorithm can in most cases only segment the nucleus and has difficulty segmenting the cytoplasm. The FCN and U-Net algorithms, which perform leukocyte segmentation on the entire image, are susceptible to interference by red blood cells and staining impurities, resulting in reduced segmentation precision. Different from FCN and U-Net, the method (Leukocyte Mask) of the invention only segments the located ROIs, which narrows the segmentation range and eliminates the interference of red blood cells and staining impurities on leukocyte segmentation, thus improving the segmentation precision. With reference to Table 1 and FIGS. 6-9, it can be found that the Leukocyte Mask model provided by the method of the invention not only significantly improves the precision of leukocyte segmentation, but also has good robustness for blood smear images under different shooting environments and preparation conditions. References:
[1] K. He, G. Gkioxari, P. Dollár, R. Girshick. Mask R-CNN. IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2961-2969.
[2] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, S. Belongie. Feature pyramid networks for object detection. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2117-2125.
[3] R. Girshick. Fast R-CNN. IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1440-1448.
The above are the preferred embodiments of the invention. Any changes made according to the technical solution of the invention that do not exceed the scope of the technical solution of the invention belong to the protection scope of the invention.

Claims (5)

Claims

1. A leukocyte localization and segmentation method based on a deep neural network, comprising the following steps: step S1, feature extraction phase: designing an improved feature pyramid network (FPN) to extract pyramid leukocyte features and obtain pyramid feature maps; step S2, region proposal phase: using a region proposal network (RPN) to locate regions in the pyramid feature maps where leukocytes may exist, so as to obtain proposed regions; and step S3, prediction phase: first performing, by means of a RoIAlign layer, bilinear interpolation to align the localization results of the region proposal phase and to map each proposed region onto a fixed-size feature map, and then feeding the fixed-size feature maps respectively into a localization branch and a mask branch for final localization and segmentation, thereby realizing leukocyte segmentation.

2. The leukocyte localization and segmentation method based on a deep neural network according to Claim 1, wherein in step S1 the improved FPN consists of three parts: a bottom-up pathway module, a top-down pathway module, and a lateral connection module.

3. The leukocyte localization and segmentation method based on a deep neural network according to Claim 2, wherein the bottom-up pathway module consists of an improved ResNet50 comprising five building blocks; that is, the structure of the blood-cell-image-oriented network is obtained by optimizing an original ResNet50 network used to extract features of natural scene images, specifically implemented as follows: 1) an improved conv1 module uses two convolutional layers with a 3x3 convolution kernel; 2) the numbers of building blocks in the original conv3_x and conv4_x modules are reduced to 2 and 3, respectively; and 3) the last output layer of each building block yields an intermediate result of the pyramid feature map; in the top-down pathway module, nearest-neighbor upsampling is used to perform a lateral connection on the intermediate features extracted by the bottom-up pathway module, i.e., each feature map is enlarged by a scale factor of 2, and the enlarged results are connected and merged with the corresponding original intermediate feature maps; pyramid feature maps are then built to further expand the feature resolution of the target region.
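The top-down pathway step of claim 3 (nearest-neighbor upsampling by a scale of 2, followed by merging with the corresponding intermediate feature map) can be sketched in a few lines. This is a minimal NumPy illustration only; the element-wise addition used for the merge is an assumption, since the claim merely states that the enlarged results are "connected and merged", and the channel count and map sizes are toy values.

```python
import numpy as np

def nearest_upsample_2x(fm):
    """Nearest-neighbor upsampling by a factor of 2 on a (C, H, W) feature map."""
    return fm.repeat(2, axis=1).repeat(2, axis=2)

def top_down_merge(top, lateral):
    """Enlarge the coarser top-level map and fuse it with the lateral map.

    Element-wise addition is an assumed merge rule (the claim does not fix one).
    """
    return nearest_upsample_2x(top) + lateral

# Toy maps: a coarse 8x8 level fused into the 16x16 level directly below it.
c5 = np.ones((256, 8, 8))
c4 = np.ones((256, 16, 16))
p4 = top_down_merge(c5, c4)  # shape (256, 16, 16)
```

Note how the scale-2 enlargement makes the coarse map spatially compatible with the lateral map, which is what allows the per-pixel merge.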
4. The leukocyte localization and segmentation method based on a deep neural network according to Claim 1, wherein step S2 is implemented as follows: first, a feature map extracted from a sliding window is mapped to a 2048-dimensional feature vector, the feature mapping being implemented by a convolutional layer with a 3x3 convolution kernel; the feature vector is then passed separately through two convolutional layers with a 1x1 convolution kernel to perform the final box classification and box regression, yielding a score output of 2k and a position output of 4k, respectively, wherein the score is used to evaluate the probability that a box belongs to the leukocyte region.
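The two-headed RPN output of claim 4 can be sketched numerically. At a single sliding-window position a 1x1 convolution reduces to a plain matrix multiplication, which is what this NumPy sketch uses; the value k = 9 (anchors per window position) and the random weight initialization are illustrative assumptions, as the claim leaves k unspecified.

```python
import numpy as np

k = 9  # anchors per sliding-window position (illustrative assumption)
rng = np.random.default_rng(0)

# Weights of the two 1x1-convolution heads acting on a 2048-d feature vector.
W_cls = rng.standard_normal((2 * k, 2048)) * 0.01  # box classification head
W_reg = rng.standard_normal((4 * k, 2048)) * 0.01  # box regression head

feat = rng.standard_normal(2048)  # 2048-d vector for one sliding window

scores = W_cls @ feat   # 2k scores: leukocyte-region vs. background per anchor
offsets = W_reg @ feat  # 4k outputs: one (x, y, w, h) refinement per anchor
```

The 2k/4k split mirrors the claim: two class scores and four box coordinates for each of the k anchors at the window position.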
5. The leukocyte localization and segmentation method based on a deep neural network according to any one of Claims 1-4, wherein after step S3 performance measurement of the leukocyte segmentation results is required in order to optimize the overall network; specifically, the performance measurement is carried out as follows: a multi-task loss function is provided to guide the learning of the network, the multi-task loss function being the sum of the box localization loss $L_{box}$, the box classification loss $L_{cls}$, and the mask segmentation loss $L_{mask}$, defined as:

$L = L_{box} + L_{cls} + L_{mask}$ (1)

where $L_{mask}$ is defined according to a binary cross-entropy loss function:

$L_{mask} = -\frac{1}{m^2}\sum_{1 \le i,j \le m}\left[\, y_{ij} \log \hat{y}_{ij}^{k} + (1 - y_{ij}) \log\!\left(1 - \hat{y}_{ij}^{k}\right) \right]$ (2)

where $y_{ij}$ denotes the true category label of a pixel $(i, j)$, $\hat{y}_{ij}^{k}$ denotes the predicted category value of the pixel $(i, j)$, and the binary variables $k = 0$ and $1$ indicate that the current pixel belongs to the leukocyte category and the non-leukocyte category, respectively.
NL2024772A 2019-05-21 2020-01-28 Leukocyte localization and segmentation method based on deep neural network NL2024772B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910426658.1A CN110136149A (en) 2019-05-21 2019-05-21 Leucocyte positioning and dividing method based on deep neural network

Publications (1)

Publication Number Publication Date
NL2024772B1 true NL2024772B1 (en) 2020-12-01

Family

ID=67572051

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2024772A NL2024772B1 (en) 2019-05-21 2020-01-28 Leukocyte localization and segmentation method based on deep neural network

Country Status (2)

Country Link
CN (1) CN110136149A (en)
NL (1) NL2024772B1 (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110490860A (en) * 2019-08-21 2019-11-22 北京大恒普信医疗技术有限公司 Diabetic retinopathy recognition methods, device and electronic equipment
CN110532681B (en) * 2019-08-28 2023-01-31 哈尔滨工业大学 Combustion engine abnormity detection method based on NARX network-boxline diagram and normal mode extraction
CN110729045A (en) * 2019-10-12 2020-01-24 闽江学院 Tongue image segmentation method based on context-aware residual error network
CN110807465B (en) * 2019-11-05 2020-06-30 北京邮电大学 Fine-grained image identification method based on channel loss function
CN110837809A (en) * 2019-11-11 2020-02-25 湖南伊鸿健康科技有限公司 Blood automatic analysis method, blood automatic analysis system, blood cell analyzer, and storage medium
CN111062296B (en) * 2019-12-11 2023-07-18 武汉兰丁智能医学股份有限公司 Automatic white blood cell identification and classification method based on computer
CN111489327A (en) * 2020-03-06 2020-08-04 浙江工业大学 Cancer cell image detection and segmentation method based on Mask R-CNN algorithm
CN111666850A (en) * 2020-05-28 2020-09-15 浙江工业大学 Cell image detection and segmentation method for generating candidate anchor frame based on clustering
CN111882551B (en) * 2020-07-31 2024-04-05 北京小白世纪网络科技有限公司 Pathological image cell counting method, system and device
CN111968088B (en) * 2020-08-14 2023-09-15 西安电子科技大学 Building detection method based on pixel and region segmentation decision fusion
CN112070772B (en) * 2020-08-27 2024-01-12 闽江学院 Blood leukocyte image segmentation method based on UNet++ and ResNet
CN112508931A (en) * 2020-12-18 2021-03-16 闽江学院 Leukocyte segmentation method based on U-Net and ResNet
CN112784767A (en) * 2021-01-27 2021-05-11 天津理工大学 Cell example segmentation algorithm based on leukocyte microscopic image
CN112750132A (en) * 2021-02-01 2021-05-04 闽江学院 White blood cell image segmentation method based on dual-path network and channel attention
CN112907603B (en) * 2021-02-05 2024-04-19 杭州电子科技大学 Cell instance segmentation method based on Unet and watershed algorithm
CN113159171B (en) * 2021-04-20 2022-07-22 复旦大学 Plant leaf image fine classification method based on counterstudy
CN113239786B (en) * 2021-05-11 2022-09-30 重庆市地理信息和遥感应用中心 Remote sensing image country villa identification method based on reinforcement learning and feature transformation
CN117197224B (en) * 2023-08-16 2024-02-06 广东工业大学 Raman spectrometer self-adaptive focusing device and method based on residual error network
CN117078761B (en) * 2023-10-07 2024-02-27 深圳爱博合创医疗机器人有限公司 Automatic positioning method, device, equipment and medium for slender medical instrument

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN106204642B (en) * 2016-06-29 2019-07-09 四川大学 A kind of cell tracker method based on deep neural network
CN107977671B (en) * 2017-10-27 2021-10-26 浙江工业大学 Tongue picture classification method based on multitask convolutional neural network
CN108021903B (en) * 2017-12-19 2021-11-16 南京大学 Error calibration method and device for artificially labeling leucocytes based on neural network
CN109034045A (en) * 2018-07-20 2018-12-18 中南大学 A kind of leucocyte automatic identifying method based on convolutional neural networks

Non-Patent Citations (5)

Title
ANONYMOUS: "LeukocyteMask: An automated localization and segmentation method for leukocyte in blood smear images using deep neural networks - Fan - 2019 - Journal of Biophotonics - Wiley Online Library", 19 March 2019 (2019-03-19), XP055713369, Retrieved from the Internet <URL:https://onlinelibrary.wiley.com/doi/full/10.1002/jbio.201800488> [retrieved on 20200709] *
HAOYI FAN ET AL: "LeukocyteMask: An automated localization and segmentation method for leukocyte in blood smear images using deep neural networks", JOURNAL OF BIOPHOTONICS, vol. 12, no. 7, 10 April 2019 (2019-04-10), DE, XP055713367, ISSN: 1864-063X, DOI: 10.1002/jbio.201800488 *
K. He, G. Gkioxari, P. Dollár, R. Girshick: "Mask R-CNN", IEEE International Conference on Computer Vision (ICCV), 2017, pages 2961-2969
R. Girshick: "Fast R-CNN", IEEE International Conference on Computer Vision (ICCV), 2015, pages 1440-1448
T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, S. Belongie: "Feature pyramid networks for object detection", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pages 2117-2125

Also Published As

Publication number Publication date
CN110136149A (en) 2019-08-16

Similar Documents

Publication Publication Date Title
NL2024772B1 (en) Leukocyte localization and segmentation method based on deep neural network
CN112070772B (en) Blood leukocyte image segmentation method based on UNet++ and ResNet
Lu et al. WBC-Net: A white blood cell segmentation network based on UNet++ and ResNet
Aswathy et al. Detection of breast cancer on digital histopathology images: Present status and future possibilities
Panicker et al. Automatic detection of tuberculosis bacilli from microscopic sputum smear images using deep learning methods
Fan et al. LeukocyteMask: An automated localization and segmentation method for leukocyte in blood smear images using deep neural networks
NL2024774B1 (en) Blood leukocyte segmentation method based on adaptive histogram thresholding and contour detection
CN111899229A (en) Advanced gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology
CN111798425B (en) Intelligent detection method for mitotic image in gastrointestinal stromal tumor based on deep learning
Pan et al. Mitosis detection techniques in H&E stained breast cancer pathological images: A comprehensive review
CN110751636A (en) Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network
Ngugi et al. A new approach to learning and recognizing leaf diseases from individual lesions using convolutional neural networks
CN112750132A (en) White blood cell image segmentation method based on dual-path network and channel attention
Albayrak et al. A hybrid method of superpixel segmentation algorithm and deep learning method in histopathological image segmentation
Anari et al. Computer-aided detection of proliferative cells and mitosis index in immunohistichemically images of meningioma
de Souza Oliveira et al. A new approach for malaria diagnosis in thick blood smear images
Narayanan et al. DeepSDCS: Dissecting cancer proliferation heterogeneity in Ki67 digital whole slide images
Yu et al. Large-scale gastric cancer screening and localization using multi-task deep neural network
CN115063592A (en) Multi-scale-based full-scanning pathological feature fusion extraction method and system
Lu et al. Breast cancer mitotic cell detection using cascade convolutional neural network with U-Net
Song et al. Red blood cell classification based on attention residual feature pyramid network
Sunny et al. Oral epithelial cell segmentation from fluorescent multichannel cytology images using deep learning
Zhang et al. Histopathological image recognition of breast cancer based on three-channel reconstructed color slice feature fusion
Benazzouz et al. Modified U‐Net for cytological medical image segmentation
Khoshdeli et al. Deep learning models delineates multiple nuclear phenotypes in h&e stained histology sections

Legal Events

Date Code Title Description
MM Lapsed because of non-payment of the annual fee

Effective date: 20240201