CN110796117B - Blood cell automatic analysis method, system, blood cell analyzer and storage medium


Info

Publication number
CN110796117B
CN110796117B (application CN201911099037.3A)
Authority
CN
China
Prior art keywords
microscopic image
mask
detected
focusing microscopic
example segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911099037.3A
Other languages
Chinese (zh)
Other versions
CN110796117A (en)
Inventor
郑陆一
胡双
刘蕾
蔡韬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Ehome Health Technology Co ltd
Original Assignee
Hunan Ehome Health Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Ehome Health Technology Co ltd filed Critical Hunan Ehome Health Technology Co ltd
Priority to CN201911099037.3A
Publication of CN110796117A
Application granted
Publication of CN110796117B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/695Preprocessing, e.g. image segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Abstract

The invention discloses an automatic blood cell analysis method, an automatic blood cell analysis system, a blood cell analyzer and a storage medium. The method comprises the following steps: acquiring a front-side focusing microscopic image and a back-side focusing microscopic image of a sample to be detected; inputting the front-side focusing microscopic image and the back-side focusing microscopic image respectively into a trained mask-RCNN model, and outputting example segmentation maps respectively corresponding to the front-side focusing microscopic image and the back-side focusing microscopic image, wherein the example segmentation maps comprise classification results and masks; and counting the number of targets to be detected in the sample to be detected according to the classification result and the mask in the example segmentation map of the front-side focusing microscopic image and the classification result and the mask in the example segmentation map of the back-side focusing microscopic image. The invention solves the problem that the number of blood cells counted by cell identification and statistics based only on the focusing microscopic image of the surface to be detected of the sample to be detected has low accuracy.

Description

Blood cell automatic analysis method, system, blood cell analyzer and storage medium
Technical Field
The invention relates to the technical field of image recognition, in particular to a blood cell automatic analysis method and system, a blood cell analyzer and a computer readable storage medium.
Background
At present, blood cells in a blood sample are counted by an inspector on the basis of a blood smear: the blood smear is placed under a microscope, and a traditional machine vision method is then used to perform cell identification and statistics on the best-focused microscopic image of the blood smear observed under the microscope, so as to obtain the final number of blood cells in the blood smear. However, because the blood sample in the blood smear has a certain thickness, some positions in the blood sample are far away from the surface to be detected, and blood cells at these positions may not appear in the focused microscopic image of the surface to be detected. As a result, the number of blood cells counted by cell identification and statistics based on the focused microscopic image of the surface to be detected is smaller than the actual number of blood cells, and the accuracy is not high.
Disclosure of Invention
The invention mainly aims to provide a blood cell automatic analysis method, a blood cell automatic analysis system, a blood cell analyzer and a computer readable storage medium, and aims to solve the problem that the accuracy of the number of blood cells counted by a cell identification and statistical method based on a focused microscopic image of a surface to be detected of a sample to be detected is low in the prior art.
In order to achieve the above object, the present invention provides an automatic blood cell analysis method, comprising the steps of:
acquiring a front focusing microscopic image and a back focusing microscopic image of a sample to be detected;
respectively inputting the front side focusing microscopic image and the back side focusing microscopic image into a trained mask-RCNN model, and outputting example segmentation images respectively corresponding to the front side focusing microscopic image and the back side focusing microscopic image, wherein the example segmentation images comprise classification results and masks;
and counting the number of the targets to be detected in the sample to be detected according to the classification result and the mask in the example segmentation image of the front focusing microscopic image and the classification result and the mask in the example segmentation image of the back focusing microscopic image.
Optionally, the step of inputting the front-side focusing microscope image and the back-side focusing microscope image into a trained mask-RCNN model, and outputting example segmentation maps corresponding to the front-side focusing microscope image and the back-side focusing microscope image, respectively, where the example segmentation maps include classification results and masks includes:
respectively inputting the front side focusing microscopic image and the back side focusing microscopic image into a feature extraction network of a trained mask-RCNN model, and outputting a corresponding first feature map and a corresponding second feature map;
respectively inputting the first feature map and the second feature map into an RPN network, and correspondingly obtaining a first ROI and a second ROI;
pooling and pixel aligning the first ROI with a first feature map to obtain a first ROI feature map;
pooling and pixel aligning the second ROI with a second feature map to obtain a second ROI feature map;
and respectively inputting the first ROI feature map and the second ROI feature map into an FCN (fully convolutional network) to perform frame regression, classification and mask generation, and outputting example segmentation maps respectively corresponding to the front-side focusing microscopic image and the back-side focusing microscopic image, wherein the example segmentation maps comprise a classification result and a mask of the target to be detected.
Optionally, the step of counting the number of the targets to be detected in the sample to be detected according to the classification result and the mask in the example segmentation map of the front-side focusing microscope image and the classification result and the mask in the example segmentation map of the back-side focusing microscope image includes:
respectively counting the number M1 of masks of which the classification result is the target to be detected in the example segmentation image of the front focusing microscopic image and the number M2 of masks of which the classification result is the target to be detected in the example segmentation image of the back focusing microscopic image according to the classification results in the example segmentation images corresponding to the front focusing microscopic image and the back focusing microscopic image;
carrying out coordinate alignment on the example segmentation drawing of the front focusing microscopic image and the example segmentation drawing of the back focusing microscopic image, and calculating the overlapping area ratio of each mask belonging to the target to be detected in the example segmentation drawing of the front focusing microscopic image to the mask belonging to the target to be detected at the corresponding position in the example segmentation drawing of the back focusing microscopic image;
counting the number M3 of the overlapping area ratios larger than a preset threshold value from the calculated overlapping area ratios;
obtaining the number M of targets to be detected in the sample to be detected according to M1, M2 and M3, wherein M = M1 + M2 - M3.
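As a purely illustrative worked example (the figures below are hypothetical and not taken from the embodiment): if M1 = 120 masks classified as the target to be detected are counted in the example segmentation map of the front-side focusing microscopic image, M2 = 115 in that of the back-side focusing microscopic image, and M3 = 30 overlapping area ratios exceed the preset threshold, then M = 120 + 115 - 30 = 205.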
Optionally, before the step of acquiring a front-side focusing microscopic image and a back-side focusing microscopic image of the sample to be detected, the method further comprises:
acquiring a focused microscopic image of a blood sample as a training sample, labeling the training sample, and obtaining an expected mask, an expected classification result and an expected frame;
and training the RPN and the FCN in the mask-RCNN model to be trained according to the obtained expected mask, the expected classification result, the expected frame and the training sample to obtain the trained mask-RCNN model.
Optionally, the training of the RPN network and the FCN network in the mask-RCNN model to be trained according to the obtained expected mask, the expected classification result, the expected frame, and the training sample, and the step of obtaining the trained mask-RCNN model includes:
inputting the training sample into a feature extraction network of a mask-RCNN model to be trained to obtain a feature map corresponding to the training sample;
inputting the feature map of the training sample into an RPN network to be trained to obtain a training frame, and obtaining an expected offset according to the training frame and an expected frame;
inputting the training frame into a classification layer of the RPN network to be trained for classification training, inputting the training frame into a regression layer for regression training, and respectively outputting a first prediction classification result and a first prediction offset of the training frame;
constructing a first loss function according to the first prediction offset, the first prediction classification result, the expected classification result and the expected offset of the training frame;
performing iterative training on the RPN to be trained in a reverse propagation mode according to the first loss function to obtain a trained RPN;
inputting the feature map of the training sample into the trained RPN network to obtain a predicted ROI;
pooling and pixel alignment are carried out on the predicted ROI and the training samples, and a predicted ROI feature map is obtained;
inputting the characteristic map of the predicted ROI into the FCN to be trained, and outputting a second predicted classification result, a second predicted offset and a predicted mask of the predicted ROI;
constructing a second loss function according to a second prediction classification result, a second prediction offset, a prediction mask, an expected classification result, an expected offset and an expected mask of the prediction ROI;
and according to the second loss function, performing iterative training on the FCN network to be trained by adopting a back propagation mode and a gradient descent algorithm to obtain the trained FCN network.
Optionally, the first loss function is:

$$L_1=\frac{1}{N_{cls}}\sum_{i=1}^{N_1}L_{cls}(p_i,p_i^{*})+\lambda\frac{1}{N_{reg}}\sum_{i=1}^{N_1}p_i^{*}L_{reg}(t_i,t_i^{*})$$

wherein L_1 is the first loss function value; N_cls, λ and N_reg are predetermined coefficients; N_1 is the number of training frames; L_cls(p_i, p_i^*) is the classification log-loss value of the ith training frame; L_reg(t_i, t_i^*) is the regression loss value of the ith training frame; p_i is the probability that the ith training frame belongs to the target to be detected in its first prediction classification result; p_i^* is the preset value corresponding to the expected classification result of the ith training frame; t_i is the first prediction offset of the ith training frame; and t_i^* is the expected offset of the ith training frame.
Optionally, the second loss function is:

$$L_2=\frac{1}{N_{cls}}\sum_{i=1}^{N_2}L_{cls}(P_i,P_i^{*})+\lambda\frac{1}{N_{reg}}\sum_{i=1}^{N_2}P_i^{*}L_{reg}(T_i,T_i^{*})+L_{mask}$$

wherein L_2 is the second loss function value; N_2 is the number of predicted ROIs; N_cls, λ and N_reg are predetermined coefficients; L_cls(P_i, P_i^*) is the classification log-loss value of the ith predicted ROI; L_reg(T_i, T_i^*) is the regression loss value of the ith predicted ROI; L_mask is the mask loss function value; P_i is the probability that the ith predicted ROI belongs to the target to be detected in its second prediction classification result; P_i^* is the preset value corresponding to the expected classification result of the ith predicted ROI; T_i is the second prediction offset of the ith predicted ROI; and T_i^* is the expected offset of the ith predicted ROI.
In addition, to achieve the above object, the present invention provides an automatic blood cell analysis system, comprising:
the acquisition module is used for acquiring a front focusing microscopic image and a back focusing microscopic image of a sample to be detected;
the input module is used for respectively inputting the front focusing microscopic image and the back focusing microscopic image into a trained mask-RCNN model and outputting example segmentation images respectively corresponding to the front focusing microscopic image and the back focusing microscopic image, wherein the example segmentation images comprise classification results and masks;
and the counting module is used for counting the number of the targets to be detected in the sample to be detected according to the classification result and the mask in the example segmentation image of the front focusing microscopic image and the classification result and the mask in the example segmentation image of the back focusing microscopic image.
In order to achieve the above object, the present invention also provides a blood cell analyzer including a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the automatic blood cell analysis method as described above.
To achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, implements the steps of the automatic blood cell analysis method as described above.
The invention provides an automatic blood cell analysis method, an automatic blood cell analysis system, a blood cell analyzer and a computer readable storage medium. A front-side focusing microscopic image and a back-side focusing microscopic image of a sample to be detected are obtained; the front-side focusing microscopic image and the back-side focusing microscopic image are respectively input into a trained mask-RCNN model, and example segmentation maps respectively corresponding to the two images are output, wherein the example segmentation maps comprise classification results and masks; and the number of targets to be detected in the sample to be detected is counted according to the classification result and the mask in the example segmentation map of the front-side focusing microscopic image and the classification result and the mask in the example segmentation map of the back-side focusing microscopic image. Because the sample to be detected has a certain thickness, not all cells in it lie at the same height: some cells are close to the surface to be detected, while others are far from the surface to be detected and close to the opposite surface. If microscopic observation and cell identification statistics are performed only on the surface to be detected of the sample to be detected, cells far from that surface may not be observed. Therefore, the surface to be detected and the opposite surface are both observed under the microscope, and cell identification and statistics are performed on the two observed images, so that the counted number of blood cells is closer to the actual number of blood cells in the sample to be detected and the result is more accurate.
Drawings
FIG. 1 is a schematic diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a first embodiment of the automatic blood cell analysis method according to the present invention;
FIG. 3 is a detailed flowchart of step S30 in the second embodiment of the method for automatically analyzing blood cells according to the present invention;
FIG. 4 is a schematic flow chart illustrating a third embodiment of the automatic blood cell analysis method according to the present invention;
FIG. 5 is a flowchart illustrating a step S50 of the method for automatically analyzing blood cells according to the present invention;
FIG. 6 is a functional block diagram of the automatic blood cell analysis system according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic diagram of a hardware structure of a blood cell analyzer according to various embodiments of the present invention. The blood cell analyzer includes a communication module 100, a memory 200, a processor 300, and the like. Those skilled in the art will appreciate that the blood cell analyzer shown in FIG. 1 may include more or fewer components than shown, or combine certain components, or have a different arrangement of components. The processor 300 is connected to the memory 200 and the communication module 100, respectively, and the memory 200 stores a computer program that is executed by the processor 300.
The communication module 100 may be connected to an external device through a network. The communication module 100 may receive data sent by an external device, and may also send data, instructions, and information to the external device, where the external device may be an electronic device such as a tablet computer, a notebook computer, and a desktop computer.
The memory 200 may be used to store software programs and various data. The memory 200 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (for example, acquiring a focusing microscopic image of the front side of the sample to be detected), and the like; the data storage area may store data or information created during use of the blood cell analyzer, and the like. Further, the memory 200 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 300, which is a control center of the blood cell analyzer, connects various parts of the whole blood cell analyzer using various interfaces and lines, and performs various functions of the blood cell analyzer and processes data by operating or executing software programs and/or modules stored in the memory 200 and calling data stored in the memory 200, thereby performing overall monitoring of the blood cell analyzer. Processor 300 may include one or more processing units; preferably, the processor 300 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 300.
Although not shown in fig. 1, the blood cell analyzer may further include a circuit control module, which is used for being connected to a mains supply to implement power control and ensure normal operation of other components.
It will be understood by those skilled in the art that the configuration of the blood cell analyzer shown in FIG. 1 does not constitute a limitation of the blood cell analyzer and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
Various embodiments of the method of the present invention are presented in terms of the above-described hardware architecture.
Referring to fig. 2, in a first embodiment of the automatic blood cell analysis method according to the present invention, the automatic blood cell analysis method includes the steps of:
step S10, acquiring a front focusing microscopic image and a back focusing microscopic image of a sample to be detected;
In this scheme, the blood to be detected is placed in a transparent detection slide to form the sample to be detected, and the target to be detected may be red blood cells, platelets, white blood cells or the like in the blood. The front side of the sample to be detected is placed on the object stage of the hematology analyzer, and the object stage or the objective lens of the hematology analyzer is moved so that the front side of the sample to be detected is at the best focusing position, thereby obtaining the front-side focusing microscopic image of the sample to be detected observed under the microscope. The sample to be detected is then turned over by 180 degrees, and the object stage or the objective lens is moved so that the back side of the sample to be detected is at the best focusing position, thereby obtaining the back-side focusing microscopic image of the sample to be detected observed under the microscope.
Step S20, inputting the front focusing microscopic image and the back focusing microscopic image into a trained mask-RCNN model respectively, and outputting example segmentation images corresponding to the front focusing microscopic image and the back focusing microscopic image respectively, wherein the example segmentation images comprise classification results and masks;
and respectively inputting the obtained front focusing microscopic image and the back focusing microscopic image of the sample to be detected into a trained mask-RCNN model, and respectively outputting example segmentation graphs corresponding to the front focusing microscopic image and the back focusing microscopic image, wherein the example segmentation graphs comprise classification results and masks of the target to be detected.
Specifically, the step S20 includes:
step S21, respectively inputting the front side focusing microscopic image and the back side focusing microscopic image into a feature extraction network of a trained mask-RCNN model, and outputting a corresponding first feature map and a corresponding second feature map;
The front-side focusing microscopic image and the back-side focusing microscopic image are respectively used as input of the feature extraction network of the trained mask-RCNN model. The feature extraction network may be a ResNet residual network, such as the two widely used network models ResNet-50 and ResNet-101; the number of network layers in the ResNet residual network may also be reduced as required, for example a ResNet-20 with 20 network layers may be used. Convolution and pooling are performed multiple times through the convolution layers and pooling layers in the feature extraction network, and finally a first feature map corresponding to the front-side focusing microscopic image and a second feature map corresponding to the back-side focusing microscopic image are output respectively.
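The feature-extraction step described above can be sketched as follows; this is a hedged illustration only, assuming a recent PyTorch/torchvision environment and a ResNet-50 backbone, and the tensor sizes and variable names are placeholders rather than values from the patent.

```python
# A minimal sketch (not the patented implementation) of the feature-extraction step,
# assuming a ResNet-50 backbone truncated before its classification head.
import torch
import torchvision

backbone = torchvision.models.resnet50(weights=None)
# Keep everything up to the last convolutional stage; drop avgpool/fc so the
# output is a spatial feature map rather than a class vector.
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])
feature_extractor.eval()

front_image = torch.rand(1, 3, 800, 800)   # placeholder front-side focusing image
back_image = torch.rand(1, 3, 800, 800)    # placeholder back-side focusing image

with torch.no_grad():
    first_feature_map = feature_extractor(front_image)    # e.g. [1, 2048, 25, 25]
    second_feature_map = feature_extractor(back_image)
```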
Step S22, inputting the first feature map and the second feature map into the RPN network respectively, and correspondingly obtaining a first ROI and a second ROI;
the method comprises the steps of inputting a first feature map and a second feature map into a trained RPN network respectively, firstly, scanning the input first feature map and the input second feature map by the RPN network through a sliding window respectively, and generating 9 kinds of suggested frames with preset length-width ratios and areas by taking a pixel point as a center for each pixel point on the first feature map and the second feature map, wherein the 9 kinds of suggested frames comprise three areas (128 x 128, 256 x 256 and 512 x 512), and each area comprises three length-width ratios (1:1, 1:2 and 2: 1).
All the suggested frames generated for the first feature map and the second feature map are respectively input into the frame regression branch of the trained RPN, and a first frame regression, i.e. a first frame coordinate correction, is performed on them. At the same time, all the suggested frames generated for the first feature map and the second feature map are respectively input into the classification branch of the trained RPN, the features enclosed in the suggested frames are recognized for the first time to complete the first classification, and for each suggested frame the probability value of belonging to the target to be detected and the probability value of not belonging to the target to be detected are output; the category with the maximum probability value is taken as the category of the suggested frame. The suggested frames belonging to the target to be detected are then screened by a Non-Maximum Suppression (NMS) algorithm, so as to obtain a first ROI corresponding to the first feature map and a second ROI corresponding to the second feature map respectively.
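As a minimal illustration of the suggested-frame generation just described (a sketch under the stated 3-area by 3-aspect-ratio assumption; the function name and coordinate convention are ours, not the patent's):

```python
# Generate the 9 suggested frames (3 areas x 3 aspect ratios) centred on one pixel.
import numpy as np

def anchors_for_point(cx, cy):
    """Return 9 boxes (x1, y1, x2, y2) centred at (cx, cy)."""
    boxes = []
    for area in (128 * 128, 256 * 256, 512 * 512):
        for ratio in (1.0, 0.5, 2.0):          # height:width ratios of 1:1, 1:2, 2:1
            w = np.sqrt(area / ratio)
            h = w * ratio
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)

print(anchors_for_point(400, 300).shape)       # (9, 4)
```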
Step S23, pooling and pixel aligning the first ROI with the first feature map to obtain a first ROI feature map;
pooling and pixel alignment are carried out on the obtained first ROI and the first characteristic map by adopting a bilinear interpolation algorithm, so that the characteristic map of the first ROI is obtained.
Step S24, pooling and pixel aligning the second ROI with a second feature map to obtain a second ROI feature map;
Pooling and pixel alignment are carried out on the obtained second ROI and the second feature map by adopting a bilinear interpolation algorithm, so as to obtain the feature map of the second ROI.
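The "pooling and pixel alignment" of steps S23 and S24 can be sketched with torchvision's roi_align operator, which performs RoIAlign with bilinear interpolation; the feature-map size, stride and ROI coordinates below are assumed placeholders, not values from the patent.

```python
# A hedged sketch of pooling and pixel alignment via bilinear-interpolation RoIAlign.
import torch
from torchvision.ops import roi_align

second_feature_map = torch.rand(1, 256, 50, 50)            # assumed feature map
# ROIs given as (batch_index, x1, y1, x2, y2) in input-image coordinates.
second_rois = torch.tensor([[0., 100., 120., 260., 300.],
                            [0., 400., 380., 520., 500.]])
# spatial_scale maps image coordinates onto the 50x50 feature map (assumed stride 16).
second_roi_features = roi_align(second_feature_map, second_rois,
                                output_size=(7, 7), spatial_scale=1.0 / 16,
                                sampling_ratio=2, aligned=True)
print(second_roi_features.shape)                            # torch.Size([2, 256, 7, 7])
```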
And step S25, inputting the first ROI feature map and the second ROI feature map into an FCN (fully convolutional network) respectively to perform frame regression, classification and mask generation, and outputting example segmentation maps corresponding to the front-side focusing microscopic image and the back-side focusing microscopic image respectively, wherein the example segmentation maps comprise classification results and masks of the target to be detected.
The obtained first ROI feature maps are input into the frame regression branch of the trained FCN network for a second frame regression, i.e. a second frame coordinate correction; at the same time they are input into the classification branch for a second recognition to complete the second classification of all first ROIs, and into the mask generation branch, where the parts recognized as the target to be detected are mask-processed to obtain the masks of the targets to be detected. Finally, the example segmentation map of the sample to be detected corresponding to the front-side focusing microscopic image is output, which comprises information such as the classification frames, classification results and masks of the targets to be detected.
Likewise, the obtained second ROI feature maps are input into the frame regression branch of the trained FCN network for a second frame regression, i.e. a second frame coordinate correction; at the same time they are input into the classification branch for a second recognition to complete the second classification of all second ROIs, and into the mask generation branch, where the parts recognized as the target to be detected are mask-processed to obtain the masks of the targets to be detected. Finally, the example segmentation map of the sample to be detected corresponding to the back-side focusing microscopic image is output, which comprises information such as the classification frames, classification results and masks of the targets to be detected.
And step S30, counting the number of the targets to be detected in the sample to be detected according to the classification result and the mask in the example segmentation image of the front focusing microscopic image and the classification result and the mask in the example segmentation image of the back focusing microscopic image.
The number of masks M1 whose classification result is the target to be detected is counted according to the classification results in the example segmentation map of the front-side focusing microscopic image.
The number of masks M2 whose classification result is the target to be detected is counted according to the classification results in the example segmentation map of the back-side focusing microscopic image.
The sum of M1 and M2 can then be taken as the number of targets to be detected in the sample to be detected.
In this embodiment, a front-side focusing microscopic image and a back-side focusing microscopic image of the sample to be detected are obtained; the front-side focusing microscopic image and the back-side focusing microscopic image are respectively input into a trained mask-RCNN model, and example segmentation maps respectively corresponding to the two images are output, wherein the example segmentation maps comprise classification results and masks; and the number of targets to be detected in the sample to be detected is counted according to the classification result and the mask in the example segmentation map of the front-side focusing microscopic image and the classification result and the mask in the example segmentation map of the back-side focusing microscopic image. The sample to be detected has a certain thickness, and not all cells in it lie at the same height: some cells are close to the surface to be detected, while others are far from the surface to be detected and close to the opposite surface. If microscopic observation and cell identification statistics are performed only on the surface to be detected of the sample to be detected, cells far from that surface may not be observed. Therefore, the surface to be detected and the opposite surface are both observed under the microscope, and cell identification and statistics are performed on the two observed images, so that the counted number of blood cells is closer to the actual number of blood cells in the sample to be detected and the result is more accurate.
Further, referring to fig. 3, a second embodiment of the automatic blood cell analysis method according to the present application is proposed according to the first embodiment of the automatic blood cell analysis method according to the present application, and in this embodiment, step S30 includes:
step S31, respectively counting the number M1 of masks of which the classification results are the targets to be detected in the example segmentation image of the front focusing microscopic image and the number M2 of masks of which the classification results are the targets to be detected in the example segmentation image of the back focusing microscopic image according to the classification results in the example segmentation images corresponding to the front focusing microscopic image and the back focusing microscopic image;
step S32, carrying out coordinate alignment on the example segmentation drawing of the front focusing microscopic image and the example segmentation drawing of the back focusing microscopic image, and calculating the overlapping area ratio of each mask belonging to the target to be detected in the example segmentation drawing of the front focusing microscopic image to the mask belonging to the target to be detected at the corresponding position in the example segmentation drawing of the back focusing microscopic image;
step S33, counting the number M3 of the overlapping area ratios larger than a preset threshold value from the calculated overlapping area ratios;
step S34, obtaining the number M of the targets to be detected in the sample to be detected according to M1, M2 and M3, wherein M is M1+ M2-M3.
In this embodiment, the number of masks M1 whose classification result is the target to be detected is counted according to the classification results in the example segmentation map of the front-side focusing microscopic image.
The number of masks M2 whose classification result is the target to be detected is counted according to the classification results in the example segmentation map of the back-side focusing microscopic image.
Because the sample to be detected is thin, certain cells in it can be observed both in the front-side focusing microscopic image and in the back-side focusing microscopic image of the sample to be detected. The example segmentation map of the front-side focusing microscopic image and the example segmentation map of the back-side focusing microscopic image are therefore coordinate-aligned and superimposed, and for each mask identified as the target to be detected in the example segmentation map of the front-side focusing microscopic image, the overlapping area ratio with the mask identified as the target to be detected at the corresponding position in the example segmentation map of the back-side focusing microscopic image is calculated; if there is no mask belonging to the target to be detected at the corresponding position in the example segmentation map of the back-side focusing microscopic image, the overlapping area ratio is 0. By setting a threshold, a pair of masks whose overlapping area ratio is larger than the preset threshold is regarded as the same cell observed in both the front-side and the back-side focusing microscopic images, so the number M3 of overlapping area ratios larger than the preset threshold is counted from the calculated overlapping area ratios.
Finally, the number M3 is subtracted from the sum of the number of masks M1 whose classification result is the target to be detected in the example segmentation map of the front-side focusing microscopic image and the number of masks M2 whose classification result is the target to be detected in the example segmentation map of the back-side focusing microscopic image; that is, M = M1 + M2 - M3 is taken as the number M of targets to be detected in the sample to be detected.
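A minimal sketch of this counting rule is given below; it assumes the masks have already been coordinate-aligned onto a common grid, and it interprets the "overlapping area ratio" as intersection over union, which the description does not fix precisely.

```python
# Count targets across front- and back-side example segmentation maps,
# removing cells that are visible on both sides (M = M1 + M2 - M3).
import numpy as np

def count_targets(front_masks, back_masks, threshold=0.5):
    """front_masks/back_masks: lists of HxW boolean arrays classified as the target."""
    m1, m2 = len(front_masks), len(back_masks)
    m3 = 0
    for fm in front_masks:
        best_ratio = 0.0
        for bm in back_masks:
            inter = np.logical_and(fm, bm).sum()
            union = np.logical_or(fm, bm).sum()
            ratio = inter / union if union else 0.0   # overlapping area ratio (IoU-style)
            best_ratio = max(best_ratio, ratio)
        if best_ratio > threshold:                    # same cell seen on both sides
            m3 += 1
    return m1 + m2 - m3
```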
In this embodiment, the example segmentation map of the front-side focusing microscopic image and the example segmentation map of the back-side focusing microscopic image are superimposed and analyzed, and the number of masks that belong to the same target to be detected in the two example segmentation maps is determined, so that targets appearing in both example segmentation maps are not counted twice, which further improves the accuracy of the counting result for the targets to be detected in the sample to be detected.
Further, referring to fig. 4, a third embodiment of the automatic blood cell analysis method according to the present application is proposed according to the first embodiment of the automatic blood cell analysis method according to the present application, and in this embodiment, step S10 is preceded by:
step S40, acquiring a focused microscopic image of the blood sample as a training sample, labeling the training sample, and acquiring an expected mask, an expected classification result and an expected frame;
In this embodiment, focused microscopic images of blood samples are collected as training samples, and the targets to be detected in each focused microscopic image are labeled. The labeling process includes assigning a class label to each target to be detected, generating a mask for each target to be detected, and generating a frame for each target to be detected, so as to obtain the expected mask, expected classification result and expected frame. In principle, the more samples in the data set the better; in practice, however, labeling focused microscopic images of blood samples consumes a lot of time and effort. Considering time cost and sample number, a small number of focused microscopic images of blood samples can be collected and then expanded by image enhancement, where the image enhancement means include: random crop, horizontal mirror, vertical mirror, flip, etc.
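A hedged sketch of this data-expansion step is shown below, assuming the images are handled as NumPy arrays; in practice the expected masks and frames would have to be transformed together with the image, which is omitted here.

```python
# Simple data expansion: random crop, horizontal/vertical mirror, and 180-degree flip.
import numpy as np

def augment(image, rng=np.random.default_rng()):
    """Return a list of augmented copies of an HxWxC focused microscopic image."""
    h, w = image.shape[:2]
    top, left = rng.integers(0, h // 4), rng.integers(0, w // 4)
    random_crop = image[top:top + 3 * h // 4, left:left + 3 * w // 4]
    horizontal_mirror = image[:, ::-1]
    vertical_mirror = image[::-1, :]
    flipped = image[::-1, ::-1]          # 180-degree flip
    return [random_crop, horizontal_mirror, vertical_mirror, flipped]
```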
And step S50, training the RPN and the FCN in the mask-RCNN model to be trained according to the obtained expected mask, the expected classification result, the expected frame and the training sample, and obtaining the trained mask-RCNN model.
Inputting the training samples into a mask-RCNN model to be trained, obtaining a prediction result output by the RPN network to be trained, wherein the prediction result comprises a prediction classification result and a prediction frame, and the prediction result output by the FCN network to be trained comprises a prediction classification result, a prediction frame and a prediction mask, and then training the RPN network to be trained and the FCN network to be trained according to the expected mask, the expected classification result, the expected frame, the prediction result output by the RPN network to be trained and the prediction result output by the FCN network to be trained, so as to obtain the trained RPN network and the trained FCN network. In this embodiment, a sequential training mode may be adopted, that is, the RPN network is trained first, and then the FCN network is trained on the basis of the trained RPN network, or the RPN network and the FCN network may be trained by using an approximate joint training method, or the RPN network and the FCN network may be trained by using a joint training method, or the RPN network and the FCN network may be trained by using an alternate training method.
Specifically, referring to fig. 5, fig. 5 is a schematic flow chart of a fourth embodiment of the method for automatically analyzing blood cells according to the present invention, based on the third embodiment, and based on the above embodiments, the step S50 includes:
step S51, inputting the training sample into a feature extraction network of the mask-RCNN model to be trained, and obtaining a feature map corresponding to the training sample;
The training samples are input into the feature extraction network of the mask-RCNN model to be trained, where the feature extraction network is a pre-trained feature extraction network. Convolution and pooling are performed multiple times through the convolution layers and pooling layers in the feature extraction network, and the feature maps corresponding to the samples are finally output. The feature extraction network may be a ResNet residual network, for example the two widely used network models ResNet-50 and ResNet-101; of course, the number of network layers in the ResNet residual network may also be reduced as required, for example a ResNet-20 with 20 network layers may be used.
Step S52, inputting the feature map of the training sample into the RPN network to be trained to obtain a training frame, and obtaining an expected offset according to the training frame and the expected frame;
inputting the characteristic map of the sample into an RPN network to be trained, scanning the characteristic map by the RPN network to be trained by adopting a sliding window, and generating 9 kinds of suggested frames with preset length-width ratios and areas by taking the pixel point as the center for each pixel point, wherein the 9 kinds of suggested frames comprise three areas (128 multiplied by 128, 256 multiplied by 256 and 512 multiplied by 512), and each area comprises three length-width ratios (1:1, 1:2 and 2: 1).
The overlapping degree (IoU) of each suggested frame with the correspondingly labeled target frame in the focused microscopic image of the sample is calculated from the areas and positions of all the generated suggested frames, and the suggested frames whose overlapping degree is larger than a first preset threshold, together with those whose overlapping degree is smaller than a second preset threshold, are selected as training frames, where 1 > first preset threshold ≥ second preset threshold. Typically, the first preset threshold is 0.7 and the second preset threshold is 0.3.
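The IoU-based selection can be sketched as follows (a simplified illustration with the 0.7/0.3 thresholds mentioned above; the function names and box convention are ours):

```python
# Select training frames by IoU with labeled target frames; boxes are (x1, y1, x2, y2).
import numpy as np

def iou(box_a, box_b):
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def select_training_frames(proposals, target_boxes, hi=0.7, lo=0.3):
    """Keep proposals whose best IoU with any labeled target frame is > hi
    (positive samples) or < lo (negative samples)."""
    positives, negatives = [], []
    for p in proposals:
        best = max(iou(p, t) for t in target_boxes)
        if best > hi:
            positives.append(p)
        elif best < lo:
            negatives.append(p)
    return positives, negatives
```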
Then, the expected offset of each training frame is obtained according to the position coordinate information of the training frame and the position coordinate information of the corresponding expected frame obtained previously.
Step S53, inputting the training frame into the classification layer of the RPN network to be trained for classification training and inputting the training frame into the regression layer for regression training, and respectively outputting a first prediction classification result and a first prediction offset of the training frame;
inputting all the screened training frames into a classification branch of an RPN network to be trained, carrying out first recognition on the features enclosed in all the training frames, finishing first classification, and outputting a first prediction classification result, wherein the first prediction classification result comprises a probability value of each training frame belonging to a target to be detected and a probability value of each training frame belonging to a target not to be detected, the sum of the probability values of the targets belonging to the targets to be detected and the probability values of the targets belonging to the targets not to be detected is 1, meanwhile, inputting the training frames screened from all the suggested frames into a regression branch of the RPN network to be trained, carrying out first frame regression on all the training frames, namely, correcting first frame coordinates, and outputting first prediction offset of each training frame.
Step S54, constructing a first loss function according to the first prediction offset, the first prediction classification result, the expected classification result and the expected offset of the training frame;
step S55, according to the first loss function, iterative training is carried out on the RPN to be trained in a reverse propagation mode, and the trained RPN is obtained;
In constructing the first loss function according to the first prediction offset, the first prediction classification result, the expected classification result and the expected offset of each training frame, the first loss function is:

$$L_1=\frac{1}{N_{cls}}\sum_{i=1}^{N_1}L_{cls}(p_i,p_i^{*})+\lambda\frac{1}{N_{reg}}\sum_{i=1}^{N_1}p_i^{*}L_{reg}(t_i,t_i^{*})$$

wherein L_1 is the first loss function value; N_cls, λ and N_reg are predetermined coefficients; N_1 is the number of training frames; L_cls(p_i, p_i^*) is the classification log-loss value of the ith training frame; L_reg(t_i, t_i^*) is the regression loss value of the ith training frame; p_i is the probability that the ith training frame belongs to the target to be detected in its first prediction classification result; p_i^* is the preset value corresponding to the expected classification result of the ith training frame; t_i is the first prediction offset of the ith training frame; and t_i^* is the expected offset of the ith training frame.

p_i^* = 1 if the expected classification result of the ith training frame is the target to be detected, and p_i^* = 0 if the expected classification result of the ith training frame is not the target to be detected.

The regression loss function is L_reg(t_i, t_i^*) = R(t_i - t_i^*), where R is the smooth L1 function:

$$R(x)=\begin{cases}0.5x^{2}, & |x|<1\\ |x|-0.5, & \text{otherwise}\end{cases}$$

Here t_i = (t_x, t_y, t_w, t_h) and t_i^* = (t_x^*, t_y^*, t_w^*, t_h^*), where t_x is the predicted offset of the center of the ith training frame in the x direction, t_y is the predicted offset of the center of the ith training frame in the y direction, t_w is the predicted magnification of the width of the ith training frame, t_h is the predicted magnification of the length of the ith training frame, t_x^* is the expected offset in the x direction from the center of the ith training frame to the center of the corresponding target frame, t_y^* is the expected offset in the y direction from the center of the ith training frame to the center of the corresponding target frame, t_w^* is the expected magnification of the width of the ith training frame, and t_h^* is the expected magnification of the length of the ith training frame.
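The smooth L1 function and the offset/magnification encoding described above can be sketched as follows; the log-scale encoding of the magnifications follows the standard Faster R-CNN parameterisation and is an assumption, since the patent text only names t_w and t_h as magnifications.

```python
# Smooth L1 regression loss and a standard (tx, ty, tw, th) frame encoding.
import numpy as np

def smooth_l1(x):
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)

def encode_offsets(frame, target):
    """frame/target: (cx, cy, w, h). Returns (tx, ty, tw, th)."""
    tx = (target[0] - frame[0]) / frame[2]   # center offset in x, relative to width
    ty = (target[1] - frame[1]) / frame[3]   # center offset in y, relative to height
    tw = np.log(target[2] / frame[2])        # width magnification (log scale, assumed)
    th = np.log(target[3] / frame[3])        # height magnification (log scale, assumed)
    return np.array([tx, ty, tw, th])

def reg_loss(t_pred, t_expected):
    return smooth_l1(t_pred - t_expected).sum()
```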
And performing iterative training on the RPN by adopting a back propagation mode and a gradient descent algorithm according to the first loss function to obtain the trained RPN.
Step S56, inputting the feature map of the training sample into the trained RPN network to obtain a predicted ROI;
The feature map of the sample output by the feature extraction network is input into the trained RPN network. The trained RPN network first scans the feature map with a sliding window and, for each pixel point, generates 9 kinds of suggested frames with preset aspect ratios and areas centered on that pixel point, the 9 kinds of suggested frames comprising three areas (128 × 128, 256 × 256 and 512 × 512), each with three aspect ratios (1:1, 1:2 and 2:1).
All the generated suggested frames are input into the frame regression branch of the trained RPN for a first frame regression, i.e. a first frame coordinate correction. At the same time, all the generated suggested frames are input into the classification branch of the trained RPN, the features enclosed in the suggested frames are recognized for the first time, and for each suggested frame the probability value of belonging to the target to be detected and the probability value of not belonging to the target to be detected are output, the sum of these two probability values being equal to 1; the category with the maximum probability value is taken as the category of the suggested frame. The suggested frames belonging to the target to be detected are then screened with a Non-Maximum Suppression (NMS) algorithm to obtain the predicted ROIs.
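A minimal NumPy sketch of the Non-Maximum Suppression step named above (standard greedy NMS; the IoU threshold is an assumed placeholder):

```python
# Greedy NMS over (x1, y1, x2, y2) boxes and their target-class probability scores.
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    boxes, scores = np.asarray(boxes, float), np.asarray(scores, float)
    order = scores.argsort()[::-1]                 # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = boxes[order[1:]]
        x1 = np.maximum(boxes[i, 0], rest[:, 0])
        y1 = np.maximum(boxes[i, 1], rest[:, 1])
        x2 = np.minimum(boxes[i, 2], rest[:, 2])
        y2 = np.minimum(boxes[i, 3], rest[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (rest[:, 2] - rest[:, 0]) * (rest[:, 3] - rest[:, 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_threshold]    # drop boxes overlapping the kept one
    return keep
```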
Step S57, pooling and pixel alignment are carried out on the predicted ROI and the training sample, and a predicted ROI feature map is obtained;
Pooling and pixel alignment are carried out on the obtained predicted ROIs and the feature map of the training sample by adopting a bilinear interpolation algorithm, so as to obtain the feature map of each predicted ROI.
Step S58, inputting the characteristic map of the predicted ROI into the FCN to be trained, and outputting a second prediction classification result, a second prediction offset and a prediction mask of the predicted ROI;
The obtained feature maps of the predicted ROIs are respectively input into the frame regression branch of the FCN network to be trained for a second frame regression, i.e. a second frame coordinate correction, so as to output the second prediction offset of each predicted ROI. At the same time, the feature maps of the predicted ROIs are input into the classification branch for a second recognition to complete the second classification of the predicted ROIs, and the second prediction classification result of each predicted ROI is output, which comprises the probability value that the predicted ROI belongs to the target to be detected and the probability value that it does not belong to the target to be detected, the sum of these two probability values being 1; the result corresponding to the maximum probability value is taken as the recognition result of the predicted ROI. The feature maps are also input into the mask generation branch, where the predicted ROIs are mask-processed so as to generate the prediction mask of each predicted ROI.
Step S59, constructing a second loss function according to the second prediction classification result, the second prediction offset, the prediction mask, the expected classification result, the expected offset and the expected mask of the prediction ROI;
and step S60, performing iterative training on the FCN network to be trained by adopting a back propagation mode and a gradient descent algorithm according to the second loss function to obtain the trained FCN network.
Constructing a second loss function according to the second prediction classification result, the second prediction offset, the prediction mask, the expected classification result, the expected offset and the expected mask of the prediction ROI:
$$L_2=\frac{1}{N_{cls}}\sum_{i=1}^{N_2}L_{cls}(P_i,P_i^{*})+\lambda\frac{1}{N_{reg}}\sum_{i=1}^{N_2}P_i^{*}L_{reg}(T_i,T_i^{*})+L_{mask}$$

wherein L_2 is the second loss function value; N_2 is the number of predicted ROIs; N_cls, λ and N_reg are predetermined coefficients; L_cls(P_i, P_i^*) is the classification log-loss value of the ith predicted ROI; L_reg(T_i, T_i^*) is the regression loss value of the ith predicted ROI; L_mask is the mask loss function value; P_i is the probability that the ith predicted ROI belongs to the target to be detected in its second prediction classification result; P_i^* is the preset value corresponding to the expected classification result of the ith predicted ROI; T_i is the second prediction offset of the ith predicted ROI; and T_i^* is the expected offset of the ith predicted ROI. The classification loss function and the regression loss function in the second loss function are the same as those in the first loss function, except that the values of the input parameters are different. The mask generation branch uses an FCN network to segment each predicted ROI and outputs binary masks of dimension K × m × m, i.e. K categories of m × m masks, where m is the size of the feature map of the ROI and K is the number of categories to be recognized; for example, if the constructed network aims to recognize red blood cells, platelets, white blood cells and background at the same time, K = 4, whereas if it aims to recognize only red blood cells and treats platelets and white blood cells as background, K = 2. According to the prediction classification result of each predicted ROI given by the classification branch, an average binary cross-entropy loss defined with a pixel-level sigmoid is computed on the m × m binary mask corresponding to that prediction classification result, giving the loss value of each predicted ROI, and these finally yield the mask loss value L_mask corresponding to the training sample. For example, if K = 4 and the prediction classification result of a certain predicted ROI is red blood cells, the average binary cross-entropy loss defined with a pixel-level sigmoid is computed on the m × m binary mask belonging to the red blood cell category output by the mask generation branch for that predicted ROI, thereby obtaining its predicted mask loss value.
And after the second loss function is constructed, performing iterative training on the FCN network to be trained by adopting a back propagation mode and a gradient descent algorithm according to the second loss function to obtain the trained FCN network.
In the embodiment, the mask-RCNN network is trained based on a real microscopic image of the blood sample as a training sample, so that the accuracy of identifying the target to be detected of the blood sample to be detected by adopting the finally trained mask-RCNN network in the subsequent actual use process is higher.
Referring to fig. 6, the present invention also provides an automatic blood cell analysis system, including:
the acquisition module 10 is used for acquiring a front focusing microscopic image and a back focusing microscopic image of a sample to be detected;
an input module 20, configured to input the front-side focusing microscopic image and the back-side focusing microscopic image into a trained mask-RCNN model, and output example segmentation maps corresponding to the front-side focusing microscopic image and the back-side focusing microscopic image, respectively, where the example segmentation maps include a classification result and a mask;
and the counting module 30 is used for counting the number of the targets to be detected in the sample to be detected according to the classification result and the mask in the example segmentation image of the front focusing microscopic image and the classification result and the mask in the example segmentation image of the back focusing microscopic image.
Further, the input module 20 includes:
the first input unit 21 is configured to input the front-side focusing microscopic image and the back-side focusing microscopic image into a feature extraction network of a trained mask-RCNN model, and output a corresponding first feature map and a corresponding second feature map;
a second input unit 22, configured to input the first feature map and the second feature map to the RPN network, respectively, so as to obtain a first ROI and a second ROI correspondingly;
a first pooling unit 23 for pooling and pixel-aligning the first ROI with the first feature map to obtain a first ROI feature map;
a second pooling unit 24 for pooling and pixel-aligning the second ROI with the second feature map to obtain a second ROI feature map;
and the first output unit 25 is configured to input the first ROI feature map and the second ROI feature map to the FCN network respectively for frame regression, classification, and mask generation, and output example segmentation maps corresponding to the front-side focused microscope image and the back-side focused microscope image respectively, where the example segmentation maps include a classification result of the target to be detected and a mask.
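For orientation, a minimal sketch of this inference path is given below. It uses torchvision's off-the-shelf Mask R-CNN as a stand-in for the trained mask-RCNN model (the patent's specific backbone, RPN and FCN head are not reproduced), and the class count of 4 (background, red blood cells, white blood cells, platelets) is an assumption taken from the example in the description.

```python
import torch
import torchvision

# Stand-in model: ResNet-50 FPN backbone + RPN + ROI-aligned box/mask heads.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=4)
model.eval()

def segment(front_img: torch.Tensor, back_img: torch.Tensor):
    """front_img / back_img: (3, H, W) tensors with values in [0, 1].
    Returns one prediction dict per image, each containing 'labels'
    (classification results) and 'masks' (per-instance masks)."""
    with torch.no_grad():
        return model([front_img, back_img])
```

Each returned dict plays the role of one example segmentation map: 'labels' corresponds to the classification results and 'masks' to the per-instance masks.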
Further, the statistic module 30 includes:
the first counting unit 31 is configured to count the number M1 of masks whose classification results are the objects to be detected in the example segmentation map of the front focusing microscope image and the number M2 of masks whose classification results are the objects to be detected in the example segmentation map of the back focusing microscope image, respectively, according to the classification results in the example segmentation maps corresponding to the front focusing microscope image and the back focusing microscope image;
the calculation unit 32 is configured to perform coordinate alignment on the example segmentation map of the front-side focused microscopic image and the example segmentation map of the back-side focused microscopic image, and calculate an overlapping area ratio of each mask belonging to the target to be detected in the example segmentation map of the front-side focused microscopic image to a mask belonging to the target to be detected at a corresponding position in the example segmentation map of the back-side focused microscopic image;
a second counting unit 33 for counting the number M3 of the overlapping area ratios greater than the preset threshold value from among the calculated overlapping area ratios;
a first obtaining unit 34, configured to obtain the number M of the targets to be detected in the sample to be detected according to M1, M2 and M3, where M = M1 + M2 - M3.
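A minimal sketch of this counting logic follows. It assumes the masks are boolean NumPy arrays already coordinate-aligned to a common frame, and it reads "overlapping area ratio" as intersection area divided by the area of the front-side mask, which is one plausible interpretation rather than the patent's exact definition; the helper name count_targets is introduced here for illustration.

```python
import numpy as np

def count_targets(front_masks, back_masks, threshold=0.5):
    """front_masks / back_masks: lists of (H, W) boolean masks classified as the target."""
    m1, m2 = len(front_masks), len(back_masks)
    m3 = 0
    for fm in front_masks:
        # Overlap of this front-side mask with every back-side mask; the best match
        # is taken as the mask "at the corresponding position".
        best = 0.0
        for bm in back_masks:
            inter = np.logical_and(fm, bm).sum()
            ratio = inter / max(fm.sum(), 1)
            best = max(best, float(ratio))
        if best > threshold:
            m3 += 1          # same cell visible in both images, so avoid counting it twice
    return m1 + m2 - m3      # M = M1 + M2 - M3
```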
Further, the system further comprises:
The second obtaining module 40 is configured to obtain a focused microscopic image of the blood sample as a training sample, label the training sample, and obtain an expected mask, an expected classification result, and an expected frame;
and the training module 50 is configured to train the RPN network and the FCN network in the mask-RCNN model to be trained according to the obtained expected mask, the expected classification result, the expected frame, and the training sample, so as to obtain a trained mask-RCNN model.
Further, the training module 50 includes:
the second obtaining unit 51 is configured to input the training sample into a feature extraction network of the mask-RCNN model to be trained, and obtain a feature map corresponding to the training sample;
a third obtaining unit 52, configured to input the feature map of the training sample into an RPN network to be trained to obtain a training frame, and obtain an expected offset according to the training frame and an expected frame;
a second output unit 53, configured to input the training frame to the classification layer of the RPN network to be trained for classification training and input the regression layer for regression training, and output a first prediction classification result and a first prediction offset of the training frame respectively;
a first constructing unit 54, configured to construct a first loss function according to the first prediction offset, the first prediction classification result, the expected classification result, and the expected offset of the training frame, where the first loss function is:
L_1 = (1/N_{cls}) \sum_{i=1}^{N_1} L_{cls}(p_i, p_i^*) + \lambda (1/N_{reg}) \sum_{i=1}^{N_1} p_i^* L_{reg}(t_i, t_i^*)

wherein L_1 is the first loss function value; N_cls, λ and N_reg are preset coefficients; N_1 is the number of training frames; L_cls(p_i, p_i^*) is the classification log-loss function value of the ith training frame; L_reg(t_i, t_i^*) is the regression loss function value of the ith training frame; p_i is the probability value of belonging to the target to be detected in the first prediction classification result of the ith training frame; p_i^* is the preset value corresponding to the expected classification result of the ith training frame; t_i is the first prediction offset of the ith training frame; and t_i^* is the expected offset of the ith training frame;
the first training unit 55 is configured to perform iterative training on the RPN network to be trained in a back propagation manner according to the first loss function, so as to obtain a trained RPN network;
a fourth obtaining unit 56, configured to input the feature map of the training sample into the trained RPN network, so as to obtain a predicted ROI;
a third pooling unit 57, configured to pool and pixel-align the predicted ROI with the feature map of the training sample to obtain a feature map of the predicted ROI;
a third output unit 58, configured to input the feature map of the predicted ROI into the FCN network to be trained, and output a second prediction classification result, a second prediction offset, and a prediction mask of the predicted ROI;
a second constructing unit 59, configured to construct a second loss function according to the second prediction classification result, the second prediction offset, the prediction mask, the expected classification result, the expected offset, and the expected mask of the prediction ROI, where the second loss function is:
L_2 = (1/N_{cls}) \sum_{i=1}^{N_2} L_{cls}(P_i, P_i^*) + \lambda (1/N_{reg}) \sum_{i=1}^{N_2} P_i^* L_{reg}(T_i, T_i^*) + L_{mask}

wherein L_2 is the second loss function value; N_2 is the number of predicted ROIs; N_cls, λ and N_reg are preset coefficients; L_cls(P_i, P_i^*) is the classification log-loss function value of the ith predicted ROI; L_reg(T_i, T_i^*) is the regression loss function value of the ith predicted ROI; L_mask is the mask loss function; P_i is the probability value of belonging to the target to be detected in the second prediction classification result of the ith predicted ROI; P_i^* is the preset value corresponding to the expected classification result of the ith predicted ROI; T_i is the second prediction offset of the ith predicted ROI; and T_i^* is the expected offset of the ith predicted ROI (a code sketch of both loss functions is given below, after the module description);
and a second training unit 60, configured to perform iterative training on the FCN network to be trained by using a back propagation method and a gradient descent algorithm according to the second loss function, so as to obtain a trained FCN network.
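To make the two loss functions above concrete, a sketch is given below. It follows the standard Faster R-CNN / Mask R-CNN form that the listed parameters correspond to, with smooth-L1 assumed for L_reg and binary / softmax cross-entropy assumed for L_cls; it is not the patentee's exact implementation. PyTorch is assumed, and the names first_loss and second_loss are introduced here.

```python
import torch
import torch.nn.functional as F

def first_loss(p, p_star, t, t_star, n_cls, n_reg, lam=1.0):
    """RPN loss over N1 training frames.
    p: (N1,) predicted target probabilities; p_star: (N1,) expected labels (0/1);
    t, t_star: (N1, 4) predicted / expected offsets; n_cls, n_reg, lam: preset coefficients."""
    cls = F.binary_cross_entropy(p, p_star.float(), reduction="sum") / n_cls
    # Smooth-L1 is assumed for the regression term; only positive frames (p_star = 1) contribute.
    reg = (p_star.float().unsqueeze(1)
           * F.smooth_l1_loss(t, t_star, reduction="none")).sum() / n_reg
    return cls + lam * reg

def second_loss(cls_logits, labels, t, t_star, mask_loss, n_cls, n_reg, lam=1.0):
    """Head loss over N2 predicted ROIs: classification + regression + mask terms.
    cls_logits: (N2, K) class scores; labels: (N2,) expected class indices (0 = background);
    mask_loss: scalar L_mask computed as in roi_mask_loss above."""
    cls = F.cross_entropy(cls_logits, labels, reduction="sum") / n_cls
    fg = (labels > 0).float().unsqueeze(1)  # assumption: only foreground ROIs are regressed
    reg = (fg * F.smooth_l1_loss(t, t_star, reduction="none")).sum() / n_reg
    return cls + lam * reg + mask_loss
```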
The invention also proposes a computer-readable storage medium on which a computer program is stored. The computer readable storage medium may be the Memory 200 of the blood cell analyzer of fig. 1, or may be at least one of a ROM (Read-Only Memory)/RAM (Random Access Memory), a magnetic disk, and an optical disk, and includes information for enabling the blood cell analyzer to perform the method according to the embodiments of the present invention.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (9)

1. An automatic blood cell analysis method, comprising the steps of:
acquiring a front focusing microscopic image and a back focusing microscopic image of a sample to be detected;
respectively inputting the front side focusing microscopic image and the back side focusing microscopic image into a trained mask-RCNN model, and outputting example segmentation images respectively corresponding to the front side focusing microscopic image and the back side focusing microscopic image, wherein the example segmentation images comprise classification results and masks;
counting the number of the targets to be detected in the sample to be detected according to the classification result and the mask in the example segmentation image of the front focusing microscopic image and the classification result and the mask in the example segmentation image of the back focusing microscopic image;
the step of counting the number of the targets to be detected in the sample to be detected according to the classification result and the mask in the example segmentation image of the front focusing microscopic image and the classification result and the mask in the example segmentation image of the back focusing microscopic image comprises the following steps:
respectively counting the number M1 of masks of which the classification result is the target to be detected in the example segmentation image of the front focusing microscopic image and the number M2 of masks of which the classification result is the target to be detected in the example segmentation image of the back focusing microscopic image according to the classification results in the example segmentation images corresponding to the front focusing microscopic image and the back focusing microscopic image;
carrying out coordinate alignment on the example segmentation drawing of the front focusing microscopic image and the example segmentation drawing of the back focusing microscopic image, and calculating the overlapping area ratio of each mask belonging to the target to be detected in the example segmentation drawing of the front focusing microscopic image to the mask belonging to the target to be detected at the corresponding position in the example segmentation drawing of the back focusing microscopic image;
counting the number M3 of the overlapping area ratios larger than a preset threshold value from the calculated overlapping area ratios;
obtaining the number M of targets to be detected in the sample to be detected according to M1, M2 and M3, wherein M = M1 + M2 - M3.
2. The method according to claim 1, wherein the step of respectively inputting the front-side focused microscopic image and the back-side focused microscopic image into a trained mask-RCNN model and outputting instance segmentation maps respectively corresponding to the front-side focused microscopic image and the back-side focused microscopic image, the instance segmentation maps including classification results and masks, comprises:
respectively inputting the front side focusing microscopic image and the back side focusing microscopic image into a feature extraction network of a trained mask-RCNN model, and outputting a corresponding first feature map and a corresponding second feature map;
respectively inputting the first feature map and the second feature map into an RPN network, and correspondingly obtaining a first ROI and a second ROI;
pooling and pixel aligning the first ROI with a first feature map to obtain a first ROI feature map;
pooling and pixel aligning the second ROI with a second feature map to obtain a second ROI feature map;
and respectively inputting the first ROI feature map and the second ROI feature map into an FCN (fully convolutional network) to perform frame regression, classification and mask generation, and outputting example segmentation maps respectively corresponding to the front focusing microscopic image and the back focusing microscopic image, wherein the example segmentation maps comprise a classification result and a mask of the target to be detected.
3. The automatic blood cell analysis method according to any one of claims 1 to 2, wherein the step of obtaining the front side focused microscope image and the back side focused microscope image of the sample to be tested is preceded by:
acquiring a focused microscopic image of a blood sample as a training sample, and labeling the training sample to obtain an expected mask, an expected classification result and an expected frame;
and training the RPN and the FCN in the mask-RCNN model to be trained according to the obtained expected mask, the expected classification result, the expected frame and the training sample to obtain the trained mask-RCNN model.
4. The method according to claim 3, wherein the step of training the RPN network and the FCN network in the mask-RCNN model to be trained according to the obtained expected mask, the expected classification result, the expected frame and the training sample to obtain the trained mask-RCNN model comprises:
inputting the training sample into a feature extraction network of a mask-RCNN model to be trained to obtain a feature map corresponding to the training sample;
inputting the feature map of the training sample into an RPN network to be trained to obtain a training frame, and obtaining an expected offset according to the training frame and an expected frame;
inputting the training frame into a classification layer of the RPN network to be trained for classification training, inputting the training frame into a regression layer for regression training, and respectively outputting a first prediction classification result and a first prediction offset of the training frame;
constructing a first loss function according to the first prediction offset, the first prediction classification result, the expected classification result and the expected offset of the training frame;
performing iterative training on the RPN to be trained in a reverse propagation mode according to the first loss function to obtain a trained RPN;
inputting the feature map of the training sample into the trained RPN network to obtain a predicted ROI;
pooling and pixel alignment are carried out on the predicted ROI and the training samples, and a predicted ROI feature map is obtained;
inputting the characteristic map of the predicted ROI into the FCN to be trained, and outputting a second predicted classification result, a second predicted offset and a predicted mask of the predicted ROI;
constructing a second loss function according to a second prediction classification result, a second prediction offset, a prediction mask, an expected classification result, an expected offset and an expected mask of the prediction ROI;
and according to the second loss function, performing iterative training on the FCN network to be trained by adopting a back propagation mode and a gradient descent algorithm to obtain the trained FCN network.
5. The method for automatically analyzing blood cells according to claim 4, wherein the first loss function is:
L_1 = (1/N_{cls}) \sum_{i=1}^{N_1} L_{cls}(p_i, p_i^*) + \lambda (1/N_{reg}) \sum_{i=1}^{N_1} p_i^* L_{reg}(t_i, t_i^*)

wherein L_1 is the first loss function value; N_cls, λ and N_reg are preset coefficients; N_1 is the number of training frames; L_cls(p_i, p_i^*) is the classification log-loss function value of the ith training frame; L_reg(t_i, t_i^*) is the regression loss function value of the ith training frame; p_i is the probability value of belonging to the target to be detected in the first prediction classification result of the ith training frame; p_i^* is the preset value corresponding to the expected classification result of the ith training frame; t_i is the first prediction offset of the ith training frame; and t_i^* is the expected offset of the ith training frame.
6. The method for automatically analyzing blood cells according to claim 4, wherein the second loss function is:
L_2 = (1/N_{cls}) \sum_{i=1}^{N_2} L_{cls}(P_i, P_i^*) + \lambda (1/N_{reg}) \sum_{i=1}^{N_2} P_i^* L_{reg}(T_i, T_i^*) + L_{mask}

wherein L_2 is the second loss function value; N_2 is the number of predicted ROIs; N_cls, λ and N_reg are preset coefficients; L_cls(P_i, P_i^*) is the classification log-loss function value of the ith predicted ROI; L_reg(T_i, T_i^*) is the regression loss function value of the ith predicted ROI; L_mask is the mask loss function; P_i is the probability value of belonging to the target to be detected in the second prediction classification result of the ith predicted ROI; P_i^* is the preset value corresponding to the expected classification result of the ith predicted ROI; T_i is the second prediction offset of the ith predicted ROI; and T_i^* is the expected offset of the ith predicted ROI.
7. An automated blood cell analysis system, comprising:
the acquisition module is used for acquiring a front focusing microscopic image and a back focusing microscopic image of a sample to be detected;
the input module is used for respectively inputting the front focusing microscopic image and the back focusing microscopic image into a trained mask-RCNN model and outputting example segmentation images respectively corresponding to the front focusing microscopic image and the back focusing microscopic image, wherein the example segmentation images comprise classification results and masks;
the counting module is used for counting the number of the targets to be detected in the sample to be detected according to the classification result and the mask in the example segmentation image of the front focusing microscopic image and the classification result and the mask in the example segmentation image of the back focusing microscopic image; and
respectively counting the number M1 of masks of which the classification result is the target to be detected in the example segmentation image of the front focusing microscopic image and the number M2 of masks of which the classification result is the target to be detected in the example segmentation image of the back focusing microscopic image according to the classification results in the example segmentation images corresponding to the front focusing microscopic image and the back focusing microscopic image;
carrying out coordinate alignment on the example segmentation drawing of the front focusing microscopic image and the example segmentation drawing of the back focusing microscopic image, and calculating the overlapping area ratio of each mask belonging to the target to be detected in the example segmentation drawing of the front focusing microscopic image to the mask belonging to the target to be detected at the corresponding position in the example segmentation drawing of the back focusing microscopic image;
counting the number M3 of the overlapping area ratios larger than a preset threshold value from the calculated overlapping area ratios;
obtaining the number M of targets to be detected in the sample to be detected according to M1, M2 and M3, wherein M = M1 + M2 - M3.
8. A blood cell analyzer, characterized in that it comprises a memory, a processor and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the automatic blood cell analysis method according to any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that a computer program is stored thereon, which, when being executed by a processor, carries out the steps of the automatic blood cell analysis method according to any one of claims 1 to 6.
CN201911099037.3A 2019-11-11 2019-11-11 Blood cell automatic analysis method, system, blood cell analyzer and storage medium Active CN110796117B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911099037.3A CN110796117B (en) 2019-11-11 2019-11-11 Blood cell automatic analysis method, system, blood cell analyzer and storage medium

Publications (2)

Publication Number Publication Date
CN110796117A CN110796117A (en) 2020-02-14
CN110796117B true CN110796117B (en) 2022-04-15

Family

ID=69443968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911099037.3A Active CN110796117B (en) 2019-11-11 2019-11-11 Blood cell automatic analysis method, system, blood cell analyzer and storage medium

Country Status (1)

Country Link
CN (1) CN110796117B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112614104B (en) * 2020-12-22 2023-07-14 湖南伊鸿健康科技有限公司 Segmentation counting method and terminal for red blood cell overlapping

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108844906A (en) * 2018-06-04 2018-11-20 江苏柯伦迪医疗技术有限公司 A kind of blood cell component analyzer and method
CN109800631A (en) * 2018-12-07 2019-05-24 天津大学 Fluorescence-encoded micro-beads image detecting method based on masked areas convolutional neural networks
CN110032985A (en) * 2019-04-22 2019-07-19 清华大学深圳研究生院 A kind of automatic detection recognition method of haemocyte

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10303979B2 (en) * 2016-11-16 2019-05-28 Phenomic Ai Inc. System and method for classifying and segmenting microscopy images with deep multiple instance learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Utilizing Mask R-CNN for Detection and Segmentation of Oral Diseases; Rajaram Anantharaman et al.; 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM); 2019-01-24; pp. 2197-2204 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant