WO2021104410A1 - Blood smear full-field intelligent analysis method, and methods for constructing a blood cell segmentation model and a recognition model - Google Patents

Blood smear full-field intelligent analysis method, and methods for constructing a blood cell segmentation model and a recognition model

Info

Publication number
WO2021104410A1
WO2021104410A1 · PCT/CN2020/132018 · CN2020132018W
Authority
WO
WIPO (PCT)
Prior art keywords
blood cell
image
blood
convolution
recognition model
Prior art date
Application number
PCT/CN2020/132018
Other languages
English (en)
French (fr)
Inventor
李柏蕤
连荷清
方喆君
吕东琦
Original Assignee
北京小蝇科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201911186777.0A external-priority patent/CN110647874B/zh
Priority claimed from CN201911186888.1A external-priority patent/CN110647875B/zh
Priority claimed from CN201911186889.6A external-priority patent/CN110647876B/zh
Application filed by 北京小蝇科技有限责任公司 filed Critical 北京小蝇科技有限责任公司
Priority to US17/762,780 priority Critical patent/US20220343623A1/en
Publication of WO2021104410A1 publication Critical patent/WO2021104410A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06T 5/60
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V 20/695 Preprocessing, e.g. image segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V 20/698 Matching; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10056 Microscopic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30024 Cell structures in vitro; Tissue sections in vitro

Definitions

  • the embodiment of the present invention relates to the technical field of blood cell analysis, and particularly relates to a blood cell segmentation model and a method for constructing a recognition model of a blood smear full-field intelligent analysis method.
  • The current blood test workflow in hospitals is: blood sample → hematology analyzer → smear pushing and staining machine → manual microscopy. The whole process takes about 60 minutes.
  • The electrical impedance method is a physical method: the blood is diluted in a fixed proportion and drawn through a microporous tube of the instrument under negative pressure. Because blood cells are poor conductors compared with the diluent, each blood cell displaces an equal volume of diluent as it passes through the micropore, forming a momentary resistance in the circuit and changing the voltage; the resulting pulse signal is amplified, discriminated, accumulated, and recorded. Analyzers based on this principle often suffer varying degrees of micropore blockage, causing large fluctuations in blood cell classification and counting results.
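The pulse-counting step of the impedance principle can be sketched in a few lines. This is an invented illustration, not instrument firmware: the voltage trace, pulse shape, and threshold below are all synthetic.

```python
import numpy as np

def count_pulses(voltage, threshold):
    """Count rising edges where the voltage trace crosses the threshold;
    each crossing corresponds to one cell transiting the micropore."""
    above = voltage > threshold
    rising = above[1:] & ~above[:-1]
    return int(np.sum(rising))

# Synthetic trace: baseline noise plus three simulated cell pulses.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.01, 300)
for start in (50, 150, 250):
    trace[start:start + 5] += 1.0
print(count_pulses(trace, 0.5))  # prints 3
```

A real analyzer additionally discriminates pulse amplitude to estimate cell volume; here only the counting step is shown.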
  • In the laser scattering method, the blood is diluted in a fixed proportion to form a very thin stream that passes through a laser beam; each blood cell irradiated by the laser scatters light that is received by a photomultiplier tube. Forward-angle scatter relates to cell size, while side-angle (or high-angle) scatter relates to the cell's internal structure and granularity. The number of cells equals the number of scattering pulses produced as cells cross the beam. The detection signals are amplified, screened, and processed by computer to obtain, for each type of blood cell, the average count and volume, the coefficient of variation, the percentage of whole blood volume, and a volume-distribution histogram.
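The computer-processing step above reduces the scatter pulses to summary statistics. As a hedged sketch, the amplitudes below are invented toy values (treated as proportional to cell volume), not real cytometer data:

```python
import numpy as np

# Hypothetical pulse amplitudes, taken as cell volumes in femtolitres.
amplitudes = np.array([88.0, 92.0, 90.0, 95.0, 85.0, 91.0])

mean_volume = amplitudes.mean()                         # average volume
cv_percent = 100.0 * amplitudes.std(ddof=1) / mean_volume  # coefficient of variation
hist, edges = np.histogram(amplitudes, bins=5)          # volume-distribution histogram

print(f"mean volume = {mean_volume:.1f} fL, CV = {cv_percent:.1f}%")
print("volume histogram:", hist.tolist())
```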
  • The efficiency of manual re-examination is too low, and subjective factors loom too large in the recognition process, making results highly susceptible to personal experience and human error.
  • First, analysis and counting of blood cells over the full field of view of the blood smear has not been realized, and the data samples are insufficient, so the results are one-sided rather than comprehensive and accurate; second, the counting and classification algorithms are relatively traditional, morphological analysis is poor, and recognition accuracy is low; third, the subjectivity of the physician performing manual microscopy cannot be controlled, and the re-examination rate is high; fourth, the process takes longer and efficiency is low.
  • the purpose of the embodiment of the present invention is to provide a blood cell segmentation model and a method for constructing a recognition model of a blood smear full-field intelligent analysis method, which is used for blood cell morphology recognition and quantitative analysis in the field of blood routine testing.
  • A blood smear scanner or a microphotographic system performs full-field photography on multiple blood smears to obtain multiple blood smear images, thereby establishing a blood smear image group.
  • the training data set and the verification data set are prepared for the three parts of image restoration, image segmentation and image recognition.
  • the image recognition data set also needs the support of the image segmentation model.
  • The blood smear to be tested likewise undergoes full-field photography with a blood smear scanner or microphotographic system to establish a blood smear scan image, which is then processed by the image restoration model to obtain a clear restored image; image segmentation then yields single blood cell images, and finally blood cell image recognition produces the cell classification result and the report is output.
  • A blood smear full-field intelligent analysis method comprises the steps of: collecting multiple original blood smear single-field images, establishing an original blood smear single-field image group, and establishing a blood smear full-field image based on the multiple original blood smear single-field images;
  • from the multiple original blood smear single-field images, selecting those with white blood cells and no overlap between white blood cells and red blood cells to obtain a second training set and a second validation set; constructing an image segmentation model based on the second training set and the second validation set; and processing the selected single-field images with the image segmentation model to obtain multiple segmented single blood cell images;
  • restoring the blood smear full-field image with an image restoration model to obtain a restored full-field image; processing the restored full-field image with the image segmentation model to obtain multiple single blood cell images; and processing the multiple single blood cell images with the image recognition model to obtain the blood cell classification result.
  • The step of selecting, from the multiple original blood smear single-field images, those single-field images with white blood cells and no overlap between white blood cells and red blood cells to obtain a second training set and a second validation set, constructing an image segmentation model based on the second training set and second validation set, and processing the selected images with the image segmentation model to obtain multiple segmented single blood cell images, includes:
  • Step S310: the input layer of the second convolutional neural network passes an original blood smear single-field image from the second training set to the convolution blocks of the second convolutional neural network;
  • Step S320: set the number of convolution blocks in the encoding structure of the second convolutional neural network, the number of convolution layers and of pooling layers in each convolution block, and the number and size of the convolution kernels in each convolution layer and in each pooling layer, and extract the first key feature;
  • Step S330: set the number of decoding-structure convolution blocks equal to the number of encoding-structure convolution blocks; keep the number and size of convolution kernels in each decoding convolution layer, the number of pooling layers in each block, and the number of kernels in each pooling layer consistent with the corresponding encoding block; and obtain the decoded data based on the first key feature;
  • Step S340: perform a further convolution on the decoded data, with a kernel size of 1 and the number of kernels set to the number of classes to be segmented;
  • Step S350: the fully connected layer of each convolution block of the second convolutional neural network fully connects the re-convolved decoded data to the neurons of the output layer of the second convolutional neural network, and the output layer outputs the predicted segmentation result;
  • Step S360: repeat steps S310 to S350, training with the second training set and obtaining the image segmentation model through iterative parameter tuning.
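The shape bookkeeping implied by steps S310–S350 — kernel counts doubling per encoding block and halving per decoding block, 2×2 pooling halving the spatial size, and a final 1×1 convolution with as many kernels as segmentation classes — can be traced as follows. This is a rough sketch, not the patent's actual network; the block count, base kernel number, and class count are assumptions:

```python
def segmentation_shapes(h, w, base_kernels=64, blocks=4, num_classes=3):
    """Trace (stage, height, width, kernels) through an encoder-decoder of the
    kind described in steps S310-S350: each encoding block doubles the kernel
    count and halves the spatial size; each decoding block mirrors it; the
    final 1x1 convolution has `num_classes` kernels (step S340)."""
    shapes = []
    k = base_kernels
    for _ in range(blocks):                    # encoding structure (step S320)
        shapes.append(("enc", h, w, k))
        h, w, k = h // 2, w // 2, k * 2
    for _ in range(blocks):                    # decoding structure (step S330)
        h, w, k = h * 2, w * 2, k // 2
        shapes.append(("dec", h, w, k))
    shapes.append(("out", h, w, num_classes))  # 1x1 conv head (step S340)
    return shapes

for s in segmentation_shapes(256, 256):
    print(s)
```

The symmetry of the two loops mirrors the requirement in step S330 that each decoding block matches its corresponding encoding block.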
  • the step of obtaining a third training set and a third verification set based on the multiple segmented single blood cell images, and constructing an image recognition model based on the third training set and the third verification set includes:
  • Step S410: the input layer of the third convolutional neural network passes a segmented single blood cell image from the third training set to the convolution blocks of the third convolutional neural network;
  • Step S420: set the number of convolution blocks of the third convolutional neural network, the number of convolution layers and of pooling layers in each convolution block, and the number and size of the convolution kernels in each convolution layer and in each pooling layer, and extract the second key feature;
  • Step S430: the fully connected layer of each convolution block of the third convolutional neural network fully connects the second key feature to the neurons of the output layer of the third convolutional neural network, and the output layer outputs the predicted recognition result;
  • Step S440 Repeat steps S410 to S430, use the third training set for training, and obtain an image recognition model through iterative parameter adjustment.
  • In the encoding structure of the second convolutional neural network, the convolution kernels of every convolution layer have the same size, and each convolution layer in a given convolution block has twice as many kernels as the convolution layers of the preceding block; every encoding block has the same number of pooling layers, and every pooling layer has the same number and size of kernels.
  • In the decoding structure of the second convolutional neural network, the convolution kernels of every convolution layer have the same size, and each convolution layer in a given convolution block has half as many kernels as the convolution layers of the preceding block; every decoding block has the same number of pooling layers, and every pooling layer has the same number and size of kernels.
  • The invention provides a blood smear full-field intelligent analysis method: collect multiple original blood smear single-field images, establish an original single-field image group, and build a blood smear full-field image from them; construct an image restoration model from the first training set and first validation set; construct an image segmentation model from the second training set and second validation set; obtain the third training set and third validation set from the multiple segmented single blood cell images and construct the image recognition model; and finally obtain the blood cell classification result.
  • The blood smear full-field intelligent analysis method allows artificial intelligence algorithms to be selected and openly updated for different application fields; full-field blood cell analysis based on AI algorithms greatly reduces interference from human factors, improves the objectivity of test results, and achieves high accuracy in blood cell analysis and classification; any image input that meets the requirements can be recognized and analyzed, with robustness and accuracy exceeding traditional image recognition algorithms; and it reshapes the existing medical testing workflow, greatly shortening the overall turnaround time.
  • the embodiments of the present invention provide a blood cell recognition segmentation model and a method for constructing the recognition model.
  • By constructing the blood cell segmentation model and the recognition model, accurate scanning and analysis of blood smears can be realized, and the comprehensiveness and accuracy of blood cell recognition improved.
  • To construct the blood cell segmentation model and the recognition model: select samples from the initial blood cell image library to form a training set and a validation set, and train the blood cell segmentation model until it meets the single-blood-cell segmentation accuracy requirement; then select samples from the labeled blood cell image library to form a training set and a validation set, and train the blood cell recognition model until it meets the recognition accuracy requirement.
  • The splicing method is one of: method one, extract feature points of pairs of physically adjacent single-field images, perform image feature matching, and finally assemble the complete full-field image; or method two, determine the size of the overlap region of two adjacent single-field images, compute the overlapped portion as a weighted average, and finally obtain the full-field image.
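Splicing method two can be sketched for two horizontally adjacent fields as follows. The overlap width, toy image values, and linear weight ramp are assumptions for illustration; the patent only specifies a weighted average over the overlap:

```python
import numpy as np

def splice_horizontal(left, right, overlap):
    """Blend the last `overlap` columns of `left` with the first `overlap`
    columns of `right` using linearly varying weights, then concatenate
    into one spliced image (splicing "method two")."""
    w = np.linspace(1.0, 0.0, overlap)  # weight for the left image's pixels
    blend = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :-overlap], blend, right[:, overlap:]])

a = np.full((4, 6), 10.0)   # toy "field 1"
b = np.full((4, 6), 20.0)   # toy "field 2"
merged = splice_horizontal(a, b, overlap=2)
print(merged.shape)  # (4, 10): 6 + 6 columns minus the 2 shared ones
```

Method one (feature-point matching) would instead estimate the overlap by registering keypoints before blending.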
  • the method for manual labeling is to label the types of white blood cells and/or red blood cells and image clarity on a computer or mobile phone, and cross-validate the labeling results.
  • the blood cell recognition model is constructed using a feedforward neural network with a deep structure.
  • The deep feedforward neural network uses convolutional layers to extract feature vectors for the various cell types, maximum pooling to retain the required feature vectors, residual blocks for residual learning, and two fully connected layers to perform classification and output category information;
  • the input of the residual block passes through a 3×3 convolution, is activated by a first ReLU activation function, passes through another 3×3 convolution, is summed with the skip-connected input, and is finally activated by a second ReLU activation function before being output.
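A single-channel toy version of the residual block just described (3×3 convolution → first ReLU → 3×3 convolution → add the skip-connected input → second ReLU), written in plain NumPy rather than a deep learning framework; real blocks operate on multi-channel tensors with learned kernels:

```python
import numpy as np

def conv3x3_same(x, kernel):
    """Naive 3x3 'same'-padded convolution on a single-channel 2-D map."""
    padded = np.pad(x, 1)
    out = np.zeros(x.shape)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, k1, k2):
    """conv3x3 -> ReLU -> conv3x3, add the input (skip path), final ReLU."""
    y = relu(conv3x3_same(x, k1))
    y = conv3x3_same(y, k2)
    return relu(y + x)

identity = np.zeros((3, 3)); identity[1, 1] = 1.0  # kernel that passes input through
x = np.ones((5, 5))
print(residual_block(x, identity, identity))  # 2 * ones: x + x, then ReLU
```

With identity kernels the block reduces to `relu(x + x)`, which makes the skip connection easy to see.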
  • The blood cell segmentation model is constructed using normalization, color space conversion, histogram equalization, or deep learning methods.
  • deep learning methods include but are not limited to YOLO, SSD, or DenseBox.
  • Another aspect of the present invention provides a method for blood cell recognition, including:
  • Before the blood cell segmentation model performs image segmentation on the single-field slide scan image, the method also includes determining the segmentation field of view.
  • The field of view includes a specific area with ideal imaging, an important region containing more blood cells, and/or a doctor-specified area.
  • The method also includes manually evaluating the segmentation results of the blood cell segmentation model and the recognition results of the recognition model, and back-propagating the gradient according to the evaluation results to optimize the blood cell segmentation model and the blood cell recognition model.
  • the blood cell recognition segmentation model and the method for constructing the recognition model provided by the embodiment of the present invention have the following advantages:
  • the blood cell segmentation model and recognition model of the present invention are open. According to different application fields, it can realize the selection and open update of artificial intelligence algorithms; it has good versatility and can realize recognition analysis for image input that meets the requirements of the software system;
  • the blood cell segmentation model and the recognition model of the present invention are intelligent, and the software algorithm has self-learning properties. With the increase of high-quality labeled images, the training efficiency of the recognition model is gradually improved, and the accuracy of the software recognition and classification can be continuously optimized.
  • The present invention uses a computer to realize full-field blood cell analysis, avoids losing blood cells at the smear margin, greatly reduces interference from human factors, and improves the objectivity and consistency of the test results.
  • the present invention generates a blood image database based on the full-field image to train the blood cell segmentation model, which ensures the accuracy and comprehensiveness of the sample data, and improves the accuracy of the blood cell segmentation model segmentation.
  • the present invention generates a sample library based on full-field images to avoid missing edge incomplete cells in a single field of view.
  • Because this patent can quickly and accurately locate and identify blood cells, it can ensure analysis accuracy and efficiency for all blood cells in the full-field image (from a few thousand up to 100,000).
  • the training of the blood cell instance segmentation recognition model improves the accuracy and comprehensiveness of the sample data, and improves the accuracy of identification and labeling.
  • the embodiments of the present invention provide an end-to-end blood cell recognition model construction method and application.
  • the blood cell recognition model is trained based on full-field images to realize accurate scanning and analysis of blood smears and improve the comprehensiveness and accuracy of blood cell segmentation and recognition.
  • A data sample set is formed based on the full-field images, artificial intelligence technology is used to train the blood cell recognition model, and the model is optimized through continuous parameter tuning and error analysis to finally form a mature recognition model.
  • the input of the model is a single-field blood smear image, and the output is all cell positions, edges and categories on the image.
  • The invention utilizes a computer to realize full-field blood cell analysis, which greatly reduces interference from human factors and improves the objectivity and consistency of test results.
  • the blood cell recognition model is intelligent, and the software algorithm has self-learning properties. With the increase of high-quality labeled images, the training efficiency of the recognition model is gradually improved, and the accuracy of the software recognition and classification can be continuously optimized. Specifically, it can be achieved through the following technical solutions:
  • the present invention provides an end-to-end blood cell recognition model construction method, including:
  • To construct the blood cell recognition model: select samples from the instance annotation database to form a training set and a validation set, and train the blood cell recognition model until it meets the edge segmentation accuracy and type judgment accuracy requirements.
  • Another aspect of the present invention provides an end-to-end blood cell recognition model construction method, which includes: acquiring multiple single-field images of each blood smear among at least one blood smear, splicing the multiple single-field images of each blood smear into a full-field image, and manually annotating the types and edges of blood cells in each full-field image to form an instance annotation database;
  • to construct the blood cell recognition model: select samples from the instance annotation database to form a training set and a validation set, and train the blood cell recognition model until it meets the edge segmentation accuracy and type judgment accuracy requirements.
  • The splicing methods include: method one, extract feature points of pairs of physically adjacent single-field images, perform image feature matching, and finally assemble the complete full-field image; or method two, determine the size of the overlapping area of two adjacent single-field images, compute the overlapping portion as a weighted average, and finally obtain the full-field image.
  • the method for manually labeling categories is to label the types of white blood cells and/or red blood cells at the control terminal.
  • The edge manual-labeling method is that the labeler collects cell edge information and generates, for each image, a file containing the outline, area, and position information of each single blood cell.
  • the blood cell recognition model adopts a fully-supervised, semi-supervised or unsupervised artificial intelligence algorithm.
  • The blood cell recognition model adopts a fully convolutional neural network with an encoder-decoder structure: the encoder encodes the input image and extracts features, and the decoder decodes the extracted features and restores the image semantics.
  • The blood cell recognition model first performs the encoding operation: it takes a single-field blood smear image as input, applies a double convolution at each layer to extract shallow features, then applies maximum pooling to select the required features, and then convolves again to increase the number of channels;
  • it then performs the decoding operation: a convolution first upsamples the decoding result, a double convolution continues extracting features, a further convolution connects the shallow features to the deep layers, and the last convolution layer outputs a feature map giving the potential regions of the objects to be segmented; a convolution is then applied to the potential regions to extract features, and residual blocks extract potential-region features while propagating the gradient backward to obtain a finer-grained feature map;
  • the finer-grained feature map passes through a fully connected network for regression and target classification, the last fully connected layer outputting the coordinate and category information of each pixel of the object to be detected; the finer-grained feature map is also convolved, the mask of the object to be detected is obtained through the Mask algorithm, and the mask is then fused with the category information from the fully connected layers to obtain the instance segmentation result.
  • The Mask algorithm comprises: obtaining the position and edge information corresponding to the finer-grained feature map; applying fully convolutional network (FCN) processing to decide whether each pixel is a target pixel or a background pixel; applying residual processing to obtain the gradient-propagated result; pooling to obtain the dimension-reduced feature vector; convolving; and finally obtaining the edge information of the blood cell at that position.
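The per-pixel target/background decision in the Mask branch can be sketched as a sigmoid-plus-threshold step. The logits below are invented for illustration; a real FCN head would learn them from the annotated instances:

```python
import numpy as np

def logits_to_mask(logits, threshold=0.5):
    """Map per-pixel FCN scores to a binary mask: sigmoid, then threshold
    (1 = target pixel, 0 = background pixel)."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    return (probs > threshold).astype(np.uint8)

logits = np.array([[3.0, -2.0],
                   [0.5, -4.0]])   # synthetic per-pixel scores
print(logits_to_mask(logits).tolist())  # [[1, 0], [1, 0]]
```

In the full pipeline this binary mask would then be fused with the category output of the fully connected layers to produce the instance segmentation result.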
  • Another aspect of the present invention provides an end-to-end blood cell segmentation and recognition method, including:
  • Process each single-field slide scan image to obtain the position, edge, and category of the blood cells in it; these are marked on each single-field slide scan image and output.
  • The segmentation field of view is determined first and then processed; the field of view includes a specific area with ideal imaging, an important region containing more blood cells, and/or a doctor-specified area.
  • edge and category annotation results of the blood cell recognition model are manually evaluated, and the gradient is transmitted backwards according to the evaluation result to optimize the blood cell recognition model;
  • The blood cell recognition model uses an encoding-decoding architecture to extract the ROI feature map of the blood cell region, a residual network to extract features from the ROI feature map, and a classifier to obtain, from the extracted features, the coordinates and category corresponding to the feature map; the Mask algorithm then obtains the corresponding edge based on the coordinates.
  • the end-to-end blood cell recognition model construction method provided by the embodiment of the present invention has the following advantages:
  • The blood cell recognition model of the present invention follows a neural-network-based artificial intelligence recognition and analysis architecture and information-flow design. It is open: according to different application fields, artificial intelligence algorithms can be selected and openly updated, and any image input that meets the software system's requirements can be recognized and analyzed;
  • the blood cell recognition model of the present invention is intelligent, and the software algorithm has self-learning properties. With the increase of high-quality labeled images, the training efficiency of the recognition model is gradually improved, and the accuracy of the software recognition and classification can be continuously optimized.
  • The present invention uses a computer to realize full-field blood cell analysis, which ensures the comprehensiveness and accuracy of the samples, improves the accuracy of model recognition, greatly reduces interference from human factors, and improves the objectivity and consistency of the test results.
  • the present invention generates a sample library based on the full-field image to avoid missing edge incomplete cells in a single field of view.
  • Because the present invention can quickly and accurately locate and identify blood cells, it can ensure analysis accuracy and efficiency for all blood cells in the full-field image (from a few thousand up to 100,000).
  • the training of the blood cell recognition model ensures the accuracy and comprehensiveness of the sample data, and improves the accuracy of the identification and labeling.
  • the image quality of the single-field image is evaluated, and the image with the clearest cells is selected as the final single-field image of the field, which ensures the quality of the single-field image as the sample.
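The patent does not specify how image clarity is evaluated; one common focus measure that could serve as the "clearest cells" criterion is the variance of the Laplacian response, shown here as an assumed illustration:

```python
import numpy as np

def laplacian_variance(img):
    """Focus score: variance of a 4-neighbour Laplacian over interior pixels.
    Sharper images have stronger local contrast and thus a higher score."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def sharpest(images):
    """Index of the candidate image with the highest focus score."""
    return max(range(len(images)), key=lambda i: laplacian_variance(images[i]))

blurry = np.full((8, 8), 0.5)                                # flat: no detail
sharp = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)   # checkerboard
print(sharpest([blurry, sharp]))  # 1
```

For each field of view, the capture with the maximum score would be kept as the final single-field image.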
  • the present invention only needs to input a single-field blood smear image to output the recognition result, realizes an end-to-end design, and is convenient for users to operate.
  • Figure 1 is a flow chart of the blood smear full-field intelligent analysis method of the present invention
  • Fig. 2 is a schematic diagram of the full-field intelligent analysis method for blood smears of the present invention
  • FIG. 3 is a schematic diagram of blood cell labeling in the image segmentation method of the present invention.
  • Figure 4 is a schematic diagram of the image segmentation model of the present invention.
  • Figure 5 is a schematic diagram of the image recognition model of the present invention.
  • Figure 6 is a diagram of the image recognition result of the present invention.
  • Fig. 7 is a flow chart of constructing a blood cell recognition model based on deep learning of the present invention.
  • FIG. 8 is a flowchart of blood cell segmentation, recognition model training and work flow in an embodiment
  • Figure 9 is a blood cell recognition model in an embodiment
  • FIG. 10 is a detailed structure diagram in an embodiment of a residual block
  • Fig. 11(a) is a schematic diagram of a first single-field image
  • Fig. 11(b) is a schematic diagram of a second single-field image
  • Fig. 11(c) is a schematic diagram of a spliced image of the first and second single-field images
  • Figure 12 is an identification diagram of a single-field blood smear blood cell identification model in an embodiment
  • FIG. 13 is a schematic diagram of the recognition result of the embodiment in FIG. 12;
  • Figure 14 is a flowchart of the construction of a blood cell recognition model
  • Figure 15 is a flow chart of blood cell recognition model training
  • Figure 16 is a schematic diagram of a blood cell recognition model in an embodiment
  • Figure 17 is a schematic diagram of edge information labeling in an embodiment
  • Figure 18 is a schematic diagram of category labeling in an embodiment
  • Figure 19 is a single-field blood smear image
  • Fig. 20 is an image marked in Fig. 19.
  • the present invention provides a full-field intelligent analysis method for blood smears, which includes the following steps, as shown in Figs. 1 and 2:
  • Step S100 Collect multiple original blood smear single-field images, establish an original blood smear single-field image group, and establish a blood smear full-field image based on the multiple original blood smear single-field images.
  • blood samples are collected and made into blood smears.
  • a blood smear scanner based on automated technology or a manually adjusted microphotographic system is used to take full-field blood smear photos.
  • there are two full-field imaging processes: one is an image stitching method based on feature comparison, and the other is a motion shooting method based on blur adaptation.
  • the image stitching method based on feature comparison is to synthesize multiple single-field images into a full-field image. In this process, it is necessary to use feature comparison and pattern matching to identify mechanical errors and photographing errors, and to register and stitch adjacent single-field images.
  • the motion shooting method based on blur adaptation abandons the traditional photomicrography scheme of focusing first and then taking pictures. Instead, multiple pictures are taken during a uniform motion along the focal-length direction, and a weighted synthesis algorithm based on motion compensation then combines the multiple single-field shots into a clear full-field image.
  • Step S200 Obtain a first training set and a first verification set based on the multiple original blood smear single-field images, and construct an image restoration model based on the first training set and the first verification set.
  • an image restoration model based on deep convolutional neural networks is constructed.
  • the input of the model is a low-quality image (hereinafter referred to as a degraded image), and the output is a high-quality image after denoising, deblurring, and sharpening.
  • a degradation model is trained to obtain specific degradation parameters, and a restoration model is then established by removing noise and other degradations, so as to restore the image.
  • the restoration model establishment process is as follows:
  • the A set contains all the low-quality images (hereinafter referred to as degraded images) of the multiple blood smear single-field images, and the B set contains all the corresponding high-definition images. The degraded images in the A set and the high-definition images in the B set have a many-to-one relationship, that is, one high-definition image in the B set corresponds to multiple degraded images in the A set.
  • method 1: reconstruction with prior knowledge.
  • g(x,y) = h(x,y)*f(x,y) + η(x,y), where f(x,y) is the original image, h(x,y) is the degradation function, η(x,y) is additive noise, and the "*" in the formula means convolution;
  • the degradation function is estimated through observation, experience, and modeling.
  • the noise of the camera mainly comes from the image acquisition process and the transmission process, so the degradation function is constructed from the spatial and frequency domain of the noise.
  • Common noise types such as Gaussian noise, Rayleigh noise, and Gamma noise are removed with filters such as the mean filter, order-statistic filter, adaptive filter, band-stop filter, band-pass filter, notch filter, notch band-pass filter, optimum notch filter, inverse filter, and Wiener filter.
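  • As an illustrative sketch (not part of the patent text), the degradation model g(x,y) = h(x,y)*f(x,y) + η(x,y) and the simplest of the restoration filters listed above, the mean filter, can be demonstrated as follows; taking h as the identity and η as Gaussian noise are simplifying assumptions:

```python
import numpy as np

def degrade(f, noise_std=10.0, seed=0):
    """Simulate g = h*f + eta with h taken as the identity and
    eta as additive Gaussian noise (illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    return f + rng.normal(0.0, noise_std, f.shape)

def mean_filter(g, k=3):
    """k x k arithmetic mean filter, one of the restoration filters above."""
    pad = k // 2
    padded = np.pad(g, pad, mode="edge")
    out = np.empty_like(g, dtype=float)
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

f = np.full((32, 32), 128.0)   # a flat reference "image"
g = degrade(f)                 # degraded observation
restored = mean_filter(g)      # restored estimate
# local averaging reduces the noise variance, pulling g back toward f
assert abs(restored - f).mean() < abs(g - f).mean()
```

  • A Wiener filter would additionally use the signal and noise power spectra; the mean filter is shown here only because it is the simplest member of the family.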
  • method 2: using the first convolutional neural network to perform super-resolution image reconstruction.
  • An image restoration model based on the first convolutional neural network is constructed based on the first training set and the first verification set, and image reconstruction is performed based on the image restoration model to obtain a reconstructed blood smear full-view image.
  • the learning method consists of forward propagation and backward error propagation.
  • the degraded image first enters the input layer, then passes through the intermediate hidden layers to the output layer. If the output does not match the expectation, backpropagation is performed according to the difference between the output and the expectation, and the hidden-layer weights are adjusted to reduce the feedback error. This process is iterated until the difference between the output and the expected value is less than the set threshold, yielding the final image restoration model.
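  • The iterate-until-below-threshold loop described above can be sketched on a toy linear model standing in for the network; the data, learning rate, and threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                      # the "expected" outputs

w = np.zeros(3)                     # weights to be learned
lr, threshold = 0.1, 1e-6
for _ in range(1000):
    pred = X @ w                    # forward propagation
    err = pred - y                  # difference from the expectation
    if np.mean(err ** 2) < threshold:
        break                       # stop once the difference is below the threshold
    w -= lr * (X.T @ err) / len(y)  # backward error propagation step

assert np.mean((X @ w - y) ** 2) < threshold
```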
  • Step S300 Select, from the multiple original blood smear single-field images, those that contain white blood cells in which the white blood cells and red blood cells do not overlap at all, to obtain a second training set and a second verification set, and construct an image segmentation model based on the second training set and the second verification set.
  • the multiple original blood smear single-field images with white blood cells and no overlap between white blood cells and red blood cells are processed by the image segmentation model to obtain multiple segmented single blood cell images.
  • the input of the image segmentation model is a single-field image of the entire blood smear, and the output is an image of a single blood cell.
  • a single-field image of a blood smear that contains white blood cells, with no overlap between white blood cells and red blood cells, is selected, and the positions and outlines of the white blood cells and red blood cells in the image are marked (as shown in Figure 3, which is a single-field image of the entire blood smear; the boxed parts are small images of blood cells).
  • An image segmentation model is constructed based on the second verification set and the second training set.
  • the second convolutional neural network includes multiple convolutional blocks.
  • Each convolutional block includes multiple convolutional layers, 1 pooling layer, 1 activation layer, and 1 fully connected layer, and each convolutional layer contains multiple convolution kernels.
  • Step S310 the input layer of the second convolutional neural network outputs a certain original blood smear single-field image data in the second training set to the convolution block of the second convolutional neural network;
  • Step S320 setting the number of convolution blocks of the coding structure of the second convolutional neural network, the number of convolution layers of each convolution block, the number of pooling layers of each convolution block, and each convolution The number and size of the convolution kernel of the layer, the number and size of the convolution kernel of each pooling layer, and extract the first key feature;
  • the size of the convolution kernels in each convolution layer of the coding structure of the second convolutional neural network is the same, and the number of convolution kernels in each convolution layer of a convolution block is twice that of the previous convolution block; the number of pooling layers in each convolution block is the same, and the number and size of convolution kernels in each pooling layer are the same.
  • in one embodiment, the number of convolutional blocks of the second convolutional neural network is set to 5, the number of convolutional layers of each convolutional block is 3, and the size of each convolution kernel of each convolutional layer is 3.
  • the number of convolution kernels in each convolution layer is 60 for the 1st convolution block, 120 for the 2nd, 240 for the 3rd, 480 for the 4th, and 960 for the 5th.
  • the number of pooling layers for each convolution block is 1, and the size of the convolution kernel for each pooling layer is 2.
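  • The doubling rule for kernel counts described above (60 → 120 → 240 → 480 → 960) can be expressed compactly; the helper name is illustrative:

```python
def encoder_channels(first=60, blocks=5):
    """Kernel count per convolution layer for each encoder block:
    each block doubles the count of the previous one."""
    return [first * 2 ** i for i in range(blocks)]

print(encoder_channels())  # [60, 120, 240, 480, 960]
```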
  • Step S330 Set the same number of decoding structure convolution blocks as the number of coding structure convolution blocks, the number and size of the convolution kernel of the convolution layer of each convolution block of the decoding structure, and each convolution block The number of pooling layers and the number of convolution kernels of each pooling layer are consistent with the convolution block corresponding to the coding structure, and the decoded data is obtained based on the first key feature;
  • Step S340 performing a final convolution operation on the decoded data, the size of the convolution kernel of the final convolution operation is 1, and the number of convolution kernels is set to the number of categories to be divided;
  • step S350 the fully connected layer of each convolution block of the second convolutional neural network fully connects the decoded data after the final convolution operation with the multiple neurons of the output layer of the second convolutional neural network.
  • the output layer of the second convolutional neural network outputs the predicted segmentation result
  • step S360 steps S310 to S350 are repeated, and the image segmentation model is obtained by iterative parameter adjustment.
  • in one embodiment, the size of the blood smear single-field image at the input layer is 512×512 pixels.
  • the coding structure of the second convolutional neural network has 5 convolutional blocks in total; each convolutional block has 1 pooling layer and 3 convolutional layers, and the size of each convolution kernel in each convolutional layer is 3. The number of convolution kernels in each convolutional layer of the first convolutional block is 60.
  • the decoding operation is then carried out. The input of the first convolution block of the decoding structure is the output of the fifth convolution block of the encoding structure; each convolution block of the decoding structure has 2 convolution layers and 1 pooling layer, and the size of each convolution kernel in each convolution layer is 3.
  • an up-convolution operation with a convolution kernel size of 2 performs upsampling to obtain feature a; the number of convolution kernels in each convolution layer of the second convolution block is then set to 480, and two convolution operations with a convolution kernel size of 3 are performed to obtain feature a'. Another up-convolution with a kernel size of 2 upsamples to obtain feature b; the number of convolution kernels of the third convolution block is set to 240, and two convolutions with a kernel size of 3 give feature b'. A further up-convolution with a kernel size of 2 upsamples to obtain feature c, and the number of convolution kernels of the fourth convolution block is set to 120.
  • Step S400 Obtain a third training set and a third verification set based on the multiple segmented single blood cell images, and construct an image recognition model based on the third training set and the third verification set.
  • the input of the image recognition model is the segmented single blood cell image, and the output is the probability value of the cell belonging to certain categories.
  • the original single-field blood smear image is segmented into white blood cell, red blood cell, and platelet images through the image segmentation model. These images are labeled with the category to which each cell belongs. When the number of labels reaches the training requirement (more than 10,000 images per category), 1/10 of them are randomly selected as the third verification set and the rest as the third training set.
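  • Since the recognition model outputs a probability value for each category, its final layer is conventionally a softmax over raw scores; the class names and scores below are illustrative assumptions, not the patent's actual categories:

```python
import numpy as np

def softmax(logits):
    """Convert raw output scores into class probabilities summing to 1."""
    z = logits - np.max(logits)    # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

classes = ["white blood cell", "red blood cell", "platelet"]
probs = softmax(np.array([2.0, 0.5, -1.0]))
assert abs(probs.sum() - 1.0) < 1e-9
assert classes[int(np.argmax(probs))] == "white blood cell"
```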
  • constructing an image recognition model includes the following steps:
  • Step S410 the input layer of the third convolutional neural network outputs a certain segmented single blood cell image data in the third training set to the convolution block of the third convolutional neural network;
  • Step S420 setting the number of convolutional blocks of the third convolutional neural network, the number of convolutional layers of each convolutional block, the number of pooling layers of each convolutional block, and the convolution of each convolutional layer The number and size of the cores, the number and size of the convolution cores of each pooling layer, and extract the second key feature;
  • the convolutional neural network can be set to include multiple convolutional blocks, each convolutional block including multiple convolutional layers, 1 pooling layer, 1 activation layer, 1 fully connected layer, each The convolutional layer contains multiple convolution kernels.
  • Step S430 the fully connected layer of the third convolutional neural network fully connects the second key feature and the multiple neurons of the output layer of the third convolutional neural network, and the output layer of the third convolutional neural network Output prediction recognition results;
  • Step S440 Repeat steps S410 to S430, use the third training set for training, and obtain the completed image recognition model through iterative parameter adjustment.
  • the input image size is 224*224*3, that is, 224*224 pixels with 3 channels (an RGB image).
  • the third convolutional neural network has four convolutional blocks, and each convolutional block contains multiple convolutional layers, a pooling layer, and an activation layer.
  • the first convolutional block contains 96 convolution kernels, each of size 11*11*3, and outputs a feature map of size 27*27*96; the second convolutional block contains 256 convolution kernels of size 5*5*48 and outputs a feature map of size 27*27*128; the third convolutional block contains 384 convolution kernels of size 3*3*256 and outputs a feature map of size 13*13*192; the fourth convolutional block contains 384 convolution kernels of size 3*3*192 and outputs a feature map of size 13*13*256.
  • the fully connected layer fully connects the output of the fourth convolution block with the 100 neurons of the final output layer.
  • step S500 the blood smear full-field image is restored through an image restoration model to obtain a restored full-field image, and the restored full-field image is processed by an image segmentation model to obtain multiple single blood cell images.
  • a single blood cell image is processed by the image recognition model to obtain the blood cell classification result.
  • the blood smear to be tested also needs to go through a blood smear scanner or a photomicrography system to perform full-field photography, establish a blood smear scan image, and then process it through an image restoration model to obtain restoration After the clear blood smear scan image, a single blood cell image can be obtained after image segmentation processing, and finally the cell classification result is obtained through the blood cell image recognition and the report is output.
  • the overall time has been shortened by more than two-thirds.
  • the accuracy of the prototype system on the test set of 200,000 white blood cell pictures is higher than 95%, and the false negative rate is lower than 1.6%.
  • the present invention provides a full-field intelligent analysis method for blood smears, which collects multiple original blood smear single-field images, establishes an original blood smear single-field image group, and establishes a blood smear full-field image based on the multiple original blood smear single-field images;
  • an image restoration model is constructed based on the first training set and the first verification set;
  • an image segmentation model is constructed based on the second training set and the second verification set; based on the multiple segmented single blood cell images, the third training set and the third verification set are obtained and the image recognition model is constructed; finally, the blood cell classification result is obtained.
  • the invention analyzes blood cells in the whole field of view based on artificial intelligence algorithms, which greatly reduces the interference of human factors, improves the objectivity of test results, and has high accuracy in blood cell analysis and classification; recognition and analysis can be realized for input of images that meet the requirements. Robustness and accuracy are higher than traditional image recognition algorithms, which overturns the existing medical inspection process and greatly reduces the overall time.
  • the acquisition of electronic data of whole slides is the basis for the realization of comprehensive and objective testing.
  • the current medical inspection field especially the routine blood test, has arduous tasks and heavy workload.
  • a considerable number of hospitals have introduced more advanced auxiliary inspection systems, but they cannot solve whole-slide testing problems, which often results in one-sided results and high manual re-examination rates.
  • the serious shortage and uneven distribution of high-level laboratory physicians leads to inconsistent judgments of abnormal cell morphology in peripheral blood.
  • the current mainstream recognition and classification algorithms are traditional; in actual operation their recognition accuracy is not high, and they are easily interfered with by subjective experience and human factors.
  • an embodiment of the present invention provides a method for constructing a blood cell recognition model to obtain a blood cell segmentation and recognition model for blood cell recognition.
  • the specific steps are as follows:
  • because the camera has a limited shooting range under a high-power microscope, especially a 100-fold objective lens, it can only capture single-field images with a physical size of about 150*100 μm (micrometers), as shown in Figures 11(a) and 11(b).
  • blood cells at the edge of a single-field image cannot be accurately identified; after stitching, the edge blood cells form complete blood cell images. Compared with single-field images, the full-field image therefore allows incomplete cells at the edge of a single field to be extracted without omission.
  • Commonly used algorithms for stitching include but are not limited to FAST algorithm, SURF algorithm, image registration, etc.
  • the acquisition method of the full-field image includes: first pushing the collected blood sample to obtain a blood slide, and then using high-precision photomicrography and mechanical control technology to take a full-field blood photo.
  • the imaging system first focuses on the entire glass slide, and then continuously moves along the same distance from the corner of the slide to take all the sub-field photos, and finally stitches together to form a full-field image.
  • the splicing method includes, but is not limited to, method 1: extracting feature points from pairs of physically adjacent single-field images, the feature descriptors including but not limited to SIFT, SURF, Harris corners, ORB, etc., then performing image feature matching, and finally forming a complete full-field image.
  • method 2: determine the size of the overlapping area of two adjacent single-field images, then take a weighted average of the overlapping parts to obtain the overlap region of the stitched image, and finally obtain the full-field image.
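  • Method 2 above, weighted averaging of the overlap between adjacent single-field images, can be sketched for two horizontally adjacent strips; the linear weight ramp is an illustrative choice, not the patent's specified weighting:

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Stitch two horizontally adjacent strips whose last/first `overlap`
    columns cover the same physical area, using a linear weighted average."""
    w = np.linspace(0.0, 1.0, overlap)  # weight ramps from left image to right
    merged = left[:, -overlap:] * (1 - w) + right[:, :overlap] * w
    return np.hstack([left[:, :-overlap], merged, right[:, overlap:]])

a = np.full((4, 10), 100.0)
b = np.full((4, 10), 100.0)
full = blend_overlap(a, b, overlap=4)
assert full.shape == (4, 16)   # 10 + 10 - 4 shared columns
```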
  • the labeling of blood cell types needs to be completed by experienced hematology laboratory doctors, and the labeling results can optionally be cross-validated.
  • an expert labeling system for white blood cell and red blood cell labeling based on three platforms can be optionally equipped.
  • the three platforms include iOS, Android and PC.
  • the portability of mobile devices is fully utilized: a corresponding APP is developed to distribute the data to the annotator's mobile device, so that data such as clarity and category can be annotated for different image types at any time.
  • a 10-fold cross-validation method is adopted to divide the data set into ten parts, and nine of them are used as training data and one is used as test data in turn for training and model optimization.
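  • The 10-fold cross-validation scheme above can be sketched as follows; the round-robin assignment of samples to folds is an illustrative choice:

```python
def k_fold_splits(items, k=10):
    """Divide the data set into k parts; each part serves once as test
    data while the remaining k - 1 parts form the training data."""
    folds = [items[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test

data = list(range(100))
for train, test in k_fold_splits(data):
    assert len(test) == 10 and len(train) == 90   # 1 test fold, 9 training folds
    assert sorted(train + test) == data           # nothing lost or duplicated
```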
  • the target blood cells are detected and extracted from the single field image of the target, so as to generate the target single blood cell image library, and prepare the necessary conditions for the recognition of the single blood cell.
  • the main techniques used are divided into two categories. One is the traditional image pattern recognition methods, such as normalization, color space conversion, and histogram averaging. The other is based on deep learning methods, such as YOLO, SSD, DenseBox, etc.
  • Both types of recognition methods can be used for modeling the blood cell segmentation model of the present invention. Since the blood cell image has a single category composition compared with the natural image, in one embodiment, a deep learning method is used to model the blood cell segmentation model.
  • the blood cell recognition model adopts a feedforward neural network that includes convolution calculations and has a deep structure. The feedforward neural network model is trained so that it implicitly learns to extract features from the training data; the model is optimized through continuous parameter tuning and error analysis, finally forming a mature blood cell recognition model.
  • if the accuracy rate (R) is not greater than the set threshold, the error of the blood cell recognition model is propagated backward and the weight of each convolutional layer is adjusted.
  • the classification of the detected single blood cell image is determined.
  • a feedforward neural network with a deep structure based on transfer learning is used to construct a blood cell recognition model, which is based on the ImageNet data set.
  • the original image recognition model is obtained through the above training, the blood cell image annotation library is then used for transfer learning, and the test model is obtained after adjusting the parameters.
  • the network uses a convolutional neural network to extract image features to achieve the purpose of image classification.
  • Residual learning can effectively alleviate gradient vanishing during backpropagation as well as network degradation, so the network can be extended to deeper layers, making it stronger and more robust.
  • the residual blocks are followed by two fully connected layers (FC) for network classification. The number of neurons in the first fully connected layer is 4096; these 4096 features are passed to the next layer of neurons, and the classification network (classes) classifies the image. The number of neurons in the last layer is the number of target categories.
  • blood cell images have a relatively single category composition. Therefore, pruning on the basis of traditional algorithms, together with changes to the convolution kernels and neural network layers, can achieve faster calculation speed and more accurate category judgment.
  • a detailed structure of the residual block is shown in Figure 10, which adds an identity mapping from input to output. The residual block (Res-Block) can deepen the network (to extract higher-level features) while solving the vanishing-gradient problem.
  • the residual module can obtain activation from a certain layer, and then feed it back to another layer or even a deeper layer.
  • a residual network can be constructed to train a deeper layer.
  • the network structure directly skips two 3x3, 64-channel network layers and transfers the features to a deeper layer. That is, the input x undergoes a 3*3 convolution, is activated by the ReLU activation function, undergoes another 3*3 convolution, is superimposed with the input x, and is then output after ReLU activation.
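  • The identity shortcut described above can be sketched with plain matrix multiplies standing in for the 3x3, 64-channel convolutions; this is an illustrative simplification, not the patent's network:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Two weight layers on the main path; the input x is added back
    (identity mapping) before the final activation."""
    out = relu(x @ w1)     # first weight layer + ReLU
    out = out @ w2         # second weight layer
    return relu(out + x)   # shortcut: superimpose the input, then activate

x = np.ones(4)
# with zero weights the main path contributes nothing, so the shortcut
# alone carries the signal through unchanged; this is why gradients can
# bypass the weight layers in a deep residual network
assert np.allclose(residual_block(x, np.zeros((4, 4)), np.zeros((4, 4))), x)
```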
  • Blood cell recognition models include but are not limited to convolutional neural networks, and can also be implemented based on traditional recognition algorithms or reinforcement learning ideas.
  • the present invention proposes the concept of full-field blood cell analysis for the first time.
  • the full-field range includes specific areas, designated areas and important parts of the glass slide (head, middle, tail), etc., as well as the entire glass slide. A field-of-view determination step can additionally be added before image segmentation.
  • manual evaluation can be used in the application process to evaluate the blood cell segmentation model, the segmentation of the recognition model, and the recognition results, respectively, and transfer the gradient in reverse according to the evaluation results to optimize the blood cell segmentation model and recognition model.
  • the blood cell segmentation and recognition model of the present invention can be integrated and loaded in the same intelligent stand-alone device, for example, a computer is used to load two of the models. It is also possible to load the two models into different smart stand-alone devices respectively according to actual needs.
  • the blood cell segmentation model is first used to segment the scanned image based on a single field of view glass slide to obtain a single blood cell image and the corresponding position after the target segmentation.
  • the blood cell recognition model is used to identify the cell category.
  • the position and type of blood cells are obtained, and the recognition result is shown in Figure 13.
  • the position and category information are used to label the scanned image of the single-field slide glass, and the recognition diagram of the single-field blood smear blood cell recognition model is obtained as shown in Figure 12.
  • the blood cell recognition model of the present invention can realize the labeling of 50 kinds of white blood cells and more than 20 kinds of red blood cells, training according to actual needs, and good scalability.
  • the invention realizes blood cell recognition based on artificial intelligence algorithms. Compared with traditional recognition methods, the accuracy is qualitatively improved and can reach more than 85%; blood cells in the whole field of view can be analyzed, which greatly improves scientific rigor.
  • the blood image database is generated based on the full-field image, and the blood cell segmentation model is trained to ensure the accuracy and comprehensiveness of the data and improve the accuracy of the blood cell segmentation model.
  • the use of a computer to achieve full-field blood cell analysis greatly reduces the interference of subjective human factors and improves the objectivity and consistency of the test results.
  • the blood cell segmentation model and recognition model are intelligent, and the software algorithm has self-learning properties. With the increase of high-quality labeled images, the training efficiency of the recognition model is gradually improved, and the accuracy of the software recognition and classification can be continuously optimized.
  • the current blood test process in the hospital is: blood sample → blood analyzer → smear pusher/stainer → manual microscopy; the whole process takes about 60 minutes. Blood is drawn manually to obtain blood samples; blood analysis equipment provides blood cell counts, white blood cell classification and hemoglobin content; a push-staining machine stains and marks the slides for manual microscopy; and the final blood cell morphological analysis results, such as the recognition of abnormal blood cells, are obtained after manual microscopy.
  • the existing blood analyzer technology is mainly based on three types of realization: electrical impedance, laser measurement, and comprehensive methods (flow cytometry, cytochemical staining, special cytoplasmic division, etc.).
  • the problems of the prior art are: first, analysis and counting of blood cells over the full field of view of the blood smear is not realized, and the data sample size is insufficient, resulting in one-sided and inaccurate results; second, the counting and classification algorithms are relatively traditional, the morphological analysis performance is relatively poor, and the recognition accuracy is not high; third, there is a serious shortage of high-level laboratory physicians for manual microscopic examination.
  • the blood smear is first photographed over the full field under the microscope, and the slide scan image group is established; then an annotation team composed of professional doctors and ordinary annotators manually annotates the original blood cell pictures, and images are randomly selected to establish a training set and a verification set; finally, artificial intelligence technology is used for model training, the model is optimized through continuous parameter tuning and error analysis, and a mature image instance recognition model is finally formed.
  • the input of the model is a single-field blood smear image, and the output is the position, edge and category of all target cells on the image.
  • the image clarity evaluation algorithm includes but is not limited to PSNR (Peak Signal to Noise Ratio, that is, peak signal-to-noise ratio), SSIM (structural similarity index, structural similarity), etc.
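  • PSNR, one of the clarity metrics named above, can be computed directly from the mean squared error between a reference image and a test image:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")    # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref + 10.0             # uniform error of 10 gray levels -> MSE = 100
assert abs(psnr(ref, noisy) - 28.13) < 0.01
assert psnr(ref, ref) == float("inf")
```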
  • the camera has a limited shooting range under a high-power microscope, especially a 100-fold objective lens, it can only shoot a single-field image with a physical size of about 150*100 ⁇ m, and blood cells at the edge of the single-field image cannot be accurately identified.
  • it is necessary to stitch approximately 25,000 single-field images into a full-field image.
  • blood cells at the edge of a single-field image form complete blood cell images after stitching, so the full-field image allows incomplete cells at the edge of a single field to be extracted without omission.
  • Commonly used algorithms for splicing include but are not limited to FAST algorithm, SURF algorithm, image registration, etc.
  • the collected blood samples are digitally processed and a blood picture database is established.
  • the database saves the full-slide full-field blood image of the blood smear or the single-field image with the best image quality after image quality evaluation.
  • Instance segmentation data labeling is divided into blood cell edge labeling and category labeling, completed respectively by ordinary annotators and experienced hematologists, and the labeling results are cross-validated. At least two annotators participate in each cross-validation: the same batch of data is distributed to different annotators, and if the annotation results agree, the annotation is considered valid; otherwise the annotation is invalid and the image is deleted or re-annotated.
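The agreement rule just described can be sketched minimally. The function name `cross_validate` and the per-annotator dictionary layout are assumptions for illustration:

```python
def cross_validate(labels_by_annotator):
    """Keep a label only when every annotator gave the same result for an image;
    otherwise flag the image as invalid (to be deleted or re-annotated)."""
    valid, invalid = {}, []
    for img in labels_by_annotator[0]:
        votes = {annotator[img] for annotator in labels_by_annotator}
        if len(votes) == 1:
            valid[img] = votes.pop()       # unanimous: annotation is valid
        else:
            invalid.append(img)            # disagreement: delete or re-annotate
    return valid, invalid
```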
  • the blood cell edge annotation is assisted by professional annotation software.
  • The annotator collects the cell edge information of the full-field blood images or single-field images in the blood picture database and generates a corresponding JSON-format file for each image; this file contains the contour, area, position, and other information of each single blood cell.
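A minimal sketch of such a per-image JSON record follows, using only the standard library. The field names (`image`, `cells`, `contour`, `area`, `position`) are assumptions — the patent only says the file holds contour, area, and position information:

```python
import json

def polygon_area(pts):
    """Pixel area of a closed contour via the shoelace formula."""
    s = sum(x1 * y2 - x2 * y1 for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))
    return abs(s) / 2.0

def centroid(pts):
    """Rough cell position: the mean of the contour vertices."""
    xs, ys = zip(*pts)
    return [sum(xs) / len(pts), sum(ys) / len(pts)]

def make_annotation(image_id, contours):
    """Assemble one JSON annotation record: one entry per blood cell."""
    return json.dumps({
        "image": image_id,
        "cells": [
            {"contour": c, "area": polygon_area(c), "position": centroid(c)}
            for c in contours
        ],
    })
```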
  • An expert labeling system for the two major cell classes, white blood cells and red blood cells, based on three platforms, can optionally be provided.
  • the three platforms include iOS, Android and PC.
  • The portability of mobile devices is fully utilized: a corresponding app distributes the data to the annotators' mobile devices, and users can label blood cell categories for different image types at any time.
  • After edge labeling, category labeling, and cross-validation are completed, single-field or full-field blood images with valid labeling results are gathered to form an instance annotation database used as the training sample set.
  • The blood cell recognition model is implemented with artificial intelligence algorithms, including but not limited to convolutional neural networks; other fully supervised, semi-supervised, or unsupervised artificial intelligence algorithms can also realize rapid recognition of single-field blood smear images.
  • The data set is divided equally into ten parts; in rotation, nine parts are used as training data and one as test data for training and optimization (ten-fold cross-validation).
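The ten-fold rotation described above can be sketched in a few lines of plain Python; the round-robin slicing used to form the ten parts is an assumption of this sketch:

```python
def ten_fold_splits(samples):
    """Yield (train, test) partitions for 10-fold cross-validation: the data
    set is cut into ten equal parts, and each part serves as the test fold
    once while the other nine are used for training."""
    k = 10
    folds = [samples[i::k] for i in range(k)]  # round-robin split into ten parts
    for i in range(k):
        test = folds[i]
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield train, test
```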
  • Images with valid cell edge annotations are selected from the sample set to form the cell edge data set, and images with valid cell category annotations form the cell category data set; training and validation sets are then extracted from the cell edge data set and the cell category data set respectively to train the blood cell recognition model.
  • If the model's edge segmentation accuracy (R) exceeds the threshold F1 and its type discrimination accuracy exceeds the threshold F2, model training is complete and the model is packaged; otherwise, if either accuracy (R) fails to meet its threshold, gradients are propagated backward to improve the accuracy (R) and adjust the blood cell recognition model.
  • The input of the blood cell recognition model is a single-field blood smear image; every pixel in the picture is labeled, and each pixel is mapped to the category it represents.
  • the network structure based on the Fully Convolutional Neural Network (FCN) can separate blood cells from the background and classify them.
  • Segmentation and recognition use a convolutional autoencoder structure. The core of the network is divided into two parts, encoder and decoder: the encoder encodes the input image and extracts features; the decoder decodes the extracted features and restores the image semantics.
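The shape bookkeeping of such an encoder-decoder — each encoder block halving the spatial size with 2×2 max-pooling and doubling the channel count, the decoder mirroring it with 2×2 up-convolutions — can be sketched as follows. The defaults (512×512 input, 60 starting channels, block count) follow one embodiment described in this document, but the helper itself is an illustrative assumption:

```python
def encoder_decoder_shapes(size=512, channels=60, blocks=2):
    """Track (spatial_size, channels) through a symmetric encoder/decoder:
    each encoder block halves H/W and doubles channels; the decoder reverses it."""
    enc = [(size, channels)]
    s, c = size, channels
    for _ in range(blocks):
        s, c = s // 2, c * 2          # 2x2 max-pool halves H/W, convs double channels
        enc.append((s, c))
    dec = [enc[-1]]
    for _ in range(blocks):
        s, c = s * 2, c // 2          # 2x2 up-convolution mirrors the encoder block
        dec.append((s, c))
    return enc, dec
```

This symmetry is what lets the decoder restore a per-pixel segmentation map at the input resolution.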
  • FIG. 16 is an example of a blood cell recognition model based on a Fully Convolutional Neural Network (FCN).
  • The network design uses an Encoder-Decoder architecture to extract the ROI feature map of the blood cell location area and uses a residual network to extract features from the ROI feature map.
  • The decoder then performs the decoding operation.
  • The potential region of the object to be segmented is obtained, and a convolution operation is then performed on this region to extract features; the residual block structure is used to build a deep network, and learning the residual to extract features of the potential region makes gradients easier to propagate backward.
  • The VC dimension of the network is larger, a finer-grained feature map is obtained, and this feature map is used for prediction.
  • The output of the last fully connected layer is the coordinates and category of the object to be detected, using a vectorized encoding: coordinate values + number of categories + confidence.
  • The mask produced by the Mask algorithm is fused with the category information obtained from the fully connected layer to yield the instance segmentation result.
  • The Mask algorithm obtains the position and edge information corresponding to the feature map, applies fully convolutional network (FCN) processing to obtain the category of each pixel (i.e., whether each pixel is a background pixel or a target pixel), performs residual processing to obtain the gradient-propagated result, applies pooling to obtain the dimension-reduced feature vector, performs convolution (2*2 up_conv), and finally obtains the edge information of the blood cell at that position.
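The per-pixel decision and the edge recovery at the end of that pipeline can be illustrated with a NumPy stand-in (thresholding a score map in place of the FCN head, and a simple 4-neighbour morphological rule in place of the learned edge output — both are assumptions of this sketch):

```python
import numpy as np

def pixel_mask(score_map, threshold=0.5):
    """Classify each pixel of an FCN score map as target (True) or background."""
    return score_map > threshold

def mask_to_edge(mask):
    """A target pixel is an edge pixel if any of its 4 neighbours is background."""
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior
```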
  • FIG. 17 is a schematic diagram of the edge information labeling result; the dotted line in the figure is the labeled edge information.
  • Figure 18 is a schematic diagram of the category labeling result; below each blood cell picture is its category label. It can be seen that the edge information and category labeling of the present invention are clear and accurate. The position, category, and edge information are used to mark the single-field slide scan image to obtain the single-field blood smear labeling result shown in Figure 20; the original image is shown in Figure 19.
  • the present invention proposes the concept of full-field blood cell analysis for the first time.
  • The full-field range includes specific areas, designated areas, and important parts of the slide, such as the head, middle, and tail, as well as the entire slide. A further step of first determining the field-of-view range after image input can be added.
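The head/middle/tail partition of a slide's fields mentioned above can be sketched as follows; the 20% fractions and the function name `slide_regions` are assumptions for illustration, since the patent names the regions but not their sizes:

```python
def slide_regions(num_fields, head=0.2, tail=0.2):
    """Split the ordered single-field indices of a slide into the 'important
    parts' named in the text: head, middle, and tail regions."""
    h = int(num_fields * head)
    t = int(num_fields * tail)
    return {"head": range(0, h),
            "middle": range(h, num_fields - t),
            "tail": range(num_fields - t, num_fields)}
```

Restricting analysis to one of these ranges (or to a physician-designated area) is the field-of-view determination step described above.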
  • Manual evaluation can be used to assess the edge labeling and category labeling results of the blood cell recognition model respectively, and gradients are propagated backward according to the evaluation results to optimize the blood cell recognition model.
  • the blood cell recognition model of the present invention can be loaded in a control device, such as a smart stand-alone device, such as a computer or a smart phone.
  • The blood cell recognition model of the present invention can label at least 50 types of white blood cells and at least 20 types of red blood cells, can be trained according to actual needs, and has good scalability.
  • the invention only needs to input the single-field blood smear image to output the recognition result, and realizes the end-to-end design.
  • The invention is based on artificial intelligence algorithms and realizes blood cell labeling with a qualitative improvement in accuracy over traditional recognition methods, reaching a recognition accuracy above 85%; it can analyze blood cells across the whole field of view, greatly improving scientific rigor.

Abstract

A blood smear full-field intelligent analysis method and a method for constructing a blood cell segmentation model and a recognition model. The analysis method collects multiple original single-field blood smear images, builds an original single-field blood smear image group, and builds a full-field blood smear image from these single-field images; an image restoration model is built from a first training set and a first validation set; an image segmentation model is built from a second training set and a second validation set; from the multiple segmented single blood cell images, a third training set and a third validation set are obtained and an image recognition model is built; a blood cell classification result is finally obtained. The method analyzes full-field blood cells with artificial intelligence algorithms, greatly reduces interference from human factors, improves the objectivity of test results, and classifies blood cells with high accuracy; recognition and analysis can be performed on any image input that meets the requirements, the algorithm is more robust and accurate than traditional image recognition algorithms, and the overall time is greatly shortened.

Description

血涂片全视野智能分析方法血细胞分割模型及识别模型的构造方法
本申请基于申请号为2019111868881、2019111868896、2019111867770,申请日为2019年11月28日的中国专利申请提出,并要求所述中国专利申请的优先权,所述中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本发明实施例涉及血细胞分析技术领域,特别涉及一种血涂片全视野智能分析方法血细胞分割模型及识别模型的构造方法。
背景技术
当前血细胞检验技术中,血细胞的识别率低、人为干扰因素大,现有的仪器设备未能对血涂片全视野进行分析,检验结果的准确性不高。目前医院血液检验流程是:血液样品——血液分析仪——推染片机——人工镜检,整个流程耗时60分钟。对患者进行人工抽血得到血液样品;通过血液分析仪得到各种血细胞计数、白细胞分类和血红蛋白含量;通过推染片机进行染色标记,得到用于人工镜检的血涂片;最终由专业医生进行人工镜检后得到人工分析的血细胞形态分析结果,包括异常血细胞计数、异常血细胞分类等。现有的血液分析仪(血球仪、血球计数仪等)技术实现主要基于电阻抗、激光测定以及综合方法(流式细胞术、细胞化学染色、特殊细胞质往除法等)等三类。
电阻抗法属于物理方法,血液按一定比例稀释后经负压吸引通过仪器的一个微孔小管,由于血细胞与稀释液相比是不良导体,当每个血细胞通过微孔时均取代等体积的稀释液在电路上形成一短暂的电阻而导致电压的变化,产生相应的脉冲信号并经放大、甄别后累加记录。采用此种原理的分析设备经常会不同程度的出现微孔小管堵塞的情况,造成血细胞分类计数结果波动较大。激光测定法是血液按一定比例稀释后形成一个极细的液流穿过激光束,每个血细胞被激光照射后产生光散射并被光电倍增管接收。细胞的前向角散射与细胞的体积大小有关、侧向角(或高角)散射与细胞的内部结构、颗粒性质等有关,细胞数量则与细胞通过激光束时光散射的脉冲次数相同。各种检测信号被放大、甄别后经计算机处理可得到各种血细胞的数量和体积大小的平均数、变异系数、占全血体积的百分比及体积大小分布直方图等。激 光型虽然比电阻型仪器稳定,但激光管寿命有限,而且测定白细胞种类有限。综合方法:此类仪器是多种先进的细胞分析技术的高度综合应用,对血细胞的分析参数更多,但其检验结果的客观性不足且识别精度不高,且无法完全取代人工镜检。
在现有技术中,人工复检效率太低,识别过程主观因素太大,从而导致极易受主观经验和人为因素干扰。一是未实现血涂片全视野范围内血细胞的分析和计数,数据样本量不够,导致结果片面性大,不够全面和准确;二是计数和分类算法比较传统,形态分析效果较差,识别准确度不高;三是人工镜检医师主观性无法控制,复检率高;四是时间较长,效率低。
发明内容
本发明实施方式的目的在于提供一种血涂片全视野智能分析方法血细胞分割模型及识别模型的构造方法,用于血常规检验领域的血细胞形态识别和定量分析。通过使用血涂片扫描仪或显微照相系统对多个血涂片进行全视野摄影,得到多个血涂片图像,从而建立血涂片图像组。从血涂片图像组中分别为图像复原、图像分割和图像识别三部分工作准备训练数据集和验证数据集,其中图像识别数据集还需要图像分割模型的支持。使用人工智能技术(深度学习和卷积神经网络)进行模型训练,并通过不断地参数调优和误差分析优化模型,最终得到成熟的图像复原模型、图像分割模型和图像识别模型,并进行部署。在系统应用过程中,待检测血涂片也同样需要经过血涂片扫描仪或者显微照相系统进行全视野摄影,建立血涂片扫描图像,再通过图像复原模型处理后获得复原后的清晰血涂片扫描图像,再经过图像分割处理后可以得到单张血细胞图像,最后经过血细胞图像识别获得细胞分类结果并输出报告。通过对血涂片全视野扫描分析,结果识别率高、准确度高。
本发明采用如下的技术方案实现:
一种血涂片全视野智能分析方法,包括如下步骤:采集多幅原始血涂片单视野图像,建立原始血涂片单视野图像组,并基于所述多幅原始血涂片单视野图像建立血涂片全视野图像;
基于所述多幅原始血涂片单视野图像,获得第一训练集和第一验证集,基于所述第一训练集和第一验证集构建图像复原模型;
在多幅所述原始血涂片单视野图像中选取存在白细胞以及白细胞和红细胞完全不重叠的原始血涂片单视野图像,获得第二训练集和第二验证集,基于所述第二训练集和第二验证集构建图像分割模型,所述多幅存在白细胞以及白细胞和红细胞完全不重叠的原始血涂片单视野图像经过图像分割模型处理,得到多幅分割后的单个血细胞图像;
基于所述多幅分割后的单个血细胞图像,获得第三训练集和第三验证集,基于所述第三 训练集和第三验证集构建图像识别模型;
所述血涂片全视野图像经过图像复原模型进行复原,得到复原后的全视野图像,所述复原后的全视野图像经过图像分割模型处理,得到多幅单个血细胞图像,所述多幅单个血细胞图像经过图像识别模型处理,得到血细胞分类结果。
进一步的,所述在多幅所述原始血涂片单视野图像中选取存在白细胞以及白细胞和红细胞完全不重叠的原始血涂片单视野图像,获得第二训练集和第二验证集,基于所述第二训练集和第二验证集构建图像分割模型,所述多幅存在白细胞以及白细胞和红细胞完全不重叠的原始血涂片单视野图像经过图像分割模型处理,得到多幅分割后的单个血细胞图像的步骤包括:
步骤S310,第二卷积神经网络的输入层输出第二训练集中的某一个原始血涂片单视野图像数据到第二卷积神经网络的卷积块;
步骤S320,设置所述第二卷积神经网络的编码结构的卷积块的数量、每个卷积块的卷积层的数量、每个卷积块的池化层的数量、每个卷积层的卷积核的数量及大小、每个池化层的卷积核的数量及大小,提取第一关键特征;
步骤S330,设定与编码结构卷积块的数量相同的解码结构卷积块数量,所述解码结构的每个卷积块的卷积层的卷积核的数量及大小、每个卷积块的池化层的数量、每个池化层的卷积核的数量均与编码结构中相对应的卷积块一致,基于所述第一关键特征得到解码后的数据;
步骤S340,对所述解码后的数据再进行卷积运算,所述卷积运算的卷积核大小为1、卷积核数量设置为需要分割的类别数;
步骤S350,第二卷积神经网络的每个卷积块的全连接层将所述再进行卷积运算的解码后数据和第二卷积神经网络的输出层的多个神经元进行全连接,所述第二卷积神经网络的输出层输出预测分割结果;
步骤S360,重复步骤S310至S350,使用第二训练集进行训练,通过迭代调参得到图像分割模型。
进一步的,所述基于所述多幅分割后的单个血细胞图像,获得第三训练集和第三验证集,基于所述第三训练集和第三验证集构建图像识别模型的步骤包括:
步骤S410,第三卷积神经网络的输入层输出第三训练集中的某一分割后的单个血细胞图像数据到第三卷积神经网络的卷积块;
步骤S420,设置所述第三卷积神经网络的卷积块的数量、每个卷积块的卷积层的数量、每个卷积块的池化层的数量、每个卷积层的卷积核的数量及大小、每个池化层的卷积核的数量 和大小,提取第二关键特征;
步骤S430,第三神经网络的每个卷积块的全连接层将所述第二关键特征和第三卷积神经网络的输出层的多个神经元进行全连接,所述第三卷积神经网络的输出层输出预测识别结果;
步骤S440,重复步骤S410至S430,使用第三训练集进行训练,通过迭代调参得到图像识别模型。
进一步的,所述第二卷积神经网络的编码结构的各个卷积层中的卷积核大小相同,编码结构的下一个卷积块的每个卷积层的卷积核数量均为上一个卷积块的每个卷积层的卷积核数量的2倍,编码结构的各个卷积块的池化层的数量均相同、各个池化层的卷积核的数量和大小相同。
进一步的,所述第二卷积神经网络的解码结构的各个卷积层中的卷积核大小相同,解码结构的下一个卷积块的每个卷积层的卷积核数量均为上一个卷积块的每个卷积层的卷积核数量的1/2,解码结构的各个卷积块的池化层的数量均相同、各个池化层的卷积核的数量和大小相同。
本发明提供了一种血涂片全视野智能分析方法,采集多幅原始血涂片单视野图像,建立原始血涂片单视野图像组,并基于所述多幅原始血涂片单视野图像建立血涂片全视野图像;基于所述第一训练集和第一验证集构建图像复原模型;基于第二训练集和第二验证集构建图像分割模型,基于所述多幅分割后的单个血细胞图像,获得第三训练集和第三验证集,构建图像识别模型;最终得到血细胞分类结果。
与现有技术相比,本发明实施例提供的血涂片全视野智能分析方法,根据不同应用领域,可以实现人工智能算法的选择和开放更新;基于人工智能算法对全视野血细胞进行分析,极大降低了人为因素的干扰,提高检验结果的客观性,对血细胞分析分类准确度高;对于符合要求的图片输入均可实现识别分析,算法鲁棒性和准确性都比传统图像识别算法要高,颠覆了现有医学检验流程,整体时间大大缩短。
本发明的实施方式提供了一种血细胞识别分割模型及识别模型的构造方法,通过构建血细胞分割模型及识别模型实现对血涂片进行准确的扫描分析,提高血细胞识别的全面性和准确性。
包括:
获取至少一幅血涂片中每幅血涂片的多个单视野图像,对每幅血涂片的多个所述单视野图像进行拼接形成一个全视野图像,构成全视野图像数据库,对所述全视野图像数据库中的 各个全视野图像进行人工图像分割,得到单个血细胞图像汇聚形成初始血细胞图像库;
对所述初始血细胞图像库中的单个血细胞图像进行人工标注,形成标注血细胞图像库;
构建血细胞分割模型及识别模型,在所述初始血细胞图像库中选取样本形成训练集和验证集,对血细胞分割模型进行训练,直至满足单个血细胞分割准确率的要求;在标注血细胞图像库中选取样本形成训练集和验证集,对血细胞识别模型进行训练,直至满足识别准确率的要求。
进一步地,所述拼接的方式包括:方式一,将物理位置相邻的单视野图像两两进行特征点提取,然后进行图像特征匹配,最终形成完整的全视野图像;或者方式二,判断两张相邻单视野图像重合区域大小,然后将重合部分进行加权平均,获取重合部分图像,最终获取全视野图像。
进一步地,进行人工标注的方法为在计算机或手机端,对白细胞和/或红细胞的类型以及图像清晰度进行标注,对标注结果进行交叉验证。
进一步地,血细胞识别模型采用具有深度结构的前馈神经网络构建。
进一步地,具有深度结构的前馈神经网络采用卷积层提取各类细胞的特征向量,通过最大池化提取所需特征向量,通过残差块进行残差学习,通过两层全连接层进行分类输出类别信息;
残差块输入经3*3的卷积,采用第一Relu激活函数激活,再经过3*3的卷积后与输入叠加,最后经第二Relu激活函数激活后输出。
进一步地,血细胞分割模型采用归一化、色彩空间转换、直方图均值化或深度学习的方法构建。
进一步地,深度学习的方法包含但不限于为YOLO、SSD或DenseBox。
本发明另一方面提供一种血细胞识别的方法,包括:
利用所述的血细胞分割模型及识别模型的构造方法构建血细胞分割模型及血细胞识别模型;
利用所述血细胞分割模型对单视野玻片扫描图像进行图像分割获得单个血细胞图像及对应位置;
利用所述血细胞识别模型对单个血细胞进行细胞类别的识别;
基于单个血细胞的位置和类别在单视野玻片扫描图像上进行标注。
进一步地,血细胞分割模型对单视野玻片扫描图像进行图像分割前还包括确定分割的视野范围,视野范围包括成像理想的特定区域、血细胞分布较多的重要部位和/或医生指定区域。
进一步地,还包括对血细胞分割模型、识别模型的分割、识别结果分别进行人工评估,根据评估结果反向传递梯度,优化所述血细胞分割模型、血细胞识别模型。
本发明实施例提供的一种血细胞识别分割模型及识别模型的构造方法与现有技术相比具有如下优点:
(1)本发明的血细胞分割模型、识别模型具有开放性,根据不同应用领域,可以实现人工智能算法的选择和开放更新;通用性好,对于符合软件系统要求的图像输入均可实现识别分析;
(2)本发明血细胞分割模型、识别模型具有智能性,软件算法具有自学习属性,随着高质量标注图像的增加,识别模型训练效率逐步提高,可不断优化软件识别分类准确度。
(3)本发明利用计算机实现全视野血细胞分析,避免了边缘血细胞的损失,极大降低了人为客观因素的干扰,提高检验结果的客观性和一致性。
(4)本发明基于全视野图像生成血液图像数据库,进行血细胞分割模型的训练,保证了样本数据的准确性和全面性,提高了血细胞分割模型分割的准确性。
（5）本发明基于全视野图像生成样本库，避免在单视野遗漏边缘不完整细胞，另外由于本专利能快速对血细胞进行准确定位与识别，因此能够保证全视野图像中全部血细胞（少则数千个，多则十万个）分析的准确性和高效性，同时，进行血细胞实例分割识别模型的训练，保证了样本数据的准确性和全面性，提高了识别标注的准确性。
本发明的实施方式提供一种端到端的血细胞识别模型构造方法及应用,基于全视野图像训练血细胞识别模型实现对血涂片进行准确的扫描分析,提高血细胞分割与识别的全面性和准确性。具体的,基于全视野图像形成数据样本集,使用人工智能技术训练血细胞识别模型,并通过不断地参数调优和误差分析优化模型,最终形成成熟的识别模型。模型输入为单视野血涂片图像,输出为该图像上所有的细胞位置、边缘及类别。本发明利用计算机实现全视野血细胞分析,极大降低了人为客观因素的干扰,提高检验结果的客观性和一致性。血细胞识别模型具有智能性,软件算法具有自学习属性,随着高质量标注图像的增加,识别模型训练效率逐步提高,可不断优化软件识别分类准确度。具体通过如下技术方案予以实现:
本发明提供一种端到端的血细胞识别模型构造方法,包括:
获取至少一幅血涂片中每幅血涂片的多个单视野图像,对每幅血涂片的每个所述单视野图像中的血细胞的类别和边缘进行人工标注,形成实例标注数据库;
构建血细胞识别模型,在所述实例标注数据库中选取样本形成训练集和验证集,对所述 血细胞识别模型进行训练,直至所述血细胞识别模型满足边缘分割准确率及类型判断准确率的要求。
本发明另一方面提供一种端到端的血细胞识别模型构造方法,包括:获取至少一幅血涂片中每幅血涂片的多个单视野图像,对每幅血涂片的多个所述单视野图像进行拼接形成一个全视野图像,对每个全视野图像中的血细胞的类别和边缘进行人工标注,形成实例标注数据库;
构建血细胞识别模型,在所述实例标注数据库中选取样本形成训练集和验证集,对所述血细胞识别模型进行训练,直至所述血细胞识别模型满足边缘分割准确率及类型判断准确率的要求。
进一步地,拼接的方式包括:方式一,将物理位置相邻的单视野图像两两进行特征点提取,然后进行图像特征匹配,最终形成完整的全视野图像;或者方式二,判断两张相邻单视野图像重合区域大小,然后将重合部分进行加权平均,获取重合部分图像,最终获取全视野图像。
进一步地,进行类别人工标注的方法为在控制终端对白细胞和/或红细胞的类型进行标注。
进一步地,进行边缘人工标注的方法为标注者对细胞边缘信息进行采集,针对每张图像生成包含单个血细胞的轮廓、面积以及位置信息的文件。
进一步地,所述血细胞识别模型采用全监督、半监督或无监督类型的人工智能算法。
进一步地,所述血细胞识别模型采用全卷积神经网络;全卷积神经网络采用编码器-解码器结构,编码器对输入的图像进行编码,提取特征;解码器对提取的特征进行解码,还原图像语义。
进一步地,所述血细胞识别模型首先进行编码运算,输入单视野血涂片图像,在每一层会进行双卷积运算,提取浅层特征,随后进行一次最大池化运算,进行所需特征提取,再次进行卷积运算,增加通道数量;
进行解码运算,首先进行一次卷积运算对解码结果进行上采样,随后进行双卷积运算,继续提取特征,再次进行卷积运算,浅层特征进行连接传递给深层,最后一层卷积输出特征图,得到待分割对象的潜在区域;然后对于所述潜在区域进行卷积运算,提取特征,利用残差块结构提取潜在区域的特征向后传播梯度,得到一张细粒度更高的特征图;
所述细粒度更高的特征图通过全连接网络进行回归以及目标对象的分类任务,最后一层全连接层的输出即为待检测对象每个像素的坐标和类别信息;所述细粒度更高的特征图进行卷积还通过Mask算法得到待检测对象的掩膜,再将所述掩膜和全连接层得到的所述类别信 息融合,得到实例分割的结果。
进一步地,所述Mask算法包括获取所述细粒度更高的特征图对应的位置及边缘信息,进行全卷积神经网络FCN处理,获得每个像素所属的类别为目标像素还是背景像素,进行残差处理,获得梯度传递后结果,进行池化,获得特征降维后向量,进行卷积,最后获得该位置处的血细胞对应的边缘信息。
本发明另一方面提供一种端到端的血细胞分割识别方法,包括:
利用所述的端到端的血细胞识别模型构造方法构建的血细胞识别模型,对每个单视野玻片扫描图像进行处理,获得各个所述单视野玻片扫描图像中血细胞位置、边缘及类别,标注在各个所述单视野玻片扫描图像上并输出。
进一步地,对于单视野玻片扫描图像,先确定分割的视野范围再进行处理;视野范围包括成像理想的特定区域、血细胞分布较多的重要部位和/或医生指定区域。
进一步地,对所述血细胞识别模型的边缘和类别标注结果分别进行人工评估,根据评估结果反向传递梯度,优化血细胞识别模型;
所述血细胞识别模型采用编码-解码架构进行血细胞位置区域的ROI特征图的提取,运用残差网络进行ROI特征图特征的提取,采用分类器基于所提取的特征获得该特征图对应的坐标和类别,Mask算法基于所述坐标获得对应的边缘。
本发明实施例提供的端到端的血细胞识别模型构造方法与现有技术相比具有如下优点:
(1)本发明的血细胞识别模型基于神经网络的人工智能识别分析系统架构和信息流程设计,具有开放性,根据不同应用领域,可以实现人工智能算法的选择和开放更新;通用性好,对于符合软件系统要求的图像输入均可实现识别分析;
(2)本发明血细胞识别模型具有智能性,软件算法具有自学习属性,随着高质量标注图像的增加,识别模型训练效率逐步提高,可不断优化软件识别分类准确度。
(3)本发明利用计算机实现全视野血细胞分析,保证了样本的全面性和准确性,提高了模型识别的准确性,极大降低了人为客观因素的干扰,提高检验结果的客观性和一致性。
(4)本发明基于全视野图像生成样本库,避免在单视野遗漏边缘不完整细胞,另外由于本发明能快速对血细胞进行准确定位与识别,因此能够保证全视野图像中全部血细胞(少则数千个,多则十万个)分析的准确性和高效性,同时,进行血细胞识别模型的训练,保证了样本数据的准确性和全面性,提高了识别标注的准确性。
(5)对单视野图像进行了图像质量评价,选择细胞最清晰的图像作为该视野的最终单视野图像,保证了作为样本的单视野图像的质量。
(6)本发明仅需输入单视野血涂片图像即可输出识别结果,实现了端到端的设计,使用者操作便捷。
上述说明仅是本发明技术方案的概述,为了能够更清楚了解本发明的技术手段,而可依照说明书的内容予以实施,并且为了让本发明的上述和其它目的、特征和优点能够更明显易懂,以下特举本发明的具体实施方式。
附图说明
一个或多个实施例通过与之对应的附图中的图片进行示例性说明,这些示例性说明并不构成对实施例的限定,附图中具有相同参考数字标号的元件表示为类似的元件,除非有特别申明,附图中的图不构成比例限制。
图1是本发明的血涂片全视野智能分析方法的流程图;
图2是本发明的血涂片全视野智能分析方法的示意图;
图3是本发明的图像分割方法中的血细胞标注的示意图;
图4是本发明的图像分割模型的示意图;
图5是本发明的图像识别模型的示意图;
图6是本发明的图像识别结果图;
图7为本发明基于深度学习的血细胞识别模型构建流程图;
图8为一个实施例中的血细胞分割、识别模型训练及工作流程图;
图9为一个实施例中的血细胞识别模型;
图10为残差块实施例中的详细结构图;
图11(a)为第一单视野图像示意图,图11(b)为第二单视野图像示意图;图11(c)为第一、第二单视野图像拼接后的图像示意图;
图12为一个实施例中的单视野血涂片血细胞识别模型识别图;
图13为图12为实施例的识别结果示意图;
图14为血细胞识别模型构建流程图;
图15为血细胞识别模型训练流程图;
图16为一个实施例中血细胞识别模型示意图;
图17为一个实施例中边缘信息标注示意图;
图18为一个实施例中类别标注示意图;
图19为一个单视野血涂片图像;
图20为图19标注后的图像。
具体实施方式
为使本发明实施例的目的、技术方案和优点更加清楚,下面将结合附图对本发明的各实施方式进行详细的阐述。然而,本领域的普通技术人员可以理解,在本发明各实施方式中,为了使读者更好地理解本申请而提出了许多技术细节。但是,即使没有这些技术细节和基于以下各实施方式的种种变化和修改,也可以实现本申请所要求保护的技术方案。以下各个实施例的划分是为了描述方便,不应对本发明的具体实现方式构成任何限定,各个实施例在不矛盾的前提下可以相互结合相互引用。
本发明提供了一种血涂片全视野智能分析方法,包括如下步骤,如图1和图2所示:
步骤S100,采集多幅原始血涂片单视野图像,建立原始血涂片单视野图像组,并基于所述多幅原始血涂片单视野图像建立血涂片全视野图像。
首先采集血液样本,制作成血涂片,利用基于自动化技术的血涂片扫描仪或者基于手工调节的显微照相系统,拍摄全视野血涂片照片。在全视野成像过程中,一是基于特征比对的图像拼接方法,一是基于模糊自适应的运动拍摄方法。
基于特征比对的图像拼接方法是为了将多张单视野图像合成为全视野图像。在这个过程中,需要利用特征比对和模式匹配来识别机械误差和拍照误差,将相邻单视野图像配准并拼接。
基于模糊自适应的运动拍摄方法放弃了传统的先对焦后拍照的显微摄影方案,而是在焦距方向的匀速运动过程中多次拍照,再应用基于运动补偿的加权合成算法将拍摄得到的多张单视野图像合成为一张清晰的全视野图像。
步骤S200,基于所述多幅原始血涂片单视野图像,获得第一训练集和第一验证集,基于所述第一训练集和第一验证集构建图像复原模型。
具体的,由于设备本身的机械运动抖动和光学部件差异会导致血涂片扫描仪或基于手工调节的显微照相系统拍摄的血涂片图像质量变坏,最终导致图像与实际不符。为了能有效的消除低质量图像带来的不利影响,构建了基于深度卷积神经网络的图像复原模型。该模型的输入为低质量图像(以下称退化图像),输出为经过去噪、去模糊、锐化后的高质量图像。先建立退化模型,根据该退化模型进行训练得到具体退化参数,然后通过去除噪声等建立复原模型,从而恢复图像。
具体的,复原模型建立过程如下:
将多幅血涂片单视野图像分成A和B两个集合,A集合中是该多幅血涂片单视野图像的所有低质量图像(下称退化图像),B集合中是该多幅血涂片单视野图像的所有高清图像,且A集合中的退化图像和B集合中的高清图像是多对一的关系,即B集合中的一个高清图像对应A集合中的多个退化图像。在获得A集合和B集合之后,随机抽取A集合中的1/10数量的退化图像和B集合中的1/10数量的与A集合中抽取出来的退化图像相对应的高清图像作为第一验证集,其余的图像作为第一训练集。
具体的,方式一:用先验知识进行重建。
建立退化模型,根据该退化模型进行图像复原;通过去除噪声建立复原模型,从而恢复图像。
假设退化函数是一个线性时不变的过程,则g(x,y)=h(x,y)*f(x,y)+η(x,y),式中的“*”表示卷积;其频率域的表示为:G(u,v)=H(u,v)F(u,v)+N(u,v)。退化函数通过观察、经验、建模等方式进行估计。相机的噪声主要来自于图像的获取过程和传输过程,因此从噪声的空域和频域构造退化函数。一些重要的噪声如高斯噪声、瑞利噪声、Gamma噪声等,采取的复原方式如均值滤波器、统计顺序滤波器、自适应滤波器、带阻滤波器、带通滤波器、陷波滤波器、陷波带通滤波器、最优陷波滤波器、反向滤波、维纳滤波等。
具体的,方式二:采用第一卷积神经网络进行超分辨率图像重建。
基于所述第一训练集、第一验证集构建基于第一卷积神经网络的图像复原模型,基于图像复原模型进行图像重建,得到重建后的血涂片全视野图像。
学习方式由正向传播和反向误差传播组成。退化图像先进入输入层,再经由输入层进入中间隐含层,然后至输出层。如果输出层与预期无法匹配,则根据输出层与预期的差值进行逆向传播,在这个过程中,调整隐含层的各个权重,从而使得反馈的误差变小。以上过程进行反复迭代,直至输出层与预期的差值小于设定的阈值,生成最终图像复原模型。
对完成的模型进行打包并部署。
步骤S300,在多幅所述原始血涂片单视野图像中选取存在白细胞以及白细胞和红细胞完全不重叠的原始血涂片单视野图像,获得第二训练集和第二验证集,基于所述第二训练集和第二验证集构建图像分割模型,所述多幅存在白细胞以及白细胞和红细胞完全不重叠的原始血涂片单视野图像经过图像分割模型处理,得到多幅分割后的单个血细胞图像。
图像分割模型的输入为整张血涂片单视野图像,输出是单个血细胞图像。
具体的,选取存在白细胞以及白细胞和红细胞完全不重叠的血涂片单视野图像,标注出 图像中的白细胞和红细胞的位置和轮廓(如图3所示,为整张血涂片单视野图,其中方框部分是血细胞小图),当标注量达到训练要求(5000-15000个)后,随机抽取其中的1/10数量的标注后的图像作为第二验证集,其余作为第二训练集
基于所述第二验证集、第二训练集构建图像分割模型。
第二卷积神经网络包括多个卷积块,每个卷积块包括多个卷积层、1个池化层、1个激活层和1个全连接层,每个卷积层包含多个卷积核。
具体的,包括如下步骤:
步骤S310,第二卷积神经网络的输入层输出第二训练集中的某一个原始血涂片单视野图像数据到第二卷积神经网络的卷积块;
步骤S320,设置所述第二卷积神经网络的编码结构的卷积块的数量、每个卷积块的卷积层的数量、每个卷积块的池化层的数量、每个卷积层的卷积核的数量及大小、每个池化层的卷积核的数量及大小,提取第一关键特征;
具体的,所述第二卷积神经网络的编码结构的各个卷积层中的卷积核大小相同,下一个卷积块的每个卷积层的卷积核数量均为上一个卷积块的每个卷积层的卷积核数量的2倍;每个卷积块的池化层的数量均相同、各个池化层的卷积核的数量和大小相同。
具体的,设置所述第二卷积神经网络的卷积块的数量为5,每个卷积块的卷积层的数量为3,每个卷积层的每个卷积核的大小均为3,第1个卷积块的每个卷积层的卷积核的数量为60,第2个卷积块的每个卷积层的卷积核的数量为120,第3个卷积块的每个卷积层的卷积核的数量为240,第4个卷积块的每个卷积层的卷积核的数量为480,第5个卷积块的每个卷积层的卷积核的数量为960,每个卷积块的池化层的数量为1,每个池化层的卷积核的大小为2。
步骤S330,设定与编码结构卷积块的数量相同的解码结构卷积块数量,所述解码结构的每个卷积块的卷积层的卷积核的数量及大小、每个卷积块的池化层的数量、每个池化层的卷积核的数量与编码结构相对应的卷积块一致,基于所述第一关键特征得到解码后的数据;
步骤S340,对所述解码后的数据再进行最后卷积运算,所述最后卷积运算的卷积核大小为1、卷积核数量设置为要分割的类别数;
步骤S350,第二卷积神经网络的每个卷积块的全连接层将所述再进行卷积运算的解码后数据和第二卷积神经网络的输出层的多个神经元进行全连接,所述第二卷积神经网络的输出层输出预测分割结果;
步骤S360,重复步骤S310至S350,通过迭代调参得到图像分割模型。
对完成的模型进行打包并部署。
具体的,可以为,如图4所示,输入层的血涂片单视野图像大小为512×512像素,第二卷积神经网络编码结构共5个卷积块,每个卷积块的池化层的数量为1。每个卷积块的卷积层均为3个,每个卷积层中的每个卷积核的大小均为3,第一个卷积块中的每个卷积层的卷积核的数量为60,进行3次卷积核大小为3的卷积运算,充分提取浅层特征A,然后进行1次卷积核大小为2的最大池化运算,提取出关键特征A’;再将第二个卷积块中的每个卷积层的卷积核数量设置为120,进行3次卷积核大小为3的卷积运算,充分提取浅层特征B,然后进行卷积核大小为2的最大池化运算,提取出关键特征B’;再将第三个卷积块中的每个卷积层的卷积核数量设置为240,进行3次卷积核大小为3的卷积运算,充分提取浅层特征C,然后进行1次卷积核大小为2的最大池化运算,提取出关键特征C’;再将第四个卷积块中的每个卷积层的卷积核数量设置为480,进行3次卷积核大小为3的卷积运算,充分提取浅层特征D,然后进行1次卷积核大小为2的最大池化运算,提取出关键特征D’;再将第五个卷积块中的每个卷积层的卷积核数量设置为960,进行3次卷积核大小为3的卷积运算,充分提取浅层特征E,然后进行1次卷积核大小为2的最大池化运算,从而提取最终关键特征;
随后进行解码运算,解码结构共5个卷积块,解码结构的第一个卷积块是编码结构的第五个卷积块,解码结构的每个卷积块的卷积层均为2个,每个卷积块的池化层的数量为1,每个卷积层中的每个卷积核的大小均为3,首先基于上述最终关键特征进行一次卷积核大小为2的上卷积运算进行上采样,得到特征a,然后设置第二个卷积块的每个卷积层的卷积核数量为480,进行2次卷积核大小为3的卷积运算,得到特征a’;再进行一次卷积核大小为2的上卷积运算进行上采样,得到特征b,然后设置第三个卷积块的每个卷积层的卷积核数量为240,进行2次卷积核大小为3的卷积运算,得到特征b’;再进行一次卷积核大小为2的上卷积运算进行上采样,得到特征c,然后设置第四个卷积块的卷积核数量为120,进行2次卷积核大小为3的卷积运算,得到特征c’;再进行一次卷积核大小为2的上卷积运算进行上采样,得到特征d,然后设置第五个卷积块的卷积核数量为60,进行2次卷积核大小为3的卷积运算,得到特征d’,最后进行1*1的卷积运算,卷积核数量设置为要分割的类别数,得到分割结果。
重复上述步骤,使用第二训练集对图像分割模型进行训练,通过迭代调参得到完成的图像分割模型;这之后,对完成的模型进行打包并部署。
步骤S400,基于所述多幅分割后的单个血细胞图像,获得第三训练集和第三验证集,基于所述第三训练集和第三验证集构建图像识别模型。
图像识别模型的输入为分割后的单个血细胞图像,输出是该细胞属于某些类别的概率值。
具体的,通过图像分割模型将原始单视野血涂片图像分割为一个个白细胞、红细胞、血小板图。对这些白细胞、红细胞、血小板图进行标注,标注出该细胞所属的类别。当标注量达到训练要求(每一个类别中的图的数量大于10000)后,随机抽取其中1/10作为第三验证集,其余作为第三训练集。
具体的,构建图像识别模型,包括如下步骤:
步骤S410,第三卷积神经网络的输入层输出第三训练集中的某一分割后的单个血细胞图像数据到第三卷积神经网络的卷积块;
步骤S420,设置所述第三卷积神经网络的卷积块的数量、每个卷积块的卷积层的数量、每个卷积块的池化层的数量、每个卷积层的卷积核的数量及大小、每个池化层的卷积核的数量和大小,提取第二关键特征;
具体的,卷积神经网络可以设定为:包括多个卷积块,每个卷积块包括多个卷积层、1个池化层、1个激活层,1个全连接层,每个卷积层包含多个卷积核。
步骤S430,第三卷积神经网络的全连接层将所述第二关键特征和第三卷积神经网络的输出层的多个神经元进行全连接,所述第三卷积神经网络的输出层输出预测识别结果;
步骤S440,重复步骤S410至S430,使用第三训练集进行训练,通过迭代调参得到完成的图像识别模型。
对完成的模型进行打包并部署。
具体的,如图5所示,输入的图像大小为224*224*3,图像大小为224*224像素,3为RGB图像。第三卷积神经网络的卷积块为4个,每个卷积块包含多个卷积层、1层池化层和1层激活层。第1个卷积块内部包含96个卷积层,每个卷积层的卷积核大小为11*11*3,每个卷积层的卷积核数量为27*27*96;第2个卷积块内部包含256个卷积层,每个卷积层的卷积核大小为5*5*48,每个卷积层的卷积核数量为27*27*128;第3个卷积块内部包含384个卷积层,每个卷积层的卷积核大小为3*3*256,每个卷积层的卷积核数量为13*13*192;第4个卷积块内部包含384个卷积层,每个卷积层的卷积核大小为3*3*192,每个卷积层的卷积核数量为13*13*256。全连接层将第4个卷积块的输出和最后输出层的100个神经元进行全连接,输出预测识别结果。
使用第三训练集对图像识别模型进行训练,通过迭代调参得到完成的图像识别模型;这之后,对完成的图像识别模型进行打包并部署。
步骤S500,所述血涂片全视野图像经过图像复原模型进行复原,得到复原后的全视野图像,所述复原后的全视野图像经过图像分割模型处理,得到多幅单个血细胞图像,所述多幅 单个血细胞图像经过图像识别模型处理,得到血细胞分类结果。
具体的,在系统应用过程中,待检测血液血涂片也同样需要经过血涂片扫描仪或者显微照相系统进行全视野摄影,建立血涂片扫描图像,再通过图像复原模型处理后获得复原后的清晰血涂片扫描图像,经过图像分割处理后可以得到单张血细胞图像,最后经过血细胞图像识别获得细胞分类结果并输出报告。
具体的,如图6所示,为图像识别结果。
整体时间缩短三分之二以上,原型系统在20万张白细胞图片的测试集上,精确度高于95%,漏报率低于1.6%。
综上所述,本发明提供了一种血涂片全视野智能分析方法,采集多幅原始血涂片单视野图像,建立原始血涂片单视野图像组,并基于所述多幅原始血涂片单视野图像建立血涂片全视野图像;基于所述第一训练集和第一验证集构建图像复原模型;基于第二训练集和第二验证集构建图像分割模型,基于所述多幅分割后的单个血细胞图像,获得第三训练集和第三验证集,构建图像识别模型;最终得到血细胞分类结果。本发明基于人工智能算法对全视野血细胞进行分析,极大降低了人为因素的干扰,提高检验结果的客观性,对血细胞分析分类准确度高;对符合要求的图片输入均可实现识别分析,算法鲁棒性和准确性比传统图像识别算法高,颠覆了现有医学检验流程,整体时间大大缩短。
全玻片电子数据的获取是实现全面、客观检测的基础,当前医疗检验领域尤其是血常规检验任务繁重、工作量大,相当一部分医院引进了较先进的辅助检验系统,但不能解决全玻片检验问题,往往造成结果片面性较大,人工复检率高;另外,高水平检验医师人才严重不足和分布不均,导致对外周血中非正常细胞形态判断结果不一,当前主要的识别分类算法属于传统序列,实际运行过程中,识别准确率不高且极易受主观经验和人为因素干扰。
现有的血细胞识别主要存在两个技术问题:(1)不能对血涂片全视野扫描分析,导致结果片面性较大,不准确;(2)由于识别分类算法缺陷,导致依赖人工复检,从而导致极易受主观经验和人为因素干扰,使得识别准确率低。
为解决上述技术问题,结合图13,本发明实施例提供一种血细胞识别模型构造方法,获得用于血细胞识别的血细胞分割、识别模型,具体步骤如下:
(1)图像采集
采集外周血,制作血涂片,将采集到的血液样本进行数字化处理并建立血液图像数据库,该数据库中保存的为血涂片的全玻片全视野图像。
由于相机在高倍显微镜下,尤其是100倍物镜下,所拍摄范围有限,仅能拍摄物理大小大约150*100μm(微米)的单视野图像,如图11(a)、(b)所示,单视野图像边缘的血细胞无法准确识别。为了无遗漏的获取整个血液玻片细胞的图像(大约15mm*25mm大小),需将大约25000张单视野图像拼接为全视野图像,如图11(c)所示,边缘的血细胞在拼接后形成完整血细胞图像,相较于单视野图像,全视野图像能够无遗漏的提取处于单视野边缘的不完整细胞,拼接常用的算法包括但不限于FAST算法、SURF算法、图像配准等。
全视野图像的获取方法包括:首先将采集到的血液样本推片得到血玻片,再利用高精度显微摄影和机械控制技术,拍摄全视野血液照片。成像系统先对全玻片进行定点对焦,再从玻片一角开始沿相等间距不断移动并拍摄下所有分视野照片,并最终拼接形成全视野图像。对血涂片图像进行图像预处理和人工图像分割,得到单个血细胞图像,汇聚形成原始血细胞图像库,作为血细胞分割模型的训练样本。
拼接的方式包括但不限于,方式一:将物理位置相邻的单视野图像两两进行特征点提取,特征因子包含但不限于sift、surf、harris角点、ORB等,然后进行图像特征匹配,最终形成完整的全视野图像。方式二:判断两张相邻单视野图像重合区域大小,然后将重合部分进行加权平均,获取重合部分图像,最终获取全视野图像。
(2)人工标注
对原始血细胞图像进行标注,形成标注血细胞图像库,作为血细胞识别模型的训练的样本集。血细胞类别标注需要具有丰富经验的血液检验科医生来完成,可选择对标注结果进行交叉验证。
为了方便专业医生及相关标注人员的标注工作,可选择性配备基于三种平台的白细胞和红细胞标注两大类细胞的专家标注系统,三种平台包括iOS、Android和PC三大平台。在一个实施例中充分利用移动设备的便携性,开发对应的APP将数据分发至标注人员的移动设备上,可随时针对不同的图像类型进行清晰度、类别等数据的标注。
(3)构建血细胞分割、识别模型并训练
随机选取训练样本形成训练集和验证集，对血细胞分割、识别模型进行训练。在一个实施例中采用十折交叉验证（10-fold cross-validation）方法，将数据集分成十份，轮流将其中9份作为训练数据，1份作为测试数据，进行训练和模型优化。
结合图8,从初始血细胞图像库选择训练集和验证集对血细胞分割模型进行训练,如果血细胞分割模型的获得单个血细胞位置和图像的准确率(R)大于设定阈值F1,则完成模型训练,将模型打包;否则,如果准确率(R)不大于设定阈值F1,则进行梯度反向传递,提 高准确率(R),调整血细胞分割模型。初始血细胞图像库是基于全视野图像构建的,因此血细胞分割模型分割的准确性更高。
从目标的单个视野图像中检测提取目标血细胞,从而生成目标的单个血细胞图像库,为单个血细胞的识别准备必要条件。其中用到的主要技术分为两类,一类为传统的图像模式识别方式,如归一化、色彩空间转换、直方图均值化等。另一类为基于深度学习的方法,如YOLO、SSD、DenseBox等。
两类识别方式均可用于本发明血细胞分割模型的建模。由于血细胞图像与自然图像相比,其类别组成比较单一,在一个实施例中,采用深度学习的方式,对血细胞分割模型建模。
从标注血细胞图像库选择训练集和验证集对血细胞识别模型进行训练,如果血细胞识别模型的准确率(R)大于设定阈值F2,则完成模型训练,将模型打包;否则,如果准确率(R)不大于设定阈值F2,则进行梯度反向传递,提高准确率(R),调整血细胞识别模型。
可选择的,针对血细胞本身特点,血细胞识别模型采用包含卷积计算且具有深度结构的前馈神经网络(Feedforward Neural Networks),对前馈神经网络模型进行训练,从而隐式地从训练数据中进行学习提取特征,并通过不断地参数调优和误差分析优化模型,最终形成成熟的血细胞识别模型。当准确率(R)不大于设定阈值时,反向传递血细胞识别模型的准确率(R),调整各卷积层的权重。
对检测到的单个血细胞图像进行类别判定。为提高血细胞识别率,降低对原始图像质量的要求和训练样本数量的限制,在一个实施例中采用基于迁移学习的具有深度结构的前馈神经网络来构造血细胞识别模型,在ImageNet数据集的基础上训练得到原始图像识别模型,再利用血细胞图像标注库进行迁移学习,调整参数后得到测试模型。通过卷积核、神经网络层的改变,能取得更快的运算速度以及更加准确的类别判断。
作为前馈神经网络一个优选的实施例,结合图9,网络采用卷积神经网络来提取图像特征,达到对图像进行分类的目的。
网络的输入为572*572的单个血细胞图像,随后进行kernel size=3,channel=64的卷积运算,提取各类细胞的特征向量,随后进行size=2的最大池化(maxpooling)运算,提取出已提取到的特征中最重要的特征,例如边缘、纹理和色彩等特征,继续进行卷积运算。
在第三层之后,连接残差块(Res.block),进行残差学习。残差学习可以有效缓解梯度反向传播时的消失和网络退化现象,故可将网络扩展至深层。使得网络更强大,更鲁棒,共五层残差块,且在残差块内,由于要保持恒等映射,故利用conv 1*1调整输出大小和通道(channel)数目,随后,在残差块后接两层全连接层(FC)用于网络的分类,第一层全连接 层神经元数目为4096,将4096个特征传递给下一层神经元,利用分类网络(classes)对图像进行分类,最后一层神经元数目即为目标类别的数目。由于血细胞图像与自然图像相比,其类别组成比较单一,因此,在传统算法的基础上进行剪枝,以及卷积核、神经网络层的改变,能取得更快的运算速度以及更加准确的类别判断。残差块的一种详细结构如图10所示,增加一个由输入到输出的恒等映射,该残差块(Res-Block)可以在加深网络(以提取更高级特征)的情况下解决梯度消失的问题。残差模块可以从某一层获得激活,然后反馈给另外一层甚至更深层,利用skip connection可以构建残差网络来训练更深的层。在图10中曲线部分,网络结构直接跳过两个3x3,64的网络层,将特征传递到更深的层。即输入x经3*3的卷积,采用Relu激活函数激活,再经过3*3的卷积后与输入x叠加,再经Relu激活函数激活后输出。
血细胞识别模型包括但不限于卷积神经网络,也可基于传统识别算法或者强化学习思想进行实现。
对于玻片成像来说,具有成像比较理想的特定区域,能够提供更为优质的图像数据;对于一些重要部位,例如玻片的头部、中部、尾部为血细胞的重点分布区域,对识别结果的影响较大;现实情况中还存在医生对部分区域感兴趣,可能指定部分区域。本发明首次提出全视野血细胞分析概念,全视野范围包括玻片特定区域、指定区域以及玻片重要部位(头部、中部、尾部)等范围,以及全玻片范围。可以进一步增加在进行图像分割前,首先确定视野范围。
作为可选方案,在应用过程中可采用人工评估的方式对血细胞分割模型、识别模型的分割、识别结果分别进行评估,根据评估结果反向传递梯度,对血细胞分割模型、识别模型进行优化。
作为可选方案,本发明的血细胞分割、识别模型可以集成加载在同一智能单机设备中,例如采用计算机加载两个所述模型。也可以根据实际需要将两个所述模型分别加载于不同的智能单机设备中。
结合图8,在实际的应用过程中,首先采用血细胞分割模型对基于单个视野玻片扫描图像进行图像分割,获得目标分割后的单个血细胞图像及对应位置,利用血细胞识别模型进行细胞类别的识别,进而获得血细胞的位置和类别,识别结果如图13所示。采用位置和类别信息在单个视野玻片扫描图像上进行标注,获得单视野血涂片血细胞识别模型识别图如图12所示。
本发明的血细胞识别模型可实现50种白细胞及20余种红细胞的标注,根据实际的需要进行训练,可扩展性好。
本发明基于人工智能算法,实现对血细胞的识别,相对于传统的识别方法准确度有质的提高,可到达85%以上的准确率;可对全视野血细胞进行分析,大大提高科学性。
基于全视野图像生成血液图像数据库,进行血细胞分割模型的训练,保证了数据的准确性和全面性,提高了血细胞分割模型分割的准确性。利用计算机实现全视野血细胞分析,极大降低了人为客观因素的干扰,提高检验结果的客观性和一致性。血细胞分割模型、识别模型具有智能性,软件算法具有自学习属性,随着高质量标注图像的增加,识别模型训练效率逐步提高,可不断优化软件识别分类准确度。
目前医院血液检验流程是:血样品——血液分析仪——推染片机——人工镜检,整个流程约耗时60分钟。人工抽血得到血液样品;通过血液分析仪器得到各种血细胞计数、白细胞分类和血红蛋白含量;通过推染片机进行染色标记到人工镜检的玻片;人工镜检后得到最终的血细胞形态分析结果,例如异常血细胞识别等。
现有的血液分析仪技术主要基于电阻抗、激光测定以及综合方法(流式细胞术、细胞化学染色、特殊细胞质往除法等)等三类实现。
现有技术的问题在于：一是未实现血涂片全视野范围内血细胞的分析和计数，数据样本量不够，导致结果片面性大，不准确；二是计数和分类算法比较传统，形态分析效果较差，识别准确度不高；三是高水平检验医师人才严重不足，并且人工镜检医师主观性无法控制。
基于上述技术问题,结合图14,血细胞识别模型的构造,首先对血涂片进行显微镜下的全视野摄影,建立玻片扫描图像组;然后由专业的医生与普通标注者组成的标注团队对原始血细胞图片进行人工标注,并随机抽取图像建立训练集和验证集;最后,使用人工智能技术进行模型训练,并通过不断地参数调优和误差分析优化模型,最终形成成熟的图像示例识别模型。该模型输入为单视野血涂片图像,输出为该图像上所有的目标细胞位置、边缘及类别。
(1)图像采集
将已经染色推片的血涂片放入显微镜下,连接相机后,调节对焦的同时对同一视野进行高速连续拍照,然后对该视野的若干张图像进行图像质量评价,选择细胞最清晰的图像作为该视野的最终单视野图像,图像清晰度评价算法包含但不限于PSNR(Peak Signal to Noise Ratio,即峰值信噪比)、SSIM(structural similarity index,结构相似性)等。
由于相机在高倍显微镜下,尤其是100倍物镜下,所拍摄范围有限,仅能拍摄物理大小大约150*100μm的单视野图像,单视野图像边缘的血细胞无法准确识别。为了无遗漏的获取整个血液玻片细胞的图像,需将大约25000张单视野图像拼接为全视野图像,单视野图像 边缘的血细胞在拼接后形成完整血细胞图像,相较于单视野图像,全视野图像能够无遗漏的提取处于单视野边缘的不完整细胞。拼接常用的算法包括但不限于FAST算法、SURF算法、图像配准等。
将采集到的血液样本进行数字化处理并建立血液图片数据库,该数据库中保存的是血涂片的全玻片全视野血液图像或者为图像质量评价后图像质量最好的单视野图像。
(2)人工标注
实例分割数据标注工作分为血细胞边缘标注及类别标注,可以分别由普通标注者与具有丰富经验的血液检验科医生来完成,并对标注结果进行交叉验证,交叉验证的过程至少有两个以上的标注者参与,具体过程为将同一批数据分发给不同的标注者,标注结果相同则认为标注有效。否则如果标注无效则删除该图像,或者重新标注。
血细胞边缘标注由专业的标注软件来辅助完成,标注者对血液图片数据库中全视野血液图像或者单视野图像的细胞边缘信息进行采集,针对每张图像生成对应的json格式文件,此文件包含单个血细胞的轮廓、面积、定位等信息。
类别标注部分,为了方便专业医生及相关标注人员的标注工作,可选择性配备基于三种平台的白细胞和红细胞标注两大类细胞的专家标注系统,三种平台包括iOS、Android和PC三大平台。在一个实施例中充分利用移动设备的便携性,开发对应的APP将数据分发至标注人员的移动设备上,使用者可随时针对不同的图像类型进行血细胞类别的标注。
完成边缘标注、类别标注及交叉验证后,标注结果有效的单视野或全视野血液图像汇聚形成实例标注数据库,作为训练样本集。
(3)构建血细胞识别模型并训练
血细胞识别模型采用人工智能算法实现,包括但不限于采用卷积神经网络,也可以其他全监督、半监督类型或非监督类人工智能算法。实现单视野血涂片图像的快速识别。
随机从样本集中选取全视野血液图像或单视野图像形成训练集和验证集。按照十折交叉验证(10-fold cross-validation)的要求,将数据集平均分成十份,轮流将其中9份作为训练数据,1份作为测试数据,进行训练和优化。
结合图15,从样本集中分别选择细胞边缘标注有效的图像形成细胞边缘数据集,选择细胞类别标注有效的图像形成细胞类别数据集,并分别从细胞边缘数据集和细胞类别数据集中提取训练集和验证集对血细胞识别模型进行训练。
如果血细胞识别模型的获得单个边缘分割准确率(R)大于设定阈值F1,图像的类型判别的准确率(R)大于设定阈值F2,则完成模型训练,将模型打包;否则,如果任一准确率 (R)不满足阈值要求,则进行梯度反向传递,提高准确率(R),调整血细胞识别模型。
在实际应用中,血细胞识别模型输入为单视野血涂片图像,标记图片中每一个像素,并将每一个像素与其表示的类别对应起来。对已标记像素级图像分割数据集训练以获得识别模型,基于全卷积神经网络(FCN)的网络结构能够很好的将血细胞从背景中分离出来并进行分类,分割识别采用卷积自编码器结构,网络的核心部分主要分为编码器和解码器两部分,编码器对输入的图像进行编码,提取特征;解码器对提取的特征进行解码,还原图像语义。
图16为基于全卷积神经网络(FCN)血细胞识别模型的实施例。网络设计运用了编码-解码(Encoder-Decoder)架构进行血细胞位置区域的ROI特征图的提取,运用残差网络进行ROI特征图特征的提取。首先进行编码Encoder运算,利用全卷积网络input的输入单视野血涂片图像大小为572*572,在每一层会进行kernel_size=3的双卷积运算(con),设置通道(channel)数量为100,来充分提取浅层特征,随后进行一次kernel_size=2的最大池化(maxpooling)运算,使得网络可以提取出最关键的特征。随后再次进行卷积运算(up_conv),增加通道(channel)数量为原来2倍,以此规则最终channel=300。
随后进行Decoder解码运算,首先进行一次up_conv运算对Decoder结果进行上采样,随后进行两次kernel_size=3的卷积运算,继续提取特征输出特征图(feature_map)。此时得到待分割对象的潜在区域,然后对于此区域进行卷积运算,提取特征,利用残差块结构,设计深层网络,利用学习残差提取潜在区域的特征,使得梯度更易向后传播,网络的vc维更大,得到一张细粒度更高的特征图,利用此特征图进行预测。在特征图后接全连接网络FC进行Bbox的回归以及目标对象的分类任务,最后一层全连接层的输出即为待检测对象的坐标和类别,此处采用向量化的编码方式,即坐标值+类别数+置信度。同时,在之前学习到的特征图后接卷积层,conv 1*1,channel=待分割目标类别数目,通过Mask算法得到待检测对象的掩膜Mask,即边缘信息,随后将此掩膜Mask和全连接层所得到的类别信息进行融合,即可得到实例分割的结果。Mask算法包括获取该特征图对应的位置及边缘信息,进行全卷积神经网络FCN处理,获得每个像素所属的类别,即判断每个像素是属于背景像素还是目标像素,进行残差处理,获得梯度传递后结果,进行池化,获得特征降维后向量,进行卷积(2*2up_conv),最后获得该位置处的血细胞对应的边缘信息。
结合图17,为边缘信息标注结果示意图,图中点线为标注的边缘信息。图18为类别标注结果示意图,每个血细胞图片下方为类别标注的结果。可以看出本发明的边缘信息及类别标注清晰、准确。采用位置、类别信息、边缘信息在单个视野玻片扫描图像上进行标注,获得单视野血涂片标注结果,如图20所示,原图如图19所示。
对于玻片成像来说,具有成像比较理想的特定区域,能够提供更为优质的图像数据;对于一些重要部位,例如玻片的头部、中部、尾部为血细胞的重点分布区域,对识别结果的影响较大;现实情况中还存在医生对部分区域感兴趣,可能指定部分区域。本发明首次提出全视野血细胞分析概念,全视野范围包括玻片特定区域、指定区域以及玻片重要部位,例如头部、中部、尾部等范围,以及全玻片范围。可以进一步增加在进行图像输入后,首先确定视野范围。
作为可选方案,在血细胞识别模型实际应用过程中可采用人工评估的方式对血细胞识别模型的边缘标注、类别标注结果分别进行评估,根据评估结果反向传递梯度,对血细胞识别模型进行优化。
作为可选方案,本发明的血细胞识别模型可以加载在控制设备中,例如智能单机设备,例如计算机或智能手机。
本发明的血细胞识别模型可实现至少50种白细胞及至少20余种红细胞的标注,根据实际的需要进行训练,可扩展性好。
本发明仅需输入单视野血涂片图像即可输出识别结果,实现了端到端的设计。本发明基于人工智能算法,实现对血细胞的标注,相对于传统的识别方法准确度有质的提高,达到85%以上识别准确率;可对全视野血细胞进行分析,大大提高科学性。
本领域的普通技术人员可以理解,上述各实施方式是实现本发明的具体实施例,而在实际应用中,可以在形式上和细节上对其作各种改变,而不偏离本发明的精神和范围。

Claims (27)

  1. 一种血涂片全视野智能分析方法,其特征在于,包括如下步骤:采集多幅原始血涂片单视野图像,建立原始血涂片单视野图像组,并基于所述多幅原始血涂片单视野图像建立血涂片全视野图像;
    基于所述多幅原始血涂片单视野图像,获得第一训练集和第一验证集,基于所述第一训练集和第一验证集构建图像复原模型;
    在多幅所述原始血涂片单视野图像中选取存在白细胞以及白细胞和红细胞完全不重叠的原始血涂片单视野图像,获得第二训练集和第二验证集,基于所述第二训练集和第二验证集构建图像分割模型,所述多幅存在白细胞以及白细胞和红细胞完全不重叠的原始血涂片单视野图像经过图像分割模型处理,得到多幅分割后的单个血细胞图像;
    基于所述多幅分割后的单个血细胞图像,获得第三训练集和第三验证集,基于所述第三训练集和第三验证集构建图像识别模型;
    所述血涂片全视野图像经过图像复原模型进行复原,得到复原后的全视野图像,所述复原后的全视野图像经过图像分割模型处理,得到多幅单个血细胞图像,所述多幅单个血细胞图像经过图像识别模型处理,得到血细胞分类结果。
  2. 根据权利要求1所述的方法,其特征在于,所述在多幅所述原始血涂片单视野图像中选取存在白细胞以及白细胞和红细胞完全不重叠的原始血涂片单视野图像,获得第二训练集和第二验证集,基于所述第二训练集和第二验证集构建图像分割模型,所述多幅存在白细胞以及白细胞和红细胞完全不重叠的原始血涂片单视野图像经过图像分割模型处理,得到多幅分割后的单个血细胞图像的步骤包括:
    步骤S310,第二卷积神经网络的输入层输出第二训练集中的某一个原始血涂片单视野图像数据到第二卷积神经网络的卷积块;
    步骤S320,设置所述第二卷积神经网络的编码结构的卷积块的数量、每个卷积块的卷积层的数量、每个卷积块的池化层的数量、每个卷积层的卷积核的数量及大小、每个池化层的卷积核的数量及大小,提取第一关键特征;
    步骤S330,设定与编码结构卷积块的数量相同的解码结构卷积块数量,所述解码结构的每个卷积块的卷积层的卷积核的数量及大小、每个卷积块的池化层的数量、每个池化层的卷积核的数量均与编码结构中相对应的卷积块一致,基于所述第一关键特征得到解码后的数据;
    步骤S340,对所述解码后的数据再进行卷积运算,所述卷积运算的卷积核大小为1、卷积核数量设置为需要分割的类别数;
    步骤S350,第二卷积神经网络的每个卷积块的全连接层将所述再进行卷积运算的解码后 数据和第二卷积神经网络的输出层的多个神经元进行全连接,所述第二卷积神经网络的输出层输出预测分割结果;
    步骤S360,重复步骤S310至S350,使用第二训练集进行训练,通过迭代调参得到图像分割模型。
  3. 根据权利要求2所述的方法,其特征在于,所述基于所述多幅分割后的单个血细胞图像,获得第三训练集和第三验证集,基于所述第三训练集和第三验证集构建图像识别模型的步骤包括:
    步骤S410,第三卷积神经网络的输入层输出第三训练集中的某一分割后的单个血细胞图像数据到第三卷积神经网络的卷积块;
    步骤S420,设置所述第三卷积神经网络的卷积块的数量、每个卷积块的卷积层的数量、每个卷积块的池化层的数量、每个卷积层的卷积核的数量及大小、每个池化层的卷积核的数量和大小,提取第二关键特征;
    步骤S430,第三神经网络的每个卷积块的全连接层将所述第二关键特征和第三卷积神经网络的输出层的多个神经元进行全连接,所述第三卷积神经网络的输出层输出预测识别结果;
    步骤S440,重复步骤S410至S430,使用第三训练集进行训练,通过迭代调参得到图像识别模型。
  4. 根据权利要求3所述的方法,其特征在于,所述第二卷积神经网络的编码结构的各个卷积层中的卷积核大小相同,编码结构的下一个卷积块的每个卷积层的卷积核数量均为上一个卷积块的每个卷积层的卷积核数量的2倍,编码结构的各个卷积块的池化层的数量均相同、各个池化层的卷积核的数量和大小相同。
  5. 根据权利要求4所述的方法,其特征在于,所述第二卷积神经网络的解码结构的各个卷积层中的卷积核大小相同,解码结构的下一个卷积块的每个卷积层的卷积核数量均为上一个卷积块的每个卷积层的卷积核数量的1/2,解码结构的各个卷积块的池化层的数量均相同、各个池化层的卷积核的数量和大小相同。
  6. 一种血细胞分割模型及识别模型的构造方法,其特征在于,包括:
    获取至少一幅血涂片中每幅血涂片的多个单视野图像,对每幅血涂片的多个所述单视野图像进行拼接形成一个全视野图像,构成全视野图像数据库,对所述全视野图像数据库中的各个全视野图像进行人工图像分割,得到单个血细胞图像汇聚形成初始血细胞图像库;
    对所述初始血细胞图像库中的单个血细胞图像进行人工标注,形成标注血细胞图像库;
    构建血细胞分割模型及识别模型,在所述初始血细胞图像库中选取样本形成训练集和验 证集,对血细胞分割模型进行训练,直至满足单个血细胞分割准确率的要求;在标注血细胞图像库中选取样本形成训练集和验证集,对血细胞识别模型进行训练,直至满足识别准确率的要求。
  7. The method for constructing a blood cell segmentation model and a recognition model according to claim 6, wherein the stitching comprises: mode one, extracting feature points from each pair of physically adjacent single-field images and then matching the image features to finally form a complete full-field image; or mode two, determining the size of the region where two adjacent single-field images overlap, computing a weighted average over the overlapping parts to obtain the overlap image, and finally obtaining the full-field image.
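For illustration only, the weighted-average blending of mode two can be sketched in NumPy. The linear weight ramp below is one common choice (an assumption — the claim only requires a weighted average), so the blend transitions smoothly from the left field's pixels to the right field's:

```python
import numpy as np

def blend_overlap(left_strip: np.ndarray, right_strip: np.ndarray) -> np.ndarray:
    """Weighted average of the overlapping region of two adjacent
    single-field images (mode two). Weights ramp linearly across the
    overlap width, so the result equals the left image at the left
    edge and the right image at the right edge."""
    w = np.linspace(1.0, 0.0, left_strip.shape[1])[None, :]
    return w * left_strip + (1.0 - w) * right_strip

a = np.full((4, 5), 10.0)   # overlap region as seen in the left field
b = np.full((4, 5), 30.0)   # same region as seen in the right field
merged = blend_overlap(a, b)  # smooth 10 -> 30 transition per row
```

In practice the two strips would be the overlapping columns identified after registering the adjacent fields.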
  8. The method for constructing a blood cell segmentation model and a recognition model according to claim 6 or 7, wherein the manual annotation is performed at a control terminal by annotating the type of the white blood cells and/or red blood cells as well as the image sharpness, and cross-validating the annotation results.
  9. The method for constructing a blood cell segmentation model and a recognition model according to claim 6 or 7, wherein the blood cell recognition model is built with a feedforward neural network having a deep structure.
  10. The method for constructing a blood cell segmentation model and a recognition model according to claim 9, wherein the feedforward neural network having a deep structure uses convolution layers to extract the feature vectors of each cell type, extracts the required feature vectors by max pooling, performs residual learning with residual blocks, and outputs class information through two fully connected layers for classification; the input of a residual block passes through a 3×3 convolution, is activated by a first ReLU activation function, passes through another 3×3 convolution and is then added to the input, and is finally activated by a second ReLU activation function before being output.
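For illustration only, the residual block of claim 10 (3×3 convolution, first ReLU, 3×3 convolution, skip-connection addition, second ReLU) can be sketched in NumPy for a single channel; the identity kernel used in the demo is an assumption chosen to make the result easy to verify:

```python
import numpy as np

def conv3x3(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Single-channel 3x3 convolution with zero ('same') padding."""
    p = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)

def residual_block(x: np.ndarray, k1: np.ndarray, k2: np.ndarray) -> np.ndarray:
    """Claim 10's residual block: 3x3 conv -> first ReLU -> 3x3 conv,
    add the block input (skip connection), then a second ReLU."""
    y = conv3x3(relu(conv3x3(x, k1)), k2)
    return relu(y + x)

x = np.arange(16, dtype=float).reshape(4, 4)
identity = np.zeros((3, 3)); identity[1, 1] = 1.0   # identity kernel
out = residual_block(x, identity, identity)         # here equals 2 * x
```

With identity kernels and a non-negative input, both convolutions pass the input through unchanged, so the skip connection makes the output exactly twice the input — a quick sanity check on the wiring.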
  11. The method for constructing a blood cell segmentation model and a recognition model according to claim 6 or 7, wherein the blood cell segmentation model is built with normalization, color space conversion, histogram equalization, or a deep learning method.
  12. The method for constructing a blood cell segmentation model and a recognition model according to claim 11, wherein the deep learning method comprises one of YOLO, SSD, and DenseBox.
  13. A blood cell recognition method, comprising:
    constructing a blood cell segmentation model and a blood cell recognition model with the method for constructing a blood cell segmentation model and a recognition model according to any one of claims 6-12;
    performing image segmentation on a single-field slide scan image with the blood cell segmentation model to obtain single blood cell images and their corresponding positions;
    recognizing the cell class of each single blood cell with the blood cell recognition model;
    annotating the single-field slide scan image based on the position and class of each single blood cell.
  14. The blood cell recognition method according to claim 13, wherein, before the blood cell segmentation model performs image segmentation on a single-field slide scan image, the method further comprises determining the field range to be segmented, the field range comprising specific regions with ideal imaging, important areas where blood cells are densely distributed, and/or regions designated by a physician.
  15. The blood cell recognition method according to claim 13, further comprising manually evaluating the segmentation results of the blood cell segmentation model and the recognition results of the recognition model, back-propagating gradients according to the evaluation results, and thereby optimizing the blood cell segmentation model and the blood cell recognition model.
  16. An end-to-end blood cell recognition model construction method, comprising:
    acquiring a plurality of single-field images of each of at least one blood smear, and manually annotating the class and edge of the blood cells in each single-field image of each blood smear to form an instance annotation database;
    constructing a blood cell recognition model, selecting samples from the instance annotation database to form a training set and a validation set, and training the blood cell recognition model until it meets the required edge segmentation accuracy and class determination accuracy.
  17. An end-to-end blood cell recognition model construction method, comprising:
    acquiring a plurality of single-field images of each of at least one blood smear, stitching the plurality of single-field images of each blood smear into one full-field image, and manually annotating the class and edge of the blood cells in each full-field image to form an instance annotation database;
    constructing a blood cell recognition model, selecting samples from the instance annotation database to form a training set and a validation set, and training the blood cell recognition model until it meets the required edge segmentation accuracy and class determination accuracy.
  18. The end-to-end blood cell recognition model construction method according to claim 17, wherein the stitching uses the FAST algorithm, the SURF algorithm, or image registration.
  19. The end-to-end blood cell recognition model construction method according to claim 16 or 17, wherein the manual class annotation is performed at a control terminal by annotating the type of the white blood cells and/or red blood cells.
  20. The end-to-end blood cell recognition model construction method according to claim 16 or 17, wherein the manual edge annotation is performed by an annotator collecting cell edge information and generating, for each image, a file containing the contour, area, and position information of each single blood cell.
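For illustration only, one plausible per-image annotation file for claim 20 is a JSON record; the field names, the polygon contour format, and the specific values below are all hypothetical — the claim only requires that contour, area, and position be recorded:

```python
import json

# Hypothetical annotation record for one single-field image
# (field names and values are illustrative, not claimed).
annotation = {
    "image": "smear_001_field_12.png",
    "cells": [
        {
            "contour": [[10, 12], [11, 20], [19, 21], [18, 11]],  # polygon vertices (row, col)
            "area": 72.5,          # enclosed area in pixels
            "position": [14, 16],  # centroid (row, col)
        }
    ],
}

# Write one annotation file per image, as the claim describes.
with open("smear_001_field_12.json", "w") as f:
    json.dump(annotation, f, indent=2)
```

Any serialization (XML, CSV, a labeling-tool export) would satisfy the claim equally well; JSON is shown only because it keeps the per-cell records self-describing.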
  21. The end-to-end blood cell recognition model construction method according to claim 16 or 17, wherein the blood cell recognition model uses a fully supervised, semi-supervised, or unsupervised artificial intelligence algorithm.
  22. The end-to-end blood cell recognition model construction method according to claim 16 or 17, wherein the blood cell recognition model uses a fully convolutional neural network; the fully convolutional neural network adopts an encoder-decoder structure, the encoder encoding the input image to extract features and the decoder decoding the extracted features to restore the image semantics.
  23. The end-to-end blood cell recognition model construction method according to claim 16 or 17, wherein the blood cell recognition model first performs an encoding operation: a single-field blood smear image is input, a double convolution operation is performed at each layer to extract shallow features, one max pooling operation then extracts the required features, and a further convolution operation increases the number of channels;
    a decoding operation is then performed: a convolution operation first upsamples the decoding result, and a double convolution operation then continues extracting features and outputs a feature map, yielding the potential regions of the objects to be segmented; convolution operations are then performed on the potential regions to extract features, and a residual block structure extracts the features of the potential regions and propagates gradients backward, yielding a finer-grained feature map;
    the finer-grained feature map is fed through a fully connected network for regression and for the classification of the target objects, the output of the last fully connected layer being the coordinates and class information of each pixel of the object to be detected; the finer-grained feature map is also convolved and passed through a Mask algorithm to obtain the mask of the object to be detected, and the mask is then fused with the class information obtained from the fully connected layers to obtain the recognition result.
  24. The end-to-end blood cell recognition model construction method according to claim 23, wherein the Mask algorithm comprises: obtaining the position and edge information corresponding to the finer-grained feature map; processing it with a fully convolutional neural network (FCN) to obtain, for each pixel, whether it belongs to the target or to the background; performing residual processing to obtain the result after gradient propagation; performing pooling to obtain a dimensionality-reduced feature vector; performing convolution; and finally obtaining the edge information of the blood cell at that position.
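For illustration only, the per-pixel target/background decision and the edge extraction at the end of claim 24's Mask algorithm can be sketched in NumPy; the two-channel logits layout and the 4-neighbour edge rule are assumptions for the demo, not the patented FCN:

```python
import numpy as np

def mask_from_logits(logits: np.ndarray) -> np.ndarray:
    """Per-pixel target/background decision (the FCN step):
    channel 0 = background score, channel 1 = target score."""
    return logits.argmax(axis=-1).astype(bool)

def mask_edge(mask: np.ndarray) -> np.ndarray:
    """Edge pixels of the mask: target pixels with at least one
    background 4-neighbour (padding counts as background)."""
    p = np.pad(mask, 1, constant_values=False)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    return mask & ~interior

logits = np.zeros((5, 5, 2))
logits[1:4, 1:4, 1] = 1.0           # a 3x3 "cell" blob scores as target
m = mask_from_logits(logits)        # 3x3 block of True pixels
edge = mask_edge(m)                 # the ring around the blob; its centre is interior
```

The edge map is the "edge information of the blood cell at that position" that the final convolution of the claimed pipeline would produce.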
  25. An end-to-end blood cell segmentation and recognition method, wherein a blood cell recognition model constructed with the end-to-end blood cell recognition model construction method according to any one of claims 16-24 processes each single-field slide scan image to obtain the position, edge, and class of the blood cells in each single-field slide scan image, which are annotated on the respective single-field slide scan image and output.
  26. The end-to-end blood cell segmentation and recognition method according to claim 25, wherein, for each single-field slide scan image, the field range to be segmented is determined before processing; the field range comprises specific regions with ideal imaging, important areas where blood cells are densely distributed, and/or regions designated by a physician.
  27. The end-to-end blood cell segmentation and recognition method according to claim 25 or 26, wherein the edge annotation results and the class annotation results of the blood cell recognition model are each manually evaluated, gradients are back-propagated according to the evaluation results, and the blood cell recognition model is thereby optimized;
    the blood cell recognition model uses an encoding-decoding architecture to extract ROI feature maps of the blood cell position regions, a residual network to extract the features of the ROI feature maps, and a classifier to obtain, from the extracted features, the coordinates and class corresponding to each feature map, the Mask algorithm obtaining the corresponding edge from the coordinates.
PCT/CN2020/132018 2019-11-28 2020-11-27 Blood smear full-view intelligent analysis method, and blood cell segmentation model and recognition model construction method WO2021104410A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/762,780 US20220343623A1 (en) 2019-11-28 2020-11-27 Blood smear full-view intelligent analysis method, and blood cell segmentation model and recognition model construction method

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN201911186777.0A CN110647874B (zh) 2019-11-28 2019-11-28 End-to-end blood cell recognition model construction method and application
CN201911186889.6 2019-11-28
CN201911186888.1A CN110647875B (zh) 2019-11-28 2019-11-28 Blood cell segmentation and recognition model construction method and blood cell recognition method
CN201911186888.1 2019-11-28
CN201911186777.0 2019-11-28
CN201911186889.6A CN110647876B (zh) 2019-11-28 2019-11-28 Blood smear full-field intelligent analysis method

Publications (1)

Publication Number Publication Date
WO2021104410A1 true WO2021104410A1 (zh) 2021-06-03

Family

ID=76128653

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/132018 WO2021104410A1 (zh) 2019-11-28 2020-11-27 Blood smear full-view intelligent analysis method, and blood cell segmentation model and recognition model construction method

Country Status (2)

Country Link
US (1) US20220343623A1 (zh)
WO (1) WO2021104410A1 (zh)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408480A * 2021-07-13 2021-09-17 上海交通大学医学院附属瑞金医院 Artificial-intelligence-assisted diagnosis system for blood diseases based on bone marrow cell morphology
CN113744195A * 2021-08-06 2021-12-03 北京航空航天大学 Deep-learning-based automatic detection method for hRPE cell microtubules
CN114092456A * 2021-11-26 2022-02-25 上海申挚医疗科技有限公司 Cell fluorescence image discrimination method and system
CN114495097A * 2022-01-28 2022-05-13 陆建 Multi-model-based urine cell recognition method and system
CN114708286A * 2022-06-06 2022-07-05 珠海横琴圣澳云智科技有限公司 Cell instance segmentation method and device based on dynamically updated pseudo-labels
CN114943845A * 2022-05-23 2022-08-26 天津城建大学 Fine-grained classification and recognition method and system for domain images
CN115100646A * 2022-06-27 2022-09-23 武汉兰丁智能医学股份有限公司 High-definition rapid stitching, recognition, and marking method for cell images
CN115409840A * 2022-11-01 2022-11-29 北京石油化工学院 Intelligent localization system and method for acupoints on the human back
CN116503301A * 2023-06-27 2023-07-28 珠海横琴圣澳云智科技有限公司 Spatial-domain-based microscopic cell image fusion method and device
CN116664550A * 2023-07-10 2023-08-29 广州医科大学附属第一医院(广州呼吸中心) Intelligent recognition method and device for PD-L1 immunohistochemical pathological sections of lung cancer tissue
CN116739949A * 2023-08-15 2023-09-12 武汉互创联合科技有限公司 Blastomere edge enhancement processing method for embryo images
CN117315655A * 2023-12-01 2023-12-29 深圳市一五零生命科技有限公司 Early identification method and system for neural stem cell culture
CN117474925A * 2023-12-28 2024-01-30 山东润通齿轮集团有限公司 Machine-vision-based gear pitting detection method and system

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102020126598A1 * 2020-10-09 2022-04-14 Carl Zeiss Microscopy Gmbh Microscopy system and method for verifying a trained image processing model
CN116309543B * 2023-05-10 2023-08-11 北京航空航天大学杭州创新研究院 Image-based circulating tumor cell detection device
CN116309582B * 2023-05-19 2023-08-11 之江实验室 Portable ultrasonic scan image recognition method and device, and electronic device
CN116434226B * 2023-06-08 2024-03-19 杭州华得森生物技术有限公司 Circulating tumor cell analyzer
CN117057371B * 2023-08-26 2024-02-20 泓浒(苏州)半导体科技有限公司 Adaptive wafer code reading method based on an AI recognition algorithm
CN116958175B * 2023-09-21 2023-12-26 无锡学院 Blood cell segmentation network construction method and blood cell segmentation method
CN117218443B * 2023-09-22 2024-03-05 东北大学 Papanicolaou smear cervical cell image classification method and system
CN117523205B * 2024-01-03 2024-03-29 广州锟元方青医疗科技有限公司 Few-shot segmentation and recognition method for Ki67 multi-class cell nuclei

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002097714A1 (en) * 2001-04-09 2002-12-05 Lifespan Biosciences, Inc. Computer method for image pattern recognition in organic material
CN109308695A * 2018-09-13 2019-02-05 镇江纳兰随思信息科技有限公司 Cancer cell recognition method based on an improved U-net convolutional neural network model
CN110032985A * 2019-04-22 2019-07-19 清华大学深圳研究生院 Automatic blood cell detection and recognition method
CN110647874A * 2019-11-28 2020-01-03 北京小蝇科技有限责任公司 End-to-end blood cell recognition model construction method and application
CN110647875A * 2019-11-28 2020-01-03 北京小蝇科技有限责任公司 Blood cell segmentation and recognition model construction method and blood cell recognition method
CN110647876A * 2019-11-28 2020-01-03 北京小蝇科技有限责任公司 Blood smear full-field intelligent analysis method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YU, LE: "Research on Segmentation and Recognition of Microscopic Leucocytes Image", CHINESE MASTER’S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY, 14 May 2012 (2012-05-14), pages 1 - 70, XP055817801 *
YU, LE: "Research on Segmentation and Recognition of Microscopic Leucocytes Image", CHINESE MASTER’S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY, 14 May 2012 (2012-05-14), pages 1 - 70, XP055817809, [retrieved on 20210624] *


Also Published As

Publication number Publication date
US20220343623A1 (en) 2022-10-27

Similar Documents

Publication Publication Date Title
WO2021104410A1 (zh) Blood smear full-view intelligent analysis method, and blood cell segmentation model and recognition model construction method
CN110647874B (zh) End-to-end blood cell recognition model construction method and application
CN107316307B (zh) Automatic segmentation method for traditional Chinese medicine tongue images based on a deep convolutional neural network
Shen et al. Domain-invariant interpretable fundus image quality assessment
Natarajan et al. Segmentation of nuclei in histopathology images using fully convolutional deep neural architecture
WO2021196632A1 (zh) Intelligent analysis system and method for panoramic digital pathology images
CN110647875B (zh) Blood cell segmentation and recognition model construction method and blood cell recognition method
US8600143B1 (en) Method and system for hierarchical tissue analysis and classification
CN108596046A (zh) Deep-learning-based cell detection and counting method and system
CN112380900A (zh) Deep-learning-based digital image classification method and system for cervical liquid-based cells
CN110647876B (zh) Blood smear full-field intelligent analysis method
CN115410050B (zh) Machine-vision-based tumor cell detection device and method
CN105894483B (zh) Multi-focus image fusion method based on multi-scale image analysis and block consistency verification
CN112215790A (zh) Ki67 index analysis method based on deep learning
Guo et al. Liver steatosis segmentation with deep learning methods
CN113298780B (zh) Deep-learning-based bone age assessment method and system for children
Pandey et al. Target-independent domain adaptation for WBC classification using generative latent search
Zhang et al. Image segmentation and classification for sickle cell disease using deformable U-Net
CN110827304A (zh) Traditional Chinese medicine tongue image localization method and system based on a deep convolutional network and the level set method
CN116612472B (zh) Image-based single-molecule immunoassay array analyzer and method
CN110728666B (zh) Method and system for typing chronic rhinosinusitis based on digital pathology slides
CN115546605A (zh) Training method and device based on image annotation and a segmentation model
CN112750132A (zh) White blood cell image segmentation method based on a dual-path network and channel attention
CN114419619B (zh) Red blood cell detection and classification method and device, computer storage medium, and electronic device
CN114419401B (zh) White blood cell detection and recognition method and device, computer storage medium, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20894874

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20894874

Country of ref document: EP

Kind code of ref document: A1