CN115439683A - Attention mechanism-based leukocyte fine-granularity classification method - Google Patents

Attention mechanism-based leukocyte fine-granularity classification method

Info

Publication number
CN115439683A
CN115439683A (application CN202211024009.7A)
Authority
CN
China
Prior art keywords
training
model
white blood
encoder
wbclformer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211024009.7A
Other languages
Chinese (zh)
Inventor
秦飞巍
陈奔
邵艳利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202211024009.7A priority Critical patent/CN115439683A/en
Publication of CN115439683A publication Critical patent/CN115439683A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Investigating Or Analysing Biological Materials (AREA)

Abstract

The invention discloses an attention-mechanism-based fine-grained white blood cell classification method. An end-to-end trainable model combining a Transformer with convolution integrates the Transformer's strength in capturing long-range dependencies and extracting global features with the CNN's strength in extracting low-level local image features; it can build a better feature map of the white blood cell image, enrich the feature information of white blood cells, and improve recognition accuracy on cell images. The model shows good generalization and stability, and an optimal solution can be obtained with either the SGD or the AdamW optimizer.

Description

Attention mechanism-based leukocyte fine-granularity classification method
Technical Field
The invention belongs to the technical field of medical image recognition and deep learning, and particularly relates to a leukocyte fine-grained classification method based on Global-Local attention.
Background
In recent years, the incidence of serious, life-threatening diseases such as acute leukemia has risen worldwide, and the trend toward younger patients has become increasingly pronounced. Diagnosis of these diseases depends on identifying and classifying white blood cells in blood tests or blood smear micrographs and, on that basis, counting them to obtain the proportion of each white blood cell class (the differential white blood cell count), from which a diagnosis is made. In hospital clinics today, this identification and counting is still mostly performed manually by experienced physicians. The process is time-consuming, labor-intensive, and error-prone; misclassification can lead to misdiagnosis or missed diagnosis and thus threaten patient safety.
To address this, researchers have introduced computer vision and machine learning techniques to automatically identify blood smear micrographs. However, conventional techniques such as edge detection, threshold segmentation, support vector machines, gray-level contrast, and K-Means clustering achieve limited accuracy because of the inherent limitations of traditional computer vision and machine learning.
With the growth of computing power, deep learning has developed rapidly and been applied successfully in many vision tasks. These methods mainly use deep neural networks such as convolutional neural networks to extract local image features, gradually building abstract feature maps through convolution and pooling, and then outputting recognition results through fully connected layers. Compared with traditional machine learning, they greatly improve classification accuracy.
However, white blood cell identification for the aided diagnosis of major diseases such as leukemia still faces the following problems: 1) White blood cell images produced by different hospitals, different equipment, and different imaging environments show different color casts, which makes automatic identification difficult. 2) Owing to the nature of serious and rare diseases, many medical data sets exhibit a long-tail phenomenon: rare disease classes have extremely few samples while common classes have very many, so the class distribution is highly imbalanced. 3) Because a white blood cell image contains only the cell itself, its features are relatively monotonous, which makes recognition difficult for the model. 4) Aided diagnosis of leukemia requires bone marrow smear cytology, i.e., cells extracted from bone marrow; some cell populations show only very small inter-class differences, for example the differences among early, intermediate, and late erythroid precursors are extremely subtle. 5) Diagnosis requires fine-grained identification of as many as 40 white blood cell classes.
These problems cannot be solved well by current CNN-based methods.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a white blood cell classification method based on a Global-Local attention mechanism.
The white blood cell classification method based on the Global-Local attention mechanism is an end-to-end trainable model that combines a Transformer with convolution. It integrates the Transformer's strength in capturing long-range dependencies and extracting global features with the CNN's strength in extracting low-level local image features, builds a better feature map of the white blood cell image, enriches the feature information of white blood cells, and improves recognition accuracy on white blood cell images. The model shows good generalization and stability, and an optimal solution can be obtained with either the SGD or the AdamW optimizer.
The method comprises the following three steps: collecting a data set, building and training a WBCLformer model, and carrying out WBCLformer model effect test.
Step 1: acquiring a white blood cell image as a basic data set, and dividing the white blood cell image into a training set and a testing set;
step 2: building and training WBCLformer model
The WBCLformer model training is divided into three steps: building the neural network model, pre-training, and training on the white blood cell data set.
Step 2.1: neural network model construction
The neural network consists of three parts: a feature extractor, encoders, and a discriminative region screening module.
Step 2.1.1: feature extractor
White blood cell images do not contain rich features, which makes it difficult for the model to extract accurate white blood cell image information. ViT and its improved variants cut the image directly into patches without any processing, so the resulting tokens struggle to capture local information. In this step, by contrast, the image data is first converted into local feature data: the feature data of the image is obtained with a convolution operation using a 3×3 kernel.
Step 2.1.2: encoder for encoding a video signal
There are two kinds of encoders in WBCLformer: the standard Transformer encoder and a modified local encoder.
Step 2.1.2.1: transformer encoder
The Transformer encoder is divided into two modules: the Multi-Head Self-Attention layers (MSA) and the Feed-Forward Network (FFN);
MSA: through mutual interactive learning among the input vectors, different weights are computed for different objects, so that the region information of greater interest is found.
FFN: after the weighted vector is obtained, it is fed into the FFN for further processing.
Step 2.1.2.2: local encoder
The local encoder combines the global attention of the self-attention mechanism with a CNN, fusing local and global information so that effective white blood cell feature information can be extracted. Because the self-attention mechanism learns the correlation between different patches, it focuses on global feature information; for white blood cell images, locality is important, so a convolution operation is added to the encoder to extract local information.
Step 2.1.3: discriminating region screening
Since the differences between similar images are particularly slight, a discriminative region screening module is added so that the network can focus on the differences between images; its function is to screen the tokens that carry the relevant correlations.
Step 2.2: white blood cell data set training
Since WBCLformer overfits on small data sets, this step is divided into two processes: pre-training on the ImageNet-2012 data set and training on the white blood cell data set;
step 2.2.1: pre-training
Model training uses the AdamW optimization algorithm to adjust the parameters, with a learning rate of 5e-4, weight decay of 1e-3, a first-order exponential decay rate of 0.9, and a second-order exponential decay rate of 0.999, for a total of 150-250 training epochs.
Step 2.2.2: white blood cell data set training
The last layer of the model is replaced, the pre-trained model parameters are used as initial values for training, and the model is then trained.
Model training again uses the AdamW optimization algorithm to adjust the parameters, with the same training parameters as in pre-training. The learning-rate schedule uses warm-up and cosine annealing to keep model training stable.
Step 3: WBCLformer model effect test
In order to quantitatively analyze the generalization ability of the model, it is necessary to test the trained model in a test set, compare the predicted result with the actual value, and analyze the result using the evaluation index.
Preferably, the white blood cell images acquired as the basic data set are divided into a training set and a test set as follows: the white blood cell image data set is sampled and split by class in an 8:2 ratio into a training set and a test set.
Preferably, the evaluation indexes are accuracy, precision, recall, and F1 score; before the evaluation indexes are introduced, the confusion matrix is introduced:
TP: positive samples predicted by the model as positive classes
TN: negative examples predicted by the model as negative classes
FP: negative examples predicted by the model as positive classes
FN: positive samples predicted by the model as negative classes
Accuracy: the proportion of correctly predicted results among all samples; the formula is:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision: the probability that a sample predicted as positive is actually positive. The formula is:
Precision = TP / (TP + FP)
Recall: the probability that an actual positive sample is predicted as positive. The formula is:
Recall = TP / (TP + FN)
F1 score: introduced to balance precision and recall. The formula is:
F1 = 2 × Precision × Recall / (Precision + Recall)
Since this is a multi-class problem, the way precision and recall are calculated changes: the precision and recall of each class are calculated first and then combined as a weighted sum to obtain the average precision and recall.
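For illustration only, the following sketch shows one way to compute these four indexes in the multi-class, weighted-average form described above; it assumes scikit-learn is available, the function name is illustrative, and it is not part of the claimed method.

```python
# Illustrative sketch (not part of the patent): Accuracy plus weighted
# Precision, Recall, and F1 for a multi-class prediction, using scikit-learn.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def evaluate(y_true, y_pred):
    """y_true, y_pred: 1-D sequences of class labels (e.g., 0..39 for 40 WBC classes)."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        # per-class precision/recall combined as a support-weighted sum,
        # as described in the text above
        "precision": precision_score(y_true, y_pred, average="weighted", zero_division=0),
        "recall": recall_score(y_true, y_pred, average="weighted", zero_division=0),
        "f1": f1_score(y_true, y_pred, average="weighted", zero_division=0),
    }
```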
After the model produces these four indexes, it is compared with the current mainstream models, and its performance proves to be superior.
With this method, local and global features can be combined without increasing the number of model parameters, enriching the image features of white blood cells, and optimal performance can be achieved with the trained WBCLformer. The invention has the following characteristics:
1) The technique provides a feature extraction scheme that combines local and global features, enriching the image features of white blood cells and improving the recognition accuracy of the model. The feature extractor and the local encoder extract local image features through a CNN, the encoder extracts global image features through the self-attention mechanism, and the local and global features are combined in a cascaded manner.
2) The technique provides a discriminative region screening module, which improves white blood cell classification accuracy by screening the discriminative regions of the image. Because different classes of bone marrow white blood cells have similar morphologies, the discriminative parts are screened by the self-attention mechanism, improving classification accuracy.
Drawings
FIG. 1 is an overall flow diagram of the present invention;
FIG. 2 is a network architecture diagram of the present invention;
FIG. 3 is a feature extractor in a network architecture of the present invention;
FIG. 4 is a comparison of results in the present dataset;
FIG. 5 is a comparison of results in the public data sets.
Detailed Description
The method specifically comprises the following three steps: collecting a data set, building and training a WBCLformer model, and carrying out WBCLformer model effect test.
Step 1: acquiring a white blood cell image as a basic data set, and dividing the white blood cell image into a training set and a testing set, wherein the specific operation is as follows:
White blood cell image data were collected from several local hospitals, totaling 92,335 white blood cell pictures covering 40 white blood cell classes. The data set is split by class in an 8:2 ratio: the training set contains 73,877 samples and the test set contains 18,458 samples.
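For illustration only, the following is a minimal sketch of such a per-class 8:2 split; it assumes the images are organized as (path, label) pairs, and all names are illustrative rather than part of the claimed method.

```python
# Illustrative sketch (not part of the patent): an 8:2 split performed
# independently within each of the 40 classes, assuming (path, label) pairs.
import random
from collections import defaultdict

def stratified_split(samples, train_ratio=0.8, seed=0):
    """samples: list of (image_path, class_label) tuples."""
    by_class = defaultdict(list)
    for path, label in samples:
        by_class[label].append((path, label))
    rng = random.Random(seed)
    train, test = [], []
    for label, items in by_class.items():
        rng.shuffle(items)
        cut = int(len(items) * train_ratio)   # 80% of this class goes to training
        train.extend(items[:cut])
        test.extend(items[cut:])
    return train, test
```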
Step 2: construction and training of WBCLformer model (FIG. 2)
The WBCLformer model training is divided into three steps: building the neural network model, pre-training, and training on the white blood cell data set.
Step 2.1: neural network model construction
The neural network consists of three parts: a feature extractor, encoders, and a discriminative region screening module.
Step 2.1.1: feature extractor (fig. 3)
The input image data is converted into feature data as follows:
The image features are extracted by passing the input image through convolution stages with 64, 128, 256, and 512 channels; the calculation formula is:
z = Conv(BN(ReLU(x)))
where x is the input image, C denotes the number of output channels, S is the stride applied to the input image, ReLU is the activation function, BN (Batch Normalization) is batch normalization, and Conv is the convolution operation with a 3×3 kernel.
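For illustration only, a minimal PyTorch-style sketch of such a convolutional stem follows. The 64/128/256/512 channel widths come from the description above, while the stride of 2 per stage and the Conv-BN-ReLU ordering (rather than the literal Conv(BN(ReLU(x))) nesting of the formula) are assumptions made for the sketch.

```python
# Illustrative sketch (assumption, not the patent's exact architecture):
# a stack of 3x3 convolution stages with 64/128/256/512 output channels,
# each followed by batch normalization and ReLU.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    def __init__(self, in_channels=3, channels=(64, 128, 256, 512)):
        super().__init__()
        layers, c_in = [], in_channels
        for c_out in channels:
            layers += [
                nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
            ]
            c_in = c_out
        self.stem = nn.Sequential(*layers)

    def forward(self, x):          # x: (B, 3, H, W)
        return self.stem(x)        # (B, 512, H/16, W/16) with four stride-2 stages

x = torch.randn(2, 3, 224, 224)
features = FeatureExtractor()(x)   # -> torch.Size([2, 512, 14, 14])
```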
Step 2.1.2: encoder for encoding a video signal
In WBCLformer there are two kinds of encoders: the standard Transformer encoder and a modified local encoder.
Step 2.1.2.1: transformer encoder (left side of 2.b FIG. 5363)
The encoder is mainly divided into two modules: the Multi-Head Self-Attention (MSA) and the Feed-Forward Network (FFN).
MSA: through mutual interactive learning among the input vectors, different weights are computed for different objects, so that the region information of greater interest is found. The calculation formula is:
Attention(Q, K, V) = Cos(Q, K) × V
where Q, K, and V are obtained from the input vector sequence X ∈ R^(N×d) by linear mappings, Q = XW_Q, K = XW_K, V = XW_V; Q denotes the query vector, K denotes the vector matched against Q, and V denotes the content vector. First the cosine similarity between Q and K is computed and normalized with the Softmax function, giving the weight of each object; multiplying by V then yields the final weighted vector, and this operation screens the vectors.
FFN: after the weighted vector is obtained, it is fed into the FFN for further processing, expressed as:
FFN(atten_x) = MLP(LN(atten_x)) + atten_x
where atten_x denotes the weighted vector obtained from the MSA, and LN (Layer Normalization) denotes layer normalization; it is used because it normalizes each sample individually, does not change the data distribution, helps prevent vanishing gradients, and speeds up convergence. The MLP is a multilayer perceptron consisting of two linear layers: the first expands the data dimensionality from D to 3D, the second reduces it back to D, and a GELU activation between the layers gives the network non-linear learning capacity. Finally, the input weighted vector is added back as a residual connection, preventing vanishing or exploding gradients when the model is very deep.
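For illustration only, a minimal sketch of the FFN branch described above follows; the dimension D and the 3× expansion follow the text, while the remaining details are assumptions.

```python
# Illustrative sketch (assumption, not the patent's exact module): the branch
# FFN(x) = MLP(LN(x)) + x, where the MLP expands D -> 3D -> D with GELU.
import torch.nn as nn

class FFN(nn.Module):
    def __init__(self, dim=512, expansion=3):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * expansion),  # D -> 3D
            nn.GELU(),                        # non-linearity between the two layers
            nn.Linear(dim * expansion, dim),  # 3D -> D
        )

    def forward(self, x):                     # x: (B, N, dim) weighted vectors from MSA
        return self.mlp(self.norm(x)) + x     # residual connection
```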
Step 2.1.2.2: local encoder (the right side of 2.b)
The local encoder combines the global attention of the self-attention mechanism with a CNN, fusing local and global information so that effective white blood cell feature information can be extracted. Because the self-attention mechanism learns the correlation between different patches, it focuses on global feature information; for white blood cell images locality is very important, so a convolution operation is added to the encoder to extract local information. The calculation formula is:
z_(seq+1) = I2S(DW(S2I(z_seq)))
where S2I reconstructs the sequence data into two-dimensional image data, DW is a depth-wise convolution operation, and I2S converts the two-dimensional image data back into sequence data; the local encoder thus obtains local features by reshaping the sequence into two-dimensional data and applying the convolution.
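For illustration only, a minimal sketch of the S2I, depth-wise convolution, and I2S pipeline follows; it assumes the token sequence length is a perfect square so that the sequence can be reshaped into an H×W grid, and all names are illustrative.

```python
# Illustrative sketch (assumption about exact shapes): reshape the token
# sequence to an image (S2I), apply a depth-wise convolution (DW), and
# flatten back to a sequence (I2S).
import torch
import torch.nn as nn

class LocalEncoder(nn.Module):
    def __init__(self, dim=512, kernel_size=3):
        super().__init__()
        # depth-wise convolution: one 3x3 filter per channel (groups=dim)
        self.dw = nn.Conv2d(dim, dim, kernel_size, padding=kernel_size // 2, groups=dim)

    def forward(self, z_seq):                              # z_seq: (B, N, dim)
        B, N, D = z_seq.shape
        H = W = int(N ** 0.5)                              # assumes N is a perfect square
        img = z_seq.transpose(1, 2).reshape(B, D, H, W)    # S2I: sequence -> image
        img = self.dw(img)                                 # DW: local feature extraction
        return img.reshape(B, D, N).transpose(1, 2)        # I2S: image -> sequence

tokens = torch.randn(2, 196, 512)
out = LocalEncoder()(tokens)        # -> torch.Size([2, 196, 512])
```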
Step 2.1.3: discriminating region screening
Since the differences between similar images are very small, a discriminative region screening module is added so that the network focuses on the differences between images; it screens the tokens that carry the relevant correlations, and the calculation formula is:
atten_final = ∏_i atten_i
where atten_i denotes the attention weights of the i-th layer; the attention weights of the layers are multiplied together to screen out the tokens the model attends to most, which are used for the final classification.
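For illustration only, one possible token-screening step based on the accumulated attention weights follows; combining the per-layer weights by matrix product, averaging over heads, and taking the class-token row (index 0) are assumptions made for the sketch.

```python
# Illustrative sketch (assumption about the selection step): accumulate the
# per-layer attention maps and keep the tokens that receive the most
# attention from the class token for the final classification.
import torch

def select_discriminative_tokens(attn_per_layer, tokens, top_k=12):
    """
    attn_per_layer: list of (B, heads, N, N) attention maps, one per layer
    tokens:         (B, N, D) token sequence from the last encoder layer
    """
    # average over heads, then combine layer by layer (one reading of prod_i atten_i)
    accumulated = None
    for attn in attn_per_layer:
        layer_attn = attn.mean(dim=1)                       # (B, N, N)
        accumulated = layer_attn if accumulated is None else accumulated @ layer_attn
    # attention received by each token from the class token (assumed at index 0)
    scores = accumulated[:, 0, 1:]                          # (B, N-1)
    idx = scores.topk(top_k, dim=-1).indices + 1            # offset past the class token
    batch_idx = torch.arange(tokens.size(0)).unsqueeze(-1)
    return tokens[batch_idx, idx]                           # (B, top_k, D)
```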
Step 2.2: white blood cell data set training
Since WBCLformer overfits on small data sets, this step is divided into two processes: pre-training on the ImageNet-2012 data set and training on the white blood cell data set.
Step 2.2.1: pre-training (model pre-training in figure 1)
Model training uses the AdamW optimization algorithm to adjust the parameters, with a learning rate of 5e-4, weight decay of 1e-3, a first-order exponential decay rate of 0.9, and a second-order exponential decay rate of 0.999, for a total of 200 training epochs.
Step 2.2.2: leukocyte data set training (model training in FIG. 1)
The last layer of the model is replaced, the pre-trained model parameters are used as initial values for training, and the model is then trained.
Model training again uses the AdamW optimization algorithm to adjust the parameters, with the same training parameters as in pre-training, except that the learning-rate schedule uses warm-up and cosine annealing to keep model training stable.
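For illustration only, a minimal PyTorch-style sketch of this fine-tuning setup follows; the classifier attribute name `head`, the five warm-up epochs, and the 200 total epochs are assumptions made for the sketch.

```python
# Illustrative sketch (assumptions: a `head` classifier attribute, 5 warm-up
# epochs, 200 total epochs): replace the last layer, reuse pre-trained weights
# as initialization, and fine-tune with AdamW + warm-up + cosine annealing.
import torch
import torch.nn as nn

def build_finetune_optimizer(model, num_classes=40, epochs=200, warmup=5):
    model.head = nn.Linear(model.head.in_features, num_classes)  # replace last layer
    optimizer = torch.optim.AdamW(
        model.parameters(),
        lr=5e-4, weight_decay=1e-3, betas=(0.9, 0.999),          # same as pre-training
    )
    warmup_sched = torch.optim.lr_scheduler.LinearLR(
        optimizer, start_factor=0.01, total_iters=warmup)        # linear warm-up
    cosine_sched = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=epochs - warmup)                        # cosine annealing
    scheduler = torch.optim.lr_scheduler.SequentialLR(
        optimizer, [warmup_sched, cosine_sched], milestones=[warmup])
    return optimizer, scheduler
```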
Step 3: WBCLformer model effect test (Test model performance in FIG. 1)
The trained model is used to make predictions on the test set, and the test results and ground-truth values are processed according to the calculation of the evaluation indexes. Model performance is evaluated with four indexes: Accuracy, Precision, Recall, and F1-Score; after the four evaluation indexes are obtained, the model is compared with the current mainstream models. As shown in FIG. 4, the model proposed by the invention is in the leading position; to verify its generalization, the public data sets PBC and ALL-IDB are also used for verification, and as shown in FIG. 5 its accuracy is likewise leading. The experimental results verify the effectiveness and generalization of the invention.

Claims (3)

1. An attention-mechanism-based fine-grained white blood cell classification method, characterized by comprising the following steps:
collecting a data set, building and training a WBCLformer model, and carrying out WBCLformer model effect test;
step 1: acquiring a white blood cell image as a basic data set, and dividing the white blood cell image into a training set and a testing set;
step 2: building and training WBCLformer model
The WBCLformer model training is divided into three steps: building a neural network model, pre-training and training in a leukocyte data set;
step 2.1: neural network model construction
The neural network consists of three parts, namely a feature extractor, encoders, and a discriminative region screening module;
step 2.1.1: feature extractor
Firstly, converting the image data into local feature data, and acquiring the feature data of the image with a convolution operation using a 3×3 kernel;
step 2.1.2: encoder for encoding a video signal
In WBCLformer, there are two kinds of encoders: the Transformer encoder and a modified local encoder;
step 2.1.2.1: transformer encoder
The encoder is divided into two modules: the Multi-Head Self-Attention layers (MSA) and the Feed-Forward Network (FFN);
MSA: through mutual interactive learning among the input vectors, different weights are computed for different objects, so that the region information of greater interest is found;
FFN: after the weighted vector is obtained, it is fed into the FFN for further processing;
step 2.1.2.2: local encoder
The local encoder combines the global attention of the self-attention mechanism with a CNN, fusing local and global information so that effective white blood cell feature information can be extracted; because the self-attention mechanism learns the correlation among different patches, it focuses on global feature information, so a convolution operation is added to the encoder to extract local information;
step 2.1.3: discriminating region screening
Step 2.2: white blood cell data set training
Since WBCLformer overfits on small data sets, this step is divided into two processes: pre-training on the ImageNet-2012 data set and training on the white blood cell data set;
Step 2.2.1: pre-training
Model training uses the AdamW optimization algorithm to adjust the parameters, with a learning rate of 5e-4, weight decay of 1e-3, a first-order exponential decay rate of 0.9, and a second-order exponential decay rate of 0.999, for a total of 150-250 training epochs;
step 2.2.2: white blood cell data set training
The last layer of the model is replaced, the pre-trained model parameters are used as initial values for training, and the model is then trained;
model training uses the AdamW optimization algorithm to adjust the parameters, with the same training parameters as in pre-training; the learning-rate schedule uses warm-up and cosine annealing;
step 3: WBCLformer model effect test
In order to quantitatively analyze the generalization ability of the model, it is necessary to test the trained model in a test set, compare the predicted result with the actual value, and analyze the result using the evaluation index.
2. The attention-mechanism-based fine-grained white blood cell classification method according to claim 1, characterized in that: the white blood cell images acquired as the basic data set are divided into a training set and a test set as follows: the white blood cell image data set is sampled and split by class in an 8:2 ratio into a training set and a test set.
3. The attention-mechanism-based fine-grained white blood cell classification method according to claim 1, characterized in that: the evaluation indexes are accuracy, precision, recall, and F1 score; before the evaluation indexes are introduced, the confusion matrix is introduced:
TP: positive samples predicted by the model as positive classes
TN: negative examples predicted by the model as negative classes
FP: negative examples predicted by the model as positive classes
FN: positive samples predicted by the model as negative classes
Accuracy: the proportion of correctly predicted results among all samples; the formula is:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision: the probability that a sample predicted as positive is actually positive; the formula is:
Precision = TP / (TP + FP)
Recall: the probability that an actual positive sample is predicted as positive; the formula is:
Recall = TP / (TP + FN)
F1 score: introduced to balance precision and recall; the formula is:
F1 = 2 × Precision × Recall / (Precision + Recall)
Since this is a multi-class problem, the way precision and recall are calculated changes: the precision and recall of each class are calculated first and then combined as a weighted sum to obtain the average precision and recall.
CN202211024009.7A 2022-08-24 2022-08-24 Attention mechanism-based leukocyte fine-granularity classification method Pending CN115439683A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211024009.7A CN115439683A (en) 2022-08-24 2022-08-24 Attention mechanism-based leukocyte fine-granularity classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211024009.7A CN115439683A (en) 2022-08-24 2022-08-24 Attention mechanism-based leukocyte fine-granularity classification method

Publications (1)

Publication Number Publication Date
CN115439683A true CN115439683A (en) 2022-12-06

Family

ID=84244895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211024009.7A Pending CN115439683A (en) 2022-08-24 2022-08-24 Attention mechanism-based leukocyte fine-granularity classification method

Country Status (1)

Country Link
CN (1) CN115439683A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117423041A (en) * 2023-12-13 2024-01-19 成都中医药大学 Facial video discrimination traditional Chinese medicine qi-blood system based on computer vision
CN117423041B (en) * 2023-12-13 2024-03-08 成都中医药大学 Facial video discrimination traditional Chinese medicine qi-blood system based on computer vision

Similar Documents

Publication Publication Date Title
CN111444960A (en) Skin disease image classification system based on multi-mode data input
Labati et al. All-IDB: The acute lymphoblastic leukemia image database for image processing
CN109952614A (en) The categorizing system and method for biomone
CN108766559B (en) Clinical decision support method and system for intelligent disease screening
Malkawi et al. White blood cells classification using convolutional neural network hybrid system
CN111680575B (en) Human epithelial cell staining classification device, equipment and storage medium
Fitri et al. Classification of White Blood Cell Abnormalities for Early Detection of Myeloproliferative Neoplasms Syndrome Based on K-Nearest Neighborr
CN111028232A (en) Diabetes classification method and equipment based on fundus images
CN115439683A (en) Attention mechanism-based leukocyte fine-granularity classification method
CN115359264A (en) Intensive distribution adhesion cell deep learning identification method
Amiri et al. Feature selection for bleeding detection in capsule endoscopy images using genetic algorithm
CN111047590A (en) Hypertension classification method and device based on fundus images
CN1462884A (en) Method of recognizing image of lung cancer cells with high accuracy and low rate of false negative
Meng et al. Neighbor Correlated Graph Convolutional Network for multi-stage malaria parasite recognition
CN113052227A (en) Pulmonary tuberculosis identification method based on SE-ResNet
CN117195027A (en) Cluster weighted clustering integration method based on member selection
Semerjian et al. White blood cells classification using built-in customizable trained convolutional neural network
Sevinç et al. An effective medical image classification: transfer learning enhanced by auto encoder and classified with SVM
CN112508909B (en) Disease association method of peripheral blood cell morphology automatic detection system
CN113033330A (en) Tongue posture abnormality distinguishing method based on light convolutional neural network
Yuningsih et al. Anemia classification based on abnormal red blood cell morphology using convolutional neural network
Dong et al. White blood cell classification based on a novel ensemble convolutional neural network framework
Li et al. An accurate classification method based on multi-focus videos and deep learning for urinary red blood cell
Özcan et al. Comprehensive data analysis of white blood cells with classification and segmentation by using deep learning approaches
Ding et al. Leukocyte subtype classification with multi-model fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination