CN110110799B - Cell sorting method, cell sorting device, computer equipment and storage medium - Google Patents


Info

Publication number
CN110110799B
CN110110799B (application CN201910394263.8A)
Authority
CN
China
Prior art keywords
cell
image
target
information
target detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910394263.8A
Other languages
Chinese (zh)
Other versions
CN110110799A (en)
Inventor
尚滨
彭铃淦
朱孝辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Rongyuan Fangqing Medical Technology Co ltd
Original Assignee
Guangzhou Rongyuan Fangqing Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Rongyuan Fangqing Medical Technology Co ltd filed Critical Guangzhou Rongyuan Fangqing Medical Technology Co ltd
Priority to CN201910394263.8A priority Critical patent/CN110110799B/en
Publication of CN110110799A publication Critical patent/CN110110799A/en
Application granted granted Critical
Publication of CN110110799B publication Critical patent/CN110110799B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to a cell classification method, apparatus, computer device and storage medium. The method comprises: obtaining an image to be analyzed and inputting it into a trained target detection model to obtain position information and initial classification information of each target cell in the image to be analyzed, the model being trained on a sample cell image set whose annotation information comprises cell position information and cell category information; segmenting the image to be analyzed according to the position information of the target cells to obtain a plurality of target cell images; extracting the cell feature vector of each target cell image according to a preset feature extraction network; inputting the cell feature vector into a trained SVM model to obtain probability data that the target cell in the target cell image belongs to each preset category; marking the preset category with the largest probability data as the secondary classification information of the target cell; and, when the initial classification information is the same as the secondary classification information, marking the secondary classification information as the classification result of the target cell. With this method, the cell category can be determined accurately, and the accuracy of cell identification is improved.

Description

Cell sorting method, cell sorting device, computer equipment and storage medium
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a cell classification method, apparatus, computer device, and storage medium.
Background
With the development of medical science and technology, cell identification techniques have emerged. Cell identification includes screening cell categories; through such screening, diseased cells can be found in time so that disease can be dealt with effectively. Conventional cell identification is manual: a pathologist moves the pathological section and scans the whole section with the naked eye to identify the category of each cell in the section.
However, because diseased cells at an early stage are highly similar to normal cells, the conventional cell identification method suffers from low identification accuracy.
Disclosure of Invention
In view of the above, it is necessary to provide a cell classification method, a cell classification apparatus, a computer device, and a storage medium capable of improving the accuracy of cell identification.
A method of cell sorting, the method comprising:
acquiring an image to be analyzed and inputting the image to be analyzed into a trained target detection model to obtain position information and initial classification information of each target cell in the image to be analyzed, wherein the target detection model is obtained by training a sample cell image set carrying annotation information as a training set, and the annotation information comprises cell position information and cell category information;
dividing an image to be analyzed according to the position information of the target cell to obtain a plurality of target cell images;
extracting a cell feature vector of a target cell image according to a preset feature extraction network;
inputting the cell feature vectors into a trained SVM model to obtain probability data of target cells in the target cell image belonging to each preset category, wherein the SVM model is obtained by training a sample image cell set carrying cell category information as a training set;
determining a preset category with the maximum probability data, and marking the preset category with the maximum probability data as secondary classification information of the target cells;
and when the initial classification information is the same as the secondary classification information, marking the secondary classification information as a classification result of the target cells.
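For illustration only, the steps above can be sketched as a small Python pipeline. The detector, crop, feature-extraction and SVM components are hypothetical stand-ins (the patent publishes no code); only the agreement rule from the last step is taken directly from the text, and a final label is recorded only when the two classifications agree.

```python
from typing import Callable, Dict, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, w, h) -- assumed convention

def classify_cells(
    image: object,
    detect: Callable[[object], List[Tuple[Box, str]]],  # -> [(box, initial_label)]
    crop: Callable[[object, Box], object],
    extract: Callable[[object], List[float]],
    svm_proba: Callable[[List[float]], Dict[str, float]],  # -> {label: probability}
) -> List[dict]:
    """Two-stage pipeline: the detector proposes, the SVM confirms."""
    results = []
    for box, initial in detect(image):
        cell_img = crop(image, box)
        proba = svm_proba(extract(cell_img))
        secondary = max(proba, key=proba.get)  # preset category with max probability
        result = {"box": box, "initial": initial, "secondary": secondary}
        if initial == secondary:
            result["label"] = secondary  # classifications agree -> final result
        results.append(result)
    return results
```

In use, `detect` would wrap the trained target detection model and `svm_proba` the trained SVM; here they are parameters so the control flow alone mirrors the claimed steps.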
In one embodiment, extracting the cell feature vector of the target cell image according to the preset feature extraction network includes:
extracting texture feature vectors of the target cell image according to the texture feature extraction network in the preset feature extraction network;
extracting shape feature vectors of the target cell image according to the shape feature extraction network in the preset feature extraction network;
respectively carrying out normalization processing on the texture feature vector and the shape feature vector;
and carrying out vector splicing on the normalized texture characteristic vector and the normalized shape characteristic vector to obtain a cell characteristic vector of the target cell image.
In one embodiment, before inputting the cell feature vector into the trained SVM model, the method includes:
acquiring a sample image cell set carrying labeling information;
dividing a plurality of cell images in a sample image cell set into a training set and a testing set;
performing model training according to the training set to obtain an SVM model;
inputting the test set into an SVM model to obtain a category test result corresponding to each cell image;
comparing the category test result with the cell category information in the annotation information carried by each cell image;
and adjusting the SVM model according to the comparison result.
In one embodiment, obtaining the image to be analyzed and inputting it into the trained target detection model to obtain the position information and initial classification information of each target cell in the image to be analyzed includes:
obtaining a plurality of feature maps with different scales of an image to be analyzed according to a DenseNet in a trained target detection model;
inputting a plurality of feature maps with different scales into a candidate frame processing module in a trained target detection model to obtain prior frames corresponding to the feature maps;
and inputting a plurality of feature maps with different scales and a prior frame corresponding to each feature map into a convolutional network in a trained target detection model to obtain the position information and the initial classification information of each target cell in the image to be analyzed.
In one embodiment, before obtaining the image to be analyzed and inputting the image to be analyzed into the trained target detection model, the method includes:
acquiring sample image data and N initial target detection networks;
dividing sample image data into N parts of data, sequentially selecting 1 part of the N parts of data as a test set, and taking N-1 parts of data as a training set to obtain N groups of sample image data with different combinations;
establishing an incidence relation between N initial target detection networks and N groups of sample image data;
training corresponding initial target detection networks according to the incidence relation and training sets in each group of sample image data, and calculating evaluation parameters of each initial target detection network according to test sets in each group of sample image data;
taking the average value of the evaluation parameters of the N initial target detection networks as a target evaluation parameter, and marking the initial target detection network whose evaluation parameter deviates least from the target evaluation parameter as the target detection model to be trained;
and obtaining a sample cell image set carrying annotation information, and performing model training by taking the sample cell image set carrying the annotation information as a training set according to the target detection model to be trained to obtain the target detection model, wherein the annotation information comprises cell position information and cell category information.
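The N-fold splitting and network-selection scheme described above can be sketched as follows. The function and variable names are illustrative, and the interleaved fold assignment is an assumption; the patent only requires N disjoint parts, each serving once as the test set.

```python
def n_fold_splits(samples, n):
    """Partition samples into n parts; yield (train, test) with each part
    used once as the test set and the remaining n-1 parts as the training set."""
    folds = [samples[i::n] for i in range(n)]  # interleaved assignment (assumed)
    for i in range(n):
        test = folds[i]
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield train, test

def select_candidate(eval_params):
    """Take the mean evaluation parameter as the target parameter and return the
    index of the network whose parameter deviates least from that target."""
    target = sum(eval_params) / len(eval_params)
    return min(range(len(eval_params)), key=lambda i: abs(eval_params[i] - target))
```

Each of the N initial target detection networks would be trained on one `(train, test)` pair, its evaluation parameter computed on `test`, and `select_candidate` applied to the N parameters.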
In one embodiment, performing model training according to a target detection model to be trained by using a sample cell image set carrying labeling information as a training set, and obtaining the target detection model includes:
obtaining a plurality of feature maps with different scales of each sample cell image in the sample cell image set according to a DenseNet in a target detection model to be trained;
inputting a plurality of feature maps of different scales of each sample cell image into a candidate frame processing module in a target detection model to be trained, performing regression on the plurality of feature maps of different scales of each sample cell image by a plurality of candidate frames in the candidate frame processing module, and determining a prior frame corresponding to the feature map of each scale;
inputting a plurality of feature maps of different scales of each sample cell image and a prior frame corresponding to the feature map of each scale into a convolution network in a target detection model to be trained to obtain position information and initial classification information of each sample cell in each sample cell image;
and comparing the position information and the initial classification information of each sample cell with the labeling information carried by each sample cell image, and adjusting the target detection model to be trained according to the comparison result to obtain the target detection model.
In one embodiment, before obtaining the image to be analyzed and inputting the image to be analyzed into the trained target detection model, the method includes:
the method comprises the steps of obtaining a pathological section image, and preprocessing the pathological section image to obtain an image to be analyzed, wherein the preprocessing comprises image denoising, image enhancement, image scaling, color normalization and pixel value normalization.
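As a minimal sketch of the pixel-value normalization step named above (denoising, enhancement, scaling and color normalization are omitted), assuming 8-bit intensities stored as nested lists of rows:

```python
def normalize_pixels(image, max_value=255.0):
    """Pixel-value normalization: map raw intensities to [0, 1].
    `image` is a nested list of pixel rows; the other preprocessing
    operations would run before this step."""
    return [[px / max_value for px in row] for row in image]
```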
A cell sorting apparatus, the apparatus comprising:
the target detection module is used for acquiring an image to be analyzed and inputting the image to be analyzed into a trained target detection model to obtain position information and initial classification information of each target cell in the image to be analyzed, the target detection model is obtained by training a sample cell image set carrying annotation information as a training set, and the annotation information comprises cell position information and cell category information;
the segmentation module is used for segmenting the image to be analyzed according to the position information of the target cell to obtain a plurality of target cell images;
the characteristic extraction module is used for extracting cell characteristic vectors of the target cell image according to a preset characteristic extraction network;
the processing module is used for inputting the cell feature vectors into the trained SVM model to obtain probability data of target cells in the target cell image belonging to each preset category, and the SVM model is obtained by training a sample image cell set carrying cell category information as a training set;
the marking module is used for determining the preset category with the maximum probability data and marking the preset category with the maximum probability data as secondary classification information of the target cells;
and the classification module is used for marking the secondary classification information as the classification result of the target cells when the initial classification information is the same as the secondary classification information.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring an image to be analyzed and inputting the image to be analyzed into a trained target detection model to obtain position information and initial classification information of each target cell in the image to be analyzed, wherein the target detection model is obtained by training a sample cell image set carrying annotation information as a training set, and the annotation information comprises cell position information and cell category information;
dividing an image to be analyzed according to the position information of the target cell to obtain a plurality of target cell images;
extracting a cell feature vector of a target cell image according to a preset feature extraction network;
inputting the cell feature vectors into a trained SVM model to obtain probability data of target cells in the target cell image belonging to each preset category, wherein the SVM model is obtained by training a sample image cell set carrying cell category information as a training set;
determining a preset category with the maximum probability data, and marking the preset category with the maximum probability data as secondary classification information of the target cells;
and when the initial classification information is the same as the secondary classification information, marking the secondary classification information as a classification result of the target cells.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring an image to be analyzed and inputting the image to be analyzed into a trained target detection model to obtain position information and initial classification information of each target cell in the image to be analyzed, wherein the target detection model is obtained by training a sample cell image set carrying annotation information as a training set, and the annotation information comprises cell position information and cell category information;
dividing an image to be analyzed according to the position information of the target cell to obtain a plurality of target cell images;
extracting a cell feature vector of a target cell image according to a preset feature extraction network;
inputting the cell feature vectors into a trained SVM model to obtain probability data of target cells in the target cell image belonging to each preset category, wherein the SVM model is obtained by training a sample image cell set carrying cell category information as a training set;
determining a preset category with the maximum probability data, and marking the preset category with the maximum probability data as secondary classification information of the target cells;
and when the initial classification information is the same as the secondary classification information, marking the secondary classification information as a classification result of the target cells.
According to the cell classification method, apparatus, computer device and storage medium above, the cell position information and initial classification information of each target cell in the image to be analyzed are determined by the target detection model; a plurality of target cell images are obtained according to the cell position information; the cell feature vectors of the target cell images are extracted according to the preset feature extraction network; the cell feature vectors are input into the SVM model to obtain the secondary classification information of the target cells; and the initial classification information and the secondary classification information are integrated to obtain the classification result of the target cells. In the whole process, the target cells are identified by the target detection model and the SVM model respectively, and the cell category is determined by integrating the classification information of both models, so that the cell category is determined accurately and the accuracy of cell identification is improved.
Drawings
FIG. 1 is a schematic flow chart of a cell sorting method according to an embodiment;
FIG. 2 is a schematic illustration of a sub-flow chart of step S106 in FIG. 1 according to an embodiment;
FIG. 3 is a schematic flow chart of a cell sorting method according to another embodiment;
FIG. 4 is a schematic illustration of a sub-flow chart of step S102 in FIG. 1 according to an embodiment;
FIG. 5 is a schematic flow chart of a cell sorting method according to still another embodiment;
FIG. 6 is a schematic sub-flow chart illustrating step S512 of FIG. 5 according to an embodiment;
FIG. 7 is a schematic flow chart of a cell sorting method according to still another embodiment;
FIG. 8 is a block diagram showing the structure of a cell sorter according to an embodiment;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, a cell sorting method is provided. The method is described, by way of example, as applied to a server, and includes the following steps:
s102: the method comprises the steps of obtaining an image to be analyzed, inputting a trained target detection model, obtaining position information and initial classification information of each target cell in the image to be analyzed, training the target detection model by taking a sample cell image set carrying annotation information as a training set, wherein the annotation information comprises cell position information and cell category information.
The image to be analyzed refers to a preprocessed pathological section image, the target detection model is obtained by training a sample cell image set carrying labeling information as a training set, the labeling information comprises cell position information and cell category information, and model parameters in the target detection model can be optimized through training. The cell position information refers to coordinate information of the cell in the sample cell image, and the position information of each target cell refers to coordinate information of each target cell in the image to be analyzed. The initial classification information refers to initial cell class information. In the present application, the cell class includes normal cells and diseased cells.
S104: and segmenting the image to be analyzed according to the position information of the target cell to obtain a plurality of target cell images.
After the server determines the position information of each target cell in the image to be analyzed, the image to be analyzed is segmented according to the position information of the target cell to obtain a plurality of target cell images.
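The segmentation step can be illustrated with plain-Python crops. The `(x, y, w, h)` box convention is an assumption, since the patent only speaks of coordinate information:

```python
def crop_cells(image, boxes):
    """Segment the analyzed image into per-cell crops from detector boxes.
    `image` is a nested list of pixel rows; each box is (x, y, w, h),
    a hypothetical convention for the cell's coordinate information."""
    crops = []
    for x, y, w, h in boxes:
        crops.append([row[x:x + w] for row in image[y:y + h]])
    return crops
```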
S106: and extracting the cell characteristic vector of the target cell image according to a preset characteristic extraction network.
The preset feature extraction network can extract a plurality of feature vectors from the target cell image, including texture feature vectors and shape feature vectors. After extracting them, the server can fuse the plurality of feature vectors to obtain the cell feature vector of the target cell image. Different feature vectors are extracted by different sub-networks of the feature extraction network, and the input data of these sub-networks may differ. For example, when the texture feature vector of the target cell image needs to be extracted, the texture feature extraction sub-network is used, and its input may be the preprocessed color target cell image; when the shape feature vector needs to be extracted, the shape feature extraction sub-network is used, and its input may be the preprocessed grayscale target cell image.
S108: inputting the cell feature vectors into a trained SVM model to obtain probability data of target cells in the target cell image belonging to each preset category, wherein the SVM model is obtained by training a sample image cell set carrying cell category information as a training set.
An SVM (support vector machine) is a generalized linear classifier that performs binary classification of data in a supervised-learning manner; its decision boundary is the maximum-margin hyperplane solved from the learning samples. Since the SVM model performs binary classification, in the present embodiment the total number of preset categories is 2; specifically, the preset categories are normal cells and diseased cells. The cell feature vector is input into the trained SVM model to obtain probability data that the target cell in the target cell image belongs to each of the two preset categories, and the cell category of the target cell is then determined according to the probability data.
S110: and determining a preset category with the maximum probability data, and marking the preset category with the maximum probability data as secondary classification information of the target cells.
In this embodiment, the probability data sum to 1, and the larger the probability data, the more likely the target cell belongs to the corresponding preset category. The server determines the preset category with the largest probability data and marks it as the secondary classification information of the target cell.
S112: and when the initial classification information is the same as the secondary classification information, marking the secondary classification information as a classification result of the target cells.
The server integrates the initial classification information and the secondary classification information to determine the classification result of the target cell. When the initial classification information is the same as the secondary classification information, the server marks the secondary classification information as the classification result of the target cell. When they differ, the server still marks the secondary classification information as the classification result but additionally marks the target cell as an abnormal cell.
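The decision-integration rule in this step can be written out directly; `fuse_labels` is a hypothetical helper name:

```python
def fuse_labels(initial, secondary):
    """Integrate the detector's initial label with the SVM's secondary label.
    Per the description, the secondary label is kept as the result, and a
    disagreement additionally flags the cell as abnormal."""
    return {"label": secondary, "abnormal": initial != secondary}
```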
The cell classification method comprises the steps of determining cell position information and initial classification information of target cells in an image to be analyzed according to a target detection model, obtaining a plurality of target cell images according to the cell position information, extracting cell feature vectors of the target cell images according to a preset feature extraction network, inputting the cell feature vectors into an SVM model to obtain secondary classification information of the target cells, and integrating the primary classification information and the secondary classification information to obtain a classification result of the target cells. In the whole process, the target cell is identified through the target detection model and the SVM model respectively, the cell category of the target cell is determined by integrating the classification information of the target detection model and the SVM model, the cell category is accurately determined, and the accuracy of cell identification is improved.
In one embodiment, as shown in fig. 2, S106 includes:
s202: extracting a network according to the textural features in a preset feature extraction network, and extracting textural feature vectors of the target cell image;
s204: extracting a network according to shape features in a preset feature extraction network, and extracting shape feature vectors of the target cell image;
s206: respectively carrying out normalization processing on the texture feature vector and the shape feature vector;
s208: and carrying out vector splicing on the normalized texture characteristic vector and the normalized shape characteristic vector to obtain a cell characteristic vector of the target cell image.
The texture feature extraction network is composed of a plurality of convolutional layers and a fully connected layer, and its input is the color target cell image. The shape feature extraction network adopts a fully convolutional neural network; it differs from the texture feature extraction network in that the fully connected layer is replaced by a convolutional layer, which reduces the number of parameters. The input of the shape feature extraction network is the grayscale target cell image obtained by gray-scale processing; removing the influence of color in this way lets the network focus only on the shape information of the target cell image. Before vector splicing, the texture feature vector and the shape feature vector are each normalized; the normalized vectors are then spliced according to a preset weight coefficient, and the cell feature vector of the target cell image is obtained after the spliced vector is normalized again. Vector splicing expands the vector dimension: for example, splicing an a-dimensional texture feature vector with an a-dimensional shape feature vector yields a 2a-dimensional cell feature vector. The fusion formula may be

f_fuse = (f_rgb / ||f_rgb||₂) | (λ · f_s / ||f_s||₂)

where f_fuse is the cell feature vector, f_rgb is the texture feature vector, f_s is the shape feature vector, || · ||₂ is the 2-norm, | denotes concatenation of the two vectors, and the real number λ ∈ (0, 1] is the weight coefficient, an empirical value that can be determined by analyzing experimental results and set as needed.
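Under the assumptions stated above (2-norm normalization of each vector, weighted concatenation, then re-normalization of the spliced vector), the fusion can be sketched in plain Python; `lam` plays the role of the weight coefficient λ and its default value here is illustrative:

```python
import math

def fuse_features(texture, shape, lam=0.5):
    """L2-normalize each vector, scale the shape vector by lam, concatenate,
    then L2-normalize the fused vector again."""
    def l2norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    fused = l2norm(texture) + [lam * x for x in l2norm(shape)]  # splicing: 2a dims
    return l2norm(fused)
```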
In one embodiment, as shown in fig. 3, before S108, the method includes:
s302: acquiring a sample image cell set carrying labeling information;
s304: dividing a plurality of cell images in a sample image cell set into a training set and a testing set;
s306: performing model training according to the training set to obtain an SVM model;
s308: inputting the test set into an SVM model to obtain a category test result corresponding to each cell image;
s310: comparing the category test result with the cell category information in the annotation information carried by each cell image;
s312: and adjusting the SVM model according to the comparison result.
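The comparison and adjustment steps (s310, s312) can be sketched as below; the accuracy threshold that triggers adjustment is an illustrative assumption, as the patent does not specify the comparison metric:

```python
def evaluate_predictions(predicted, labeled, threshold=0.95):
    """Compare category test results with the annotated category information and
    decide whether the SVM model needs adjustment (threshold is illustrative)."""
    correct = sum(p == t for p, t in zip(predicted, labeled))
    accuracy = correct / len(labeled)
    return {"accuracy": accuracy, "needs_adjustment": accuracy < threshold}
```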
The sample image cell set carrying annotation information is a manually annotated set: a pathologist can manually annotate, with the aid of a digital pathology scanner, the sample slice images that the server needs to process. The annotated content comprises the cell category information and cell position information of each sample cell image in the sample slice image. Before SVM model training, the server divides the sample slice image into a plurality of sample cell images according to the cell position information in the annotations, and then performs model training on the sample image cell set. The sample slice images include diseased-cell images, and the training set reflects the texture and shape structure of diseased cells during their growth in early-stage diseased-cell images. For example, the sample image cell set may include images of single infiltrating adenocarcinoma cells and images of adenocarcinoma cells with tissue-structure morphology; in this example, training on images of single infiltrating adenocarcinoma cells can be emphasized.
In one embodiment, as shown in fig. 4, S102 includes:
s402: obtaining a plurality of feature maps with different scales of an image to be analyzed according to a DenseNet in a trained target detection model;
s404: inputting a plurality of feature maps with different scales into a candidate frame processing module in a trained target detection model to obtain prior frames corresponding to the feature maps;
s406: and inputting a plurality of feature maps with different scales and a prior frame corresponding to each feature map into a convolutional network in a trained target detection model to obtain the position information and the initial classification information of each target cell in the image to be analyzed.
In a DenseNet network, each layer receives the feature maps of all preceding layers as additional input: the outputs of all preceding layers are concatenated in the channel dimension and serve as the input of the next layer. Because DenseNet directly connects feature maps from different layers, feature reuse can be achieved and efficiency improved. In this embodiment, a DenseBlock + Conv + Pool structure may be used in the DenseNet network, where a DenseBlock is a module containing many layers, each layer has a feature map of the same size, and a dense connection manner is adopted between layers. The Conv and Pool modules connect two adjacent DenseBlocks and reduce the size of the feature map, so that a plurality of feature maps of different scales are obtained.
After obtaining the plurality of feature maps of different scales of the image to be analyzed, the server inputs the feature maps of different scales into the candidate frame processing module in the trained target detection model to obtain the prior frame corresponding to each feature map. The candidate frame processing module holds a plurality of prior frames corresponding to feature maps of different scales, and these prior frames can be used for predicting the target cells in the feature maps. The server then inputs the feature maps of different scales and the prior frames corresponding to the feature maps into the convolutional network in the trained target detection model and, through convolution, obtains the position information of each target cell in the image to be analyzed and the confidence that each target cell belongs to each preset category. The server determines the preset category with the maximum confidence for each target cell and marks that category as the initial classification information of the target cell.
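The patent does not specify how the prior frames are laid out over the feature maps of different scales; the following is a minimal SSD-style sketch, assuming relative coordinates, one set of prior frames per feature-map cell, and three hypothetical aspect ratios:

```python
import numpy as np

def generate_prior_frames(fmap_sizes, scales, aspect_ratios=(1.0, 2.0, 0.5)):
    """Lay one prior frame per aspect ratio at every cell of each feature map,
    as (cx, cy, w, h) in coordinates relative to the image.  Larger scales are
    paired with deeper (smaller) feature maps."""
    priors = []
    for (fh, fw), s in zip(fmap_sizes, scales):
        for i in range(fh):
            for j in range(fw):
                cx, cy = (j + 0.5) / fw, (i + 0.5) / fh
                for ar in aspect_ratios:
                    priors.append([cx, cy, s * np.sqrt(ar), s / np.sqrt(ar)])
    return np.clip(np.array(priors), 0.0, 1.0)

# three feature maps of decreasing resolution; larger prior frames on deeper maps
priors = generate_prior_frames([(8, 8), (4, 4), (2, 2)], [0.2, 0.45, 0.8])
print(priors.shape)  # (252, 4): 8*8*3 + 4*4*3 + 2*2*3 frames, 4 coordinates each
```

The convolutional network would then predict, per prior frame, position offsets and one confidence per preset category.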
In one embodiment, as shown in fig. 5, before S102, the method includes:
S502: acquiring sample image data and N initial target detection networks;
S504: dividing the sample image data into N parts, sequentially selecting 1 of the N parts as a test set and taking the remaining N-1 parts as a training set, to obtain N groups of sample image data with different combinations;
S506: establishing an association relation between the N initial target detection networks and the N groups of sample image data;
S508: training the corresponding initial target detection network according to the association relation and the training set in each group of sample image data, and calculating an evaluation parameter of each initial target detection network according to the test set in each group of sample image data;
S510: taking the average value of the evaluation parameters of the N initial target detection networks as a target evaluation parameter, and marking the initial target detection network whose evaluation parameter deviates least from the target evaluation parameter as the target detection model to be trained;
S512: obtaining a sample cell image set carrying annotation information, and performing model training on the target detection model to be trained with the sample cell image set carrying the annotation information as a training set to obtain the target detection model, wherein the annotation information includes cell position information and cell category information.
The hyper-parameters of the N initial target detection networks differ from one another, and the N initial target detection networks are trained and tested on the sample image data; that is, the hyper-parameters of each target detection network are adjusted and the capability of each target detection network is evaluated. The capability of each target detection network may be evaluated as follows: calculate the evaluation parameter of each initial target detection network and take the average value of the evaluation parameters of the N initial target detection networks as the target evaluation parameter; then select from the N initial target detection networks, as the target detection model to be trained, the initial target detection network whose evaluation parameter deviates least from the target evaluation parameter. For example, one way to train an initial target detection network is: obtain 5000 finely labeled pathological sections of diseased cells and divide them into a training set, a validation set and a test set in a ratio of 8:1:1. In the training stage, 1,000,000 iterations are planned, and the data in the validation set may be tested every 1000 iterations so as to adjust the hyper-parameters of the target detection network and preliminarily evaluate its capability. After the training stage is finished, model prediction is performed with the test set to evaluate the target detection network and improve its hyper-parameters in a targeted manner.
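Steps S504 to S510 can be sketched as N-fold model selection. In this sketch each candidate network is represented by a train-and-evaluate callable; the evaluation parameter itself is unspecified in the source, so a plain scalar score stands in:

```python
import numpy as np

def select_model_by_cv(sample_data, candidate_evals, n_folds):
    """N candidate networks, each paired with one train/test split: fold i is
    held out as the test set for network i, the remaining folds form its
    training set.  candidate_evals[i] trains network i and returns a scalar
    evaluation score on the held-out fold."""
    folds = np.array_split(np.asarray(sample_data), n_folds)
    scores = []
    for i, evaluate in enumerate(candidate_evals):
        test_set = folds[i]
        train_set = np.concatenate([f for j, f in enumerate(folds) if j != i])
        scores.append(evaluate(train_set, test_set))
    target = float(np.mean(scores))  # target evaluation parameter
    # the network whose score deviates least from the target becomes the model to train
    best = int(np.argmin(np.abs(np.asarray(scores) - target)))
    return best, scores, target

# dummy evaluators standing in for real train-and-test runs
evals = [lambda tr, te, s=s: s for s in (0.80, 0.90, 0.70)]
best, scores, target = select_model_by_cv(list(range(9)), evals, 3)
print(best)  # 0: score 0.80 is closest to the mean 0.80
```

Selecting the network closest to the average favours a hyper-parameter setting whose performance is representative rather than an outlier.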
In one embodiment, as shown in fig. 6, S512 includes:
S602: obtaining a plurality of feature maps of different scales of each sample cell image in the sample cell image set according to the DenseNet network in the target detection model to be trained;
S604: inputting the plurality of feature maps of different scales of each sample cell image into the candidate frame processing module in the target detection model to be trained, performing regression on the plurality of feature maps of different scales of each sample cell image with a plurality of candidate frames in the candidate frame processing module, and determining the prior frame corresponding to the feature map of each scale;
S606: inputting the plurality of feature maps of different scales of each sample cell image and the prior frame corresponding to the feature map of each scale into the convolutional network in the target detection model to be trained to obtain the position information and the initial classification information of each sample cell in each sample cell image;
S608: comparing the position information and the initial classification information of each sample cell with the labeling information carried by each sample cell image, and adjusting the target detection model to be trained according to the comparison result to obtain the target detection model.
After obtaining the plurality of feature maps of different scales of each sample cell image, the server inputs them into the candidate frame processing module in the target detection model to be trained, and performs regression on the feature maps of different scales of each sample cell image with a plurality of candidate frames in the candidate frame processing module to determine the prior frame corresponding to the feature map of each scale. After the prior frame corresponding to the feature map of each scale is obtained, the feature maps of different scales of each sample cell image and the corresponding prior frames are input into the convolutional network in the target detection model to be trained, and the position information and initial classification information of each sample cell in each sample cell image are obtained through convolution. The position information and initial classification information of each sample cell are then compared with the labeling information carried by each sample cell image, and the model parameters in the target detection model to be trained are adjusted according to the comparison result to obtain the target detection model.
The regression operation here is similar to that in the YOLO algorithm. In a deep neural network, shallow feature maps contain more detailed information and are better suited to detecting small objects, while deeper feature maps, whose receptive fields are larger, contain more global information and are better suited to detecting large objects. To improve the detection of cells of different sizes, candidate frames of different sizes are therefore regressed on different feature maps.
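The source does not spell out how candidate frames are matched to the labeled cell positions during training; a common IoU-based matching criterion, given here purely as an illustrative sketch, assigns each prior frame to the ground-truth box it overlaps most:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def match_priors(priors, gt_boxes, threshold=0.5):
    """For each prior frame, return the index of the best-overlapping
    ground-truth cell box, or -1 when no overlap reaches the threshold."""
    matches = []
    for p in priors:
        scores = [iou(p, g) for g in gt_boxes]
        best = max(range(len(scores)), key=scores.__getitem__)
        matches.append(best if scores[best] >= threshold else -1)
    return matches

gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
priors = [(1, 1, 11, 11), (100, 100, 110, 110)]
print(match_priors(priors, gt))  # [0, -1]
```

Matched prior frames would contribute position and classification losses; unmatched ones only a background classification loss.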
In one embodiment, as shown in fig. 7, before S102, the method includes:
S702: acquiring a pathological section image and preprocessing the pathological section image to obtain the image to be analyzed, wherein the preprocessing includes image denoising, image enhancement, image scaling, color normalization and pixel value normalization.
The pathological section image is preprocessed to remove the noise it contains and to correct problems such as uneven brightness, so that clear pathological data are obtained for the next processing step. For image denoising, the pathological section image may be processed by Gaussian filtering: convolving the pathological section image with a preset Gaussian filter kernel yields the denoised pathological section image. To highlight the local detail features of the image, enlarge the difference between the features of lesion areas and normal areas, and suppress uninteresting features, image enhancement can improve image quality and information content and strengthen image interpretation and recognition; specifically, an image enhancement algorithm based on log transformation may be applied to the pathological section image. The logarithmic transformation expands the low gray-value part of the image, showing more of its details, while compressing the high gray-value part and reducing its details, thereby emphasizing the low gray-value part of the image. Normalizing the pixel values of the image refers to adjusting the luminance range from (0, 255) to (0, 1); specifically, the luminance range can be adjusted by the formula y = (x - MinValue) / (MaxValue - MinValue), where x is the pixel value before normalization, y is the adjusted pixel value, MinValue is the minimum value of the original image pixels, and MaxValue is the maximum value of the original image pixels.
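A minimal sketch of the log-transform enhancement and min-max pixel normalization described above (the source fixes only the normalization formula; the rescaling constant of the log transform is an assumed choice):

```python
import numpy as np

def log_enhance(img):
    """Log-transform enhancement: expands low gray values and compresses high
    ones, rescaled back to the 0-255 range (rescaling constant assumed)."""
    img = img.astype(np.float64)
    return 255.0 * np.log1p(img) / np.log1p(255.0)

def min_max_normalize(img):
    """Pixel value normalization y = (x - MinValue) / (MaxValue - MinValue),
    mapping the luminance range to (0, 1)."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
norm = min_max_normalize(log_enhance(img))
print(norm.min(), norm.max())  # 0.0 1.0
```

Note how the low gray value 64 maps well above 64/255 after the log transform, reflecting the expansion of the low gray-value part.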
In order to eliminate differences in image color caused by different staining, a color-space normalization operation needs to be carried out on the RGB color image, establishing a standard color space for all data and improving the generalization performance of the model. Let r, g and b denote the pixel values of a point in the three color channels respectively; the normalization operation is as follows:
r'=r/(r+g+b)
g'=g/(r+g+b)
b'=b/(r+g+b)
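The three formulas above amount to a per-pixel chromaticity normalization, sketched below (the zero-sum guard for pure black pixels is an added assumption, since r + g + b can be zero):

```python
import numpy as np

def rgb_normalize(img):
    """Per-pixel chromaticity normalization: each channel divided by r+g+b."""
    img = img.astype(np.float64)
    s = img.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0  # avoid division by zero on pure black pixels
    return img / s

px = np.array([[[60, 120, 120]]], dtype=np.uint8)
print(rgb_normalize(px))  # [[[0.2 0.4 0.4]]]
```

After this operation the three channels of every pixel sum to 1, so overall staining intensity no longer affects the color representation.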
It should be understood that although the various steps in the flow charts of figs. 1-7 are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, there is no strict order restriction on the performance of these steps, and they may be performed in other orders. Moreover, at least some of the steps in figs. 1-7 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and the order of their performance is not necessarily sequential; they may be performed in turn or in alternation with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, there is provided a cell sorting apparatus including: an object detection module 802, a segmentation module 804, a feature extraction module 806, a processing module 808, a labeling module 810, and a classification module 812, wherein:
the target detection module 802 is configured to acquire an image to be analyzed and input it into the trained target detection model to obtain the position information and initial classification information of each target cell in the image to be analyzed, where the target detection model is obtained by training with a sample cell image set carrying annotation information as a training set, and the annotation information includes cell position information and cell category information;
a segmentation module 804, configured to segment the image to be analyzed according to the position information of the target cell, so as to obtain a plurality of target cell images;
a feature extraction module 806, configured to extract a cell feature vector of the target cell image according to a preset feature extraction network;
the processing module 808 is configured to input the cell feature vectors into the trained SVM model to obtain probability data that the target cells in the target cell image belong to each preset category, and the SVM model is obtained by training a sample image cell set carrying cell category information as a training set;
the marking module 810 is configured to determine a preset category with the largest probability data, and mark the preset category with the largest probability data as secondary classification information of the target cell;
a classification module 812, configured to mark the secondary classification information as the classification result of the target cell when the initial classification information is the same as the secondary classification information.
The cell classification device determines the cell position information and initial classification information of each target cell in the image to be analyzed according to the target detection model, acquires a plurality of target cell images according to the cell position information, extracts the cell feature vectors of the target cell images according to the preset feature extraction network, inputs the cell feature vectors into the SVM model to obtain the secondary classification information of the target cells, and combines the initial classification information and the secondary classification information to obtain the classification result of the target cells. Throughout this process, each target cell is identified by both the target detection model and the SVM model, and its cell category is determined by combining the classification information from the two models, so the cell category is determined accurately and the accuracy of cell identification is improved.
In one embodiment, the feature extraction module is further configured to extract the texture feature vector of the target cell image according to a texture feature extraction network in the preset feature extraction network, extract the shape feature vector of the target cell image according to a shape feature extraction network in the preset feature extraction network, normalize the texture feature vector and the shape feature vector respectively, and perform vector stitching on the normalized texture feature vector and the normalized shape feature vector to obtain the cell feature vector of the target cell image.
In one embodiment, the processing module is further configured to obtain a sample image cell set carrying labeling information, divide a plurality of cell images in the sample image cell set into a training set and a test set, perform model training according to the training set to obtain an SVM model, input the test set into the SVM model to obtain a category test result corresponding to each cell image, compare the category test result with cell category information in the labeling information carried by the cell images, and adjust the SVM model according to the comparison result.
In one embodiment, the target detection module is further configured to obtain a plurality of feature maps of different scales of the image to be analyzed according to a DenseNet network in the trained target detection model, input the feature maps of the different scales into a candidate frame processing module in the trained target detection model to obtain a prior frame corresponding to each feature map, and input the feature maps of the different scales and the prior frame corresponding to each feature map into a convolutional network in the trained target detection model to obtain location information and initial classification information of each target cell in the image to be analyzed.
In one embodiment, the target detection module is further configured to acquire sample image data and N initial target detection networks; divide the sample image data into N parts, sequentially select 1 of the N parts as a test set and take the remaining N-1 parts as a training set, obtaining N groups of sample image data with different combinations; establish an association relation between the N initial target detection networks and the N groups of sample image data; train the corresponding initial target detection network according to the association relation and the training set in each group of sample image data, and calculate the evaluation parameter of each initial target detection network according to the test set in each group of sample image data; take the average value of the evaluation parameters of the N initial target detection networks as the target evaluation parameter, and mark the initial target detection network whose evaluation parameter deviates least from the target evaluation parameter as the target detection model to be trained; and obtain a sample cell image set carrying annotation information and perform model training on the target detection model to be trained with this set as a training set to obtain the target detection model, wherein the annotation information includes cell position information and cell category information.
In one embodiment, the target detection module is further configured to obtain a plurality of feature maps of different scales of each sample cell image in the sample cell image set according to a DenseNet network in the target detection model to be trained, input the plurality of feature maps of different scales of each sample cell image into a candidate frame processing module in the target detection model to be trained, perform regression on the plurality of feature maps of different scales of each sample cell image with a plurality of candidate frames in the candidate frame processing module, determine a prior frame corresponding to the feature map of each scale, input the plurality of feature maps of different scales of each sample cell image and the prior frame corresponding to the feature map of each scale into a convolutional network in the target detection model to be trained, obtain position information and initial classification information of each sample cell in each sample cell image, compare the position information and the initial classification information of each sample cell with labeling information carried by each sample cell image, and adjusting the target detection model to be trained according to the comparison result to obtain the target detection model.
In one embodiment, the cell classification device comprises a preprocessing module, wherein the preprocessing module is used for acquiring a pathological section image and preprocessing the pathological section image to obtain an image to be analyzed, and the preprocessing comprises image denoising, image enhancement, image scaling, color normalization and pixel value normalization.
For the specific definition of the cell sorting device, reference may be made to the above definition of the cell sorting method, which is not repeated here. Each module in the cell sorting device can be wholly or partially realized by software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or independent of, a processor in the computer device, or stored in software form in a memory in the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of cell classification.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, there is provided a computer device comprising a memory storing a computer program and a processor implementing the following steps when the processor executes the computer program:
acquiring an image to be analyzed and inputting the image to be analyzed into a trained target detection model to obtain position information and initial classification information of each target cell in the image to be analyzed, wherein the target detection model is obtained by training a sample cell image set carrying annotation information as a training set, and the annotation information comprises cell position information and cell category information;
dividing an image to be analyzed according to the position information of the target cell to obtain a plurality of target cell images;
extracting a cell feature vector of a target cell image according to a preset feature extraction network;
inputting the cell feature vectors into a trained SVM model to obtain probability data of target cells in the target cell image belonging to each preset category, wherein the SVM model is obtained by training a sample image cell set carrying cell category information as a training set;
determining a preset category with the maximum probability data, and marking the preset category with the maximum probability data as secondary classification information of the target cells;
and when the initial classification information is the same as the secondary classification information, marking the secondary classification information as a classification result of the target cells.
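The final decision step above can be sketched as follows. The category names are hypothetical, and the handling of a disagreement between the two classifiers is not specified in this excerpt, so the sketch simply returns no result in that case:

```python
import numpy as np

# Hypothetical preset categories; the source does not enumerate them.
CATEGORIES = ["normal", "adenocarcinoma", "squamous"]

def classify_cell(initial_class, svm_probs):
    """Combine the detector's initial classification with the SVM's probability
    data: the preset category with the largest probability becomes the secondary
    classification, and when both agree it is marked as the final result."""
    secondary = CATEGORIES[int(np.argmax(svm_probs))]
    if initial_class == secondary:
        return secondary
    return None  # disagreement: handling is not specified in this excerpt

print(classify_cell("adenocarcinoma", [0.1, 0.7, 0.2]))  # adenocarcinoma
```

Requiring agreement between two independently trained classifiers is what trades some recall for the improved classification accuracy claimed below.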
The cell classification computer device determines the cell position information and initial classification information of each target cell in the image to be analyzed according to the target detection model, acquires a plurality of target cell images according to the cell position information, extracts the cell feature vectors of the target cell images according to the preset feature extraction network, inputs the cell feature vectors into the SVM model to obtain the secondary classification information of the target cells, and combines the initial classification information and the secondary classification information to obtain the classification result of the target cells. Throughout this process, each target cell is identified by both the target detection model and the SVM model, and its cell category is determined by combining the classification information from the two models, so the cell category is determined accurately and the accuracy of cell identification is improved.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
extracting a network according to the textural features in a preset feature extraction network, and extracting textural feature vectors of the target cell image;
extracting a network according to shape features in a preset feature extraction network, and extracting shape feature vectors of the target cell image;
respectively carrying out normalization processing on the texture feature vector and the shape feature vector;
and carrying out vector splicing on the normalized texture characteristic vector and the normalized shape characteristic vector to obtain a cell characteristic vector of the target cell image.
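The four steps above can be sketched directly. The source does not name the normalization used, so unit L2 scaling is assumed here as one common choice:

```python
import numpy as np

def l2_normalize(v):
    """Scale a vector to unit L2 norm (an assumed normalization choice)."""
    v = np.asarray(v, dtype=np.float64)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def build_cell_feature(texture_vec, shape_vec):
    """Normalize the texture and shape feature vectors separately, then
    stitch them into the cell feature vector of the target cell image."""
    return np.concatenate([l2_normalize(texture_vec), l2_normalize(shape_vec)])

feat = build_cell_feature([3.0, 4.0], [1.0, 0.0, 0.0])
# texture part becomes [0.6, 0.8]; shape part stays [1.0, 0.0, 0.0]
```

Normalizing each part before stitching prevents whichever feature family has the larger numeric range from dominating the SVM's decision.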
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a sample image cell set carrying labeling information;
dividing a plurality of cell images in a sample image cell set into a training set and a testing set;
performing model training according to the training set to obtain an SVM model;
inputting the test set into an SVM model to obtain a category test result corresponding to each cell image;
comparing the category test result with the cell category information in the labeling information carried by the cell image;
and adjusting the SVM model according to the comparison result.
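The source says only "SVM model" without naming an implementation; as a self-contained stand-in, here is a minimal linear SVM trained by Pegasos-style sub-gradient descent on the hinge loss. The unregularized bias term and all hyper-parameters are assumptions:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Minimal linear SVM via Pegasos-style sub-gradient descent on the hinge
    loss; labels must be +1 / -1.  Unregularized bias is a simplification."""
    w, b, t = np.zeros(X.shape[1]), 0.0, 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)
            if yi * (xi @ w + b) < 1:      # margin violated: step toward sample
                w = (1 - eta * lam) * w + eta * yi * xi
                b += eta * yi
            else:                          # margin satisfied: only regularize
                w = (1 - eta * lam) * w
    return w, b

# two separable clusters standing in for two preset cell categories
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.3, (40, 2)), rng.normal(2, 0.3, (40, 2))])
y = np.array([-1] * 40 + [1] * 40)
w, b = train_linear_svm(X, y)
accuracy = float((np.sign(X @ w + b) == y).mean())  # near 1.0 on this toy data
```

In practice the comparison of category test results against the labels (S302-S312 style) would drive the choice of kernel and hyper-parameters rather than this fixed toy setting.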
In one embodiment, the processor, when executing the computer program, further performs the steps of:
obtaining a plurality of feature maps with different scales of an image to be analyzed according to a DenseNet in a trained target detection model;
inputting a plurality of feature maps with different scales into a candidate frame processing module in a trained target detection model to obtain prior frames corresponding to the feature maps;
and inputting a plurality of feature maps with different scales and a prior frame corresponding to each feature map into a convolutional network in a trained target detection model to obtain the position information and the initial classification information of each target cell in the image to be analyzed.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring sample image data and N initial target detection networks;
dividing sample image data into N parts of data, sequentially selecting 1 part of the N parts of data as a test set, and taking N-1 parts of data as a training set to obtain N groups of sample image data with different combinations;
establishing an association relation between the N initial target detection networks and the N groups of sample image data;
training the corresponding initial target detection network according to the association relation and the training set in each group of sample image data, and calculating the evaluation parameter of each initial target detection network according to the test set in each group of sample image data;
taking the average value of the evaluation parameters of the N initial target detection networks as the target evaluation parameter, and marking the initial target detection network whose evaluation parameter deviates least from the target evaluation parameter as the target detection model to be trained;
and obtaining a sample cell image set carrying annotation information, and performing model training by taking the sample cell image set carrying the annotation information as a training set according to the target detection model to be trained to obtain the target detection model, wherein the annotation information comprises cell position information and cell category information.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
obtaining a plurality of feature maps with different scales of each sample cell image in the sample cell image set according to a DenseNet in a target detection model to be trained;
inputting a plurality of feature maps of different scales of each sample cell image into a candidate frame processing module in a target detection model to be trained, performing regression on the plurality of feature maps of different scales of each sample cell image by a plurality of candidate frames in the candidate frame processing module, and determining a prior frame corresponding to the feature map of each scale;
inputting a plurality of feature maps of different scales of each sample cell image and a prior frame corresponding to the feature map of each scale into a convolution network in a target detection model to be trained to obtain position information and initial classification information of each sample cell in each sample cell image;
and comparing the position information and the initial classification information of each sample cell with the labeling information carried by each sample cell image, and adjusting the target detection model to be trained according to the comparison result to obtain the target detection model.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
the method comprises the steps of obtaining a pathological section image, and preprocessing the pathological section image to obtain an image to be analyzed, wherein the preprocessing comprises image denoising, image enhancement, image scaling, color normalization and pixel value normalization.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an image to be analyzed and inputting the image to be analyzed into a trained target detection model to obtain position information and initial classification information of each target cell in the image to be analyzed, wherein the target detection model is obtained by training a sample cell image set carrying annotation information as a training set, and the annotation information comprises cell position information and cell category information;
dividing an image to be analyzed according to the position information of the target cell to obtain a plurality of target cell images;
extracting a cell feature vector of a target cell image according to a preset feature extraction network;
inputting the cell feature vectors into a trained SVM model to obtain probability data of target cells in the target cell image belonging to each preset category, wherein the SVM model is obtained by training a sample image cell set carrying cell category information as a training set;
determining a preset category with the maximum probability data, and marking the preset category with the maximum probability data as secondary classification information of the target cells;
and when the initial classification information is the same as the secondary classification information, marking the secondary classification information as a classification result of the target cells.
The cell classification storage medium determines the cell position information and initial classification information of each target cell in the image to be analyzed according to the target detection model, acquires a plurality of target cell images according to the cell position information, extracts the cell feature vectors of the target cell images according to the preset feature extraction network, inputs the cell feature vectors into the SVM model to obtain the secondary classification information of the target cells, and combines the initial classification information and the secondary classification information to obtain the classification result of the target cells. Throughout this process, each target cell is identified by both the target detection model and the SVM model, and its cell category is determined by combining the classification information from the two models, so the cell category is determined accurately and the accuracy of cell identification is improved.
In one embodiment, the computer program when executed by the processor further performs the steps of:
extracting a texture feature vector of the target cell image according to a texture feature extraction network in the preset feature extraction network;
extracting a shape feature vector of the target cell image according to a shape feature extraction network in the preset feature extraction network;
respectively carrying out normalization processing on the texture feature vector and the shape feature vector;
and carrying out vector splicing on the normalized texture feature vector and the normalized shape feature vector to obtain the cell feature vector of the target cell image.
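The normalize-then-splice scheme above can be sketched in a few lines of numpy. The texture and shape "networks" are stood in for by hand-written statistics (intensity moments and mask geometry), which are assumptions for illustration only; the patent does not specify the actual extraction networks.

```python
import numpy as np

def extract_cell_feature_vector(cell_image: np.ndarray, cell_mask: np.ndarray) -> np.ndarray:
    # Texture branch (stand-in for the texture feature extraction network):
    # simple intensity statistics over the cell region.
    pixels = cell_image[cell_mask > 0].astype(float)
    texture = np.array([pixels.mean(), pixels.std(), np.median(pixels)])
    # Shape branch (stand-in for the shape feature extraction network):
    # mask area and bounding-box aspect ratio.
    ys, xs = np.nonzero(cell_mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    shape = np.array([float(cell_mask.sum()), height / width])
    # Normalize each branch separately, then splice the vectors.
    texture /= np.linalg.norm(texture) + 1e-8
    shape /= np.linalg.norm(shape) + 1e-8
    return np.concatenate([texture, shape])

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(32, 32))
mask = np.zeros((32, 32), dtype=np.uint8)
mask[8:24, 8:24] = 1
features = extract_cell_feature_vector(image, mask)
```

Normalizing the two branches independently before concatenation keeps either branch from dominating the SVM's kernel distance simply because its raw values are larger.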
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a sample image cell set carrying labeling information;
dividing a plurality of cell images in a sample image cell set into a training set and a testing set;
performing model training according to the training set to obtain an SVM model;
inputting the test set into an SVM model to obtain a category test result corresponding to each cell image;
comparing the category test result with the cell category information in the labeling information carried by each cell image;
and adjusting the SVM model according to the comparison result.
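The train/test procedure above can be sketched with scikit-learn's SVC, used here as a stand-in for the patent's (unspecified) SVM implementation; the synthetic feature vectors and two-category split are assumptions for illustration.

```python
# Illustrative SVM training sketch: split a labeled sample set into a
# training set and a testing set, train with probability estimates
# enabled (needed for the per-category probability data), and evaluate
# the category test results against the carried labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two synthetic cell categories, 4-dimensional feature vectors.
features = np.vstack([rng.normal(0, 1, (60, 4)), rng.normal(3, 1, (60, 4))])
labels = np.array([0] * 60 + [1] * 60)

# Divide the cell images (feature vectors) into a training set and a testing set.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0, stratify=labels
)

# Model training according to the training set.
svm = SVC(kernel="rbf", probability=True, random_state=0).fit(X_train, y_train)

# Category test results on the test set; the comparison against the
# labels is what would drive adjustment of the model.
test_accuracy = svm.score(X_test, y_test)
probabilities = svm.predict_proba(X_test[:1])  # probability per preset category
```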
In one embodiment, the computer program when executed by the processor further performs the steps of:
obtaining a plurality of feature maps with different scales of an image to be analyzed according to a DenseNet in a trained target detection model;
inputting a plurality of feature maps with different scales into a candidate frame processing module in a trained target detection model to obtain prior frames corresponding to the feature maps;
and inputting a plurality of feature maps with different scales and a prior frame corresponding to each feature map into a convolutional network in a trained target detection model to obtain the position information and the initial classification information of each target cell in the image to be analyzed.
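The multi-scale detection pipeline above (feature maps of different scales, prior frames per feature-map location, then a convolutional prediction head) can be sketched as follows. The DenseNet backbone and the convolutional network are stubbed out; the feature-map sizes, scales, and aspect ratios are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of prior-frame (default-box) generation for several
# feature maps of different scales, SSD-style.
import numpy as np

def make_prior_boxes(feature_map_size: int, scale: float,
                     aspect_ratios=(1.0, 2.0, 0.5)) -> np.ndarray:
    """Center-form prior boxes (cx, cy, w, h), normalized to [0, 1]."""
    boxes = []
    step = 1.0 / feature_map_size
    for i in range(feature_map_size):
        for j in range(feature_map_size):
            cx, cy = (j + 0.5) * step, (i + 0.5) * step
            for ar in aspect_ratios:
                boxes.append([cx, cy, scale * np.sqrt(ar), scale / np.sqrt(ar)])
    return np.array(boxes)

# Three feature maps of different scales, as a backbone might produce:
# small cells are matched on the fine 8x8 map, large cells on the 2x2 map.
feature_map_sizes = [8, 4, 2]
scales = [0.2, 0.4, 0.6]
priors = np.vstack([make_prior_boxes(s, sc)
                    for s, sc in zip(feature_map_sizes, scales)])

# The convolutional network would then emit, for every prior frame,
# 4 box offsets plus one score per cell category.
num_priors = len(priors)
```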
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring sample image data and N initial target detection networks;
dividing sample image data into N parts of data, sequentially selecting 1 part of the N parts of data as a test set, and taking N-1 parts of data as a training set to obtain N groups of sample image data with different combinations;
establishing an incidence relation between N initial target detection networks and N groups of sample image data;
training corresponding initial target detection networks according to the incidence relation and training sets in each group of sample image data, and calculating evaluation parameters of each initial target detection network according to test sets in each group of sample image data;
taking the average value of the evaluation parameters of the N initial target detection networks as a target evaluation parameter, and marking the initial target detection network whose evaluation parameter has the minimum error relative to the target evaluation parameter as the target detection model to be trained;
and obtaining a sample cell image set carrying annotation information, and performing model training by taking the sample cell image set carrying the annotation information as a training set according to the target detection model to be trained to obtain the target detection model, wherein the annotation information comprises cell position information and cell category information.
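The N-fold selection procedure above can be sketched with scikit-learn. Here small logistic-regression classifiers stand in for the N candidate detection networks (an assumption purely for runnability): each candidate is paired with one of the N train/test splits, its evaluation parameter is computed on the held-out part, and the candidate whose score is closest to the mean across all N is selected for full training.

```python
# Hedged sketch: pair N candidate networks with N data splits, evaluate
# each on its held-out fold, and keep the candidate whose evaluation
# parameter has the minimum error relative to the fold average.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
N = 5
candidates = [LogisticRegression(max_iter=200, random_state=k) for k in range(N)]

scores = []
# Divide the sample data into N parts; each part serves once as the test
# set while the remaining N-1 parts form the training set.
splits = KFold(n_splits=N, shuffle=True, random_state=0).split(X)
for (train_idx, test_idx), model in zip(splits, candidates):
    model.fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))

target_score = float(np.mean(scores))  # target evaluation parameter
# Candidate with minimum error relative to the target evaluation parameter.
selected = int(np.argmin([abs(s - target_score) for s in scores]))
```

Selecting the candidate closest to the fold average, rather than the best-scoring one, avoids picking a network whose high score comes from an unusually easy fold.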
In one embodiment, the computer program when executed by the processor further performs the steps of:
obtaining a plurality of feature maps with different scales of each sample cell image in the sample cell image set according to a DenseNet in a target detection model to be trained;
inputting a plurality of feature maps of different scales of each sample cell image into a candidate frame processing module in a target detection model to be trained, performing regression on the plurality of feature maps of different scales of each sample cell image by a plurality of candidate frames in the candidate frame processing module, and determining a prior frame corresponding to the feature map of each scale;
inputting a plurality of feature maps of different scales of each sample cell image and a prior frame corresponding to the feature map of each scale into a convolution network in a target detection model to be trained to obtain position information and initial classification information of each sample cell in each sample cell image;
and comparing the position information and the initial classification information of each sample cell with the labeling information carried by each sample cell image, and adjusting the target detection model to be trained according to the comparison result to obtain the target detection model.
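The compare-and-adjust step above reduces to a familiar training loop: predictions are compared with the carried labels and the model parameters are adjusted to shrink the discrepancy. The sketch below uses a single logistic-regression classifier trained by gradient descent as a deliberately tiny stand-in for the full detection model with its localization and classification outputs.

```python
# Minimal compare-and-adjust loop: the comparison result (prediction
# minus label) drives a gradient step on the model parameters.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0.0] * 50 + [1.0] * 50)  # carried category labels
w, b = np.zeros(2), 0.0

def predict(X):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

losses = []
for _ in range(200):
    p = predict(X)
    error = p - y                      # comparison: prediction vs. label
    w -= 0.1 * X.T @ error / len(y)    # adjust model parameters
    b -= 0.1 * error.mean()
    losses.append(float(np.mean(
        -y * np.log(p + 1e-9) - (1 - y) * np.log(1 - p + 1e-9))))
```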
In one embodiment, the computer program when executed by the processor further performs the steps of:
the method comprises the steps of obtaining a pathological section image, and preprocessing the pathological section image to obtain an image to be analyzed, wherein the preprocessing comprises image denoising, image enhancement, image scaling, color normalization and pixel value normalization.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several embodiments of the present application, and their description is specific and detailed, but should not therefore be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these all fall within the scope of protection of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of cell sorting, the method comprising:
acquiring an image to be analyzed and inputting a trained target detection model to obtain position information and initial classification information of each target cell in the image to be analyzed, wherein the target detection model is obtained by training a sample cell image set carrying annotation information as a training set, and the annotation information comprises cell position information and cell category information;
segmenting the image to be analyzed according to the position information of the target cell to obtain a plurality of target cell images;
extracting a cell feature vector of the target cell image according to a preset feature extraction network;
inputting the cell feature vectors into a trained SVM model to obtain probability data of target cells in the target cell image belonging to each preset category, wherein the SVM model is obtained by training a sample image cell set carrying cell category information as a training set;
determining a preset category with the maximum probability data, and marking the preset category with the maximum probability data as secondary classification information of the target cells;
when the initial classification information is the same as the secondary classification information, marking the secondary classification information as a classification result of the target cell;
when the initial classification information is different from the secondary classification information, marking the secondary classification information as a classification result of the target cell, and marking the target cell as an abnormal cell;
before the image to be analyzed is acquired and input into the trained target detection model, the method comprises the following steps:
acquiring sample image data and N initial target detection networks;
dividing the sample image data into N parts of data, sequentially selecting 1 part of the N parts of data as a test set, and taking N-1 parts of data as a training set to obtain N groups of sample image data with different combinations;
establishing an incidence relation between N initial target detection networks and N groups of sample image data;
training corresponding initial target detection networks according to the incidence relation and training sets in the groups of sample image data, and calculating evaluation parameters of the initial target detection networks according to test sets in the groups of sample image data;
taking the average value of the evaluation parameters of the N initial target detection networks as a target evaluation parameter, and marking the initial target detection network whose evaluation parameter has the minimum error relative to the target evaluation parameter as a target detection model to be trained;
and obtaining a sample cell image set carrying annotation information, and performing model training by taking the sample cell image set carrying the annotation information as a training set according to the target detection model to be trained to obtain the target detection model, wherein the annotation information comprises cell position information and cell category information.
2. The method according to claim 1, wherein the extracting the cell feature vector of the target cell image according to the preset feature extraction network comprises:
extracting a texture feature vector of the target cell image according to a texture feature extraction network in the preset feature extraction network;
extracting a shape feature vector of the target cell image according to a shape feature extraction network in a preset feature extraction network;
respectively carrying out normalization processing on the texture feature vector and the shape feature vector;
and carrying out vector splicing on the normalized texture characteristic vector and the normalized shape characteristic vector to obtain a cell characteristic vector of the target cell image.
3. The method of claim 1, wherein said inputting said cell feature vectors into a trained SVM model comprises:
acquiring a sample image cell set carrying labeling information;
dividing a plurality of cell images in the sample image cell set into a training set and a testing set;
performing model training according to the training set to obtain an SVM model;
inputting the test set into the SVM model to obtain a category test result corresponding to each cell image;
comparing the category test result with the cell category information in the labeling information carried by each cell image;
and adjusting the SVM model according to the comparison result.
4. The method of claim 1, wherein the obtaining the image to be analyzed and inputting the image to be analyzed into the trained target detection model, and the obtaining the position information and the initial classification information of each target cell in the image to be analyzed comprises:
obtaining a plurality of feature maps with different scales of the image to be analyzed according to a DenseNet in a trained target detection model;
inputting the feature maps with different scales into a candidate frame processing module in a trained target detection model to obtain prior frames corresponding to the feature maps;
and inputting the feature maps with different scales and the prior frames corresponding to the feature maps into a convolutional network in a trained target detection model to obtain the position information and the initial classification information of each target cell in the image to be analyzed.
5. The method according to claim 1, wherein the performing model training according to the target detection model to be trained with the sample cell image set carrying the labeling information as a training set to obtain a target detection model comprises:
obtaining a plurality of feature maps with different scales of each sample cell image in the sample cell image set according to a DenseNet in a target detection model to be trained;
inputting the feature maps of the sample cell images with different scales into a candidate frame processing module in a target detection model to be trained, performing regression on the feature maps of the sample cell images with different scales by using a plurality of candidate frames in the candidate frame processing module, and determining a prior frame corresponding to the feature map of each scale;
inputting a plurality of feature maps of different scales of each sample cell image and the prior frame corresponding to the feature map of each scale into a convolution network in a target detection model to be trained to obtain position information and initial classification information of each sample cell in each sample cell image;
and comparing the position information and the initial classification information of each sample cell with the labeling information carried by each sample cell image, and adjusting the target detection model to be trained according to the comparison result to obtain the target detection model.
6. The method of claim 1, wherein the obtaining the image to be analyzed before inputting the image to be analyzed into the trained target detection model comprises:
the method comprises the steps of obtaining a pathological section image, and preprocessing the pathological section image to obtain an image to be analyzed, wherein the preprocessing comprises image denoising, image enhancement, image scaling, color normalization and pixel value normalization.
7. A cell sorter, the apparatus comprising:
the target detection module is used for acquiring an image to be analyzed and inputting the image to be analyzed into a trained target detection model to obtain position information and initial classification information of each target cell in the image to be analyzed, the target detection model is obtained by training a sample cell image set carrying annotation information as a training set, and the annotation information comprises cell position information and cell category information;
the segmentation module is used for segmenting the image to be analyzed according to the position information of the target cell to obtain a plurality of target cell images;
the characteristic extraction module is used for extracting the cell characteristic vector of the target cell image according to a preset characteristic extraction network;
the processing module is used for inputting the cell feature vectors into a trained SVM model to obtain probability data of target cells in the target cell image belonging to each preset category, and the SVM model is obtained by training a sample image cell set carrying cell category information as a training set;
the marking module is used for determining the preset category with the maximum probability data and marking the preset category with the maximum probability data as the secondary classification information of the target cells;
a classification module, configured to label the secondary classification information as a classification result of the target cell when the initial classification information is the same as the secondary classification information, label the secondary classification information as a classification result of the target cell when the initial classification information is different from the secondary classification information, and label the target cell as an abnormal cell;
the target detection module is further used for obtaining sample image data and N initial target detection networks, dividing the sample image data into N data, sequentially selecting 1 data of the N data as a test set and N-1 data as a training set to obtain N groups of sample image data with different combinations, establishing an incidence relation between the N initial target detection networks and the N groups of sample image data, training the corresponding initial target detection networks according to the incidence relation and the training sets in the groups of sample image data, calculating evaluation parameters of the initial target detection networks according to the test sets in the groups of sample image data, taking the average value of the evaluation parameters of the N initial target detection networks as a target evaluation parameter, and marking the initial target detection network corresponding to the evaluation parameter with the minimum error of the target evaluation parameter as a target detection model to be trained, and obtaining a sample cell image set carrying annotation information, and performing model training by taking the sample cell image set carrying the annotation information as a training set according to the target detection model to be trained to obtain the target detection model, wherein the annotation information comprises cell position information and cell category information.
8. The device according to claim 7, wherein the feature extraction module is further configured to extract a texture feature extraction network according to a preset feature extraction network, extract a texture feature vector of the target cell image, extract a shape feature vector of the target cell image according to a shape feature extraction network in the preset feature extraction network, normalize the texture feature vector and the shape feature vector, and vector-splice the normalized texture feature vector and the normalized shape feature vector to obtain the cell feature vector of the target cell image.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN201910394263.8A 2019-05-13 2019-05-13 Cell sorting method, cell sorting device, computer equipment and storage medium Active CN110110799B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910394263.8A CN110110799B (en) 2019-05-13 2019-05-13 Cell sorting method, cell sorting device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910394263.8A CN110110799B (en) 2019-05-13 2019-05-13 Cell sorting method, cell sorting device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110110799A CN110110799A (en) 2019-08-09
CN110110799B (en) 2021-11-16

Family

ID=67489695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910394263.8A Active CN110110799B (en) 2019-05-13 2019-05-13 Cell sorting method, cell sorting device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110110799B (en)

Families Citing this family (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465904A (en) * 2019-09-06 2021-03-09 上海晶赞融宣科技有限公司 Image target positioning method and device, computer equipment and storage medium
CN110705403A (en) * 2019-09-19 2020-01-17 平安科技(深圳)有限公司 Cell sorting method, cell sorting device, cell sorting medium, and electronic apparatus
CN110706772B (en) * 2019-10-11 2023-05-09 北京百度网讯科技有限公司 Ordering method and device, electronic equipment and storage medium
CN110737801B (en) * 2019-10-14 2024-01-02 腾讯科技(深圳)有限公司 Content classification method, apparatus, computer device, and storage medium
CN110781823B (en) * 2019-10-25 2022-07-26 北京字节跳动网络技术有限公司 Screen recording detection method and device, readable medium and electronic equipment
CN110807426B (en) * 2019-11-05 2023-11-21 苏州华文海智能科技有限公司 Deep learning-based parasite detection system and method
CN110889436B (en) * 2019-11-06 2022-07-22 西北工业大学 Underwater multi-class target classification method based on credibility estimation
CN110942089B (en) * 2019-11-08 2023-10-10 东北大学 Multi-level decision-based keystroke recognition method
CN112926608A (en) * 2019-12-05 2021-06-08 北京金山云网络技术有限公司 Image classification method and device, electronic equipment and storage medium
CN113035360A (en) * 2019-12-09 2021-06-25 浙江普罗亭健康科技有限公司 Cell classification model learning method
CN111178196B (en) * 2019-12-19 2024-01-23 东软集团股份有限公司 Cell classification method, device and equipment
CN111310611B (en) * 2020-01-22 2023-06-06 上海交通大学 Method for detecting cell view map and storage medium
CN111368636B (en) * 2020-02-07 2024-02-09 深圳奇迹智慧网络有限公司 Object classification method, device, computer equipment and storage medium
CN111461165A (en) * 2020-02-26 2020-07-28 上海商汤智能科技有限公司 Image recognition method, recognition model training method, related device and equipment
CN111353987A (en) * 2020-03-02 2020-06-30 中国科学技术大学 Cell nucleus segmentation method and device
CN111340126B (en) * 2020-03-03 2023-06-09 腾讯云计算(北京)有限责任公司 Article identification method, apparatus, computer device, and storage medium
CN111414921B (en) * 2020-03-25 2024-03-15 抖音视界有限公司 Sample image processing method, device, electronic equipment and computer storage medium
CN111461220B (en) * 2020-04-01 2022-11-01 腾讯科技(深圳)有限公司 Image analysis method, image analysis device, and image analysis system
CN111643079B (en) * 2020-04-26 2022-06-10 南京航空航天大学 Accurate tumor cell impedance detection method based on mutual compensation of bioimpedance spectroscopy and impedance imaging
CN111797894A (en) * 2020-05-27 2020-10-20 北京齐尔布莱特科技有限公司 Image classification method and computing device
CN113762292B (en) * 2020-06-03 2024-02-02 杭州海康威视数字技术股份有限公司 Training data acquisition method and device and model training method and device
CN111738350A (en) * 2020-06-30 2020-10-02 山东超越数控电子股份有限公司 Image recognition method and device, electronic equipment and computer readable medium
CN112016586A (en) * 2020-07-08 2020-12-01 武汉智筑完美家居科技有限公司 Picture classification method and device
CN112069874B (en) * 2020-07-17 2022-07-05 中山大学 Method, system, equipment and storage medium for identifying cells in embryo light microscope image
CN112132827A (en) * 2020-10-13 2020-12-25 腾讯科技(深圳)有限公司 Pathological image processing method and device, electronic equipment and readable storage medium
CN114639099A (en) * 2020-12-15 2022-06-17 深圳市瑞图生物技术有限公司 Method, device, equipment and medium for identifying and positioning target object in microscopic image
CN112597852B (en) * 2020-12-15 2024-05-24 深圳大学 Cell classification method, cell classification device, electronic device, and storage medium
CN112750121B (en) * 2021-01-20 2021-11-26 赛维森(广州)医疗科技服务有限公司 System and method for detecting digital image quality of pathological slide
CN113033389B (en) * 2021-03-23 2022-12-16 天津凌视科技有限公司 Method and system for image recognition by using high-speed imaging device
CN113095194A (en) * 2021-04-02 2021-07-09 北京车和家信息技术有限公司 Image classification method and device, storage medium and electronic equipment
CN113344868B (en) * 2021-05-28 2023-08-25 山东大学 Label-free cell classification screening system based on mixed transfer learning
CN113837158B (en) * 2021-11-26 2023-04-07 东南大学苏州医疗器械研究院 Virus neutralizing antibody detection method and device based on transfer learning
CN114359899B (en) * 2021-12-09 2022-09-20 首都医科大学附属北京天坛医院 Cell co-culture model, cell model construction method, computer device, and storage medium
CN114298212B (en) * 2021-12-23 2024-04-02 深圳大学 Monitoring device for cell micro-loss induction and bright field monitoring method
CN114820319A (en) * 2022-04-29 2022-07-29 深圳市商汤科技有限公司 Image processing method and device, electronic equipment and storage medium
CN114972849A (en) * 2022-05-10 2022-08-30 清华大学 Glioma type identification method, model training method, device and equipment
CN114897823B (en) * 2022-05-10 2024-03-19 广州锟元方青医疗科技有限公司 Cytological sample image quality control method, system and storage medium
CN114639102B (en) * 2022-05-11 2022-07-22 珠海横琴圣澳云智科技有限公司 Cell segmentation method and device based on key point and size regression
CN118279912B (en) * 2024-06-03 2024-08-06 深圳市合一康生物科技股份有限公司 Stem cell differentiation degree assessment method and system based on image analysis

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834914A (en) * 2015-05-15 2015-08-12 广西师范大学 Uterine neck cell image characteristic identification method and uterine neck cell characteristic identification apparatus
CN105095865A (en) * 2015-07-17 2015-11-25 广西师范大学 Directed-weighted-complex-network-based cervical cell recognition method and a cervical cell recognition apparatus
CN106204642A (en) * 2016-06-29 2016-12-07 四川大学 A kind of cell tracker method based on deep neural network
CN107909102A (en) * 2017-11-10 2018-04-13 天津大学 A kind of sorting technique of histopathology image
CN108550133A (en) * 2018-03-02 2018-09-18 浙江工业大学 A kind of cancer cell detection method based on Faster R-CNN
CN108629359A (en) * 2017-03-24 2018-10-09 中山大学 A kind of human epithelial cell sample image automatic classification method
CN108764292A (en) * 2018-04-27 2018-11-06 北京大学 Deep learning image object mapping based on Weakly supervised information and localization method
CN109493346A (en) * 2018-10-31 2019-03-19 浙江大学 It is a kind of based on the gastric cancer pathology sectioning image dividing method more lost and device
CN109544507A (en) * 2018-10-18 2019-03-29 清影医疗科技(深圳)有限公司 A kind of pathological image processing method and system, equipment, storage medium
CN109598224A (en) * 2018-11-27 2019-04-09 微医云(杭州)控股有限公司 Recommend white blood cell detection method in the Sections of Bone Marrow of convolutional neural networks based on region

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447169B (en) * 2018-11-02 2020-10-27 北京旷视科技有限公司 Image processing method, training method and device of model thereof and electronic system
CN109583369B (en) * 2018-11-29 2020-11-13 北京邮电大学 Target identification method and device based on target area segmentation network


Also Published As

Publication number Publication date
CN110110799A (en) 2019-08-09

Similar Documents

Publication Publication Date Title
CN110110799B (en) Cell sorting method, cell sorting device, computer equipment and storage medium
CN110111344B (en) Pathological section image grading method and device, computer equipment and storage medium
US11681418B2 (en) Multi-sample whole slide image processing in digital pathology via multi-resolution registration and machine learning
CN110120040B (en) Slice image processing method, slice image processing device, computer equipment and storage medium
CN111753692B (en) Target object extraction method, product detection method, device, computer and medium
WO2021000524A1 (en) Hole protection cap detection method and apparatus, computer device and storage medium
CN111524137B (en) Cell identification counting method and device based on image identification and computer equipment
US11977984B2 (en) Using a first stain to train a model to predict the region stained by a second stain
CN111860670A (en) Domain adaptive model training method, image detection method, device, equipment and medium
CN111402267B (en) Segmentation method, device and terminal of epithelial cell nuclei in prostate cancer pathological image
CN108090511B (en) Image classification method and device, electronic equipment and readable storage medium
CN112633297B (en) Target object identification method and device, storage medium and electronic device
CN106557740B (en) The recognition methods of oil depot target in a kind of remote sensing images
CN108830197A (en) Image processing method, device, computer equipment and storage medium
CN108520215B (en) Single-sample face recognition method based on multi-scale joint feature encoder
Stankov et al. Building detection in very high spatial resolution multispectral images using the hit-or-miss transform
CN111461101A (en) Method, device and equipment for identifying work clothes mark and storage medium
CN114119460A (en) Semiconductor image defect identification method, semiconductor image defect identification device, computer equipment and storage medium
CN112668462A (en) Vehicle loss detection model training method, vehicle loss detection device, vehicle loss detection equipment and vehicle loss detection medium
CN112819834B (en) Method and device for classifying stomach pathological images based on artificial intelligence
Molina-Giraldo et al. Image segmentation based on multi-kernel learning and feature relevance analysis
CN117218672A (en) Deep learning-based medical records text recognition method and system
CN115375674B (en) Stomach white-light neoplasia image identification method, device and storage medium
CN113766308A (en) Video cover recommendation method and device, computer equipment and storage medium
CN110751623A (en) Joint feature-based defect detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant