WO2021051875A1 - Cell classification method, device, medium, and electronic device - Google Patents

Cell classification method, device, medium, and electronic device

Info

Publication number
WO2021051875A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
cell
classification
cells
trained
Prior art date
Application number
PCT/CN2020/093586
Other languages
English (en)
French (fr)
Inventor
王俊
高鹏
谢国彤
雷田子
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2021051875A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/695Preprocessing, e.g. image segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698Matching; Classification

Definitions

  • This application relates to the field of biometric technology, and specifically to a cell classification method, a cell classification device, a computer-readable medium, and an electronic device.
  • At present, the analysis of cells relies mainly on classification and identification by the researcher's naked eye, and observing large amounts of sample data easily leads to errors caused by fatigue. Alternatively, some common cell staining methods are used to classify and identify cells, but the staining reaction is limited by the cells' own characteristics, so different cells may present the same or similar colors, which is unfavorable for research. For example, in disease diagnosis, after a pathological image is stained, the doctor judges from the stained image whether the cells show pathological changes, among other medical questions. The inventor realized that when doctors examine pathological images, diagnostic errors are easily caused by various problems such as work pressure and visual fatigue.
  • The purpose of the embodiments of the present application is to provide a cell classification method so as to overcome, at least to a certain extent, the problem of low cell classification accuracy.
  • A cell classification method is provided, including: acquiring an image to be recognized that contains a plurality of cells; determining cell contour information in the image to be recognized; segmenting, from the image to be recognized according to the cell contour information, the cell images corresponding to the plurality of cells; classifying the cell images using a trained classification model and obtaining classification results to determine the category to which the cell corresponding to each cell image belongs; and marking the classification results in the image to be recognized according to the categories to which the cells belong.
  • Using the trained classification model to classify the cell images and obtain the classification results includes: inputting verification data into the classification model to obtain recognition results of the verification data, wherein the verification data includes a plurality of unrecognized cell sample images; extracting the cell sample images whose predicted probability in the recognition results is below a preset value to obtain training data; training the classification model based on the training data to obtain a target classification model; and inputting the cell images into the target classification model to obtain the classification results.
  • Determining the cell contour information in the image to be recognized and segmenting the cell images corresponding to the plurality of cells from the image to be recognized according to the cell contour information includes: using a trained segmentation model to recognize the cell contours in the image to be recognized and segmenting the image to obtain the cell images.
  • Before the trained segmentation model is used to recognize the cells in the image to be recognized, the method further includes: acquiring a sample image and annotating the cells in the sample image; and training the segmentation model with the annotated sample image to obtain a trained segmentation model.
  • Acquiring the sample image and annotating the cells in it includes: labeling different categories of cells in the sample image with different labels, wherein the background image of the sample image is labeled with a target label.
  • Training the segmentation model with the annotated sample image includes: determining the loss function of the segmentation model based on the labels of the annotated sample image, so that the segmentation model recognizes the background image corresponding to the target label and the cell images corresponding to labels other than the target label.
  • Before classification, the method further includes: training the classification model with the annotated sample image, so that the trained classification model recognizes cell images of different categories.
  • A cell classification device is provided, including: an image acquisition unit for acquiring an image to be recognized that contains multiple kinds of cells; a cell positioning unit for determining the cell contour information in the image to be recognized and segmenting, from the image to be recognized according to the cell contour information, the cell images corresponding to the plurality of cells; a cell classification unit for classifying the cell images using the trained classification model and obtaining the classification results to determine the category to which the cell corresponding to each cell image belongs; and a classification marking unit for marking the classification results in the image to be recognized according to the categories to which the cells belong.
  • A computer-readable medium is provided, on which a computer program is stored; when executed by a processor, the program implements the cell classification method described in the first aspect of the above embodiments.
  • An electronic device is provided, including: one or more processors; and a storage device configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the cell classification method described in the first aspect of the foregoing embodiments.
  • In the technical solutions provided by some embodiments of the present application, first, by separating out the cell images in the image to be recognized, classifying them, and marking the cell categories in the image to be recognized, each cell in the image can be classified and identified, which saves the time of manual one-by-one judgment and improves recognition efficiency. Second, this reduces the difficulty of distinguishing similar staining colors caused by the physical limitations of cells, improving the accuracy of cell classification and recognition. Third, marking the classification results in the image to be recognized presents them more intuitively to researchers, allowing them to make judgments and draw conclusions more quickly, thereby improving the experience.
  • Fig. 1 schematically shows a flowchart of a cell classification method according to an embodiment of the present application.
  • Fig. 2 schematically shows a flowchart of a cell classification method according to another embodiment of the present application.
  • Fig. 3 schematically shows a block diagram of a cell classification device according to an embodiment of the present application.
  • FIG. 4 shows a schematic structural diagram of a computer system suitable for implementing an electronic device according to an embodiment of the present application.
  • This exemplary embodiment first proposes a cell classification method; the executing entity of the method may be a device with computing and processing capability, such as a server or a cloud host.
  • The cell classification method may include step S110, step S120, step S130, and step S140.
  • In step S110, an image to be recognized containing a plurality of cells is acquired.
  • In step S120, cell contour information in the image to be recognized is determined, and the cell images respectively corresponding to the plurality of cells are segmented from the image to be recognized according to the cell contour information.
  • In step S130, the cell images are classified using the trained classification model, and the classification results are obtained to determine the category to which the cell corresponding to each cell image belongs.
  • In step S140, the classification results are marked in the image to be recognized according to the categories to which the cells belong.
  • Each cell in the image to be recognized can thus be classified and identified, which saves the time of manual one-by-one judgment and improves recognition efficiency.
  • It also reduces the difficulty of distinguishing similar staining colors caused by the physical limitations of cells, improving the accuracy of cell classification and recognition.
  • Marking the classification results in the image to be recognized presents them more intuitively to researchers, allowing them to make judgments and draw conclusions more quickly, thereby improving the experience.
  • In step S110, an image to be recognized that includes a plurality of cells may be acquired.
  • The image to be recognized may refer to a slice image of biological tissue. Different biological tissues contain different types of cells; after a biological tissue is sectioned, the image of the section under a microscope may serve as the image to be recognized. The image to be recognized can contain thousands of cells, and cells of the same type also vary in morphology. The target biological tissue that needs to be recognized can be sectioned to obtain the image to be recognized; alternatively, a pathological section of the patient's target tissue can be acquired from the database of a medical platform as the image to be recognized. In this exemplary embodiment, glomerular tissue is taken as an example to illustrate the recognition process; of course, the embodiments of the present application are not limited to this, and cells contained in other tissues can also be recognized in other embodiments.
  • In step S120, the cell contour information in the image to be recognized is determined, and the cell images corresponding to the multiple cells are segmented from the image to be recognized according to the cell contour information.
  • By performing image processing on the image to be recognized, the cell contour information can be determined, so that the cell images can be separated from the image to be recognized according to that information.
  • Based on the brightness of the image to be recognized, the places where the brightness changes can be determined as edge positions, so that the cells at those positions can be separated.
  • An edge detection algorithm can be used to perform edge detection on the image to be recognized, thereby determining the cell contours and segmenting the cells out of the image to obtain the cell images, as in the sketch below.
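  • The following is a minimal sketch of this contour-based localization, assuming OpenCV is used; the Canny thresholds and the file name are illustrative assumptions, not values given by the application.

```python
import cv2

# Hypothetical input path; the application only assumes a microscope slice image.
image = cv2.imread("slice_to_recognize.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Brightness transitions become edges; 50/150 are assumed thresholds.
edges = cv2.Canny(gray, 50, 150)

# Each external contour is a candidate cell outline.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Crop every candidate cell out of the slide by its bounding box.
cell_images = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    cell_images.append(image[y:y + h, x:x + w])
```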
  • Alternatively, the cell images can be determined by a machine learning algorithm: a segmentation model can be trained first, and the trained segmentation model is used to recognize the cell contours in the image to be recognized and to segment the image to obtain the cell images.
  • Training the segmentation model may specifically include the following steps: obtaining a sample image and labeling the cells in the sample image; then training the segmentation model using the labeled sample image to obtain a trained segmentation model.
  • The sample image may be an image containing cells, with the cells in the image annotated as the targets for the segmentation model to learn. For example, the background part of the image may be labeled "0", the cells "1", and so on.
  • Different types of cells can be given different labels. For glomerular tissue, for example, the image background can be labeled "0", mesangial cells "1", podocytes "2", and endothelial cells "3", and so on.
  • The annotated sample image may therefore contain multiple labels.
  • To reduce the amount of features during segmentation model training, the background image may be labeled with a specific value. This specific value can serve as the target label, so that the features of the target label can be handled specially when the model is trained.
  • The segmentation model can be trained by a convolutional neural network algorithm.
  • The network structure can include a convolution layer, a pooling layer, a deconvolution layer, and a cascade layer.
  • The input of the network structure can be a three-channel two-dimensional image, and the final segmentation result is obtained by continually extracting features and classifying each pixel.
  • The convolutional layer can use each convolution kernel to extract specific features at all positions of the input image, realizing weight sharing over the same input image. To extract different features, different convolution kernels can be used for the convolution operations.
  • To make the features more effective, a nonlinear mapping can also be introduced after the convolutional layer.
  • The pooling layer can perform a down-sampling operation on each feature map and can use max pooling, so that the main features are retained while the parameters and computation are reduced and the generalization ability of the model is improved.
  • The deconvolution layer can perform a convolution operation after padding the feature map. Besides extracting features, deconvolution can also enlarge the size of the feature map.
  • The cascade layer is the operation of combining two feature maps: each feature map is first convolved, and the convolved feature maps are then concatenated, which is equivalent to giving the two feature maps different weights before joining them.
  • the segmentation model can also be trained by other methods, such as a U-shaped convolutional network algorithm, which is not particularly limited in this embodiment.
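  • As a toy illustration of the layer types named above (convolution, max pooling, deconvolution, and a cascade/concatenation of feature maps), the following PyTorch sketch builds a small pixel-wise classifier; the channel counts, depth, and class count are assumptions rather than the application's actual network.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, num_classes=4):            # assumed: background + 3 cell types
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)                # max pooling keeps the main features
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)  # deconvolution enlarges the map
        self.fuse = nn.Conv2d(32, 16, 3, padding=1)        # convolution after the cascade
        self.head = nn.Conv2d(16, num_classes, 1)          # classify each pixel

    def forward(self, x):                          # x: (N, 3, H, W) three-channel 2D image
        e = self.enc(x)
        m = self.mid(self.pool(e))
        u = self.up(m)
        cascade = torch.cat([e, u], dim=1)          # cascade layer: combine two feature maps
        return self.head(torch.relu(self.fuse(cascade)))
```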
  • To improve the training efficiency of the segmentation model, the features of the target label may be removed, so that the features of the background image corresponding to the target label are not counted into the loss, accelerating the convergence of the segmentation model.
  • For example, with the loss function dice-coef-loss, image features labeled "0" are not counted into the loss.
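  • One hedged reading of this dice-coef-loss is sketched below: a Dice-style loss computed only over the non-background classes, so pixels carrying the target label "0" contribute nothing.

```python
import torch

def dice_coef_loss_ignore_background(logits, target, num_classes, eps=1e-6):
    # logits: (N, C, H, W) raw scores; target: (N, H, W) with label 0 = background.
    probs = torch.softmax(logits, dim=1)
    loss = 0.0
    for c in range(1, num_classes):                # skip class 0, the target label
        p = probs[:, c]
        t = (target == c).float()
        intersection = (p * t).sum()
        dice = (2 * intersection + eps) / (p.sum() + t.sum() + eps)
        loss += 1 - dice                           # background never enters the loss
    return loss / (num_classes - 1)
```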
  • When a cell contour is recognized, the minimum bounding rectangle of the contour can be cropped out as the target region to obtain a partial image of the sample image.
  • Each partial image can be one cell image. After multiple cell images are obtained, all of them can be enlarged at the same scale, which makes it easier to identify the category of each cell image, as in the sketch below.
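  • A minimal version of this cropping and uniform enlargement, assuming OpenCV; the upright bounding rectangle and the 2x scale factor are simplifying assumptions.

```python
import cv2

def crop_and_scale(image, contours, scale=2.0):
    crops = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)           # bounding rectangle of the contour
        patch = image[y:y + h, x:x + w]            # partial image = one cell image
        # Enlarge every crop at the same scale so categories are easier to judge.
        crops.append(cv2.resize(patch, None, fx=scale, fy=scale,
                                interpolation=cv2.INTER_LINEAR))
    return crops
```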
  • In step S130, the cell images are classified using the trained classification model, and the classification results are obtained to determine the category to which the cell corresponding to each cell image belongs.
  • After the cell images are obtained by segmentation, the classification model can recognize them to determine the cell category corresponding to each cell image; the classification model can be trained with a deep residual network algorithm.
  • The training data may be obtained first.
  • The training data may be cell images, which need to be annotated; different categories of cell images are given different labels, so that a large number of labeled cell images are obtained.
  • The labeled cell images are input into the classification model, and the deep residual network algorithm is used to train it, yielding the trained classification model.
  • The classification model can then identify unrecognized cell images; a minimal training sketch follows.
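  • The following is an illustrative sketch of such a residual-network classifier; the choice of torchvision's resnet18, the class count, and the hyper-parameters are assumptions, not the application's specification.

```python
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(num_classes=3)   # assumed: 3 cell categories
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # images: (N, 3, H, W) labeled cell crops; labels: (N,) category indices.
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```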
  • The classification model can be trained by continually accumulating new data.
  • A cross-layer connected deep residual network can be used for classification and identification, with residual learning applied to each group of network layers in the deep residual network.
  • As the depth of a convolutional neural network increases, the learning ability of the later layers degrades; a residual network can address this gradient problem in deep neural networks. It breaks the convention of traditional neural networks that the output of layer n-1 can only be fed to layer n as input, allowing the output of one layer to skip several layers and serve directly as the input of a later layer. Through such a cross-layer connected deep network, the small differences between cells are learned, enabling accurate classification of multiple types of cells.
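  • The cross-layer connection can be pictured with a standard residual block, sketched below; this is the generic ResNet pattern, not necessarily the application's exact layer grouping.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        # The input skips across the two convolutions and is added back,
        # so the block only has to learn the small residual differences.
        return self.act(self.body(x) + x)
```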
  • The deep residual network algorithm can complete the training of the classification model. After the trained classification model is obtained, it can be used to classify and recognize the cell images and determine the classification results. Moreover, the classification model can be trained again by an active learning method to achieve higher classification accuracy.
  • Referring to Fig. 2, this embodiment may include the following steps: step S201, inputting verification data into the classification model to obtain recognition results of the verification data, wherein the verification data includes a plurality of unrecognized cell sample images; step S202, extracting the cell sample images whose predicted probability in the recognition results is below the preset value to obtain training data; step S203, retraining the classification model based on the training data to obtain the target classification model; step S204, inputting the cell images into the target classification model to obtain the classification results.
  • A cell sample image may refer to an image of a single unrecognized cell, or to a slice image containing multiple types of cells.
  • A large number of unrecognized cell images are acquired as verification data, and the verification data is input into the classification model.
  • The classification model can recognize the verification data and predict the category of each cell sample image to obtain the prediction results.
  • For example, if the prediction result is that the probability of an image belonging to category A is 0.5, and mesangial cell images are labeled A, this means the probability that the image shows a mesangial cell is 50%.
  • Each cell sample image in the verification data is predicted, and the prediction result of each is obtained. Then, the cell sample images whose predicted probability is lower than the preset value are screened out and used as training data.
  • The preset value can be set according to actual requirements, for example 0.5, 0.2, or 0.1, or other probability values such as 0.3 or 0.4, which is not limited in this embodiment. If the classification model's predicted probability for a cell sample image is low, below the preset value, this indicates the model's uncertainty about the features of that image; such a cell sample image has greater learning value for the classification model and is more conducive to improving its accuracy.
  • The cell sample images whose predicted probability is below the preset value are therefore extracted as training data, and the classification model is trained again with this data to obtain the target classification model.
  • The algorithm for training the target classification model may be the same as that of the above classification model, for example a deep residual network algorithm, or different from it, for example a decision tree algorithm.
  • The cell image to be recognized is input into the target classification model, and the classification result is obtained by recognizing the cell image.
  • The target classification model can also be trained again: the cell images for which it produces lower predicted probabilities are extracted to train it once more, yielding a new classification model. A sketch of this uncertainty-based selection follows.
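  • A minimal sketch of the selection step of this active-learning loop, assuming a PyTorch classifier; the 0.5 preset is one of the example values above.

```python
import torch

@torch.no_grad()
def select_uncertain(model, images, preset=0.5):
    # images: (N, 3, H, W) unlabeled cell sample images.
    probs = torch.softmax(model(images), dim=1)    # (N, C) predicted probabilities
    top_prob, _ = probs.max(dim=1)                 # confidence of the predicted category
    # Samples the model is unsure about have the greatest learning value;
    # after labeling, they become the training data for the next round.
    return images[top_prob < preset]
```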
  • In step S140, the classification results are marked in the image to be recognized according to the categories to which the cells belong.
  • Each cell in the image to be recognized can be marked, and the marked image can be displayed to the user; for example, mesangial cells are marked "1", podocytes "2", and endothelial cells "3", and so on. Marking the classification results in the image to be recognized lets users see more intuitively the categories of cells it contains, which facilitates their research and judgment.
  • For example, if the image to be recognized is a renal pathological section, displaying the marked image to doctors makes disease diagnosis convenient and avoids the errors that arise when stained cells are hard to distinguish because their colors are similar.
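  • Writing the category marks back onto the image could look like the following sketch; the box list, label strings, and colors are illustrative assumptions.

```python
import cv2

def annotate(image, boxes, categories):
    # boxes: list of (x, y, w, h) from the earlier crops;
    # categories: matching label strings such as "1", "2", "3".
    out = image.copy()
    for (x, y, w, h), cat in zip(boxes, categories):
        cv2.rectangle(out, (x, y), (x + w, y + h), (0, 255, 0), 1)
        cv2.putText(out, cat, (x, max(y - 3, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    return out
```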
  • As shown in Fig. 3, the cell classification device 300 may include an image acquisition unit 310, a cell positioning unit 320, a cell classification unit 330, and a classification marking unit 340.
  • The image acquisition unit 310 may be used to acquire an image to be recognized containing a plurality of cells.
  • The cell positioning unit 320 is used to determine the cell contour information in the image to be recognized and to segment, from the image to be recognized according to the cell contour information, the cell images corresponding to the plurality of cells; the cell classification unit 330 is configured to classify the cell images using the trained classification model and obtain the classification results to determine the category to which the cell corresponding to each cell image belongs; the classification marking unit 340 is configured to mark the classification results in the image to be recognized according to the categories to which the cells belong.
  • The cell classification unit 330 may include: a first recognition unit configured to input verification data into the classification model to obtain recognition results of the verification data, wherein the verification data includes a plurality of unrecognized cell sample images; a data acquisition unit for extracting the cell sample images whose predicted probability in the recognition results is below a preset value to obtain training data; a classification model training unit for training the classification model again based on the training data to obtain a target classification model; and a second recognition unit for inputting the cell images into the target classification model to obtain the classification results.
  • The cell positioning unit 320 may be used to recognize the cell contours in the image to be recognized using the trained segmentation model and to segment the image to be recognized to obtain the cell images.
  • The cell classification device 300 further includes: a labeling unit for acquiring a sample image and annotating the cells in the sample image; and a first model training unit for training the segmentation model using the annotated sample image to obtain a trained segmentation model.
  • The labeling unit is configured to label different categories of cells in the sample image with different labels, wherein the background image of the sample image is labeled with a target label.
  • The model training unit may be used to determine the loss function of the segmentation model based on the labels of the annotated sample image, so that the segmentation model recognizes the background image corresponding to the target label and the cell images corresponding to labels other than the target label.
  • The cell classification device 300 further includes: a second model training unit configured to train the classification model using the annotated sample image, so that the trained classification model recognizes cell images of different categories.
  • Since each functional module of the cell classification device of the exemplary embodiments of the present application corresponds to the steps of the above exemplary embodiments of the cell classification method, for details not disclosed in the device embodiments of the present application, please refer to the above embodiments of the cell classification method of the present application.
  • Fig. 4 shows a schematic structural diagram of a computer system 400 suitable for implementing an electronic device according to an embodiment of the present application.
  • The computer system 400 of the electronic device shown in Fig. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
  • The computer system 400 includes a central processing unit (CPU) 401, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage section 408 into a random access memory (RAM) 403.
  • In the RAM 403, various programs and data required for system operation are also stored.
  • The CPU 401, the ROM 402, and the RAM 403 are connected to one another through a bus 404.
  • An input/output (I/O) interface 405 is also connected to the bus 404.
  • The following components are connected to the I/O interface 405: an input section 406 including a keyboard, a mouse, and the like; an output section 407 including a cathode ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker and the like; a storage section 408 including a hard disk and the like; and a communication section 409 including a network interface card such as a LAN card or a modem. The communication section 409 performs communication processing via a network such as the Internet.
  • A drive 410 is also connected to the I/O interface 405 as needed.
  • A removable medium 411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 410 as needed, so that a computer program read from it is installed into the storage section 408 as needed.
  • In particular, according to the embodiments of the present application, the process described above with reference to the flowchart can be implemented as a computer software program.
  • The embodiments of the present application include a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • The computer program may be downloaded and installed from the network through the communication section 409, and/or installed from the removable medium 411.
  • When the computer program is executed by the central processing unit (CPU) 401, the above-mentioned functions defined in the system of the present application are executed.
  • The computer-readable medium shown in this application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • The computer-readable storage medium may be non-volatile or volatile.
  • The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
  • Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • A computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code.
  • This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • The computer-readable medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • The program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Investigating Or Analysing Biological Materials (AREA)

Abstract

A cell classification method, a cell classification device, a computer-readable medium, and an electronic device, applicable to digital healthcare. The cell classification method includes: acquiring an image to be recognized that contains a plurality of cells (S110); determining cell contour information in the image to be recognized, and segmenting, from the image to be recognized according to the cell contour information, the cell images respectively corresponding to the plurality of cells (S120); classifying the cell images using a trained classification model and obtaining classification results so as to determine the category to which the cell corresponding to each cell image belongs (S130); and marking the classification results in the image to be recognized according to the categories to which the cells belong (S140). By obtaining the cell images in the image to be recognized, classifying the cells on the basis of the cell images to determine the classification results, and marking the classification results in the image to be recognized, the accuracy of cell classification can be improved.

Description

Cell classification method, device, medium, and electronic device
This application claims priority to the Chinese patent application No. 2019108888517, entitled "Cell classification method, device, medium, and electronic device" and filed with the China Patent Office on September 19, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of biometric technology, and in particular to a cell classification method, a cell classification device, a computer-readable medium, and an electronic device.
Background
With the maturation of artificial intelligence, machine learning is being applied ever more widely, for example in data mining, natural language processing, and DNA sequence prediction. In biological research, the workload of recognizing and classifying the vast number of biological features is enormous for researchers, and among biological features the analysis of cellular features is both fundamental and important.
At present, the analysis of cells relies mainly on classification and recognition by the researcher's naked eye, and observing large amounts of sample data easily introduces errors caused by fatigue. Alternatively, some common cell staining methods are used to classify and recognize cells, but the staining reaction is limited by the cells' own characteristics, so different cells may present the same or similar colors, which is unfavorable for research. For example, in disease diagnosis, after a pathological image is stained, a doctor judges from the stained image whether the cells are pathologically changed, among other medical questions. The inventor realized that when doctors examine pathological images, diagnostic errors are easily caused by various problems such as work pressure and visual fatigue.
Therefore, a cell recognition method is urgently needed to solve or mitigate the above problems.
It should be noted that the information disclosed in this Background section is only intended to enhance the understanding of the background of this application, and may therefore include information that does not constitute prior art already known to a person of ordinary skill in the art.
Technical Problem
The purpose of the embodiments of this application is to provide a cell classification method so as to overcome, at least to some extent, the problem of low cell classification accuracy.
Technical Solution
According to a first aspect of the embodiments of this application, a cell classification method is provided, including: acquiring an image to be recognized that contains a plurality of cells; determining cell contour information in the image to be recognized, and segmenting, from the image to be recognized according to the cell contour information, the cell images respectively corresponding to the plurality of cells; classifying the cell images using a trained classification model and obtaining classification results so as to determine the category to which the cell corresponding to each cell image belongs; and marking the classification results in the image to be recognized according to the categories to which the cells belong.
In an exemplary embodiment of this application, classifying the cell images using the trained classification model and obtaining the classification results includes: inputting verification data into the classification model to obtain recognition results of the verification data, wherein the verification data includes a plurality of unrecognized cell sample images; extracting the cell sample images whose predicted probability in the recognition results is below a preset value to obtain training data; training the classification model based on the training data to obtain a target classification model; and inputting the cell images into the target classification model to obtain the classification results.
In an exemplary embodiment of this application, determining the cell contour information in the image to be recognized and segmenting the cell images respectively corresponding to the plurality of cells from the image to be recognized according to the cell contour information includes: recognizing the cell contours in the image to be recognized using a trained segmentation model, and segmenting the image to be recognized to obtain the cell images.
In an exemplary embodiment of this application, before the trained segmentation model is used to recognize the cells in the image to be recognized, the method further includes: acquiring a sample image and annotating the cells in the sample image; and training the segmentation model using the annotated sample image to obtain the trained segmentation model.
In an exemplary embodiment of this application, acquiring the sample image and annotating the cells in the sample image includes: labeling different categories of cells in the sample image with different labels, wherein the background image of the sample image is labeled with a target label.
In an exemplary embodiment of this application, training the segmentation model using the annotated sample image includes: determining the loss function of the segmentation model based on the labels of the annotated sample image, so that the segmentation model recognizes the background image corresponding to the target label and the cell images corresponding to the labels other than the target label.
In an exemplary embodiment of this application, before the trained classification model is used to classify the cell images and obtain the classification results, the method further includes: training the classification model using the annotated sample image, so that the trained classification model recognizes cell images of different categories.
According to a second aspect of the embodiments of this application, a cell classification device is provided, including: an image acquisition unit for acquiring an image to be recognized that contains multiple kinds of cells; a cell positioning unit for determining the cell contour information in the image to be recognized and segmenting, from the image to be recognized according to the cell contour information, the cell images respectively corresponding to the plurality of cells; a cell classification unit for classifying the cell images using a trained classification model and obtaining classification results so as to determine the category to which the cell corresponding to each cell image belongs; and a classification marking unit for marking the classification results in the image to be recognized according to the categories to which the cells belong.
According to a third aspect of the embodiments of this application, a computer-readable medium is provided, on which a computer program is stored; when executed by a processor, the program implements the cell classification method described in the first aspect of the above embodiments.
According to a fourth aspect of the embodiments of this application, an electronic device is provided, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the cell classification method described in the first aspect of the above embodiments.
Beneficial Effects
In the technical solutions provided by some embodiments of this application, first, by separating out the cell images in the image to be recognized, classifying the cell images, and marking the categories of the cells in the image to be recognized, each cell in the image to be recognized can be classified and identified, which saves the time of manual one-by-one judgment and improves recognition efficiency. Second, the difficulty of distinguishing similar staining colors caused by the physical limitations of cells can be reduced, improving the accuracy of cell classification and recognition. Third, marking the classification results in the image to be recognized presents them more intuitively to researchers, allowing them to make judgments and draw conclusions more quickly, thereby improving the experience.
Brief Description of the Drawings
Fig. 1 schematically shows a flowchart of a cell classification method according to an embodiment of this application.
Fig. 2 schematically shows a flowchart of a cell classification method according to another embodiment of this application.
Fig. 3 schematically shows a block diagram of a cell classification device according to an embodiment of this application.
Fig. 4 shows a schematic structural diagram of a computer system suitable for implementing an electronic device of an embodiment of this application.
Embodiments of the Invention
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the example embodiments can be implemented in many forms and should not be construed as limited to the examples set forth here; rather, these embodiments are provided so that this application will be more thorough and complete and will fully convey the concepts of the example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a full understanding of the embodiments of this application. However, those skilled in the art will appreciate that the technical solutions of this application may be practiced without one or more of the specific details, or that other methods, components, devices, steps, and so on may be employed. In other cases, well-known methods, devices, implementations, or operations are not shown or described in detail so as not to obscure aspects of this application.
The block diagrams shown in the drawings are merely functional entities and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flowcharts shown in the drawings are merely illustrative; they need not include all contents and operations/steps, nor need they be executed in the order described. For example, some operations/steps may be decomposed while others may be combined or partially combined, so the actual order of execution may change according to the actual situation.
The applicant found that cell recognition is of great significance to fields such as biological research and medical diagnosis. Researchers usually stain cells and classify them by the colors the stained cells present; however, because of the cells' characteristics, the colors they present are quite similar, resulting in large classification errors.
On this basis, this example embodiment first proposes a cell classification method, the executing entity of which may be a device with computing and processing capability, such as a server or a cloud host. As shown in Fig. 1, the cell classification method may include steps S110, S120, S130, and S140, wherein: in step S110, an image to be recognized containing a plurality of cells is acquired; in step S120, cell contour information in the image to be recognized is determined, and the cell images respectively corresponding to the plurality of cells are segmented from the image to be recognized according to the cell contour information; in step S130, the cell images are classified using a trained classification model, and classification results are obtained so as to determine the category to which the cell corresponding to each cell image belongs; and in step S140, the classification results are marked in the image to be recognized according to the categories to which the cells belong.
In the technical solution provided by this example embodiment, first, by separating out the cell images in the image to be recognized, classifying the cell images, and marking the categories of the cells in the image to be recognized, each cell in the image to be recognized can be classified and identified, which saves the time of manual one-by-one judgment and improves recognition efficiency. Second, the difficulty of distinguishing similar staining colors caused by the physical limitations of cells can be reduced, improving the accuracy of cell classification and recognition. Third, marking the classification results in the image to be recognized presents them more intuitively to researchers, allowing them to make judgments and draw conclusions more quickly, thereby improving the experience.
Each step of this example embodiment is described in more detail below.
Referring to Fig. 1, in step S110, an image to be recognized containing a plurality of cells may be acquired.
The image to be recognized may refer to a slice image of biological tissue. Different biological tissues contain different kinds of cells; after a biological tissue is sectioned, the image of the section under a microscope may serve as the image to be recognized. The image to be recognized may contain thousands of cells, and cells of the same kind also differ in morphology. The target biological tissue to be recognized may be sectioned to obtain the image to be recognized. Alternatively, a pathological section of a patient's target tissue may be acquired from the database of a medical platform as the image to be recognized. In this example embodiment, glomerular tissue is taken as an example to explain the recognition process of the image to be recognized; of course, the embodiments of this application are not limited thereto, and in other embodiments cells contained in other tissues may also be recognized.
In step S120, the cell contour information in the image to be recognized is determined, and the cell images respectively corresponding to the plurality of cells are segmented from the image to be recognized according to the cell contour information.
By performing image processing on the image to be recognized, the cell contour information in the image can be determined, so that the cell images can be separated from the image to be recognized according to the cell contour information. Based on the brightness of the image to be recognized, the places where the brightness changes can be determined as the positions of edges in the image, so that the cells at those positions can be separated. An edge detection algorithm can be used to perform edge detection on the image to be recognized, thereby determining the cell contours and segmenting out the cells to obtain the cell images. Alternatively, the cell images in the image to be recognized can be determined by a machine learning algorithm: a segmentation model can first be trained, and the trained segmentation model is used to recognize the cell contours in the image to be recognized and to segment the image to obtain the cell images. Training the segmentation model may specifically include the following steps: acquiring a sample image and annotating the cells in the sample image; and training the segmentation model using the annotated sample image to obtain the trained segmentation model. Specifically, the sample image may be an image containing cells, with the cells annotated as the targets for the segmentation model to learn; for example, the background part of the image may be labeled "0" and the cells "1", and so on. In addition, different kinds of cells may be given different labels; for glomerular tissue cells, for example, the image background may be labeled "0", mesangial cells "1", podocytes "2", and endothelial cells "3", and so on. The annotated sample image may contain multiple labels. To reduce the amount of features during segmentation model training, the background image may be labeled with a specific value, which can serve as the target label, so that the features of the target label can be handled when the model is trained.
Then, the labeled images are used as training data to train the segmentation model. Illustratively, the segmentation model may be trained with a convolutional neural network algorithm. The network structure may include convolution layers, pooling layers, deconvolution layers, and cascade layers; its input may be a three-channel two-dimensional image, and the final segmentation result is obtained by continually extracting features and classifying each pixel. The convolution layer can use each convolution kernel to extract specific features at all positions of the input image, realizing weight sharing over the same input image; to extract different features, different convolution kernels can be used for the convolution operations. To make the features more effective, a nonlinear mapping can be introduced after the convolution layer. The pooling layer can down-sample each feature map and may adopt max pooling, so that the main features are retained while the parameters and computation are reduced and the generalization ability of the model is improved. The deconvolution layer performs a convolution operation after padding the feature map; besides extracting features, deconvolution can also enlarge the size of the feature map. The cascade layer is the operation of combining two feature maps: each feature map is first convolved, and the convolved feature maps are then concatenated, which is equivalent to giving the two feature maps different weights before joining them. In addition, the segmentation model may also be trained by other methods, such as a U-shaped convolutional network algorithm, which is not particularly limited in this embodiment.
In an exemplary embodiment, to improve the training efficiency of the segmentation model, the features of the target label can be removed, so that the features of the background image corresponding to the target label are not counted into the loss, accelerating the convergence of the segmentation model. For example, with the loss function dice-coef-loss, image features labeled "0" are not counted into the loss. When a cell contour is recognized, the minimum bounding rectangle of the contour can be cropped out as the target region to obtain a partial image of the sample image. Each partial image can be one cell image; after multiple cell images are obtained, all of them can also be enlarged at the same scale, which makes it easier to identify the category of each cell image.
In step S130, the cell images are classified using the trained classification model, and the classification results are obtained so as to determine the category to which the cell corresponding to each cell image belongs.
After the cell images are obtained by segmentation, the classification model can recognize them to determine the cell category corresponding to each cell image; the classification model can be trained with a deep residual network algorithm. Specifically, training data can first be acquired. The training data can be cell images, which need to be annotated, with different categories of cell images given different labels, yielding a large number of annotated cell images. The annotated cell images are input into the classification model, and the deep residual network algorithm is used to train it, producing the trained classification model, which can then recognize unrecognized cell images.
To improve the accuracy of the classification model, it can be trained by continually accumulating new data. Specifically, in this exemplary embodiment, a deep residual network with cross-layer connections can be used for classification and recognition, with residual learning applied to each group of network layers in the deep residual network. As the depth of a convolutional neural network increases, the learning ability of the later layers degrades; using a residual network can address this gradient problem in deep neural networks. It breaks the convention of traditional neural networks that the output of layer n-1 can only be fed to layer n as input, allowing the output of one layer to skip several layers and serve directly as the input of a later layer; through such a cross-layer connected deep network, the small differences between cells are learned, enabling accurate classification of multiple kinds of cells.
The training of the classification model can be completed with the deep residual network algorithm. After the trained classification model is obtained, it can be used to classify and recognize the cell images and determine the classification results. Moreover, the classification model can be trained again with an active learning method to achieve higher classification accuracy. Referring to Fig. 2, this embodiment may include the following steps: step S201, inputting verification data into the classification model to obtain recognition results of the verification data, wherein the verification data includes a plurality of unrecognized cell sample images; step S202, extracting the cell sample images whose predicted probability in the recognition results is below a preset value to obtain training data; step S203, training the classification model again based on the training data to obtain a target classification model; step S204, inputting the cell images into the target classification model to obtain the classification results.
A cell sample image may refer to an image of a single unrecognized cell, or to a slice image containing multiple kinds of cells. A large number of unrecognized cell images are acquired as verification data and input into the classification model, which recognizes the verification data and predicts the category of each cell sample image to obtain the prediction results. For example, if the prediction result is that the probability of an image belonging to category A is 0.5, and mesangial cell images are labeled A, this means the probability that the image shows a mesangial cell is 50%.
Each cell sample image in the verification data is predicted to obtain its prediction result, and the cell sample images whose predicted probability is below the preset value are then screened out and used as training data. The preset value can be set according to actual requirements, for example 0.5, 0.2, or 0.1, or other probability values such as 0.3 or 0.4, which is not limited in this embodiment. If the classification model's predicted probability for a cell sample image is low, below the preset value, this indicates the model's uncertainty about the features of that cell sample image; such an image has greater learning value for the classification model and is more conducive to improving its accuracy.
Therefore, the cell sample images whose predicted probability is below the preset value can be extracted as training data, and the classification model is trained again with this training data to obtain the target classification model. The algorithm for training the target classification model may be the same as that of the above classification model, for example a deep residual network algorithm, or it may be different, for example a decision tree algorithm.
After the target classification model is trained, the cell image to be recognized is input into it, and the classification result is obtained by recognizing the cell image. In addition, the target classification model can also be trained again; that is, the cell images for which the target classification model has lower predicted probabilities are extracted to train it once more, obtaining a new classification model. Through active learning and continual improvement, the classification model becomes ever more accurate, making its recognition of cell images more precise.
In step S140, the classification results are marked in the image to be recognized according to the categories to which the cells belong.
In this example embodiment, each cell in the image to be recognized can be marked, and the marked image can be displayed to the user; for example, mesangial cells are marked "1", podocytes "2", endothelial cells "3", and so on. Marking the classification results in the image to be recognized lets users see more intuitively the categories of cells contained in the image, facilitating their research and judgment. For example, if the image to be recognized is a renal pathological section, displaying the marked image to a doctor makes disease diagnosis convenient and avoids the errors caused when stained cells are hard to distinguish because their colors are similar.
The device embodiments of this application are described below; they can be used to execute the above cell classification method of this application. As shown in Fig. 3, the cell classification device 300 may include an image acquisition unit 310, a cell positioning unit 320, a cell classification unit 330, and a classification marking unit 340. Specifically: the image acquisition unit 310 may be used to acquire an image to be recognized containing a plurality of cells; the cell positioning unit 320 is used to determine the cell contour information in the image to be recognized and to segment, from the image to be recognized according to the cell contour information, the cell images respectively corresponding to the plurality of cells; the cell classification unit 330 is used to classify the cell images using the trained classification model and obtain the classification results so as to determine the category to which the cell corresponding to each cell image belongs; and the classification marking unit 340 is used to mark the classification results in the image to be recognized according to the categories to which the cells belong.
In an exemplary embodiment of this application, the cell classification unit 330 may include: a first recognition unit for inputting verification data into the classification model to obtain recognition results of the verification data, wherein the verification data includes a plurality of unrecognized cell sample images; a data acquisition unit for extracting the cell sample images whose predicted probability in the recognition results is below a preset value to obtain training data; a classification model training unit for training the classification model again based on the training data to obtain a target classification model; and a second recognition unit for inputting the cell images into the target classification model to obtain the classification results.
In an exemplary embodiment of this application, the cell positioning unit 320 may be used to recognize the cell contours in the image to be recognized using the trained segmentation model and to segment the image to be recognized to obtain the cell images.
In an exemplary embodiment of this application, the cell classification device 300 further includes: an annotation unit for acquiring a sample image and annotating the cells in the sample image; and a first model training unit for training the segmentation model using the annotated sample image to obtain the trained segmentation model.
In an exemplary embodiment of this application, the annotation unit is used to label different categories of cells in the sample image with different labels, wherein the background image of the sample image is labeled with a target label.
In an exemplary embodiment of this application, the model training unit may be used to determine the loss function of the segmentation model based on the labels of the annotated sample image, so that the segmentation model recognizes the background image corresponding to the target label and the cell images corresponding to the labels other than the target label.
In an exemplary embodiment of this application, the cell classification device 300 further includes: a second model training unit for training the classification model using the annotated sample image, so that the trained classification model recognizes cell images of different categories.
Since each functional module of the cell classification device of the example embodiments of this application corresponds to the steps of the above example embodiments of the cell classification method, for details not disclosed in the device embodiments of this application, please refer to the above embodiments of the cell classification method of this application.
Referring now to Fig. 4, which shows a schematic structural diagram of a computer system 400 suitable for implementing an electronic device of the embodiments of this application. The computer system 400 of the electronic device shown in Fig. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of this application.
As shown in Fig. 4, the computer system 400 includes a central processing unit (CPU) 401, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage section 408 into a random access memory (RAM) 403. In the RAM 403, various programs and data required for system operation are also stored. The CPU 401, the ROM 402, and the RAM 403 are connected to one another through a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
The following components are connected to the I/O interface 405: an input section 406 including a keyboard, a mouse, and the like; an output section 407 including a cathode ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker and the like; a storage section 408 including a hard disk and the like; and a communication section 409 including a network interface card such as a LAN card or a modem. The communication section 409 performs communication processing via a network such as the Internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 410 as needed, so that a computer program read from it is installed into the storage section 408 as needed.
In particular, according to the embodiments of this application, the process described above with reference to the flowchart can be implemented as a computer software program. For example, the embodiments of this application include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 409 and/or installed from the removable medium 411. When the computer program is executed by the central processing unit (CPU) 401, the above functions defined in the system of this application are executed.
It should be noted that the computer-readable medium shown in this application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be non-volatile or volatile. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this application, the computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. Also in this application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and it may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, and the like, or any suitable combination of the above.
The above are only specific implementations of this application, but the scope of protection of this application is not limited thereto; any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by this application, and these should all be covered by the scope of protection of this application. Therefore, the scope of protection of this application shall be subject to the scope of protection of the claims.

Claims (20)

  1. A cell classification method, comprising:
    acquiring an image to be recognized that contains a plurality of cells;
    determining cell contour information in the image to be recognized, and segmenting, from the image to be recognized according to the cell contour information, cell images respectively corresponding to the plurality of cells;
    classifying the cell images using a trained classification model, and obtaining classification results so as to determine the category to which the cell corresponding to each cell image belongs; and
    marking the classification results in the image to be recognized according to the categories to which the cells belong.
  2. The method according to claim 1, wherein classifying the cell images using the trained classification model and obtaining the classification results comprises:
    inputting verification data into the classification model to obtain recognition results of the verification data, wherein the verification data includes a plurality of unrecognized cell sample images;
    extracting the cell sample images whose predicted probability in the recognition results is below a preset value to obtain training data;
    training the classification model again based on the training data to obtain a target classification model; and
    inputting the cell images into the target classification model to obtain the classification results.
  3. The method according to claim 1, wherein determining the cell contour information in the image to be recognized and segmenting the cell images respectively corresponding to the plurality of cells from the image to be recognized according to the cell contour information comprises:
    recognizing the cell contours in the image to be recognized using a trained segmentation model, and segmenting the image to be recognized to obtain the cell images.
  4. The method according to claim 3, wherein, before the trained segmentation model is used to recognize the cells in the image to be recognized, the method further comprises:
    acquiring a sample image, and annotating the cells in the sample image; and
    training the segmentation model using the annotated sample image to obtain the trained segmentation model.
  5. The method according to claim 4, wherein acquiring the sample image and annotating the cells in the sample image comprises:
    labeling different categories of cells in the sample image with different labels, wherein the background image of the sample image is labeled with a target label.
  6. The method according to claim 5, wherein training the segmentation model using the annotated sample image comprises:
    determining a loss function of the segmentation model based on the labels of the annotated sample image, so that the segmentation model recognizes the background image corresponding to the target label and the cell images corresponding to labels other than the target label.
  7. The method according to claim 5, wherein, before the trained classification model is used to classify the cell images and obtain the classification results, the method further comprises:
    training the classification model using the annotated sample image, so that the trained classification model recognizes cell images of different categories.
  8. A cell classification device, comprising:
    an image acquisition unit for acquiring an image to be recognized that contains a plurality of cells;
    a cell positioning unit for determining cell contour information in the image to be recognized, and segmenting, from the image to be recognized according to the cell contour information, cell images respectively corresponding to the plurality of cells;
    a cell classification unit for classifying the cell images using a trained classification model, and obtaining classification results so as to determine the category to which the cell corresponding to each cell image belongs; and
    a classification marking unit for marking the classification results in the image to be recognized according to the categories to which the cells belong.
  9. An electronic device, comprising a memory and a processor connected to each other, wherein the memory is used to store a computer program including program instructions, and the processor is used to execute the program instructions in the memory to:
    acquire an image to be recognized that contains a plurality of cells;
    determine cell contour information in the image to be recognized, and segment, from the image to be recognized according to the cell contour information, cell images respectively corresponding to the plurality of cells;
    classify the cell images using a trained classification model, and obtain classification results so as to determine the category to which the cell corresponding to each cell image belongs; and
    mark the classification results in the image to be recognized according to the categories to which the cells belong.
  10. The electronic device according to claim 9, wherein the processor is used to:
    input verification data into the classification model to obtain recognition results of the verification data, wherein the verification data includes a plurality of unrecognized cell sample images;
    extract the cell sample images whose predicted probability in the recognition results is below a preset value to obtain training data;
    train the classification model again based on the training data to obtain a target classification model; and
    input the cell images into the target classification model to obtain the classification results.
  11. The electronic device according to claim 9, wherein the processor is used to:
    recognize the cell contours in the image to be recognized using a trained segmentation model, and segment the image to be recognized to obtain the cell images.
  12. The electronic device according to claim 11, wherein the processor is used to:
    acquire a sample image, and annotate the cells in the sample image; and
    train the segmentation model using the annotated sample image to obtain the trained segmentation model.
  13. The electronic device according to claim 12, wherein the processor is used to:
    label different categories of cells in the sample image with different labels, wherein the background image of the sample image is labeled with a target label.
  14. The electronic device according to claim 13, wherein the processor is used to:
    determine a loss function of the segmentation model based on the labels of the annotated sample image, so that the segmentation model recognizes the background image corresponding to the target label and the cell images corresponding to labels other than the target label.
  15. The electronic device according to claim 13, wherein the processor is used to:
    train the classification model using the annotated sample image, so that the trained classification model recognizes cell images of different categories.
  16. A computer-readable storage medium storing a computer program, the computer program including program instructions which, when executed by a processor, implement the following steps:
    acquiring an image to be recognized that contains a plurality of cells;
    determining cell contour information in the image to be recognized, and segmenting, from the image to be recognized according to the cell contour information, cell images respectively corresponding to the plurality of cells;
    classifying the cell images using a trained classification model, and obtaining classification results so as to determine the category to which the cell corresponding to each cell image belongs; and
    marking the classification results in the image to be recognized according to the categories to which the cells belong.
  17. The computer-readable storage medium according to claim 16, wherein the program instructions, when executed by the processor, further implement the following steps:
    inputting verification data into the classification model to obtain recognition results of the verification data, wherein the verification data includes a plurality of unrecognized cell sample images;
    extracting the cell sample images whose predicted probability in the recognition results is below a preset value to obtain training data;
    training the classification model again based on the training data to obtain a target classification model; and
    inputting the cell images into the target classification model to obtain the classification results.
  18. The computer-readable storage medium according to claim 16, wherein the program instructions, when executed by the processor, further implement the following steps:
    recognizing the cell contours in the image to be recognized using a trained segmentation model, and segmenting the image to be recognized to obtain the cell images.
  19. The computer-readable storage medium according to claim 18, wherein the program instructions, when executed by the processor, further implement the following steps:
    acquiring a sample image, and annotating the cells in the sample image; and
    training the segmentation model using the annotated sample image to obtain the trained segmentation model.
  20. The computer-readable storage medium according to claim 19, wherein the program instructions, when executed by the processor, further implement the following steps:
    labeling different categories of cells in the sample image with different labels, wherein the background image of the sample image is labeled with a target label.
PCT/CN2020/093586 2019-09-19 2020-05-30 Cell classification method, device, medium, and electronic device WO2021051875A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910888851.7 2019-09-19
CN201910888851.7A CN110705403A (zh) Cell classification method, device, medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2021051875A1 (zh)

Family

ID=69194756

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/093586 WO2021051875A1 (zh) Cell classification method, device, medium, and electronic device

Country Status (2)

Country Link
CN (1) CN110705403A (zh)
WO (1) WO2021051875A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113128385A (zh) * 2021-04-08 2021-07-16 北京工业大学 Toxic algae monitoring and early-warning method and system
CN115861719A (zh) * 2023-02-23 2023-03-28 北京肿瘤医院(北京大学肿瘤医院) Transferable cell recognition tool
CN117422633A (zh) * 2023-11-15 2024-01-19 珠海横琴圣澳云智科技有限公司 Method and device for processing sample field-of-view images
WO2024051482A1 (zh) * 2022-09-07 2024-03-14 上海睿钰生物科技有限公司 Method, system, and storage medium for automatic analysis of cell monoclonality

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705403A (zh) * 2019-09-19 2020-01-17 平安科技(深圳)有限公司 Cell classification method, device, medium, and electronic device
CN111353435A (zh) * 2020-02-28 2020-06-30 杭州依图医疗技术有限公司 Cell image display method, pathological image analysis system, and storage medium
CN113066080A (zh) * 2021-04-19 2021-07-02 广州信瑞医疗技术有限公司 Slice tissue recognition method and device, cell recognition model, and tissue segmentation model
CN113989294B (zh) * 2021-12-29 2022-07-05 北京航空航天大学 Machine-learning-based cell segmentation and typing method, device, equipment, and medium
CN114067118B (zh) * 2022-01-12 2022-04-15 湖北晓雲科技有限公司 Method for processing aerial photogrammetry data
CN114972222A (zh) * 2022-05-13 2022-08-30 徕卡显微系统科技(苏州)有限公司 Cell information statistics method, device, equipment, and computer-readable storage medium
CN115019305B (зh) * 2022-08-08 2022-11-11 成都西交智汇大数据科技有限公司 Root-tip cell recognition method, device, equipment, and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190065817A1 (en) * 2017-08-29 2019-02-28 Konica Minolta Laboratory U.S.A., Inc. Method and system for detection and classification of cells using convolutional neural networks
CN109886179A (зh) * 2019-02-18 2019-06-14 深圳视见医疗科技有限公司 Mask-RCNN-based image segmentation method and system for cervical cell smears
CN110110799A (zh) * 2019-05-13 2019-08-09 广州锟元方青医疗科技有限公司 Cell classification method, device, computer equipment, and storage medium
CN110135271A (zh) * 2019-04-19 2019-08-16 上海依智医疗技术有限公司 Cell classification method and device
CN110705403A (zh) * 2019-09-19 2020-01-17 平安科技(深圳)有限公司 Cell classification method, device, medium, and electronic device


Also Published As

Publication number Publication date
CN110705403A (zh) 2020-01-17

Similar Documents

Publication Publication Date Title
WO2021051875A1 (zh) Cell classification method, device, medium, and electronic device
CN107895367B (zh) Bone age recognition method, system, and electronic device
CN111161275B (zh) Method, device, and electronic equipment for segmenting target objects in medical images
CN110245657B (zh) Pathological image similarity detection method and detection device
Sahasrabudhe et al. Self-supervised nuclei segmentation in histopathological images using attention
JP2021509713A (ja) Processing of histological images using convolutional neural networks to identify tumors
CN109544518B (zh) Method and system applied to bone maturity assessment
CN109389129A (zh) Image processing method, electronic device, and storage medium
CN113724228A (зh) Tongue color and coating color recognition method, device, computer equipment, and storage medium
Pan et al. Cell detection in pathology and microscopy images with multi-scale fully convolutional neural networks
US11176412B2 (en) Systems and methods for encoding image features of high-resolution digital images of biological specimens
CN111476290A (zh) Detection model training method, lymph node detection method, device, equipment, and medium
Jia et al. Multi-layer segmentation framework for cell nuclei using improved GVF Snake model, Watershed, and ellipse fitting
CN112767355A (zh) Method and device for constructing an automatic Tirads grading recognition model for thyroid nodules
Pan et al. Automatic strawberry leaf scorch severity estimation via faster R-CNN and few-shot learning
CN111667474A (zh) Fracture recognition method, device, equipment, and computer-readable storage medium
CN115601602A (zh) Cancer histopathological image classification method, system, medium, equipment, and terminal
CN111563550A (zh) Image-technology-based sperm morphology detection method and device
CN116485817A (zh) Image segmentation method, device, electronic equipment, and storage medium
Wu et al. A preliminary study of sperm identification in microdissection testicular sperm extraction samples with deep convolutional neural networks
Abdulaal et al. A self-learning deep neural network for classification of breast histopathological images
CN117036288A (zh) Tumor subtype diagnosis method for whole-slide pathological images
CN112489790A (zh) Key data determination method, device, equipment, and storage medium
CN110363240B (zh) Medical image classification method and system
Lin et al. Automated malaria cells detection from blood smears under severe class imbalance via importance-aware balanced group softmax

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20865666

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20865666

Country of ref document: EP

Kind code of ref document: A1