WO2021139258A1 - 基于图像识别的细胞识别计数方法、装置和计算机设备 - Google Patents

Cell recognition and counting method and apparatus based on image recognition, and computer device

Info

Publication number
WO2021139258A1
WO2021139258A1 · PCT/CN2020/118534 · CN2020118534W
Authority
WO
WIPO (PCT)
Prior art keywords
image
counting
cell
target
cell recognition
Prior art date
Application number
PCT/CN2020/118534
Other languages
English (en)
French (fr)
Inventor
郭冰雪
吕传峰
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021139258A1 publication Critical patent/WO2021139258A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • G06T5/90
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30024Cell structures in vitro; Tissue sections in vitro
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30242Counting objects in image

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to a method, device, computer equipment and storage medium for cell recognition and counting based on image recognition.
  • One of the more common examination methods is to judge whether the corresponding tissues and organs show abnormal lesions by examining digital pathology slide images.
  • At present, the diagnosis of blood-related cancers and similar diseases still relies mainly on doctors manually examining images under a microscope. As a result, doctors face an ever-growing diagnostic workload and rising work intensity, and technologies have therefore emerged that assist clinical diagnosis by automatically analyzing digital blood-cell images and automatically recognizing and counting cells.
  • However, the inventors have found that existing cell recognition and counting methods operate on single-field-of-view images captured under a microscope and perform only local image processing, generally in combination with traditional image-processing algorithms. This approach involves many processing steps, is slow, and cannot quickly analyze the information of the whole slide (the global image), which limits the efficiency of cell recognition and counting.
  • In other words, existing methods for recognizing and counting cells in images suffer from low recognition and counting efficiency.
  • a method for cell recognition and counting based on image recognition comprising:
  • the trained neural network for cell recognition and counting is trained based on historical target images containing unconventional cells.
  • a cell recognition and counting device based on image recognition comprising:
  • Image acquisition module for acquiring global digital cell images
  • the image selection module is used to select the regional image in the global digitized cell image, where the regional image is the partial image to be recognized and counted;
  • the color transformation processing module is used to perform non-linear color transformation processing on the regional image to obtain the target image
  • the cell recognition and counting module is used to input the target image into the trained cell recognition and counting neural network, identify the unconventional cells in the target image, and count the unconventional cells identified to obtain the cell recognition and counting result;
  • the trained neural network for cell recognition and counting is trained based on historical target images containing unconventional cells.
  • the device further includes an image processing module, which is used to perform image processing on the target image using an adaptive threshold segmentation algorithm and a dilation-erosion algorithm.
  • a computer device includes a memory and a processor; the memory stores a computer program, and when the processor executes the computer program, the following steps are implemented:
  • the trained neural network for cell recognition and counting is trained based on historical target images containing unconventional cells.
  • a computer-readable storage medium having a computer program stored thereon, and when the computer program is executed by a processor, the following steps are implemented:
  • the trained neural network for cell recognition and counting is trained based on historical target images containing unconventional cells.
  • The above cell recognition and counting method, device, computer equipment and storage medium based on image recognition analyze the slide comprehensively from a global perspective, and the non-linear color transformation brings the colors of the image closer to the real image, which facilitates image recognition. By using the trained cell recognition and counting network to analyze the cells in the target image, unconventional cells can be recognized quickly and accurately and then counted, which improves the efficiency of cell recognition and counting and, at the same time, greatly reduces the workload of manually screening pathology slides for unconventional cells, saving manpower.
  • FIG. 1 is an application environment diagram of a cell recognition and counting method based on image recognition in an embodiment
  • FIG. 2 is a schematic flowchart of a method for cell identification and counting based on image recognition in an embodiment
  • FIG. 3 is a detailed flowchart of a method for cell identification and counting based on image recognition in another embodiment
  • FIG. 4 is a schematic flow chart of the steps of identifying and counting unconventional cells in a target image based on a cell recognition and counting neural network in an embodiment
  • FIG. 5 is a structural block diagram of a cell recognition and counting device based on image recognition in an embodiment
  • FIG. 6 is a structural block diagram of a cell recognition and counting device based on image recognition in another embodiment
  • Fig. 7 is an internal structure diagram of a computer device in an embodiment.
  • the cell recognition and counting method based on image recognition can be applied to the application environment as shown in FIG. 1.
  • The terminal 102 communicates with the server 104 through a network. For example, a user uploads, through the terminal 102, the global digitized cell image obtained by scanning a blood smear with an electronic scanner; the server 104 obtains the global digitized cell image, selects a regional image from it (the regional image being the partial image to be recognized and counted), performs non-linear color transformation on the regional image to obtain a target image, inputs the target image into the trained cell recognition and counting neural network, recognizes the unconventional cells in the target image and counts the recognized unconventional cells, and thereby obtains the cell recognition and counting result, where the trained cell recognition and counting neural network is trained on historical target images containing unconventional cells.
  • the terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
  • the server 104 may be implemented by an independent server or a server cluster composed of multiple servers.
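  • As a rough illustration of this terminal-to-server flow, the sketch below shows a minimal HTTP endpoint that accepts an uploaded region image and returns a count; the choice of FastAPI, the route name and the recognize_and_count helper are assumptions made for illustration and are not part of this application.

```python
# Minimal sketch of the terminal -> server flow described above.
# FastAPI, the route name and recognize_and_count() are illustrative assumptions.
from fastapi import FastAPI, File, UploadFile
import cv2
import numpy as np

app = FastAPI()

def recognize_and_count(image: np.ndarray) -> int:
    """Placeholder for the trained cell recognition and counting network."""
    raise NotImplementedError

@app.post("/count_cells")
async def count_cells(region: UploadFile = File(...)):
    data = np.frombuffer(await region.read(), dtype=np.uint8)
    image = cv2.imdecode(data, cv2.IMREAD_COLOR)      # decode the uploaded region image
    return {"unconventional_cell_count": recognize_and_count(image)}
```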
  • a method for cell identification and counting based on image recognition is provided. Taking the method applied to the server in FIG. 1 as an example for description, the method includes the following steps:
  • Step 202 Obtain a global digital cell image.
  • The global digitized cell image is a digital pathology slide image that contains cell features over the full field of view.
  • The digital pathology slide image is a whole-slide image (WSI): a glass slide is scanned frame by frame under a low-power objective by a digital microscope or magnification system, while the microscopic scanning stage automatically moves along the X and Y axes of the slide and automatically focuses along the Z axis.
  • The scanning control software then acquires high-resolution digital images by program-controlled scanning, and the image compression and storage software stitches the images seamlessly into a single full-field digital slide.
  • Taking a global digitized blood-cell image as an example, the global digitized cell image may be produced by scanning a blood-cell slide with an electronic scanner to generate a global digitized image with multiple levels of resolution.
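  • As a minimal sketch of reading such a multi-resolution whole-slide image, the example below pulls one region at a chosen pyramid level with the OpenSlide library; the file name, pyramid level and region size are arbitrary illustrative choices.

```python
# Sketch: read one region of a multi-resolution whole-slide image (WSI) with OpenSlide.
# The file name, pyramid level and region size are illustrative assumptions.
import numpy as np
import openslide

slide = openslide.OpenSlide("blood_smear.svs")        # hypothetical scanned slide file
print(slide.level_count, slide.level_dimensions)      # available resolution levels

region = slide.read_region(location=(0, 0),           # top-left corner in level-0 coordinates
                           level=1,                   # a lower-resolution pyramid level
                           size=(2048, 2048))         # width, height in pixels at that level
region_rgb = np.array(region.convert("RGB"))          # PIL RGBA image -> RGB numpy array
slide.close()
```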
  • Step 204 Select a regional image in the global digitized cell image, and the regional image is a partial image to be recognized and counted.
  • Because the global digitized cell image contains complex information and includes some blank areas, the regions of the global digitized cell image can be screened for validity, and image blocks containing a sufficient number of cells to be recognized and counted are selected. Specifically, there may be more than one regional image.
  • Selecting the regional image from the global digitized cell image includes: step 224, obtaining a cell density index of the global digitized cell image and, based on the cell density index, selecting the regional image of the global digitized cell image.
  • In a specific implementation, the regional image may be selected by obtaining the cell density of the global digitized cell image and automatically choosing the regional image with an algorithm based on that density; a sketch of one such selection heuristic is given below.
  • In other embodiments, the regional image may instead be selected by a doctor with professional knowledge and rich clinical experience.
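  • The cell density index is not specified further here; the sketch below therefore uses one simple proxy, the fraction of non-background pixels in each candidate tile of a downsampled overview image, and keeps the densest tiles. The tile size, intensity threshold and number of tiles kept are assumptions, not the patent's algorithm.

```python
# Illustrative sketch only: score tiles of a downsampled overview image by a crude
# cell-density proxy (fraction of non-background pixels) and keep the densest tiles.
import cv2
import numpy as np

def select_dense_tiles(overview_bgr: np.ndarray, tile: int = 512, top_k: int = 4):
    gray = cv2.cvtColor(overview_bgr, cv2.COLOR_BGR2GRAY)
    foreground = gray < 200                            # stained cells are darker than the bright background
    scores = []
    h, w = gray.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            density = float(foreground[y:y + tile, x:x + tile].mean())
            scores.append((density, (x, y)))
    scores.sort(reverse=True)
    return [corner for _, corner in scores[:top_k]]    # top-left corners of the chosen tiles
```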
  • Step 206 Perform non-linear color transformation processing on the regional image to obtain a target image.
  • In order to improve the accuracy of the subsequent neural network in recognizing cells, after the regional image is obtained, a color transformation algorithm can be used to perform non-linear color processing on the regional image so that its colors are closer to those of the truly displayed image, which makes unconventional cells easier to locate.
  • Performing the non-linear color transformation on the regional image to obtain the target image includes: step 226, applying a gamma-corrected non-linear color transformation algorithm to the regional image to obtain the target image.
  • The algorithm is as follows:
  • V_out = V_in^γ, V ∈ {R, G, B}
  • where V: R, G, B denotes the three color channels processed by the gamma correction, V_in is the value of each pixel in the cell set, V_out is the value of each pixel in the training set, and γ is a gamma-correction coefficient less than 1. Applying this gamma-corrected non-linear color correction adjusts the color rendering of the regional image so that the colors of the target image are closer to those of the truly displayed image, which makes unconventional cells easier to locate and recognize.
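  • In code, this gamma correction amounts to normalizing each channel to [0, 1] and raising it to the power γ, as in the minimal sketch below; γ = 0.8 is used purely as an example, since the description only states that γ is less than 1.

```python
# Gamma-correction color transform V_out = V_in ** gamma, applied to the R, G, B
# channels of an 8-bit image. gamma = 0.8 is an illustrative value smaller than 1.
import numpy as np

def gamma_correct(region_rgb: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    v_in = region_rgb.astype(np.float32) / 255.0       # normalize each channel to [0, 1]
    v_out = np.power(v_in, gamma)                      # per-pixel, per-channel power law
    return np.clip(v_out * 255.0, 0, 255).round().astype(np.uint8)
```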
  • Step 208 Input the target image into the trained cell recognition and counting neural network, identify unconventional cells in the target image, and count the identified unconventional cells to obtain a cell recognition and counting result.
  • the trained neural network for cell recognition and counting is trained based on historical target images containing unconventional cells.
  • the trained cell recognition and counting network is a fully convolutional network.
  • In the network training stage, images containing unconventional cells are input into the initial cell recognition and counting network (a fully convolutional network) for pre-training, and a fully connected layer is then used in place of the head of the fully convolutional network to fine-tune the network, so that the fully convolutional network acquires the ability to extract the features of unconventional cells, yielding the cell recognition and counting neural network; a rough sketch of this two-stage scheme is given below.
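  • The sketch below is one loose reading of this two-stage scheme: a pre-trained fully convolutional backbone is kept fixed while a small fully connected head is attached and fine-tuned on images containing unconventional cells. The choice of ResNet-50, the frozen backbone and the two-class head are assumptions for illustration, not the patent's exact training recipe.

```python
# Sketch of the idea: pre-trained fully convolutional backbone + fine-tuned fully
# connected head. ResNet-50, the frozen backbone and the 2-class head are assumptions.
import torch
import torch.nn as nn
from torchvision import models

backbone = nn.Sequential(*list(models.resnet50(weights="IMAGENET1K_V1").children())[:-2])
for p in backbone.parameters():
    p.requires_grad = False                            # keep the pre-trained features fixed

head = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(2048, 2),                                # e.g. conventional vs. unconventional cell
)
model = nn.Sequential(backbone, head)
optimizer = torch.optim.SGD(head.parameters(), lr=1e-3, momentum=0.9)
```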
  • The non-linear color transformation brings the colors of the image closer to the real image and facilitates image recognition, and using the trained cell recognition and counting network to analyze the cells in the target image allows the unconventional cells to be recognized quickly and accurately and then counted. This improves the efficiency of cell recognition and counting and, at the same time, greatly reduces the workload of manually screening pathology slides for unconventional cells, saving manpower.
  • inputting the target image into a trained neural network for cell recognition and counting, identifying unconventional cells in the target image, and counting the identified unconventional cells includes:
  • Step 220 Extract feature data of the input target image through the convolutional layer of the cell recognition and counting neural network, and perform a convolution operation on the feature data to obtain a corresponding feature map;
  • Step 240 Input the feature map to a preset classification network in the cell recognition and counting neural network, and combine the preset fitting weights to identify unconventional cells and count the identified unconventional cells.
  • Each convolutional layer of the cell recognition and counting neural network comprises a convolution kernel, a batch-normalization (BN) layer and an activation function: the convolution kernel extracts the feature data of the input target image, the BN layer normalizes the feature data, and the activation function then turns the whole network from a linear mapping into a non-linear one; one such block is sketched below.
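  • One such convolution block might look like the following sketch; the kernel size, channel counts and the choice of ReLU are assumptions, since the description does not name the activation function.

```python
# One convolutional block as described: convolution kernel, batch-normalization (BN)
# layer, then a non-linear activation. Kernel size, channels and ReLU are assumptions.
import torch.nn as nn

def conv_bn_act(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),                        # feature normalization
        nn.ReLU(inplace=True),                         # makes the mapping non-linear
    )
```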
  • Specifically, the fully convolutional network removes the final fully connected layer of a generic network so that the network outputs a two-dimensional feature map carrying cell positions. The feature map is then input into the preset classification network, and classification prediction is performed according to the preset classification network, the feature map and the fitting weights determined during training of the cell recognition and counting network, which gives the cell location information and the probability that unconventional cells are present; the unconventional cells are thereby recognized and the recognized unconventional cells are counted.
  • The fitting weights are determined from the loss function back-propagated during training, followed by the gradient computation.
  • In a specific implementation, the target image can be input into a ResNet-based preliminary feature extractor; the basic ResNet network performs the feature extraction to obtain the feature data, and a convolution operation on the feature data then yields 1024 corresponding feature maps.
  • A feature map is the two-dimensional array obtained from the operation of each convolutional layer.
  • In this embodiment, the trained cell recognition and counting neural network can recognize the unconventional cells in the target image quickly and accurately; a sketch of the feature-extraction step follows.
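  • As a concrete illustration of the feature-extraction step, the sketch below passes a target image through a ResNet backbone truncated after its third stage, which happens to produce a 1024-channel feature map, matching the count quoted above; the exact backbone configuration used in practice is an assumption.

```python
# Sketch: extract feature maps from the target image with a truncated ResNet backbone.
# ResNet-50 cut after layer3 yields 1024 feature maps; the exact backbone is an assumption.
import torch
from torchvision import models

resnet = models.resnet50(weights=None)                 # or pre-trained weights
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-3])  # up to layer3

target = torch.randn(1, 3, 512, 512)                   # placeholder target-image tensor
with torch.no_grad():
    feature_maps = feature_extractor(target)
print(feature_maps.shape)                              # torch.Size([1, 1024, 32, 32])
```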
  • step 240 includes:
  • Step 242: Input the feature map into an RPN network with fully connected layers, and perform classification prediction according to the preset fitting weights and a preset softmax function, obtaining the coordinate information of the prediction boxes extracted by the RPN network and the category confidence corresponding to each prediction box;
  • Step 244: Screen the prediction boxes with an intersection-over-union (IoU) algorithm to determine the target prediction boxes;
  • Step 246: According to the coordinate information of the target prediction boxes and their corresponding category confidences, recognize the unconventional cells and count the recognized unconventional cells.
  • The category confidence is the confidence of the class to which a cell belongs. Confidence, also called reliability, confidence level, confidence coefficient or confidence interval, expresses the probability with which the true value of a parameter falls around the measured result, that is, how credible the measured value of the parameter is.
  • In this embodiment it is the category confidence obtained by classifying the cells (for example, the probability that a cell is unconventional). In a specific implementation, the obtained feature maps are input into the corresponding RPN network for computation.
  • The RPN network includes n*4 and n*1 fully connected layers, where n is determined by the dimensions of the preceding network layer, and the fitting weights are those determined by training the cell recognition and counting network. A softmax function, of the form softmax(x)_j = exp(θ_j^T x) / Σ_k exp(θ_k^T x), is used to compute the coordinate information of the four points of each prediction box extracted by the RPN network and the corresponding category confidence (that is, the probability that an unconventional cell is present). Here x is the column of array values produced by the preceding fully connected layer, and θ^T is the weight parameter that the network needs to train, i.e. the parameter optimized during training. The prediction boxes obtained in this way are then screened with an intersection-over-union (IoU) threshold; for example, three thresholds (0.5, 0.6, 0.7) can be applied for three-level screening, and unconventional cells whose output probability exceeds 0.5 are taken as the recognition result. A simplified sketch of such a prediction head is given below.
  • In this embodiment, analyzing the target image with the cell recognition and counting network makes it possible to determine the locations of unconventional cells quickly and to classify the validly discriminated regions more effectively; by combining and voting over the prediction results of several ordinary classification networks, more stable classification results are output and then counted, yielding detailed analysis results that save the time doctors spend identifying and counting cells and reduce their burden. The IoU screening and counting step is sketched below.
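  • For reference, the screening and counting step can be sketched as an IoU computation followed by greedy suppression and a final confidence cut; a single suppression pass is shown for clarity, whereas the description applies three preset thresholds (0.5, 0.6, 0.7) in succession, so this is an illustrative reading rather than a verbatim implementation.

```python
# Illustrative sketch: intersection-over-union (IoU) screening of prediction boxes
# followed by counting the boxes whose confidence exceeds 0.5. A single suppression
# pass is shown; the described three-level screening (0.5, 0.6, 0.7) is simplified here.
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def screen_boxes(boxes: np.ndarray, scores: np.ndarray, thr: float = 0.5) -> list:
    kept = []
    for i in np.argsort(scores)[::-1]:                 # highest confidence first
        if all(iou(boxes[i], boxes[j]) < thr for j in kept):
            kept.append(int(i))
    return kept

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [80, 80, 120, 120]], float)
scores = np.array([0.9, 0.6, 0.7])
kept = screen_boxes(boxes, scores)
count = int((scores[kept] > 0.5).sum())                # cells reported with confidence > 0.5
```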
  • In one embodiment, before the target image is input into the trained cell recognition and counting neural network, the method further includes: performing image processing on the target image using an adaptive threshold segmentation algorithm and a dilation-erosion algorithm.
  • Threshold segmentation is a region-based image segmentation technique.
  • Its principle is to divide the image pixels into several classes.
  • The purpose of image thresholding is to partition the set of pixels according to gray level so that each resulting subset forms a region corresponding to a real-world object: each region has consistent internal attributes, while adjacent regions do not share those attributes.
  • Such a partition can be achieved by selecting one or more thresholds over the gray levels.
  • In the thresholding operation, the main goal is to separate the target region (the cell region) and the background region in the binarized target image, but it is difficult to achieve an ideal segmentation with a single fixed threshold, so an adaptive threshold segmentation algorithm can be used instead.
  • Specifically, the binarization threshold at each pixel location is determined from the distribution of pixel values in that pixel's neighborhood block.
  • In a specific implementation, after the regional image has been gamma-corrected with the preset gamma threshold to obtain the target image, an adaptive threshold segmentation algorithm can be used to distinguish the cells from the background: a local threshold is computed from the brightness distribution of each region of the image and used for threshold segmentation, so that different thresholds are computed adaptively for different regions, and a morphological dilation-erosion step then filters out the small impurities that appear during thresholding, making the target image cleaner and more standardized; a sketch of this pre-processing step follows.
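  • A minimal sketch of this pre-processing, assuming OpenCV and arbitrarily chosen block size, offset and kernel size, is given below; it computes a local threshold per neighborhood block and then applies a morphological opening (erosion followed by dilation) to remove small impurities.

```python
# Sketch: adaptive threshold segmentation followed by morphological opening (erosion
# then dilation) to filter small impurities. Block size, offset C and kernel size are
# illustrative choices.
import cv2
import numpy as np

def clean_segmentation(target_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2GRAY)
    binary = cv2.adaptiveThreshold(gray, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,  # local (neighborhood) threshold
                                   cv2.THRESH_BINARY_INV,           # cells darker than background
                                   51, 5)                           # block size and offset C
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)         # erosion then dilation
```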
  • a cell recognition and counting device based on image recognition which includes: an image acquisition module 510, an image selection module 520, a color conversion processing module 530, and a cell recognition and counting module 540, among them:
  • the image acquisition module 510 is used to acquire global digitized cell images.
  • the image selection module 520 is used to select a regional image in the global digitized cell image, and the regional image is a partial image to be recognized and counted.
  • the color conversion processing module 530 is configured to perform non-linear color conversion processing on the regional image to obtain the target image.
  • the cell recognition and counting module 540 is used to input the target image into the trained cell recognition and counting neural network, identify unconventional cells in the target image, and count the unconventional cells identified to obtain the cell recognition and counting result, where
  • the trained neural network for cell recognition and counting is trained based on historical target images containing unconventional cells.
  • In one embodiment, the cell recognition and counting module 540 is also used to extract the feature data of the input target image through the convolutional layers of the cell recognition and counting neural network, perform convolution operations on the feature data to obtain the corresponding feature maps, input the feature maps into the preset classification network in the cell recognition and counting neural network and, in combination with the preset fitting weights, recognize unconventional cells and count the recognized unconventional cells, where the preset fitting weights are obtained during training of the cell recognition and counting neural network from the back-propagated loss function and the weight-gradient computation.
  • In one embodiment, the cell recognition and counting module 540 is also used to input the feature maps into an RPN network with fully connected layers, perform classification prediction according to the preset fitting weights and the preset softmax function to obtain the coordinate information of the prediction boxes extracted by the RPN network and the category confidence corresponding to each prediction box, screen the prediction boxes with an intersection-over-union algorithm to determine the target prediction boxes and, according to the coordinate information of the target prediction boxes and their corresponding category confidences, recognize unconventional cells and count the recognized unconventional cells.
  • In one embodiment, the cell recognition and counting module 540 is also used to select three preset thresholds, based on the intersection-over-union algorithm, to perform three-level screening of the prediction boxes and determine the target prediction boxes.
  • the image selection module 520 is also used to obtain the cell density index of the global digitized cell image, and based on the cell density index, select the regional image of the global digitized cell image.
  • the color conversion processing module 530 is further configured to use a gamma-corrected non-linear color conversion algorithm to perform non-linear color change processing on the regional image to obtain the target image.
  • the device further includes an image processing module 550, which is further configured to use an adaptive threshold segmentation algorithm and a dilation-erosion algorithm to perform image processing on the target image.
  • the various modules in the above-mentioned image recognition-based cell recognition and counting device can be implemented in whole or in part by software, hardware, and combinations thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 7.
  • The computer device includes a processor, a memory and a network interface connected through a system bus; the processor of the computer device provides computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, a computer program, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the database of the computer equipment is used to store data such as global digital cell images and cell recognition and counting neural networks.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer program is executed by the processor to realize a cell recognition and counting method based on image recognition.
  • FIG. 7 is only a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • In one embodiment, a computer device is provided that includes a memory and a processor, with a computer program stored in the memory; when the processor executes the computer program, the following steps are implemented: obtain a global digitized cell image; select a regional image from the global digitized cell image, the regional image being the partial image to be recognized and counted; perform non-linear color transformation on the regional image to obtain a target image; input the target image into the trained cell recognition and counting neural network, recognize the unconventional cells in the target image and count the recognized unconventional cells to obtain the cell recognition and counting result, where the trained cell recognition and counting neural network is trained on historical target images containing unconventional cells.
  • In one embodiment, when executing the computer program the processor further implements the following steps: extract the feature data of the input target image through the convolutional layers of the cell recognition and counting neural network and perform convolution operations on the feature data to obtain the corresponding feature maps; input the feature maps into the preset classification network in the cell recognition and counting neural network and, in combination with the preset fitting weights, recognize unconventional cells and count the recognized unconventional cells, where the preset fitting weights are obtained during training of the cell recognition and counting neural network from the back-propagated loss function and the weight-gradient computation.
  • In one embodiment, when executing the computer program the processor further implements the following steps: input the feature maps into an RPN network with fully connected layers and perform classification prediction according to the preset fitting weights and the preset softmax function to obtain the coordinate information of the prediction boxes extracted by the RPN network and the category confidence corresponding to each prediction box; screen the prediction boxes with an intersection-over-union algorithm to determine the target prediction boxes; and, according to the coordinate information of the target prediction boxes and their corresponding category confidences, recognize unconventional cells and count the recognized unconventional cells.
  • In one embodiment, when executing the computer program the processor further implements the following step: based on the intersection-over-union algorithm, select three preset thresholds to perform three-level screening of the prediction boxes and determine the target prediction boxes.
  • the processor further implements the following steps when executing the computer program: acquiring the cell density index of the global digital cell image, and selecting the regional image of the global digital cell image based on the cell density index.
  • the processor further implements the following steps when executing the computer program: using a gamma-corrected non-linear color conversion algorithm to perform non-linear color change processing on the regional image to obtain the target image.
  • In one embodiment, when executing the computer program the processor further implements the following step: perform image processing on the target image using an adaptive threshold segmentation algorithm and a dilation-erosion algorithm.
  • In one embodiment, a computer-readable storage medium is provided on which a computer program is stored; when the computer program is executed by a processor, the following steps are implemented: obtain a global digitized cell image; select a regional image from the global digitized cell image, the regional image being the partial image to be recognized and counted; perform non-linear color transformation on the regional image to obtain a target image; input the target image into the trained cell recognition and counting neural network, recognize the unconventional cells in the target image and count the recognized unconventional cells to obtain the cell recognition and counting result, where the trained cell recognition and counting neural network is trained on historical target images containing unconventional cells.
  • the computer-readable storage medium may be non-volatile or volatile.
  • In one embodiment, when the computer program is executed by the processor, the following steps are also implemented: extract the feature data of the input target image through the convolutional layers of the cell recognition and counting neural network and perform convolution operations on the feature data to obtain the corresponding feature maps; input the feature maps into the preset classification network in the cell recognition and counting neural network and, in combination with the preset fitting weights, recognize unconventional cells and count the recognized unconventional cells, where the preset fitting weights are obtained during training of the cell recognition and counting neural network from the back-propagated loss function and the weight-gradient computation.
  • In one embodiment, when the computer program is executed by the processor, the following steps are also implemented: input the feature maps into an RPN network with fully connected layers and perform classification prediction according to the preset fitting weights and the preset softmax function to obtain the coordinate information of the prediction boxes extracted by the RPN network and the category confidence corresponding to each prediction box; screen the prediction boxes with an intersection-over-union algorithm to determine the target prediction boxes; and, according to the coordinate information of the target prediction boxes and their corresponding category confidences, recognize unconventional cells and count the recognized unconventional cells.
  • In one embodiment, when the computer program is executed by the processor, the following step is also implemented: based on the intersection-over-union algorithm, select three preset thresholds to perform three-level screening of the prediction boxes and determine the target prediction boxes.
  • the following steps are also implemented: obtaining the cell density index of the global digital cell image, and selecting the regional image of the global digital cell image based on the cell density index.
  • the following steps are also implemented: using a gamma-corrected non-linear color conversion algorithm to perform non-linear color change processing on the regional image to obtain the target image.
  • In one embodiment, when the computer program is executed by the processor, the following step is also implemented: perform image processing on the target image using an adaptive threshold segmentation algorithm and a dilation-erosion algorithm.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical storage.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM may be in various forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), etc.

Abstract

A cell recognition and counting method, apparatus, computer device and storage medium based on image recognition, relating to the field of artificial intelligence. The method comprises: obtaining a global digitized cell image (202); selecting a regional image from the global digitized cell image, the regional image being the partial image to be recognized and counted (204); performing non-linear color transformation on the regional image to obtain a target image (206); and inputting the target image into a cell recognition and counting neural network trained on historical global digitized cell images containing unconventional cells, recognizing the unconventional cells in the target image and counting the recognized unconventional cells to obtain a cell recognition and counting result (208). The non-linear color transformation brings the colors of the image closer to the real image, and using the trained cell recognition and counting network allows the unconventional cells in the target image to be recognized and counted quickly and accurately, improving the efficiency of cell recognition and counting.

Description

基于图像识别的细胞识别计数方法、装置和计算机设备
本申请要求于2020年6月19日提交中国专利局、申请号为202010566837.8,发明名称为“基于图像识别的细胞识别计数方法、装置和计算机设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及人工智能技术领域,特别是涉及一种基于图像识别的细胞识别计数方法、装置、计算机设备和存储介质。
背景技术
通过观察数字化病理切片图像来判断相应组织器官是否有病变异常的检查手段是较为常见检查手段之一。目前,发明人发现与血液有关的癌症等疾病的诊断主要还是依靠医生人工在显微镜下对图像进行辨别,基于此,医生的诊断任务也越来越多,工作强度也随之增大。因此,涌现出了通过对血液细胞数字化影像的自动分析,细胞自动识别计数等处理,辅助医生进行临床诊断的技术。
目前,发明人发现现有的细胞图像的细胞识别计数方法是基于显微镜下拍摄的单视野图像,进行局部的图像处理,一般也都是结合传统的图像处理算法进行处理,该方式处理步骤多,速度慢,而且无法快速做到全局图像(玻片)信息分析,影响细胞识别计数的效率。
因此,现有的细胞图像的识别计数方法存在识别计数效率不高的问题。
技术问题
现有的细胞图像的识别计数方法存在识别计数效率不高的问题。
技术解决方案
基于此,有必要针对上述技术问题,提供一种能够提高细胞识别计数效率的基于图像识别的细胞识别计数方法、装置、计算机设备和存储介质。
一种基于图像识别的细胞识别计数方法,所述方法包括:
获取全局数字化细胞图像;
选取全局数字化细胞图像中的区域图像,其中,区域图像为待识别计数的局部图像;
对区域图像进行非线性颜色变换处理,得到目标图像;
将目标图像输入已训练的细胞识别计数神经网络,识别目标图像中的非常规细胞、并对识别出的非常规细胞进行计数,得到细胞识别计数结果;
其中,已训练的细胞识别计数神经网络基于包含非常规细胞的历史目标图像训练得到。
一种基于图像识别的细胞识别计数装置,所述装置包括:
图像获取模块,用于获取全局数字化细胞图像;
图像选取模块,用于选取全局数字化细胞图像中的区域图像,其中,区域图像为待识别计数的局部图像;
颜色变换处理模块,用于对区域图像进行非线性颜色变换处理,得到目标图像;
细胞识别计数模块,用于将目标图像输入已训练的细胞识别计数神经网络,识别目标图像中的非常规细胞、并对识别出的非常规细胞进行计数,得到细胞识 别计数结果;
其中,已训练的细胞识别计数神经网络基于包含非常规细胞的历史目标图像训练得到。
在其中一个实施例中,装置还包括图像处理模块,用于采用自适应阈值分割算法和膨胀腐蚀算法对目标图像进行图像处理。
一种计算机设备,包括存储器和处理器,所述存储器存储有计算机程序,所述处理器执行所述计算机程序时实现以下步骤:
获取全局数字化细胞图像;
选取全局数字化细胞图像中的区域图像,其中,区域图像为待识别计数的局部图像;
对区域图像进行非线性颜色变换处理,得到目标图像;
将目标图像输入已训练的细胞识别计数神经网络,识别目标图像中的非常规细胞、并对识别出的非常规细胞进行计数,得到细胞识别计数结果;
其中,已训练的细胞识别计数神经网络基于包含非常规细胞的历史目标图像训练得到。
一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现以下步骤:
获取全局数字化细胞图像;
选取全局数字化细胞图像中的区域图像,其中,区域图像为待识别计数的局部图像;
对区域图像进行非线性颜色变换处理,得到目标图像;
将目标图像输入已训练的细胞识别计数神经网络,识别目标图像中的非常规细胞、并对识别出的非常规细胞进行计数,得到细胞识别计数结果;
其中,已训练的细胞识别计数神经网络基于包含非常规细胞的历史目标图像训练得到。
有益效果
上述基于图像识别的细胞识别计数方法、装置、计算机设备和存储介质,从全局角度综合分析,通过非线性颜色变换处理使得图像的颜色更接近真实图像,便于图像识别,通过使用已训练的细胞识别计数网络对目标图像中的细胞进行分析处理,能够快速准确地识别出目标图像中的非常规细胞,并对非常规细胞进行计数,提高细胞识别计数的效率,同时,能够大量减少人工筛查病理切片中非常规细胞的工作量,节省人力。
附图说明
图1为一个实施例中基于图像识别的细胞识别计数方法的应用环境图;
图2为一个实施例中基于图像识别的细胞识别计数方法的流程示意图;
图3为另一个实施例中基于图像识别的细胞识别计数方法的详细流程示意图;
图4为一个实施例中基于细胞识别计数神经网络对目标图像的非常规细胞进行识别计数步骤的流程示意图;
图5为一个实施例中基于图像识别的细胞识别计数装置的结构框图;
图6为另一个实施例中基于图像识别的细胞识别计数装置的结构框图;
图7为一个实施例中计算机设备的内部结构图。
本发明的实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请提供的基于图像识别的细胞识别计数方法,可以应用于如图1所示的应用环境中。其中,终端102通过网络与服务器104进行通信。可以是用户将血液涂片在电子扫描仪得到的全局数字化细胞图像通过终端102上传至服务器104,服务器104获取全局数字化细胞图像,选取全局数字化细胞图像中的区域图像,区域图像为待识别计数的局部图像,对区域图像进行非线性颜色变换处理,得到目标图像,将目标图像输入已训练的细胞识别计数神经网络,识别目标图像中的非常规细胞、并对识别出的非常规细胞进行计数,得到细胞识别计数结果,其中,已训练的细胞识别计数神经网络基于包含非常规细胞的历史目标图像训练得到。其中,终端102可以但不限于是各种个人计算机、笔记本电脑、智能手机、平板电脑和便携式可穿戴设备,服务器104可以用独立的服务器或者是多个服务器组成的服务器集群来实现。
在一个实施例中,如图2所示,提供了一种基于图像识别的细胞识别计数方法,以该方法应用于图1中的服务器为例进行说明,包括以下步骤:
步骤202,获取全局数字化细胞图像。
全局数字化细胞图像即为全视野的包含细胞特征的数字病理切片图像。数字病理切片图像即为全视野的数字病理切片(Whole Slide Image,简称WSI),该病理切片图像是利用数字显微镜或放大系统在低倍物镜下对玻璃切片进行逐幅扫描采集成像,显微扫描平台自动按照切片XY轴方向扫描移动,并在Z轴方向自动聚焦。然后,由扫描控制软件在光学放大装置有效放大的基础上利用程控扫描方式采集高分辨数字图像,图像压缩与存储软件将图像自动进行无缝拼接处理,制作生成的整张全视野的数字化切片。具体实施时,全局数字化细胞图像以全局数字化血液细胞图像为例,可以是采用电子扫描仪对血液细胞载玻片进行扫描,生成包含多个级别分辨率的全局数字化影像。
步骤204,选取全局数字化细胞图像中的区域图像,区域图像为待识别计数的局部图像。
由于全局数字化细胞图像包含的信息复杂,且包括一些空白区域。因此,为提高图像识别的效率,可对全局数字化细胞进行区域的有效判别,选取出包含足量待识别计数细胞的图像块。具体的,区域图像可以是多张。
如图3所示,在其中一个实施例中,选取全局数字化细胞图像中的区域图像包括:步骤224,获取全局数字化细胞图像的细胞密度指标,基于细胞密度指标,选取全局数字化细胞图像的区域图像。
具体实施时,区域图像的选取可以是获取全局数字化细胞图像的细胞密度,基于细胞密度,采用算法自动选取出区域图像。在其他实施例中,也可以是由医生凭借专业知识和丰富的临床经验选取出区域图像。
步骤206,对区域图像进行非线性颜色变换处理,得到目标图像。
为提高后续神经网络对细胞识别技术的准确率,在得到区域图像后,可以使用颜色变换算法对区域图像进行非线性颜色处理。以使得区域图像的颜色更接近真实显示的图像的颜色,便于定位非常规细胞。
如图3所示,在其中一个实施例中,对区域图像进行非线性颜色变换处理, 得到目标图像包括:步骤226,采用伽马矫正非线性颜色变换算法,对区域图像进行非线性颜色变化处理,得到目标图像。
具体实施时,可以是采用伽马矫正非线性颜色变换算法处理,得到目标图像。算法具体如下:
V out=V in γ,V:R,G,B
其中,V:R,G,B为所述伽马矫正所需处理的三个颜色通道,所述三个颜色通道分别为R,G,B,V in是所述细胞集内每个像素值,V out是所述训练集每个像素值,γ为小于1的伽马矫正系数。采用伽马矫正非线性颜色矫正处理,能够矫正区域图像的颜色显示,使得目标图像的颜色更接近真实显示图像的颜色,便于非常规细胞的定位识别。
步骤208,将目标图像输入已训练的细胞识别计数神经网络,识别目标图像中的非常规细胞、并对识别出的非常规细胞进行计数,得到细胞识别计数结果。
其中,已训练的细胞识别计数神经网络基于包含非常规细胞的历史目标图像训练得到。已训练的细胞识别计数网络为全卷积网络。在网络训练阶段,将包含非常规细胞的图像输入初始细胞识别计数网络(全卷积网络)进行预训练,再使用全连接层代替全卷积网络头部微调网络,使全卷积网络具有提取非常规细胞特征的能力,得到细胞识别计数神经网络。
上述基于图像识别的细胞识别计数方法中,从全局角度综合分析,通过非线性颜色变换处理使得图像的颜色更接近真实图像,便于图像识别,通过使用已训练的细胞识别计数网络对目标图像中的细胞进行分析处理,能够快速准确地识别出目标图像中的非常规细胞,并对非常规细胞进行计数,提高细胞识别计数的效率,同时,能够大量减少人工筛查病理切片中非常规细胞的工作量,节省人力。
在其中一个实施例中,将目标图像输入已训练的细胞识别计数神经网络,识别目标图像中的非常规细胞、并对识别出的非常规细胞进行计数包括:
步骤220,通过细胞识别计数神经网络的卷积层,提取输入的目标图像的特征数据、并对特征数据进行卷积运算,得到对应的特征图;
步骤240,将特征图输入至细胞识别计数神经网络中的预设分类网络,结合预设拟合权重,识别出非常规细胞、并对识别出的非常规细胞进行计数。
本实施例中,细胞识别计数神经网络(全卷积网络)的每一层卷积包括卷积核、bn层和激活函数,可以是通过卷积核来提取输入的目标图像的特征数据,通过bn层对特征数据进行特征归一化处理,再使用激活函数
（激活函数公式）
将整个网络从线性转变成非线性。具体地,使用全卷积网络去掉一般网络最后的全连接层,使得网络输出带有细胞位置的二维的特征图,进而将特征图输入至预设分类网络,根据预设分类网络、特征图和细胞识别计数网络训练过程确定的拟合权重,进行分类预测,得到细胞的位置信息和存在非常规细胞的概率,识别出非常规细胞、并对识别出的非常规细胞计数。其中,拟合权重的确定是根据训练过程中loss函数的回传,然后进行梯度运算得到。具体实施时,可以是将目标图像输入至输入resnet网络特征初步提取器,具体可以使用基本的网络resnet进行特征提取,得到特征数据,再对特征数据进行卷积运算得到1024个对应的特征图。特征图为每一层卷积层运算得到的二维数组。本实施例中,通过训练好的细胞识别计数神经网络,能够快速且准确地识别出目标图像中的非常规细胞。
如图4所示,在其中一个实施例中,步骤240包括:
步骤242,将特征图输入至带有全连接层的RPN网络,根据预设拟合权重和 预设softmax函数进行分类预测,得到RPN网络提取的预测框的坐标信息和各预测框对应的类别置信度;
步骤244,采用交并比算法对各预测框进行筛选,确定目标预测框;
步骤246,根据目标预测框的坐标信息和目标预测框对应的类别置信度,识别出非常规细胞、并对识别出的非常规细胞进行计数。
类别置信度即为细胞的所属类别置信度,置信度也称为可靠度,或置信水平、置信系数、置信区间等,其展现的是这个参数的真实值有一定概率落在测量结果的周围的程度,也就是给出的是被测量参数的测量值的可信程度。本实施例中,则为对细胞进行分类预测得到的类别置信度(如非常规细胞的概率)。具体实施时,可以是:将得到的特征图输入对应的RPN网络进行运算,RPN网络包括n*4和n*1的全连接层,其中n是根据前一层网络的维度确定的,根据细胞识别计数网络确定拟合的权重,采用softmax函数
softmax(x)_j = exp(θ_j^T x) / Σ_k exp(θ_k^T x)
分别计算RPN网络提取出的预测框的四个点的坐标信息和对应的类别置信度(即存在非常规细胞的概率),其中,x为前一全连接层运算得到的一列数组值,θ T是网络需要训练的权重参数。然后,通过交并比阈值(IOU)对上述得到的预测框进行筛选,具体可以是选取三个阈值(0.5,0.6,0.7)对预测框进行三级筛选,筛选出目标预测框,得到最终预测结果,包括目标预选框的坐标信息、目标预测框对应的类别置信度,包括存在非常规细胞的概率。具体的,可以是将输出概率值在0.5以上的非常规细胞作为识别结果。其中,θ为网络训练中要优化的参数,x为前一全连接层运算得到的一列数组值,θ T为网络需训练的权重参数。本实施例中,通过细胞识别计数网络对目标图像进行分析处理,能够快速确定非常规细胞位置,更加有效地对有效判别区域进行分类;通过结合多个普通分类网络的预测结果投票,输出更加稳定的分类结果进而进行计数,得到详细的分析结果,能够节约医生辨认细胞并计数的时间,减轻医生负担。
在其中一个实施例中,将目标图像输入已训练的细胞识别计数神经网络之前,还包括:采用自适应阈值分割算法和膨胀腐蚀算法对目标图像进行图像处理。
阈值分割法是一种基于区域的图像分割技术,原理是把图像象素点分为若干类。图像阈值化的目的是要按照灰度级,对像素集合进行一个划分,得到的每个子集形成一个与现实景物相对应的区域,各个区域内部具有一致的属性,而相邻区域不具有这种一致属性。这样的划分可以通过从灰度级出发选取一个或多个阈值来实现。在图像阈值化操作中,主要是从二值化的目标图像中分离出目标区域(细胞区域)和背景区域,但是仅仅通过设定固定阈值进行分割很难达到理想的分割效果。因此,可采用自适应阈值分割算法进行分割,具体的是根据像素的邻域块的像素值分布来确定该像素位置上的二值化阈值。具体实施时,通过预设的伽马阈值对区域图像进行伽马矫正得到目标图像后,可以采用自适应阈值分割算法对图像中的细胞和背景进行区分,根据图像不同区域的亮度分布,计算其局部阈值,进行阈值分割,实现对于图像不同区域,自适应计算不同阈值,再通过形态学的膨胀腐蚀算法过滤阈值分割时出现的小杂质,使得目标图像更为规范清晰。
应该理解的是,虽然图2-4的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,图2-4中的至少一部分步骤可以包括多个步骤或者多个阶段,这些步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些 步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤中的步骤或者阶段的至少一部分轮流或者交替地执行。
在其中一个实施例中,如图5所示,提供了一种基于图像识别的细胞识别计数装置,包括:图像获取模块510、图像选取模块520、颜色变换处理模块530和细胞识别计数模块540,其中:
图像获取模块510,用于获取全局数字化细胞图像。
图像选取模块520,用于选取全局数字化细胞图像中的区域图像,区域图像为待识别计数的局部图像。
颜色变换处理模块530,用于对区域图像进行非线性颜色变换处理,得到目标图像。
细胞识别计数模块540,用于将目标图像输入已训练的细胞识别计数神经网络,识别目标图像中的非常规细胞、并对识别出的非常规细胞进行计数,得到细胞识别计数结果,其中,已训练的细胞识别计数神经网络基于包含非常规细胞的历史目标图像训练得到。
在其中一个实施例中,细胞识别计数模块540还用于通过细胞识别计数神经网络的卷积层,提取输入的目标图像的特征数据、并对特征数据进行卷积运算,得到对应的特征图,将特征图输入至细胞识别计数神经网络中的预设分类网络,结合预设拟合权重,识别出非常规细胞、并对识别出的非常规细胞进行计数,其中,预设拟合权重是在细胞识别计数神经网络的训练过程中,根据损失函数的回传和权重梯度运算得到。
在其中一个实施例中,细胞识别计数模块540还用于将特征图输入至带有全连接层的RPN网络,根据预设拟合权重和预设softmax函数进行分类预测,得到RPN网络提取的预测框的坐标信息和各预测框对应的类别置信度,采用交并比算法对各预测框进行筛选,确定目标预测框,根据目标预测框的坐标信息和目标预测框对应的类别置信度,识别出非常规细胞、并对识别出的非常规细胞进行计数。
在其中一个实施例中,细胞识别计数模块540还用于基于交并比算法,选取三个预设阈值对各预测框进行三级筛选,确定目标预测框。
在其中一个实施例中,图像选取模块520还用于获取全局数字化细胞图像的细胞密度指标,基于细胞密度指标,选取全局数字化细胞图像的区域图像。
在其中一个实施例中,颜色变换处理模块530还用于采用伽马矫正非线性颜色变换算法,对区域图像进行非线性颜色变化处理,得到目标图像。
如图6所示,在其中一个实施例中,装置还包括图像处理模块550,还用于采用自适应阈值分割算法和膨胀腐蚀算法对目标图像进行图像处理。
关于基于图像识别的细胞识别计数装置的具体限定可以参见上文中对于基于图像识别的细胞识别计数方法的限定,在此不再赘述。上述基于图像识别的细胞识别计数装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在其中一个实施例中,提供了一种计算机设备,该计算机设备可以是服务器,其内部结构图可以如图7所示。该计算机设备包括通过系统总线连接的处理器、存储器和网络接口。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统、计算机程序和数据库。该内存储器为非易失性存储介质中的操作 系统和计算机程序的运行提供环境。该计算机设备的数据库用于存储全局数字化细胞图像、细胞识别计数神经网络等数据。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机程序被处理器执行时以实现一种基于图像识别的细胞识别计数方法。
本领域技术人员可以理解,图7中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在其中一个实施例中,提供了一种计算机设备,包括存储器和处理器,存储器中存储有计算机程序,该处理器执行计算机程序时实现以下步骤:获取全局数字化细胞图像,选取全局数字化细胞图像中的区域图像,区域图像为待识别计数的局部图像,对区域图像进行非线性颜色变换处理,得到目标图像,将目标图像输入已训练的细胞识别计数神经网络,识别目标图像中的非常规细胞、并对识别出的非常规细胞进行计数,得到细胞识别计数结果,其中,已训练的细胞识别计数神经网络基于包含非常规细胞的历史目标图像训练得到。
在其中一个实施例中,处理器执行计算机程序时还实现以下步骤:通过细胞识别计数神经网络的卷积层,提取输入的目标图像的特征数据、并对特征数据进行卷积运算,得到对应的特征图,将特征图输入至细胞识别计数神经网络中的预设分类网络,结合预设拟合权重,识别出非常规细胞、并对识别出的非常规细胞进行计数,其中,预设拟合权重是在细胞识别计数神经网络的训练过程中,根据损失函数的回传和权重梯度运算得到。
在其中一个实施例中,处理器执行计算机程序时还实现以下步骤:将特征图输入至带有全连接层的RPN网络,根据预设拟合权重和预设softmax函数进行分类预测,得到RPN网络提取的预测框的坐标信息和各预测框对应的类别置信度,采用交并比算法对各预测框进行筛选,确定目标预测框,根据目标预测框的坐标信息和目标预测框对应的类别置信度,识别出非常规细胞、并对识别出的非常规细胞进行计数。
在其中一个实施例中,处理器执行计算机程序时还实现以下步骤:基于交并比算法,选取三个预设阈值对各预测框进行三级筛选,确定目标预测框。
在其中一个实施例中,处理器执行计算机程序时还实现以下步骤:获取全局数字化细胞图像的细胞密度指标,基于细胞密度指标,选取全局数字化细胞图像的区域图像。
在其中一个实施例中,处理器执行计算机程序时还实现以下步骤:采用伽马矫正非线性颜色变换算法,对区域图像进行非线性颜色变化处理,得到目标图像。
在其中一个实施例中,处理器执行计算机程序时还实现以下步骤:采用自适应阈值分割算法和膨胀腐蚀算法对目标图像进行图像处理。
在一个实施例中,提供了一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现以下步骤:获取全局数字化细胞图像,选取全局数字化细胞图像中的区域图像,区域图像为待识别计数的局部图像,对区域图像进行非线性颜色变换处理,得到目标图像,将目标图像输入已训练的细胞识别计数神经网络,识别目标图像中的非常规细胞、并对识别出的非常规细胞进行计数,得到细胞识别计数结果,其中,已训练的细胞识别计数神经网络基于包含非常规细胞的历史目标图像训练得到。
所述计算机可读存储介质可以是非易失性,也可以是易失性。
在其中一个实施例中,计算机程序被处理器执行时还实现以下步骤:通过细胞识别计数神经网络的卷积层,提取输入的目标图像的特征数据、并对特征数据进行卷积运算,得到对应的特征图,将特征图输入至细胞识别计数神经网络中的预设分类网络,结合预设拟合权重,识别出非常规细胞、并对识别出的非常规细胞进行计数,其中,预设拟合权重是在细胞识别计数神经网络的训练过程中,根据损失函数的回传和权重梯度运算得到。
在其中一个实施例中,计算机程序被处理器执行时还实现以下步骤:将特征图输入至带有全连接层的RPN网络,根据预设拟合权重和预设softmax函数进行分类预测,得到RPN网络提取的预测框的坐标信息和各预测框对应的类别置信度,采用交并比算法对各预测框进行筛选,确定目标预测框,根据目标预测框的坐标信息和目标预测框对应的类别置信度,识别出非常规细胞、并对识别出的非常规细胞进行计数。
在其中一个实施例中,计算机程序被处理器执行时还实现以下步骤:基于交并比算法,选取三个预设阈值对各预测框进行三级筛选,确定目标预测框。
在其中一个实施例中,计算机程序被处理器执行时还实现以下步骤:获取全局数字化细胞图像的细胞密度指标,基于细胞密度指标,选取全局数字化细胞图像的区域图像。
在其中一个实施例中,计算机程序被处理器执行时还实现以下步骤:采用伽马矫正非线性颜色变换算法,对区域图像进行非线性颜色变化处理,得到目标图像。
在其中一个实施例中,计算机程序被处理器执行时还实现以下步骤:采用自适应阈值分割算法和膨胀腐蚀算法对目标图像进行图像处理。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一非易失性计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory,ROM)、磁带、软盘、闪存或光存储器等。易失性存储器可包括随机存取存储器(Random Access Memory,RAM)或外部高速缓冲存储器。作为说明而非局限,RAM可以是多种形式,比如静态随机存取存储器(Static Random Access Memory,SRAM)或动态随机存取存储器(Dynamic Random Access Memory,DRAM)等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (20)

  1. 一种基于图像识别的细胞识别计数方法,其中,所述方法包括:
    获取全局数字化细胞图像;
    选取所述全局数字化细胞图像中的区域图像,其中,所述区域图像为待识别计数的局部图像;
    对所述区域图像进行非线性颜色变换处理,得到目标图像;
    将所述目标图像输入已训练的细胞识别计数神经网络,识别所述目标图像中的非常规细胞、并对识别出的非常规细胞进行计数,得到细胞识别计数结果;
    其中,所述已训练的细胞识别计数神经网络基于包含非常规细胞的历史目标图像训练得到。
  2. 根据权利要求1所述的方法,其中,所述将所述目标图像输入已训练的细胞识别计数神经网络,识别所述目标图像中的非常规细胞、并对识别出的非常规细胞进行计数包括:
    通过所述细胞识别计数神经网络的卷积层,提取输入的所述目标图像的特征数据、并对所述特征数据进行卷积运算,得到对应的特征图;
    将所述特征图输入至所述细胞识别计数神经网络中的预设分类网络,结合预设拟合权重,识别出非常规细胞、并对识别出的非常规细胞进行计数,其中,所述预设拟合权重是在所述细胞识别计数神经网络的训练过程中,根据损失函数的回传和权重梯度运算得到。
  3. 根据权利要求2所述的方法,其中,所述将所述特征图输入至所述细胞识别计数神经网络中的预设分类网络,结合预设拟合权重,识别出非常规细胞、并对识别出的非常规细胞进行计数包括:
    将所述特征图输入带有全连接层的RPN网络,根据所述预设拟合权重和预设softmax函数进行分类预测,得到所述RPN网络提取的预测框的坐标信息和各预测框对应的类别置信度;
    采用交并比算法对各预测框进行筛选,确定目标预测框;
    根据所述目标预测框的坐标信息和所述目标预测框对应的类别置信度,识别出非常规细胞、并对识别出的非常规细胞进行计数。
  4. 根据权利要求3所述的方法,其中,所述采用交并比算法对各预测框进行筛选,确定目标预测框包括:
    基于所述交并比算法,选取三个预设阈值对各预测框进行三级筛选,确定目标预测框。
  5. 根据权利要求1至3任一项所述的方法,其中,所述选取所述全局数字化细胞图像中的区域图像包括:
    获取所述全局数字化图像的细胞密度指标;
    基于所述细胞密度指标,选取所述全局数字化细胞图像的区域图像。
  6. 根据权利要求1至3任一项所述的方法,其中,所述对所述区域图像进行非线性颜色变换处理,得到目标图像包括:
    采用伽马矫正非线性颜色变换算法,对所述区域图像进行非线性颜色变化处理,得到目标图像。
  7. 根据权利要求1至3任一项所述的方法,其中,所述将所述目标图像输入已训练的细胞识别计数神经网络之前,还包括:
    采用自适应阈值分割算法和膨胀腐蚀算法对所述目标图像进行图像处理。
  8. 一种基于图像识别的细胞识别计数装置,其中,所述装置包括:
    图像获取模块,用于获取全局数字化细胞图像;
    图像选取模块,用于选取所述全局数字化细胞图像中的区域图像,其中,所述区域图像为待识别计数的局部图像;
    颜色变换处理模块,用于对所述区域图像进行非线性颜色变换处理,得到目标图像;
    细胞识别计数模块,用于将所述目标图像输入已训练的细胞识别计数神经网络,识别所述目标图像中的非常规细胞、并对识别出的非常规细胞进行计数,得到细胞识别计数结果;
    其中,所述已训练的细胞识别计数神经网络基于包含非常规细胞的历史目标图像训练得到。
  9. 一种计算机设备,包括存储器和处理器,所述存储器存储有计算机程序,其中,所述处理器执行所述计算机程序时实现以下步骤:
    获取全局数字化细胞图像;
    选取所述全局数字化细胞图像中的区域图像,其中,所述区域图像为待识别计数的局部图像;
    对所述区域图像进行非线性颜色变换处理,得到目标图像;
    将所述目标图像输入已训练的细胞识别计数神经网络,识别所述目标图像中的非常规细胞、并对识别出的非常规细胞进行计数,得到细胞识别计数结果;
    其中,所述已训练的细胞识别计数神经网络基于包含非常规细胞的历史目标图像训练得到。
  10. 根据权利要求9所述的计算机设备,其中,所述将所述目标图像输入已训练的细胞识别计数神经网络,识别所述目标图像中的非常规细胞、并对识别出的非常规细胞进行计数包括:
    通过所述细胞识别计数神经网络的卷积层,提取输入的所述目标图像的特征数据、并对所述特征数据进行卷积运算,得到对应的特征图;
    将所述特征图输入至所述细胞识别计数神经网络中的预设分类网络,结合预设拟合权重,识别出非常规细胞、并对识别出的非常规细胞进行计数,其中,所述预设拟合权重是在所述细胞识别计数神经网络的训练过程中,根据损失函数的回传和权重梯度运算得到。
  11. 根据权利要求10所述的计算机设备,其中,所述将所述特征图输入至所述细胞识别计数神经网络中的预设分类网络,结合预设拟合权重,识别出非常规细胞、并对识别出的非常规细胞进行计数包括:
    将所述特征图输入带有全连接层的RPN网络,根据所述预设拟合权重和预设softmax函数进行分类预测,得到所述RPN网络提取的预测框的坐标信息和各预测框对应的类别置信度;
    采用交并比算法对各预测框进行筛选,确定目标预测框;
    根据所述目标预测框的坐标信息和所述目标预测框对应的类别置信度,识别出非常规细胞、并对识别出的非常规细胞进行计数。
  12. 根据权利要求11所述的计算机设备,其中,所述采用交并比算法对各预测框进行筛选,确定目标预测框包括:
    基于所述交并比算法,选取三个预设阈值对各预测框进行三级筛选,确定目标预测框。
  13. 根据权利要求9至11任一项所述的计算机设备,其中,所述选取所述 全局数字化细胞图像中的区域图像包括:
    获取所述全局数字化图像的细胞密度指标;
    基于所述细胞密度指标,选取所述全局数字化细胞图像的区域图像。
  14. 根据权利要求9至11任一项所述的计算机设备,其中,所述对所述区域图像进行非线性颜色变换处理,得到目标图像包括:
    采用伽马矫正非线性颜色变换算法,对所述区域图像进行非线性颜色变化处理,得到目标图像。
  15. 一种计算机可读存储介质,其上存储有计算机程序,其中,所述计算机程序被处理器执行时实现以下步骤:
    获取全局数字化细胞图像;
    选取所述全局数字化细胞图像中的区域图像,其中,所述区域图像为待识别计数的局部图像;
    对所述区域图像进行非线性颜色变换处理,得到目标图像;
    将所述目标图像输入已训练的细胞识别计数神经网络,识别所述目标图像中的非常规细胞、并对识别出的非常规细胞进行计数,得到细胞识别计数结果;
    其中,所述已训练的细胞识别计数神经网络基于包含非常规细胞的历史目标图像训练得到。
  16. 根据权利要求15所述的计算机可读存储介质,其中,所述将所述目标图像输入已训练的细胞识别计数神经网络,识别所述目标图像中的非常规细胞、并对识别出的非常规细胞进行计数包括:
    通过所述细胞识别计数神经网络的卷积层,提取输入的所述目标图像的特征数据、并对所述特征数据进行卷积运算,得到对应的特征图;
    将所述特征图输入至所述细胞识别计数神经网络中的预设分类网络,结合预设拟合权重,识别出非常规细胞、并对识别出的非常规细胞进行计数,其中,所述预设拟合权重是在所述细胞识别计数神经网络的训练过程中,根据损失函数的回传和权重梯度运算得到。
  17. 根据权利要求16所述的计算机可读存储介质,其中,所述将所述特征图输入至所述细胞识别计数神经网络中的预设分类网络,结合预设拟合权重,识别出非常规细胞、并对识别出的非常规细胞进行计数包括:
    将所述特征图输入带有全连接层的RPN网络,根据所述预设拟合权重和预设softmax函数进行分类预测,得到所述RPN网络提取的预测框的坐标信息和各预测框对应的类别置信度;
    采用交并比算法对各预测框进行筛选,确定目标预测框;
    根据所述目标预测框的坐标信息和所述目标预测框对应的类别置信度,识别出非常规细胞、并对识别出的非常规细胞进行计数。
  18. 根据权利要求17所述的计算机可读存储介质,其中,所述采用交并比算法对各预测框进行筛选,确定目标预测框包括:
    基于所述交并比算法,选取三个预设阈值对各预测框进行三级筛选,确定目标预测框。
  19. 根据权利要求15至17任一项所述的计算机可读存储介质,其中,所述选取所述全局数字化细胞图像中的区域图像包括:
    获取所述全局数字化图像的细胞密度指标;
    基于所述细胞密度指标,选取所述全局数字化细胞图像的区域图像。
  20. 根据权利要求15至17任一项所述的计算机可读存储介质,其中,所述 对所述区域图像进行非线性颜色变换处理,得到目标图像包括:
    采用伽马矫正非线性颜色变换算法,对所述区域图像进行非线性颜色变化处理,得到目标图像。
PCT/CN2020/118534 2020-06-19 2020-09-28 基于图像识别的细胞识别计数方法、装置和计算机设备 WO2021139258A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010566837.8 2020-06-19
CN202010566837.8A CN111524137B (zh) 2020-06-19 2020-06-19 基于图像识别的细胞识别计数方法、装置和计算机设备

Publications (1)

Publication Number Publication Date
WO2021139258A1 true WO2021139258A1 (zh) 2021-07-15

Family

ID=71909931

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/118534 WO2021139258A1 (zh) 2020-06-19 2020-09-28 基于图像识别的细胞识别计数方法、装置和计算机设备

Country Status (2)

Country Link
CN (1) CN111524137B (zh)
WO (1) WO2021139258A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861275A (zh) * 2022-12-26 2023-03-28 中南大学 细胞计数方法、装置、终端设备及介质
CN115994874A (zh) * 2023-03-22 2023-04-21 赛维森(广州)医疗科技服务有限公司 玻片图像处理方法、装置、玻片、计算机设备和存储介质
CN116758072A (zh) * 2023-08-17 2023-09-15 苏州熠品质量技术服务有限公司 一种基于Faster-RCNN的细胞识别计数方法、装置及计算机存储介质
CN117094966A (zh) * 2023-08-21 2023-11-21 青岛美迪康数字工程有限公司 基于图像扩增的舌图像识别方法、装置和计算机设备

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111524137B (zh) * 2020-06-19 2024-04-05 平安科技(深圳)有限公司 基于图像识别的细胞识别计数方法、装置和计算机设备
CN112330616A (zh) * 2020-10-28 2021-02-05 上海交通大学 一种脑脊液细胞图像自动化识别和计数的方法
CN113705318B (zh) * 2021-04-22 2023-04-18 腾讯医疗健康(深圳)有限公司 基于图像的识别方法、装置、设备及可读存储介质
CN113763315B (zh) * 2021-05-18 2023-04-07 腾讯医疗健康(深圳)有限公司 玻片图像的信息获取方法、装置、设备及介质
CN114066818B (zh) * 2021-10-23 2023-04-07 广州市艾贝泰生物科技有限公司 细胞检测分析方法、装置、计算机设备和存储介质
CN114418995B (zh) * 2022-01-19 2023-02-03 生态环境部长江流域生态环境监督管理局生态环境监测与科学研究中心 一种基于显微镜图像的级联藻类细胞统计方法
CN114529724A (zh) * 2022-02-15 2022-05-24 推想医疗科技股份有限公司 图像目标的识别方法、装置、电子设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120040A (zh) * 2019-05-13 2019-08-13 广州锟元方青医疗科技有限公司 切片图像处理方法、装置、计算机设备和存储介质
CN110135271A (zh) * 2019-04-19 2019-08-16 上海依智医疗技术有限公司 一种细胞分类方法及装置
CN110415212A (zh) * 2019-06-18 2019-11-05 平安科技(深圳)有限公司 异常细胞检测方法、装置及计算机可读存储介质
CN110705583A (zh) * 2019-08-15 2020-01-17 平安科技(深圳)有限公司 细胞检测模型训练方法、装置、计算机设备及存储介质
CN110765855A (zh) * 2019-09-12 2020-02-07 杭州迪英加科技有限公司 一种病理图像处理方法及系统
CN111524137A (zh) * 2020-06-19 2020-08-11 平安科技(深圳)有限公司 基于图像识别的细胞识别计数方法、装置和计算机设备

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2948499C (en) * 2016-11-16 2020-04-21 The Governing Council Of The University Of Toronto System and method for classifying and segmenting microscopy images with deep multiple instance learning
CN109903282B (zh) * 2019-02-28 2023-06-09 安徽省农业科学院畜牧兽医研究所 一种细胞计数方法、系统、装置和存储介质
CN111222530A (zh) * 2019-10-14 2020-06-02 广州极汇信息科技有限公司 一种细粒度图像分类方法、系统、装置和存储介质
CN111079620B (zh) * 2019-12-10 2023-10-17 北京小蝇科技有限责任公司 基于迁移学习的白细胞图像检测识别模型构建方法及应用

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110135271A (zh) * 2019-04-19 2019-08-16 上海依智医疗技术有限公司 一种细胞分类方法及装置
CN110120040A (zh) * 2019-05-13 2019-08-13 广州锟元方青医疗科技有限公司 切片图像处理方法、装置、计算机设备和存储介质
CN110415212A (zh) * 2019-06-18 2019-11-05 平安科技(深圳)有限公司 异常细胞检测方法、装置及计算机可读存储介质
CN110705583A (zh) * 2019-08-15 2020-01-17 平安科技(深圳)有限公司 细胞检测模型训练方法、装置、计算机设备及存储介质
CN110765855A (zh) * 2019-09-12 2020-02-07 杭州迪英加科技有限公司 一种病理图像处理方法及系统
CN111524137A (zh) * 2020-06-19 2020-08-11 平安科技(深圳)有限公司 基于图像识别的细胞识别计数方法、装置和计算机设备

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861275A (zh) * 2022-12-26 2023-03-28 中南大学 细胞计数方法、装置、终端设备及介质
CN115861275B (zh) * 2022-12-26 2024-02-06 中南大学 细胞计数方法、装置、终端设备及介质
CN115994874A (zh) * 2023-03-22 2023-04-21 赛维森(广州)医疗科技服务有限公司 玻片图像处理方法、装置、玻片、计算机设备和存储介质
CN115994874B (zh) * 2023-03-22 2023-06-02 赛维森(广州)医疗科技服务有限公司 玻片图像处理方法、装置、玻片、计算机设备和存储介质
CN116758072A (zh) * 2023-08-17 2023-09-15 苏州熠品质量技术服务有限公司 一种基于Faster-RCNN的细胞识别计数方法、装置及计算机存储介质
CN116758072B (zh) * 2023-08-17 2023-12-22 苏州熠品质量技术服务有限公司 一种基于Faster-RCNN的细胞识别计数方法、装置及计算机存储介质
CN117094966A (zh) * 2023-08-21 2023-11-21 青岛美迪康数字工程有限公司 基于图像扩增的舌图像识别方法、装置和计算机设备
CN117094966B (zh) * 2023-08-21 2024-04-05 青岛美迪康数字工程有限公司 基于图像扩增的舌图像识别方法、装置和计算机设备

Also Published As

Publication number Publication date
CN111524137B (zh) 2024-04-05
CN111524137A (zh) 2020-08-11

Similar Documents

Publication Publication Date Title
WO2021139258A1 (zh) 基于图像识别的细胞识别计数方法、装置和计算机设备
CN111985536B (zh) 一种基于弱监督学习的胃镜病理图像分类方法
WO2020253629A1 (zh) 检测模型训练方法、装置、计算机设备和存储介质
JP6900581B1 (ja) 顕微鏡スライド画像のための焦点重み付き機械学習分類器誤り予測
US8600143B1 (en) Method and system for hierarchical tissue analysis and classification
CN111310841B (zh) 医学图像分类方法、装置、设备、计算机设备和存储介质
CN109829882B (zh) 一种糖尿病视网膜病变分期预测方法
Zhang et al. Automated semantic segmentation of red blood cells for sickle cell disease
WO2021189771A1 (zh) 玻片数字化信息质量检测方法、装置、设备及介质
CN111369523B (zh) 显微图像中细胞堆叠的检测方法、系统、设备及介质
CN108830149B (zh) 一种目标细菌的检测方法及终端设备
CN113962976B (zh) 用于病理玻片数字图像的质量评估方法
CN112215790A (zh) 基于深度学习的ki67指数分析方法
CN110796661B (zh) 基于卷积神经网络的真菌显微图像分割检测方法及系统
Rachna et al. Detection of Tuberculosis bacilli using image processing techniques
Di Ruberto et al. Accurate blood cells segmentation through intuitionistic fuzzy set threshold
CN112464802B (zh) 一种玻片样本信息的自动识别方法、装置和计算机设备
CN113129281B (zh) 一种基于深度学习的小麦茎秆截面参数检测方法
CN112927215A (zh) 一种消化道活检病理切片自动分析方法
CN109859218B (zh) 病理图关键区域确定方法、装置、电子设备及存储介质
WO2021139447A1 (zh) 一种宫颈异常细胞检测装置及方法
Alzu'bi et al. A new approach for detecting eosinophils in the gastrointestinal tract and diagnosing eosinophilic colitis.
CN114037868A (zh) 图像识别模型的生成方法及装置
JP6329651B1 (ja) 画像処理装置及び画像処理方法
CN111401119A (zh) 细胞核的分类

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20912703

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20912703

Country of ref document: EP

Kind code of ref document: A1