CN112884725B - Correction method for neural network model output result for cell discrimination - Google Patents

Correction method for neural network model output result for cell discrimination

Info

Publication number
CN112884725B
CN112884725B
Authority
CN
China
Prior art keywords
cell
neural network
network model
tumor cells
predicted
Prior art date
Legal status
Active
Application number
CN202110145319.3A
Other languages
Chinese (zh)
Other versions
CN112884725A (en)
Inventor
蔡佳桐
杨林
Current Assignee
Hangzhou Diyingjia Technology Co ltd
Original Assignee
Hangzhou Diyingjia Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Diyingjia Technology Co ltd filed Critical Hangzhou Diyingjia Technology Co ltd
Priority to CN202110145319.3A priority Critical patent/CN112884725B/en
Publication of CN112884725A publication Critical patent/CN112884725A/en
Application granted granted Critical
Publication of CN112884725B publication Critical patent/CN112884725B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698Matching; Classification
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H70/00ICT specially adapted for the handling or processing of medical references
    • G16H70/60ICT specially adapted for the handling or processing of medical references relating to pathologies
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • G06T2207/10061Microscopic image from scanning electron microscope
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Epidemiology (AREA)
  • Computational Linguistics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Geometry (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a correction method for the output result of a neural network model used for cell discrimination. The lightest-colored positive tumor cell is marked, the negative and positive tumor cells predicted by the neural network model (or deep learning model) are re-judged against it, and the influence of background pixels on the final judgment is reduced, so that the final KI67 index is closer to the true value. An under-microscope pathology image is input into a trained neural network model; the model identifies the tumor cells in the image and outputs the image marked with tumor cell positions and tumor cell classes, the classes comprising positive tumor cells and negative tumor cells. The lightest-colored positive tumor cell output by the neural network model is taken as the standard cell, and the tumor cell classes output by the model are corrected according to the color comparison between the standard cell and the other tumor cells.

Description

Correction method for neural network model output result for cell discrimination
Technical Field
The invention relates to the field of medical technology, and in particular to a method for correcting the output result of a neural network model used for cell discrimination.
Background
Artificial intelligence assists doctors in interpreting KI67 immunohistochemical pathology section images: an algorithm locates, classifies, and counts all cells in the under-microscope field image. Patent CN201610710869.4, "Ki67 index automatic analysis method", discloses an automatic Ki67 index analysis method comprising the following steps: S10, image preprocessing; S20, hot-spot region screening; S30, cell counting; and S40, result output. That method has good reproducibility: digital Ki67 images are analyzed automatically in batches by computer, the Ki67 index is obtained stably and efficiently, negative and positive cell nuclei are marked in the digital images with different colors, and the hot-spot region and the negative and positive nuclei within it are rapidly identified and counted by the algorithm, helping medical workers analyze histopathological features more accurately. However, this kind of algorithm cannot reliably distinguish negative from positive tumor cells, because color is the main cue pathologists use to tell them apart. In KI67 immunohistochemical pathology images, bluish tumor cells are generally regarded as negative tumor cells and reddish tumor cells as positive tumor cells; but images from different data sources and under different illumination conditions differ greatly, and a deep learning algorithm cannot learn the color boundary between negative and positive tumor cells well. The KI67 index may therefore deviate from its true value, affecting diagnostic accuracy.
Disclosure of Invention
The present application is proposed to solve the above technical problems. It provides a correction method for the output result of a neural network model used for cell discrimination: the lightest-colored positive tumor cell is marked, the negative and positive tumor cells predicted by the neural network model (also called a deep learning model) are re-judged against it, and the influence of background pixels on the final judgment is reduced, so that the final KI67 index is closer to the true value.
According to one aspect of the application, a correction method for the output result of a neural network model used for cell discrimination is provided. An under-microscope pathology image is input into a trained neural network model; the model identifies the tumor cells in the pathology image and outputs the image marked with tumor cell positions and tumor cell classes, the classes comprising positive tumor cells and negative tumor cells. The lightest-colored positive tumor cell output by the neural network model is taken as the standard cell, and the tumor cell classes output by the model are corrected according to the color comparison between the standard cell and the other tumor cells.
Further, a first dilation operation is performed, centered on the center point of the standard cell with the estimated mean cell radius R as the radius, to obtain a first dilated region, and the red-channel pixel mean of the first dilated region is obtained;
the predicted center point of each tumor cell is obtained from the tumor cell position, and it is judged whether the predicted center point lies on the edge of the current cell. For a predicted center point on the edge of the current cell, a second dilation operation is performed with the estimated mean cell radius R as the radius, the background color information around the current cell is removed from the second dilated region, and the red-channel pixel mean of the remaining region of the second dilated region is obtained. For a predicted center point not on the edge of the current cell, a third dilation operation is performed with the estimated mean cell radius R as the radius to obtain the red-channel pixel mean of the third dilated region. Using the red-channel pixel mean of the first dilated region as a threshold, the cell classes of the current cells corresponding to the second and third dilated regions are corrected according to the red-channel pixel mean of the remaining region of the second dilated region and the red-channel pixel mean of the third dilated region.
Further, to judge whether a predicted center point lies on the edge of the current cell, the self-information of the standard cell's center point and the self-information of all predicted center points are computed, and the self-information of the standard cell's center point is used as a threshold: if the self-information of a predicted center point is greater than that of the standard cell's center point, the predicted center point lies on the edge of the current cell and is an edge prediction point.
Further, the method comprises:

S10, inputting the under-microscope pathology image into the trained neural network model for prediction; the model outputs the pathology image marked with tumor cell positions and classes, each tumor cell being denoted T_i;

S20, obtaining the center point of the standard cell, denoted x_std;

S30, performing a dilation operation centered on the standard cell's center point x_std, with the estimated mean cell radius R as the radius of the structuring element, and computing the red-channel mean S_std over the dilated region;

S40, obtaining the center point x_i of every tumor cell T_i; performing a dilation operation centered on each x_i with the estimated mean cell radius R as the radius of the structuring element, and taking the dilated region as the mask corresponding to T_i, denoted mask_i; mask_i is the valid neighborhood of the center point x_i;

S50, computing the self-information of the center point x_i of every tumor cell T_i; the self-information of x_i is expressed as

I(x_i) = -\log\Big( \frac{1}{\sqrt{2\pi}\,\sigma\, n} \sum_{x_i' \in mask_i} \exp\Big( -\frac{(x_i - x_i')^2}{2\sigma^2} \Big) \Big),

where n is the number of pixels in the valid neighborhood mask_i and σ is a constant;

S60, computing the self-information of the standard cell's center point x_std as the standard self-information I_std;

S70, if I_i > I_std, judging x_i to be an edge prediction point, and applying k-means two-class clustering to all red-channel pixel values in the corresponding valid neighborhood mask_i to obtain the two class means S_i^(1) and S_i^(2); if I_i < I_std, judging x_i to be a non-edge prediction point, and taking S_i as the mean of all red-channel pixel values in the corresponding valid neighborhood mask_i;

S80, using the red-channel mean S_std of the dilated region as a threshold to distinguish positive from negative tumor cells: if S_i < S_std and T_i is predicted by the neural network model to be a negative tumor cell, correcting T_i to a positive tumor cell;

if S_i > S_std and T_i is predicted by the neural network model to be a positive tumor cell, correcting T_i to a negative tumor cell.
According to yet another aspect of the present application, an electronic device is provided, comprising a processor and a memory in which computer program instructions are stored; when executed by the processor, the instructions cause the processor to perform the correction method for the neural network model output result for cell discrimination.
According to yet another aspect of the present application, a computer readable medium is provided, having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the correction method for the neural network model output result for cell discrimination.
Compared with the prior art, the correction method described here corrects the tumor cell class predictions output by the neural network model according to the shape and color information of the positive tumor cells and the other tumor cells. The corrected tumor cell classes match the actual cell classes, so the accuracy of the KI67 index calculation is improved.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 is a visualization of the cell class prediction results output by the neural network model;
FIG. 2 is a graph of results after treatment using the method disclosed herein;
FIG. 3 is a graph of two tumor cell types overlaid together as output by a neural network model;
FIG. 4 is a graph of cell types after treatment using the methods disclosed herein.
Detailed Description
Hereinafter, example embodiments of the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Exemplary method
In the method for correcting the output result of a neural network model used for cell discrimination, an under-microscope pathology image is input into a trained neural network model, the model identifies the tumor cells in the image, and it outputs the image marked with tumor cell positions and classes. The tumor cell classes identified by the neural network model comprise positive tumor cells and negative tumor cells. The lightest-colored positive tumor cell in the model output is marked manually by a pathologist and serves as the standard cell; the negativity or positivity of every tumor cell predicted by the network in the slice is re-judged from the shape and color information of the standard cell and the other tumor cells, and the tumor cell classes output by the model are corrected according to the color comparison between the standard cell and the other tumor cells. The network structure of the neural network model is not changed; only the correction of its output is involved.
Specifically, a first dilation operation is performed, centered on the standard cell's center point with the estimated mean cell radius R as the radius, to obtain a first dilated region, and the red-channel pixel mean of this region is computed. The predicted center point of each tumor cell is obtained from the tumor cell position information. Experiments showed that some predicted center points lie close to the cell edge, so that the mask_i obtained after dilation has only a small intersection with the actual region of the tumor cell: some pixels that do not belong to the tumor cell are included in the mask, while some pixels that do belong to it fall outside. It is therefore necessary to judge whether the position of a predicted center point is on the cell edge.
To judge whether a predicted center point lies on the edge of the current cell, the self-information of the standard cell's center point and the self-information of all predicted center points are computed, with the standard cell's value used as a threshold: if the self-information of a predicted center point is greater than that of the standard cell's center point, the predicted center point lies on the edge of the current cell and is an edge prediction point. Self-information is usually used to describe the uncertainty of an event, i.e., the amount of information the event carries. By computing the self-information of the cell center points predicted by the model, it is judged whether a predicted center point is on a cell edge (a predicted point on the cell edge carries more information); for such points only the color information contained within the cell is computed and the background color information around the cell is removed, which improves the robustness of the algorithm.
For a predicted center point on the edge of the current cell, a second dilation operation is performed with the estimated mean cell radius R as the radius, the background color information around the current cell is removed from the second dilated region, and the red-channel pixel mean of the remaining region is obtained. For a predicted center point not on the edge of the current cell, a third dilation operation is performed with the estimated mean cell radius R as the radius, and the red-channel pixel mean of the third dilated region is obtained. Using the red-channel pixel mean of the first dilated region as a threshold, the cell classes of the current cells corresponding to the second and third dilated regions are corrected according to the red-channel pixel mean of the remaining region of the second dilated region and the red-channel pixel mean of the third dilated region. For positive tumor cells the dominant hue is red, so the red channel is selected, and its pixel intensities are used to decide whether a cell class has been misjudged, as sketched below.
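For concreteness, a minimal sketch of the first dilation region and its red-channel statistic is given below. It assumes a NumPy RGB image, treats the dilation of a single center point with a circular structuring element of radius R as a disk-shaped neighborhood, and the helper names (`disk_mask`, `red_channel_mean`) are invented for illustration; this is not the patented implementation itself.

```python
import numpy as np

def disk_mask(shape, center, radius):
    """Boolean disk of the given radius around center=(row, col), clipped to
    the image bounds; dilating a single point with a circular structuring
    element of radius R yields exactly this neighborhood."""
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    return (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius ** 2

def red_channel_mean(image_rgb, center, radius):
    """Mean of the red channel over the disk-shaped neighborhood."""
    m = disk_mask(image_rgb.shape[:2], center, radius)
    return float(image_rgb[..., 0][m].mean())

# S_std is the red-channel mean around the physician-marked standard cell:
#   S_std = red_channel_mean(image_rgb, x_std, R)
# where x_std is the standard cell's center point and R the mean cell radius.
```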
The specific treatment process comprises the following steps:
s10, inputting the pathological image under the lens into a trained neural network model for prediction, outputting the pathological image marked with the position and the category of the tumor cell by the neural network model, and marking the tumor cell as T i
S20, acquiring the central point of the standard cell and marking as x std
S30, taking the central point x of the standard cell std Taking the estimated value R of the average cell radius as the radius of the expansion element as the center, performing expansion operation, and calculating the average value S of the red channel in the expansion area std
S40, obtaining all the tumor cell T i Central point x of i At each center point x i Taking the estimated value R of the average cell radius as the radius of the expansion element as the center, performing expansion operation, and taking the expanded region as the tumor cell T i The corresponding mask, denoted as mask i ,mask i Is a central point x i The valid neighborhood of (c).
S50, computing the self-information of the center point x_i of every tumor cell T_i; using the self-information I(x_i) as a measure, it is judged whether x_i is an edge prediction point.

Further, x_i is regarded as a random variable; x_i and its valid neighborhood mask_i obey a distribution q_i of pixel values, so the self-information of x_i is

I(x_i) = -\log q_i(x_i).

The distribution q_i of x_i and mask_i is approximated with a kernel density estimate,

\hat{q}_i(x_i) = \frac{1}{n} \sum_{x_i' \in mask_i} K(x_i, x_i'),

where K(x_i, x_i') is a kernel function (a Gaussian kernel is used in this application),

K(x_i, x_i') = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\Big( -\frac{(x_i - x_i')^2}{2\sigma^2} \Big),

n is the number of pixels in the valid neighborhood mask_i, and x_i' ∈ mask_i.

In summary, the self-information of x_i is

I(x_i) = -\log\Big( \frac{1}{\sqrt{2\pi}\,\sigma\, n} \sum_{x_i' \in mask_i} \exp\Big( -\frac{(x_i - x_i')^2}{2\sigma^2} \Big) \Big).
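Under these definitions, the self-information of a predicted center point can be computed directly from the pixel values in its neighborhood. The sketch below is one possible reading: the bandwidth σ is a constant the patent leaves unspecified, and the small epsilon guarding the logarithm is an added implementation detail.

```python
import numpy as np

def self_information(center_value, neighborhood_values, sigma=10.0):
    """I(x_i) = -log q_i(x_i), with q_i approximated by a Gaussian kernel
    density estimate over the pixel values x_i' in mask_i."""
    diffs = np.asarray(neighborhood_values, dtype=np.float64) - float(center_value)
    kernel = np.exp(-(diffs ** 2) / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
    q_hat = kernel.mean()           # (1/n) * sum over mask_i of K(x_i, x_i')
    return -np.log(q_hat + 1e-12)   # epsilon avoids log(0) on degenerate masks
```

An edge prediction point is then detected by comparing this value against the standard cell's self-information I_std.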
s60, calculating standard cell x std The central point self-information of (2) is used as standard self-information I std
S70, if I i >I std Then x is i Judging as an edge prediction point, and taking a mask corresponding to an effective neighborhood mask i The pixel values of all the red channels are subjected to k-means two-classification to obtain the pixel value mean values of two classes, namely
Figure BDA0002929987160000055
And
Figure BDA0002929987160000056
Figure BDA0002929987160000057
and
Figure BDA0002929987160000058
is x i Effective neighborhood mask of i Taking the mean value of the middle background pixels, taking the larger value as the mean value of the cell pixels
Figure BDA0002929987160000059
The purpose of (1) is to calculate only the color information contained in the cell and to eliminate the background color information around the cell.
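A sketch of the k-means two-class split for an edge prediction point follows. The use of scikit-learn's KMeans is an assumption (any two-class clustering of the 1-D red values would do), and, following the convention stated above, the larger cluster mean is kept as the cell pixel mean S_i.

```python
import numpy as np
from sklearn.cluster import KMeans

def cell_red_mean_for_edge_point(red_values):
    """Split the red-channel values inside mask_i into two clusters and
    return the larger cluster mean as the cell pixel mean S_i; the smaller
    mean is attributed to the background pixels and discarded."""
    x = np.asarray(red_values, dtype=np.float64).reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(x)
    means = [x[labels == k].mean() for k in (0, 1)]
    return max(means)
```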
If I_i < I_std, x_i is judged to be a non-edge prediction point, and S_i is taken as the mean of all red-channel pixel values in the corresponding valid neighborhood mask_i.

S80, using the red-channel mean S_std of the first dilated region as a threshold to distinguish positive from negative tumor cells: if S_i < S_std and T_i is predicted by the neural network model to be a negative tumor cell, T_i is corrected to a positive tumor cell;

if S_i > S_std and T_i is predicted by the neural network model to be a positive tumor cell, T_i is corrected to a negative tumor cell.
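The final threshold rule of step S80 then reduces to a comparison against S_std. The sketch below is a direct transcription; the class label strings are illustrative placeholders, not the model's actual output format.

```python
def correct_class(predicted_class, s_i, s_std):
    """Step S80: flip the model's prediction when the red-channel statistic
    contradicts it, otherwise keep the prediction unchanged."""
    if s_i < s_std and predicted_class == "negative":
        return "positive"
    if s_i > s_std and predicted_class == "positive":
        return "negative"
    return predicted_class
```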
Taking a visualization of the cell class predictions output by the neural network model as an experimental sample (FIG. 1), FIG. 1 was corrected with the method disclosed in this application; FIGS. 1 and 2 compare the results before and after the algorithm of the invention runs. Red-marked cells are positive tumor cells, green-marked cells are negative tumor cells, and red circles mark cells whose classification differs between the two results. The experiments show that the classification results processed by the disclosed method are more accurate. The brown cells at the top of FIG. 1 lie at the boundary between negative and positive tumor cells; with the lightest positive cell marked by the physician, these cells are judged to be negative tumor cells. The blue cells in the lower part of FIG. 1 are clearly negative, and in this case their cell class is corrected.
FIGS. 3 and 4 compare the results before and after step S70 is added. They show two tumor cells that overlap; in this case the center point predicted by the neural network model lies near the cell edge, so some pixels that do not belong to the tumor cell are included in the mask. The region circled in yellow in FIG. 3 is the mask region: more than half of its pixels are background pixels that do not belong to the tumor cell yet fall inside the mask, so the pixel mean within the mask is pulled down by the light-colored background region and the cell is judged to be a negative tumor cell. After the processing of steps S60-S70, the point is corrected to a positive tumor cell, as shown in FIG. 4.
Cell prediction was performed on images from 47 cases with the neural network model, and the predictions were then corrected with the correction method disclosed in this application, yielding two groups of data, shown in Tables 1 and 2:
Table 1. Results of predicting the 47 case images with the neural network model

|                 | Negative fiber | Negative lymph | Negative tumors | Positive fiber | Positive tumors | Other cells | Total cells | KI67 value |
|-----------------|----------------|----------------|-----------------|----------------|-----------------|-------------|-------------|------------|
| Total labeled   | 2080           | 1178           | 14705           | 323            | 13405           | 2838        | 34529       | 22.55      |
| Total predicted | 2505           | 1194           | 11218           | 26574          | 19623           | 3162        | 64276       | 29.48      |
| AE              | 1075           | 1256           | 4291            | 26251          | 6232            | 1112        | 29747       | 7.396      |
| MAE             | 22.87234043    | 26.72340426    | 91.29787234     | 558.5319149    | 132.5957447     | 23.65957447 | 632.9148936 | 0.157361702 |
| RMSE            | 35.1528577     | 70.30632065    | 110.4552475     | 578.3593634    | 148.0816336     | 31.63152212 | 654.3867846 | 0.185862566 |
Table 2. Results obtained by correcting the neural network model outputs for the 47 case images

[Table 2 appears only as an image in the original publication; as discussed below, the corrected prediction totals are 16628 negative tumor cells and 14213 positive tumor cells, with a corrected KI67 value of 21.287.]
As Tables 1 and 2 show, the pathology images of the 47 cases were labeled manually: the total number of labeled negative tumor cells is 14705 and the total number of labeled positive tumor cells is 13405. Prediction with the neural network model gives a total of 11218 predicted negative tumor cells (deviating from the labeled total by -3487) and 19634 predicted positive tumor cells (deviating by 6229). After correcting the model output with the correction method disclosed in this application, the predicted total is 16628 negative tumor cells (deviating from the labeled total by 1923) and 14213 positive tumor cells (deviating by 808). Moreover, the three indices AE, MAE, and RMSE all decrease, showing that the corrected results are closer to the labeled values and the corrected prediction totals closer to the labeled totals; the KI67 value obtained after correction (21.287) is also closer to the true value (22.55) than the KI67 value predicted by the neural network model (29.48).
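For reference, one consistent reading of the three error indices, in which AE is the absolute error summed over the 47 cases and MAE is its per-case mean (this matches Table 1, where each AE entry is 47 times the corresponding MAE entry), is sketched below; the patent itself does not spell out the formulas, so this reading is an assumption.

```python
import numpy as np

def error_indices(labeled_counts, predicted_counts):
    """AE, MAE, and RMSE over per-case cell counts for one category.
    labeled_counts / predicted_counts: arrays with one entry per case."""
    err = np.asarray(predicted_counts, float) - np.asarray(labeled_counts, float)
    ae = float(np.abs(err).sum())             # summed absolute error (AE)
    mae = float(np.abs(err).mean())           # mean absolute error (MAE = AE / 47)
    rmse = float(np.sqrt((err ** 2).mean()))  # root-mean-square error (RMSE)
    return ae, mae, rmse
```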
Exemplary electronic device
An electronic device comprising a processor; and a memory in which computer program instructions are stored, which, when executed by the processor, cause the processor to execute the method of correcting the neural network model output result for cell discrimination.
Exemplary computer readable storage Medium
A computer readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the correction method for the neural network model output result for cell discrimination.
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in the method for modification of neural network model output results for cell discrimination according to various embodiments of the present application described in the "exemplary methods" section above of this specification.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method for correcting output results of a neural network model for cell discrimination according to various embodiments of the present application described in the "exemplary methods" section above in this specification.
The computer readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are only illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown. As will be appreciated by one skilled in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," and "having" are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. As used herein, the word "or" refers to, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The phrase "such as" means, and is used interchangeably with, the phrase "such as, but not limited to."
It should also be noted that in the methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (4)

1. A correction method for the output result of a neural network model used for cell discrimination, characterized in that an under-microscope pathology image is input into a trained neural network model, the neural network model identifies the tumor cells in the pathology image and outputs the pathology image marked with tumor cell positions and tumor cell classes, the tumor cell classes comprising positive tumor cells and negative tumor cells; the lightest-colored positive tumor cell output by the neural network model is taken as the standard cell, and the tumor cell classes output by the neural network model are corrected according to the color comparison between the standard cell and the other tumor cells;
the correction method further comprises performing a first dilation operation, centered on the center point of the standard cell with the estimated mean radius R of the standard cell as the radius, to obtain a first dilated region, and obtaining the red-channel pixel mean of the first dilated region;
obtaining the predicted center point of each tumor cell from the tumor cell position and judging whether the predicted center point lies on the edge of the current cell; for a predicted center point on the edge of the current cell, performing a second dilation operation with the estimated mean radius R of the standard cell as the radius, removing the background color information around the current cell from the second dilated region, and obtaining the red-channel pixel mean of the remaining region of the second dilated region; for a predicted center point not on the edge of the current cell, performing a third dilation operation with the estimated mean radius R of the standard cell as the radius to obtain the red-channel pixel mean of the third dilated region;
and correcting the cell classes of the current cells corresponding to the second and third dilated regions according to the red-channel pixel mean of the remaining region of the second dilated region and the red-channel pixel mean of the third dilated region, using the red-channel pixel mean of the first dilated region as the threshold.
2. The method of claim 1, wherein judging whether the predicted center point lies on the edge of the current cell comprises computing the self-information of the standard cell's center point and the self-information of all predicted center points, and using the self-information of the standard cell's center point as a threshold; if the self-information of a predicted center point is greater than that of the standard cell's center point, the predicted center point lies on the edge of the current cell and is judged to be an edge prediction point.
3. An electronic device, comprising
A processor; and
a memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the method of any one of claims 1-2 for correction of neural network model output results for cell discrimination.
4. A computer-readable medium, on which computer program instructions are stored, which, when executed by a processor, cause the processor to carry out the method of any one of claims 1-2 for correction of neural network model output results for cell discrimination.
CN202110145319.3A 2021-02-02 2021-02-02 Correction method for neural network model output result for cell discrimination Active CN112884725B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110145319.3A CN112884725B (en) 2021-02-02 2021-02-02 Correction method for neural network model output result for cell discrimination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110145319.3A CN112884725B (en) 2021-02-02 2021-02-02 Correction method for neural network model output result for cell discrimination

Publications (2)

Publication Number Publication Date
CN112884725A CN112884725A (en) 2021-06-01
CN112884725B true CN112884725B (en) 2022-12-20

Family

ID=76055977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110145319.3A Active CN112884725B (en) 2021-02-02 2021-02-02 Correction method for neural network model output result for cell discrimination

Country Status (1)

Country Link
CN (1) CN112884725B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299490B (en) * 2021-12-01 2024-03-29 万达信息股份有限公司 Tumor microenvironment heterogeneity evaluation method
CN116824579B (en) * 2023-06-27 2023-12-22 长沙金域医学检验实验室有限公司 Method and device for detecting yarrowia pneumocystis based on direct immunofluorescence staining

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017070584A1 (en) * 2015-10-23 2017-04-27 Novartis Ag Computer processes behind an enhanced version of aqua

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3570753B1 (en) * 2017-02-23 2024-08-07 Google LLC Method and system for assisting pathologist identification of tumor cells in magnified tissue images
EP3721373A1 (en) * 2017-12-07 2020-10-14 Ventana Medical Systems, Inc. Deep-learning systems and methods for joint cell and region classification in biological images
CN109190567A (en) * 2018-09-10 2019-01-11 哈尔滨理工大学 Abnormal cervical cells automatic testing method based on depth convolutional neural networks
US11966842B2 (en) * 2019-05-23 2024-04-23 Icahn School Of Medicine At Mount Sinai Systems and methods to train a cell object detector
CN112132166B (en) * 2019-06-24 2024-04-19 杭州迪英加科技有限公司 Intelligent analysis method, system and device for digital cell pathology image
CN112215790A (en) * 2019-06-24 2021-01-12 杭州迪英加科技有限公司 KI67 index analysis method based on deep learning
CN110632069B (en) * 2019-08-20 2022-07-26 西人马大周(深圳)医疗科技有限公司 Circulating tumor cell detection method, device, equipment and medium
CN111325103B (en) * 2020-01-21 2020-11-03 华南师范大学 Cell labeling system and method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017070584A1 (en) * 2015-10-23 2017-04-27 Novartis Ag Computer processes behind an enhanced version of aqua

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Toru Tamaki et al., "Computer-Aided Colorectal Tumor Classification in NBI Endoscopy Using CNN Features," arXiv:1608.06709v1, 2016-08-24, pp. 1-5. *

Also Published As

Publication number Publication date
CN112884725A (en) 2021-06-01

Similar Documents

Publication Publication Date Title
Khan et al. A nonlinear mapping approach to stain normalization in digital histopathology images using image-specific color deconvolution
Arslan et al. A color and shape based algorithm for segmentation of white blood cells in peripheral blood and bone marrow images
US10438096B2 (en) Identifying and excluding blurred areas of images of stained tissue to improve cancer scoring
Chang et al. Gold-standard and improved framework for sperm head segmentation
CN112884725B (en) Correction method for neural network model output result for cell discrimination
WO2020253508A1 (en) Abnormal cell detection method and apparatus, and computer readable storage medium
Shen et al. A dicentric chromosome identification method based on clustering and watershed algorithm
Mungle et al. MRF‐ANN: a machine learning approach for automated ER scoring of breast cancer immunohistochemical images
Phan et al. Automatic Screening and Grading of Age‐Related Macular Degeneration from Texture Analysis of Fundus Images
EP3271864B1 (en) Tissue sample analysis technique
He et al. iCut: an integrative cut algorithm enables accurate segmentation of touching cells
CN113344894B (en) Method and device for extracting features of fundus leopard spots and determining feature indexes
CN111507957B (en) Identity card picture conversion method and device, computer equipment and storage medium
Boukouvalas et al. Automatic segmentation method for CFU counting in single plate-serial dilution
US11847817B2 (en) Methods and systems for automated assessment of spermatogenesis
Su et al. Detection of tubule boundaries based on circular shortest path and polar‐transformation of arbitrary shapes
CN112884782A (en) Biological object segmentation method, apparatus, computer device and storage medium
Kumarganesh et al. An efficient approach for brain image (tissue) compression based on the position of the brain tumor
Tewary et al. AutoIHC‐scoring: a machine learning framework for automated Allred scoring of molecular expression in ER‐and PR‐stained breast cancer tissue
CN112907581A (en) MRI (magnetic resonance imaging) multi-class spinal cord tumor segmentation method based on deep learning
Tosta et al. Computational method for unsupervised segmentation of lymphoma histological images based on fuzzy 3-partition entropy and genetic algorithm
Somasundaram et al. Automatic segmentation of nuclei from pap smear cell images: A step toward cervical cancer screening
Fernández-Carrobles et al. Automatic quantification of IHC stain in breast TMA using colour analysis
Gunawan et al. Fuzzy Region Merging Using Fuzzy Similarity Measurement on Image Segmentation
CN116168012A (en) Method, device and computer equipment for training color spot detection model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant