CN112884725A - Correction method for neural network model output result for cell discrimination - Google Patents

Correction method for neural network model output result for cell discrimination

Info

Publication number
CN112884725A
Authority
CN
China
Prior art keywords
cell
neural network
tumor cells
network model
cells
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110145319.3A
Other languages
Chinese (zh)
Other versions
CN112884725B (en)
Inventor
蔡佳桐
杨林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Diyingjia Technology Co ltd
Original Assignee
Hangzhou Diyingjia Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Diyingjia Technology Co ltd filed Critical Hangzhou Diyingjia Technology Co ltd
Priority to CN202110145319.3A priority Critical patent/CN112884725B/en
Publication of CN112884725A publication Critical patent/CN112884725A/en
Application granted granted Critical
Publication of CN112884725B publication Critical patent/CN112884725B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698Matching; Classification
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H70/00ICT specially adapted for the handling or processing of medical references
    • G16H70/60ICT specially adapted for the handling or processing of medical references relating to pathologies
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • G06T2207/10061Microscopic image from scanning electron microscope
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Abstract

The invention relates to a correction method for the output result of a neural network model used for cell discrimination. The lightest-colored positive tumor cell is marked, the negative and positive tumor cells predicted by the neural network model (or deep learning model) are redefined, and the influence of background pixels on the final judgment is reduced, so that the final KI67 index is closer to the true value. An under-microscope pathological image is input into a trained neural network model; the model identifies the tumor cells in the image and outputs the image annotated with tumor cell positions and classes, the classes comprising positive tumor cells and negative tumor cells. Taking the lightest-colored positive tumor cell output by the model as a standard cell, the tumor cell classes output by the model are corrected according to the color comparison between the standard cell and the other tumor cells.

Description

Correction method for neural network model output result for cell discrimination
Technical Field
The invention relates to the technical field of medical treatment, and in particular to a correction method for the output result of a neural network model used for cell discrimination.
Background
Artificial intelligence assists doctors in interpreting KI67 immunohistochemical pathological section images by locating, classifying and counting all cells in the under-microscope field-of-view image. Patent CN201610710869.4 discloses an automatic Ki67 index analysis method comprising the following steps: S10, preprocessing the image; S20, screening hot-spot regions; S30, counting the cells; and S40, outputting the result. That method has good reproducibility: it automatically analyzes digital Ki67 images in batches by computer, obtains the Ki67 index stably and efficiently, marks negative and positive cell nuclei in different colors in the digital image, and quickly identifies hot-spot regions and counts the negative and positive nuclei within them, helping medical workers analyze histopathological characteristics more accurately. However, this algorithm cannot accurately distinguish negative from positive tumor cells, because color is the main information pathologists use for that distinction: in a KI67 immunohistochemical pathology image, bluish tumor cells are generally considered negative and reddish tumor cells positive. Yet images from different data sources and under different illumination conditions vary greatly, and a deep learning algorithm cannot learn the color boundary between negative and positive tumor cells well. This can cause the KI67 index to deviate from the actual value, affecting diagnostic accuracy.
Disclosure of Invention
The present application is proposed to solve the above technical problems. It provides a correction method for the output result of a neural network model used for cell discrimination: the lightest-colored positive tumor cell is marked, the negative and positive tumor cells predicted by the neural network model (or deep learning model) are redefined, and the influence of background pixels on the final judgment is reduced, so that the final KI67 index is closer to the true value.
According to one aspect of the application, a correction method for the output result of a neural network model for cell discrimination is provided. It comprises inputting an under-microscope pathological image into a trained neural network model; the model identifies the tumor cells in the image and outputs the image annotated with tumor cell positions and classes, the classes comprising positive and negative tumor cells; taking the lightest-colored positive tumor cell output by the model as a standard cell, the tumor cell classes output by the model are corrected according to the color comparison between the standard cell and the other tumor cells.
Further, a first dilation operation is performed centered on the center point of the standard cell with the estimated mean cell radius R as the radius, giving a first dilated region, and the red-channel pixel mean of the first dilated region is computed;
the predicted center point of each tumor cell is obtained from the tumor cell's position, and it is judged whether the predicted center point lies on the edge of the current cell. For a predicted center point on the cell edge, a second dilation operation is performed with the estimated mean cell radius R as the radius, the background color information lying outside the current cell is removed from the second dilated region, and the red-channel pixel mean of the remaining area of the second dilated region is computed. For a predicted center point not on the cell edge, a third dilation operation is performed with R as the radius, and the red-channel pixel mean of the third dilated region is computed. With the red-channel pixel mean of the first dilated region as a threshold, the cell classes of the cells corresponding to the second and third dilated regions are corrected according to the red-channel pixel mean of the remaining area of the second dilated region and that of the third dilated region.
Further, whether a predicted center point lies on the edge of the current cell is judged by computing the self-information of the standard cell's center point and of all predicted center points, with the standard cell's value as the threshold: if a predicted center point's self-information exceeds that of the standard cell's center point, the predicted center point lies on the cell edge and is an edge prediction point.
Further, the method comprises:
S10, inputting the under-microscope pathological image into the trained neural network model for prediction, and outputting the pathological image annotated with tumor cell positions and classes, each tumor cell being denoted T_i;
S20, obtaining the center point of the standard cell, denoted x_std;
S30, centered on the standard cell's center point x_std, with the estimated mean cell radius R as the radius of the structuring element, performing a dilation operation and computing the red-channel mean S_std of the dilated region;
S40, obtaining the center point x_i of every tumor cell T_i; centered on each x_i, with R as the radius of the structuring element, performing a dilation operation and taking the dilated region as the mask corresponding to tumor cell T_i, denoted mask_i; mask_i is the effective neighborhood of x_i;
S50, computing the self-information of the center point x_i of every tumor cell T_i:

I(x_i) = −log( (1/n) Σ_{x_i′ ∈ mask_i} exp( −‖x_i − x_i′‖² / (2σ²) ) ),

where n is the number of pixels in the effective neighborhood mask_i and σ is a constant;
S60, computing the self-information of the standard cell's center point x_std as the standard self-information I_std;
S70, if I_i > I_std, judging x_i an edge prediction point and applying k-means two-class clustering to all red-channel pixel values of the corresponding effective neighborhood mask_i, giving two class means S_i^(1) and S_i^(2), the larger of which is taken as the cell pixel mean S_i; if I_i < I_std, judging x_i a non-edge prediction point, with S_i the mean of all red-channel pixel values of mask_i;
S80, with the red-channel mean S_std of the standard cell's dilated region as a threshold, distinguishing positive from negative tumor cells: if S_i < S_std and T_i was predicted by the neural network model to be a negative tumor cell, correcting T_i to a positive tumor cell;
if S_i > S_std and T_i was predicted by the neural network model to be a positive tumor cell, correcting T_i to a negative tumor cell.
According to yet another aspect of the present application, there is provided an electronic device comprising a processor; and a memory in which computer program instructions are stored, which, when executed by the processor, cause the processor to perform the method of correcting the neural network model output result for cell discrimination.
According to yet another aspect of the present application, there is provided a computer readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the method of modifying a neural network model output result for cell discrimination.
Compared with the prior art, the correction method for the output result of the neural network model for cell discrimination accurately corrects the tumor cell class predictions output by the neural network model according to the shape and color information of the positive tumor cells and the other tumor cells; the corrected tumor cell classes accord with the actual cell types, improving the accuracy of KI67 index calculation.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 is a visualization of the cell class prediction results output by the neural network model;
FIG. 2 is a graph of results after treatment using the method disclosed herein;
FIG. 3 is a graph of two tumor cell types overlaid together as output by a neural network model;
FIG. 4 is a graph of cell types after treatment using the methods disclosed herein.
Detailed Description
Hereinafter, example embodiments of the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Exemplary method
In the method for correcting the output result of a neural network model for cell discrimination, an under-microscope pathological image is input into a trained neural network model, which identifies the tumor cells in the image and outputs the image annotated with tumor cell positions and classes. The tumor cell classes identified by the model comprise positive and negative tumor cells. The lightest-colored positive tumor cell output by the model is manually marked by a pathologist as the standard cell; the negativity or positivity of all tumor cells predicted by the network in the slice is distinguished according to the shape and color information of the standard cell and the other tumor cells, and the tumor cell classes output by the model are corrected according to that color comparison. The network structure of the neural network model is unchanged; only correction of the model's output is involved.
Specifically, a first dilation operation is performed centered on the center point of the standard cell with the estimated mean cell radius R as the radius, giving a first dilated region, and the red-channel pixel mean of that region is computed. The predicted center point of each tumor cell is then obtained from the tumor cell's position information. In experiments it was found that some predicted center points lie close to the cell edge, so that the dilated mask_i region has only a small intersection with the tumor cell's actual region: pixels that do not belong to the tumor cell are included in the mask, while some pixels that do belong to the tumor cell end up outside it. It is therefore necessary to judge whether a predicted center point lies on the cell edge.
Whether a predicted center point lies on the edge of the current cell is judged by computing the self-information of the standard cell's center point and of all predicted center points, with the standard cell's value as the threshold: if a predicted center point's self-information exceeds that of the standard cell's center point, the point lies on the cell edge and is an edge prediction point. Self-information is usually used to describe the uncertainty of an event's occurrence, i.e., the amount of information the event carries. By computing the self-information of the cell center points predicted by the model, edge points are detected (a predicted point on the cell edge carries more information); for edge prediction points, only the color information contained within the cell is used and the background color information around the cell is removed, which improves the robustness of the algorithm.
For a predicted center point on the edge of the current cell, a second dilation operation is performed with the estimated mean cell radius R as the radius; the background color information around the current cell is removed from the second dilated region, and the red-channel pixel mean of the remaining area is computed. For a predicted center point not on the cell edge, a third dilation operation is performed with R as the radius, and the red-channel pixel mean of the third dilated region is computed. With the red-channel pixel mean of the first dilated region as a threshold, the cell classes of the cells corresponding to the second and third dilated regions are corrected according to the red-channel pixel mean of the remaining area of the second dilated region and that of the third dilated region. For positive tumor cells the dominant hue is red, so the red channel is selected, and its pixel intensity is used to judge whether a cell's class was misjudged.
The specific treatment process comprises the following steps:
s10, inputting the pathology image under the mirror into the trained neural network model for prediction, outputting the pathology image marked with the position and category of the tumor cell, and marking the tumor cell as Ti
S20, obtaining the central point of the standard cell and marking as xstd
S30, with the central point x of the standard cellstdTaking the estimated value R of the average cell radius as the radius of the expansion element as the center, performing expansion operation, and calculating the average value S of the red channel in the expansion areastd
S40, obtaining all the tumor cell TiCentral point x ofiAt each center point xiTaking the estimated value R of the average cell radius as the radius of the expansion element as the center, performing expansion operation, and taking the expanded region as the tumor cell TiThe corresponding mask, denoted as maski,maskiIs a central point xiIs used to determine the effective neighborhood of (c).
S50, computing the self-information of the center point x_i of every tumor cell T_i; the self-information I(x_i) is used as the measure for judging whether x_i is an edge prediction point.

Further, x_i is treated as a random variable; x_i and its effective neighborhood mask_i obey a pixel-value distribution q_i, so the self-information of x_i is

I(x_i) = −log q_i(x_i).

The distribution q_i over mask_i is approximated with a kernel density estimate:

q̂_i(x_i) = (1/n) Σ_{x_i′ ∈ mask_i} K(x_i, x_i′),

where K(x_i, x_i′) is the kernel function (a Gaussian kernel is used in this application):

K(x_i, x_i′) = exp( −‖x_i − x_i′‖² / (2σ²) ),

n is the number of pixels in the effective neighborhood mask_i of x_i, x_i′ ∈ mask_i, and σ is a constant.

In summary, the self-information of x_i is

I(x_i) = −log( (1/n) Σ_{x_i′ ∈ mask_i} exp( −‖x_i − x_i′‖² / (2σ²) ) ).
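The self-information of step S50 can be checked numerically with a short sketch. It assumes (the text does not pin this down) that x_i and its neighbors are scalar red-channel intensities; σ and the toy neighborhoods below are illustrative.

```python
import numpy as np

def self_information(center_value, neighborhood_values, sigma=10.0):
    """I(x_i) = -log( (1/n) * sum_{x' in mask_i} exp(-|x_i - x'|^2 / (2*sigma^2)) ):
    a Gaussian kernel density estimate of the neighborhood's pixel-value
    distribution, evaluated at the center point's value."""
    diffs = np.asarray(neighborhood_values, dtype=float) - float(center_value)
    kernel = np.exp(-(diffs ** 2) / (2.0 * sigma ** 2))
    return float(-np.log(kernel.mean()))

# homogeneous cell interior: the center value is typical, so little information
uniform = np.full(50, 120.0)
i_interior = self_information(120.0, uniform)

# half cell (~120), half light background (~220): an edge point carries more
mixed = np.concatenate([np.full(25, 120.0), np.full(25, 220.0)])
i_edge = self_information(120.0, mixed)
```

A homogeneous interior neighborhood yields I ≈ 0, while the half-cell, half-background neighborhood yields a markedly larger value — exactly the property that step S70 thresholds against I_std.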
s60, calculating standard cell xstdThe central point self-information of (1) is taken as standard self-information Istd
S70, if I_i > I_std, x_i is judged an edge prediction point, and k-means two-class clustering is applied to all red-channel pixel values of the corresponding effective neighborhood mask_i, giving two class means, S_i^(1) and S_i^(2). The smaller of S_i^(1) and S_i^(2) is the mean of the background pixels in mask_i; the larger is taken as the cell pixel mean S_i. The purpose is to use only the color information contained within the cell and to exclude the background color information around it.
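Step S70's two-class split can be sketched with a tiny 1-D k-means over the mask's red-channel values. The clustering routine and toy pixel values below are illustrative (any two-class clustering of the mask's red values would serve); per the text, the smaller class mean is treated as background and the larger kept as S_i.

```python
import numpy as np

def kmeans_two_class(values, iters=20):
    """1-D two-class k-means; returns (smaller_mean, larger_mean).
    Per step S70, the smaller mean is treated as the background pixel mean
    and the larger as the cell pixel mean S_i."""
    v = np.asarray(values, dtype=float)
    centers = np.array([v.min(), v.max()])  # initialize at the extremes
    for _ in range(iters):
        # assign each value to the nearest center, then recompute the centers
        assign = np.abs(v[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(assign == k):
                centers[k] = v[assign == k].mean()
    return float(centers.min()), float(centers.max())

# illustrative mask pixels drawn from two populations (around 80 and around 200)
pixels = [78, 80, 82, 79, 81, 198, 200, 202, 199, 201]
background_mean, s_i = kmeans_two_class(pixels)  # -> (80.0, 200.0)
```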
If I_i < I_std, x_i is judged a non-edge prediction point, and S_i is the mean of all red-channel pixel values of the corresponding effective neighborhood mask_i.
S80, with the red-channel mean S_std of the standard cell's dilated region as a threshold, positive and negative tumor cells are distinguished: if S_i < S_std and T_i was predicted by the neural network model to be a negative tumor cell, T_i is corrected to a positive tumor cell;
if S_i > S_std and T_i was predicted by the neural network model to be a positive tumor cell, T_i is corrected to a negative tumor cell.
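The correction rule of step S80 reduces to a small function. The cell records and threshold below are illustrative, but the flip logic mirrors the text exactly.

```python
def correct_label(predicted, s_i, s_std):
    """Step S80: with the standard cell's red mean S_std as the threshold,
    flip the model's class when the cell's red mean S_i disagrees with it."""
    if s_i < s_std and predicted == "negative":
        return "positive"
    if s_i > s_std and predicted == "positive":
        return "negative"
    return predicted  # prediction already consistent with the threshold

# illustrative (model class, red-channel mean S_i) pairs
cells = [("negative", 90.0), ("positive", 180.0), ("negative", 200.0)]
s_std = 150.0  # assumed standard-cell red mean
corrected = [correct_label(p, s, s_std) for p, s in cells]
# -> ['positive', 'negative', 'negative']
```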
A visualization of the cell class predictions output by the neural network model is taken as the experimental sample, as shown in Fig. 1, and is corrected with the method disclosed in this application; Figs. 1 and 2 compare the results before and after running the algorithm of the invention. Red-marked cells are positive tumor cells, green-marked cells are negative tumor cells, and red circles mark inconsistent classification results. The experiment shows that the classification results processed with the disclosed method are more accurate. The brown cells at the top of Fig. 1 lie at the boundary between negative and positive tumor cells; here the lightest cell, labeled by the physician, is identified as a negative tumor cell. The blue cells in the lower part of Fig. 1 are clearly negative; in this case the cell type is corrected.
Figs. 3 and 4 compare the algorithm's results before and after the addition of step S70. They show two tumor cells overlapped together; here the center point predicted by the neural network model lies near the cell edge, so some pixels not belonging to the tumor cell are included in the mask. The area circled in yellow in Fig. 3 is the mask region: more than half of its pixels are background pixels that do not belong to the tumor cell, so the light-colored background area reduces the pixel mean within the mask and the cell is judged a negative tumor cell. After the processing of steps S60–S70, the point is corrected to a positive tumor cell, as shown in Fig. 4.
Cell prediction was performed on 47 case images with the neural network model, and correction was then performed with the correction method disclosed in this application, yielding two groups of data, shown in Tables 1 and 2:
Table 1. Prediction results for the 47 case images using the neural network model

|                   | Negative fiber | Negative lymph | Negative tumors | Positive fiber | Positive tumors | Other cells | Total cells | KI67 value  |
|-------------------|----------------|----------------|-----------------|----------------|-----------------|-------------|-------------|-------------|
| Total labeled     | 2080           | 1178           | 14705           | 323            | 13405           | 2838        | 34529       | 22.55       |
| Total predicted   | 2505           | 1194           | 11218           | 26574          | 19623           | 3162        | 64276       | 29.48       |
| AE                | 1075           | 1256           | 4291            | 26251          | 6232            | 1112        | 29747       | 7.396       |
| MAE               | 22.87234043    | 26.72340426    | 91.29787234     | 558.5319149    | 132.5957447     | 23.65957447 | 632.9148936 | 0.157361702 |
| RMSE              | 35.1528577     | 70.30632065    | 110.4552475     | 578.3593634    | 148.0816336     | 31.63152212 | 654.3867846 | 0.185862566 |
Table 2 results obtained by correcting the output results of the neural network model for 47 cases of images
[Table 2 appears as an image in the original publication; its key values are quoted in the following paragraph.]
As is apparent from Tables 1 and 2, the 47 pathological images were manually labeled, with a labeled total of 14705 negative tumor cells and 13405 positive tumor cells. Prediction with the neural network model gave a predicted total of 11218 negative tumor cells (deviating from the labeled total by −3487) and 19634 positive tumor cells (deviating by 6229). After the model's output was corrected with the correction method disclosed in this application, the predicted total was 16628 negative tumor cells (deviating from the labeled total by 1923) and 14213 positive tumor cells (deviating by 808). Moreover, the three indexes AE, MAE and RMSE all decreased, showing that the corrected results are closer to the labeled values: the corrected predicted totals are closer to the labeled totals, and the corrected predicted KI67 value (21.287) is closer to the true value (22.55) than the model's predicted KI67 value (29.48).
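The AE, MAE and RMSE columns are consistent with the usual per-case definitions — for instance, the negative-tumor MAE in Table 1 equals its AE divided by the 47 cases (4291/47 ≈ 91.298). A sketch with illustrative per-case counts (not the patent's 47-case data):

```python
import numpy as np

def ae_mae_rmse(labeled, predicted):
    """Per-class error metrics over a set of cases:
    AE   -- sum of per-case absolute errors,
    MAE  -- AE / number of cases,
    RMSE -- root of the mean squared per-case error."""
    err = np.asarray(predicted, dtype=float) - np.asarray(labeled, dtype=float)
    ae = float(np.abs(err).sum())
    mae = ae / err.size
    rmse = float(np.sqrt((err ** 2).mean()))
    return ae, mae, rmse

# illustrative per-case negative-tumor counts for 4 cases
labeled = [300, 310, 290, 305]
predicted = [280, 330, 290, 300]
ae, mae, rmse = ae_mae_rmse(labeled, predicted)  # ae=45.0, mae=11.25
```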
Exemplary electronic device
An electronic device comprising a processor; and a memory in which computer program instructions are stored, which, when executed by the processor, cause the processor to perform the method of correcting the neural network model output result for cell discrimination.
Exemplary computer readable storage Medium
A computer readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the method of modifying a neural network model output result for cell discrimination.
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method for modifying neural network model output results for cell discrimination according to various embodiments of the present application described in the "exemplary methods" section above of the specification.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method for modifying neural network model output results for cell discrimination according to various embodiments of the present application described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements or configurations must be made in the manner shown. These devices, apparatuses and systems may be connected, arranged or configured in any manner, as will be appreciated by those skilled in the art. Words such as "including", "comprising" and "having" are open-ended words meaning "including, but not limited to", and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, "and/or", unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, "such as but not limited to".
It should also be noted that in the methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (5)

1. A method for correcting the output result of a neural network model for cell discrimination, characterized in that: an under-microscope pathological image is input into a trained neural network model; the neural network model identifies tumor cells in the pathological image and outputs a pathological image marked with tumor cell positions and tumor cell types, the tumor cell types comprising positive tumor cells and negative tumor cells; the positive tumor cell with the lightest color output by the neural network model is taken as a standard cell; and the tumor cell types output by the neural network model are corrected according to a color comparison between the standard cell and the other tumor cells.
2. The method according to claim 1, wherein a first dilation operation, centered on the center point of the standard cell and with the estimated mean cell radius R as its radius, is performed to obtain a first dilation region, and the red-channel pixel mean of the first dilation region is computed;
a predicted center point of each tumor cell is obtained from the tumor cell position, and it is determined whether the predicted center point lies at the edge of the current cell; for a predicted center point lying at the edge of the current cell, a second dilation operation with radius R is performed, background color information outside the current cell is removed from the second dilation region, and the red-channel pixel mean of the remaining area of the second dilation region is computed; for a predicted center point not lying at the edge of the current cell, a third dilation operation with radius R is performed to obtain the red-channel pixel mean of the third dilation region;
and, taking the red-channel pixel mean of the first dilation region as a threshold, the cell types of the current cells corresponding to the second and third dilation regions are corrected according to the red-channel pixel mean of the remaining area of the second dilation region and the red-channel pixel mean of the third dilation region, respectively.
3. The method of claim 2, wherein determining whether a predicted center point lies at the edge of the current cell comprises computing the self-information of the standard cell's center point and the self-information of all predicted center points, and taking the self-information of the standard cell's center point as a threshold: if the self-information of a predicted center point is greater than that of the standard cell's center point, the predicted center point lies at the edge of the current cell and is an edge prediction point.
4. An electronic device, comprising:
a processor; and
a memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the method of any one of claims 1-3 for correcting a neural network model output result for cell discrimination.
5. A computer-readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 3 for correcting a neural network model output result for cell discrimination.
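The core thresholding step described in claims 1-2 can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: it assumes RGB channel ordering, assumes that "lightest color" corresponds to the highest red-channel pixel mean, and omits the edge handling of claim 3; `disk_mean_red` and `correct_labels` are hypothetical helper names introduced here.

```python
import numpy as np

def disk_mean_red(image, center, radius):
    """Mean of the red channel inside a disk of the given radius around
    `center` (row, col), clipped to the image bounds. The disk plays the
    role of the dilation region centered on a cell's predicted center."""
    h, w = image.shape[:2]
    r0, c0 = center
    rows, cols = np.ogrid[:h, :w]
    mask = (rows - r0) ** 2 + (cols - c0) ** 2 <= radius ** 2
    return image[..., 0][mask].mean()  # channel 0 = red, assuming RGB

def correct_labels(image, centers, labels, radius):
    """Correct predicted cell types by comparing each cell's red-channel
    mean against the lightest predicted-positive cell (the standard cell)."""
    reds = [disk_mean_red(image, c, radius) for c in centers]
    positives = [i for i, lab in enumerate(labels) if lab == "positive"]
    if not positives:
        return list(labels)  # no standard cell available; leave unchanged
    # Standard cell = lightest positive cell; here assumed to be the
    # positive cell with the highest red-channel mean.
    threshold = max(reds[i] for i in positives)
    # Any cell stained at least as darkly as the standard cell (red mean
    # at or below the threshold) is corrected to positive.
    return ["positive" if r <= threshold else "negative" for r in reds]
```

For example, on a synthetic light-gray image with one dark disk, a cell centered in the dark disk would be kept positive while a cell on the light background would be corrected to negative.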
CN202110145319.3A 2021-02-02 2021-02-02 Correction method for neural network model output result for cell discrimination Active CN112884725B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110145319.3A CN112884725B (en) 2021-02-02 2021-02-02 Correction method for neural network model output result for cell discrimination


Publications (2)

Publication Number Publication Date
CN112884725A 2021-06-01
CN112884725B 2022-12-20

Family

ID=76055977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110145319.3A Active CN112884725B (en) 2021-02-02 2021-02-02 Correction method for neural network model output result for cell discrimination

Country Status (1)

Country Link
CN (1) CN112884725B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299490A (en) * 2021-12-01 2022-04-08 万达信息股份有限公司 Tumor microenvironment heterogeneity evaluation method
CN116824579A (en) * 2023-06-27 2023-09-29 长沙金域医学检验实验室有限公司 Method and device for detecting yarrowia pneumocystis based on direct immunofluorescence staining

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017070584A1 (en) * 2015-10-23 2017-04-27 Novartis Ag Computer processes behind an enhanced version of aqua
CN109190567A (en) * 2018-09-10 2019-01-11 哈尔滨理工大学 Abnormal cervical cells automatic testing method based on depth convolutional neural networks
CN110632069A (en) * 2019-08-20 2019-12-31 西人马帝言(北京)科技有限公司 Circulating tumor cell detection method, device, equipment and medium
US20200066407A1 (en) * 2017-02-23 2020-02-27 Google Llc Method and System for Assisting Pathologist Identification of Tumor Cells in Magnified Tissue Images
CN111325103A (en) * 2020-01-21 2020-06-23 华南师范大学 Cell labeling system and method
US20200342597A1 (en) * 2017-12-07 2020-10-29 Ventana Medical Systems, Inc. Deep-learning systems and methods for joint cell and region classification in biological images
WO2020237185A1 (en) * 2019-05-23 2020-11-26 Icahn School Of Medicine At Mount Sinai Systems and methods to train a cell object detector
CN112132166A (en) * 2019-06-24 2020-12-25 杭州迪英加科技有限公司 Intelligent analysis method, system and device for digital cytopathology image
CN112215790A (en) * 2019-06-24 2021-01-12 杭州迪英加科技有限公司 KI67 index analysis method based on deep learning


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
TORU TAMAKI, ET AL: "Computer-Aided Colorectal Tumor Classification in NBI Endoscopy Using CNN Features", arXiv:1608.06709v1, 24 August 2016 (2016-08-24), pages 1-5 *
XIAO YUE: "Application of an improved watershed model based on gradient correction in cell segmentation", Journal of Biomedical Engineering Research, vol. 39, no. 4, 15 December 2020 (2020-12-15), pages 330-336 *
HUANG MIN ET AL: "Classifying observer color matching functions by cluster analysis", Spectroscopy and Spectral Analysis, vol. 40, no. 2, 15 February 2020 (2020-02-15), pages 454-460 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299490A (en) * 2021-12-01 2022-04-08 万达信息股份有限公司 Tumor microenvironment heterogeneity evaluation method
CN114299490B (en) * 2021-12-01 2024-03-29 万达信息股份有限公司 Tumor microenvironment heterogeneity evaluation method
CN116824579A (en) * 2023-06-27 2023-09-29 长沙金域医学检验实验室有限公司 Method and device for detecting yarrowia pneumocystis based on direct immunofluorescence staining
CN116824579B (en) * 2023-06-27 2023-12-22 长沙金域医学检验实验室有限公司 Method and device for detecting yarrowia pneumocystis based on direct immunofluorescence staining

Also Published As

Publication number Publication date
CN112884725B (en) 2022-12-20

Similar Documents

Publication Publication Date Title
Arslan et al. A color and shape based algorithm for segmentation of white blood cells in peripheral blood and bone marrow images
Khan et al. A nonlinear mapping approach to stain normalization in digital histopathology images using image-specific color deconvolution
US10438096B2 (en) Identifying and excluding blurred areas of images of stained tissue to improve cancer scoring
JP4529172B2 (en) Method and apparatus for detecting red eye region in digital image
Chang et al. Gold-standard and improved framework for sperm head segmentation
CN112884725B (en) Correction method for neural network model output result for cell discrimination
Shen et al. A dicentric chromosome identification method based on clustering and watershed algorithm
EP3271864B1 (en) Tissue sample analysis technique
CN112884782B (en) Biological object segmentation method, apparatus, computer device, and storage medium
US11847817B2 (en) Methods and systems for automated assessment of spermatogenesis
Su et al. Detection of tubule boundaries based on circular shortest path and polar‐transformation of arbitrary shapes
JPWO2011061905A1 (en) Object region extraction device, object region extraction method, and program
Boukouvalas et al. Automatic segmentation method for CFU counting in single plate-serial dilution
WO2024021461A1 (en) Defect detection method and apparatus, device, and storage medium
Tosta et al. Computational method for unsupervised segmentation of lymphoma histological images based on fuzzy 3-partition entropy and genetic algorithm
Somasundaram et al. Automatic segmentation of nuclei from pap smear cell images: A step toward cervical cancer screening
Kumarganesh et al. An efficient approach for brain image (tissue) compression based on the position of the brain tumor
CN111507957B (en) Identity card picture conversion method and device, computer equipment and storage medium
Tosta et al. Application of evolutionary algorithms on unsupervised segmentation of lymphoma histological images
CN112837304B (en) Skin detection method, computer storage medium and computing device
WO2014181024A1 (en) Computer-implemented method for recognising and classifying abnormal blood cells, and computer programs for performing the method
CN113762136A (en) Face image occlusion judgment method and device, electronic equipment and storage medium
Graf et al. Robust image segmentation in low depth of field images
Bhagavatula et al. A vocabulary for the identification and delineation of teratoma tissue components in hematoxylin and eosin-stained samples
CN112084953B (en) Face attribute identification method, system, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant