CN112819821B - Cell nucleus image detection method - Google Patents


Info

Publication number
CN112819821B
Authority
CN
China
Prior art keywords
rcnn
detection
cell nucleus
image
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110224315.4A
Other languages
Chinese (zh)
Other versions
CN112819821A (en)
Inventor
屈爱平
梁豪
钟海勤
程志明
肖硕旻
Current Assignee
University of South China
Original Assignee
University of South China
Priority date
Filing date
Publication date
Application filed by University of South China filed Critical University of South China
Priority to CN202110224315.4A
Publication of CN112819821A
Application granted
Publication of CN112819821B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10056 Microscopic image
    • G06T 2207/30 Subject of image; context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30024 Cell structures in vitro; tissue sections in vitro


Abstract

The application discloses a cell nucleus image detection method, which comprises the following steps: acquiring a data set sample with cell nucleus images under a microscope; optimizing the cell nucleus images by adopting a data enhancement method and a data standardization method; constructing a Faster-rcnn variant model with a two-stage neural network architecture, wherein the first stage is an RPN (region proposal network) for generating initial candidate frames, and the second stage is a Fast-rcnn network for classifying and regressing the initial candidate frames; adding a localization prediction head to the Fast-rcnn network, taking the IoU value of the predicted detection frame as an index for measuring localization quality, and taking the fusion score between classification quality and localization quality as the ranking index in the non-maximum suppression algorithm; training the model on the optimized cell nucleus images; and inputting the cell nucleus image to be tested into the model for detection. In this way, the precision and robustness of cell nucleus detection can be effectively improved.

Description

Cell nucleus image detection method
Technical Field
The invention relates to the technical field of image processing, in particular to a cell nucleus image detection method.
Background
The detection of certain specific types of nuclei or cells in microscopic images is a prerequisite for many tasks in clinical medicine and biomedical research, including quantitative information measurement and the assessment of cancer progression, grading, and prognosis. For example, breast cancer is the most common cancer in women and a leading cause of death worldwide. The World Health Organization uses the percentage of proliferating tumor cells to determine the prognosis of the disease. To calculate this percentage, a pathologist manually assesses positive tumor cells, which are usually mixed with normal cells. Even for a professional pathologist, this assessment process is subjective, time-consuming, and highly variable. With the development of high-throughput technology, the amount of image data is also growing rapidly, and reliance on manual assessment for nucleus detection has become a bottleneck. Therefore, automated techniques that accurately and rapidly detect nucleus instances in histological images covering different patients, organs, and disease states contribute significantly to the development of computer-aided systems for clinical and medical applications.
Currently, traditional methods for nucleus image detection employ classical image processing techniques such as intensity thresholding, feature detection, morphological filtering, region accumulation, and deformable model fitting. They require significant prior knowledge of the target nuclei and background to select appropriate features, and the parameters critical to overall performance must be tuned by trial and error, which limits the generalization capability of these classical approaches.
Deep learning methods have recently been applied to various computer vision problems and outperform traditional methods on nucleus detection challenges. These methods generally fall into two categories: the first predicts a density function over the image through pixel-level binary classification, with the peaks of the density corresponding to nucleus centers; the second uses a first-stage model to select candidate regions, which are then input into a second-stage model for further discrimination and localization.
For the density prediction methods, if nuclei are densely packed, the peaks of the density function tend to flatten, so the positions of the nuclei cannot be estimated. Furthermore, pixel-by-pixel density prediction burdens computational resources, which limits clinical application. Region-based methods classify and regress only the nucleus regions rather than all image pixels, and can therefore run efficiently. However, in most current region-based approaches there is an inconsistency between the test scenario (selecting frames based on classification scores only during NMS) and the training scenario (minimizing both classification and localization losses). Ideally, the quality of a detection frame should be measured not only by classification but also by localization. Moreover, when the nuclei in an image adhere to each other and cluster, these methods easily suppress true frames and thereby generate a large number of false positive detection frames.
Disclosure of Invention
In view of this, the present invention provides a method for detecting a cell nucleus image, which can effectively improve the accuracy and robustness of cell nucleus detection and stably output a detection result. The specific scheme is as follows:
a cell nucleus image detection method comprises the following steps:
acquiring a data set sample with a cell nucleus image under a microscope;
optimizing images of cell nuclei in the data set sample by adopting a data enhancement method and a data standardization method;
constructing a Faster-rcnn variant model of an end-to-end two-stage neural network architecture, wherein the first stage is an RPN (region proposal network) used for generating initial candidate frames; the second stage is a Fast-rcnn network used for classifying and regressing the initial candidate frames; adding a localization prediction head to the Fast-rcnn network, taking the IoU value of the predicted detection frame as an index for measuring localization quality, and taking the fusion score between classification quality and localization quality as a ranking index in the non-maximum suppression algorithm;
training the Faster-rcnn variant model through the optimized nuclear images;
and inputting the nuclear image to be tested into the trained Faster-rcnn variant model for detection.
Preferably, in the method for detecting a nuclear image provided in an embodiment of the present invention, the using the fusion score between the classification quality and the localization quality as a ranking index in the non-maximum suppression algorithm specifically includes:
preserving localization bounding boxes in the Fast-rcnn network by fusing scores between classification quality and localization quality;
meanwhile, in the non-maximum suppression algorithm, selecting the detection frame with the highest fusion score, attenuating the scores of all other detection frames whose IoU with it exceeds a predefined threshold, and retaining the true positive candidate frames after score attenuation for detection.
Preferably, in the above method for detecting a nuclear image provided in an embodiment of the present invention, training the Faster-rcnn variant model by the optimized nuclear image specifically includes:
feeding the optimized cell nucleus images in batches into the Faster-rcnn variant model;
comparing the binary cross-entropy loss of the classification prediction head, the L_iou loss of the localization prediction head, and the Smooth-L1 loss of the regression prediction head, output by the last layer of the Faster-rcnn variant model, with the real labels of the images to calculate the loss;
transmitting the calculated loss in a network in a reverse direction to obtain the gradient of the network parameter;
and adjusting the network parameters by a random gradient descent optimizer to minimize the loss.
Preferably, in the method for detecting a nuclear image provided by the embodiment of the present invention, the inputting the nuclear image to be tested into the trained fast-rcnn variant model for detection specifically includes:
inputting the nuclear image to be tested into the trained Faster-rcnn variant model, and outputting a classification probability map, a positioning probability map and coordinate values of a detection frame of a target area;
taking the fusion score of the classification probability map and the positioning probability map as the distribution confidence of the final target region;
in the non-maximum suppression algorithm, sorting by the distribution confidence, selecting the detection frame with the highest confidence, attenuating the scores of all other detection frames whose IoU with it exceeds a predefined threshold, and retaining the true positive candidate frames after score attenuation, so as to guide the detection of each cell nucleus in the cell nucleus image to be tested.
Preferably, in the above-mentioned nuclear image detection method provided by the embodiment of the present invention, the feature extraction network of the Faster-rcnn variant model is a resnet50 network, and is formed by connecting four residual blocks, and is used for extracting and learning features of the input image target.
Preferably, in the above method for detecting a cell nucleus image provided in the embodiment of the present invention, the data enhancement method is to randomly expand, crop, and flip the cell nucleus images in the data set sample and to distort their contrast and brightness;
the data standardization method is to enable the cell nucleus image sample data in the data set sample to fall into a [0, 1] interval by adopting linear normalization.
Preferably, in the above method for detecting a nuclear image provided by the embodiment of the present invention, before constructing the fast-rcnn variant model, the method further includes:
the model is initialized using transfer learning.
According to the technical scheme, the cell nucleus image detection method provided by the invention comprises the following steps: acquiring a data set sample with cell nucleus images under a microscope; optimizing the cell nucleus images in the data set sample by adopting a data enhancement method and a data standardization method; constructing a Faster-rcnn variant model of an end-to-end two-stage neural network architecture, wherein the first stage is an RPN network used for generating initial candidate frames; the second stage is a Fast-rcnn network used for classifying and regressing the initial candidate frames; adding a localization prediction head to the Fast-rcnn network, taking the IoU value of the predicted detection frame as an index for measuring localization quality, and taking the fusion score between classification quality and localization quality as a ranking index in the non-maximum suppression algorithm; training the Faster-rcnn variant model with the optimized cell nucleus images; and inputting the cell nucleus image to be tested into the trained Faster-rcnn variant model for detection.
The invention provides a Faster-rcnn variant model of a two-stage neural network architecture, using the IoU of the predicted detection frame as an index for measuring localization quality and the fusion score of classification and localization as a ranking index in the non-maximum suppression algorithm. This reduces the difference between the training target and the testing target, helps optimize the goal of the post-processing procedure, and effectively detects clustered nuclei. The method can thus better meet the challenges present in cell nucleus images, effectively improves the precision and robustness of cell nucleus detection, and can stably output detection results.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the related arts, the drawings used in the description of the embodiments or the related arts will be briefly introduced below, it is obvious that the drawings in the description below are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a flow chart of a nuclear image detection method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a network structure of a Faster-rcnn variant model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a Fusion-NMS process according to an embodiment of the present invention;
fig. 4 is a comparative illustration of detection results provided in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a cell nucleus image detection method, as shown in figure 1, comprising the following steps:
s101, acquiring a data set sample with a cell nucleus image under a microscope;
in practical applications, the nuclear image sample may be derived from a data science publication challenge race dataset (DSB 2018), containing 670 raw nuclear images of different resolutions, wherein the images are extracted under various microscopic imaging conditions, microscopy instruments, operators and staining protocols; the invention can divide the cell nucleus image data set sample: 402 sheets were used for training, 134 sheets for verification, and 134 sheets for testing.
S102, optimizing a cell nucleus image in a data set sample by adopting a data enhancement method and a data standardization method;
specifically, the data enhancement method is to randomly expand, cut, flip, contrast distortion and brightness distortion the cell nucleus image in the data set sample; the data standardization method adopts linear normalization to enable the cell nucleus image sample data in the data set sample to fall in a [0, 1] interval, and can accelerate the training process of the neural network.
S103, constructing an end-to-end two-stage neural network architecture Faster-rcnn variant model, wherein the first stage is an RPN (region proposal network) used for generating initial candidate frames; the second stage is a Fast-rcnn network used for classifying and regressing the initial candidate frames; adding a localization prediction head to the Fast-rcnn network, taking the IoU value of the predicted detection frame as an index for measuring localization quality, and taking the fusion score between classification quality and localization quality as a ranking index in the non-maximum suppression algorithm;
specifically, the RPN network and the Fast-rcnn network are in a cascade relation; a positioning prediction module is added in a Faster-rcnn model to measure the positioning quality of a cell nucleus area, and the IoU value of a prediction detection frame is used as an index for measuring the positioning quality, so that the difference between a training target and a testing target can be better reduced; the use of a Fusion score between classification quality and localization quality as a ranking indicator in a non-maxima suppression algorithm, which may be referred to as Fusion-NMS, helps to optimize the post-processor goal.
S104, training a Faster-rcnn variant model through the optimized cell nucleus image;
and S105, inputting the nuclear image to be tested into a trained Faster-rcnn variant model for detection.
In the cell nucleus image detection method provided by the embodiment of the invention, a Faster-rcnn variant model of a two-stage neural network architecture is constructed and trained, the IoU of the predicted detection frame is used as an index for measuring localization quality, and the fusion score of classification and localization is used as a ranking index in the non-maximum suppression algorithm, so that the difference between the training target and the testing target is reduced, the goal of the post-processing procedure is optimized, and clustered nuclei are effectively detected. The method can better deal with the challenges present in cell nucleus images, effectively improves the precision and robustness of cell nucleus detection, and can stably output detection results.
Further, in practical implementation, in the above method for detecting a cell nucleus image provided by the embodiment of the present invention, before the step S103 of constructing the Faster-rcnn variant model, the method may further include: initializing the model using transfer learning. Specifically, the network model can be initialized with parameters trained on the ImageNet dataset. According to transfer learning theory, a network learns shallow features in its initial layers, and as the number of layers increases, each subsequent layer further abstracts the features of the previous layer; the shallow layers are more generalizable, while the parameters learned by later layers differ more depending on the specific task.
In specific implementation, in the cell nucleus image detection method provided in the embodiment of the present invention, step S103 uses the fusion score between classification quality and localization quality as a ranking index in the non-maximum suppression algorithm, which may specifically include: in the Fast-rcnn network, more accurately localized bounding frames are preserved through the fusion score between classification quality and localization quality, which may be called Fusion-NMS (fusion non-maximum suppression); in Fusion-NMS, the fusion score serves as the confidence for ranking the detection frames. Meanwhile, in the non-maximum suppression algorithm, the detection frame with the highest fusion score, that is, the detection frame with the highest confidence in the ranking, is selected; the scores of all other detection frames whose IoU with it exceeds a predefined threshold are attenuated, and the true positive candidate frames remaining after score attenuation are retained for detection, while frames whose attenuated confidence falls to 0 are deleted. The set of retained frames is finally output, so that adherent and clustered nuclei can be effectively detected.
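A minimal sketch of this procedure follows, assuming a linear Soft-NMS-style decay for scores above the IoU threshold; the function names, the linear decay form, and the threshold defaults are illustrative assumptions, not the patent's exact implementation:

```python
def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def fusion_nms(boxes, cls_scores, loc_scores, decay_thresh=0.5, keep_thresh=0.05):
    """Rank detections by the fused score (S_cls + S_iou) / 2 and greedily
    keep the best frame, linearly decaying (instead of deleting) the scores
    of remaining frames that overlap it beyond decay_thresh."""
    scores = [(c + l) / 2.0 for c, l in zip(cls_scores, loc_scores)]
    remaining = list(range(len(boxes)))
    keep = []
    while remaining:
        m = max(remaining, key=lambda i: scores[i])
        if scores[m] < keep_thresh:
            break
        keep.append(m)
        remaining.remove(m)
        for i in remaining:
            overlap = box_iou(boxes[m], boxes[i])
            if overlap > decay_thresh:
                scores[i] *= 1.0 - overlap  # soft decay, not hard deletion
    return keep
```

Frames whose decayed fusion score stays above `keep_thresh` survive, so an overlapping true positive is re-ranked rather than discarded outright, which is the behaviour the text attributes to Fusion-NMS for adherent nuclei.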
In a specific implementation, in the above cell nucleus image detection method provided in the embodiment of the present invention, the feature extraction network of the Faster-rcnn variant model constructed in step S103 may be a resnet50 network, formed by connecting four residual blocks, and used to extract and learn the features of the input image target. With the four residual blocks serving as the backbone of the network, shallow and deep features can be fused.
As shown in fig. 2, the network structure of the Faster-rcnn variant model is mainly divided into the following five parts:
the first part is Resnet 50: resnet50 is formed by connecting four residual blocks and is used for extracting and continuously learning the characteristics of an input image target;
the second part is FPN: the FPN is used as a characteristic pyramid network, and a shallow characteristic diagram and an up-sampled deep characteristic diagram are fused to detect targets with different scales;
the third part is the RPN network: the RPN network is used for generating candidate frames, and its input is the feature map from the FPN. The RPN first performs a 3 × 3 convolution on the FPN feature map to obtain a 256-dimensional output, which is equivalent to fusing the surrounding 3 × 3 spatial information at each pixel, and then maps each pixel point back to the original image using 9 anchors. Meanwhile, the upper and lower branches respectively classify the anchors into foreground and background and correct them through bounding-frame regression. Through this series of operations, the RPN effectively achieves preliminary target localization;
the fourth part is the RoI-align operation: the regions corresponding to the position coordinates of the preliminary candidate frames generated by the RPN network are converted into fixed-size feature maps within the feature maps obtained by the FPN fusion module, so that they can be fed to the subsequent fully connected layers to judge the category of the target, strengthen the localization information, and accurately determine the coordinate values of the bounding frames;
the fifth part is Classification and Localization: calculating the probability value of each region belonging to each category through a full connection layer and softmax by using a region feature map with a fixed size obtained by RoI-align; and calculating the positioning accuracy in each region through the full-connection layer and the sigmoid function, and obtaining the offset of each region by using the regression of the bounding box, thereby finally obtaining the accurate position of the detection frame.
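To illustrate the anchor mechanism in the RPN part above, a minimal sketch follows; the specific scales and aspect ratios are illustrative assumptions, since the patent only states that 9 anchors are used per position:

```python
def make_anchors(cx, cy, scales=(32, 64, 128), ratios=(0.5, 1.0, 2.0)):
    """Generate 9 anchors (3 scales x 3 aspect ratios) centred at (cx, cy),
    returned as (x1, y1, x2, y2). Each anchor keeps the area scale*scale."""
    anchors = []
    for s in scales:
        for r in ratios:
            w = s * r ** 0.5  # width/height ratio w/h == r
            h = s / r ** 0.5
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors
```

Each feature-map pixel mapped back to the original image thus proposes 9 candidate frames of varying size and shape, which the two RPN branches then score and regress.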
In a specific implementation, in the method for detecting a cell nucleus image provided in the embodiment of the present invention, step S104 trains the Faster-rcnn variant model with the optimized cell nucleus images, which may specifically include: feeding the optimized cell nucleus images in batches into the Faster-rcnn variant model; comparing the binary cross-entropy loss of the classification prediction head, the L_iou loss of the localization prediction head, and the Smooth-L1 loss of the regression prediction head, output by the last layer of the Faster-rcnn variant model, with the real labels of the images to calculate the loss; back-propagating the calculated loss through the network to obtain the gradients of the network parameters; and adjusting the network parameters with a stochastic gradient descent (SGD) optimizer to minimize the loss and optimize the network.
In particular, the binary cross-entropy loss, the L_iou loss, and the Smooth-L1 loss are calculated as follows:
L_bce = -(1/N) Σ_i [G_i · log(P_i) + (1 - G_i) · log(1 - P_i)]
where P_i and G_i represent the predicted feature map and the real label mask, respectively;
L_iou = CrossEntropyLoss(iou_score, iou_target)
iou_target = IoU(bbox_pred, gt_bbox)
where bbox_pred denotes the coordinates of the predicted frame and gt_bbox the coordinates of the real frame corresponding to the predicted frame;
Smooth-L1(x) = 0.5 · x², if |x| < 1; |x| - 0.5, otherwise, with x = f(x_i) - y_i
wherein, f (x)i) And yiRespectively representing the predicted feature map and the true label mask.
It should be noted that the invention may also adjust the model with a dynamic learning rate over a specified number of training steps: when the evaluation index of the network no longer improves, the learning rate is reduced to improve network performance. Meanwhile, within 24 iterations, the parameters of the model are saved when the validation loss reaches its minimum.
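The reduce-on-plateau behaviour described above can be sketched as follows; the patience and decay-factor values are illustrative assumptions:

```python
def reduce_on_plateau(val_losses, lr, patience=3, factor=0.1):
    """Return the (possibly reduced) learning rate: if the last `patience`
    epochs produced no new minimum of the validation loss, decay lr."""
    if len(val_losses) <= patience:
        return lr
    best_before = min(val_losses[:-patience])
    if min(val_losses[-patience:]) >= best_before:
        return lr * factor  # plateau detected: shrink the learning rate
    return lr
```

Calling this after each epoch, alongside checkpointing whenever the validation loss hits a new minimum, reproduces the training schedule the paragraph describes.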
In a specific implementation, in the method for detecting a cell nucleus image provided in the embodiment of the present invention, after training is completed, step S105 inputs the cell nucleus image to be tested into the trained Faster-rcnn variant model for detection, which may specifically include: inputting the cell nucleus image to be tested directly into the trained Faster-rcnn variant model, which predicts on it and outputs a classification probability map, a localization probability map, and the coordinate values of the detection frames of the target regions; taking the fusion score of the classification probability map and the localization probability map as the distribution confidence of the final target regions; and, in the non-maximum suppression algorithm, using the fusion score as the confidence ranking index, namely Fusion-NMS: sorting by the distribution confidence, selecting the detection frame with the highest confidence, attenuating the scores of all other detection frames whose IoU with it exceeds a predefined threshold, and retaining the true positive candidate frames after score attenuation, so as to guide the detection of each cell nucleus in the cell nucleus image to be tested, especially adherent and aggregated nuclei.
It should be noted that most non-maximum suppression algorithms based on region detectors select the detection frame M with the highest confidence score and delete the other frames whose IoU with it exceeds a specified threshold, which can cause valid frames, especially overlapping and aggregated ones, to be missed. In Fusion-NMS, by instead decaying the scores of all other detection frames whose IoU with M exceeds a predefined threshold, the true positive candidate frames remaining after score decay are retained for detection. As shown in fig. 3, the present invention applies the following decay rule:
s_i = s_i, if IoU(M, b_i) < N_t
s_i = s_i · (1 - IoU(M, b_i)), if IoU(M, b_i) ≥ N_t
where s_i is the fusion score of candidate frame b_i and N_t is the predefined IoU threshold.
Here S_Fscore is the fusion score of classification and localization, defined as follows:
S_Fscore = (S_cls + S_iou) / 2
where S_cls denotes the classification score and S_iou denotes the localization score.
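Combining the classification softmax, the sigmoid localization head, and the fusion formula above, a minimal sketch of computing S_Fscore for a single region is as follows (function names are assumptions):

```python
import math

def softmax(logits):
    """Numerically stable softmax over class logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fused_confidence(cls_logits, iou_logit):
    """S_Fscore = (S_cls + S_iou) / 2, where S_cls is the top softmax class
    probability and S_iou the sigmoid output of the localization head."""
    s_cls = max(softmax(cls_logits))
    s_iou = 1.0 / (1.0 + math.exp(-iou_logit))
    return (s_cls + s_iou) / 2.0
```

This fused value is the distribution confidence by which Fusion-NMS ranks the detection frames.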
In practical application, the cell nucleus image detection method provided by the embodiment of the invention can be implemented under the Pytorch deep learning framework, with the computer configured as follows: an Intel Core i5-6600K processor, 16 GB of memory, an NVIDIA V100 graphics card, and a Linux operating system.
The following performance evaluations were performed on the Faster-rcnn variant model of the invention:
in recent years, in the field of biomedical image detection, fast-rcnn has been widely used, which is a two-stage detection model architecture that achieves very good performance in different biological detection applications. So far, the Faster-rcnn has many variants, and at present, many new convolutional neural network design schemes exist, but many still continue the core idea of the Faster-rcnn, and add new modules or integrate other design concepts. Wherein Cascade-rcnn introduces 3 Cascade networks in Faster-rcnn to achieve the aim of continuously optimizing the prediction result; HTC the method leads the frame branches and the mask branches of 3 cascaded networks to be interweaved and trained, and a semantic segmentation branch is added on the basis of fast-rcnn to provide context information. The DetectoRS method provides a Recursive Feature Pyramid (RFP) to replace the original FPN fusion feature information module in Faster-rcnn; ours is the process of the invention.
Table I: Comparison of the detection performance of the method of the present invention with that of the existing methods
[Table I is rendered as an image in the original document]
As shown in Table I, the performance of the method of the present invention is compared with that of the above algorithms. The evaluation indices in the table are the average precision over box-IoU thresholds 0.5 to 0.95 (AP_bbox), the precision at a box-IoU threshold of 0.5 (AP_50), and the precision at a box-IoU threshold of 0.75 (AP_75). Table I clearly shows that the method of the present invention surpasses the previous methods and achieves the best performance on these detection indices.
Table II: Comparison of the segmentation performance of the method of the present invention with that of the existing methods
[Table II is rendered as an image in the original document]
As shown in Table II, the detection results of the method of the present invention were input into an FCN for instance segmentation and compared in performance with the FCN segmentation branches of the same algorithms described above. The evaluation indices in the table are the average precision over mask-IoU thresholds 0.5 to 0.95 (AP_mask), the precision at a mask-IoU threshold of 0.5 (AP_50), the precision at a mask-IoU threshold of 0.75 (AP_75), and the Aggregated Jaccard Index (AJI). The method of the present invention achieves the best performance on these segmentation indices over the previous methods. The improvement in segmentation performance further demonstrates the effectiveness of the detection.
As shown in Fig. 4, which compares the detection results of the method of the present invention with those of existing methods: the first row shows the original input images; the second row shows the ground-truth boxes corresponding to the input images; the third to sixth rows show the detection results of Faster-rcnn, Cascade-rcnn, HTC, and DetectoRS on the biomedical images, respectively. The detection result maps show that nucleus detection based on Faster-rcnn and its variants performs reasonably well, but misses or mis-detects adhered, aggregated, tiny, and elongated nuclei. The last row shows the method of the present invention; the detection images show that, compared with the existing methods, the method of the present invention improves on targets with background interference and of different scales and densities, and can segment the cell nucleus image relatively well.
The cell nucleus image detection method provided by the embodiment of the invention is based on a region convolutional neural network. It adds to the Faster-rcnn network a new branch that predicts localization quality, and uses the fusion score of localization quality and classification quality as the ranking index in the non-maximum suppression algorithm; this is called Fusion-NMS. To handle the clustered and aggregated character of cell nuclei, during Fusion-NMS the method selects the detection box with the maximum fused score and attenuates the scores of all other detection boxes whose overlap with it is greater than a predefined threshold, so that true-positive candidate nuclei whose scores survive the attenuation are retained for detection. Nucleus images present several challenges: large variation in nucleus scale, heavy background interference in the image, and aggregated or adhered nuclei. The method therefore copes better with these challenges, effectively improves the precision and robustness of nucleus detection, and outputs stable detection results.
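The Fusion-NMS procedure above can be sketched as a Soft-NMS-style loop: rank detections by a fused classification/localization score, greedily take the top box, and decay (rather than discard) the scores of boxes that overlap it above a threshold. The fusion function (a geometric mean) and the linear `(1 - IoU)` decay used below are assumptions for illustration; the patent text does not give the exact formulas.

```python
import math

def _iou(a, b):
    """Axis-aligned box IoU; boxes are (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def fusion_nms(boxes, cls_scores, loc_scores, iou_thr=0.5, score_thr=0.05):
    """Sketch of Fusion-NMS: rank by the fused score, keep the best box,
    attenuate the scores of heavily overlapping boxes instead of
    removing them outright."""
    # fuse classification and localization quality into one confidence
    # (geometric mean is an assumption; the patent only says "fused")
    scores = [math.sqrt(c * l) for c, l in zip(cls_scores, loc_scores)]
    dets = sorted(zip(boxes, scores), key=lambda d: d[1], reverse=True)
    keep = []
    while dets:
        best = dets.pop(0)                  # box with the maximum fused score
        keep.append(best)
        survivors = []
        for box, s in dets:
            o = _iou(best[0], box)
            if o > iou_thr:
                s *= (1.0 - o)              # attenuate overlapping scores
            if s > score_thr:               # true positives survive the decay
                survivors.append((box, s))
        dets = sorted(survivors, key=lambda d: d[1], reverse=True)
    return keep
```

Because overlapping boxes are decayed rather than hard-suppressed, a genuinely distinct but adjacent nucleus whose attenuated score stays above the threshold is still detected, which is the motivation given for handling clustered nuclei.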
The embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts the embodiments may be referred to one another.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The embodiment of the invention provides a cell nucleus image detection method comprising the following steps: acquiring a data set sample of cell nucleus images under a microscope; optimizing the cell nucleus images in the data set sample by data enhancement and data standardization; constructing a Faster-rcnn variant model with an end-to-end two-stage neural network architecture, in which the first stage is an RPN network for generating initial candidate boxes, and the second stage is a Fast-rcnn network for classifying and regressing the initial candidate boxes; adding a localization prediction head to the Fast-rcnn network, taking the IoU value of the predicted detection box as the index of localization quality, and taking the fusion score of classification quality and localization quality as the ranking index in the non-maximum suppression algorithm; training the Faster-rcnn variant model on the optimized nucleus images; and inputting the nucleus image to be tested into the trained Faster-rcnn variant model for detection. The invention provides a Faster-rcnn variant model with a two-stage neural network architecture; using the IoU of the predicted detection box as the index of localization quality, and the fusion score of classification and localization as the ranking index in non-maximum suppression, helps optimize the goal of the post-processing procedure and effectively detects clustered nuclei.
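The training step described above combines three loss terms: a binary cross-entropy loss for the classification head, a loss for the localization head against the predicted box's IoU, and a Smooth-L1 loss for the regression head. A minimal sketch follows; using binary cross-entropy for the localization head and unit weights for the three terms are assumptions (the exact localization-loss formula appears only as an equation image in the original patent):

```python
import math

def bce(p, y, eps=1e-7):
    """Binary cross-entropy of a prediction p in (0, 1) against target y."""
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

def smooth_l1(x, beta=1.0):
    """Smooth-L1 (Huber) loss used by the regression head."""
    return 0.5 * x * x / beta if abs(x) < beta else abs(x) - 0.5 * beta

def detection_loss(cls_p, cls_y, loc_p, loc_iou_target, reg_deltas):
    """Total loss for one proposal: classification BCE + localization-head
    loss against the predicted box's IoU target + Smooth-L1 over the
    regression deltas (term weights of 1.0 are an assumption)."""
    return (bce(cls_p, cls_y)
            + bce(loc_p, loc_iou_target)
            + sum(smooth_l1(d) for d in reg_deltas))
```

The computed loss is then back-propagated and the parameters updated with a stochastic gradient descent optimizer, as the method describes.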
Finally, it should also be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above is a detailed description of the cell nucleus image detection method provided by the present invention. The principle and implementation of the invention are explained herein through specific examples; the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (4)

1. A cell nucleus image detection method is characterized by comprising the following steps:
acquiring a data set sample with a cell nucleus image under a microscope;
optimizing images of cell nuclei in the data set sample by adopting a data enhancement method and a data standardization method;
constructing a Faster-rcnn variant model of an end-to-end two-stage neural network architecture, wherein the first stage is an RPN network used for generating initial candidate boxes, and the second stage is a Fast-rcnn network used for classifying and regressing the initial candidate boxes; adding a localization prediction head to the Fast-rcnn network, taking an IoU value of a predicted detection box as an index for measuring localization quality, and retaining localized bounding boxes in the Fast-rcnn network through a fusion score between classification quality and localization quality; meanwhile, in a non-maximum suppression algorithm, selecting the detection box with the highest fusion score, attenuating the scores of all other detection boxes whose overlap with it is greater than a predefined threshold, and retaining the true-positive candidate boxes remaining after score attenuation for detection;
training the Faster-rcnn variant model with the optimized nucleus images, including: feeding the optimized nucleus images in batches into the Faster-rcnn variant model; comparing the binary cross-entropy loss of the classification prediction head, the loss of the localization prediction head (given as an equation image in the original publication), and the Smooth-L1 loss of the regression prediction head, all output from the last layer of the Faster-rcnn variant model, against the real labels of the images to calculate the loss; back-propagating the calculated loss through the network to obtain the gradients of the network parameters; and adjusting the network parameters with a stochastic gradient descent optimizer to minimize the loss;
inputting a nucleus image to be tested into the trained Faster-rcnn variant model for detection, including: inputting the nucleus image to be tested into the trained Faster-rcnn variant model, and outputting a classification probability map, a localization probability map, and the coordinate values of the detection boxes of the target regions; taking the fusion score of the classification probability map and the localization probability map as the final distribution confidence of the target regions; and, in a non-maximum suppression algorithm, sorting by the distribution confidence, selecting the detection box with the highest confidence, attenuating the scores of all other detection boxes whose overlap with it is greater than a predefined threshold, and retaining the true-positive candidate boxes remaining after score attenuation, so as to guide the detection of each cell nucleus in the cell nucleus image to be tested.
2. The cell nucleus image detection method according to claim 1, wherein the feature extraction network of the Faster-rcnn variant model is a resnet50 network formed by connecting four residual blocks, used for extracting and learning features of the input image object.
3. The cell nucleus image detection method according to claim 1, wherein the data enhancement method randomly expands, crops, and flips the cell nucleus images in the data set sample and applies contrast distortion and brightness distortion to them;
the data standardization method uses linear normalization to map the cell nucleus image sample data in the data set sample into the [0, 1] interval.
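The linear normalization in claim 3 can be sketched as a min-max mapping of sample values into [0, 1]; the function name and the handling of a constant-valued image below are illustrative assumptions:

```python
def normalize_to_unit(pixels):
    """Linear (min-max) normalization mapping image sample values
    into the [0, 1] interval, per the data standardization step."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                  # constant image: map everything to 0
        return [0.0 for _ in pixels]
    return [(p - lo) / (hi - lo) for p in pixels]
```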
4. The nuclear image detection method according to claim 1, further comprising, before constructing the Faster-rcnn variant model:
the model is initialized using transfer learning.
CN202110224315.4A 2021-03-01 2021-03-01 Cell nucleus image detection method Active CN112819821B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110224315.4A CN112819821B (en) 2021-03-01 2021-03-01 Cell nucleus image detection method


Publications (2)

Publication Number Publication Date
CN112819821A CN112819821A (en) 2021-05-18
CN112819821B true CN112819821B (en) 2022-06-17

Family

ID=75862489

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110224315.4A Active CN112819821B (en) 2021-03-01 2021-03-01 Cell nucleus image detection method

Country Status (1)

Country Link
CN (1) CN112819821B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113361428B (en) * 2021-06-11 2023-03-24 浙江澄视科技有限公司 Image-based traffic sign detection method
CN113469302A (en) * 2021-09-06 2021-10-01 南昌工学院 Multi-circular target identification method and system for video image
CN114120127B (en) * 2021-11-30 2024-06-07 济南博观智能科技有限公司 Target detection method, device and related equipment
CN114782372B (en) * 2022-04-25 2023-04-18 昆明金域医学检验所有限公司 DNA fluorescence in situ hybridization BCR/ABL fusion state detection method and detection system
CN117274869B (en) * 2023-09-25 2024-03-26 北方工业大学 Cell deformation dynamic classification method and system based on deformation field extraction

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108550133A (en) * 2018-03-02 2018-09-18 浙江工业大学 A kind of cancer cell detection method based on Faster R-CNN
KR20190095901A (en) * 2018-02-07 2019-08-16 울산과학기술원 Method and apparatus for image conversion using machine learning algorithm
CN110245620A (en) * 2019-06-18 2019-09-17 杭州电子科技大学 A kind of non-maximization suppressing method based on attention
CN111951288A (en) * 2020-07-15 2020-11-17 南华大学 Skin cancer lesion segmentation method based on deep learning
CN112069874A (en) * 2020-07-17 2020-12-11 中山大学 Method, system, equipment and storage medium for identifying cells in embryo optical lens image
CN112153382A (en) * 2020-09-21 2020-12-29 南华大学 Dynamic 3D point cloud compression rapid CU partitioning method and device and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A Fast and Accurate Algorithm for Nuclei Instance Segmentation in Microscopy Images;Z. Cheng and A. Qu;《IEEE Access》;20200831;第158679页-158689页 *
An Integration Convolutional Neural Network for Nuclei Instance Segmentation;Aiping Qu,Zhiming Cheng,Xiaofeng He,Yue Li;《2020 IEEE International Conference on Bioinformatics and Biomedicine》;20210113;第1104-1109页 *
Detection of Abnormal Regions in Cervical Cytology Images Based on Convolutional Neural Networks and Transfer Learning; He Junting; 《China Master's Theses Full-text Database, Medicine and Health Sciences》; 20200215; E068-44 *


Similar Documents

Publication Publication Date Title
CN112819821B (en) Cell nucleus image detection method
US11842556B2 (en) Image analysis method, apparatus, program, and learned deep learning algorithm
CN111178197B (en) Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method
CN111488921B (en) Intelligent analysis system and method for panoramic digital pathological image
CN111798416B (en) Intelligent glomerulus detection method and system based on pathological image and deep learning
CN112215117A (en) Abnormal cell identification method and system based on cervical cytology image
US20190205760A1 (en) Using a First Stain to Train a Model to Predict the Region Stained by a Second Stain
CN110648303B (en) Fundus image analysis method, computer device, and storage medium
CN110853005A (en) Immunohistochemical membrane staining section diagnosis method and device
CN106340016A (en) DNA quantitative analysis method based on cell microscope image
CN110796661B (en) Fungal microscopic image segmentation detection method and system based on convolutional neural network
CN111079620A (en) Leukocyte image detection and identification model construction method based on transfer learning and application
CN113658174B (en) Microkernel histology image detection method based on deep learning and image processing algorithm
CN112215217B (en) Digital image recognition method and device for simulating doctor to read film
CN115393351B (en) Method and device for judging cornea immune state based on Langerhans cells
CN112132827A (en) Pathological image processing method and device, electronic equipment and readable storage medium
CN109146891B (en) Hippocampus segmentation method and device applied to MRI and electronic equipment
CN115295154B (en) Tumor immunotherapy curative effect prediction method and device, electronic equipment and storage medium
CN112926652A (en) Fish fine-grained image identification method based on deep learning
CN114445356A (en) Multi-resolution-based full-field pathological section image tumor rapid positioning method
CN115210779A (en) Systematic characterization of objects in biological samples
CN111414930B (en) Deep learning model training method and device, electronic equipment and storage medium
CN115359264A (en) Intensive distribution adhesion cell deep learning identification method
CN111950544A (en) Method and device for determining interest region in pathological image
CN114972202A (en) Ki67 pathological cell rapid detection and counting method based on lightweight neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant