CN117593253A - Method, system, storage medium and device for detecting mitosis of mammary gland pathology image
- Publication number
- Publication number: CN117593253A (application number CN202311430004.9A)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G06T7/0012—Biomedical image inspection
- G06N3/045—Combinations of networks
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/048—Activation functions
- G06N3/09—Supervised learning
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
- G06V10/763—Clustering, non-hierarchical techniques, e.g. based on statistics of modelling distributions
- G06V10/765—Classification using rules for classification or partitioning the feature space
- G06V10/766—Regression, e.g. by projecting features on hyperplanes
- G06V10/806—Fusion of extracted features
- G06V10/82—Recognition using neural networks
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts
- G06T2207/10056—Microscopic image
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30068—Mammography; Breast
- G06V2201/03—Recognition of patterns in medical or anatomical images
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change
Abstract
The invention discloses a task-guided method for detecting mitosis in a mammary gland pathology image with a radial basis network. The method comprises: obtaining the mammary gland pathology image, positive and negative sample image blocks, and mitosis samples; constructing a training model for mitosis detection; training the candidate module of the model with the image block samples and the mammary gland pathology image to obtain image candidate blocks; pre-training the feature extraction module of the model on the candidate blocks to obtain initial weights for the feature extraction module; constructing a feature extractor identical to the feature extraction module, inputting the mitotic samples into the feature extractor, updating the weights of the feature extractor with the initial weights, and outputting mitotic features; initializing the radial basis function centers by running a clustering algorithm on these features; reclassifying the output of the feature extraction module with a convolutional network embedded with radial basis functions; and determining the optimal radial basis function centers by iterative cluster updating. The invention can better handle the variability of the morphological structure of mitotic cells.
Description
Technical Field
The invention relates to an image processing technology, in particular to a method, a system, a storage medium and equipment for detecting mitosis of a mammary gland pathology image.
Background
Mitosis count is an important index for grading the malignancy of breast cancer and can assist diagnosis, treatment and prognosis. In clinical practice, the detection of mitotic cells in breast cancer sections is mainly performed manually: pathologists typically observe pathological tissue sections under a microscope at high-power fields (usually 40× magnification) to identify regions of interest (ROIs), then analyze the overall tissue structure and the local information of the cells, and make decisions based on their own experience. On the one hand, manual detection is complex, cumbersome and time-consuming, and requires a pathologist with substantial expertise. On the other hand, manual detection relies on the personal experience and subjective judgment of pathologists; different pathologists often reach different results on the same pathological section, so the consistency of grading diagnoses is low. These limitations of the manual procedure create a need for automated mitotic cell counting, which is critical to improving detection efficiency and the reliability of counting results.
With the development of deep learning, the field of computer vision has made great progress. Various deep-learning-based models have been proposed for mitosis detection; they can be broadly classified into pixel classification, semantic segmentation and object detection approaches. The mitosis datasets used to train these models mainly carry centroid annotations. Centroid labeling (i.e., weak labeling) provides only the centroid coordinates of each mitotic cell and is therefore easier to produce. Convolutional neural networks of the R-CNN family are typically used to process such centroid-annotated datasets, but they are inefficient on weakly labeled data because centroid labels lack bounding boxes. Deep detection models usually require accurate mitotic bounding box labels; however, the bounding boxes generated by a weakly supervised deep segmentation network are usually rough estimates, so the resulting reference values (ground truth) provide unreliable supervision, which degrades detection performance on centroid-labeled datasets. Moreover, existing mitosis detection methods still cannot effectively handle the large morphological variation within mitotic cells: because the chromosome morphology of a mitotic cell changes considerably across the four stages of division (prophase, metaphase, anaphase and telophase), detection results show high false negative and false positive rates, which affects the detection performance and generalization of the whole model.
Disclosure of Invention
In order to solve the problems, the invention provides a task-guided method for detecting mitosis of a mammary gland pathology image by a radial basis network, which comprises the following steps:
s1, acquiring a mammary gland pathology image, acquiring an image block sample from the mammary gland pathology image, wherein the image block sample comprises a positive sample and a negative sample, the positive sample comprises category information and center offset information of the image block sample, the negative sample only comprises the category information of the image block sample, and a mitosis sample is selected from the image block sample;
s2, constructing a training model for mitosis detection, which comprises the following steps: the device comprises a candidate module and a verification module, wherein the candidate module comprises a feature extraction layer, a side output layer and a fusion layer;
the feature extraction layer outputs feature graphs with different scales, the side output layer utilizes a depth supervision and attention mechanism to supervise, learn and output the feature graphs with different scales output by each feature extraction layer, and the fusion layer carries out weighted fusion on the outputs of the side output layers with different scales to obtain weighted fused features;
the verification module comprises a feature extraction module, a radial base network and a center updating module;
the feature extraction module performs preliminary classification on the mitotic cells of the pathological image based on the weighted and fused features, the radial basis network uses a convolution network embedded with a radial basis function, the radial basis network performs further category judgment on the mitotic cells of the pathological image according to the characteristics of the mitotic samples to obtain a mitotic detection result of the pathological image, and the center updating module is used for iteratively updating the radial basis function center of the radial basis network;
s3, training the candidate module by using the image block sample to obtain a trained candidate module;
inputting the mammary gland pathology image into a trained candidate module to obtain an anchor point array, wherein a positive anchor point is used as a mitotic cell candidate point, candidate blocks are extracted from the mammary gland pathology image by taking the mitotic cell candidate point as a center, and the label of each candidate block is determined according to the distance between the coordinate of each mitotic cell candidate point and the manually marked position coordinate;
s4, constructing a feature extractor, wherein the feature extractor is the same as a feature extraction module of the verification module, and the extracted candidate blocks are input into the feature extraction module of the verification module for classification pre-training to obtain an initial weight of the feature extraction module; the radial basis network guides the center initialization of the radial basis function based on candidate block classification tasks, the optimal radial basis function center is determined through iterative updating of a center updating module, a mitotic sample is input into the feature extractor, the weight of the feature extractor is updated through the initial weight, the feature extractor outputs feature expression of the mitotic sample, the feature expression of the mitotic sample is operated to initialize the radial basis function center of a convolution network embedded with the radial basis function by a K-means clustering algorithm, the initialized convolution network embedded with the radial basis function is used for reclassifying the output of the feature extraction module, and the optimal radial basis function center is determined through iterative clustering updating;
s5, connecting the trained radial basis networks of the candidate module and the verification module to form a mitosis detection model, and inputting the whole mammary gland pathological tissue image to be detected into the mitosis detection model to obtain a detection result of mitosis cells.
Further, the positive sample sampling randomly shifts by taking the manually marked position in the mammary gland pathology image as a reference point, then samples image blocks with the same size and the same reference point inside by taking the shifted position as a center, and stores the shift amount for a position regression task; the negative sample is generated by randomly sampling an image block with the same size as the positive sample in the mammary gland pathology image, and the distance between the center of the negative sample and the reference point is larger than the sampling size of the image block.
Further, the feature extraction layer takes a convolutional neural network as a main network, the main network does not adopt a full-connection layer, each layer of the main network is convolved and then adopts a batch normalization layer and an activation function Relu, and a maximum pooling layer is inserted between the continuous convolution layers;
the input of each side output layer is connected with each pooling layer and the last convolution layer of the feature extraction layer, the channel features are recalibrated by an attention module to obtain the features of different calibrated channels, the features of different calibrated channels are combined by using a convolution layer with the kernel size of 1 multiplied by 1 to obtain combined features, a full convolution layer is used as a classifier of the combined features, and the output of the side output layer is obtained by a softmax layer; and the fusion layer performs weighted fusion on the outputs of the side output layers with different scales to obtain the weighted fused characteristics.
Further, the total loss function of the candidate module during training consists of a loss function of the side output layer and a loss function of the fusion layer;
the loss function of the side output layer comprises position loss and category loss;
the loss function of the fusion layer comprises position loss and category loss;
the expression of the total loss function when the candidate module is trained is as follows:
where L (W) is the total loss function of the candidate block during training, W represents all parameters to be learned in the candidate block,representing class loss of the s-th output of the side output layer, i representing binary label true value of the image block sample, negative sample being identified by 0, positive sample being identified by 1,/->Derived for each side output layer Softmax classifier [0,1 ]]A range of predicted mitotic nucleus probabilities; />Representing class loss of fusion layer, gamma is a super parameter balancing costs between classification and regression tasks, +.>For the position loss of the side output layer, (x, y) is the true value of the nuclear center offset recorded in the image block sampling process, +.>Offset of candidate block estimated for candidate block, +.>Is the loss of position of the fusion layer.
Further, the class loss for each output of the side output layer is defined by a cross entropy function, expressed as follows:

$$\mathcal{L}_{cls}^{(s)}=-\frac{1}{N_{+}+N_{-}}\sum_{i=1}^{N_{+}+N_{-}}\left[\,l_i\log \hat{p}_i^{(s)}+(1-l_i)\log\left(1-\hat{p}_i^{(s)}\right)\right]$$

where $\mathcal{L}_{cls}^{(s)}$ denotes the class loss of the $s$-th output of the side output layer, $N_{+}$ represents the number of positive sample image blocks, $N_{-}$ represents the number of negative sample image blocks, and $l_i$ denotes the binary label true value of the $i$-th input image block sample.
Further, the position loss of the fusion layer is defined using the Euclidean L2 norm, expressed as follows:

$$\mathcal{L}_{loc}^{fuse}=\frac{1}{N_{+}}\sum_{i}\mathbb{1}(l_i=1)\left\|(x_i,y_i)-(\hat{x}_i,\hat{y}_i)\right\|_2^2$$

where $\mathbb{1}(\cdot)$ is the indicator function, $\mathbb{1}(l_i=1)$ equals 1 when $l_i=1$ and 0 otherwise, and $l_i$ denotes the binary label true value of the $i$-th input image block sample.
Further, training of the radial basis network uses a binary cross entropy loss function to perform category judgment learning on the candidate blocks:

$$\mathcal{L}_{rbf}=-\frac{1}{N}\sum_{i=1}^{N}\left[\,l_i\log \hat{p}_i+(1-l_i)\log\left(1-\hat{p}_i\right)\right]$$

where $\mathcal{L}_{rbf}$ is the class loss of the radial basis network, $N$ represents the number of samples, $l_i$ is the binary label true value of the $i$-th candidate block (non-mitotic cells are identified by 0, mitotic cells by 1), and $\hat{p}_i$ is the mitotic nucleus prediction probability of the $i$-th candidate block, obtained as a weighted combination of the different radial basis functions.
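The binary cross entropy used to train the radial basis network can be checked numerically with a few lines of numpy (the function name and the clipping epsilon are illustrative choices, not from the patent):

```python
import numpy as np

def bce_loss(probs, labels, eps=1e-7):
    """Binary cross entropy over candidate blocks (labels: 1 mitotic, 0 not)."""
    p = np.clip(probs, eps, 1.0 - eps)   # guard log() against 0 and 1
    return float(-np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p)))
```

For an uninformative prediction of 0.5 on every block the loss equals ln 2 ≈ 0.693; confident correct predictions drive it toward 0.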
The invention also proposes a task-guided radial basis network system for detecting mitosis of a mammary gland pathology image, comprising:
the data acquisition module is used for acquiring a mammary gland pathology image, acquiring an image block sample from the mammary gland pathology image, wherein the image block sample comprises a positive sample and a negative sample, the positive sample comprises category information and center offset information of the image block sample, the negative sample only comprises the category information of the image block sample, and a mitosis sample is selected from the image block sample;
a model construction module for constructing a training model for mitotic detection, comprising: the device comprises a candidate module and a verification module, wherein the candidate module comprises a feature extraction layer, a side output layer and a fusion layer;
the feature extraction layer outputs feature graphs with different scales, the side output layer utilizes a depth supervision and attention mechanism to supervise, learn and output the feature graphs with different scales output by each feature extraction layer, and the fusion layer carries out weighted fusion on the outputs of the side output layers with different scales to obtain weighted fused features;
the verification module comprises a feature extraction module, a radial base network and a center updating module;
the feature extraction module performs preliminary classification on the mitotic cells of the pathological image based on the weighted and fused features, the radial basis network uses a convolution network embedded with a radial basis function, further performs category judgment on the preliminary classification result of the feature extraction module to obtain a mitotic detection result of the pathological image, and the center updating module is used for iteratively updating the radial basis function center of the radial basis network;
the first-stage training module is used for training the candidate module by using the image block sample to obtain a trained candidate module; inputting the mammary gland pathology image into a trained candidate module to obtain an anchor point array, wherein a positive anchor point is used as a mitotic cell candidate point, candidate blocks are extracted from the mammary gland pathology image by taking the mitotic cell candidate point as a center, and the label of each candidate block is determined according to the distance between the coordinate of each mitotic cell candidate point and the manually marked position coordinate;
the second stage training module of the model is used for training the verification module according to the following method: constructing a feature extractor, wherein the feature extractor is the same as a feature extraction module of the verification module, and inputting the extracted candidate blocks into the feature extraction module of the verification module for classification pre-training to obtain initial weights of the feature extraction module; the radial basis network guides the center initialization of the radial basis function based on the candidate block classification task, and determines the optimal radial basis function center through iterative updating of a center updating module; inputting a mitotic sample into the feature extractor, updating the weight of the feature extractor by using the initial weight, outputting the feature expression of the mitotic sample by the feature extractor, initializing the radial basis function center of a convolution network embedded with a radial basis function by running a K-means clustering algorithm on the feature expression of the mitotic sample, reclassifying the output of a feature extraction module by using the initialized convolution network embedded with the radial basis function, and determining the optimal radial basis function center by adopting iterative cluster updating;
the detection module is used for connecting the trained radial basis networks of the candidate module and the verification module to form a mitosis detection model, and inputting the whole mammary gland pathological tissue image to be detected into the mitosis detection model to obtain a detection result of mitosis cells.
The invention also proposes a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for mitosis detection of a breast pathology image described above.
The invention also proposes an electronic device comprising a processor and a memory interconnected with each other, wherein the memory is adapted to store a computer program comprising computer readable instructions, and the processor is configured to invoke the computer readable instructions to perform the above method for detecting mitosis of a mammary gland pathology image.
The technical scheme provided by the invention has the beneficial effects that:
the invention constructs a task-guided deep radial basis network, which comprises a candidate module and a verification module, wherein the candidate module is trained by using positive and negative samples, candidate points are acquired by using the trained candidate module and a whole mammary gland pathology image so as to obtain candidate blocks, position expression is added into image block information, a deep supervision mechanism is integrated into a candidate block detection network so as to obtain high-quality mitotic cell candidate blocks and more accurate position positioning, and the verification module integrates characteristic extraction and radial basis function expression into a unified frame, so that the advantages of good approximation capability and generalization capability of the radial basis network are fully exerted; the radial basis function center definition is combined with the task of identifying the mitotic cells, and the variability of the morphological structure of the mitotic cells is better treated by utilizing different radial basis function centers.
Drawings
FIG. 1 is a flow chart of a method of mitosis detection of a breast pathology image according to an embodiment of the present invention;
FIG. 2 is a block diagram of a candidate module in an embodiment of the invention;
FIG. 3 is a training process diagram of candidate modules in an embodiment of the invention;
FIG. 4 is a training process diagram of a verification module in an embodiment of the invention;
FIG. 5 is a block diagram of a feature extraction module of a verification module in an embodiment of the invention;
FIG. 6 is a block diagram of an electronic device in an exemplary embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be further described with reference to the accompanying drawings.
The flow chart of the method for detecting mitosis of a mammary gland pathology image according to the embodiment of the invention is shown in fig. 1, and specifically comprises the following steps:
S1, acquiring a mammary gland pathology image and extracting image block samples from it, the image block samples comprising positive samples and negative samples, wherein a positive sample carries both the category information and the center offset information of the image block, while a negative sample carries only the category information; mitosis samples are selected from the image block samples.
Positive samples are obtained by randomly offsetting the manually annotated position in the mammary gland pathology image, which serves as a reference point, and then sampling an image block of fixed size centered on the offset position such that the reference point lies inside the block; the offset of each positive sample is saved for the position regression task.
A negative sample is generated by randomly sampling an image block of the same size as the positive samples from the mammary gland pathology image, such that the distance between the block center and every reference point is larger than the block size.
In a further embodiment, the block size of the sampled positive and negative samples is set to 60×60 pixels, the positive-to-negative sample ratio is 1:2, and the random offset of a positive sample is 0 to 10 pixels.
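The sampling scheme above (60×60 blocks, 1:2 positive:negative ratio, 0-10 pixel random positive offset) can be sketched as follows. All function names are illustrative; the patent does not specify an API.

```python
import random

PATCH = 60        # patch side length in pixels (from the embodiment)
MAX_OFFSET = 10   # random shift applied to positive samples

def sample_positive(cx, cy):
    """Shift the annotated centroid by up to MAX_OFFSET pixels, then cut a
    PATCH x PATCH block centered on the shifted point; the saved (dx, dy)
    is the target of the position regression task."""
    dx = random.randint(-MAX_OFFSET, MAX_OFFSET)
    dy = random.randint(-MAX_OFFSET, MAX_OFFSET)
    top_left = (cx + dx - PATCH // 2, cy + dy - PATCH // 2)
    return {"top_left": top_left, "label": 1, "offset": (dx, dy)}

def is_valid_negative(px, py, annotations):
    """A negative block center must lie farther than the block size from
    every annotated mitosis centroid."""
    return all((px - ax) ** 2 + (py - ay) ** 2 > PATCH ** 2
               for ax, ay in annotations)
```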
S2, constructing a training model for mitosis detection, comprising a candidate module and a verification module, wherein the candidate module comprises a feature extraction layer, side output layers and a fusion layer; the structure of the candidate module is shown in fig. 2.
The feature extraction layer outputs feature maps at different scales; the side output layers apply deep supervision and an attention mechanism to perform supervised learning on the feature maps output at each scale of the feature extraction layer; and the fusion layer performs weighted fusion of the outputs of the side output layers at different scales to obtain the fused features.
The feature extraction layer uses a convolutional neural network as its backbone; the backbone has no fully connected layers, and max pooling layers are inserted between successive convolutional layers.
In a further embodiment, referring to fig. 2, the feature extraction layer comprises 9 convolutions and 3 2×2 poolings. The 1st and 2nd convolutions have 3×3 kernels and 64 output channels; after the 1st and 2nd convolutions, the 1st pooling produces the 1st input of the side output layers. The 1st input then passes through the 3rd and 4th convolutions (3×3 kernels, 128 output channels) and the 2nd pooling to give the 2nd input of the side output layers. The 2nd input passes through the 5th, 6th and 7th convolutions (3×3 kernels, 256 output channels), the 3rd pooling, and then the 8th convolution (3×3 kernel, 512 output channels) and the 9th convolution (1×1 kernel, 1 output channel) to give the 3rd input of the side output layers.
The input of each side output layer is taken from the first two pooling layers and the last convolution layer of the feature extraction layer, and each side output layer is trained with supervision. Each input first passes through a convolutional block attention module (CBAM) to recalibrate the channel features; the recalibrated channel features are combined by a convolution layer with 1×1 kernels to obtain combined features; a fully convolutional layer serves as the classifier of the combined features, and the output of the side output layer is produced by a softmax layer. The fusion layer performs weighted fusion of the outputs of the side output layers at different scales.
In a further embodiment, referring to fig. 2, the 1st input of the side output layers passes through the 1st attention module (CBAM), then a 1st convolution with 32 output channels, then a 1st full convolution with an 18×18 kernel, 3 channels and stride 4, and finally the 1st softmax layer, which yields the classification and regression results forming the 1st side output. The 2nd input passes through the 2nd attention module (CBAM), a 2nd convolution with 32 output channels, a 2nd full convolution with a 7×7 kernel, 3 channels and stride 2, and the 2nd softmax layer, yielding the 2nd side output. The 3rd input passes through the 3rd attention module (CBAM), a 3rd convolution with 32 output channels, a 3rd full convolution with a 1×1 kernel and 3 output channels, and the 3rd softmax layer, yielding the 3rd side output.
The fusion layer first applies a Concat operation to the 3 side outputs, then a convolution with a 1×1 kernel and 3 output channels followed by a softmax layer, giving the final output result.
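A minimal numpy sketch of the fusion step described above: the three side outputs are concatenated channel-wise, mixed by a 1×1 convolution (a per-pixel linear map), and normalized by a softmax over the 3 output channels. The shapes and random weights are illustrative assumptions, not values from the patent.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse(side_outputs, w, b):
    """side_outputs: three (3, H, W) maps; w: (3, 9) 1x1-conv weights."""
    x = np.concatenate(side_outputs, axis=0)                        # Concat -> (9, H, W)
    mixed = np.tensordot(w, x, axes=([1], [0])) + b[:, None, None]  # 1x1 conv -> (3, H, W)
    return softmax(mixed, axis=0)                                   # per-pixel class probs

rng = np.random.default_rng(0)
sides = [rng.normal(size=(3, 4, 4)) for _ in range(3)]
out = fuse(sides, rng.normal(size=(3, 9)), np.zeros(3))
```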
The verification module comprises a feature extraction module, a radial basis network and a center updating module;
the feature extraction module performs a preliminary classification of mitotic cells in the pathology image based on the fused features. The radial basis network is a convolutional network with an embedded radial basis function; it makes a further class decision on the mitotic cells according to the characteristics of the mitosis samples, yielding the mitosis detection result for the pathology image. The center updating module iteratively updates the radial basis function centers of the radial basis network; the continual iterative updating gradually reduces the network error, further optimizes the model and improves mitosis detection performance.
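The core operation of a radial basis network can be sketched as follows: each unit responds with a Gaussian of the distance between the feature vector and a center, and the class score is a weighted combination of the units. The Gaussian basis form, the per-center widths `sigmas` and the sigmoid read-out are standard RBF-network choices assumed here, not details stated by the patent.

```python
import numpy as np

def rbf_layer(features, centers, sigmas):
    """features: (N, D); centers: (K, D); sigmas: (K,) -> (N, K) activations."""
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigmas[None, :] ** 2))

def rbf_predict(features, centers, sigmas, weights, bias):
    """Mitosis probability as a sigmoid over the weighted combination
    of radial basis activations."""
    z = rbf_layer(features, centers, sigmas) @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))
```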
S3, training the candidate module by using the image block sample to obtain a trained candidate module;
the breast pathology image is input into the trained candidate module to obtain an anchor point array, in which positive anchors serve as mitotic cell candidate points. Candidate blocks are extracted from the breast pathology image centered on the candidate points, and the label of each candidate block is determined by the distance between its candidate point coordinates and the manually annotated position coordinates. The training process of the candidate module is shown in fig. 3.
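The candidate-block labelling step can be sketched as follows. The distance threshold `dist_thresh` is a hypothetical parameter, since the patent only states that labels are determined by the distance to the annotated coordinates.

```python
import math

def label_candidates(candidate_points, annotations, dist_thresh=30.0):
    """Label each candidate point 1 (mitotic) if it lies within
    dist_thresh pixels of any manually annotated centroid, else 0."""
    labels = []
    for cx, cy in candidate_points:
        near = any(math.hypot(cx - ax, cy - ay) <= dist_thresh
                   for ax, ay in annotations)
        labels.append(1 if near else 0)
    return labels
```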
The total loss function of the candidate module during training consists of a loss function of the side output layer and a loss function of the fusion layer;
the loss function of the side output layer includes position loss and category loss.
The class loss for each output of the side output layer is defined by a cross entropy function, expressed as follows:

$$\mathcal{L}^{(s)}_{cls} = -\frac{1}{N_+ + N_-}\sum_{i=1}^{N_+ + N_-}\Bigl[l_i\log\hat p^{(s)}_i + (1-l_i)\log\bigl(1-\hat p^{(s)}_i\bigr)\Bigr]$$

where $\mathcal{L}^{(s)}_{cls}$ represents the class loss of the s-th output of the side output layer, $N_+$ represents the number of positive sample image blocks, $N_-$ represents the number of negative sample image blocks, $l_i$ represents the binary label true value of the i-th input image block sample, and $\hat p^{(s)}_i$ is the predicted mitotic nucleus probability.
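As a sketch, the side-output class loss can be computed as a cross entropy over the $N_+$ positive and $N_-$ negative image blocks; the exact normalization is an assumption, since the published text garbles the formula.

```python
import numpy as np

def class_loss(p, labels):
    """p: predicted mitosis probabilities; labels: 0/1 ground truth.
    Cross entropy averaged over all N+ + N- image blocks."""
    p = np.clip(np.asarray(p, dtype=float), 1e-7, 1 - 1e-7)
    l = np.asarray(labels, dtype=float)
    n_pos, n_neg = l.sum(), (1 - l).sum()
    ce = -(l * np.log(p) + (1 - l) * np.log(1 - p)).sum()
    return ce / (n_pos + n_neg)
```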
The loss function of the fusion layer comprises position loss and category loss.
The position loss of the fusion layer is defined using the Euclidean L2 norm, expressed as follows:

$$\mathcal{L}^{fuse}_{loc} = \sum_{i} \mathbb{1}(l_i = 1)\,\bigl\|(x_i, y_i) - (\hat x_i, \hat y_i)\bigr\|_2^2$$

where $\mathbb{1}(\cdot)$ is the indicator function, $\mathbb{1}(l_i = 1)$ equals 1 when $l_i = 1$, $l_i$ represents the binary label true value of the i-th input image block sample, $(x_i, y_i)$ is the true nucleus center offset recorded during image block sampling, and $(\hat x_i, \hat y_i)$ is the offset estimated for the candidate block.
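The position loss can be sketched as a squared L2 distance between true and predicted nucleus-center offsets, gated by the indicator so that only positive samples contribute; whether the sum is further normalized is an assumption left out here.

```python
import numpy as np

def position_loss(offsets_true, offsets_pred, labels):
    """Squared L2 distance between true and predicted nucleus-center
    offsets; the indicator 1(l_i = 1) keeps only positive samples."""
    l = np.asarray(labels, dtype=float)
    diff = np.asarray(offsets_true, dtype=float) - np.asarray(offsets_pred, dtype=float)
    per_sample = (diff ** 2).sum(axis=1)   # ||(x, y) - (x_hat, y_hat)||_2^2
    return float((l * per_sample).sum())
```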
The expression of the total loss function for training the candidate module is as follows:

$$L(W) = \sum_{s}\Bigl[\mathcal{L}^{(s)}_{cls}(W) + \gamma\,\mathcal{L}^{(s)}_{loc}(W)\Bigr] + \mathcal{L}^{fuse}_{cls}(W) + \gamma\,\mathcal{L}^{fuse}_{loc}(W)$$

where L(W) is the total loss function of the candidate module during training; W represents all parameters to be learned in the candidate module; $\mathcal{L}^{(s)}_{cls}$ represents the class loss of the s-th output of the side output layer; $l_i$ represents the binary label true value of an image block sample, a negative sample being identified by 0 and a positive sample by 1; $\hat p^{(s)}_i \in [0,1]$ is the mitotic nucleus probability predicted by the softmax classifier of each side output layer; $\mathcal{L}^{fuse}_{cls}$ represents the class loss of the fusion layer; $\gamma$ is a hyperparameter balancing the costs between the classification and regression tasks; $\mathcal{L}^{(s)}_{loc}$ is the position loss of the side output layer; $(x, y)$ is the true value of the nucleus center offset recorded during image block sampling; $(\hat x, \hat y)$ is the offset estimated for the candidate block; and $\mathcal{L}^{fuse}_{loc}$ is the position loss of the fusion layer.
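A sketch of how the total training loss combines the pieces: side-output class and position losses summed over the scales, plus the fusion-layer terms, with γ trading classification against regression. The summation structure is an assumption consistent with the symbols defined in the text.

```python
def total_loss(side_cls, side_loc, fuse_cls, fuse_loc, gamma=0.5):
    """side_cls, side_loc: per-scale side-output losses (e.g. 3 scales);
    fuse_cls, fuse_loc: fusion-layer losses; gamma balances regression."""
    return (sum(c + gamma * l for c, l in zip(side_cls, side_loc))
            + fuse_cls + gamma * fuse_loc)
```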
S4, constructing a feature extractor identical to the feature extraction module of the verification module; the structure of the feature extraction module of the verification module is shown in FIG. 5. The input of the feature extraction module first passes through a first 3×3 convolution, a first BN (Batch Normalization) layer and a first max pooling to give a first output. The first output passes in turn through a second 3×3 convolution, a second BN layer, a first ReLU activation, a third 3×3 convolution and a third BN layer to give a second output; a skip connection adds the first output to the second output to give a third output. The third output passes through a second ReLU activation to give a fourth output; the fourth output passes in turn through a fourth 3×3 convolution, a fourth BN layer, a third ReLU activation, a fifth 3×3 convolution and a fifth BN layer to give a fifth output; the fourth output, after a first 1×1 convolution, is added to the fifth output by a skip connection to give a sixth output. The sixth output passes through a fourth ReLU activation to give a seventh output; the seventh output passes in turn through a sixth 3×3 convolution, a sixth BN layer, a fifth ReLU activation, a seventh 3×3 convolution and a seventh BN layer to give an eighth output; the seventh output, after a second 1×1 convolution, is added to the eighth output by a skip connection to give a ninth output. The ninth output passes through a sixth ReLU activation to give a tenth output; the tenth output passes in turn through an eighth 3×3 convolution, an eighth BN layer, a seventh ReLU activation, a ninth 3×3 convolution and a ninth BN layer to give an eleventh output; the tenth output, after a third 1×1 convolution, is added to the eleventh output by a skip connection to give a twelfth output. The twelfth output finally passes through a tenth 3×3 convolution, a tenth BN layer, an eighth ReLU activation and a second max pooling to give the final output.
The extracted candidate blocks are input into the feature extraction module of the verification module for classification pre-training, yielding the initial weights of the feature extraction module. The radial basis network initializes its radial basis function centers under the guidance of the candidate block classification task and determines the optimal centers through the iterative updates of the center updating module; using different radial basis function centers better accommodates the variability of mitotic cell morphology and further improves mitotic cell detection accuracy. Specifically: the mitosis samples are input into the feature extractor, whose weights are set to the initial weights; the feature extractor outputs feature expressions of the mitosis samples; the radial basis function centers of the convolutional network with the embedded radial basis function are initialized by running a K-means clustering algorithm on these feature expressions; the initialized network then reclassifies the output of the feature extraction module; the radial basis network weights obtained after reclassification are fed back into the clustering algorithm to obtain new cluster centers, which update the radial basis function centers in the next iteration; the optimal centers are determined by this iterative cluster updating. The training process of the verification module is shown in fig. 4.
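The center-update loop above can be sketched as alternating feature extraction and clustering. A tiny deterministic K-means stands in for the real clustering, and the reclassification step that refines the extractor between rounds is abstracted away; everything here is illustrative.

```python
import numpy as np

def kmeans(features, k, iters=10):
    """Tiny K-means with deterministic, evenly spaced initial centers."""
    idx = np.linspace(0, len(features) - 1, k).astype(int)
    centers = features[idx].astype(float).copy()
    for _ in range(iters):
        d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)                 # nearest center per sample
        for j in range(k):
            if (assign == j).any():
                centers[j] = features[assign == j].mean(axis=0)
    return centers

def update_rbf_centers(extract_features, samples, k, rounds=3):
    """Alternate feature extraction and clustering; in the real system the
    reclassification between rounds refines the weights that produce the
    features, so the centers drift toward the task-optimal ones."""
    centers = None
    for _ in range(rounds):
        centers = kmeans(extract_features(samples), k)
    return centers
```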
Training of the radial basis network uses a binary cross entropy loss function for class-decision learning on the candidate blocks:

$$\mathcal{L}_{rbf} = -\frac{1}{N}\sum_{i=1}^{N}\Bigl[l_i\log\hat p_i + (1-l_i)\log(1-\hat p_i)\Bigr]$$

where $\mathcal{L}_{rbf}$ is the class loss of the radial basis network; N represents the number of samples; $l_i$ represents the binary label true value of the i-th candidate block, non-mitotic cells being identified with 0 and mitotic cells with 1; and $\hat p_i$, the mitotic nucleus prediction probability of the i-th candidate block, is obtained as a weighted combination of the different radial basis functions.
S5, connecting the trained candidate module and the trained radial basis network of the verification module to form a mitosis detection model, and inputting the whole mammary gland pathological tissue image to be detected into the mitosis detection model to obtain the mitotic cell detection result. The image is first input into the candidate module of the mitosis detection model to obtain candidate anchor points; the candidate anchor points are then input into the radial basis network of the mitosis detection model, which further refines and classifies them to produce the final mitosis detection result for the cells of the breast pathological tissue image.
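The two-stage inference described in S5 can be sketched as follows; `propose` and `verify` are stand-ins for the trained candidate module and radial basis network, and the 0.5 decision threshold and 60-pixel patch size are assumptions.

```python
def detect_mitoses(image, propose, verify, patch=60, threshold=0.5):
    """Stage 1: propose(image) -> [(x, y), ...] candidate points.
    Stage 2: verify(image, x, y, patch) -> mitosis probability.
    Keep only candidates the verifier accepts."""
    detections = []
    for x, y in propose(image):
        p = verify(image, x, y, patch)
        if p >= threshold:
            detections.append((x, y, p))
    return detections
```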
The invention also proposes a task-guided radial basis network system for detecting mitosis of a mammary gland pathology image, comprising:
the data acquisition module is used for acquiring a mammary gland pathology image, acquiring an image block sample from the mammary gland pathology image, wherein the image block sample comprises a positive sample and a negative sample, the positive sample comprises category information and center offset information of the image block sample, the negative sample only comprises the category information of the image block sample, and a mitosis sample is selected from the image block sample;
a model construction module for constructing a training model for mitotic detection, comprising: the device comprises a candidate module and a verification module, wherein the candidate module comprises a feature extraction layer, a side output layer and a fusion layer;
the feature extraction layer outputs feature graphs with different scales, the side output layer utilizes a depth supervision and attention mechanism to supervise, learn and output the feature graphs with different scales output by each feature extraction layer, and the fusion layer carries out weighted fusion on the outputs of the side output layers with different scales to obtain weighted fused features;
the verification module comprises a feature extraction module, a radial basis network and a center updating module;
the feature extraction module performs preliminary classification on the mitotic cells of the pathological image based on the weighted and fused features, the radial basis network uses a convolution network embedded with a radial basis function, further performs category judgment on the preliminary classification result of the feature extraction module to obtain a mitotic detection result of the pathological image, and the center updating module is used for iteratively updating the radial basis function center of the radial basis network;
the first-stage training module is used for training the candidate module by using the image block sample to obtain a trained candidate module; inputting the mammary gland pathology image into a trained candidate module to obtain an anchor point array, wherein a positive anchor point is used as a mitotic cell candidate point, candidate blocks are extracted from the mammary gland pathology image by taking the mitotic cell candidate point as a center, and the label of each candidate block is determined according to the distance between the coordinate of each mitotic cell candidate point and the manually marked position coordinate;
the second stage training module of the model is used for training the verification module according to the following method: constructing a feature extractor, wherein the feature extractor is the same as a feature extraction module of the verification module, and inputting the extracted candidate blocks into the feature extraction module of the verification module for classification pre-training to obtain initial weights of the feature extraction module; the radial basis network guides the center initialization of the radial basis function based on the candidate block classification task, and determines the optimal radial basis function center through iterative updating of a center updating module; inputting a mitotic sample into the feature extractor, updating the weight of the feature extractor by using the initial weight, outputting the feature expression of the mitotic sample by the feature extractor, initializing the radial basis function center of a convolution network embedded with a radial basis function by running a K-means clustering algorithm on the feature expression of the mitotic sample, reclassifying the output of a feature extraction module by using the initialized convolution network embedded with the radial basis function, and determining the optimal radial basis function center by adopting iterative cluster updating;
the detection module is used for connecting the trained radial basis networks of the candidate module and the verification module to form a mitosis detection model, and inputting the whole mammary gland pathological tissue image to be detected into the mitosis detection model to obtain a detection result of mitosis cells.
In an exemplary embodiment, a computer readable storage medium is included, the computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for mitotic detection of a breast pathology image described above.
Referring to fig. 6, in an exemplary embodiment, an electronic device is further included that includes at least one processor, at least one memory, and at least one communication bus.
Wherein the memory has stored thereon a computer program comprising computer readable instructions, and the processor invokes the computer readable instructions stored in the memory via the communication bus to perform the above-described detection of mitosis of the breast pathology image.
In this example, the performance evaluation indices of the method of the invention were compared with several other existing methods on the ICPR 2014 and AMIDA2013 public mitotic cell datasets with centroid labeling; the comparison results are shown in Table 1 and Table 2.
Table 1 shows the results of different methods on the ICPR 2014 validation set, among them: DeepMitosis, MSSN (multi-scale and similarity learning convnets, a multiscale and similarity learning convolutional network), SegMitos (a mitosis segmentation model), RCNN-based (based on a regional convolutional neural network), ResNet-101 (Residual Network), and BIA+PMS (Box-supervised Instance-Aware and Pseudo-Mask-supervised Semantic).
Table 1 Results of different methods on the ICPR 2014 validation set
Table 2 shows the results of different methods on the AMIDA2013 validation set, among them: LightweightDNN (Lightweight Deep Neural Networks), DeepResNet+HoughVoting (a deep residual network with Hough voting), SegMitos-random (the mitosis segmentation model with random concentric circle labels), and PartMitosis (a partially supervised deep learning network for mitosis detection).
Table 2 Results of different methods on the AMIDA2013 validation set

| Method | Precision | Recall | F-score |
| --- | --- | --- | --- |
| LightweightDNN | 0.470 | 0.780 | 0.556 |
| DeepResNet+HoughVoting | 0.547 | 0.686 | 0.609 |
| SegMitos-random | 0.669 | 0.677 | 0.673 |
| PartMitosis | 0.743 | 0.658 | 0.698 |
| OurNet | 0.751 | 0.707 | 0.728 |
The comparison shows that the proposed detection model, which combines a deep supervision mechanism with a radial-basis-function deep convolutional network, clearly outperforms the other methods on the performance evaluation indices of precision, recall and F-score.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. The task-guided radial basis network mitosis detection method for the mammary gland pathology image is characterized by comprising the following steps of:
s1, acquiring a mammary gland pathology image, acquiring an image block sample from the mammary gland pathology image, wherein the image block sample comprises a positive sample and a negative sample, the positive sample comprises category information and center offset information of the image block sample, the negative sample only comprises the category information of the image block sample, and a mitosis sample is selected from the image block sample;
s2, constructing a training model for mitosis detection, which comprises the following steps: the device comprises a candidate module and a verification module, wherein the candidate module comprises a feature extraction layer, a side output layer and a fusion layer;
the feature extraction layer outputs feature graphs with different scales, the side output layer utilizes a depth supervision and attention mechanism to supervise, learn and output the feature graphs with different scales output by each feature extraction layer, and the fusion layer carries out weighted fusion on the outputs of the side output layers with different scales to obtain weighted fused features;
the verification module comprises a feature extraction module, a radial basis network and a center updating module;
the feature extraction module performs preliminary classification on the mitotic cells of the pathological image based on the weighted and fused features, the radial basis network uses a convolution network embedded with a radial basis function, further performs category judgment on the preliminary classification result of the feature extraction module to obtain a mitotic detection result of the pathological image, and the center updating module is used for iteratively updating the radial basis function center of the radial basis network;
s3, training the candidate module by using the image block sample to obtain a trained candidate module; inputting the mammary gland pathology image into a trained candidate module to obtain an anchor point array, wherein a positive anchor point is used as a mitotic cell candidate point, candidate blocks are extracted from the mammary gland pathology image by taking the mitotic cell candidate point as a center, and the label of each candidate block is determined according to the distance between the coordinate of each mitotic cell candidate point and the manually marked position coordinate;
s4, constructing a feature extractor, wherein the feature extractor is the same as a feature extraction module of the verification module, and the extracted candidate blocks are input into the feature extraction module of the verification module for classification pre-training to obtain an initial weight of the feature extraction module; the radial basis network guides the center initialization of the radial basis function based on the candidate block classification task, and determines the optimal radial basis function center through iterative updating of a center updating module; inputting a mitotic sample into the feature extractor, updating the weight of the feature extractor by using the initial weight, outputting the feature expression of the mitotic sample by the feature extractor, initializing the radial basis function center of a convolution network embedded with a radial basis function by running a K-means clustering algorithm on the feature expression of the mitotic sample, reclassifying the output of a feature extraction module by using the initialized convolution network embedded with the radial basis function, and determining the optimal radial basis function center by adopting iterative cluster updating;
s5, connecting the trained radial basis networks of the candidate module and the verification module to form a mitosis detection model, and inputting the whole mammary gland pathological tissue image to be detected into the mitosis detection model to obtain a detection result of mitosis cells.
2. The task-guided radial basis network method for mitosis detection of a breast pathology image according to claim 1, wherein positive sample sampling randomly shifts with the manually marked position in the breast pathology image as a reference point, then samples image blocks of the same size with the shifted position as a center and with the reference point inside, and saves the shift for a position regression task; the negative sample is generated by randomly sampling an image block with the same size as the positive sample in the mammary gland pathology image, and the distance between the center of the negative sample and the reference point is larger than the sampling size of the image block.
3. The task-guided radial basis network-to-breast pathology image mitosis detection method according to claim 1, wherein the feature extraction layer uses a convolutional neural network as a backbone network, the backbone network does not adopt a full connection layer, and a maximum pooling layer is inserted between successive convolutional layers;
the input of each side output layer is connected with the first two pooling layers and the last convolution layer of the feature extraction layer, the channel features are recalibrated by an attention module to obtain the features of different calibrated channels, the features of different calibrated channels are combined by using a convolution layer with the kernel size of 1 multiplied by 1 to obtain combined features, a full convolution layer is used as a classifier of the combined features, and the output of the side output layer is obtained by a softmax layer; and the fusion layer performs weighted fusion on the outputs of the side output layers with different scales to obtain the weighted fused characteristics.
4. A task guided radial basis network to breast pathology image mitosis detection method according to claim 3, characterized in that the total loss function of the candidate module training consists of two parts, namely the loss function of the side output layer and the loss function of the fusion layer;
the loss function of the side output layer comprises position loss and category loss;
the loss function of the fusion layer comprises position loss and category loss;
the expression of the total loss function for training the candidate module is as follows:

$$L(W) = \sum_{s}\Bigl[\mathcal{L}^{(s)}_{cls}(W) + \gamma\,\mathcal{L}^{(s)}_{loc}(W)\Bigr] + \mathcal{L}^{fuse}_{cls}(W) + \gamma\,\mathcal{L}^{fuse}_{loc}(W)$$

wherein L(W) is the total loss function of the candidate module during training; W represents all parameters to be learned in the candidate module; $\mathcal{L}^{(s)}_{cls}$ represents the class loss of the s-th output of the side output layer; $l_i$ represents the binary label true value of an image block sample, a negative sample being identified by 0 and a positive sample by 1; $\hat p^{(s)}_i \in [0,1]$ is the mitotic nucleus probability predicted by the softmax classifier of each side output layer; $\mathcal{L}^{fuse}_{cls}$ represents the class loss of the fusion layer; $\gamma$ is a hyperparameter balancing the costs between the classification and regression tasks; $\mathcal{L}^{(s)}_{loc}$ is the position loss of the side output layer; $(x, y)$ is the true value of the nucleus center offset recorded during image block sampling; $(\hat x, \hat y)$ is the offset estimated for the candidate block; and $\mathcal{L}^{fuse}_{loc}$ is the position loss of the fusion layer.
5. The task-guided radial basis network mitosis detection method for a breast pathology image of claim 4, wherein the class loss for each output of the side output layer is defined by a cross entropy function, expressed as follows:

$$\mathcal{L}^{(s)}_{cls} = -\frac{1}{N_+ + N_-}\sum_{i=1}^{N_+ + N_-}\Bigl[l_i\log\hat p^{(s)}_i + (1-l_i)\log\bigl(1-\hat p^{(s)}_i\bigr)\Bigr]$$

wherein $\mathcal{L}^{(s)}_{cls}$ represents the class loss of the s-th output of the side output layer, $N_+$ represents the number of positive sample image blocks, $N_-$ represents the number of negative sample image blocks, $l_i$ represents the binary label true value of the i-th input image block sample, and $\hat p^{(s)}_i$ is the predicted mitotic nucleus probability.
6. The task-guided radial basis network mitosis detection method for a breast pathology image of claim 4, wherein the position loss of the fusion layer is defined using the Euclidean L2 norm, expressed as follows:

$$\mathcal{L}^{fuse}_{loc} = \sum_{i} \mathbb{1}(l_i = 1)\,\bigl\|(x_i, y_i) - (\hat x_i, \hat y_i)\bigr\|_2^2$$

wherein $\mathbb{1}(\cdot)$ is the indicator function, $\mathbb{1}(l_i = 1)$ equalling 1 when $l_i = 1$, and $l_i$ represents the binary label true value of the i-th input image block sample.
7. The task directed radial basis network mitosis detection method for breast pathology images of claim 1, wherein training of the radial basis network uses a binary cross entropy loss function to perform class decision learning on candidate blocks:
ℓ_RBF(W) = −(1/N) Σ_{i=1}^{N} [ l_i log p̂_i + (1 − l_i) log(1 − p̂_i) ]

where ℓ_RBF(W) is the class loss of the radial basis network; N represents the number of samples; l_i represents the binary label true value of the i-th candidate block, a non-mitotic cell being identified by 0 and a mitotic cell by 1; and p̂_i is the mitotic-nucleus prediction probability of the i-th candidate block, obtained as a weighted combination of different radial basis functions.
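The binary cross entropy over candidate blocks can be sketched as follows (illustrative only; the clipping constant is an assumption for numerical stability):

```python
import numpy as np

def rbf_class_loss(p_hat, labels):
    """Binary cross entropy over N candidate blocks; p_hat holds the
    mitotic-nucleus probabilities produced by the weighted combination
    of radial basis functions."""
    p = np.clip(np.asarray(p_hat, float), 1e-7, 1 - 1e-7)
    l = np.asarray(labels, float)
    return float(-np.mean(l * np.log(p) + (1 - l) * np.log(1 - p)))
```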
8. A task directed radial basis network mitosis detection system for a breast pathology image, comprising:
the data acquisition module is used for acquiring a mammary gland pathology image, acquiring an image block sample from the mammary gland pathology image, wherein the image block sample comprises a positive sample and a negative sample, the positive sample comprises category information and center offset information of the image block sample, the negative sample only comprises the category information of the image block sample, and a mitosis sample is selected from the image block sample;
a model construction module for constructing a training model for mitotic detection, comprising: the device comprises a candidate module and a verification module, wherein the candidate module comprises a feature extraction layer, a side output layer and a fusion layer;
the feature extraction layer outputs feature graphs with different scales, the side output layer utilizes a depth supervision and attention mechanism to supervise, learn and output the feature graphs with different scales output by each feature extraction layer, and the fusion layer carries out weighted fusion on the outputs of the side output layers with different scales to obtain weighted fused features;
the verification module comprises a feature extraction module, a radial base network and a center updating module;
the feature extraction module performs preliminary classification on the mitotic cells of the pathological image based on the weighted and fused features, the radial basis network uses a convolution network embedded with a radial basis function, further performs category judgment on the preliminary classification result of the feature extraction module to obtain a mitotic detection result of the pathological image, and the center updating module is used for iteratively updating the radial basis function center of the radial basis network;
the first-stage training module is used for training the candidate module by using the image block sample to obtain a trained candidate module; inputting the mammary gland pathology image into a trained candidate module to obtain an anchor point array, wherein a positive anchor point is used as a mitotic cell candidate point, candidate blocks are extracted from the mammary gland pathology image by taking the mitotic cell candidate point as a center, and the label of each candidate block is determined according to the distance between the coordinate of each mitotic cell candidate point and the manually marked position coordinate;
the second-stage training module is used for training the verification module according to the following method: constructing a feature extractor identical to the feature extraction module of the verification module, and inputting the extracted candidate blocks into the feature extraction module of the verification module for classification pre-training to obtain initial weights of the feature extraction module; the radial basis network guides the initialization of the radial basis function centers based on the candidate block classification task, and the optimal radial basis function centers are determined through iterative updating by the center updating module; specifically, the mitotic samples are input into the feature extractor, whose weights are set to the initial weights, and the feature extractor outputs the feature expressions of the mitotic samples; the radial basis function centers of the convolution network embedded with radial basis functions are initialized by running a K-means clustering algorithm on these feature expressions; the initialized network then reclassifies the output of the feature extraction module, and the optimal radial basis function centers are determined by iterative cluster updating;
the detection module is used for connecting the trained candidate module and the trained verification module, including its radial basis network, to form a mitosis detection model, and inputting the whole mammary gland pathological tissue image to be detected into the mitosis detection model to obtain the detection result of mitotic cells.
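The candidate-block labeling step described for the first-stage training module (assigning each mitotic cell candidate point a label according to its distance to the manually marked coordinates) can be sketched as follows; the distance threshold `radius` is an assumption, as the patent does not fix a value here:

```python
import numpy as np

def label_candidates(candidates, annotations, radius=15.0):
    """Label each candidate point 1 (mitotic) if it lies within `radius`
    pixels of any manually annotated mitosis coordinate, else 0."""
    c = np.asarray(candidates, float)
    a = np.asarray(annotations, float)
    if len(a) == 0:
        return np.zeros(len(c), dtype=int)
    # pairwise Euclidean distances between candidates and annotations
    d = np.sqrt(np.sum((c[:, None, :] - a[None, :, :]) ** 2, axis=2))
    return (d.min(axis=1) <= radius).astype(int)
```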
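The K-means initialization of the radial basis function centers in the second-stage training can be sketched with Lloyd's algorithm over the mitotic-sample feature expressions; the iteration count and seeding strategy are assumptions:

```python
import numpy as np

def init_rbf_centers(features, k, iters=20, seed=0):
    """Initialize k radial basis function centers by running K-means
    (Lloyd's algorithm) on the feature expressions of the mitotic
    samples, as a starting point for the iterative center updates."""
    rng = np.random.default_rng(seed)
    x = np.asarray(features, float)
    # seed centers from k distinct feature vectors
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest center
        d = np.sum((x[:, None, :] - centers[None, :, :]) ** 2, axis=2)
        assign = d.argmin(axis=1)
        # move each center to the mean of its assigned samples
        for j in range(k):
            pts = x[assign == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers
```

The center updating module would then refine these centers by repeated clustering as training proceeds.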
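The class judgment by a weighted combination of radial basis functions can be sketched as a Gaussian-RBF scoring step; the Gaussian kernel form and the parameter shapes are assumptions, since the claims do not specify the basis function:

```python
import numpy as np

def rbf_scores(features, centers, widths, weights):
    """Weighted combination of Gaussian radial basis functions:
    phi_k(x) = exp(-||x - c_k||^2 / (2 * sigma_k^2)),
    score(x) = sum_k w_k * phi_k(x)."""
    x = np.asarray(features, float)   # (n, d) feature vectors
    c = np.asarray(centers, float)    # (k, d) RBF centers
    d2 = np.sum((x[:, None, :] - c[None, :, :]) ** 2, axis=2)  # (n, k)
    phi = np.exp(-d2 / (2 * np.asarray(widths, float) ** 2))
    return phi @ np.asarray(weights, float)
```

A sigmoid over these scores would then yield the candidate-block probability p̂_i used in the binary cross entropy of claim 7.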
9. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1-7.
10. An electronic device comprising a processor and a memory, the processor being interconnected with the memory, wherein the memory is configured to store a computer program comprising computer-readable instructions, and the processor is configured to invoke the computer-readable instructions to perform the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311430004.9A CN117593253A (en) | 2023-10-30 | 2023-10-30 | Method, system, storage medium and device for detecting mitosis of mammary gland pathology image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117593253A true CN117593253A (en) | 2024-02-23 |
Family
ID=89914194
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311430004.9A Withdrawn CN117593253A (en) | 2023-10-30 | 2023-10-30 | Method, system, storage medium and device for detecting mitosis of mammary gland pathology image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117593253A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118469824A (en) * | 2024-07-10 | 2024-08-09 | 川北医学院附属医院 | Vocal cord image recognition system based on image enhancement |
CN118520900A (en) * | 2024-07-23 | 2024-08-20 | 湖南南华生物技术有限公司 | Vision-assisted large-scale cell culture counting method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107506761B (en) | Brain image segmentation method and system based on significance learning convolutional neural network | |
El Achi et al. | Automated diagnosis of lymphoma with digital pathology images using deep learning | |
CN107274386B (en) | artificial intelligent auxiliary cervical cell fluid-based smear reading system | |
CN110909820B (en) | Image classification method and system based on self-supervision learning | |
Rashmi et al. | Breast histopathological image analysis using image processing techniques for diagnostic purposes: A methodological review | |
CN110633758A (en) | Method for detecting and locating cancer region aiming at small sample or sample unbalance | |
CN117593253A (en) | Method, system, storage medium and device for detecting mitosis of mammary gland pathology image | |
Bai et al. | NHL Pathological Image Classification Based on Hierarchical Local Information and GoogLeNet‐Based Representations | |
Zanjani et al. | Cancer detection in histopathology whole-slide images using conditional random fields on deep embedded spaces | |
CN110705565A (en) | Lymph node tumor region identification method and device | |
CN116012353A (en) | Digital pathological tissue image recognition method based on graph convolution neural network | |
Chen et al. | Segmentation of overlapping cervical cells with mask region convolutional neural network | |
CN111694954B (en) | Image classification method and device and electronic equipment | |
CN114445356A (en) | Multi-resolution-based full-field pathological section image tumor rapid positioning method | |
Zhao et al. | Complete three‐phase detection framework for identifying abnormal cervical cells | |
Salama et al. | Enhancing Medical Image Quality using Neutrosophic Fuzzy Domain and Multi-Level Enhancement Transforms: A Comparative Study for Leukemia Detection and Classification | |
CN114782948A (en) | Global interpretation method and system for cervical liquid-based cytology smear | |
CN112633169B (en) | Pedestrian recognition algorithm based on improved LeNet-5 network | |
CN108960005B (en) | Method and system for establishing and displaying object visual label in intelligent visual Internet of things | |
CN113762151A (en) | Fault data processing method and system and fault prediction method | |
CN111210398A (en) | White blood cell recognition system based on multi-scale pooling | |
CN105844299B (en) | A kind of image classification method based on bag of words | |
Thapa et al. | Deep learning for breast cancer classification: Enhanced tangent function | |
CN108304546B (en) | Medical image retrieval method based on content similarity and Softmax classifier | |
US20230196541A1 (en) | Defect detection using neural networks based on biological connectivity |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 20240223 |