CN115082420A - Instance segmentation method for histopathological cell nuclei based on deep learning - Google Patents


Info

Publication number
CN115082420A
CN115082420A (application CN202210832755.2A)
Authority
CN
China
Prior art keywords
segmentation
feature map
convolution
cell nucleus
histopathological
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210832755.2A
Other languages
Chinese (zh)
Inventor
周书航
周扬
王健
赵晶
何勇军
丁博
Current Assignee
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Harbin University of Science and Technology
Priority claimed from application CN202210832755.2A
Publication of CN115082420A
Legal status: Pending

Classifications

    • G06T7/0012 — Biomedical image inspection
    • G06N3/08 — Neural networks; learning methods
    • G06T7/11 — Region-based segmentation
    • G06T7/187 — Segmentation involving region growing, region merging, or connected component labelling
    • G06V10/7715 — Feature extraction, e.g. by transforming the feature space
    • G06V10/774 — Generating sets of training patterns; bootstrap methods
    • G06V10/806 — Fusion of extracted features
    • G06V10/82 — Image or video recognition using neural networks
    • G06T2207/20081 — Training; learning
    • G06T2207/20084 — Artificial neural networks [ANN]
    • G06T2207/20104 — Interactive definition of region of interest [ROI]
    • G06T2207/30024 — Cell structures in vitro; tissue sections in vitro


Abstract

The invention discloses a deep-learning-based method for instance segmentation of histopathological cell nuclei, addressing the problem of accurately segmenting nucleus instances in histopathological image analysis. Instance segmentation of histopathological nuclei must not only distinguish nuclei from the image background but also precisely delineate the contour of each individual nucleus. The complex image background, the lack of sharp boundaries between nuclei, and the large variation in nuclear size and morphology make accurate instance segmentation of histopathological nuclei challenging. The invention provides a histopathological nucleus instance segmentation method based on the CondInst model. Experiments show that the method effectively reduces missed and erroneous segmentation of histopathological nuclei, and that the segmented contours fit the true nuclear contours more closely. The invention is applied to the accurate segmentation of histopathological nucleus instances.

Description

Instance segmentation method for histopathological cell nuclei based on deep learning
Technical Field
The invention relates to instance segmentation of histopathological cell nuclei.
Background
Histopathological images provide a great deal of useful information for the grading and classification of diagnosed cancers, so histopathological image analysis is particularly important in cancer diagnosis. Because the nucleus is the most conspicuous structure in tissue, abnormal changes in nuclei convey much information about disease. The traditional diagnostic approach is for a pathologist to manually examine and analyze diseased cell nuclei under a microscope. Such manual diagnosis is tedious and time-consuming, and its specificity and sensitivity depend on the pathologist's experience. Computer-aided image analysis can effectively address this problem, and accurate segmentation of cell nuclei is a key step of computer-aided histopathological image analysis. Nuclear segmentation is considered a prerequisite for determining cell phenotype, nuclear morphometry, and cell classification, and provides strong support for cancer staging and prognosis.
Instance segmentation of histopathological nuclei not only distinguishes nuclei from the image background but also precisely delineates the contour of each individual nucleus. Image noise, the wide variety of nucleus types, and the large variation in nuclear size and shape make feature extraction difficult and lead to missed or erroneous segmentation; nuclei also frequently lack clear boundaries between them. Together, these factors make accurate instance segmentation of histopathological nuclei challenging. The proposed histopathological nucleus instance segmentation method, based on the CondInst (Conditional Convolutions for Instance Segmentation) model, effectively addresses these problems. The network consists of an effective feature extraction module, an object detection head, an adaptive receptive field segmentation branch, and a conditional convolution segmentation branch. The effective feature extraction module extracts image features and fuses the effective information of multi-scale feature maps to address missed and erroneous segmentation. The feature maps produced by the backbone network are then fed into the detection head of the single-stage, anchor-free FCOS (Fully Convolutional One-Stage object detection) network to obtain the target boxes and the parameters required by the conditional convolution. The conditional convolution branch is the original segmentation branch of CondInst. The adaptive receptive field segmentation branch crops an instance image of corresponding size at the corresponding position of the original image according to the detected box and extracts more accurate cell contour information through deformable convolution. A consistency loss is computed between the segmentation results of the two branches to strengthen the conditional convolution result, and the segmentation result of the conditional convolution branch is finally output.
Disclosure of Invention
The invention aims to solve the problem of accurate instance segmentation of histopathological cell nuclei, and provides a histopathological nucleus instance segmentation method based on the CondInst model.
The above object of the invention is mainly achieved by the following technical scheme:
s1, performing data enhancement on the data set;
data enhancement is performed on the data set through cropping, overlapping, and flipping operations.
S2, sending the picture obtained in the step S1 into an effective feature extraction module to obtain a multi-scale feature map;
as shown in fig. 2, the effective feature extraction module is composed of a Global Average Pooling (GAP) module and a Gate module. During multi-scale information fusion, the GAP module extracts the global semantic features of the high-level feature map while discarding positional features, and the Gate module filters the low-level features to retain high-quality features.
The GAP module is shown in FIG. 3. Global average pooling compresses each two-dimensional channel matrix into a single real number that carries global semantic features. For a feature map $X \in \mathbb{R}^{C \times H \times W}$, where $C$ is the number of channels and $H$ and $W$ are the height and width of the feature map, the pooled value of channel $c$ is

$$z_c = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} x_c(i, j) \qquad (1)$$

A $1 \times 1$ convolution then changes the number of channels to $C/r$, where $C$ is the original channel count of the feature map and $r$ is a hyperparameter, and a final $1 \times 1$ convolution restores the channel count so as to fuse the information of the different channels. The features obtained by the GAP module are added pixel by pixel to the bottom-level feature map:

$$F_{\text{out}} = F_{\text{GAP}} + F_{\text{low}} \qquad (2)$$

where $F_{\text{GAP}}$ is the feature map obtained by the GAP module, $F_{\text{low}}$ is the original feature map, and $F_{\text{out}}$ is the result of the feature fusion.

The Gate module controls the passing of effective features in the low-level feature map: a $1 \times 1$ depthwise separable convolution followed by a sigmoid function produces a new feature map, which is multiplied element-wise with the original feature map to obtain a feature map fused with high-quality low-level features.
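As an illustration, the GAP squeeze-and-restore path and the sigmoid gate described above can be sketched in NumPy. The reduction ratio r = 4, the ReLU in the bottleneck, and the random stand-in weights are illustrative assumptions, not values fixed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def gap_module(x, r=4):
    """GAP path: squeeze each channel to one real number (Eq. 1), reduce
    channels C -> C/r with a 1x1 conv (a matrix over channels), restore
    to C, and broadcast the result back over the spatial grid."""
    C, H, W = x.shape
    z = x.mean(axis=(1, 2))                          # Eq. (1): one value per channel
    w_down = rng.standard_normal((C // r, C)) * 0.1  # 1x1 conv, C -> C/r
    w_up = rng.standard_normal((C, C // r)) * 0.1    # 1x1 conv, C/r -> C
    z = w_up @ np.maximum(w_down @ z, 0.0)           # ReLU bottleneck (assumed)
    return np.broadcast_to(z[:, None, None], x.shape).copy()

def gate_module(x):
    """Gate path: per-channel (depthwise) 1x1 conv + sigmoid gives a gate
    that is multiplied element-wise with the low-level feature map."""
    w = rng.standard_normal(x.shape[0]) * 0.1
    gate = 1.0 / (1.0 + np.exp(-w[:, None, None] * x))
    return gate * x

high = rng.standard_normal((16, 32, 32))     # high-level feature map
low = rng.standard_normal((16, 32, 32))      # low-level feature map
fused = gap_module(high) + gate_module(low)  # Eq. (2): pixel-wise addition
print(fused.shape)
```

The global pathway contributes the same value at every spatial position of a channel, which is what lets semantic context from the high-level map reach every pixel of the low-level map.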
S3, sending the multi-scale feature maps obtained in S2 into the object detection head to obtain the target boxes and the parameters required by the conditional convolution;
the multi-scale feature maps obtained in S2 are sent into the detection head of the object detection network FCOS, which learns, pixel by pixel, the parameters required by the conditional convolution and the predicted target-box information (classification, regression, center-ness, etc.); screening then yields the prediction boxes of the positive samples and the parameters required by the conditional convolution. The parameters obtained here correspond one-to-one to the prediction boxes.
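One of the per-pixel quantities the detection head predicts is center-ness. For concreteness, here is a sketch of the standard FCOS center-ness formula (taken from FCOS itself; the patent text does not spell it out):

```python
import numpy as np

def centerness(l, t, r, b):
    """FCOS center-ness of a location given its distances l, t, r, b to
    the four sides of its ground-truth box: 1.0 at the exact centre,
    decaying towards 0 near the box edges."""
    return np.sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))

centre = centerness(10, 10, 10, 10)     # location at the box centre
off = centerness(2, 10, 18, 10)         # location far to one side
print(round(centre, 3), round(off, 3))  # -> 1.0 0.333
```

During screening, low-center-ness locations are down-weighted, so the surviving positive samples (and their conditional convolution parameters) come from pixels near nucleus centres.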
S4, sending the highest-resolution feature map among the multi-scale feature maps obtained in S2 into the conditional convolution segmentation branch to obtain a cell nucleus instance segmentation result;
the highest-resolution feature map obtained in S2 is sent to the conditional convolution segmentation branch of CondInst and combined with a coordinate map, where the coordinate map holds the relative coordinates from each position on the feature map to the position (x, y) at which the convolution kernels of the mask head were generated. The combined feature map is then passed through the mask head for the conditional convolution operation. The parameters required by the mask head are those obtained in S3; they correspond one-to-one to the prediction boxes, and convolving the feature map with the parameters of each prediction box yields the instance matching that box. The mask head is a very compact FCN structure with three 1 × 1 convolutions, each with 8 channels, using ReLU as the activation function (except after the last convolution).
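The dynamic mask head described above can be sketched as follows: the instance-specific parameters are reshaped into the weights of three 1 × 1 convolutions and applied to the feature map concatenated with relative coordinates. Random weights stand in for the parameters that S3 would predict:

```python
import numpy as np

rng = np.random.default_rng(1)

def rel_coords(H, W, cx, cy):
    """Relative (x, y) coordinates from every feature-map location to the
    position (cx, cy) where this instance's kernels were generated."""
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    return np.stack([xs - cx, ys - cy])          # shape (2, H, W)

def mask_head(feat, coords, params):
    """Three 1x1 convs, ReLU between layers, no activation on the last
    (a sigmoid would normally follow to produce mask probabilities)."""
    x = np.concatenate([feat, coords])           # (C + 2, H, W)
    for i, w in enumerate(params):
        x = np.einsum('oc,chw->ohw', w, x)       # 1x1 convolution
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x                                     # (1, H, W) mask logits

C, H, W = 8, 16, 16
feat = rng.standard_normal((C, H, W))
params = [rng.standard_normal((8, C + 2)) * 0.1,  # conv1: (C + 2) -> 8
          rng.standard_normal((8, 8)) * 0.1,      # conv2: 8 -> 8
          rng.standard_normal((1, 8)) * 0.1]      # conv3: 8 -> 1
logits = mask_head(feat, rel_coords(H, W, cx=8, cy=8), params)
print(logits.shape)
```

Because the weights differ per instance and the coordinate channels differ per generating position, the same shared feature map produces a distinct mask for every prediction box.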
S5, using the target boxes obtained in S3, cutting out pictures of corresponding size at the corresponding positions of the picture obtained in S1, and sending the cropped pictures into the adaptive receptive field segmentation branch to obtain a cell nucleus instance segmentation result; according to the positions and sizes of the target boxes obtained in S3, the corresponding regions of the picture obtained in S1 are cropped, and the cropped pictures are adjusted to a uniform size by RoIAlign. Convolution is then performed on these small pictures, each of which contains only a single cell nucleus instance; because each small picture contains a single instance, the segmentation is not influenced by other nuclei, and the segmented nuclear contour is closer to the true contour. Since nuclei are mostly elliptical and different instances differ in size and shape, it is preferable to determine the size of the receptive field adaptively. Deformable convolution is therefore chosen here: a directional offset is added to every element of the convolution kernel, so that the kernel can take on an arbitrary shape.
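The core idea of the deformable convolution used in this branch, adding a 2-D offset to every sampling position of the kernel, can be sketched for a single 3 × 3 kernel with bilinear sampling. The offsets here are illustrative constants, not learned values:

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinearly sample a 2-D array at fractional position (y, x)."""
    y0, x0 = int(y), int(x)
    dy, dx = y - y0, x - x0
    return (img[y0, x0] * (1 - dy) * (1 - dx) + img[y0, x0 + 1] * (1 - dy) * dx
            + img[y0 + 1, x0] * dy * (1 - dx) + img[y0 + 1, x0 + 1] * dy * dx)

def deform_conv_at(img, cy, cx, kernel, offsets):
    """One output location of a 3x3 deformable convolution: each kernel
    tap samples at its regular grid position plus a 2-D offset."""
    taps = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)]
    return sum(kernel[ky + 1, kx + 1] * bilinear(img, cy + ky + oy, cx + kx + ox)
               for (ky, kx), (oy, ox) in zip(taps, offsets))

img = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.full((3, 3), 1.0 / 9.0)           # averaging kernel
plain = deform_conv_at(img, 2, 2, kernel, [(0.0, 0.0)] * 9)    # ordinary conv
shifted = deform_conv_at(img, 2, 2, kernel, [(0.5, 0.0)] * 9)  # taps 0.5 px down
print(round(plain, 3), round(shifted, 3))     # -> 12.0 14.5
```

With zero offsets the operation reduces to an ordinary 3 × 3 convolution; non-zero offsets let the sampling pattern stretch to follow an elliptical nuclear boundary.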
S6, computing the consistency loss between the segmentation result obtained in S4 and the segmentation result obtained in S5, and strengthening the segmentation result of S4;
the consistency loss is computed with focal loss, of the specific form

$$FL(p_t) = -(1 - p_t)^{\gamma} \log(p_t) \qquad (3)$$

letting

$$p_t = \begin{cases} p, & y = 1 \\ 1 - p, & \text{otherwise} \end{cases} \qquad (4)$$

where $\gamma$ is the modulating factor, $y = 1$ denotes that the two segmentation results make the same prediction, $p$ is the predicted probability that the two segmentation results are the same, and $p_t$ is the model's estimate of that probability. The larger $p_t$ is, the closer the two segmentation results are, and the closer the nuclear contour segmented by the conditional convolution is to the true contour.
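A minimal NumPy sketch of this consistency term, treating the adaptive-branch mask as a 0/1 target for the conditional-convolution probabilities. The value γ = 2 is an assumption, since the text does not fix the modulating factor:

```python
import numpy as np

def focal_consistency(p_cond, target, gamma=2.0, eps=1e-7):
    """Focal loss between the conditional-convolution probabilities
    p_cond and the adaptive-branch mask `target` (taken as 0/1 labels).
    p_t is the probability the two branches agree (Eq. 4); the loss
    -(1 - p_t)^gamma * log(p_t) (Eq. 3) is averaged over pixels."""
    p_t = np.where(target == 1, p_cond, 1.0 - p_cond)
    p_t = np.clip(p_t, eps, 1.0 - eps)
    return np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t))

target = np.array([[1, 1], [0, 0]])
close = np.array([[0.9, 0.8], [0.1, 0.2]])   # branches largely agree
far = np.array([[0.3, 0.4], [0.7, 0.6]])     # branches disagree
print(focal_consistency(close, target) < focal_consistency(far, target))  # -> True
```

The modulating term $(1 - p_t)^{\gamma}$ keeps pixels where the branches already agree from dominating the gradient, focusing training on the disputed boundary pixels.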
Effects of the invention
The invention provides a histopathological cell nucleus instance segmentation method based on the CondInst model that achieves accurate segmentation of nucleus instances. The effective feature extraction module extracts picture features and fuses the effective information of the multi-scale feature maps, addressing missed and erroneous segmentation; the adaptive receptive field segmentation branch extracts more accurate cell contour information through deformable convolution; and a consistency loss is computed between the result of this branch and that of the original CondInst conditional convolution segmentation branch, strengthening the conditional convolution segmentation result.
Drawings
FIG. 1 is the network structure of the histopathological cell nucleus instance segmentation method based on the CondInst model;
FIG. 2 is the backbone network architecture;
FIG. 3 is the GAP module;
FIG. 4 is the Gate module;
Detailed Description of the Invention
The first embodiment is as follows:
in order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the method for instance segmentation of pathological cell nuclei provided herein performs the following steps:
s1, performing data enhancement on the data set;
s2, sending the picture obtained in S1 into the effective feature extraction module to obtain multi-scale feature maps;
s3, sending the multi-scale feature maps obtained in S2 into the object detection head to obtain the target boxes and the parameters required by the conditional convolution;
s4, sending the highest-resolution feature map among the multi-scale feature maps obtained in S2 into the conditional convolution segmentation branch to obtain a cell nucleus instance segmentation result;
s5, using the target boxes obtained in S3, cutting out pictures of corresponding size at the corresponding positions of the picture obtained in S1, and sending the cropped pictures into the adaptive receptive field segmentation branch to obtain a cell nucleus instance segmentation result;
s6, computing the consistency loss between the segmentation results obtained in S4 and S5 and strengthening the segmentation result of S4.
The embodiment of the invention comprises an effective feature extraction module, an object detection head, an adaptive receptive field segmentation branch, and a conditional convolution segmentation branch. The effective feature extraction module extracts picture features and fuses the effective information of the multi-scale feature maps to address missed and erroneous segmentation. The feature maps obtained by the backbone network are then sent into the detection head of the single-stage, anchor-free FCOS network to obtain the target boxes and the parameters required by the conditional convolution; the conditional convolution branch is the original CondInst segmentation branch. The adaptive receptive field segmentation branch crops instance pictures of corresponding size at the corresponding positions of the original image according to the detected boxes and extracts more accurate cell contour information through deformable convolution. A consistency loss is computed between the segmentation results of the two branches to strengthen the conditional convolution result, and the segmentation result of the conditional convolution branch is finally output.
The following examples illustrate the invention in detail:
the model training as shown in fig. 1 comprises the steps of:
s1, performing data enhancement on the data set;
The MoNuSeg dataset consists of multi-organ H&E-stained images at a resolution of 1000 × 1000 pixels, comprising 30 training images and 14 test images. The training pictures are cropped into fixed-size patches, augmented with horizontal flipping, vertical flipping, and combined horizontal-and-vertical flipping, and normalized; the test pictures are cropped to the same patch size and normalized.
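The three flip augmentations, applied identically to an image patch and its instance mask so the labels stay aligned, can be sketched as:

```python
import numpy as np

def flip_augment(image, mask):
    """Return the original patch plus its horizontal, vertical, and
    combined flips, with the mask transformed identically each time."""
    pairs = [(image, mask)]
    for axes in ((1,), (0,), (0, 1)):        # H-flip, V-flip, both
        pairs.append((np.flip(image, axis=axes), np.flip(mask, axis=axes)))
    return pairs

img = np.arange(16).reshape(4, 4)
msk = (img % 2 == 0).astype(int)
augmented = flip_augment(img, msk)
print(len(augmented))                        # -> 4 (original + three flips)
```

Flips are a natural choice for histopathology because tissue has no canonical orientation, so every flipped patch is as realistic as the original.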
S2, sending the picture obtained in the step S1 into an effective feature extraction module to obtain a multi-scale feature map;
the pictures processed in S1 are sent to a ResNet (Deep residual network) network to obtain feature maps of different sizes, and sent to an effective feature extraction module shown in fig. 2, where the effective feature extraction module is composed of a Global Average Pooling module (GAP) and a Gate module, the GAP module is responsible for extracting Global semantic features of a high-dimensional feature map when multi-scale information is fused, discarding location features, and the Gate module filters low-dimensional features to obtain high-quality features.
The GAP module, as shown in fig. 3, compresses each two-dimensional channel matrix into a single real number by global average pooling; this real number carries global semantic features. For a feature map $X \in \mathbb{R}^{C \times H \times W}$, where $C$ is the number of channels and $H$ and $W$ are the height and width of the feature map, the pooled value of channel $c$ is

$$z_c = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} x_c(i, j) \qquad (1)$$

A $1 \times 1$ convolution then changes the number of channels to $C/r$, where $C$ is the original channel count of the feature map and $r$ is a hyperparameter, and a final $1 \times 1$ convolution restores the channel count so as to fuse the information of the different channels. The features obtained by the GAP module are added pixel by pixel to the bottom-level feature map:

$$F_{\text{out}} = F_{\text{GAP}} + F_{\text{low}} \qquad (2)$$

where $F_{\text{GAP}}$ is the feature map obtained by the GAP module, $F_{\text{low}}$ is the original feature map, and $F_{\text{out}}$ is the result of the feature fusion.

The Gate module controls the passing of effective features in the low-level feature map: a $1 \times 1$ depthwise separable convolution followed by a sigmoid function produces a new feature map, which is multiplied element-wise with the original feature map to obtain a feature map fused with high-quality low-level features. The multi-scale feature maps obtained from the effective feature extraction module have sizes of 1/8, 1/16, 1/32, 1/64, and 1/128 of the picture obtained in S1, respectively.
S3, sending the multi-scale feature maps obtained in S2 into the object detection head to obtain the target boxes and the parameters required by the conditional convolution;
the multi-scale feature maps obtained in S2 are sent into the detection head of the object detection network FCOS, which learns, pixel by pixel, the parameters required by the conditional convolution and the predicted target-box information (classification, regression, center-ness, etc.); screening then yields the prediction boxes of the positive samples and the parameters required by the conditional convolution. The parameters obtained here correspond one-to-one to the prediction boxes.
S4, sending the highest-resolution feature map among the multi-scale feature maps obtained in S2 into the conditional convolution segmentation branch to obtain a cell nucleus instance segmentation result;
the highest-resolution feature map obtained in S2 is sent to the conditional convolution segmentation branch of CondInst and combined with a coordinate map, where the coordinate map holds the relative coordinates from each position on the feature map to the position (x, y) at which the convolution kernels of the mask head were generated. The combined feature map is then passed through the mask head for the conditional convolution operation. The parameters required by the mask head are those obtained in S3; they correspond one-to-one to the prediction boxes, and convolving the feature map with the parameters of each prediction box yields the instance matched with that box. The mask head is a very compact FCN structure with three 1 × 1 convolutions, each with 8 channels, using ReLU as the activation function (except after the last convolution).
S5, using the target boxes obtained in S3, cutting out pictures of corresponding size at the corresponding positions of the picture obtained in S1, and sending the cropped pictures into the adaptive receptive field segmentation branch to obtain a cell nucleus instance segmentation result;
the corresponding regions of the picture obtained in S1 are cropped according to the positions and sizes of the target boxes obtained in S3, and the cropped pictures are adjusted to a uniform size by RoIAlign. These small pictures, each containing only a single nucleus instance, are then passed through four deformable convolution operations (convolution kernel size 3 × 3; channel numbers 16, 8, and 1).
S6, computing the consistency loss between the segmentation result obtained in S4 and the segmentation result obtained in S5, and strengthening the segmentation result of S4;
the corresponding regions of the segmentation result obtained in S4 are cropped according to the positions and sizes of the target boxes obtained in S3 and adjusted to a uniform size by RoIAlign; the consistency loss between these crops and the segmentation result obtained in S5 is computed with focal loss to strengthen the result of the conditional convolution branch, so that the predicted nuclear contours fit the true contours more closely.
The focal loss has the specific form

$$FL(p_t) = -(1 - p_t)^{\gamma} \log(p_t) \qquad (3)$$

letting

$$p_t = \begin{cases} p, & y = 1 \\ 1 - p, & \text{otherwise} \end{cases} \qquad (4)$$

where $\gamma$ is the modulating factor, $y = 1$ denotes that the two segmentation results make the same prediction, $p$ is the predicted probability that the two segmentation results are the same, and $p_t$ is the model's estimate of that probability. The larger $p_t$ is, the closer the two segmentation results are, and the closer the nuclear contour segmented by the conditional convolution is to the true contour. Finally, the segmentation result of the conditional convolution branch is output.
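The RoIAlign resizing used in S5 and S6, cropping a box and bilinearly resampling it onto a fixed grid, can be sketched as follows. The 28 × 28 output size and one sample per bin are illustrative assumptions, since the patent's values are not legible in this text:

```python
import numpy as np

def roi_align(feat, box, out_size=28):
    """Crop box = (y0, x0, y1, x1) from a 2-D feature map and bilinearly
    resample it onto a fixed out_size x out_size grid, one sample per
    bin (assumes the box lies strictly inside the feature map)."""
    y0, x0, y1, x1 = box
    out = np.empty((out_size, out_size))
    for i, y in enumerate(np.linspace(y0, y1, out_size)):
        for j, x in enumerate(np.linspace(x0, x1, out_size)):
            iy, ix = int(y), int(x)
            dy, dx = y - iy, x - ix
            out[i, j] = (feat[iy, ix] * (1 - dy) * (1 - dx)
                         + feat[iy, ix + 1] * (1 - dy) * dx
                         + feat[iy + 1, ix] * dy * (1 - dx)
                         + feat[iy + 1, ix + 1] * dy * dx)
    return out

feat = np.arange(100, dtype=float).reshape(10, 10)
crop = roi_align(feat, (2.0, 3.0, 6.0, 8.0))
print(crop.shape)                     # -> (28, 28)
```

Because sampling positions are kept fractional rather than snapped to the grid, boxes of any size and position map onto the same fixed grid without the quantization error of plain RoI pooling, which is why both branches can be compared pixel-for-pixel in the consistency loss.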
The present invention is capable of other embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and scope of the present invention.

Claims (4)

1. A deep-learning-based instance segmentation method for histopathological cell nuclei, characterized by comprising the following steps:
s1, performing data enhancement on the data set;
s2, sending the picture obtained in step S1 into the effective feature extraction module to obtain multi-scale feature maps;
s3, sending the multi-scale feature maps obtained in S2 into the object detection head to obtain the target boxes and the parameters required by the conditional convolution;
s4, sending the highest-resolution feature map among the multi-scale feature maps obtained in S2 into the conditional convolution segmentation branch to obtain a cell nucleus instance segmentation result;
s5, using the target boxes obtained in S3, cutting out pictures of corresponding size at the corresponding positions of the picture obtained in S1, and sending the cropped pictures into the adaptive receptive field segmentation branch to obtain a cell nucleus instance segmentation result;
s6, computing the consistency loss between the segmentation results obtained in S4 and S5 and strengthening the segmentation result of S4.
2. The deep-learning-based instance segmentation method of histopathological cell nuclei according to claim 1, wherein the effective feature extraction module in step S2 is as follows:
the effective feature extraction module consists of a Global Average Pooling (GAP) module and a Gate module; during multi-scale information fusion, the GAP module extracts the global semantic features of the high-dimensional feature map and discards position features, while the Gate module filters the low-dimensional features to retain high-quality features;
in the GAP module, each two-dimensional channel plane is compressed into one real number by global average pooling, and this real number carries global semantic information; for a feature map x with C channels, where C is the number of channels and H and W are respectively the height and width of the feature map, the pooled value of channel c is obtained as:

z_c = (1 / (H × W)) Σ_{i=1..H} Σ_{j=1..W} x_c(i, j)    (1)

then a 1 × 1 convolution changes the number of channels of the feature map to C/r, where C is the original number of channels of the feature map and r is a hyper-parameter, and finally a second 1 × 1 convolution restores the number of channels, so that information from different channels is fused; the features obtained by the GAP module are added pixel by pixel to the bottom-layer feature map:

Y = X + G    (2)

where G is the feature map obtained by the GAP module, X is the original feature map, and Y is the result of the feature fusion;
the Gate module controls the passing of effective features in the low-dimensional feature map: a 3 × 3 depthwise separable convolution followed by a sigmoid function yields a new feature map, which is multiplied by the original feature map to obtain a feature map fused with low-dimensional high-quality features.
3. The deep-learning-based instance segmentation method of histopathological cell nuclei according to claim 1, wherein the adaptive receptive field segmentation branch in step S5 comprises the following steps:
cropping the corresponding area of the original image according to the position and size of the target frame generated by target detection, and adjusting the cropped pictures to a uniform size through RoIAlign; a deformable convolution is then applied to each small picture containing only a single cell nucleus instance to obtain the segmentation result.
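The crop-and-resize step can be sketched as a single-channel bilinear crop. This is a simplified stand-in for RoIAlign (one sampling point per output cell, the deformable convolution that follows is omitted), and the box convention (x0, y0, x1, y1) in pixel coordinates is an assumption:

```python
import numpy as np

def crop_and_resize(img, box, out_size=(28, 28)):
    """Bilinearly sample the region box = (x0, y0, x1, y1) of a single-channel
    image onto a fixed out_size grid (RoIAlign-style, simplified)."""
    h, w = img.shape
    x0, y0, x1, y1 = box
    oh, ow = out_size
    ys = np.linspace(y0, y1, oh)            # sampling rows in image coords
    xs = np.linspace(x0, x1, ow)            # sampling columns in image coords
    yf = np.clip(np.floor(ys).astype(int), 0, h - 2)
    xf = np.clip(np.floor(xs).astype(int), 0, w - 2)
    dy, dx = ys - yf, xs - xf
    out = np.empty((oh, ow))
    for a in range(oh):
        for b in range(ow):
            i, j, t, u = yf[a], xf[b], dy[a], dx[b]
            # bilinear blend of the four neighbouring pixels
            out[a, b] = ((1 - t) * (1 - u) * img[i, j]
                         + (1 - t) * u * img[i, j + 1]
                         + t * (1 - u) * img[i + 1, j]
                         + t * u * img[i + 1, j + 1])
    return out
```

On a linear ramp image (img[i, j] = i + j) bilinear interpolation is exact, so the sketch can be verified against the analytic values of the sampled grid.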
4. The deep-learning-based instance segmentation method of histopathological cell nuclei according to claim 1, wherein the method of strengthening the segmentation result with the consistency loss in step S6 is as follows:
cropping the segmentation result obtained by the conditional convolution according to the position and size of the target frame generated by target detection, adjusting the cropped picture to a uniform size through RoIAlign, and calculating a consistency loss against the segmentation result obtained by the deformable convolution using the focal loss, so as to strengthen the segmentation result of the conditional convolution branch; the focal loss is specifically:

FL(p_t) = −(1 − p_t)^γ · log(p_t)    (3)

let

p_t = p if y = 1, and p_t = 1 − p otherwise    (4)

where γ is the adjustable focusing factor, y = 1 indicates that the two segmentation results are predicted to be the same, p is the probability predicted by the model that the two segmentation results are the same, and p_t is the probability the model assigns to that prediction; the larger p_t is, the closer the nucleus contour segmented by the conditional convolution is to the real contour.
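The consistency objective of Eqs. (3) and (4) is the standard focal loss applied to the agreement between the two branches. A minimal NumPy sketch (the mean reduction and the probability clipping are assumptions, since the claim only defines the per-element loss):

```python
import numpy as np

def focal_consistency_loss(p, y, gamma=2.0):
    """Focal loss (Eqs. (3)-(4)) over the two branches' agreement.
    p: probability that the two segmentation results are the same (per pixel),
    y: 1 where they are predicted the same, 0 otherwise;
    gamma: the adjustable focusing factor. Returns the mean loss."""
    p = np.clip(p, 1e-7, 1.0 - 1e-7)            # numerical safety (assumed)
    p_t = np.where(y == 1, p, 1.0 - p)          # Eq. (4)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))  # Eq. (3)
```

With gamma = 0 this reduces to ordinary cross-entropy; a larger gamma down-weights pixels where the two branches already agree, so the loss concentrates on the disagreeing contour regions.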
CN202210832755.2A 2022-07-15 2022-07-15 Example segmentation method of histopathological cell nucleus based on deep learning Pending CN115082420A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210832755.2A CN115082420A (en) 2022-07-15 2022-07-15 Example segmentation method of histopathological cell nucleus based on deep learning


Publications (1)

Publication Number Publication Date
CN115082420A true CN115082420A (en) 2022-09-20

Family

ID=83258792



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination