CN113947599B - Cytoplasmic positive immunohistochemical intelligent identification method, system and medium - Google Patents

Cytoplasmic positive immunohistochemical intelligent identification method, system and medium

Info

Publication number
CN113947599B
CN113947599B (application CN202111558334.7A)
Authority
CN
China
Prior art keywords
positive
cell nucleus
ihc
tumor
microscopic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111558334.7A
Other languages
Chinese (zh)
Other versions
CN113947599A (en)
Inventor
蒋谊
黄少冰
韩方剑
余莉
李岚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lansi Ningbo Medical Technology Co ltd
Original Assignee
Hunan Lanqian Biotechnology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Lanqian Biotechnology Co ltd filed Critical Hunan Lanqian Biotechnology Co ltd
Priority to CN202111558334.7A priority Critical patent/CN113947599B/en
Publication of CN113947599A publication Critical patent/CN113947599A/en
Application granted granted Critical
Publication of CN113947599B publication Critical patent/CN113947599B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10056 Microscopic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30024 Cell structures in vitro; Tissue sections in vitro

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a cytoplasmic positive immunohistochemical intelligent identification method, system and medium. The method comprises: identifying cell nuclei from a hematoxylin-eosin (HE) stained microscopic panorama by circle fitting; registering the IHC microscopic panorama with the HE microscopic panorama and then segmenting positive and negative tissue regions; mapping the cell nuclei onto the positive and negative tissue regions and classifying them as tumor-positive cell nuclei, tumor-negative cell nuclei and non-tumor cell nuclei; and counting the total number of cytoplasm-positive tumor nuclei R_p and the total number of negative tumor nuclei R_n and calculating the cytoplasmic IHC tumor positive rate H_rate. The invention enables intelligent recognition of cytoplasmic immunohistochemical positive and negative cells, fully automatic calculation of an accurate immunohistochemical positive rate, relief from the tedious work of manual counting and analysis, and efficient assistance to doctors and researchers in analysing the various cytoplasmic immunohistochemical indexes.

Description

Cytoplasmic positive immunohistochemical intelligent identification method, system and medium
Technical Field
The invention relates to the fields of digital image processing, biomedical engineering and automated microscopic pathology analysis, and in particular to a cytoplasmic positive immunohistochemical intelligent identification method, system and medium.
Background
Immunohistochemistry (IHC) is a technique for the in-situ detection, in tissues and cells, of the distribution of antigens (or antibodies) using labelled specific antibodies (or antigens). It is a mature, reliable, economical and widely used technique, currently applied extensively in clinical pathological diagnosis and scientific research, and it plays an irreplaceable role in judging the tissue origin of tumors, assessing tumor risk, selecting treatments, and observing the expression level and distribution of tissue and cell antibodies (or antigens) in various experiments.
Current immunohistochemical positivity is mainly divided into three types: nucleus-positive, membrane-positive and cytoplasm-positive. In the nucleus-positive and membrane-positive types the stained particles have clear nuclear or granular outlines, so cell boundaries can be distinguished by the naked eye; in the cytoplasm-positive type, however, the positive granules are distributed mainly in the cytoplasm. Depending on how the antigen is distributed within the cytoplasm, the pattern is usually either diffuse or localized: in the diffuse pattern the positive granules are spread uniformly throughout the cytoplasm, while in the localized pattern they appear as small dots or plaques restricted to one site in the cytoplasm, around the nucleus, or on one side of the cytoplasm.
For pathologists it is difficult to distinguish individual cells in cytoplasm-positive immunohistochemically stained tissue sections, so accurately counting cytoplasm-positive cells is very difficult, whether by manual counting or by computer-assisted counting. Existing computer-assisted methods usually estimate positivity from the area of the positive region, which is not an exact count and carries a large error. In addition, positive staining at sites where the antigen is not actually present cannot be regarded as truly positive. All of this makes interpretation of the positive rate for the cytoplasm-positive type challenging. On the other hand, human cells are 10 to 20 microns in size while a single immunohistochemical section can be cut at 2 to 3 microns, so with serial sectioning the same cell can appear in four or five consecutive sections. It follows that if these serial sections are digitally scanned and finely registered, the same cell appearing in different serial sections can be located and represented coincidently. In this way, cells that are cytoplasm-positive on the IHC section can be mapped onto other digital sections, so that positive and negative cells can be counted on a digital section in which individual cell outlines are clear. In recent years, as deep learning has been applied intensively in biomedical pathology, artificial-intelligence-based medicine has developed rapidly; combining traditional image processing with new deep learning techniques has brought great progress, and deep-learning-based detection, segmentation and classification of tissues and cells on microscopic digital slides achieve results that far exceed those of traditional image processing algorithms.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in view of the problems in the prior art, the invention provides a cytoplasmic positive immunohistochemical intelligent identification method, system and medium that can intelligently identify cytoplasmic immunohistochemical positive and negative cells and fully automatically calculate and analyse an accurate immunohistochemical positive rate. While ensuring accurate results for each analysis and calculation index, the invention meets the need for reliable quantitative analysis of cytoplasmic immunohistochemical indexes in clinical pathology and scientific research, relieves medical and research staff of the tedious work of manual counting and analysis, and effectively assists doctors and researchers in analysing the various cytoplasmic immunohistochemical indexes.
In order to solve the technical problems, the invention adopts the technical scheme that:
a cytoplasmic positive immunohistochemical intelligent identification method comprises the following steps:
1) identifying cell nuclei from the HE microscopic panorama; registering the IHC microscopic panoramic image and the HE microscopic panoramic image, and then carrying out positive and negative tissue region segmentation;
2) mapping the cell nucleus to positive and negative tissue areas, and dividing the cell nucleus into a tumor positive cell nucleus, a tumor negative cell nucleus and a non-tumor cell nucleus;
3) counting the total number of tumor-positive nuclei R_p and the total number of tumor-negative nuclei R_n, and calculating the cytoplasmic IHC tumor positive rate H_rate.
Optionally, the identifying nuclei from the HE microscopy panorama in step 1) comprises:
1.1A) extracting a local HE microscopic panorama of a target position area from the HE microscopic panorama;
1.2A) inputting the local HE microscopic panoramic image into a first deep convolutional neural network to obtain a cell nucleus segmentation mask image;
1.3A) carrying out nucleus segmentation post-processing on the nucleus segmentation mask map;
1.4A) performing circle fitting on the nuclear segmentation mask map subjected to the noise reduction treatment to obtain the central position and the radius of a minimum fitting circle as the identified nucleus.
Optionally, the first deep convolutional neural network in step 1.2A) is a UNet deep convolutional neural network.
Optionally, the post-processing of the cell nucleus segmentation performed in step 1.3A) includes morphological erosion and boundary separation processing of the connected cell nuclei using edge detection and watershed algorithms.
Optionally, the step 1) of registering the IHC microscopic panorama and the HE microscopic panorama and then performing positive and negative tissue region segmentation includes:
1.1B) carrying out coarse registration at the tissue level on the top small images of the IHC microscopic panorama and the HE microscopic panorama, both stored in a multi-resolution pyramid file format, and extracting coarse registration parameters; generating a new IHC digital microscopic panorama from the bottom large image of the IHC microscopic panorama based on the coarse registration parameters;
1.2B) extracting a local IHC digital microscopic panorama of the target position area from the new IHC digital microscopic panorama;
1.3B) carrying out cell-level fine registration on the local IHC digital microscopic panoramic image and the local HE microscopic panoramic image, and extracting fine registration parameters;
1.4B) adjusting the local IHC digital microscopic panoramic image based on the fine registration parameters to obtain a fine-adjusted local IHC digital microscopic panoramic image;
1.5B) inputting the local IHC digital microscopic panoramic image after fine tuning into a second deep convolutional neural network to carry out positive and negative tissue region segmentation on the registered IHC microscopic panoramic image, obtaining a positive tissue region segmentation mask image in which positive and negative tissue regions are distinguished in black and white.
Optionally, the second deep convolutional neural network in step 1.5B) is a UNet deep convolutional neural network.
Optionally, step 2) comprises: and mapping the central position and the radius of the identified cell nucleus to a positive tissue region segmentation mask map, judging all points in a fitting circle corresponding to each cell nucleus, if more than half of the points are located in the positive tissue region, judging the cell nucleus to be a tumor positive cell nucleus, if more than half of the points are located in the negative tissue region, judging the cell nucleus to be a tumor negative cell nucleus, and if not, judging the cell nucleus to be a non-tumor cell nucleus.
Optionally, the cytoplasmic IHC tumor positive rate H_rate is calculated according to the functional expression:
H_rate = R_p / (R_p + R_n) ,
where R_p is the total number of tumor-positive nuclei and R_n is the total number of tumor-negative nuclei.
In addition, the invention also provides a cytoplasmic positive immunohistochemical intelligent recognition system which comprises a microprocessor and a memory which are connected with each other, wherein the microprocessor is programmed or configured to execute the steps of the cytoplasmic positive immunohistochemical intelligent recognition method.
Furthermore, the present invention also provides a computer-readable storage medium, in which a computer program is stored, the computer program being used for being executed by a computer device to implement the steps of the intelligent identification method for cytoplasmic positive immunohistochemistry.
Compared with the prior art, the invention has the following advantages: taking the cells in the HE microscopic panorama of the HE-stained section as a reference, the invention locates the cell positions in the cytoplasm-positive immunohistochemistry (IHC) stained section, enables intelligent identification of cytoplasmic immunohistochemical positive and negative cells, and completes fully automatic calculation and analysis of an accurate cytoplasmic immunohistochemical positive rate. While ensuring accurate results for each analysis and calculation index, the invention meets the need for reliable quantitative analysis of cytoplasmic immunohistochemical indexes in clinical pathology and scientific research, relieves medical and research staff of the tedious work of manual counting and analysis, and effectively assists doctors and researchers in analysing the various cytoplasmic immunohistochemical indexes.
Drawings
FIG. 1 is a schematic diagram of a basic flow of a method according to an embodiment of the present invention.
FIG. 2 is a detailed flow chart of the method according to the embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a first deep convolutional neural network in an embodiment of the present invention.
FIG. 4 is a diagram illustrating a multi-resolution pyramid file storage format according to an embodiment of the invention.
Fig. 5 is a schematic diagram of the principle of coarse registration in the embodiment of the present invention.
Fig. 6 is a schematic structural diagram of a second deep convolutional neural network in the embodiment of the present invention.
Fig. 7 is an example of input and output images of the second deep convolutional neural network in the embodiment of the present invention.
Detailed Description
As shown in fig. 1, the intelligent identification method for cytoplasmic positive immunohistochemistry in this embodiment includes:
1) identifying cell nuclei from the HE microscopic panorama; registering the IHC microscopic panoramic image and the HE microscopic panoramic image, and then carrying out positive and negative tissue region segmentation;
2) mapping the cell nucleus to positive and negative tissue areas, and dividing the cell nucleus into a tumor positive cell nucleus, a tumor negative cell nucleus and a non-tumor cell nucleus;
3) counting the total number of tumor-positive nuclei R_p and the total number of tumor-negative nuclei R_n, and calculating the cytoplasmic IHC tumor positive rate H_rate.
The IHC and HE microscopic panoramas in step 1) are images obtained for the same target location area. For example, a HE section and a cytoplasmic immunohistochemical section in the corresponding serial sections are obtained by scanning with a digital section scanner, and a HE microscopic digital panorama and an IHC microscopic digital panorama are respectively obtained.
Referring to fig. 2, the identification of cell nuclei from HE microscopic panorama in step 1) of the present embodiment includes:
1.1A) extracting a local HE microscopic panorama of a target position area from the HE microscopic panorama;
1.2A) inputting the local HE microscopic panoramic image into a first deep convolutional neural network to obtain a cell nucleus segmentation mask image;
The target position region R is delineated by the doctor, and its extent is expressed as:
R = {(x_R, y_R) | x_l ≤ x_R ≤ x_l + R_width, y_t ≤ y_R ≤ y_t + R_height} ,
where x_l and y_t are the top-left coordinates of the target position region R, R_width and R_height are the width and height of the target position region R, and (x_R, y_R) is a coordinate point within the rectangular region. Because the target position region R delineated by the doctor varies in size and may be very large, the whole region R cannot be used directly as input to the first deep convolutional neural network. The whole rectangular region therefore has to be divided further into several tile maps of equal size, which are then input into the first deep convolutional neural network in turn. Accordingly, step 1.2A) of inputting the local HE microscopic panorama into the first deep convolutional neural network to obtain the cell nucleus segmentation mask specifically comprises: dividing the local HE microscopic panorama into several tile maps of equal size, feeding the tiles in turn into the first deep convolutional neural network to obtain per-tile cell nucleus segmentation masks, and stitching them into the cell nucleus segmentation mask of the target position region R, as sketched below.
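As a minimal sketch of this tiling scheme (an illustration, not the patent's implementation), the Python fragment below assumes a hypothetical segment_tile callable that wraps the nucleus segmentation network and returns a binary mask for one tile; the tile size and padding mode are illustrative choices.

```python
import numpy as np

def segment_region_by_tiles(region_rgb, segment_tile, tile=512):
    """Split an RGB region into equal-size tiles, segment each tile with the
    nucleus network, and stitch the per-tile masks back into one mask.

    region_rgb   : (H, W, 3) uint8 array covering the target position region R
    segment_tile : callable taking a (tile, tile, 3) array and returning a
                   (tile, tile) binary mask (hypothetical wrapper around the UNet)
    """
    h, w = region_rgb.shape[:2]
    pad_h, pad_w = (-h) % tile, (-w) % tile          # pad to a multiple of the tile size
    padded = np.pad(region_rgb, ((0, pad_h), (0, pad_w), (0, 0)), mode="edge")
    mask = np.zeros(padded.shape[:2], dtype=np.uint8)
    for y in range(0, padded.shape[0], tile):
        for x in range(0, padded.shape[1], tile):
            mask[y:y + tile, x:x + tile] = segment_tile(padded[y:y + tile, x:x + tile])
    return mask[:h, :w]                               # crop the padding away
```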
1.3A) carrying out nucleus segmentation post-processing on the nucleus segmentation mask map;
1.4A) performing circle fitting on the cell nucleus segmentation mask after noise-reduction processing to obtain the centre position and radius of the minimum fitting circle as the identified cell nucleus. In this embodiment, circle fitting is finally performed on each cell nucleus to obtain the centre coordinate (x_rc, y_rc) and radius r of the minimum fitting circle, which are used to represent the cell nucleus.
The first deep convolutional neural network in step 1.2A) is a UNet deep convolutional neural network. For ease of distinction, this embodiment names it the HE cell nucleus UNet segmentation network; its structure is shown in Fig. 3, where the arrow labelled "2 × 2 MaxPooling" denotes a 2 × 2 maximum pooling layer, the arrow labelled "3 × 3 Conv2d + Batch Normalization + ReLU" denotes a 3 × 3 two-dimensional convolution followed by batch normalization and a ReLU activation, and the arrow labelled "2 × 2 ConvTranspose2d + Batch Normalization + ReLU" denotes a 2 × 2 two-dimensional transposed convolution followed by batch normalization and a ReLU activation; the remaining boxes are convolutional layers, the number above each box giving the convolution output image size and the number below giving the convolution kernel size. The HE cell nucleus UNet segmentation network is trained in advance to establish a mapping from a local HE microscopic panorama to a cell nucleus segmentation mask. During training, a large number of images annotated with HE cell nucleus marks by professional pathologists are used to produce a cell nucleus segmentation and detection data set, split into training, validation and test sets; the network is trained on the training and validation sets, its segmentation performance is evaluated on the test set, and repeated iterative training finally yields the optimized HE cell nucleus UNet segmentation network.
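The repeated building blocks named in Fig. 3 could be sketched in PyTorch as below; this is an assumption about how such a UNet might be assembled, and the channel counts and depth of the patented network are not reproduced here.

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """Two successive (3x3 Conv2d + BatchNorm + ReLU) blocks, as labelled in Fig. 3."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class Down(nn.Module):
    """Encoder step: 2x2 max pooling followed by a DoubleConv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(nn.MaxPool2d(2), DoubleConv(in_ch, out_ch))

    def forward(self, x):
        return self.block(x)

class Up(nn.Module):
    """Decoder step: 2x2 ConvTranspose2d + BatchNorm + ReLU, concatenation with the
    encoder skip feature, then a DoubleConv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )
        self.conv = DoubleConv(out_ch * 2, out_ch)

    def forward(self, x, skip):
        x = self.up(x)
        return self.conv(torch.cat([skip, x], dim=1))
```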
The segmentation result in the cell nucleus segmentation mask produced by the HE cell nucleus UNet segmentation network is not completely accurate, so further processing is needed to reduce erroneous segmentations and to further separate cell nucleus boundaries that are stuck together; this is the cell nucleus segmentation post-processing. In this embodiment, the post-processing performed in step 1.3A) comprises morphological erosion followed by boundary separation of connected cell nuclei using edge detection and a watershed algorithm. After morphological erosion, the connected cell nuclei are separated along their boundaries by edge detection and the watershed algorithm, yielding each individual cell nucleus; finally, circle fitting is performed on each cell nucleus to obtain the centre position and radius of the minimum fitting circle, which are used to represent the cell nucleus.
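A sketch of this kind of post-processing, assuming OpenCV; the erosion kernel, the distance-transform threshold and the marker handling are illustrative choices rather than values taken from the patent.

```python
import cv2
import numpy as np

def nuclei_fitting_circles(mask):
    """mask: binary uint8 nucleus mask (255 = nucleus pixel).
    Returns a list of (cx, cy, r) minimum enclosing circles, one per separated nucleus."""
    mask = cv2.morphologyEx(mask, cv2.MORPH_ERODE, np.ones((3, 3), np.uint8))
    # markers for the watershed come from the distance transform, which splits touching nuclei
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
    sure_fg = sure_fg.astype(np.uint8)
    n_labels, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1                              # background becomes 1, nuclei 2..n_labels
    markers[cv2.subtract(mask, sure_fg) == 255] = 0    # unknown region to be flooded
    markers = cv2.watershed(cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR), markers)
    circles = []
    for label in range(2, n_labels + 1):
        pts = cv2.findNonZero((markers == label).astype(np.uint8))
        if pts is not None:
            (cx, cy), r = cv2.minEnclosingCircle(pts)
            circles.append((cx, cy, r))
    return circles
```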
Referring to fig. 2, the step 1) of performing segmentation of positive and negative tissue regions after registering the IHC microscopic panorama and the HE microscopic panorama includes:
1.1B) carrying out coarse registration at the tissue level on the top small images of the IHC microscopic panorama and the HE microscopic panorama, both stored in the multi-resolution pyramid file format, extracting coarse registration parameters, and generating a new IHC digital microscopic panorama from the bottom large image of the IHC microscopic panorama based on the coarse registration parameters.
The multi-resolution pyramid file storage format is shown in Fig. 4: it contains a set of images of the same slide at several resolutions, from small to large. As shown in Fig. 5, tissue-level coarse registration proceeds as follows: first, the top small image (the lowest-resolution image) of the IHC microscopic panorama and of the HE microscopic panorama, both stored in the multi-resolution pyramid format, are registered using the SIFT feature matching algorithm (or another feature matching algorithm as required), and the coarse registration parameters (rotation angle, similarity scale, translation and so on) are extracted; then the bottom large image (the highest-resolution image) of the IHC microscopic panorama is adjusted according to the coarse registration parameters to obtain a registered bottom image, and new images at the various resolutions, from small to large, are generated from the registered bottom image to give a new IHC digital microscopic panorama. This coarse registration scheme registers the IHC and HE microscopic panoramas quickly and reduces the amount of computation. The coarse registration parameters comprise the translation (P_x, P_y), the rotation angle α (in radians) and the scale S, where P_x is the horizontal translation and P_y the vertical translation. Using the obtained translation (P_x, P_y), rotation angle α and scale S as registration parameters to regenerate the new IHC digital microscopic panorama, its coordinates relate to those of the original IHC as follows:
x_new = S × cos α × x_ori + S × sin α × y_ori + P_x ,
y_new = -S × sin α × x_ori + S × cos α × y_ori + P_y ,
F(x_new, y_new) = I(x_ori, y_ori) ,
where x_ori and y_ori are the pixel coordinates of the original IHC microscopic panorama, x_new and y_new are the pixel coordinates of the newly generated IHC microscopic panorama, I(x_ori, y_ori) is the pixel value of the original IHC microscopic panorama at coordinates (x_ori, y_ori), and F(x_new, y_new) is the pixel value of the newly generated IHC microscopic panorama at coordinates (x_new, y_new).
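The coarse registration step could be sketched with OpenCV as follows; the conversion to grayscale, the Lowe ratio test and the use of RANSAC are conventional assumptions rather than details stated in the patent.

```python
import cv2
import numpy as np

def coarse_register(ihc_thumb, he_thumb):
    """Estimate a similarity transform (uniform scale, rotation, translation) that maps
    the IHC thumbnail onto the HE thumbnail using SIFT feature matching.
    Returns a 2x3 matrix usable with cv2.warpAffine."""
    def to_gray(im):
        return im if im.ndim == 2 else cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(to_gray(ihc_thumb), None)
    kp2, des2 = sift.detectAndCompute(to_gray(he_thumb), None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe's ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # partial affine = rotation + uniform scale + translation, estimated robustly
    matrix, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return matrix
```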
1.2B) extracting a local IHC digital microscopic panorama of the target position area from the new IHC digital microscopic panorama.
1.3B) carrying out cell-level fine registration on the local IHC digital microscopic panoramic image and the local HE microscopic panoramic image, and extracting fine registration parameters.
In this embodiment the SIFT feature matching algorithm is also used for cell-level fine registration; other feature matching algorithms may be used as required. The coarse registration is the basis of the fine registration and is used to extract the target position region for fine registration reliably and quickly, so that the combined coarse-to-fine two-stage registration achieves both accuracy and efficiency. The fine registration parameters extracted during cell-level fine registration again include a translation, a rotation angle and a scale. Because this registration refines the earlier coarse registration, only a fine adjustment of the already roughly registered image is needed and the corresponding parameters are small; after fine-tuning, cell-level registration within the selected region is complete, and the positions of the cell nuclei detected on the HE image can be mapped onto the corresponding region of the newly generated IHC image.
1.4B) adjusting the local IHC digital microscopic panorama based on the fine registration parameters to obtain a fine-tuned local IHC digital microscopic panorama. The fine registration parameters extracted in this embodiment comprise the translation (P1_x, P1_y), the rotation angle β and the scale S1. Because this registration refines the earlier coarse registration, the adjustment is a fine-tuning of the already roughly registered image and the corresponding parameters are small; after fine-tuning, cell-level registration within the selected region is complete, and the positions of the cell nuclei detected on the HE image can be mapped onto the corresponding region of the newly generated IHC image. On the newly generated IHC image, the image over the corresponding target position region R is first extracted and divided into equal-size tile maps; the tiles are fed in turn into the second deep convolutional neural network, the per-tile output masks are collected, and the tile masks are stitched into a mask in which IHC positive and negative tissue regions have been detected and segmented. Every pixel of this mask takes the value 0 or 255: a value of 0 means the pixel lies in a negative tissue region, and a value of 255 means it lies in a positive tissue region. The coordinates of this tissue-region segmentation result are then mapped, according to the registration parameters, one-to-one onto the coordinates of the actual cell positions on the HE image:
x_R_new = S1 × cos β × x_R + S1 × sin β × y_R + P1_x ,
y_R_new = -S1 × sin β × x_R + S1 × cos β × y_R + P1_y ,
F'_R(x_R_new, y_R_new) = F_R(x_R, y_R) ,
where x_R_new and y_R_new are the pixel coordinates in the fine-tuned target position region R', F_R(x_R, y_R) is the pixel value at coordinates (x_R, y_R) of the mask obtained by detecting and segmenting positive and negative tissue regions on the target position region R of the newly generated IHC image, and F'_R(x_R_new, y_R_new) is the pixel value at coordinates (x_R_new, y_R_new) on the fine-tuned target position region R'.
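A minimal sketch of the coordinate mapping given by the equations above, assuming the fine registration parameters S1, β, P1_x and P1_y have already been estimated:

```python
import numpy as np

def map_to_fine_tuned_region(x, y, scale, angle, tx, ty):
    """Apply the similarity transform of the equations above:
        x' =  S1*cos(b)*x + S1*sin(b)*y + P1_x
        y' = -S1*sin(b)*x + S1*cos(b)*y + P1_y
    x and y may be scalars or NumPy arrays of coordinates."""
    c, s = np.cos(angle), np.sin(angle)
    x_new = scale * c * x + scale * s * y + tx
    y_new = -scale * s * x + scale * c * y + ty
    return x_new, y_new
```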
1.5B) inputting the local IHC digital microscopic panoramic image after fine tuning into a second deep convolutional neural network to carry out positive and negative tissue region segmentation on the registered IHC microscopic panoramic image, obtaining a positive tissue region segmentation mask image in which positive and negative tissue regions are distinguished in black and white.
In this embodiment, the second deep convolutional neural network in step 1.5B) is a UNet deep convolutional neural network. For ease of distinction, this embodiment names it the IHC positive region UNet segmentation network; its structure is shown in Fig. 6, where the arrow labelled "2 × 2 MaxPooling" denotes a 2 × 2 maximum pooling layer, the arrow labelled "3 × 3 Conv2d + Batch Normalization + ReLU" denotes a 3 × 3 two-dimensional convolution followed by batch normalization and a ReLU activation, and the arrow labelled "2 × 2 ConvTranspose2d + Batch Normalization + ReLU" denotes a 2 × 2 two-dimensional transposed convolution followed by batch normalization and a ReLU activation; the remaining boxes are convolutional layers, the number above each box giving the convolution output image size and the number below giving the convolution kernel size. Because the fine-tuned local IHC digital microscopic panorama requires segmentation and detection of positive and negative regions at the tissue level, a large number of annotated images containing IHC positive tissue and IHC negative tissue are used to produce a data set, split into training, validation and test sets; the deep convolutional UNet is then trained on the training and validation sets and its segmentation performance is evaluated on the test set, finally yielding another optimized UNet, called the IHC positive region UNet segmentation network.
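A generic training loop for such a binary segmentation UNet could look like the sketch below; the Adam optimizer, BCE-with-logits loss and epoch count are assumptions for illustration, not values specified by the patent.

```python
import torch
import torch.nn as nn

def train_segmentation_net(model, train_loader, val_loader, epochs=50, lr=1e-3):
    """Train a binary segmentation network (positive vs. negative tissue).
    Each loader batch is assumed to yield (image, mask) tensors, mask values in {0, 1}."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCEWithLogitsLoss()
    for epoch in range(epochs):
        model.train()
        for images, masks in train_loader:
            images, masks = images.to(device), masks.float().to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()
        # simple validation pass; in practice the best checkpoint would be kept
        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(x.to(device)), y.float().to(device)).item()
                           for x, y in val_loader) / max(len(val_loader), 1)
        print(f"epoch {epoch + 1}: validation loss {val_loss:.4f}")
    return model
```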
Fig. 7(a) shows a fine-tuned local IHC digital microscopic panorama input to the IHC positive region UNet segmentation network, and Fig. 7(b) shows the positive tissue region segmentation mask output by the network; as can be seen from Fig. 7, the mask separates positive and negative tissue regions in black and white.
Because the cells on the newly generated IHC image are put into rough one-to-one correspondence with the cells on the HE image by the coarse registration, a finer registration then has to be performed, on top of that coarse registration, for the specific region delineated by the doctor in which positive and negative cells are to be detected and counted. In this embodiment, step 2) includes: mapping the centre position and radius of each identified cell nucleus onto the positive tissue region segmentation mask and examining all points inside the fitting circle of each cell nucleus; if more than half of the points lie in the positive tissue region, the cell nucleus is judged to be a tumor-positive cell nucleus; if more than half of the points lie in the negative tissue region, it is judged to be a tumor-negative cell nucleus; otherwise it is a non-tumor cell nucleus.
When the centre positions and radii of the identified nuclei are mapped onto the positive tissue region segmentation mask in step 2), all points inside the fitting circle of radius r centred at each centre position (x_rc, y_rc) are examined; the set of points inside the fitting circle is written as
Rr = {(x_Rr, y_Rr) | (x_Rr - x_rc)² + (y_Rr - y_rc)² ≤ r²} ,
where every point of Rr corresponds to a point on the fine-tuned target position region R', and each point is tested for whether it lies in the positive region or the negative region, namely:
x_Rr_new = S1 × cos β × x_Rr + S1 × sin β × y_Rr + P1_x ,
y_Rr_new = -S1 × sin β × x_Rr + S1 × cos β × y_Rr + P1_y .
If the grey value of the corresponding point F'_R(x_Rr_new, y_Rr_new) on the fine-tuned target position region R' is 255, the point lies in the positive region; otherwise it lies in the negative region. If more than half of the points lie in the positive region, the cell nucleus is judged to be a tumor-positive cell nucleus; if more than half of the points lie in the negative region, it is judged to be a tumor-negative cell nucleus; otherwise it is a non-tumor cell nucleus.
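A sketch of this majority-vote rule, assuming NumPy and a mask (255 = positive tissue, 0 = negative tissue) already mapped into the same coordinate frame as the nucleus fitting circle; the handling of points falling outside the mask is an illustrative choice.

```python
import numpy as np

def classify_nucleus(mask, cx, cy, r):
    """Classify one nucleus from all mask pixels inside its fitting circle.
    Returns 'positive', 'negative' or 'non_tumor' following the majority rule."""
    h, w = mask.shape
    # integer grid over the bounding box of the circle (may extend past the mask)
    ys, xs = np.mgrid[int(cy - r):int(cy + r) + 1, int(cx - r):int(cx + r) + 1]
    inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2
    total = int(inside.sum())
    valid = inside & (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    values = mask[ys[valid], xs[valid]]
    if np.count_nonzero(values == 255) > total / 2:
        return "positive"
    if np.count_nonzero(values == 0) > total / 2:
        return "negative"
    return "non_tumor"   # e.g. the circle lies mostly outside the segmented region
```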
In this embodiment, the cytoplasmic IHC tumor positive rate H_rate is calculated according to the functional expression:
H_rate = R_p / (R_p + R_n) ,
where R_p is the total number of tumor-positive nuclei and R_n is the total number of tumor-negative nuclei.
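As a minimal illustration (with a guard for the degenerate case of zero counted tumor nuclei, which the patent does not discuss):

```python
def positive_rate(r_p, r_n):
    """H_rate = R_p / (R_p + R_n); returns 0.0 if no tumor nuclei were counted."""
    total = r_p + r_n
    return r_p / total if total else 0.0
```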
In summary, the method of this embodiment uses deep convolutional neural network cell nucleus segmentation and detection, deep convolutional neural network positive-region segmentation and detection, a two-pass SIFT fast feature matching registration technique, image morphology and image segmentation, circle fitting and related techniques to put the cell positions of the HE microscopic panorama in one-to-one correspondence with those of the cytoplasmic IHC microscopic panorama, maps the nuclear positions of the cytoplasmic IHC cells onto the HE image for statistical calculation, and completes the calculation and analysis of the cytoplasmic IHC tumor cell positive rate fully automatically and quickly. This removes the inaccuracy caused by the laborious observation and manual counting a doctor would otherwise need to estimate the cytoplasmic IHC tumor cell positive rate, and effectively assists doctors with positive-rate calculation and analysis in this scenario. The method uses two deep convolutional segmentation networks, a cell nucleus segmentation network for the HE microscopic panorama and a positive-region segmentation network for the IHC image, combined with a two-pass SIFT registration algorithm to map cytoplasmic IHC positive and negative nuclei onto the HE image: cell quantities that cannot be counted on the IHC image, where cell boundaries are blurred, are mapped onto the HE image, where they can be distinguished and counted, thereby completing the identification of cytoplasmic IHC positive and negative cells and the calculation of the tumor positive rate. After HE cell nucleus segmentation and detection, edge detection, watershed segmentation and final circle fitting are used to further suppress interference from debris, separate connected nuclei and achieve accurate segmentation of the HE cell nuclei. Taking the cells in the HE microscopic panorama of the HE-stained section as a reference, the method locates the cell positions in the cytoplasm-positive immunohistochemistry (IHC) stained section. While ensuring accurate results for each analysis and calculation index, the method meets the need for reliable quantitative analysis of cytoplasmic immunohistochemical indexes in clinical pathology and scientific research, relieves medical and research staff of the tedious work of manual counting and analysis, and efficiently assists doctors and researchers in analysing the cytoplasmic immunohistochemical indexes.
In addition, the present embodiment also provides a intelligent recognition system for cytoplasmic positive immunohistochemistry, which includes a microprocessor and a memory connected to each other, wherein the microprocessor is programmed or configured to execute the steps of the intelligent recognition method for cytoplasmic positive immunohistochemistry.
In addition, the present embodiment also provides a computer-readable storage medium, in which a computer program is stored, the computer program being used for being executed by a computer device to implement the steps of the aforementioned intelligent identification method for cytoplasmic positive immunohistochemistry.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.

Claims (9)

1. A cytoplasmic positive immunohistochemical intelligent identification method is characterized by comprising the following steps:
1) identifying cell nuclei from the HE microscopic panorama; registering the IHC microscopic panoramic image and the HE microscopic panoramic image, and then carrying out positive and negative tissue region segmentation;
2) mapping the cell nucleus to positive and negative tissue areas, and dividing the cell nucleus into a tumor positive cell nucleus, a tumor negative cell nucleus and a non-tumor cell nucleus;
3) counting the total number of tumor-positive nuclei R_p and the total number of tumor-negative nuclei R_n, and calculating the cytoplasmic IHC tumor positive rate H_rate;
In the step 1), after the IHC microscopic panoramic image and the HE microscopic panoramic image are registered, the segmentation of positive and negative tissue areas comprises the following steps:
1.1B) carrying out coarse registration at the tissue level on the top small images of the IHC microscopic panorama and the HE microscopic panorama, both stored in a multi-resolution pyramid file format, and extracting coarse registration parameters; generating a new IHC digital microscopic panorama from the bottom large image of the IHC microscopic panorama based on the coarse registration parameters;
1.2B) extracting a local IHC digital microscopic panorama of the target position area from the new IHC digital microscopic panorama;
1.3B) carrying out cell-level fine registration on the local IHC digital microscopic panoramic image and the local HE microscopic panoramic image, and extracting fine registration parameters;
1.4B) adjusting the local IHC digital microscopic panoramic image based on the fine registration parameters to obtain a fine-adjusted local IHC digital microscopic panoramic image;
1.5B) inputting the local IHC digital microscopic panoramic image after fine tuning into a second deep convolutional neural network to carry out positive and negative tissue region segmentation on the registered IHC microscopic panoramic image, obtaining a positive tissue region segmentation mask image in which positive and negative tissue regions are distinguished in black and white.
2. The intelligent recognition method for cytoplasmic positive immunohistochemistry according to claim 1, wherein the step 1) of recognizing the cell nuclei from the HE microscopic panorama comprises:
1.1A) extracting a local HE microscopic panorama of a target position area from the HE microscopic panorama;
1.2A) inputting the local HE microscopic panoramic image into a first deep convolutional neural network to obtain a cell nucleus segmentation mask image;
1.3A) carrying out nucleus segmentation post-processing on the nucleus segmentation mask map;
1.4A) performing circle fitting on the nuclear segmentation mask map subjected to the noise reduction treatment to obtain the central position and the radius of a minimum fitting circle as the identified nucleus.
3. The intelligent recognition method for cytoplasmic positive immunohistochemistry according to claim 2, characterized in that the first deep convolutional neural network in step 1.2A) is UNet deep convolutional neural network.
4. The intelligent recognition method for cytoplasmic positive immunohistochemistry according to claim 2, wherein the post-processing of the cell nucleus segmentation performed in step 1.3A) includes morphological erosion and boundary separation processing of the connected cell nuclei using edge detection and watershed algorithm.
5. The intelligent recognition method for cytoplasmic positive immunohistochemistry according to claim 1, wherein the second deep convolutional neural network in step 1.5B) is UNet deep convolutional neural network.
6. The intelligent recognition method for the cytoplasmic positive immunohistochemistry according to claim 2, wherein the step 2) comprises: and mapping the central position and the radius of the identified cell nucleus to a positive tissue region segmentation mask map, judging all points in a fitting circle corresponding to each cell nucleus, if more than half of the points are located in the positive tissue region, judging the cell nucleus to be a tumor positive cell nucleus, if more than half of the points are located in the negative tissue region, judging the cell nucleus to be a tumor negative cell nucleus, and if not, judging the cell nucleus to be a non-tumor cell nucleus.
7. The intelligent recognition method for cytoplasmic positive immunohistochemistry according to claim 1, wherein the cytoplasmic IHC tumor positive rate H_rate is calculated according to the functional expression:
H_rate = R_p / (R_p + R_n) ,
where R_p is the total number of tumor-positive nuclei and R_n is the total number of tumor-negative nuclei.
8. A cytoplasmic positive immunohistochemical intelligent identification system comprising a microprocessor and a memory connected to each other, wherein the microprocessor is programmed or configured to perform the steps of the cytoplasmic positive immunohistochemical intelligent identification method according to any one of claims 1 to 7.
9. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is used for being executed by a computer device to implement the steps of the intelligent identification method for cytoplasmic positive immunohistochemistry according to any one of claims 1 to 7.
CN202111558334.7A 2021-12-20 2021-12-20 Cytoplasmic positive immunohistochemical intelligent identification method, system and medium Active CN113947599B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111558334.7A CN113947599B (en) 2021-12-20 2021-12-20 Cytoplasmic positive immunohistochemical intelligent identification method, system and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111558334.7A CN113947599B (en) 2021-12-20 2021-12-20 Cytoplasmic positive immunohistochemical intelligent identification method, system and medium

Publications (2)

Publication Number Publication Date
CN113947599A CN113947599A (en) 2022-01-18
CN113947599B true CN113947599B (en) 2022-03-08

Family

ID=79339357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111558334.7A Active CN113947599B (en) 2021-12-20 2021-12-20 Cytoplasmic positive immunohistochemical intelligent identification method, system and medium

Country Status (1)

Country Link
CN (1) CN113947599B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116973571A (en) * 2023-07-28 2023-10-31 上海市闵行区中心医院 Gastrointestinal cancer cell intelligent detection method, system, medium and equipment based on deep learning

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015189264A1 (en) * 2014-06-10 2015-12-17 Ventana Medical Systems, Inc. Predicting breast cancer recurrence directly from image features computed from digitized immunohistopathology tissue slides
US10957041B2 (en) * 2018-05-14 2021-03-23 Tempus Labs, Inc. Determining biomarkers from histopathology slide images
CN112508860B (en) * 2020-11-19 2022-09-30 湖南兰茜生物科技有限公司 Artificial intelligence interpretation method and system for positive check of immunohistochemical image
CN113256618A (en) * 2021-06-23 2021-08-13 重庆点检生物科技有限公司 Tumor identification system and method based on IHC staining

Also Published As

Publication number Publication date
CN113947599A (en) 2022-01-18

Similar Documents

Publication Publication Date Title
US10127675B2 (en) Edge-based local adaptive thresholding system and methods for foreground detection
Parvin et al. Iterative voting for inference of structural saliency and characterization of subcellular events
CN109447998B (en) Automatic segmentation method based on PCANet deep learning model
CN110736747B (en) Method and system for positioning under cell liquid-based smear mirror
CN109840913B (en) Method and system for segmenting tumor in mammary X-ray image
CN108648182B (en) Breast cancer nuclear magnetic resonance image tumor region segmentation method based on molecular subtype
CN106485695A (en) Medical image Graph Cut dividing method based on statistical shape model
CN106611416B (en) Method and device for segmenting lung in medical image
Bartesaghi et al. An energy-based three-dimensional segmentation approach for the quantitative interpretation of electron tomograms
CN107154047A (en) Multi-mode brain tumor image blend dividing method and device
CN113223004A (en) Liver image segmentation method based on deep learning
CN112308840A (en) Automatic segmentation method for oral cancer epithelial tissue region of pathological image
CN113947599B (en) Cytoplasmic positive immunohistochemical intelligent identification method, system and medium
CN112330701A (en) Tissue pathology image cell nucleus segmentation method and system based on polar coordinate representation
CN113139981A (en) DCE-MRI (direct current imaging-magnetic resonance imaging) breast tumor image segmentation method based on deep neural network
CN117670895B (en) Immunohistochemical pathological image cell segmentation method based on section re-staining technology
CN115797293A (en) Brain image reference point automatic labeling method based on magnetic resonance imaging
CN112270684B (en) Microscopic image immunohistochemical virtual multiple labeling and analyzing method and system
CN112508860B (en) Artificial intelligence interpretation method and system for positive check of immunohistochemical image
CN116664590A (en) Automatic segmentation method and device based on dynamic contrast enhancement magnetic resonance image
CN115690182A (en) Image registration method for assisting heterogeneous diagnosis of multi-staining pathological section
CN111062909A (en) Method and equipment for judging benign and malignant breast tumor
CN110120027B (en) CT slice image enhancement method and device for machine learning system data
CN113870194B (en) Breast tumor ultrasonic image processing device with fusion of deep layer characteristics and shallow layer LBP characteristics
Fuchs et al. Weakly supervised cell nuclei detection and segmentation on tissue microarrays of renal clear cell carcinoma

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240805

Address after: Room 605, 606, No. 581 Zhuangyu South Road, Zhuangshi Street, Zhenhai District, Ningbo City, Zhejiang Province 315201

Patentee after: Lansi (Ningbo) Medical Technology Co.,Ltd.

Country or region after: China

Address before: Room g0079, headquarters building, Changsha Zhongdian Software Park Co., Ltd., No. 39, Jianshan Road, Changsha hi tech Development Zone, 410205, Hunan Province

Patentee before: HUNAN LANQIAN BIOTECHNOLOGY Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right