CN111179214A - Pathological section tissue area identification system based on image semantic segmentation - Google Patents

Pathological section tissue area identification system based on image semantic segmentation

Info

Publication number
CN111179214A
CN111179214A (application CN201911204394.1A)
Authority
CN
China
Prior art keywords
semantic segmentation
identification system
image
convolution
separable convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911204394.1A
Other languages
Chinese (zh)
Inventor
杨永全
郑众喜
袁勇
雷雪梅
蔡小玲
王杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Youna Medical Equipment Co ltd
Original Assignee
Suzhou Youna Medical Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Youna Medical Equipment Co ltd filed Critical Suzhou Youna Medical Equipment Co ltd
Priority to CN201911204394.1A
Publication of CN111179214A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Abstract

The invention discloses a pathological section tissue region identification system based on image semantic segmentation, which uses big data and deep learning technology to achieve accurate identification of tissue regions. The scheme comprises four parts: (I) data collection and labeling; (II) training of the image semantic segmentation network; (III) prediction with the image semantic segmentation network; and (IV) post-processing to output the tissue region identification. While improving the efficiency of the semantic segmentation network, the invention still converges well on the tissue region segmentation problem, thereby segmenting tissue regions effectively.

Description

Pathological section tissue area identification system based on image semantic segmentation
Technical Field
The invention relates to the technical field of machine learning, in particular to a pathological section tissue area identification system based on image semantic segmentation.
Background
Pathological section tissue region identification is a key enabling technology in digital pathology: its role is to discard the useless background regions of a pathological section and identify the valid tissue regions for digitization. A digital pathology system consists mainly of a digital slide scanner and data processing software. First, a digital microscope or magnification system scans and images pathological sections one by one under a low-power objective; a motorized microscopy stage scans automatically along the X and Y axes of the section and focuses automatically along the Z axis. Then, on top of the effective optical magnification, scanning control software acquires high-resolution digital images in a program-controlled scanning mode, and image compression and storage software stitches the images seamlessly into a whole slide image (WSI). The data are stored on a suitable medium to build a digital pathological section library. A corresponding digital slide browsing system can then magnify or shrink the visual data to any scale and browse and analyze it in any direction, just as with a real optical microscope.
Conventional image segmentation methods, such as threshold-based, region-based, and edge-based methods and methods based on specific theories, can be applied to the problem of pathological section tissue region identification. However, because the quality of pathological sections is uneven and their content and morphology are intricate, accurately identifying the tissue regions within a section remains very difficult.
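For comparison, a conventional threshold-based baseline of the kind listed above can be sketched in a few lines. This is a minimal illustration only, not part of the invention; the use of the saturation channel and all names are our own assumptions:

```python
import cv2
import numpy as np

def threshold_tissue_mask(macro_bgr: np.ndarray) -> np.ndarray:
    """Binary tissue mask via Otsu thresholding on the saturation channel
    (tissue is typically more saturated than the glass background)."""
    hsv = cv2.cvtColor(macro_bgr, cv2.COLOR_BGR2HSV)
    _, mask = cv2.threshold(hsv[:, :, 1], 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```

As noted above, such baselines struggle with uneven slide quality and intricate morphology, which motivates the learned segmentation approach of the invention.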
Disclosure of Invention
The invention aims to provide a pathological section tissue region recognition system based on image semantic segmentation that converges well on the tissue region segmentation problem while improving the efficiency of the semantic segmentation network, thereby effectively achieving tissue region segmentation.
In order to achieve the purpose, the invention is realized by adopting the following technical scheme:
the invention discloses a pathological section tissue area identification system based on image semantic segmentation, which comprises the following steps:
S100, inputting a macro image;
S200, labeling the tissue region in the macro image to obtain an input segmentation map P_in;
S300, establishing a semantic segmentation network, wherein the semantic segmentation network comprises the following steps:
a_1: perform separable convolution K times on P_in to obtain P_1;
a_2: perform a maximum pooling operation on P_1, then separable convolution K times, to obtain P_2;
……
a_N: perform a maximum pooling operation on P_{N-1}, then separable convolution K times, to obtain P_N;
a_{N+1}: perform a maximum pooling operation on P_N, then separable convolution K times, to obtain P_{N+1};
a_{N+2}: perform a deconvolution operation on P_{N+1} to obtain Q_1; crop P_N to obtain P'_N, where P'_N has the same size as Q_1; concatenate P'_N and Q_1 to obtain Q'_1 (a code sketch of this crop-and-concatenate operation follows the preferred features below);
a_{N+3}: perform a deconvolution operation on Q'_1 to obtain Q_2; crop P_{N-1} to obtain P'_{N-1}, where P'_{N-1} has the same size as Q_2; concatenate P'_{N-1} and Q_2 to obtain Q'_2;
……
a_{2N+1}: perform a deconvolution operation on Q'_{N-1} to obtain Q_N; crop P_1 to obtain P'_1, where P'_1 has the same size as Q_N; concatenate P'_1 and Q_N to obtain Q'_N;
a_{2N+2}: perform a deconvolution operation on Q'_N to obtain the output segmentation map P_out;
S400, training a semantic segmentation network;
S500, predicting with the semantic segmentation network to obtain a prediction result;
S600, identifying the tissue region based on the prediction result using morphological image processing techniques.
Preferably, in step a_{N+1+j}: after concatenating P'_{N-j+1} and Q_j to obtain Q'_j, perform M separable convolution operations on Q'_j so that its number of channels equals that of Q_j, where N ≥ j ≥ 1.
Preferably, the step size of the maximum pooling operation is 2 × 2 and the convolution kernel size of the separable convolution is 3 × 3.
Preferably, K is 2 and M is 2.
Preferably, N is 4.
Preferably, the segmentation map P_in is a single-channel image.
Preferably, the morphological image processing technique is dilation or connected region finding.
Preferably, the label in step S200 is a polygon label.
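As referenced in step a_{N+2} above, the crop-and-concatenate operation of steps a_{N+2} through a_{2N+1} can be sketched as follows. This is a minimal illustration in PyTorch (the patent does not prescribe a framework); the function name and tensor layout are assumptions:

```python
import torch

def crop_and_concat(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Center-crop encoder map p (B, C, H, W) to the spatial size of
    decoder map q, then concatenate along the channel axis, as in
    steps a_{N+2} through a_{2N+1}."""
    dh = p.size(2) - q.size(2)
    dw = p.size(3) - q.size(3)
    p_cropped = p[:, :, dh // 2 : dh // 2 + q.size(2),
                  dw // 2 : dw // 2 + q.size(3)]
    return torch.cat([p_cropped, q], dim=1)
```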
The invention has the beneficial effects that:
1. By formulating pathological section tissue region identification as an image semantic segmentation problem, the invention achieves accurate tissue region identification; any image semantic segmentation method is applicable to the scheme.
2. The invention introduces separable convolution into the image semantic segmentation network, constructing a more efficient M-UNet.
3. The method applies M-UNet to pathological section tissue region segmentation and achieves accurate tissue region identification.
4. The invention can supply the digital scanning system with only the useful regions of a pathological section for digitization, improving the efficiency of the scanning system and reducing digital storage space.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 shows the semantic segmentation network structure;
FIG. 3 is a plot of Precision versus training epochs;
FIG. 4 is a plot of Recall versus training epochs;
FIG. 5 is a plot of F1 versus training epochs;
FIG. 6 is a plot of the average overlap ratio (IoU) versus training epochs.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings.
As shown in fig. 1 and 2, the present invention comprises the following steps:
I. Data collection and labeling
The scheme collects about 100,000 slide macro images, and the tissue region in each image is labeled with a polygon. Some example images and corresponding annotations are shown in the data collection and labeling part of FIG. 1. About 80% of the collected macro images (about 80,000) serve as the training data set, and about 20% (about 20,000) serve as the test data set.
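A minimal sketch of the roughly 80/20 split described above; the shuffling, seed, and names are assumptions, as the patent specifies only the proportions:

```python
import random

def split_dataset(image_paths, train_ratio=0.8, seed=42):
    """Shuffle deterministically, then split ~80% train / ~20% test."""
    paths = sorted(image_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_ratio)
    return paths[:cut], paths[cut:]   # ~80,000 / ~20,000 for ~100,000 images
```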
II. Training the image semantic segmentation network
General image semantic segmentation networks, such as the DeepLab series, UNet, SegNet, and FCN (Fully Convolutional Networks), are all suitable for the present scheme. To improve model efficiency, the technical scheme combines a lightweight convolution structure with the semantic segmentation network, yielding a more efficient M-UNet; the technical details are shown in FIG. 2. Unlike the original UNet structure, M-UNet introduces separable convolution into UNet to replace its conventional convolution operations. A separable convolution consists of two parts: a channel-by-channel (depthwise) convolution and a point-by-point (pointwise) convolution.
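A minimal PyTorch sketch of such a separable convolution block follows. The BatchNorm and ReLU layers are our own assumptions, since the patent does not specify normalization or activation; the unpadded 3 × 3 kernels match the size reductions described below:

```python
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Depthwise (channel-by-channel) 3x3 convolution followed by a
    pointwise (1x1) convolution, replacing a conventional convolution."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   groups=in_ch, bias=False)   # one filter per channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)  # mixes channels
        self.bn = nn.BatchNorm2d(out_ch)    # assumed, not specified by the patent
        self.relu = nn.ReLU(inplace=True)   # assumed, not specified by the patent

    def forward(self, x):
        return self.relu(self.bn(self.pointwise(self.depthwise(x))))
```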
Let D_F be the spatial size (length and width) of the feature map, D_K the spatial size (length and width) of the convolution kernel, M the number of input channels, and N the number of output channels. The computational cost of a conventional convolution operation is

D_K · D_K · M · N · D_F · D_F

The cost of the channel-by-channel (depthwise) convolution is

D_K · D_K · M · D_F · D_F

The cost of the point-by-point (pointwise) convolution is

M · N · D_F · D_F

The cost of a separable convolution is the sum of the two:

D_K · D_K · M · D_F · D_F + M · N · D_F · D_F

The ratio of the separable convolution cost to the conventional convolution cost is therefore

(D_K · D_K · M · D_F · D_F + M · N · D_F · D_F) / (D_K · D_K · M · N · D_F · D_F) = 1/N + 1/(D_K · D_K)

Since N is generally set fairly large (e.g., 256 or 512), the 1/N term can be neglected, so the cost of a separable convolution is about

1/(D_K · D_K)

of that of a conventional convolution. In this technical scheme, the conventional convolutions in the UNet network have D_K = 3, so the improved M-UNet is nearly 9 times more efficient.
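The derived ratio can be checked numerically in a few lines (illustrative only):

```python
def cost_ratio(d_k: int, n_out: int) -> float:
    """(separable cost) / (conventional cost) = 1/N + 1/(D_K * D_K)."""
    return 1.0 / n_out + 1.0 / (d_k * d_k)

print(cost_ratio(3, 512))   # ~0.113, i.e. roughly a 9x cost reduction
```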
As shown in fig. 2, the flow of data in the M-UNet network is described as follows:
(1) A single-channel image of size 572 × 572 is input; after two separable convolutions with 3 × 3 kernels, a 64-channel feature map of size 568 × 568 is obtained.
(2) Max pooling with a 2 × 2 window reduces the feature map from step (1) to a 64-channel feature map of size 284 × 284; two separable convolutions with 3 × 3 kernels then give a 128-channel feature map of size 280 × 280.
(3) Max pooling with a 2 × 2 window reduces the feature map from step (2) to a 128-channel feature map of size 140 × 140; two separable convolutions with 3 × 3 kernels then give a 256-channel feature map of size 136 × 136.
(4) Max pooling with a 2 × 2 window reduces the feature map from step (3) to a 256-channel feature map of size 68 × 68; two separable convolutions with 3 × 3 kernels then give a 512-channel feature map of size 64 × 64.
(5) Max pooling with a 2 × 2 window reduces the feature map from step (4) to a 512-channel feature map of size 32 × 32; two separable convolutions with 3 × 3 kernels then give a 1024-channel feature map of size 28 × 28.
(6) A deconvolution with stride 2 × 2 upsamples the feature map from step (5) to a 512-channel feature map of size 56 × 56; meanwhile, the feature map from step (4) is copied and cropped to a 512-channel feature map of size 56 × 56; the two parts (white and blue in FIG. 2) are concatenated into a 1024-channel feature map of size 56 × 56.
(7) Two separable convolutions (3 × 3 kernels) on the feature map from step (6) give a 512-channel feature map of size 52 × 52.
(8) A deconvolution with stride 2 × 2 upsamples the feature map from step (7) to a 256-channel feature map of size 104 × 104; meanwhile, the feature map from step (3) is copied and cropped to a 256-channel feature map of size 104 × 104; the two parts are concatenated into a 512-channel feature map of size 104 × 104, and two separable convolutions (3 × 3 kernels) give a 256-channel feature map of size 100 × 100.
(9) A deconvolution with stride 2 × 2 upsamples the feature map from step (8) to a 128-channel feature map of size 200 × 200; meanwhile, the feature map from step (2) is copied and cropped to a 128-channel feature map of size 200 × 200; the two parts are concatenated into a 256-channel feature map of size 200 × 200, and two separable convolutions (3 × 3 kernels) give a 128-channel feature map of size 196 × 196.
(10) A deconvolution with stride 2 × 2 upsamples the feature map from step (9) to a 64-channel feature map of size 392 × 392; meanwhile, the feature map from step (1) is copied and cropped to a 64-channel feature map of size 392 × 392; the two parts are concatenated into a 128-channel feature map of size 392 × 392, and two separable convolutions (3 × 3 kernels) give a 64-channel feature map of size 388 × 388; finally, a 1 × 1 convolution produces the final segmentation map.
Steps (1) to (5) encode the input image, generating features ranging from detailed to abstract. Steps (6) to (10) decode the features generated by steps (1) to (5); by fusing the detailed and abstract features, the complex tissue region identification effect is achieved.
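Putting the pieces together, the following condensed sketch wires up the M-UNet data flow of steps (1) to (10), reusing the SeparableConv2d and crop_and_concat helpers sketched earlier. The two-class output head, weight initialization, and other details are assumptions; only the layer wiring and feature-map sizes follow the description above:

```python
import torch
import torch.nn as nn

def double_sep_conv(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(SeparableConv2d(in_ch, out_ch),
                         SeparableConv2d(out_ch, out_ch))

class MUNet(nn.Module):
    def __init__(self, n_classes: int = 2):    # tissue vs. background (assumed)
        super().__init__()
        chs = [64, 128, 256, 512, 1024]
        self.enc, prev = nn.ModuleList(), 1    # single-channel input
        for c in chs:
            self.enc.append(double_sep_conv(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ModuleList(
            [nn.ConvTranspose2d(c, c // 2, 2, stride=2) for c in chs[:0:-1]])
        self.dec = nn.ModuleList(
            [double_sep_conv(c, c // 2) for c in chs[:0:-1]])
        self.head = nn.Conv2d(64, n_classes, 1)   # final 1x1 convolution, step (10)

    def forward(self, x):
        skips = []
        for i, block in enumerate(self.enc):       # steps (1)-(5)
            x = block(x)
            if i < len(self.enc) - 1:
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(crop_and_concat(skip, up(x)))  # steps (6)-(10)
        return self.head(x)

# out = MUNet()(torch.randn(1, 1, 572, 572))   # -> shape (1, 2, 388, 388)
```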
III. Image semantic segmentation network prediction
The trained semantic segmentation network is used to predict on the test data set of about 20,000 images; some example prediction results are shown in the image semantic segmentation network prediction part of FIG. 1.
Let tp be the number of correctly predicted (true positive) pixels, fp the number of incorrectly predicted (false positive) pixels, and fn the number of missed (false negative) pixels. Metrics such as precision, recall, the overall performance F1, and the overlap ratio IoU between the predicted result and the annotated result can then be defined:

precision = tp / (tp + fp)

recall = tp / (tp + fn)

f1 = 2 · precision · recall / (precision + recall)

IoU = tp / (tp + fp + fn)
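These metrics can be computed directly from binary masks, for example (a straightforward NumPy sketch; the patent defines only the formulas, not this code):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray,
                         eps: float = 1e-9) -> dict:
    """Precision, recall, F1 and IoU from binary prediction/annotation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)      # correctly predicted tissue pixels
    fp = np.sum(pred & ~gt)     # wrongly predicted tissue pixels
    fn = np.sum(~pred & gt)     # missed tissue pixels
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    iou = tp / (tp + fp + fn + eps)
    return {"precision": precision, "recall": recall, "f1": f1, "iou": iou}
```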
Based on these metrics, the performance of M-UNet on the test set during training is shown in FIGS. 3-6. While improving the efficiency of the semantic segmentation network, M-UNet still converges well on the tissue region segmentation problem and effectively achieves tissue region segmentation.
IV. Post-processing to output the tissue region identification
Based on the prediction results of the semantic segmentation network, morphological image processing techniques such as dilation and connected-region search can accurately identify the tissue regions in a pathological section image; some example identification results are shown in the post-processing output tissue region identification part of FIG. 1.
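A minimal sketch of this post-processing stage; the structuring-element size and area threshold are assumed values, not taken from the patent:

```python
import cv2
import numpy as np

def postprocess(mask: np.ndarray, min_area: int = 500) -> np.ndarray:
    """Dilate the predicted mask, then keep sizeable connected regions."""
    dilated = cv2.dilate(mask.astype(np.uint8), np.ones((5, 5), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(dilated)
    out = np.zeros_like(dilated)
    for i in range(1, n):                        # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            out[labels == i] = 1
    return out
```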
The present invention is capable of other embodiments, and various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the invention.

Claims (8)

1. A pathological section tissue region identification system based on image semantic segmentation is characterized by comprising the following steps:
S100, inputting a macro image;
S200, labeling the tissue region in the macro image to obtain an input segmentation map P_in;
S300, establishing a semantic segmentation network, wherein the semantic segmentation network comprises the following steps:
a_1: perform separable convolution K times on P_in to obtain P_1;
a_2: perform a maximum pooling operation on P_1, then separable convolution K times, to obtain P_2;
……
a_N: perform a maximum pooling operation on P_{N-1}, then separable convolution K times, to obtain P_N;
a_{N+1}: perform a maximum pooling operation on P_N, then separable convolution K times, to obtain P_{N+1};
a_{N+2}: perform a deconvolution operation on P_{N+1} to obtain Q_1; crop P_N to obtain P'_N, where P'_N has the same size as Q_1; concatenate P'_N and Q_1 to obtain Q'_1;
a_{N+3}: perform a deconvolution operation on Q'_1 to obtain Q_2; crop P_{N-1} to obtain P'_{N-1}, where P'_{N-1} has the same size as Q_2; concatenate P'_{N-1} and Q_2 to obtain Q'_2;
……
a_{2N+1}: perform a deconvolution operation on Q'_{N-1} to obtain Q_N; crop P_1 to obtain P'_1, where P'_1 has the same size as Q_N; concatenate P'_1 and Q_N to obtain Q'_N;
a_{2N+2}: perform a deconvolution operation on Q'_N to obtain the output segmentation map P_out;
S400, training a semantic segmentation network;
S500, predicting with the semantic segmentation network to obtain a prediction result;
S600, identifying the tissue region based on the prediction result using morphological image processing techniques.
2. The identification system of claim 1, wherein: in step a_{N+1+j}, after concatenating P'_{N-j+1} and Q_j to obtain Q'_j, M separable convolution operations are performed on Q'_j so that its number of channels equals that of Q_j, wherein N ≥ j ≥ 1.
3. The identification system of claim 2, wherein: the step size of the maximum pooling operation is 2 × 2 and the convolution kernel size of the separable convolution is 3 × 3.
4. The identification system of claim 3, wherein: k is 2 and M is 2.
5. The identification system of claim 4, wherein: n is 4.
6. The identification system of claim 1, wherein: the segmentation map P_in is a single-channel image.
7. The identification system of claim 1, wherein: the morphological image processing technique is dilation or connected region finding.
8. The identification system of claim 1, wherein: the label in step S200 is a polygon label.
CN201911204394.1A 2019-11-29 2019-11-29 Pathological section tissue area identification system based on image semantic segmentation Pending CN111179214A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911204394.1A CN111179214A (en) 2019-11-29 2019-11-29 Pathological section tissue area identification system based on image semantic segmentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911204394.1A CN111179214A (en) 2019-11-29 2019-11-29 Pathological section tissue area identification system based on image semantic segmentation

Publications (1)

Publication Number Publication Date
CN111179214A (en) 2020-05-19

Family

ID=70647307

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911204394.1A Pending CN111179214A (en) 2019-11-29 2019-11-29 Pathological section tissue area identification system based on image semantic segmentation

Country Status (1)

Country Link
CN (1) CN111179214A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190228529A1 (en) * 2016-08-26 2019-07-25 Hangzhou Hikvision Digital Technology Co., Ltd. Image Segmentation Method, Apparatus, and Fully Convolutional Network System
CN108197606A (en) * 2018-01-31 2018-06-22 浙江大学 The recognition methods of abnormal cell in a kind of pathological section based on multiple dimensioned expansion convolution
CN108537292A (en) * 2018-04-10 2018-09-14 上海白泽网络科技有限公司 Semantic segmentation network training method, image, semantic dividing method and device
CN110097554A (en) * 2019-04-16 2019-08-06 东南大学 The Segmentation Method of Retinal Blood Vessels of convolution is separated based on intensive convolution sum depth
CN110097544A (en) * 2019-04-25 2019-08-06 武汉精立电子技术有限公司 A kind of display panel open defect detection method

Similar Documents

Publication Publication Date Title
CN111325751B (en) CT image segmentation system based on attention convolution neural network
CN110428432B (en) Deep neural network algorithm for automatically segmenting colon gland image
CN111696094B (en) Immunohistochemical PD-L1 membrane staining pathological section image processing method, device and equipment
CN108229576B (en) Cross-magnification pathological image feature learning method
CN108305253B (en) Pathological image classification method based on multiple-time rate deep learning
Ligabue et al. Evaluation of the classification accuracy of the kidney biopsy direct immunofluorescence through convolutional neural networks
CN110647875B (en) Method for segmenting and identifying model structure of blood cells and blood cell identification method
CN112634296A (en) RGB-D image semantic segmentation method and terminal for guiding edge information distillation through door mechanism
CN113706545B (en) Semi-supervised image segmentation method based on dual-branch nerve discrimination dimension reduction
CN113378933A (en) Thyroid ultrasound image classification and segmentation network, training method, device and medium
CN115205250A (en) Pathological image lesion segmentation method and system based on deep learning
Juhong et al. Super-resolution and segmentation deep learning for breast cancer histopathology image analysis
CN114445356A (en) Multi-resolution-based full-field pathological section image tumor rapid positioning method
CN115909006A (en) Mammary tissue image classification method and system based on convolution Transformer
Jha et al. Instance segmentation for whole slide imaging: end-to-end or detect-then-segment
CN115393283A (en) Polyp image segmentation method based on shallow layer supervision and attention feedback
CN114708229A (en) Pathological section digital image full-hierarchy analysis system
Nasrollahi et al. Deep artifact-free residual network for single-image super-resolution
CN111179214A (en) Pathological section tissue area identification system based on image semantic segmentation
CN110414338B (en) Pedestrian re-identification method based on sparse attention network
Li et al. RGSR: A two-step lossy JPG image super-resolution based on noise reduction
Yan et al. DEST: Deep Enhanced Swin Transformer Toward Better Scoring for NAFLD
CN112069735B (en) Full-slice digital imaging high-precision automatic focusing method based on asymmetric aberration
CN112686912B (en) Acute stroke lesion segmentation method based on gradual learning and mixed samples
CN114627293A (en) Image matting method based on multi-task learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination