CN112614093A - Breast pathology image classification method based on multi-scale space attention network - Google Patents

Breast pathology image classification method based on multi-scale space attention network

Info

Publication number
CN112614093A
CN112614093A (application number CN202011462321.5A)
Authority
CN
China
Prior art keywords
image
network
image block
image blocks
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011462321.5A
Other languages
Chinese (zh)
Inventor
Xia Yong (夏勇)
Feng Yu (冯宇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Shenzhen Institute of Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Shenzhen Institute of Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University, Shenzhen Institute of Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202011462321.5A
Publication of CN112614093A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30068 Mammography; Breast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a breast pathology image classification method based on a multi-scale spatial attention network. First, the original breast pathology image is scaled to three different scales and cut into image blocks; invalid image blocks are removed by binarizing the blocks, yielding the training data set image blocks. Next, a spatial attention network is constructed and trained with the training image blocks of each scale: the spatial attention module of the network applies a spatial transformation to the original pathological image block, and the classification module outputs the classification result of the block. Finally, the image blocks of the images at the different scales are fed into the trained networks to obtain the category of each block, the prediction probability of each category is computed at each scale, and the final prediction category of the original pathological image is determined by the mean of the category probabilities over the scales. The method makes full use of the rich texture and structure information in limited breast pathology images and improves classification performance.

Description

Breast pathology image classification method based on multi-scale space attention network
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to a breast pathology image classification method based on a multi-scale spatial attention network.
Background
Breast cancer is currently one of the most common cancers among women aged 20-59 worldwide, and its incidence and mortality rank among the highest of all cancers in women. According to the report "Analysis of malignant tumor incidence and mortality in China, 2015" published by China's National Cancer Center in 2019, breast cancer has the highest incidence among women and the fifth-highest mortality. Clinically, the most accurate examination currently available is pathology: pathological images of breast tissue are obtained by fine-needle puncture, and a physician judges whether a lesion has occurred and of what type. According to the cell density, morphology and degree of lesion of the biopsied tissue, breast pathology images are generally divided into four categories: normal tissue, benign lesion, carcinoma in situ, and invasive carcinoma. The cell morphology of normal tissue is relatively regular and no lesion is present. A benign lesion is a benign tumor; its degree of lesion is small and similar to normal tissue, the lesion area can usually be removed surgically, and with proper treatment it does not recur. Carcinoma in situ means that the cancer cells are confined to the primary site and have not metastasized; it can be further subdivided into lobular carcinoma in situ, ductal carcinoma in situ, and so on. In invasive carcinoma, the cancer cells break through the lobular-ductal system of the breast and invade surrounding tissues, easily leading to metastasis; in pathological images they appear in varied forms and dense distribution. Neither normal tissue nor benign lesions are cancerous, whereas both carcinoma in situ and invasive carcinoma are malignant lesions. If carcinoma in situ is detected and treated early, the risk of progression to invasive carcinoma can be effectively reduced. Therefore, early diagnosis and treatment of breast cancer can effectively reduce patient suffering and improve the cure rate.
However, manual diagnosis of pathological images not only requires a doctor to have rich medical knowledge and long-term clinical experience, but also demands prolonged concentration, so this time-consuming and labor-intensive process is prone to misdiagnosis. With the development of artificial intelligence, deep neural network algorithms have begun to be widely applied to medical image classification. The specific difficulties of breast pathology image classification are the insufficient number of labeled images, large intra-class differences and high inter-class similarity. Large intra-class difference means that cells of the same class can differ greatly; invasive cancer cells, for example, vary considerably in morphology and, although generally dense, may differ in density. High inter-class similarity arises because images of the cancer classes contain both cancerous cells and normal tissue cells, which inevitably makes two pathological images of different classes look similar. Because of these difficulties, the classification performance is often unsatisfactory. Over years of development, deep learning algorithms have nevertheless achieved remarkable results in breast pathology image classification. For example, Chennamsetty et al. ("Classification of breast cancer histology image using ensemble of pre-trained neural networks", In Proceedings of the International Conference on Image Analysis and Recognition, Springer, Cham, 2018: 804-) classify breast pathology images with an ensemble of three pre-trained neural networks. However, this method crudely scales the breast pathology image, which originally contains abundant information, down to 224 x 224; image information is inevitably lost in this process and cannot be recovered even by learning with the three networks. In addition, the method offers no effective solution to the large intra-class differences and inter-class similarity of breast pathology images.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides a breast pathology image classification method based on a multi-scale spatial attention network. First, in order to fully learn the rich information in pathological images, the original breast pathology image is scaled to three different scales and cut into image blocks, and the cut image blocks serve as the test data set; to avoid interference from invalid image blocks (blocks containing essentially no cells), the image blocks are binarized and the invalid blocks are removed, yielding the training data set image blocks. Then a spatial attention network is constructed and trained with the training image blocks of each scale; its spatial attention module applies a spatial transformation to the original pathological image block so that the subsequent classification network can focus on the information of key regions, and the classification module outputs the classification result of the image block. Finally, the test image blocks of the images at the different scales are input into the trained networks to obtain the category of each image block, the prediction probability of each category is computed at each scale, and the final prediction category of the original pathological image is determined by the mean of the category probabilities.
A breast pathology image classification method based on a multi-scale space attention network is characterized by comprising the following steps:
Step 1: scaling the original breast pathology image to three different scales to obtain images at the three scales; then cutting image blocks from the image at each scale, binarizing the image blocks with a locally adaptive threshold segmentation algorithm, removing image blocks whose background area exceeds 95%, and using the remaining image blocks as network training data; augmenting the network training data, with the augmented image blocks inheriting the labels of the original image blocks; and using the image blocks before binarization as network test data;
Cutting image blocks means sliding a window of fixed size over the image with a fixed step length; the image inside the window at each position is one cut image block;
Binarizing an image block with the locally adaptive threshold segmentation algorithm proceeds as follows: a threshold D is set and, for each pixel, the Gaussian-weighted sum S of all pixel values in the k x k region centered on that pixel is computed; if x > S - D, where x denotes the value of the pixel, the pixel value is set to 255 and the pixel belongs to the white background region; otherwise it is set to 0 and the pixel belongs to the black cell region; the threshold D takes a value in the range [5, 20] and k takes a value in the range [50, 150];
Step 2: constructing a spatial attention network comprising a spatial attention module and a classification module, wherein the spatial attention module consists of a positioning network and a sampling grid; the positioning network adopts a modified residual convolutional network ResNet-152, in which the output of the first fully connected layer of the original ResNet-152 is changed to a one-dimensional vector of length 128 and the output of the last fully connected layer is changed to a one-dimensional vector of length 6, the weights of that layer being initialized to zero and its bias set to [1, 0, 0, 0, 1, 0]; the sampling grid uses the length-6 vector output by the positioning network as the parameters of the spatial transformation matrix to apply a matrix mapping transformation to the image block; the classification module adopts the densely connected convolutional network DenseNet-161, the transformed image block output by the sampling grid is input into the classification module, and the output is a four-dimensional vector, i.e. the probabilities that the image block belongs to the four pathological categories;
Step 3: pre-training the residual convolutional network ResNet-152 and the densely connected convolutional network DenseNet-161 on the ImageNet challenge dataset, then inputting the training image blocks of the three-scale images obtained in step 1, together with their labels, into the spatial attention network for training, and obtaining one trained spatial attention network for each scale;
Step 4: inputting the test image blocks of the three-scale images obtained in step 1 into the spatial attention network trained at the corresponding scale, outputting the probabilities that each image block belongs to the four pathological categories, and taking the category with the largest probability as the label of the image block; for the image at each scale, taking the ratio of the number of image blocks assigned to each category to the total number of image blocks of the image as the prediction probability of that category; then averaging the prediction probabilities of each category over the three scales and taking the category with the largest mean as the prediction category of the original breast pathology image.
The invention has the beneficial effects that: due to the adoption of image scaling and image block cutting, the information of the original pathological image can be better learned, and the influence of invalid information on the classification result is eliminated; because a multi-scale space attention network combined with a space attention mechanism is constructed, rich texture structure information in the limited breast pathology image can be more fully utilized, and the classification effect is improved.
Drawings
FIG. 1 is a flow chart of the breast pathology image classification method based on multi-scale spatial attention network of the present invention;
FIG. 2 is an image block before and after binarization processing;
wherein, (a) -the image block before binarization processing, and (b) -the image block after binarization processing;
FIG. 3 is a schematic diagram of a spatial attention network of the present invention;
FIG. 4 is a schematic diagram of the sampling process of the attention module of the present invention.
Detailed Description
The present invention will be further described with reference to the following drawings and examples, which include, but are not limited to, the following examples.
As shown in fig. 1, the present invention provides a breast pathology image classification method based on a multi-scale spatial attention network. The specific implementation process is as follows:
1. extraction and preprocessing of pathological image blocks
For a breast pathological image of size H x W, the pathological image is first scaled by different sampling factors f (f ∈ [1, ∞), where f = 1 corresponds to the original pathological image), producing images at three different scales. The image at each scale is then sliced by a sliding window of size h x w with step size s, where h and w denote the height and width of the sliding window, respectively. With this cutting, the total number of image blocks obtained at the scale with sampling factor f is:
N_f = (⌊(H/f - h)/s⌋ + 1) x (⌊(W/f - w)/s⌋ + 1)
where ⌊·⌋ denotes rounding down.
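The patent provides no source code; as a purely illustrative sketch of this slicing step, the following Python snippet rescales an image by each sampling factor f and slides an h x w window with step s over it. The window size 512, stride 256, sampling factors (1, 2, 3) and the use of PIL/NumPy are assumptions of the example, not values fixed by the patent; the number of patches it yields per scale matches the count formula above.

```python
# Illustrative sketch (not the authors' code): multi-scale slicing of a
# pathology image into h x w patches with stride s.
import numpy as np
from PIL import Image

def extract_patches(image, h=512, w=512, s=256):
    """Slide an h x w window with step s over the image and return the patches."""
    arr = np.asarray(image)
    H, W = arr.shape[:2]
    patches = []
    for top in range(0, H - h + 1, s):
        for left in range(0, W - w + 1, s):
            patches.append(arr[top:top + h, left:left + w])
    return patches

def multiscale_patches(path, factors=(1, 2, 3), h=512, w=512, s=256):
    """Rescale the original image by each sampling factor f, then slice it."""
    img = Image.open(path)
    H, W = img.height, img.width
    per_scale = {}
    for f in factors:
        scaled = img.resize((W // f, H // f), Image.BILINEAR)  # size H/f x W/f
        per_scale[f] = extract_patches(scaled, h, w, s)
    return per_scale
```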
Then, in order to prevent invalid image blocks from interfering with model training and degrading the classification results, the image blocks are binarized with a locally adaptive threshold segmentation algorithm; image blocks in which the white background region (pixel value 255) accounts for more than 95% of the area are removed, and the remaining image blocks are used as network input. The locally adaptive threshold segmentation compares each input pixel value with a local threshold and determines the output pixel value from the comparison: a threshold D is set and, for each pixel, the Gaussian-weighted sum S of all pixel values in the k x k region centered on that pixel is computed; if x > S - D, where x denotes the value of the pixel, the pixel value is set to 255 and the pixel belongs to the white background region; otherwise it is set to 0 and the pixel belongs to the black cell region. In the present invention, the threshold D is set to 9 and k is set to 113.
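For illustration only, this rule can be realized with OpenCV's adaptiveThreshold, whose Gaussian variant outputs 255 exactly when a pixel exceeds the Gaussian-weighted local sum minus the offset. The sketch below mirrors the k = 113 and D = 9 values of this embodiment and the 95% background cut-off of step 1; the function name is ours.

```python
# Illustrative sketch: local adaptive thresholding and background filtering.
import cv2
import numpy as np

def is_background_patch(patch_rgb, block_size=113, c=9, max_white_ratio=0.95):
    gray = cv2.cvtColor(patch_rgb, cv2.COLOR_RGB2GRAY)
    binary = cv2.adaptiveThreshold(
        gray, 255,
        cv2.ADAPTIVE_THRESH_GAUSSIAN_C,  # Gaussian-weighted local sum S
        cv2.THRESH_BINARY,               # 255 where pixel > S - C, else 0
        blockSize=block_size,            # k x k neighbourhood (must be odd)
        C=c,                             # threshold offset D
    )
    white_ratio = np.mean(binary == 255)
    return white_ratio > max_white_ratio  # mostly white background -> discard
```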
Finally, the pathological image blocks are augmented by rotation and flipping, and the augmented image blocks take the labels of the original image blocks.
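As a small illustrative sketch of this augmentation step (not the authors' code), the snippet below generates rotated and flipped copies of a retained patch, each copy inheriting the original label.

```python
# Illustrative sketch: rotation and flipping augmentation with label inheritance.
import numpy as np

def augment(patch, label):
    variants = []
    for k in range(4):                                # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(patch, k)
        variants.append((rotated, label))
        variants.append((np.fliplr(rotated), label))  # horizontally flipped copy
    return variants
```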
2. Spatial attention network
The image block-based spatial attention network constructed by the invention mainly comprises a spatial attention module and a classification module, as shown in FIG. 3.
The spatial attention module consists of a positioning network and a sampling grid. The positioning network uses the residual convolutional network ResNet-152 as its backbone; this ResNet-152 comprises a convolutional layer, a max-pooling layer, three residual network blocks, an average pooling layer and two fully connected layers. In the model of the invention, the last two fully connected layers of the original ResNet-152 are modified: the output of the first fully connected layer is changed from a one-dimensional vector of length 1024 to one of length 128, and the output of the last layer is changed to a one-dimensional vector of length 6, which is used as the parameters of the final spatial transformation matrix; at initialization the weights of this layer are set to zero and its bias to [1, 0, 0, 0, 1, 0]. The specific network structure is shown in Table 1.
TABLE 1
[Table 1, giving the specific structure of the modified ResNet-152 positioning network, is reproduced only as an image in the original publication.]
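Since Table 1 is only available as an image, the following PyTorch sketch shows one plausible way to build the modified positioning network described above: an ImageNet-pretrained ResNet-152 whose head is replaced by two fully connected layers with outputs of length 128 and 6, the last layer zero-initialized with bias [1, 0, 0, 0, 1, 0]. Mapping the patent's two-layer head onto torchvision's single `fc` attribute (and hence a 2048-dimensional input to the first new layer) is an assumption of this sketch, not the patent's exact layer table.

```python
# Illustrative PyTorch sketch of the localization (positioning) network.
import torch
import torch.nn as nn
from torchvision import models

class LocalizationNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet152(weights="IMAGENET1K_V1")  # ImageNet pre-training
        in_features = backbone.fc.in_features                 # 2048 in torchvision
        backbone.fc = nn.Identity()                            # keep conv trunk + avg pool
        self.backbone = backbone
        self.fc1 = nn.Linear(in_features, 128)                 # first FC: length-128 output
        self.fc2 = nn.Linear(128, 6)                           # last FC: 6 affine parameters
        nn.init.zeros_(self.fc2.weight)                        # start from the identity transform
        self.fc2.bias.data = torch.tensor([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])

    def forward(self, x):
        feat = self.backbone(x)
        theta = self.fc2(torch.relu(self.fc1(feat)))
        return theta.view(-1, 2, 3)                            # 2 x 3 affine matrix A_theta
```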
The input of the positioning network is a pathological image block U ∈ R^(h x w x C), where h and w are the height and width of the image block and C is the number of channels; the output is the transformation parameter θ of the spatial transform, a one-dimensional vector of length 6.
Then, a sampling grid G is created from the predicted transformation parameters θ and a regular coordinate grid. The sampling grid G has the same size as the original pathological image block U, and each sampling pixel on the grid is mapped to the pixel at the corresponding position of the input image block through the spatial affine matrix. The sampled pixels on the grid form the spatially transformed output image block V ∈ R^(h x w x C). The spatial transformation can be expressed as:
(x_i^s, y_i^s)^T = A_θ (x_i^t, y_i^t, 1)^T
where (x_i^t, y_i^t) are the coordinates of the i-th sampling point in the output image block V, (x_i^s, y_i^s) are the corresponding source coordinates in the input image block U, and A_θ is the 2 x 3 affine transformation matrix obtained from the transformation parameters θ. The sampling process is shown in FIG. 4.
The input image block U is spatially affine-transformed through the sampling grid G to produce the output image block V. Because the coordinates of the sampling points are generally non-integer, the pixel values at the corresponding coordinates of the original image block U cannot be read directly; bilinear interpolation sampling is used in this case.
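This sampling step corresponds directly to PyTorch's affine grid utilities. As an illustrative sketch (not part of the patent), the predicted θ is reshaped into the 2 x 3 matrix A_θ, a grid of source coordinates is generated, and the input patch is read at those generally non-integer coordinates with bilinear interpolation.

```python
# Illustrative sketch of the sampling grid and bilinear sampling.
import torch
import torch.nn.functional as F

def spatial_transform(U, theta):
    """U: (N, C, h, w) input patches; theta: (N, 2, 3) affine parameters."""
    grid = F.affine_grid(theta, U.size(), align_corners=False)       # source coordinates
    V = F.grid_sample(U, grid, mode="bilinear", align_corners=False)  # bilinear interpolation
    return V
```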
The transformed pathological image block V is used as the input of the classification module to obtain the classification probabilities of the image block for the four classes (normal tissue, benign lesion, carcinoma in situ and invasive carcinoma). The classification module of the invention adopts the densely connected convolutional network DenseNet-161, which consists in sequence of a convolutional layer, a max-pooling layer and four dense blocks; each of the first three dense blocks is followed by a transition layer, and the network ends with an average pooling layer and a fully connected layer whose output is a one-dimensional vector of length 4, i.e. the probabilities of the four categories for the transformed pathological image block V. The specific structure is shown in Table 2.
TABLE 2
[Table 2, giving the specific structure of the DenseNet-161 classification module, is reproduced only as an image in the original publication.]
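Because Table 2 is likewise only an image, the classification module can be sketched, for illustration, as torchvision's ImageNet-pretrained DenseNet-161 with its final fully connected layer replaced by a four-way output; the helper name is ours.

```python
# Illustrative sketch of the classification module (DenseNet-161, 4 classes).
import torch.nn as nn
from torchvision import models

def build_classifier(num_classes=4):
    net = models.densenet161(weights="IMAGENET1K_V1")                  # ImageNet pre-training
    net.classifier = nn.Linear(net.classifier.in_features, num_classes)  # 4-class head
    return net
```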
3. Network training
In the training stage, the image blocks obtained after the binarization filtering and augmentation of step 1, together with the labels of their original pathological images, are used as network input, and three spatial attention networks are trained with the pathological image block sets at the three scales, respectively. The image blocks at a given scale pass through the positioning network, which outputs their transformation matrix parameters; matrix mapping transformation with these parameters converts the original pathological image block U into the pathological image block V, which is then input into the classification network DenseNet-161 to output the probabilities of the four categories.
Since every module of the spatial attention network is differentiable and the gradient propagates through it (including the matrix mapping transformation), the spatial attention network can be trained end to end. Considering that the training data are limited and the network may be difficult to train well from scratch, the two backbone networks of the spatial attention network, namely the positioning network ResNet-152 and the classification network DenseNet-161, are each pre-trained on the ImageNet challenge dataset before training.
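The following sketch, reusing the LocalizationNet, spatial_transform and build_classifier helpers from the earlier illustrative snippets, shows what such end-to-end training could look like: the localization network and the classifier are chained through the differentiable sampling step and optimized jointly with a single cross-entropy loss. The optimizer choice, learning rate and epoch count are assumptions, since the patent does not specify them.

```python
# Illustrative end-to-end training sketch (one network per scale).
import torch
import torch.nn as nn

class SpatialAttentionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.localization = LocalizationNet()      # sketch defined earlier
        self.classifier = build_classifier(4)      # sketch defined earlier

    def forward(self, U):
        theta = self.localization(U)               # predicted affine parameters
        V = spatial_transform(U, theta)            # transformed patch (earlier sketch)
        return self.classifier(V)                  # logits for the 4 classes

def train_one_scale(model, loader, epochs=30, lr=1e-4, device="cuda"):
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for patches, labels in loader:             # patches carry their original image's label
            patches, labels = patches.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(patches), labels)
            loss.backward()                        # gradients flow through grid_sample
            optimizer.step()
```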
4. Image class prediction
After the spatial attention networks are trained, classification prediction can be performed on an original breast pathology image; this is also referred to as the test stage. The input is now the test data set, i.e. the image blocks obtained by the cutting of step 1 without binarization or background-block removal, so that the information of the original image is retained as far as possible. For the input image blocks at each scale, the network predicts the scores, i.e. prediction probabilities, of each block for normal tissue, benign lesion, carcinoma in situ and invasive carcinoma, and the class with the highest probability is taken as the label of the block. For the image at each scale, the ratio of the number of image blocks assigned to each category to the total number of image blocks of that image is taken as the prediction probability of that category, giving four class probabilities per scale; the class probabilities are then averaged over the three scales, and the class with the largest mean is taken as the prediction category of the original breast pathology image.
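As an illustrative sketch of this prediction stage (not the authors' code), the snippet below assigns each test patch the class with the highest score, converts the patch labels at each scale into class frequencies, and averages the three frequency vectors to pick the final image-level category; the dictionary-based interface keyed by sampling factor is an assumption of the example.

```python
# Illustrative sketch of multi-scale, patch-level prediction aggregation.
import numpy as np
import torch

@torch.no_grad()
def predict_image(models_per_scale, patches_per_scale, num_classes=4, device="cuda"):
    """models_per_scale / patches_per_scale: dicts keyed by sampling factor f."""
    scale_probs = []
    for f, model in models_per_scale.items():
        model.eval().to(device)
        labels = []
        for patch in patches_per_scale[f]:          # (C, h, w) tensors
            logits = model(patch.unsqueeze(0).to(device))
            labels.append(int(logits.argmax(dim=1)))  # class with the highest score
        counts = np.bincount(labels, minlength=num_classes)
        scale_probs.append(counts / counts.sum())   # per-scale class frequencies
    return int(np.argmax(np.mean(scale_probs, axis=0)))  # final image-level class
```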

Claims (1)

1. A breast pathology image classification method based on a multi-scale space attention network is characterized by comprising the following steps:
Step 1: scaling the original breast pathology image to three different scales to obtain images at the three scales; then cutting image blocks from the image at each scale, binarizing the image blocks with a locally adaptive threshold segmentation algorithm, removing image blocks whose background area exceeds 95%, and using the remaining image blocks as network training data; augmenting the network training data, with the augmented image blocks inheriting the labels of the original image blocks; and using the image blocks before binarization as network test data;
Cutting image blocks means sliding a window of fixed size over the image with a fixed step length; the image inside the window at each position is one cut image block;
Binarizing an image block with the locally adaptive threshold segmentation algorithm proceeds as follows: a threshold D is set and, for each pixel, the Gaussian-weighted sum S of all pixel values in the k x k region centered on that pixel is computed; if x > S - D, where x denotes the value of the pixel, the pixel value is set to 255 and the pixel belongs to the white background region; otherwise it is set to 0 and the pixel belongs to the black cell region; the threshold D takes a value in the range [5, 20] and k takes a value in the range [50, 150];
Step 2: constructing a spatial attention network comprising a spatial attention module and a classification module, wherein the spatial attention module consists of a positioning network and a sampling grid; the positioning network adopts a modified residual convolutional network ResNet-152, in which the output of the first fully connected layer of the original ResNet-152 is changed to a one-dimensional vector of length 128 and the output of the last fully connected layer is changed to a one-dimensional vector of length 6, the weights of that layer being initialized to zero and its bias set to [1, 0, 0, 0, 1, 0]; the sampling grid uses the length-6 vector output by the positioning network as the parameters of the spatial transformation matrix to apply a matrix mapping transformation to the image block; the classification module adopts the densely connected convolutional network DenseNet-161, the transformed image block output by the sampling grid is input into the classification module, and the output is a four-dimensional vector, i.e. the probabilities that the image block belongs to the four pathological categories;
Step 3: pre-training the residual convolutional network ResNet-152 and the densely connected convolutional network DenseNet-161 on the ImageNet challenge dataset, then inputting the training image blocks of the three-scale images obtained in step 1, together with their labels, into the spatial attention network for training, and obtaining one trained spatial attention network for each scale;
Step 4: inputting the test image blocks of the three-scale images obtained in step 1 into the spatial attention network trained at the corresponding scale, outputting the probabilities that each image block belongs to the four pathological categories, and taking the category with the largest probability as the label of the image block; for the image at each scale, taking the ratio of the number of image blocks assigned to each category to the total number of image blocks of the image as the prediction probability of that category; then averaging the prediction probabilities of each category over the three scales and taking the category with the largest mean as the prediction category of the original breast pathology image.
CN202011462321.5A 2020-12-10 2020-12-10 Breast pathology image classification method based on multi-scale space attention network Pending CN112614093A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011462321.5A CN112614093A (en) 2020-12-10 2020-12-10 Breast pathology image classification method based on multi-scale space attention network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011462321.5A CN112614093A (en) 2020-12-10 2020-12-10 Breast pathology image classification method based on multi-scale space attention network

Publications (1)

Publication Number Publication Date
CN112614093A true CN112614093A (en) 2021-04-06

Family

ID=75234422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011462321.5A Pending CN112614093A (en) 2020-12-10 2020-12-10 Breast pathology image classification method based on multi-scale space attention network

Country Status (1)

Country Link
CN (1) CN112614093A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239993A (en) * 2021-04-29 2021-08-10 中国人民解放军海军军医大学第三附属医院 Pathological image classification method, pathological image classification system, terminal and computer-readable storage medium
CN113807428A (en) * 2021-09-14 2021-12-17 清华大学 Reconstruction method, system and device of classification model probability label and storage medium
CN114170206A (en) * 2021-12-13 2022-03-11 上海派影医疗科技有限公司 Breast pathology image canceration property interpretation method and device considering spatial information correlation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326931A (en) * 2016-08-25 2017-01-11 南京信息工程大学 Mammary gland molybdenum target image automatic classification method based on deep learning
CN109816622A (en) * 2017-11-22 2019-05-28 深圳市祈飞科技有限公司 A kind of adaptive fruit defects detection method and system based on piecemeal
CN110175998A (en) * 2019-05-30 2019-08-27 沈闯 Breast cancer image-recognizing method, device and medium based on multiple dimensioned deep learning
CN111582393A (en) * 2020-05-13 2020-08-25 山东大学 Classification method for predicting multiple pathological types of benign and malignant pulmonary nodules based on three-dimensional deep learning network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326931A (en) * 2016-08-25 2017-01-11 南京信息工程大学 Mammary gland molybdenum target image automatic classification method based on deep learning
CN109816622A (en) * 2017-11-22 2019-05-28 深圳市祈飞科技有限公司 A kind of adaptive fruit defects detection method and system based on piecemeal
CN110175998A (en) * 2019-05-30 2019-08-27 沈闯 Breast cancer image-recognizing method, device and medium based on multiple dimensioned deep learning
CN111582393A (en) * 2020-05-13 2020-08-25 山东大学 Classification method for predicting multiple pathological types of benign and malignant pulmonary nodules based on three-dimensional deep learning network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Dayong Wang et al.: "Deep Learning for Identifying Metastatic Breast Cancer", arXiv:1606.05718v1 [q-bio.QM] *
Zhanbo Yang et al.: "MSA-Net: Multiscale Spatial Attention Network for the Classification of Breast Histology Images", BICS 2019: Advances in Brain Inspired Cognitive Systems *
Zhang Xiying et al.: "Deep learning network integrating STN and DenseNet and its application" (融合STN和DenseNet的深度学习网络及其应用), Computer Engineering and Applications (计算机工程与应用) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239993A (en) * 2021-04-29 2021-08-10 中国人民解放军海军军医大学第三附属医院 Pathological image classification method, pathological image classification system, terminal and computer-readable storage medium
CN113807428A (en) * 2021-09-14 2021-12-17 清华大学 Reconstruction method, system and device of classification model probability label and storage medium
CN114170206A (en) * 2021-12-13 2022-03-11 上海派影医疗科技有限公司 Breast pathology image canceration property interpretation method and device considering spatial information correlation

Similar Documents

Publication Publication Date Title
CN107886514B (en) Mammary gland molybdenum target image lump semantic segmentation method based on depth residual error network
CN109886986B (en) Dermatoscope image segmentation method based on multi-branch convolutional neural network
CN107748900B (en) Mammary gland tumor classification device and storage medium based on discriminative convolutional neural network
CN112614093A (en) Breast pathology image classification method based on multi-scale space attention network
CN114565761B (en) Deep learning-based method for segmenting tumor region of renal clear cell carcinoma pathological image
CN107240102A (en) Malignant tumour area of computer aided method of early diagnosis based on deep learning algorithm
CN111951288B (en) Skin cancer lesion segmentation method based on deep learning
Tong et al. Improved U-net MALF model for lesion segmentation in breast ultrasound images
CN111462042A (en) Cancer prognosis analysis method and system
Hamad et al. Breast cancer detection and classification using artificial neural networks
CN112215844A (en) MRI (magnetic resonance imaging) multi-mode image segmentation method and system based on ACU-Net
CN110689525A (en) Method and device for recognizing lymph nodes based on neural network
CN112990214A (en) Medical image feature recognition prediction model
CN114266717A (en) Parallel capsule network cervical cancer cell detection method based on Inception module
CN115375711A (en) Image segmentation method of global context attention network based on multi-scale fusion
KR20230026699A (en) Method and apparatus for classifying a medical image
Reenadevi et al. Breast cancer histopathological image classification using augmentation based on optimized deep ResNet-152 structure
CN113643269A (en) Breast cancer molecular typing method, device and system based on unsupervised learning
CN117036288A (en) Tumor subtype diagnosis method for full-slice pathological image
CN116645380A (en) Automatic segmentation method for esophageal cancer CT image tumor area based on two-stage progressive information fusion
Honghan et al. Rms-se-unet: A segmentation method for tumors in breast ultrasound images
CN114565601A (en) Improved liver CT image segmentation algorithm based on DeepLabV3+
CN116310553A (en) Mammary gland pathology image molecular typing prediction method based on multi-attribute embedding model
CN115222651A (en) Pulmonary nodule detection system based on improved Mask R-CNN
CN114027794A (en) Pathological image breast cancer region detection method and system based on DenseNet network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210406