CN111612740B - Pathological image processing method and device - Google Patents
Classifications
- G06T7/0012 — Biomedical image inspection
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
- G06T7/10 — Segmentation; Edge detection
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
Abstract
The invention discloses a pathological image processing method and device, wherein the method comprises the following steps: segmenting the pathological image based on a semantic segmentation model to obtain a first segmentation result, wherein the semantic segmentation model is based on a deep learning network; segmenting the pathological image based on particle filtering to obtain a second segmentation result; calculating the average of the first segmentation result and the second segmentation result to obtain a corresponding third segmentation result; and processing the third segmentation result based on a classification network to obtain a final segmentation result. The apparatus performs the corresponding method. According to the embodiments of the invention, cells in the pathological image can be identified and segmented through semantic segmentation and particle-filtering segmentation; the averaging reasonably reduces the recognition error and improves the segmentation effect; and the classification network reduces recognition false positives and improves recognition accuracy.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a pathological image processing method and device.
Background
Culturing and observing cells can provide important biological research material. Parameters such as cell shape, the physical and chemical properties of cell secretions, and cell-division behaviour can guide researchers to corresponding results. These results can be applied in fields such as biology, pharmacy and medicine to benefit human society.
The analysis of pathological images, i.e. the study of cell images, can help researchers deepen pathological research. Among diseases, cancers such as breast cancer are among the main causes of human death. Early detection and early diagnosis of breast cancer are key to improving treatment outcomes. In breast cancer grading, the number of mitoses is an important parameter. However, manually counting mitoses is a very tedious task for a pathologist, and the process is complicated by the presence of apoptotic cells, lymphocytes and the like, which interfere with the counting of mitotic cells. With the advent of pathological-section scanners, traditional pathological sections have been converted into digital sections, and the automatic detection of mitosis has gradually received more attention.
Conventional detection algorithms such as CNNs can only perform simple classification, while an FCN can solve image segmentation at the semantic level and realize image-to-image prediction; however, the result obtained is not accurate enough, many details are lost, and a large amount of data is needed to train the network. Medical data are costly to acquire and the corresponding datasets are small, so practical image analysis suffers from low training efficiency and high cost.
Disclosure of Invention
Embodiments of the present invention aim to solve at least one of the technical problems in the related art to some extent. Therefore, an object of the embodiments of the present invention is to provide a pathological image processing method and apparatus.
The technical scheme adopted by the invention is as follows:
in a first aspect, an embodiment of the present invention provides a pathological image processing method, including: segmenting the pathological image based on a semantic segmentation model to obtain a first segmentation result, wherein the semantic segmentation model is based on a deep learning network; segmenting the pathological image based on particle filtering to obtain a second segmentation result; calculating the average of the first segmentation result and the second segmentation result to obtain a corresponding third segmentation result; and processing the third segmentation result based on a classification network to obtain a final segmentation result.
The beneficial effects of the embodiment of the invention at least include: cells in the pathological image can be identified and segmented through semantic segmentation and particle-filtering segmentation; the averaging reasonably reduces the recognition error and improves the segmentation effect; and the classification network reduces recognition false positives and improves recognition accuracy.
According to one embodiment of the pathological image processing method of the invention, the deep learning network comprises a U-Net network.
According to one embodiment of the pathological image processing method of the invention, the classification network comprises an Xception network.
According to an embodiment of the present invention, in the pathological image processing method, processing the third segmentation result based on the classification network includes: processing the third segmentation result based on the classification network and feature screening to reduce the false-positive rate.
According to the pathological image processing method, the pathological image is a cell image of breast cancer, and, correspondingly, the first segmentation result, the second segmentation result and the third segmentation result are cell-identification and image-segmentation results.
In a second aspect, an embodiment of the present invention provides a pathological image processing apparatus, including: a first segmentation module, configured to segment the pathological image based on a semantic segmentation model to obtain a first segmentation result, the semantic segmentation model being based on a deep learning network; a second segmentation module, configured to segment the pathological image based on particle filtering to obtain a second segmentation result; a third segmentation module, configured to calculate the average of the first segmentation result and the second segmentation result to obtain a corresponding third segmentation result; and a fourth segmentation module, configured to process the third segmentation result based on a classification network to obtain a final segmentation result.
The beneficial effects of the embodiment of the invention at least include: cells in the pathological image can be identified and segmented through semantic segmentation and particle-filtering segmentation; the averaging reasonably reduces the recognition error and improves the segmentation effect; and the classification network reduces recognition false positives and improves recognition accuracy.
According to one embodiment of the pathological image processing apparatus of the invention, the deep learning network comprises a U-Net network.
According to an embodiment of the pathological image processing apparatus of the invention, the classification network comprises an Xception network.
According to an embodiment of the present invention, in the pathological image processing apparatus, processing the third segmentation result based on the classification network includes: processing the third segmentation result based on the classification network and feature screening to reduce the false-positive rate.
According to the pathological image processing device, the pathological image is a cell image of breast cancer, and, correspondingly, the first segmentation result, the second segmentation result and the third segmentation result are cell-identification and image-segmentation results.
Drawings
FIG. 1 is a flow chart of a pathology image processing method according to an embodiment of the present invention;
FIG. 2 is a framework diagram of Xception in an embodiment of the invention;
fig. 3 is a block diagram of a pathology image processing apparatus according to an embodiment of the present invention.
Detailed Description
The conception and the technical effects produced by the present invention will be clearly and completely described in conjunction with the embodiments below to fully understand the objects, features and effects of the present invention. It is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments, and that other embodiments obtained by those skilled in the art without inventive effort are within the scope of the present invention based on the embodiments of the present invention.
In the description of the present invention, if an orientation description such as "upper", "lower", "front", "rear", "left", "right", etc. is referred to, it is merely for convenience of description and simplification of the description, and does not indicate or imply that the apparatus or element referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the invention. If a feature is referred to as being "disposed," "secured," "connected," or "mounted" on another feature, it can be directly disposed, secured, or connected to the other feature or be indirectly disposed, secured, connected, or mounted on the other feature.
In the description of the embodiments of the present invention, "several" means more than one and "multiple" means more than two; "greater than", "less than" and "exceeding" are understood to exclude the stated number, while "above", "below" and "within" are understood to include it. If "first" and "second" are mentioned, they are to be understood as distinguishing technical features only, and not as indicating or implying relative importance, the number of the indicated technical features, or their precedence.
Example 1.
The embodiment of the invention provides a pathological image processing method shown in fig. 1, which comprises the following steps:
s1, segmenting a pathological image based on a semantic segmentation model to obtain a first segmentation result, wherein the semantic segmentation model is based on a deep learning network;
s2, segmenting the pathological image based on particle filtering to obtain a second segmentation result;
s3, calculating a first segmentation result and a second segmentation result to obtain an average value and a corresponding third segmentation result;
s4, processing the third segmentation result based on the classification network to obtain a final segmentation result.
Semantic segmentation is the classification of each pixel in an image; classifying the image pixels partitions the image into corresponding regions and defines the image content attributes. In the embodiment of the invention, the method mainly targets the pixels of cells and of culture media and/or human tissues in the images.
The deep-learning-based segmentation network can perform semantic segmentation on pathological images and realize image-to-image prediction. Specifically: the datasets used are the published ICPR2012 (International Conference on Pattern Recognition 2012) data, the ICPR2014 data and the AMIDA13 (Assessment of Mitosis Detection Algorithms 2013) data. The ICPR2012 dataset is annotated at all pixels of each mitotic cell, while the remaining datasets are only weakly annotated, i.e. only the center coordinates of the mitotic cells are marked. The pathological images are segmented using a U-Net network: first, the ICPR2012 dataset is used as the training set and validation set, and the other datasets are used as test sets; then the U-Net network is trained and its parameters tuned on the training set, finally yielding a trained U-Net convolutional neural network model; a test image can then be input to achieve automatic identification and segmentation, producing the first segmentation result.
Particle filtering obtains a minimum-variance estimate of the system state by finding a set of random samples propagated in the state space to approximate the probability density function, replacing the integration operation with the sample mean; these samples are vividly called particles. Particle filtering is based on the Monte Carlo method and is a non-parametric implementation of Bayesian filtering; compared with parametric filtering, representing probabilities with particle sets allows it to be used with state-space models of any form.
The core idea of particle filtering is random sampling and importance resampling: when the specific location of the mitotic cells (targets) in the image is unknown, random sampling is performed, and importance resampling is then carried out according to the feature similarity (importance probability density). Specifically: in the first step, 70% of the ICPR2012 dataset is randomly selected for learning the characteristics of mitotic cells (targets). In the second step, particles are uniformly distributed on each pathological image in the remaining 30% of the ICPR2012 dataset, and the importance probability density of each particle is calculated, with the sum of the importance probability densities of all particles equal to 1. In the third step, the pixel coordinates of the target are calculated by a weighted-average method. In the fourth step, importance resampling is carried out, i.e. more particles are allocated where the importance probability density is higher and fewer particles where it is lower, and the pixel coordinates of the target are then recalculated. The second, third and fourth steps are cycled repeatedly to complete dynamic identification and segmentation of the targets in the field of view. After segmenting the 30% of the ICPR2012 dataset with the particle filter, the segmentation result is compared and verified against the annotated result, and the ICPR2014 and AMIDA13 datasets are then segmented with the particle filter to obtain the second segmentation result.
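The second, third and fourth steps above can be sketched as a small localization loop. This is a toy illustration, not the patent's implementation: the similarity function, image size, particle count and jitter scale are all assumptions.

```python
import math
import random

def particle_filter_locate(width, height, similarity, n_particles=500,
                           n_iters=10, seed=0):
    """Locate a target by uniform random sampling, weighted-average
    coordinate estimation, and importance resampling (a sketch of the
    second/third/fourth steps described above)."""
    rng = random.Random(seed)
    # Second step: uniformly distribute particles over the image.
    particles = [(rng.uniform(0, width), rng.uniform(0, height))
                 for _ in range(n_particles)]
    est = (0.0, 0.0)
    for _ in range(n_iters):
        # Importance probability density of each particle; densities sum to 1.
        w = [similarity(x, y) for x, y in particles]
        total = sum(w) or 1.0
        w = [wi / total for wi in w]
        # Third step: pixel coordinates of the target by weighted averaging.
        est = (sum(wi * x for wi, (x, _) in zip(w, particles)),
               sum(wi * y for wi, (_, y) in zip(w, particles)))
        # Fourth step: resample -- more particles where density is high --
        # then jitter so the new set explores the neighbourhood.
        particles = [(x + rng.gauss(0, 1), y + rng.gauss(0, 1))
                     for x, y in rng.choices(particles, weights=w, k=n_particles)]
    return est

# Hypothetical feature similarity: a Gaussian bump around a target at (30, 12).
ex, ey = particle_filter_locate(
    50, 50, lambda x, y: math.exp(-((x - 30) ** 2 + (y - 12) ** 2) / 50.0))
```

With the seeded generator the particle cloud concentrates on the bump, and the weighted-average estimate lands close to the assumed target coordinates.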
Cells in the pathological image can thus be respectively identified and segmented through semantic segmentation and particle-filtering segmentation. Specifically, a series of successive images is analysed to obtain the division of given cells. For example, target cells A to Z are identified and determined in image No. 1; then cells A1 and A2 and cells B1 and B2 are identified and determined in image No. 2, where A1 and A2 are the division result of target cell A, and B1 and B2 are the division result of target cell B. The division of cells A and B from image No. 1 to image No. 2 can thus be judged accurately, and the same extends to all target cells A to Z, at which point the division behaviour of the target cells across the series of successive images is obtained.
Obviously, different segmentation methods can yield different segmentation results, and since the image being segmented is the same, several combinations are possible between the first segmentation result and the second segmentation result. Specifically: for a cell a of a pathological image, if the first segmentation result identifies and segments a and the second segmentation result also identifies and segments a, then a is a target cell; conversely, a is not a target cell. That is, the third segmentation result keeps only the cells on which the first segmentation result and the second segmentation result agree (their intersection).
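Read this way, averaging two per-pixel score maps and applying a strict threshold reproduces the both-must-agree rule. A small sketch (the threshold value of 0.5 with a strict comparison is an assumption):

```python
def combine_segmentations(score_a, score_b, threshold=0.5):
    """Average two per-pixel segmentation score maps and keep a pixel only
    when the average strictly exceeds the threshold. For binary masks this
    is exactly the both-methods-agree (intersection) rule: (1+0)/2 = 0.5
    does not pass a strict > 0.5 test, while (1+1)/2 = 1.0 does."""
    h, w = len(score_a), len(score_a[0])
    avg = [[(score_a[i][j] + score_b[i][j]) / 2.0
            for j in range(w)] for i in range(h)]
    mask = [[1 if avg[i][j] > threshold else 0
             for j in range(w)] for i in range(h)]
    return avg, mask

# Binary example: only pixels marked by BOTH methods survive.
avg, mask = combine_segmentations([[1, 1], [0, 1]], [[1, 0], [0, 1]])
```

The same function also accepts soft (probability) maps, in which case the average behaves as the ensemble described in the abstract.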
The false positive of the identification can be reduced through the classification network, and the identification accuracy is improved.
According to one embodiment of the pathological image processing method of the invention, the deep learning network comprises a U-Net network.
U-Net is an improved network based on FCN. U-Net can first use data augmentation (rotation, translation, scaling, flipping and perspective transformation) to train the network with less data, because medical data are more costly to obtain than other pictures and their text data, consuming both time and resources; the advent of U-Net therefore helps apply deep learning to medical images with fewer samples. The U-Net structure is U-shaped and consists mainly of two parts: a contracting path and an expanding path. The contracting path mainly captures context information in the picture, while the symmetric expanding path precisely localizes the parts of the picture to be segmented. The result after each upsampling in the U-Net expanding path is fused with the result of the corresponding contracting-path layer; after four such upsamplings, the segmentation result obtained is more accurate than that of an FCN.
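The contracting/expanding structure with skip fusion can be illustrated with a framework-free toy sketch: plain nested lists stand in for feature maps, 2×2 max pooling for the contracting path, nearest-neighbour upsampling for the expanding path, and element-wise averaging for the skip fusion. There are no learned weights here; only the shape logic of the architecture is shown.

```python
def maxpool2(x):
    # 2x2 max pooling: halves each spatial dimension (contracting path).
    return [[max(x[i][j], x[i][j + 1], x[i + 1][j], x[i + 1][j + 1])
             for j in range(0, len(x[0]), 2)] for i in range(0, len(x), 2)]

def upsample2(x):
    # Nearest-neighbour upsampling: doubles each spatial dimension
    # (expanding path).
    out = []
    for row in x:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def fuse(a, b):
    # Skip connection: fuse the upsampled map with the map saved from the
    # corresponding contracting-path layer.
    return [[(p + q) / 2.0 for p, q in zip(ra, rb)] for ra, rb in zip(a, b)]

def toy_unet(x, depth=4):
    skips = []
    for _ in range(depth):               # contracting path: capture context
        skips.append(x)
        x = maxpool2(x)
    for _ in range(depth):               # expanding path: precise localization
        x = fuse(upsample2(x), skips.pop())
    return x

x = [[1.0] * 16 for _ in range(16)]      # 16 = 2**4, so four poolings fit
y = toy_unet(x)
```

After four poolings and four fused upsamplings, the output has the same spatial size as the input, mirroring U-Net's image-to-image behaviour.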
According to one embodiment of the pathological image processing method of the invention, the classification network comprises an Xception network.
According to an embodiment of the present invention, in the pathological image processing method, processing the third segmentation result based on the classification network includes: processing the third segmentation result based on the classification network and feature screening to reduce the false-positive rate.
According to the pathological image processing method, the pathological image is a cell image of breast cancer, and, correspondingly, the first segmentation result, the second segmentation result and the third segmentation result are cell-identification and image-segmentation results.
Xception is an improvement on Inception V3 that borrows from depthwise separable convolution to replace the conventional convolution operation. Xception is, however, not identical to depthwise separable convolution; there are two differences: first, Xception performs the 1×1 convolution first and then convolves channel by channel, whereas depthwise separable convolution does the opposite; second, Xception applies a ReLU non-linear activation after the 1×1 convolution, while depthwise separable convolution has no activation function between the two convolutions. As in Fig. 2, the Xception network is divided into three parts: Entry, Middle and Exit. The Entry flow contains 8 convolutions, the Middle flow contains 24 convolutions, and the Exit flow contains 4 convolutions, so Xception amounts to 36 layers. On the ImageNet dataset, Xception has the same number of parameters as Inception V3, but its efficient use of parameters makes it slightly more accurate than Inception V3. Meanwhile, a residual-connection mechanism is added to Xception, which significantly accelerates its convergence and yields noticeably higher accuracy. Because its hypothesis is a stronger version of the hypothesis underlying the Inception structure, it is called Xception, which stands for "Extreme Inception".
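The ordering difference described above — 1×1 pointwise convolution first, then the channel-wise spatial convolution — can be shown in a minimal pure-Python sketch. The shapes, kernels and values below are toy assumptions, not the real 36-layer network.

```python
def pointwise(x, w):
    """1x1 convolution across channels: x is [C_in][H][W], w is [C_out][C_in]."""
    H, W = len(x[0]), len(x[0][0])
    return [[[sum(w[o][c] * x[c][i][j] for c in range(len(x)))
              for j in range(W)] for i in range(H)] for o in range(len(w))]

def depthwise3(x, k):
    """3x3 depthwise convolution: one 3x3 kernel per channel, 'valid' padding."""
    out = []
    for c, ch in enumerate(x):
        H, W = len(ch), len(ch[0])
        out.append([[sum(k[c][di][dj] * ch[i + di][j + dj]
                         for di in range(3) for dj in range(3))
                     for j in range(W - 2)] for i in range(H - 2)])
    return out

def xception_block(x, w_pw, k_dw):
    # Xception order: the 1x1 pointwise convolution comes FIRST, then the
    # channel-wise 3x3 convolution (the reverse of depthwise separable conv).
    return depthwise3(pointwise(x, w_pw), k_dw)

x = [[[1.0] * 4 for _ in range(4)]]            # 1 channel, 4x4, all ones
out = xception_block(x, w_pw=[[1.0], [2.0]],   # 1x1 conv: 1 -> 2 channels
                     k_dw=[[[1.0] * 3] * 3, [[1.0] * 3] * 3])
```

With all-ones input and kernels, the pointwise step produces channels of 1s and 2s, and each 3×3 depthwise sum then yields 9 and 18 respectively; swapping the two calls would reproduce the conventional depthwise-separable ordering.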
The principle of particle filtering includes:
the state variable at time t is denoted as x t The observed value is denoted as y t . Using probability density function p (x t |y 1:t ) To evaluate x t Is a distribution of (a). It is assumed that the known state equation and measurement equation are respectively:
wherein u is t-1 ,v t The process noise at the time t-1 and the measurement noise at the time t are respectively independent and distributed. Since it is not possible to sample from the target cell distribution, it is sampled from a known distribution that can be sampled, denoted q (x|y), the probability density of importance is:
q(x 0:t |y 1:t )=q(x 0:t-1 |y 1:t-1 )q(x t |x 0:t-1 ,y 1:t ) (3) where the subscript of x is 0:t, that is, particle filtering is a posterior of estimating the state at all times in the past. The goal is to know the expected value of the current time state, and the expected value is:
wherein (1)>Is particle->Weight of->Is the weight of the particle after normalization, and N represents the total number of sampling particles. The recursive form of the weights is expressed as:
wherein,,
p(x t |x t-1 )=∫δ(x t -f(x t-1 ,u))p(u)du (6),p(y t |x t )=∫δ(y t -h(x t ,v))p(v)dv (7)
where δ is a dirac function (dirac delta function); the normalized weights are:
the weight of the particles after normalization is as follows:
the expectations are expressed after resampling as:
wherein,,is a particle at time t. />Is the particle after resampling at time t. Wherein n is i Refers to particle->In the production of new particle sets->Number of times that is copied. Then, the standard particle filter flow is:
step 1, initializing a particle set: i.e. at time t=0, by a priori p (x 0 ) Generating N sampling particles and representing the N sampling particles as
Step 2, at a later time t (t=1, 2.), the following steps are cyclically performed:
importance sampling: from the importance probability density q (x 0:t |y 1:t ) Generating N sampling particles and representing the N sampling particles asAccording to the publicThe weight of each particle is calculated in a recurrence way according to the formulas (5) and (8), and the particle weight is obtained after normalization
Resampling: resampling the particle set, i.e. under the condition of keeping the number of particles unchanged, the weight-heavy particles are subjected to n according to the proportion of the weight i The secondary replication replaces particles that do not play a small role weight. The weights of all particles after resampling are the same
(3) And (3) outputting: the state estimates at time t, i.e. their expectations, are calculated according to equation (10).
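The flow of Steps 1 and 2 above can be sketched as a generic filter. In this sketch the transition density is used as the importance density, so the weight recursion of equation (5) reduces to the observation likelihood; the toy 1-D model, noise levels and observation sequence are assumptions for illustration.

```python
import math
import random

def particle_filter(ys, transition, likelihood, prior, n=1000, seed=1):
    """Standard particle-filter flow: initialize from p(x0); then, at each
    time t, perform importance sampling, weight normalization (eq. 8),
    output of the expectation (eq. 10), and resampling."""
    rng = random.Random(seed)
    xs = [prior(rng) for _ in range(n)]          # Step 1: initialize particle set
    estimates = []
    for y in ys:                                 # Step 2, cycled at t = 1, 2, ...
        xs = [transition(x, rng) for x in xs]    # importance sampling
        w = [likelihood(y, x) for x in xs]       # eq. (5) with q = p(x_t|x_{t-1})
        s = sum(w)
        w = [wi / s for wi in w]                 # normalization, eq. (8)
        estimates.append(sum(wi * xi for wi, xi in zip(w, xs)))  # output, eq. (10)
        xs = rng.choices(xs, weights=w, k=n)     # resampling: copy heavy particles
    return estimates

# Toy 1-D tracking: the state drifts slowly, observations are the state
# plus noise, and the true state sits near 5.
est = particle_filter(
    ys=[5.1, 4.9, 5.0, 5.2, 4.8],
    transition=lambda x, rng: x + rng.gauss(0, 0.5),
    likelihood=lambda y, x: math.exp(-0.5 * (y - x) ** 2),
    prior=lambda rng: rng.uniform(0, 10),
)
```

Each element of `est` is the weighted-mean state estimate at that time step; the estimates settle near the assumed true value as the particle cloud concentrates.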
Example 2.
A pathological image processing apparatus as shown in fig. 3 comprises: a first segmentation module 1 for segmenting the pathological image based on a semantic segmentation model to obtain a first segmentation result, wherein the semantic segmentation model is based on a deep learning network; a second segmentation module 2 for segmenting the pathological image based on particle filtering to obtain a second segmentation result; a third segmentation module 3 for calculating the average of the first segmentation result and the second segmentation result to obtain a corresponding third segmentation result; and a fourth segmentation module 4 for processing the third segmentation result based on the classification network to obtain a final segmentation result.
The beneficial effects of the embodiment of the invention at least include: cells in the pathological image can be identified and segmented through semantic segmentation and particle-filtering segmentation; the averaging reasonably reduces the recognition error and improves the segmentation effect; and the classification network reduces recognition false positives and improves recognition accuracy.
According to one embodiment of the pathological image processing apparatus of the invention, the deep learning network comprises a U-Net network.
According to an embodiment of the pathological image processing apparatus of the invention, the classification network comprises an Xception network.
According to an embodiment of the present invention, in the pathological image processing apparatus, processing the third segmentation result based on the classification network includes: processing the third segmentation result based on the classification network and feature screening to reduce the false-positive rate.
According to the pathological image processing device, the pathological image is a cell image of breast cancer, and, correspondingly, the first segmentation result, the second segmentation result and the third segmentation result are cell-identification and image-segmentation results.
While the preferred embodiment of the present invention has been described in detail, the present invention is not limited to the embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and the equivalent modifications or substitutions are included in the scope of the present invention as defined in the appended claims.
Claims (10)
1. A pathological image processing method, characterized by comprising:
dividing the pathological image based on a semantic division model to obtain a first division result, wherein the semantic division model is based on a deep learning network;
based on particle-filtering segmentation of the pathological image, a second segmentation result is obtained, specifically: first, 70% of the ICPR2012 dataset is randomly selected for learning characteristics of target cells, wherein the target cells are mitotic cells; secondly, particles are uniformly distributed on each pathological image in the remaining 30% of the ICPR2012 dataset, and the importance probability density of each particle is calculated, the sum of the importance probability densities of all the particles being 1; thirdly, the pixel coordinates of the target cells are calculated by a weighted-average method; fourthly, importance resampling is carried out, i.e. more particles are allocated where the importance probability density is higher and fewer particles where it is lower, and the pixel coordinates of the target cells are then calculated; the second, third and fourth steps are repeatedly cycled to complete dynamic identification and segmentation of the target cells in the field of view; after the 30% of the ICPR2012 dataset is segmented using particle filtering, the segmentation result is compared and verified with the annotated result, and the ICPR2014 and AMIDA13 datasets are segmented using particle filtering to obtain the second segmentation result;
averaging the first segmentation result and the second segmentation result to obtain a corresponding third segmentation result;
and processing the third segmentation result based on a classification network to obtain a final segmentation result.
2. The pathological image processing method according to claim 1, wherein the deep learning network comprises a U-Net network.
3. The pathological image processing method according to claim 1, wherein the classification network comprises an Xception network.
4. The pathological image processing method according to any one of claims 1 to 3, wherein the processing the third segmentation result based on the classification network comprises:
processing the third segmentation result based on the classification network and feature screening to reduce the false positive rate.
5. The pathological image processing method according to claim 4, wherein the pathological image is a cell image of breast cancer, and correspondingly,
the first segmentation result, the second segmentation result and the third segmentation result are identifications and image segmentations of cells.
6. A pathological image processing apparatus, characterized by comprising:
the first segmentation module is used for segmenting the pathological image based on a semantic segmentation model to obtain a first segmentation result, wherein the semantic segmentation model is based on a deep learning network;
the second segmentation module is used for segmenting the pathological image based on particle filtering to obtain a second segmentation result, specifically: first, randomly selecting 70% of the ICPR2012 data set for learning the characteristics of the target cells, wherein the target cells are mitotic cells; second, uniformly distributing particles over each pathological image in the remaining 30% of the ICPR2012 data set and calculating the importance probability density of each particle, wherein the importance probability densities of all the particles sum to 1; third, calculating the pixel coordinates of the target cells by weighted averaging; fourth, performing importance resampling, namely placing more particles where the importance probability density is higher and fewer particles where it is lower, and then recalculating the pixel coordinates of the target cells; repeating the second, third and fourth steps in a loop to complete the dynamic identification and segmentation of the target cells in the field of view; after segmenting the remaining 30% of the ICPR2012 data set by particle filtering, comparing the segmentation result against the annotated result for verification, and then segmenting the ICPR2014 and AMIDA13 data sets by particle filtering to obtain the second segmentation result;
the third segmentation module is used for averaging the first segmentation result and the second segmentation result to obtain a corresponding third segmentation result;
and the fourth segmentation module is used for processing the third segmentation result based on a classification network to obtain a final segmentation result.
7. The pathology image processing apparatus according to claim 6, wherein the deep learning network comprises a U-Net network.
8. The pathology image processing apparatus according to claim 6, wherein the classification network comprises an Xception network.
9. The pathological image processing apparatus according to any one of claims 6 to 8, wherein the processing the third segmentation result based on the classification network comprises:
processing the third segmentation result based on the classification network and feature screening to reduce the false positive rate.
10. The pathological image processing apparatus according to claim 9, wherein the pathological image is a cell image of breast cancer, and correspondingly,
the first segmentation result, the second segmentation result and the third segmentation result are identifications and image segmentations of cells.
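The four-step particle-filter loop recited in claims 1 and 6 (uniform particle placement, importance weighting summing to 1, weighted-average coordinate estimation, importance resampling) and the averaging that yields the third segmentation result can be sketched as follows. This is a minimal illustration, not the patented implementation: the `importance_map` likelihood array, the Gaussian diffusion after resampling, and all function names are assumptions for demonstration.

```python
import numpy as np

def particle_filter_step(importance_map, n_particles=1000, n_iters=10, rng=None):
    """Estimate target-cell pixel coordinates by iterated importance resampling.

    importance_map is a 2-D array of per-pixel likelihoods, standing in for the
    target-cell appearance model learned from 70% of the ICPR2012 data set.
    """
    rng = np.random.default_rng(rng)
    h, w = importance_map.shape
    # Step 2: uniformly distribute particles over the pathological image.
    particles = np.column_stack([rng.uniform(0, h, n_particles),
                                 rng.uniform(0, w, n_particles)])
    for _ in range(n_iters):
        rows = np.clip(particles[:, 0].astype(int), 0, h - 1)
        cols = np.clip(particles[:, 1].astype(int), 0, w - 1)
        # Importance probability density of each particle, normalised so that
        # the densities of all particles sum to 1.
        weights = importance_map[rows, cols].astype(float) + 1e-12
        weights /= weights.sum()
        # Step 3: pixel coordinates of the target cell by weighted averaging.
        estimate = weights @ particles
        # Step 4: importance resampling -- draw more particles where the
        # density is high, fewer where it is low, then diffuse slightly.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx] + rng.normal(0.0, 1.0, particles.shape)
    return estimate

def fuse_segmentations(first, second):
    # Third segmentation result: element-wise average of the first (semantic)
    # and second (particle-filter) segmentation results.
    return (np.asarray(first) + np.asarray(second)) / 2.0
```

A classification network such as the Xception network of claims 3 and 8 would then be applied around the fused detections to reduce the false positive rate, as recited in claims 4 and 9.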
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010301990.8A CN111612740B (en) | 2020-04-16 | 2020-04-16 | Pathological image processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111612740A CN111612740A (en) | 2020-09-01 |
CN111612740B true CN111612740B (en) | 2023-07-25 |
Family
ID=72201419
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010301990.8A Active CN111612740B (en) | 2020-04-16 | 2020-04-16 | Pathological image processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111612740B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113408595B (en) * | 2021-06-09 | 2022-12-13 | 北京小白世纪网络科技有限公司 | Pathological image processing method and device, electronic equipment and readable storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107274408A (en) * | 2017-06-16 | 2017-10-20 | 厦门大学 | A kind of image partition method based on new particle filter algorithm |
CN108230337A (en) * | 2017-12-31 | 2018-06-29 | 厦门大学 | A kind of method that semantic SLAM systems based on mobile terminal are realized |
CN109035269A (en) * | 2018-07-03 | 2018-12-18 | 怀光智能科技(武汉)有限公司 | A kind of cervical cell pathological section sick cell dividing method and system |
CN109101975A (en) * | 2018-08-20 | 2018-12-28 | 电子科技大学 | Image, semantic dividing method based on full convolutional neural networks |
CN110675368A (en) * | 2019-08-31 | 2020-01-10 | 中山大学 | Cell image semantic segmentation method integrating image segmentation and classification |
CN110675411A (en) * | 2019-09-26 | 2020-01-10 | 重庆大学 | Cervical squamous intraepithelial lesion recognition algorithm based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
CN111612740A (en) | 2020-09-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110889852B (en) | Liver segmentation method based on residual error-attention deep neural network | |
CN110889853B (en) | Tumor segmentation method based on residual error-attention deep neural network | |
CN106056595B (en) | Based on the pernicious assistant diagnosis system of depth convolutional neural networks automatic identification Benign Thyroid Nodules | |
CN112215790A (en) | KI67 index analysis method based on deep learning | |
CN112543934A (en) | Method for determining degree of abnormality, corresponding computer readable medium and distributed cancer analysis system | |
JP2023543044A (en) | Method of processing images of tissue and system for processing images of tissue | |
Xu et al. | Using transfer learning on whole slide images to predict tumor mutational burden in bladder cancer patients | |
US20210312620A1 (en) | Generating annotation data of tissue images | |
CN110021019B (en) | AI-assisted hair thickness distribution analysis method for AGA clinical image | |
CN116188423A (en) | Super-pixel sparse and unmixed detection method based on pathological section hyperspectral image | |
CN114266898A (en) | Liver cancer identification method based on improved EfficientNet | |
CN111047559A (en) | Method for rapidly detecting abnormal area of digital pathological section | |
CN105184829B (en) | A kind of tight quarters target detection and high-precision method for positioning mass center | |
CN118172614B (en) | Ordered ankylosing spondylitis rating method based on supervised contrast learning | |
CN114445356A (en) | Multi-resolution-based full-field pathological section image tumor rapid positioning method | |
CN113657449A (en) | Traditional Chinese medicine tongue picture greasy classification method containing noise labeling data | |
Elguebaly et al. | Bayesian learning of generalized gaussian mixture models on biomedical images | |
CN111612740B (en) | Pathological image processing method and device | |
Jia et al. | A parametric optimization oriented, AFSA based random forest algorithm: application to the detection of cervical epithelial cells | |
CN116912240B (en) | Mutation TP53 immunology detection method based on semi-supervised learning | |
CN116030063B (en) | Classification diagnosis system, method, electronic device and medium for MRI image | |
CN112927215A (en) | Automatic analysis method for digestive tract biopsy pathological section | |
KUŞ et al. | Detection of microcalcification clusters in digitized X-ray mammograms using unsharp masking and image statistics | |
Athanasiadis et al. | Segmentation of complementary DNA microarray images by wavelet-based Markov random field model | |
CN113762478A (en) | Radio frequency interference detection model, radio frequency interference detection method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||