CN112396621A - High-resolution microscopic endoscope image nucleus segmentation method based on deep learning - Google Patents
- Publication number
- CN112396621A (application CN202011305801.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- training
- cell nucleus
- resolution
- convolutional neural
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/136: Image analysis; segmentation or edge detection involving thresholding
- G06N3/045: Neural network architectures; combinations of networks
- G06N3/047: Neural network architectures; probabilistic or stochastic networks
- G06N3/08: Neural networks; learning methods
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/10068: Image acquisition modality; endoscopic image
- G06T2207/20081: Training; learning
- G06T2207/20084: Artificial neural networks [ANN]
- G06T2207/20221: Image fusion; image merging
- G06T2207/30024: Biomedical image processing; cell structures in vitro; tissue sections in vitro
Abstract
The invention discloses a deep-learning-based method for segmenting cell nuclei in high-resolution microscopic endoscope images, comprising the following steps: acquire original endoscope images and label their cell nuclei at pixel level to obtain nucleus mask images, then divide the labeled mask images and endoscope images into a training set and a verification set; construct a high-resolution convolutional neural network model with a layered multi-scale attention mechanism; after data enhancement, input the training set into the convolutional neural network for iterative training, using the verification set to judge when training is finished; once training is finished, input an original endoscope image into the trained network, which outputs, for each pixel, the predicted probability that it belongs to a cell nucleus, yielding the nucleus segmentation result and an accurate segmentation of the input image.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a deep-learning-based method for segmenting cell nuclei in microscopic endoscope images.
Background
An endoscope is an optical device for capturing image information inside an object. A medical endoscope enters the human body through a natural orifice or a small surgical incision and lets a doctor see internal structures that X-rays cannot show; it is essential equipment for in-vivo lesion exploration, diagnosis, and minimally invasive surgery, and is widely used across clinical medicine. A conventional endoscope observes tissue in the body only on a macroscopic scale to identify suspicious regions; if necessary, the suspicious tissue is excised and taken outside the body for histopathological diagnosis. This is an invasive process, often accompanied by bleeding, infection, and missed early diagnoses, and it carries a certain risk. The high-resolution micro-endoscope brings in-vivo microscopic observation of living tissue to a magnification and resolution comparable to in-vitro microscopic imaging of histological samples, enabling real-time, high-resolution histopathological diagnosis of internal organs without a sampling biopsy. It is therefore an important instrument for the noninvasive diagnosis of early lesions, particularly those that a conventional endoscope has difficulty finding.
A high-resolution endoscope captures high-resolution endoscopic images, which doctors analyze quantitatively using their prior knowledge (e.g. the size, shape, density, count, and polymorphism of cells or cell nuclei) to provide reliable support for medical diagnosis and to plan treatment accordingly. However, such manual judgment takes doctors too much time, is often subjective, and is prone to misjudgment. Compared with this time-consuming, poorly reproducible, and highly subjective manual process, automatic nucleus segmentation based on image processing yields objective quantitative data quickly, accurately, and reproducibly, improving the efficiency of endoscope image analysis. With accuracy guaranteed, the reproducibility, timeliness, and objectivity of observation improve markedly, sparing basic researchers and clinicians tedious, repetitive daily work. Traditional nucleus segmentation methods, such as distance transforms, morphological operations, region feature extraction, and the Hough transform, are usually suitable only for segmentation tasks with a simple image background and sparsely distributed cells. Deep-learning-based nucleus segmentation models use a convolutional neural network to classify each pixel of the image; this classifies most pixels accurately but relies on deep network models with large numbers of parameters.
Such complex network models can effectively extract context features with strong global consistency, but they lack boundary spatial information and therefore segment nucleus boundaries poorly. When processing microscopic endoscope images they are easily disturbed by complex image backgrounds, tightly packed cells, blurred cell boundaries, and variation in nucleus size, shape, and staining depth, so nucleus segmentation is neither robust nor accurate.
Disclosure of Invention
To address these problems in the prior art, the invention provides a deep-learning-based method for segmenting cell nuclei in microscopic endoscope images. By fusing features from different levels through a high-resolution network and a layered multi-scale attention mechanism, the method segments nucleus boundaries accurately, overcoming the low accuracy of existing nucleus segmentation for microscopic endoscope images and reducing the cost of manual processing.
To achieve this purpose, the following technical scheme is adopted. A deep-learning-based microscopic endoscope image nucleus segmentation method comprises the following steps:
(1) collecting a set of original endoscope cell nucleus images and labelling their cell nuclei at pixel level according to prior knowledge to obtain a set of nucleus mask images; zero-centering the mask image set and the original image set so that their mean is zero, and then normalizing them to obtain an image data set; dividing the image data set into a training set and a verification set;
(2) constructing a layered multi-scale attention mechanism high-resolution convolutional neural network model: the layered multi-scale attention mechanism high-resolution convolutional neural network model is formed by respectively connecting a first coding network, a second coding network and a third coding network with a decoding network; the structure of the first coding network is a high-resolution network, the second coding network and the third coding network are both composed of a plurality of convolutional layers and pooling layers, and the decoding network comprises three feature mapping layers with different scales, convolutional layers and softmax layers;
(3) the method for training the hierarchical multi-scale attention mechanism high-resolution convolutional neural network model comprises the following sub-steps:
(3.1) rotating and horizontally flipping the training set from step (1) to obtain an expanded training set and inputting it into the high-resolution convolutional neural network model constructed in step (2); scaling each training image by factors of 2, 1, and 0.5 to obtain a first, second, and third training image and inputting these into the first, second, and third coding networks respectively to extract feature images; inputting the three feature images into the decoding network, fusing them, and outputting a segmentation result;
(3.2) using the Dice loss as the loss function:

L_dice = 1 - 2|X ∩ Y| / (|X| + |Y|)

wherein X represents the ground-truth cell nucleus mask and Y represents the segmentation result output by the layered multi-scale attention mechanism high-resolution convolutional neural network model under training;
judging whether the loss function is converged by using the verification set, and finishing the training of the high-resolution convolutional neural network model when the loss function is converged;
(4) capturing an original cell nucleus image with the high-resolution micro-endoscope, inputting it into the high-resolution convolutional neural network model trained in step (3), and outputting, for each pixel of the original image, the predicted probability that it belongs to a cell nucleus, thereby obtaining the cell nucleus segmentation result.
Compared with the prior art, the method has the following beneficial effects. In the disclosed high-resolution microscopic endoscope image nucleus segmentation method, a layered multi-scale attention mechanism high-resolution convolutional neural network model is constructed. The three coding networks take images of different resolutions as layered inputs, and the high-resolution network within the coding networks preserves nucleus edge feature information, improving edge segmentation precision. A spatial and channel attention mechanism gives the decoding network attention over both channels and space; feature maps from different layers are connected to combine multi-scale information, which the decoding network uses to recover image detail and spatial information and to classify the image at pixel level. This improves overall nucleus segmentation precision, better meets practical application requirements, and improves the accuracy of endoscope-assisted diagnosis.
Drawings
FIG. 1 is a flow chart of a method for nuclear segmentation of a microendoscope image according to the present invention;
FIG. 2 shows a nucleus image captured by the micro-endoscope together with its pixel-level nucleus mask; fig. 2(a) is the captured nucleus image and fig. 2(b) is the mask image;
FIG. 3 is a high-resolution convolutional neural network model of a hierarchical multi-scale attention mechanism built in the method for segmenting the cell nucleus provided by the invention.
FIG. 4 is a high resolution network structure diagram included in the coding network in the convolutional neural network model constructed in the present invention;
fig. 5 is a diagram of an attention mechanism provided by the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here merely illustrate the invention and do not limit it. The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities.
The invention provides a deep-learning-based microscopic endoscope image nucleus segmentation method that helps users and their doctors quickly complete quantitative analysis of endoscope images (e.g. the size, shape, density, count, and polymorphism of cells or cell nuclei). Fig. 1 is a flowchart of the method, which comprises the following steps:
(1) First, a high-resolution endoscope captures high-resolution images of cell nuclei, forming a set of original endoscope nucleus images. The prior knowledge of several doctors or experts is combined to label the nuclei at pixel level, ensuring the accuracy of the nucleus mask images and yielding a set of mask images; for example, fig. 2(a) shows a nucleus image captured by the micro-endoscope and fig. 2(b) the mask after pixel-level labeling. The mask image set and the original image set are zero-centered so that their mean is zero, then normalized, giving the image data set, which is divided into a training set and a validation set.
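A minimal sketch of this preprocessing step follows. The patent does not spell out the normalization formula, so per-image standardization to zero mean and unit variance is assumed here:

```python
import numpy as np

def preprocess(image):
    """Zero-center an image so its mean is zero, then normalize it.

    Per-image standardization to unit variance is an assumption; the patent
    only states zero-centering followed by a regularization/normalization step.
    """
    image = np.asarray(image, dtype=np.float64)
    centered = image - image.mean()                 # zero-centering: mean becomes zero
    std = centered.std()
    return centered / std if std > 0 else centered  # normalization step

img = np.array([[0.0, 50.0], [100.0, 150.0]])       # toy 2x2 intensity image
out = preprocess(img)
```

The same transform would be applied to every image before the data set is split into training and validation sets.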
(2) Constructing a layered multi-scale attention mechanism high-resolution convolutional neural network model: the layered multi-scale attention mechanism high-resolution convolutional neural network model is formed by respectively connecting a first coding network, a second coding network and a third coding network with a decoding network; the structure of the first coding network is a high-resolution network, the second coding network and the third coding network are composed of a plurality of convolutional layers and pooling layers, and the decoding network comprises three feature mapping layers with different scales, convolutional layers and softmax layers.
(3) Training the layered multi-scale attention mechanism high-resolution convolutional neural network model, shown in fig. 3, comprises the following sub-steps:
(3.1) The training set from step (1) is input into the high-resolution convolutional neural network model constructed in step (2) after data enhancement: without changing its color or shape, each original image is rotated by 90, 180, and 270 degrees, and every image is then flipped horizontally. This enlarges the data set, improves network performance, weakens overfitting, strengthens generalization, and alleviates the lack of training data. Each expanded training image is then scaled by factors of 2, 1, and 0.5 to obtain a first, second, and third training image. The first training image is input into the first coding network (fig. 4 shows the structure of the high-resolution network HRNet in the first coding network); the second and third training images are input into the second and third coding networks, each a cascade of convolutional and pooling layers. The coding networks extract features from the three training images at different magnifications, producing representations at different resolutions that together form a multi-resolution representation. These multi-resolution representations continuously exchange information through an attention mechanism, which improves the expressiveness of both the high-resolution and low-resolution representations so that they reinforce one another. The output representation at each resolution is fused with the representations from all three resolution inputs to make full use of the information and its interactions.
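The rotation-and-flip expansion described above can be sketched as follows; the eight-image expansion (the original plus three rotations, each also flipped horizontally) is a minimal illustration of the stated augmentation:

```python
import numpy as np

def augment(image):
    """Expand one training image into 8: rotations by 0/90/180/270 degrees,
    each also flipped horizontally, without altering pixel values."""
    out = []
    for k in range(4):                    # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(image, k)
        out.append(rotated)
        out.append(np.fliplr(rotated))    # horizontal flip of each rotation
    return out

img = np.arange(16).reshape(4, 4)         # toy single-channel image
augmented = augment(img)                  # 8 images per original
```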
The attention mechanism is configured as shown in fig. 5. First, the input x_l and the gating signal g each pass through a 1×1 convolution; their outputs are added point-wise and activated by a ReLU function, then passed through another 1×1 convolution and activated by a Sigmoid function; finally the activation values are resampled to obtain the weight α, which is multiplied with the input x_l to give the weighted feature values. Through this attention module the decoding network multiplies high-level and low-level features to obtain highlighted feature maps from the coding networks at three different levels; it then uses the three feature mapping layers of different scales for segmentation to recover image detail and spatial information, fuses the multi-resolution representations, and finally obtains the segmentation result of the image through an ordinary convolutional layer and a softmax layer.
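The attention gate just described can be sketched in NumPy. This is an illustrative assumption, not the patent's implementation: the 1×1 convolutions are written as channel-wise linear maps via einsum, g is taken to have the same spatial size as x_l so the resampling step is omitted, and the weight shapes are invented for the example:

```python
import numpy as np

def attention_gate(x_l, g, W_x, W_g, psi):
    """Attention gate following the description: 1x1 convolutions on x_l and
    the gating signal g, point-wise addition, ReLU, another 1x1 convolution,
    Sigmoid, then weighting x_l by the resulting map alpha.

    Tensors are (channels, H, W); a 1x1 convolution is just a linear map over
    the channel axis, expressed here with einsum.
    """
    a = np.einsum('oc,chw->ohw', W_x, x_l)   # 1x1 conv on input features x_l
    b = np.einsum('oc,chw->ohw', W_g, g)     # 1x1 conv on gating signal g
    f = np.maximum(a + b, 0.0)               # point-wise addition + ReLU
    s = np.einsum('oc,chw->ohw', psi, f)     # 1x1 conv down to one channel
    alpha = 1.0 / (1.0 + np.exp(-s))         # Sigmoid -> weights in (0, 1)
    return x_l * alpha                       # weighted feature values

rng = np.random.default_rng(0)
x_l = rng.normal(size=(4, 8, 8))             # made-up feature map
g = rng.normal(size=(4, 8, 8))               # made-up gating features
W_x, W_g = rng.normal(size=(2, 4)), rng.normal(size=(2, 4))
psi = rng.normal(size=(1, 2))
weighted = attention_gate(x_l, g, W_x, W_g, psi)
```

Because alpha lies in (0, 1), the gate can only attenuate features, never amplify them, which is how it highlights salient regions relative to the rest of the map.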
(3.2) Sub-step (3.1) is repeated for iterative training; after each iteration the nucleus segmentation model is output together with its loss and segmentation accuracy. An initial learning rate is set, and when the monitored metric stops improving, the learning rate is reduced as the network parameters are tuned. The validation loss uses the Dice loss as the loss function:

L_dice = 1 - 2|X ∩ Y| / (|X| + |Y|)

wherein X represents the ground-truth cell nucleus mask and Y represents the segmentation result output by the layered multi-scale attention mechanism high-resolution convolutional neural network model under training;
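A minimal sketch of the Dice loss for the mask X and prediction Y defined above; the small epsilon guarding against empty masks is an implementation assumption:

```python
import numpy as np

def dice_loss(x, y, eps=1e-7):
    """Dice loss between ground-truth mask X and prediction Y (values in
    [0, 1]): 1 - 2|X intersect Y| / (|X| + |Y|). eps avoids division by
    zero when both masks are empty (an assumption, not from the patent)."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    intersection = (x * y).sum()
    return 1.0 - 2.0 * intersection / (x.sum() + y.sum() + eps)

mask = np.array([[1, 1], [0, 0]])
perfect = dice_loss(mask, mask)      # near 0 for a perfect prediction
worst = dice_loss(mask, 1 - mask)    # 1 when the prediction misses every pixel
```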
After each training iteration, the verification set is used to judge whether training is finished; training of the high-resolution convolutional neural network model finishes when the loss function converges. The resulting model is evaluated on a test set, using an evaluation index to measure how closely the segmentation result matches the ground-truth mask. An F1 evaluation threshold F1_threshold is set; if the model's F1 coefficient exceeds F1_threshold, the model performs well. The evaluation index is the F1 coefficient:

F1 = 2PR / (P + R)

where P is precision and R is recall.
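The F1 computation from precision P and recall R can be sketched as follows; the pixel counts `tp`, `fp`, `fn` are hypothetical values for the example:

```python
def f1_score(tp, fp, fn):
    """F1 coefficient from pixel counts: the harmonic mean of precision P
    and recall R, i.e. F1 = 2PR / (P + R)."""
    p = tp / (tp + fp) if tp + fp else 0.0   # precision
    r = tp / (tp + fn) if tp + fn else 0.0   # recall
    return 2 * p * r / (p + r) if p + r else 0.0

score = f1_score(tp=80, fp=20, fn=20)        # P = R = 0.8, so F1 = 0.8
```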
(4) An original cell nucleus image is captured with the high-resolution micro-endoscope and input into the high-resolution convolutional neural network model trained in step (3), which outputs, for each pixel of the original image, the predicted probability of belonging to a cell nucleus, giving the cell nucleus segmentation result.
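Converting the per-pixel probabilities into a final binary nucleus mask can be sketched as below; the 0.5 threshold is an assumption, since the patent only states that per-pixel probabilities are produced:

```python
import numpy as np

def to_mask(prob_map, threshold=0.5):
    """Turn the network's per-pixel nucleus probabilities into a binary
    segmentation mask (1 = nucleus, 0 = background)."""
    return (np.asarray(prob_map) >= threshold).astype(np.uint8)

probs = np.array([[0.9, 0.2], [0.6, 0.4]])   # made-up probability map
mask = to_mask(probs)
```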
With the disclosed microscopic endoscope image nucleus segmentation method, nucleus segmentation accuracy reaches 98%, and the Dice coefficient of the segmented nuclei is 0.82.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.
Claims (1)
1. A microscopic endoscope image nucleus segmentation method based on deep learning is characterized by comprising the following steps:
(1) collecting a set of original endoscope cell nucleus images and labelling their cell nuclei at pixel level according to prior knowledge to obtain a set of nucleus mask images; zero-centering the mask image set and the original image set so that their mean is zero, and then normalizing them to obtain an image data set; dividing the image data set into a training set and a verification set;
(2) constructing a layered multi-scale attention mechanism high-resolution convolutional neural network model: the layered multi-scale attention mechanism high-resolution convolutional neural network model is formed by respectively connecting a first coding network, a second coding network and a third coding network with a decoding network; the structure of the first coding network is a high-resolution network, the second coding network and the third coding network are both composed of a plurality of convolutional layers and pooling layers, and the decoding network comprises three feature mapping layers with different scales, convolutional layers and softmax layers;
(3) the method for training the hierarchical multi-scale attention mechanism high-resolution convolutional neural network model comprises the following sub-steps:
(3.1) rotating and horizontally flipping the training set from step (1) to obtain an expanded training set and inputting it into the high-resolution convolutional neural network model constructed in step (2); scaling each training image by factors of 2, 1, and 0.5 to obtain a first, second, and third training image and inputting these into the first, second, and third coding networks respectively to extract feature images; inputting the three feature images into the decoding network, fusing them, and outputting a segmentation result;
(3.2) using the Dice loss as the loss function:

L_dice = 1 - 2|X ∩ Y| / (|X| + |Y|)

wherein X represents the ground-truth cell nucleus mask and Y represents the segmentation result output by the layered multi-scale attention mechanism high-resolution convolutional neural network model under training;
judging whether the loss function is converged by using the verification set, and finishing the training of the high-resolution convolutional neural network model when the loss function is converged;
(4) capturing an original cell nucleus image with the high-resolution micro-endoscope, inputting it into the high-resolution convolutional neural network model trained in step (3), and outputting, for each pixel of the original image, the predicted probability that it belongs to a cell nucleus, thereby obtaining the cell nucleus segmentation result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011305801.0A CN112396621B (en) | 2020-11-19 | 2020-11-19 | High-resolution microscopic endoscope image nucleus segmentation method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112396621A true CN112396621A (en) | 2021-02-23 |
CN112396621B CN112396621B (en) | 2022-08-30 |
Family
ID=74607139
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011305801.0A Active CN112396621B (en) | 2020-11-19 | 2020-11-19 | High-resolution microscopic endoscope image nucleus segmentation method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112396621B (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113192047A (en) * | 2021-05-14 | 2021-07-30 | 杭州迪英加科技有限公司 | Method for automatically interpreting KI67 pathological section based on deep learning |
CN113409321A (en) * | 2021-06-09 | 2021-09-17 | 西安电子科技大学 | Cell nucleus image segmentation method based on pixel classification and distance regression |
CN113813053A (en) * | 2021-09-18 | 2021-12-21 | 长春理工大学 | Operation process analysis method based on laparoscope endoscopic image |
CN113850821A (en) * | 2021-09-17 | 2021-12-28 | 武汉兰丁智能医学股份有限公司 | Attention mechanism and multi-scale fusion leukocyte segmentation method |
CN114387264A (en) * | 2022-01-18 | 2022-04-22 | 桂林电子科技大学 | HE staining pathological image data expansion and enhancement method |
CN115760957A (en) * | 2022-11-16 | 2023-03-07 | 北京工业大学 | Method for analyzing substance in three-dimensional electron microscope cell nucleus |
CN117011550A (en) * | 2023-10-08 | 2023-11-07 | 超创数能科技有限公司 | Impurity identification method and device in electron microscope photo |
CN117576103A (en) * | 2024-01-17 | 2024-02-20 | 浙江大学滨江研究院 | Urinary sediment microscopic examination analysis system integrating electric control microscope and deep learning algorithm |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190287234A1 (en) * | 2016-12-06 | 2019-09-19 | Siemens Energy, Inc. | Weakly supervised anomaly detection and segmentation in images |
WO2019135234A1 (en) * | 2018-01-03 | 2019-07-11 | Ramot At Tel-Aviv University Ltd. | Systems and methods for the segmentation of multi-modal image data |
CN109635711A (en) * | 2018-12-07 | 2019-04-16 | 上海衡道医学病理诊断中心有限公司 | Pathological image segmentation method based on a deep learning network |
CN111179273A (en) * | 2019-12-30 | 2020-05-19 | 山东师范大学 | Method and system for automatically segmenting leucocyte nucleoplasm based on deep learning |
CN111462122A (en) * | 2020-03-26 | 2020-07-28 | 中国科学技术大学 | Automatic cervical cell nucleus segmentation method and system |
Non-Patent Citations (2)
Title |
---|
DENIZ SAYIN MERCADIER et al.: "Automatic Segmentation of Nuclei in Histopathology Images Using Encoding-Decoding Convolutional Neural Networks", ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) * |
XIONG Wei et al.: "Sea-Land Semantic Segmentation Method for Remote Sensing Images Based on Neural Networks", Computer Engineering and Applications * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113192047A (en) * | 2021-05-14 | 2021-07-30 | 杭州迪英加科技有限公司 | Method for automatically interpreting KI67 pathological section based on deep learning |
CN113409321A (en) * | 2021-06-09 | 2021-09-17 | 西安电子科技大学 | Cell nucleus image segmentation method based on pixel classification and distance regression |
CN113409321B (en) * | 2021-06-09 | 2023-10-27 | 西安电子科技大学 | Cell nucleus image segmentation method based on pixel classification and distance regression |
CN113850821A (en) * | 2021-09-17 | 2021-12-28 | 武汉兰丁智能医学股份有限公司 | Leukocyte segmentation method based on attention mechanism and multi-scale fusion |
CN113813053A (en) * | 2021-09-18 | 2021-12-21 | 长春理工大学 | Operation process analysis method based on laparoscope endoscopic image |
CN114387264A (en) * | 2022-01-18 | 2022-04-22 | 桂林电子科技大学 | HE staining pathological image data expansion and enhancement method |
CN115760957A (en) * | 2022-11-16 | 2023-03-07 | 北京工业大学 | Method for analyzing substance in three-dimensional electron microscope cell nucleus |
CN117011550A (en) * | 2023-10-08 | 2023-11-07 | 超创数能科技有限公司 | Impurity identification method and device in electron microscope photo |
CN117011550B (en) * | 2023-10-08 | 2024-01-30 | 超创数能科技有限公司 | Impurity identification method and device in electron microscope photo |
CN117576103A (en) * | 2024-01-17 | 2024-02-20 | 浙江大学滨江研究院 | Urinary sediment microscopic examination analysis system integrating electric control microscope and deep learning algorithm |
CN117576103B (en) * | 2024-01-17 | 2024-04-05 | 浙江大学滨江研究院 | Urinary sediment microscopic examination analysis system integrating electric control microscope and deep learning algorithm |
Also Published As
Publication number | Publication date |
---|---|
CN112396621B (en) | 2022-08-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112396621B (en) | High-resolution microscopic endoscope image nucleus segmentation method based on deep learning | |
Chandran et al. | Diagnosis of cervical cancer based on ensemble deep learning network using colposcopy images | |
CN109670510B (en) | Deep learning-based gastroscope biopsy pathological data screening system | |
CN109272492B (en) | Method and system for processing cytopathology smear | |
Miranda et al. | A survey of medical image classification techniques | |
CN111985536B (en) | Gastroscopic pathology image classification method based on weakly supervised learning |
CN111243042A (en) | Ultrasonic thyroid nodule benign and malignant characteristic visualization method based on deep learning | |
CN112529894B (en) | Thyroid nodule diagnosis method based on deep learning network | |
CN112101451B (en) | Breast cancer tissue pathological type classification method based on generation of antagonism network screening image block | |
CN112381164B (en) | Ultrasound image classification method and device based on multi-branch attention mechanism | |
Pan et al. | Mitosis detection techniques in H&E stained breast cancer pathological images: A comprehensive review | |
CN109948671B (en) | Image classification method, device, storage medium and endoscopic imaging equipment | |
CN111160135A (en) | Urine red blood cell lesion identification and statistics method and system based on improved Faster R-CNN |
CN110189293A (en) | Cell image processing method, device, storage medium and computer equipment | |
CN114782307A (en) | Enhanced CT image colorectal cancer staging auxiliary diagnosis system based on deep learning | |
Yonekura et al. | Improving the generalization of disease stage classification with deep CNN for glioma histopathological images | |
CN116188423A (en) | Super-pixel sparse and unmixed detection method based on pathological section hyperspectral image | |
Chen et al. | Automatic whole slide pathology image diagnosis framework via unit stochastic selection and attention fusion | |
CN114972254A (en) | Cervical cell image segmentation method based on convolutional neural network | |
CN115100474B (en) | Thyroid gland puncture image classification method based on topological feature analysis | |
Cao et al. | An automatic breast cancer grading method in histopathological images based on pixel-, object-, and semantic-level features | |
CN116740435A (en) | Breast cancer ultrasound image classification method based on multi-modal deep learning radiomics |
CN113538344A (en) | Image recognition system, device and medium for distinguishing atrophic gastritis and gastric cancer | |
Alzubaidi et al. | Multi-class breast cancer classification by a novel two-branch deep convolutional neural network architecture | |
CN112634291A (en) | Automatic burn wound area segmentation method based on neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||