CN112396621B - High-resolution microscopic endoscope image nucleus segmentation method based on deep learning

Info

Publication number
CN112396621B
CN112396621B (application CN202011305801.0A / CN202011305801A)
Authority
CN
China
Prior art keywords
image
training
cell nucleus
resolution
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011305801.0A
Other languages
Chinese (zh)
Other versions
CN112396621A (en)
Inventor
王立强
牛春阳
杨青
袁波
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Zhejiang Lab
Original Assignee
Zhejiang University ZJU
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU, Zhejiang Lab filed Critical Zhejiang University ZJU
Priority to CN202011305801.0A priority Critical patent/CN112396621B/en
Publication of CN112396621A publication Critical patent/CN112396621A/en
Application granted granted Critical
Publication of CN112396621B publication Critical patent/CN112396621B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06N 3/047 - Probabilistic or stochastic networks
    • G06N 3/08 - Learning methods
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/136 - Segmentation; Edge detection involving thresholding
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10068 - Endoscopic image
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30024 - Cell structures in vitro; Tissue sections in vitro

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep learning-based method for segmenting cell nuclei in high-resolution microscopic endoscope images, which comprises the following steps: acquiring original endoscope images and labeling their cell nuclei at pixel level to obtain nucleus mask images, then dividing the labeled mask images and endoscope images into a training set and a validation set; constructing a hierarchical multi-scale attention mechanism high-resolution convolutional neural network model; after data augmentation, feeding the training set into the convolutional neural network for iterative training and using the validation set to judge whether iterative training is finished; once training is judged finished, inputting an original endoscope image into the trained convolutional neural network, which outputs the predicted probability that each pixel in the endoscope image belongs to a cell nucleus, giving the nucleus segmentation result and an accurate segmentation of the input image.

Description

High-resolution microscopic endoscope image nucleus segmentation method based on deep learning
Technical Field
The invention relates to the technical field of image processing, in particular to a microscopic endoscope image cell nucleus segmentation method based on deep learning.
Background
An endoscope is an optical device that can capture image information from inside an object. A medical endoscope can enter the human body through a natural orifice or a small surgical incision and helps the doctor see internal structures that X-rays cannot show; it is essential medical equipment for exploring and diagnosing lesions in vivo and for minimally invasive surgery, and is widely used throughout clinical medicine. A conventional endoscope observes tissue in the body at a macroscopic scale and identifies suspicious regions; when necessary, the suspicious tissue is clipped and taken outside the body for histopathological diagnosis. This is an invasive process, often accompanied by bleeding, infection and missed early diagnoses, and it carries a certain risk. A high-resolution microscopic endoscope brings microscopic observation of living tissue to a magnification and resolution comparable to in-vitro microscopic imaging of histological samples and can deliver real-time, high-resolution histopathological diagnosis of internal organs without a sampling biopsy. It is therefore an important instrument for non-invasive diagnosis of early lesions, and is particularly significant for early lesions that a conventional endoscope has difficulty detecting.
A high-resolution endoscope can capture high-resolution endoscopic images, and doctors can quantitatively analyze them (for example the size, shape, density, number and polymorphism of cells or cell nuclei) based on their prior knowledge, providing reliable support for medical diagnosis and for setting a corresponding treatment plan. However, judging and analyzing endoscope images takes doctors a great deal of time, and the process is often subjective and prone to misjudgment. Compared with this time-consuming, poorly reproducible and highly subjective manual process, automatic nucleus segmentation based on image processing can obtain objective quantitative data quickly, accurately and reproducibly, improving the efficiency of endoscope image analysis. With accuracy guaranteed, the reproducibility, timeliness and objectivity of observation are significantly improved, and basic researchers and clinicians are spared tedious, repetitive daily work. Traditional nucleus segmentation methods mainly include distance transformation, morphological operations, region feature extraction and Hough transformation, and are usually suitable only for nucleus segmentation tasks with a simple image background and sparsely distributed cells. Deep learning-based nucleus segmentation models use a convolutional neural network to classify every pixel of the image; this approach can classify most pixels accurately but depends on deeper network models with more parameters. Such complex network models can effectively extract context features with strong global consistency, but they lack boundary spatial information and handle nucleus boundaries poorly. When processing microscopic endoscope images they are easily disturbed by factors such as complex image backgrounds, tightly packed cells, blurred cell boundaries, varying nucleus size and shape, and uneven staining depth, so the robustness of nucleus segmentation is poor and the accuracy is low.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a deep learning-based method for segmenting cell nuclei in microscopic endoscope images. By fusing features from different levels through a high-resolution network and a hierarchical multi-scale attention mechanism, the method achieves accurate segmentation of nucleus boundaries, overcomes the low accuracy of prior-art nucleus segmentation in microscopic endoscope images, and reduces the cost of manual processing.
In order to achieve the purpose of the invention, the following technical scheme is adopted. A deep learning-based microscopic endoscope image nucleus segmentation method comprises the following steps:
(1) collecting an original endoscope cell nucleus image set, labeling the cell nuclei of the original image set at pixel level according to prior knowledge to obtain a set of nucleus mask images, zero-centering the mask image set and the original endoscope nucleus image set so that their mean value is zero, and then normalizing them to obtain an image data set; dividing the image data set into a training set and a validation set;
(2) constructing a hierarchical multi-scale attention mechanism high-resolution convolutional neural network model: the model is formed by connecting a first coding network, a second coding network and a third coding network each to a decoding network; the first coding network is a high-resolution network, the second and third coding networks are each composed of several convolutional layers and pooling layers, and the decoding network comprises three feature mapping layers of different scales, a convolutional layer and a softmax layer;
(3) the method for training the hierarchical multi-scale attention mechanism high-resolution convolutional neural network model comprises the following sub-steps:
(3.1) rotating and horizontally flipping the training set from step (1) to obtain an expanded training set, inputting the expanded training set into the high-resolution convolutional neural network model constructed in step (2), scaling each training image by factors of 2, 1 and 0.5 to obtain a first, second and third training image, feeding these into the first, second and third coding networks respectively, extracting feature maps, inputting the three feature maps into the decoding network for fusion, and outputting a segmentation result;
(3.2) using dice loss as a loss function:
Dice loss = 1 - 2|X ∩ Y| / (|X| + |Y|)
wherein X denotes the ground-truth cell nucleus mask and Y denotes the segmentation result output by the hierarchical multi-scale attention mechanism high-resolution convolutional neural network model during training;
judging whether the loss function has converged by using the validation set, and finishing training of the high-resolution convolutional neural network model when the loss function converges;
(4) capturing an original cell nucleus image with the high-resolution microscopic endoscope, inputting it into the high-resolution convolutional neural network model trained in step (3), and outputting the predicted probability that each pixel in the original image belongs to a cell nucleus, thereby obtaining the nucleus segmentation result.
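To make the training flow of steps (2) and (3) concrete, the following is a minimal PyTorch sketch of one possible training loop; the optimizer, learning-rate schedule, convergence test and all function and parameter names are assumptions of this sketch, not details specified by the patent.

```python
import torch

def train(model, dice_loss, train_loader, val_loader, epochs=100, lr=1e-3, patience=10):
    """Iteratively train the segmentation network; stop when the validation loss converges."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=3)
    best, stale = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        for images, masks in train_loader:                 # augmented training batches
            opt.zero_grad()
            loss = dice_loss(model(images)[:, 1], masks)   # channel 1 = nucleus probability
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():                              # validation loss decides convergence
            val = sum(dice_loss(model(x)[:, 1], y).item() for x, y in val_loader) / len(val_loader)
        sched.step(val)                                    # reduce the learning rate when the metric stalls
        best, stale = (val, 0) if val < best - 1e-4 else (best, stale + 1)
        if stale >= patience:                              # treat a long plateau as convergence
            break
    return model
```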
Compared with the prior art, the method has the following beneficial effects. In the high-resolution microscopic endoscope image nucleus segmentation method disclosed by the invention, a hierarchical multi-scale attention mechanism high-resolution convolutional neural network model is constructed: the three coding networks provide hierarchical inputs at different resolutions, and the high-resolution network inside the coding stage preserves nucleus edge feature information, improving the precision of nucleus edge segmentation. A spatial and channel attention mechanism gives the decoding network attention over both channels and space; feature maps from different levels are connected to combine multi-scale information and supplied to the decoding network to recover image detail and spatial information, and the image is classified at pixel level. This improves the overall nucleus segmentation precision, helps meet practical application requirements to a certain extent, and improves the accuracy of endoscope-assisted diagnosis.
Drawings
FIG. 1 is a flow chart of a method for nuclear segmentation of a microendoscope image according to the present invention;
FIG. 2 shows a cell nucleus image captured by the microscopic endoscope and the corresponding pixel-level nucleus mask image; fig. 2(a) is the captured nucleus image, and fig. 2(b) is the mask image;
FIG. 3 is a high-resolution convolutional neural network model of a hierarchical multi-scale attention mechanism built in the method for segmenting the cell nucleus provided by the invention.
FIG. 4 is a high resolution network structure diagram included in the coding network in the convolutional neural network model constructed in the present invention;
fig. 5 is a diagram of an attention mechanism provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it. The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities.
The invention provides a deep learning-based microscopic endoscope image cell nucleus segmentation method that helps users and their doctors quickly complete quantitative analysis of endoscope images (for example the size, shape, density, number and polymorphism of cells or cell nuclei). Fig. 1 is a flowchart of the microscopic endoscope image nucleus segmentation method, which specifically includes the following steps:
(1) First, a high-resolution endoscope is used to capture high-resolution endoscope images of cell nuclei, and an original endoscope cell nucleus image set is collected. Pixel-level nucleus labeling of the original image set combines the prior knowledge of different doctors or experts to ensure the accuracy of the nucleus masks, yielding a set of nucleus mask images; for example, fig. 2(a) shows a nucleus image captured by the microscopic endoscope and fig. 2(b) shows the mask image after pixel-level nucleus labeling. The mask image set and the original endoscope nucleus image set are zero-centered so that their mean value is zero and then normalized to obtain an image data set, which is divided into a training set and a validation set.
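As a concrete illustration of step (1), here is a minimal NumPy sketch of zero-centering, normalization and the training/validation split; the array shapes, the split ratio and the function name are assumptions of the sketch.

```python
import numpy as np

def preprocess_and_split(images, masks, train_ratio=0.8, seed=0):
    """Zero-center and normalize an endoscope image set, then split it.

    images: float array of shape (N, H, W, C), raw endoscope nucleus images.
    masks:  uint8 array of shape (N, H, W), pixel-level nucleus labels (0/1).
    """
    images = images.astype(np.float32)
    images -= images.mean()                      # zero-center: mean becomes 0
    images /= (images.std() + 1e-8)              # scale to unit variance (normalization)

    rng = np.random.default_rng(seed)
    order = rng.permutation(len(images))         # shuffle before splitting
    n_train = int(train_ratio * len(images))
    train_idx, val_idx = order[:n_train], order[n_train:]
    return (images[train_idx], masks[train_idx]), (images[val_idx], masks[val_idx])
```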
(2) A hierarchical multi-scale attention mechanism high-resolution convolutional neural network model is constructed. The model is formed by connecting a first coding network, a second coding network and a third coding network each to a decoding network; the first coding network is a high-resolution network, the second and third coding networks are each composed of several convolutional layers and pooling layers, and the decoding network comprises three feature mapping layers of different scales, a convolutional layer and a softmax layer.
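A heavily simplified PyTorch sketch of this three-encoder, one-decoder layout follows. In the patent the first coding network is an HRNet and the fusion uses the attention mechanism of step (3.1); here the encoders are reduced to small convolutional stacks and fusion to upsampling plus concatenation, so all class names, channel counts and layer choices are illustrative assumptions only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class TinyEncoder(nn.Module):
    """Stand-in for one coding network: convolutional layers followed by pooling."""
    def __init__(self, c_in=3, c_out=64):
        super().__init__()
        self.body = nn.Sequential(conv_block(c_in, 32), nn.MaxPool2d(2),
                                  conv_block(32, c_out))
    def forward(self, x):
        return self.body(x)

class HierarchicalSegNet(nn.Module):
    """Three encoders for 2x, 1x and 0.5x inputs, fused by a simple decoder."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc_2x, self.enc_1x, self.enc_05x = TinyEncoder(), TinyEncoder(), TinyEncoder()
        self.fuse = conv_block(3 * 64, 64)
        self.head = nn.Conv2d(64, n_classes, 1)       # followed by softmax over classes

    def forward(self, x):                              # x: (B, 3, H, W) at native resolution
        h, w = x.shape[-2:]
        x2  = F.interpolate(x, scale_factor=2.0, mode='bilinear', align_corners=False)
        x05 = F.interpolate(x, scale_factor=0.5, mode='bilinear', align_corners=False)
        feats = [self.enc_2x(x2), self.enc_1x(x), self.enc_05x(x05)]
        feats = [F.interpolate(f, size=(h, w), mode='bilinear', align_corners=False)
                 for f in feats]                       # bring all scales back to input size
        fused = self.fuse(torch.cat(feats, dim=1))
        return torch.softmax(self.head(fused), dim=1)  # per-pixel class probabilities
```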
(3) Training of the hierarchical multi-scale attention mechanism high-resolution convolutional neural network model, shown in fig. 3, includes the following sub-steps:
(3.1) The training set from step (1) is input into the high-resolution convolutional neural network model constructed in step (2). The training data are first augmented: without changing the color or shape of the original images, each image is rotated by 90 degrees, 180 degrees and 270 degrees and horizontally flipped, which enlarges the data set, improves network performance, weakens over-fitting and strengthens generalization, compensating for the lack of training data. Each augmented training image is then scaled by factors of 2, 1 and 0.5 to obtain a first, second and third training image. The first training image is fed into the first coding network, whose high-resolution network (HRNet) structure is shown in FIG. 4; the second training image is fed into the second coding network and the third training image into the third coding network, both of which are cascades of convolutional and pooling layers. The coding networks extract features from the three training images at different magnifications to obtain representations at different resolutions, forming a multi-resolution representation. Information is continuously exchanged among the multi-resolution representations through an attention mechanism, improving the expressive power of both the high-resolution and low-resolution representations so that they reinforce each other; the output representation at each resolution fuses the representations from all three resolution inputs, ensuring full use and interaction of the information. The attention mechanism is configured as shown in FIG. 5: the input x_l and the gating signal g are each passed through a 1 x 1 convolution, their outputs are added point-wise and activated by a ReLU function, the result is passed through another 1 x 1 convolution and activated by a Sigmoid function, and the activation value is resampled to obtain the weight α, which is multiplied with the input x_l to give the weighted feature map. Through this attention module the decoding network multiplies high-level and low-level features to obtain highlighted feature maps from the three coding networks at different levels; the decoding network then uses three feature mapping layers of different scales for segmentation to recover image detail and spatial information, fuses the multi-resolution representations, and finally obtains the segmentation result through an ordinary convolutional layer and a softmax layer.
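The attention gate just described (1 x 1 convolutions on x_l and the gating signal g, point-wise addition, ReLU, another 1 x 1 convolution, Sigmoid, resampling of the activation to obtain the weight α, and multiplication with x_l) might be sketched in PyTorch as follows, assuming g has half the spatial resolution of x_l; the channel counts and names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Attention gate: weights the feature map x_l using the coarser gating signal g."""
    def __init__(self, c_x, c_g, c_inter):
        super().__init__()
        self.theta_x = nn.Conv2d(c_x, c_inter, kernel_size=1, stride=2)  # 1x1 conv on x_l (downsampled to g's size)
        self.phi_g   = nn.Conv2d(c_g, c_inter, kernel_size=1)            # 1x1 conv on g
        self.psi     = nn.Conv2d(c_inter, 1, kernel_size=1)              # 1x1 conv before Sigmoid

    def forward(self, x_l, g):
        # 1x1 convolutions, point-wise addition, ReLU activation.
        f = F.relu(self.theta_x(x_l) + self.phi_g(g))
        # Another 1x1 convolution followed by Sigmoid gives the raw attention map.
        alpha = torch.sigmoid(self.psi(f))
        # Resample the activation back to x_l's resolution to obtain the weight alpha.
        alpha = F.interpolate(alpha, size=x_l.shape[-2:],
                              mode='bilinear', align_corners=False)
        return x_l * alpha                                               # weighted feature map
```

For instance, with x_l of shape (1, 64, 128, 128) and g of shape (1, 128, 64, 64), AttentionGate(64, 128, 64)(x_l, g) returns a tensor with the same shape as x_l in which the gated regions are emphasized.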
(3.2) Step (3.1) is repeated for iterative training; after each iteration, a nucleus segmentation model is output together with its loss and segmentation accuracy. An initial learning rate is set, and the learning rate is reduced during parameter adjustment whenever the monitored metric stops improving. For the validation loss, dice loss is used as the loss function:
Dice loss = 1 - 2|X ∩ Y| / (|X| + |Y|)
wherein X denotes the ground-truth cell nucleus mask and Y denotes the segmentation result output by the hierarchical multi-scale attention mechanism high-resolution convolutional neural network model during training;
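A minimal sketch of this dice loss for binary nucleus masks; the small smoothing constant and tensor shapes are assumptions of the sketch, added to keep the division stable.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    """Dice loss between predicted nucleus probabilities (Y) and the ground-truth mask (X).

    pred:   (B, H, W) tensor of per-pixel nucleus probabilities.
    target: (B, H, W) tensor of 0/1 ground-truth mask values.
    """
    intersection = (pred * target).sum(dim=(1, 2))            # |X ∩ Y|
    union = pred.sum(dim=(1, 2)) + target.sum(dim=(1, 2))     # |X| + |Y|
    dice = (2 * intersection + eps) / (union + eps)
    return 1 - dice.mean()                                     # loss = 1 - Dice coefficient
```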
and after each iterative training is finished, judging whether the iterative training is finished by using the verification set, and finishing the training of the high-resolution convolutional neural network model when the loss function is converged. Evaluating the obtained model through a test set, measuring the similarity degree of the segmentation result and the real mask by adopting evaluation indexes, and setting an F1 coefficient evaluation threshold value F 1threshold If the model segmentation result F1 coefficient is higher than the threshold value F 1threshold The model performs well. The evaluation index is an F1 coefficient, and the formula is as follows:
Figure BDA0002788287300000051
where P is precision and R is recall.
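The precision, recall and F1 computation can be sketched as follows; the 0.5 binarization threshold is an assumption of the sketch.

```python
import numpy as np

def f1_score(pred_prob, gt_mask, thresh=0.5):
    """F1 = 2PR / (P + R) for a binary nucleus segmentation."""
    pred = (pred_prob >= thresh).astype(np.uint8)
    tp = np.logical_and(pred == 1, gt_mask == 1).sum()
    fp = np.logical_and(pred == 1, gt_mask == 0).sum()
    fn = np.logical_and(pred == 0, gt_mask == 1).sum()
    p = tp / (tp + fp + 1e-8)          # precision
    r = tp / (tp + fn + 1e-8)          # recall
    return 2 * p * r / (p + r + 1e-8)
```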
(4) An original cell nucleus image is captured with the high-resolution microscopic endoscope and input into the high-resolution convolutional neural network model trained in step (3); the network outputs the predicted probability that each pixel in the original image belongs to a cell nucleus, giving the nucleus segmentation result.
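Step (4) amounts to a single forward pass through the trained network. A hedged sketch, assuming the model outputs per-class softmax probabilities with channel 1 as the nucleus class and using an illustrative 0.5 threshold:

```python
import torch

def segment_nuclei(model, image, device="cpu", thresh=0.5):
    """Run the trained network on one endoscope image and return a binary nucleus mask.

    image: float32 tensor of shape (3, H, W), preprocessed like the training data.
    """
    model.eval().to(device)
    with torch.no_grad():
        probs = model(image.unsqueeze(0).to(device))   # (1, n_classes, H, W) probabilities
        nucleus_prob = probs[0, 1]                      # probability that each pixel is nucleus
    return (nucleus_prob >= thresh).cpu().numpy().astype("uint8")
```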
By the microscopic endoscope image nucleus segmentation method, the accuracy rate of nucleus segmentation can reach 98%, and the dice coefficient of the segmented nucleus is 0.82.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (1)

1. A microscopic endoscope image nucleus segmentation method based on deep learning is characterized by comprising the following steps:
(1) collecting an original endoscope cell nucleus image set, labeling the cell nuclei of the original image set at pixel level according to prior knowledge to obtain a set of nucleus mask images, zero-centering the mask image set and the original endoscope nucleus image set so that their mean value is zero, and then normalizing them to obtain an image data set; dividing the image data set into a training set and a validation set;
(2) constructing a hierarchical multi-scale attention mechanism high-resolution convolutional neural network model: the model is formed by connecting a first coding network, a second coding network and a third coding network each to a decoding network; the first coding network is a high-resolution network, the second and third coding networks are each composed of several convolutional layers and pooling layers, and the decoding network comprises three feature mapping layers of different scales, a convolutional layer and a softmax layer;
(3) the method for training the hierarchical multi-scale attention mechanism high-resolution convolutional neural network model comprises the following sub-steps:
(3.1) rotating and horizontally flipping the training set from step (1) to obtain an expanded training set, inputting the expanded training set into the high-resolution convolutional neural network model constructed in step (2), scaling each training image by factors of 2, 1 and 0.5 to obtain a first, second and third training image, feeding these into the first, second and third coding networks respectively, extracting feature maps, inputting the three feature maps into the decoding network for fusion, and outputting a segmentation result;
(3.2) using dice loss as a loss function:
Dice loss = 1 - 2|X ∩ Y| / (|X| + |Y|)
wherein X denotes the ground-truth cell nucleus mask and Y denotes the segmentation result output by the hierarchical multi-scale attention mechanism high-resolution convolutional neural network model during training;
judging whether the loss function has converged by using the validation set, and finishing training of the high-resolution convolutional neural network model when the loss function converges;
(4) capturing an original cell nucleus image with the high-resolution microscopic endoscope, inputting it into the high-resolution convolutional neural network model trained in step (3), and outputting the predicted probability that each pixel in the original image belongs to a cell nucleus, thereby obtaining the nucleus segmentation result.
CN202011305801.0A 2020-11-19 2020-11-19 High-resolution microscopic endoscope image nucleus segmentation method based on deep learning Active CN112396621B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011305801.0A CN112396621B (en) 2020-11-19 2020-11-19 High-resolution microscopic endoscope image nucleus segmentation method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011305801.0A CN112396621B (en) 2020-11-19 2020-11-19 High-resolution microscopic endoscope image nucleus segmentation method based on deep learning

Publications (2)

Publication Number Publication Date
CN112396621A CN112396621A (en) 2021-02-23
CN112396621B true CN112396621B (en) 2022-08-30

Family

ID=74607139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011305801.0A Active CN112396621B (en) 2020-11-19 2020-11-19 High-resolution microscopic endoscope image nucleus segmentation method based on deep learning

Country Status (1)

Country Link
CN (1) CN112396621B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113192047A (en) * 2021-05-14 2021-07-30 杭州迪英加科技有限公司 Method for automatically interpreting KI67 pathological section based on deep learning
CN113409321B (en) * 2021-06-09 2023-10-27 西安电子科技大学 Cell nucleus image segmentation method based on pixel classification and distance regression
CN113850821A (en) * 2021-09-17 2021-12-28 武汉兰丁智能医学股份有限公司 Attention mechanism and multi-scale fusion leukocyte segmentation method
CN113813053A (en) * 2021-09-18 2021-12-21 长春理工大学 Operation process analysis method based on laparoscope endoscopic image
CN114387264B (en) * 2022-01-18 2023-04-18 桂林电子科技大学 HE staining pathological image data expansion and enhancement method
CN115760957B (en) * 2022-11-16 2023-05-12 北京工业大学 Method for analyzing substances in cell nucleus by three-dimensional electron microscope
CN117011550B (en) * 2023-10-08 2024-01-30 超创数能科技有限公司 Impurity identification method and device in electron microscope photo
CN117576103B (en) * 2024-01-17 2024-04-05 浙江大学滨江研究院 Urinary sediment microscopic examination analysis system integrating electric control microscope and deep learning algorithm

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635711A (en) * 2018-12-07 2019-04-16 上海衡道医学病理诊断中心有限公司 A kind of pathological image dividing method based on deep learning network
WO2019135234A1 (en) * 2018-01-03 2019-07-11 Ramot At Tel-Aviv University Ltd. Systems and methods for the segmentation of multi-modal image data
CN111179273A (en) * 2019-12-30 2020-05-19 山东师范大学 Method and system for automatically segmenting leucocyte nucleoplasm based on deep learning
CN111462122A (en) * 2020-03-26 2020-07-28 中国科学技术大学 Automatic cervical cell nucleus segmentation method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3655923B1 (en) * 2016-12-06 2022-06-08 Siemens Energy, Inc. Weakly supervised anomaly detection and segmentation in images

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019135234A1 (en) * 2018-01-03 2019-07-11 Ramot At Tel-Aviv University Ltd. Systems and methods for the segmentation of multi-modal image data
CN109635711A (en) * 2018-12-07 2019-04-16 上海衡道医学病理诊断中心有限公司 A kind of pathological image dividing method based on deep learning network
CN111179273A (en) * 2019-12-30 2020-05-19 山东师范大学 Method and system for automatically segmenting leucocyte nucleoplasm based on deep learning
CN111462122A (en) * 2020-03-26 2020-07-28 中国科学技术大学 Automatic cervical cell nucleus segmentation method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Automatic Segmentation of Nuclei in Histopathology Images Using Encoding-decoding Convolutional Neural Networks;Deniz Sayin Mercadier等;《 ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)》;20190417;1020-1024 *
Sea-land semantic segmentation method for remote sensing images based on neural networks; Xiong Wei et al.; Computer Engineering and Applications; 2019-09-15; Vol. 56, No. 15; 221-227 *

Also Published As

Publication number Publication date
CN112396621A (en) 2021-02-23

Similar Documents

Publication Publication Date Title
CN112396621B (en) High-resolution microscopic endoscope image nucleus segmentation method based on deep learning
Chandran et al. Diagnosis of cervical cancer based on ensemble deep learning network using colposcopy images
CN109670510B (en) Deep learning-based gastroscope biopsy pathological data screening system
Miranda et al. A survey of medical image classification techniques
CN112529894B (en) Thyroid nodule diagnosis method based on deep learning network
CN111985536B (en) Based on weak supervised learning gastroscopic pathology image Classification method
CN111243042A (en) Ultrasonic thyroid nodule benign and malignant characteristic visualization method based on deep learning
CN112101451B (en) Breast cancer tissue pathological type classification method based on generation of antagonism network screening image block
Pan et al. Mitosis detection techniques in H&E stained breast cancer pathological images: A comprehensive review
CN112381164B (en) Ultrasound image classification method and device based on multi-branch attention mechanism
CN111160135A (en) Urine red blood cell lesion identification and statistical method and system based on improved Faster R-cnn
CN110189293A (en) Cell image processing method, device, storage medium and computer equipment
Yonekura et al. Improving the generalization of disease stage classification with deep CNN for glioma histopathological images
CN114782307A (en) Enhanced CT image colorectal cancer staging auxiliary diagnosis system based on deep learning
CN114972254A (en) Cervical cell image segmentation method based on convolutional neural network
CN115100474B (en) Thyroid gland puncture image classification method based on topological feature analysis
Chen et al. Automatic whole slide pathology image diagnosis framework via unit stochastic selection and attention fusion
CN116188423A (en) Super-pixel sparse and unmixed detection method based on pathological section hyperspectral image
Cao et al. An automatic breast cancer grading method in histopathological images based on pixel-, object-, and semantic-level features
CN113538344A (en) Image recognition system, device and medium for distinguishing atrophic gastritis and gastric cancer
CN112634291A (en) Automatic burn wound area segmentation method based on neural network
Alzubaidi et al. Multi-class breast cancer classification by a novel two-branch deep convolutional neural network architecture
Do et al. Supporting thyroid cancer diagnosis based on cell classification over microscopic images
CN117036288A (en) Tumor subtype diagnosis method for full-slice pathological image
Sun et al. Liver tumor segmentation and subsequent risk prediction based on Deeplabv3+

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant