CN112767266A - Deep learning-oriented microscopic endoscope image data enhancement method - Google Patents

Deep learning-oriented microscopic endoscope image data enhancement method

Info

Publication number
CN112767266A
CN112767266A
Authority
CN
China
Prior art keywords
image
cell nucleus
endoscope
image data
discriminator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110030166.8A
Other languages
Chinese (zh)
Other versions
CN112767266B (en)
Inventor
牛春阳
杨青
王立强
袁波
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Zhejiang Lab
Original Assignee
Zhejiang University ZJU
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU, Zhejiang Lab filed Critical Zhejiang University ZJU
Priority to CN202110030166.8A
Publication of CN112767266A
Application granted
Publication of CN112767266B
Active legal status
Anticipated expiration

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10068 Endoscopic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a deep learning-oriented microscopic endoscope image data enhancement method, which comprises the following steps: (1) annotating the acquired endoscopic cell nucleus images to obtain cell nucleus mask images; (2) constructing and training a generative adversarial network model; (3) generating a simulated cell nucleus mask image data set; (4) inputting the generated simulated cell nucleus mask image data set into the trained generative adversarial network model to generate a synthetic data set; (5) performing stain separation on the generated synthetic data set, randomly adjusting the staining ratio, and performing stain fusion to obtain a data-enhanced sample set. The invention can generate microscopic endoscope images of sufficient quality and diversity, and can alleviate the problems of unbalanced data sets and insufficient data volume in deep learning for microscopic endoscopy, so that a model trained on the enhanced data offers better predictive capability to assist physicians in diagnosis, thereby improving diagnostic accuracy and working efficiency.

Description

Deep learning-oriented microscopic endoscope image data enhancement method
Technical Field
The invention relates to the technical field of image processing, in particular to a deep learning-oriented microscopic endoscope image data enhancement method.
Background
A high-resolution endoscope can capture high-resolution endoscopic images, which physicians analyze quantitatively on the basis of prior knowledge (for example, the size, shape, density, number and polymorphism of cells or cell nuclei) to provide reliable support for medical diagnosis and to formulate a corresponding treatment plan. However, judging and analyzing endoscopic images costs physicians considerable time, and the process is subjective and prone to misjudgment. Breakthroughs in deep learning offer a good opportunity for assisting physicians in endoscopic image analysis: compared with manual processing, which is time-consuming, poorly reproducible and highly subjective, deep-learning-based computer-aided diagnosis can obtain objective quantitative data quickly, accurately and reproducibly, thereby improving the efficiency of endoscopic image analysis. With accuracy ensured, the reproducibility, timeliness and objectivity of observation are markedly improved, and basic researchers and clinicians are freed from tedious, repetitive routine work. Deep learning, however, rests on an important premise: large-scale data sets are required to support model training, to prevent overfitting and to improve accuracy and robustness. Microscopic endoscope image data are medical image data of high complexity and high heterogeneity; accurate annotation can only be given by experienced physicians, so the annotation cost is high. Moreover, the quality of the stain and the staining ratio generally introduce large individual differences, so that microscopic endoscope images containing the same lesion type end up distributed differently in colour space. It is therefore difficult to obtain sufficiently representative training samples, which poses a major problem for computer-aided diagnosis and is hard to solve in practice. Consequently, enhancing microscopic endoscope image data on the basis of a small number of high-quality image data sets is of great importance. In the prior art, when facing insufficient training samples, image analysis methods based on machine learning, and especially deep learning, expand the training set by simulating additional samples with operations such as rotation, translation, flipping, scaling and noise addition. Such expansion methods can alleviate the shortage of training samples for natural images to some extent, but they were not designed for microscopic endoscope images, and they contribute little to model optimization for such images.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a deep learning-oriented microscopic endoscope image data enhancement method, which generates realistic microscopic endoscope cell nucleus data through a conditional generative adversarial network and then adjusts the staining ratio to expand the data set, thereby solving the problems of insufficient microscopic endoscope image samples and uneven sample distribution, and further improving the accuracy of deep-learning-based microscopic endoscope computer-aided diagnosis.
In order to achieve this purpose, the invention adopts the following technical scheme. A deep learning-oriented microscopic endoscope image data enhancement method specifically comprises the following steps:
step S1: capturing high-resolution endoscopic cell nucleus images with a high-resolution endoscope, performing pixel-level annotation of the cell nuclei in the captured images according to prior knowledge to obtain cell nucleus mask images, and forming a training set from the endoscopic cell nucleus images and the cell nucleus mask images;
step S2: constructing a generative adversarial network model comprising a generator G and a discriminator D, inputting the training set of step S1 into the model for iterative training, the discriminator D judging whether the synthetic image generated by the generator G is a real endoscopic cell nucleus image, and finishing training when the number of iterations of the model reaches a threshold;
step S3: generating a simulated cell nucleus mask image data set, specifically comprising the following sub-steps:
step S3-1: selecting a cell nucleus mask image from the training set and generating a background image I_b of the same size with all pixel values set to zero;
step S3-2: binarizing the cell nucleus mask image to obtain a binary image and counting the number n of connected regions in the binary image;
step S3-3: computing the centre-point coordinates (x_i, y_i) of each connected region from step S3-2 and randomly adding Gaussian perturbation to each (x_i, y_i) to obtain new centre-point coordinates (x_i', y_i') of each connected region;
step S3-4: generating a random natural number r with 1 ≤ r ≤ n, selecting the new centre-point coordinates of the corresponding connected region according to r, randomly rotating the r-th connected region counterclockwise by 90°, 180° or 270°, and pasting it onto the background image I_b;
step S3-5: repeating step S3-4 for every connected region, with a different value of r each time, to obtain a complete simulated cell nucleus mask image;
step S3-6: repeating steps S3-1 to S3-5 for all cell nucleus mask images in the training set to obtain the simulated cell nucleus mask image data set;
step S4: inputting the simulated cell nucleus mask image data set generated in step S3 into the trained generative adversarial network model to generate a synthetic data set;
step S5: performing stain separation on the synthetic data set generated in step S4, randomly adjusting the staining ratio, and then performing stain fusion to obtain a data-enhanced sample set.
Further, the endoscopic images in the training set are preprocessed by de-centering and regularization.
Further, the generator G adopts a U-net structure and is used to generate synthetic images similar to the endoscopic cell nucleus images, and the discriminator D adopts a 6-layer PatchGAN convolutional network and is used to judge whether a generated synthetic image is a real endoscopic cell nucleus image.
Further, if the discriminator D judges that the synthetic image generated by the generator G is not the real endoscopic image corresponding to the cell nucleus mask image, the generator G is optimized according to a loss function L(G, D); the loss function L(G, D) is:
L(G, D) = E[log D(x, y)] + E[log(1 - D(x, G(x)))]
where x is a cell nucleus mask image in the training set, y is the corresponding endoscopic image in the training set, D(·) denotes the probability, output by the discriminator, that its input is a real endoscopic cell nucleus image, D(x, y) is the probability that the discriminator judges the real endoscopic cell nucleus image to be real, D(x, G(x)) is the probability that the discriminator judges the synthetic image produced by the generator to be real, and E denotes the expectation of the discriminator's judgment over the whole training set.
Further, the process of randomly adjusting the staining ratio in step S5 is as follows:
random numbers α_k, β_k and γ_k are generated to obtain the adjusted hematoxylin single-stain component image I'_H^k(u, v), the adjusted eosin single-stain component image I'_E^k(u, v) and the adjusted developer single-stain component image I'_DAB^k(u, v):
I'_H^k(u, v) = α_k · I_H^k(u, v)
I'_E^k(u, v) = β_k · I_E^k(u, v)
I'_DAB^k(u, v) = γ_k · I_DAB^k(u, v)
where k denotes the k-th composite image in the synthetic data set, u denotes the width coordinate and v the height coordinate of the composite image, and I_H^k(u, v), I_E^k(u, v) and I_DAB^k(u, v) denote the hematoxylin, eosin and developer single-stain component images, respectively.
Compared with the prior art, the invention has the following beneficial effects: traditional image enhancement methods from the natural-image field, such as random cropping, random rotation and contrast enhancement, are not suitable for cell nucleus segmentation in microscopic endoscopy. The deep learning-oriented microscopic endoscope image data enhancement method provided by the present disclosure can effectively expand the sample set for cell nucleus segmentation work and save the labour cost of expert and physician data annotation; it provides data-level support for the construction, learning and training of deep-learning-based computer-aided diagnosis models, ultimately assisting physicians in diagnosis, improving their diagnostic accuracy and working efficiency; it can alleviate the problems of unbalanced data sets and insufficient data volume in deep-learning microscopic endoscopy; and it improves the generalization capability of the cell nucleus segmentation model.
Drawings
FIG. 1 is a flow chart of the deep learning-oriented microscopic endoscope image data enhancement method provided by the invention;
FIG. 2 shows an image captured by the microscopic endoscope and the corresponding pixel-level cell nucleus annotation mask;
FIG. 3 shows the conditional generative adversarial network model employed in the invention;
FIG. 4 shows a simulated microscopic endoscope cell nucleus mask image generated in the invention;
FIG. 5 shows a synthetic image, generated by the model of the invention, that resembles a real microscopic endoscope image;
FIG. 6 is a flow chart of the staining ratio adjustment provided by the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the invention is further described in detail below with reference to the accompanying drawings. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
The invention provides a deep learning-oriented microscopic endoscope image data enhancement method, which generates realistic microscopic endoscope cell nucleus images with a generative adversarial network model so as to expand the data set, solve the problems of insufficient microscopic endoscope image samples and uneven sample distribution, and further improve the accuracy of deep-learning-based microscopic endoscope computer-aided diagnosis. FIG. 1 is a flow chart of the method, which specifically comprises the following steps:
Step S1: high-resolution endoscopic cell nucleus images are captured with a high-resolution endoscope, and pixel-level annotation of the cell nuclei is performed on the captured images according to prior knowledge to obtain cell nucleus mask images; in the annotation process, the prior knowledge of different doctors or experts is fused to ensure the accuracy of the cell nucleus mask images. As shown in FIG. 2, the left side of FIG. 2 is a microscopic endoscope cell nucleus image and the right side is the mask image after pixel-level annotation of the cell nuclei. The endoscopic cell nucleus images and the cell nucleus mask images form the training set; the endoscopic images in the training set are then de-centered so that each image has zero mean, and regularization is applied.
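This preprocessing can be sketched as follows; the per-image zero-mean, unit-variance standardization is an assumption for illustration, since the patent specifies de-centering to zero mean but does not fix the exact regularization:

```python
import numpy as np

def preprocess(img: np.ndarray) -> np.ndarray:
    """De-center and regularize an endoscopic image (H, W, 3) before GAN training."""
    x = img.astype(np.float32)
    x -= x.mean()            # de-centering: the image mean becomes zero
    x /= (x.std() + 1e-8)    # regularization (assumed: unit-variance scaling)
    return x
```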
Step S2: a generative adversarial network model is constructed. As shown in FIG. 3, because conditional constraints carrying cell nucleus position and shape information are added to the model, it can generate microscopic endoscope images with a specified cell nucleus distribution. The adversarial network model comprises a generator G and a discriminator D: the generator G is used to generate synthetic images similar to the endoscopic cell nucleus images, and in the training stage of the generative adversarial network the discriminator D is used to judge whether a generated synthetic image is a real endoscopic cell nucleus image. The generator G adopts a U-net structure; the discriminator D adopts a 6-layer PatchGAN convolutional network, and the generated synthetic image is divided into several fixed-size patches that are input to the discriminator D for judgment, which reduces the discriminator's input size, keeps the computation small and speeds up training.
The training set from step S1 is input into the generative adversarial network model for iterative training: the discriminator D judges whether the synthetic image generated by the generator G is a real endoscopic cell nucleus image, and training of the model finishes when the number of iterations reaches a threshold. If the discriminator D judges that the synthetic image generated by the generator G is not the real microscopic endoscope image corresponding to the cell nucleus mask image, the generator G is optimized according to the loss function L(G, D):
L(G, D) = E[log D(x, y)] + E[log(1 - D(x, G(x)))]
where x is a cell nucleus mask image in the training set, y is the corresponding endoscopic image in the training set, D(·) denotes the probability, output by the discriminator, that its input is a real microscopic endoscope cell nucleus image, D(x, y) is the probability that the discriminator judges the real endoscopic cell nucleus image to be real, D(x, G(x)) is the probability that the discriminator judges the synthetic image produced by the generator to be real, and E denotes the expectation of the discriminator's judgment over the whole training set.
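To make the training objective concrete, the following is a minimal PyTorch sketch of one optimization step under L(G, D), in its standard non-saturating form. The U-net generator G, the PatchGAN discriminator D, the D(mask, image) call signature and the optimizers are assumed interfaces for illustration, not the patent's reference implementation:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # applied to the PatchGAN's per-patch logit map

def train_step(G, D, mask, real_img, opt_G, opt_D):
    """One adversarial step on a (cell nucleus mask x, endoscopic image y) pair."""
    fake_img = G(mask)

    # Discriminator: push D(x, y) -> 1 (real) and D(x, G(x)) -> 0 (synthetic).
    d_real = D(mask, real_img)
    d_fake = D(mask, fake_img.detach())
    loss_D = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator: maximize log D(x, G(x)), i.e. minimize -log D(x, G(x)).
    d_fake = D(mask, fake_img)
    loss_G = bce(d_fake, torch.ones_like(d_fake))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```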
Step S3: generating a simulated cell nuclear mask image data set, specifically comprising the following sub-steps:
Step S3-1: a cell nucleus mask image is selected from the training set, and a background image I_b of the same size with all pixel values set to zero is generated;
Step S3-2: the cell nucleus mask image is binarized to obtain a binary image, and the number n of connected regions in the binary image is counted;
Step S3-3: the centre-point coordinates (x_i, y_i) of each connected region from step S3-2 are computed, and random Gaussian perturbation is added to each (x_i, y_i) to obtain new centre-point coordinates (x_i', y_i') of each connected region;
Step S3-4: a random natural number r with 1 ≤ r ≤ n is generated, the new centre-point coordinates of the corresponding connected region are selected according to r, and the r-th connected region is randomly rotated counterclockwise by 90°, 180° or 270° and pasted onto the background image I_b;
Step S3-5: step S3-4 is repeated for every connected region, with a different value of r each time, giving a complete simulated cell nucleus mask image. As shown in FIG. 4, the cell nuclei of the simulated mask image are all composed of real cell nuclei, so it is very similar to a real cell nucleus mask image while the distribution of the nuclei differs, which greatly expands the data samples describing different cell nucleus distributions.
Step S3-6: steps S3-1 to S3-5 are repeated for all cell nucleus mask images in the training set to obtain the simulated cell nucleus mask image data set. The random Gaussian perturbation introduced in this step produces images with different cell nucleus position distributions and guarantees the diversity of the generated simulated mask data set. A code sketch of sub-steps S3-1 to S3-6 is given below.
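The following is a minimal sketch of sub-steps S3-1 to S3-6 using OpenCV and NumPy, assuming a single-channel 8-bit mask; the Gaussian jitter scale sigma is an assumed parameter, since the patent does not specify the perturbation variance:

```python
import cv2
import numpy as np

def simulate_mask(mask: np.ndarray, sigma: float = 10.0, seed=None) -> np.ndarray:
    """Build one simulated nucleus mask (step S3): jitter each nucleus centre
    with Gaussian noise, rotate it by a random multiple of 90 degrees, and
    paste it onto an all-zero background of the same size."""
    rng = np.random.default_rng(seed)
    H, W = mask.shape
    background = np.zeros_like(mask)                              # S3-1: I_b

    _, binary = cv2.threshold(mask, 0, 255, cv2.THRESH_BINARY)    # S3-2
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

    for r in rng.permutation(np.arange(1, n)):                    # S3-4/5: distinct r
        x, y = stats[r, cv2.CC_STAT_LEFT], stats[r, cv2.CC_STAT_TOP]
        w, h = stats[r, cv2.CC_STAT_WIDTH], stats[r, cv2.CC_STAT_HEIGHT]
        patch = (labels[y:y + h, x:x + w] == r).astype(np.uint8) * 255
        patch = np.rot90(patch, k=rng.integers(1, 4))             # 90/180/270 deg CCW

        cx, cy = centroids[r]                                     # S3-3: jittered centre
        cx = int(np.clip(cx + rng.normal(0, sigma), 0, W - 1))
        cy = int(np.clip(cy + rng.normal(0, sigma), 0, H - 1))

        ph, pw = patch.shape
        y0, x0 = max(0, cy - ph // 2), max(0, cx - pw // 2)
        y1, x1 = min(H, y0 + ph), min(W, x0 + pw)
        background[y0:y1, x0:x1] = np.maximum(                    # paste onto I_b
            background[y0:y1, x0:x1], patch[:y1 - y0, :x1 - x0])
    return background
```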
Step S4: the simulated cell nucleus mask image data set generated in step S3 is input into the trained generative adversarial network model to generate a synthetic data set. Conditioned on the generated simulated cell nucleus mask images, the simulated microscopic endoscope images produced by the generator network are very similar to real microscopic images, and a cell nucleus segmentation model can be trained effectively on them. As shown in FIG. 5, the synthetic data set can be used to train a deep-learning-based cell nucleus segmentation model, increasing the accuracy and generalization capability of the model.
Step S5: stain separation is performed on the synthetic data set generated in step S4, the staining ratio is randomly adjusted, and stain fusion is performed to obtain a data-enhanced sample set, as shown in FIG. 6, specifically comprising the following steps:
Step S5-1: a composite image I^k(u, v) is selected from the synthetic data set, where k denotes the k-th composite image in the data set, and u and v denote the pixel coordinates along the width and height of the image. The optical density of I^k(u, v) is computed for each colour channel c ∈ {R, G, B}:
OD_c(u, v) = -log10(I_c^k(u, v) / I_max)
where I_max is the maximum pixel brightness over the three channels; in the invention I_max is set to 255.
Step S5-2: the staining intensities A_H(u, v), A_E(u, v) and A_DAB(u, v) of the stains are computed from the optical densities of the RGB colour channels:
(A_H(u, v), A_E(u, v), A_DAB(u, v))^T = M^(-1) · (OD_R(u, v), OD_G(u, v), OD_B(u, v))^T
where M = (m_sc) is the 3 × 3 absorbance matrix whose entry m_sc represents the absorbance of stain s, with s among H (hematoxylin), E (eosin) and DAB (colour developer), on colour channel c, and OD_R(u, v), OD_G(u, v) and OD_B(u, v) denote the optical densities of the colour channels R, G and B;
Step S5-3: the hematoxylin single-stain component image I_H^k(u, v), the eosin single-stain component image I_E^k(u, v) and the developer single-stain component image I_DAB^k(u, v) are computed from the staining intensities:
I_H^k(u, v) = I_max · A_H(u, v) / A_max
I_E^k(u, v) = I_max · A_E(u, v) / A_max
I_DAB^k(u, v) = I_max · A_DAB(u, v) / A_max
where A_max is the maximum staining intensity of the stain;
Step S5-4: the staining ratio is adjusted by generating random numbers α_k, β_k and γ_k, giving the adjusted hematoxylin single-stain component image I'_H^k(u, v), the adjusted eosin single-stain component image I'_E^k(u, v) and the adjusted developer single-stain component image I'_DAB^k(u, v):
I'_H^k(u, v) = α_k · I_H^k(u, v)
I'_E^k(u, v) = β_k · I_E^k(u, v)
I'_DAB^k(u, v) = γ_k · I_DAB^k(u, v)
Step S5-5: the adjusted staining intensities A'_H(u, v), A'_E(u, v) and A'_DAB(u, v) are computed by inverting step S5-3:
A'_H(u, v) = A_max · I'_H^k(u, v) / I_max
A'_E(u, v) = A_max · I'_E^k(u, v) / I_max
A'_DAB(u, v) = A_max · I'_DAB^k(u, v) / I_max
Step S5-6: the adjusted optical densities OD'_R(u, v), OD'_G(u, v) and OD'_B(u, v) of the colour channels RGB are computed from the adjusted staining intensities:
(OD'_R(u, v), OD'_G(u, v), OD'_B(u, v))^T = M^T · (A'_H(u, v), A'_E(u, v), A'_DAB(u, v))^T
Step S5-7: stain fusion is performed to obtain the fused image I'^k(u, v), channel by channel:
I'_c^k(u, v) = I_max · 10^(-OD'_c(u, v)), c ∈ {R, G, B}
Step S5-8: steps S5-1 to S5-7 are repeated for each composite image in the synthetic data set to obtain the data-enhanced sample data set. A compact sketch of the whole staining-adjustment pipeline is given below.
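The following compact sketch covers steps S5-1 to S5-7. The published Ruifrok-Johnson H/E/DAB absorbance matrix and the uniform sampling range for α_k, β_k and γ_k are assumptions (the patent does not disclose its calibrated matrix or the distribution of the random numbers), and the single-stain component images of steps S5-3 to S5-5 are folded into a direct scaling of the staining intensities, which is equivalent under the formulas above:

```python
import numpy as np

# Ruifrok-Johnson H/E/DAB absorbance matrix: rows = stains (H, E, DAB),
# columns = colour channels (R, G, B). Standard published values; the
# patent's own calibrated matrix may differ.
M = np.array([[0.650, 0.704, 0.286],
              [0.072, 0.990, 0.105],
              [0.268, 0.570, 0.776]])
M = M / np.linalg.norm(M, axis=1, keepdims=True)   # normalize each stain vector

def stain_augment(img: np.ndarray, rng=None, lo=0.9, hi=1.1) -> np.ndarray:
    """Stain-separate an RGB image, randomly scale the H/E/DAB intensities
    (alpha_k, beta_k, gamma_k), and fuse back (steps S5-1 to S5-7)."""
    if rng is None:
        rng = np.random.default_rng()
    I_max = 255.0
    od = -np.log10(np.clip(img.astype(np.float64), 1, I_max) / I_max)  # S5-1

    A = od.reshape(-1, 3) @ np.linalg.inv(M)    # S5-2: per-pixel (A_H, A_E, A_DAB)
    scales = rng.uniform(lo, hi, size=3)        # S5-4: alpha_k, beta_k, gamma_k
    A_adj = A * scales                          # S5-5: adjusted intensities

    od_adj = A_adj @ M                          # S5-6: adjusted optical density
    out = I_max * 10.0 ** (-od_adj)             # S5-7: stain fusion
    return np.clip(out, 0, 255).reshape(img.shape).astype(np.uint8)
```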
Table 1 compares, on a test set, the image enhancement effect of the traditional deep-learning data enhancement method with that of the method of the present invention.

        Original training set   Traditional data enhancement   Method of the invention
PA      0.91                    0.93                           0.96
Dice    0.68                    0.71                           0.76
Two metrics, the pixel accuracy PA and the Dice coefficient, were selected for evaluation on the test set. The U-net model was chosen as the deep-learning cell nucleus segmentation model; after the same batch of training data was enhanced by the present method, the U-net model achieved a higher pixel accuracy PA and Dice coefficient, which shows that the method improves the accuracy and generalization capability of the deep-learning model. Minimal reference implementations of the two metrics are given below.
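The two evaluation metrics in Table 1, assuming binary nucleus masks:

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """PA: fraction of pixels whose predicted class matches the ground truth."""
    return float((pred == gt).mean())

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient for binary masks: 2*|P & G| / (|P| + |G|)."""
    p, g = pred.astype(bool), gt.astype(bool)
    return float(2.0 * (p & g).sum() / (p.sum() + g.sum() + eps))
```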
The above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it; a person skilled in the art may modify the technical solution of the invention or substitute equivalents for it, and the protection scope of the invention is defined by the claims.

Claims (5)

1. A deep learning-oriented microscopic endoscope image data enhancement method, characterized by comprising the following steps:
step S1: capturing high-resolution endoscopic cell nucleus images with a high-resolution endoscope, performing pixel-level annotation of the cell nuclei in the captured images according to prior knowledge to obtain cell nucleus mask images, and forming a training set from the endoscopic cell nucleus images and the cell nucleus mask images;
step S2: constructing a generative adversarial network model comprising a generator G and a discriminator D, inputting the training set of step S1 into the model for iterative training, the discriminator D judging whether the synthetic image generated by the generator G is a real endoscopic cell nucleus image, and finishing training when the number of iterations of the model reaches a threshold;
step S3: generating a simulated cell nucleus mask image data set, specifically comprising the following sub-steps:
step S3-1: selecting a cell nucleus mask image from the training set and generating a background image I_b of the same size with all pixel values set to zero;
step S3-2: binarizing the cell nucleus mask image to obtain a binary image and counting the number n of connected regions in the binary image;
step S3-3: computing the centre-point coordinates (x_i, y_i) of each connected region from step S3-2 and randomly adding Gaussian perturbation to each (x_i, y_i) to obtain new centre-point coordinates (x_i', y_i') of each connected region;
step S3-4: generating a random natural number r with 1 ≤ r ≤ n, selecting the new centre-point coordinates of the corresponding connected region according to r, randomly rotating the r-th connected region counterclockwise by 90°, 180° or 270°, and pasting it onto the background image I_b;
step S3-5: repeating step S3-4 for every connected region, with a different value of r each time, to obtain a complete simulated cell nucleus mask image;
step S3-6: repeating steps S3-1 to S3-5 for all cell nucleus mask images in the training set to obtain the simulated cell nucleus mask image data set;
step S4: inputting the simulated cell nucleus mask image data set generated in step S3 into the trained generative adversarial network model to generate a synthetic data set;
step S5: performing stain separation on the synthetic data set generated in step S4, randomly adjusting the staining ratio, and then performing stain fusion to obtain a data-enhanced sample set.
2. The deep learning-oriented microscopic endoscope image data enhancement method according to claim 1, characterized in that the endoscopic images in the training set are preprocessed by de-centering and regularization.
3. The deep learning-oriented microscopic endoscope image data enhancement method according to claim 1, characterized in that the generator G adopts a U-net structure and is used to generate synthetic images similar to the endoscopic cell nucleus images, and the discriminator D adopts a 6-layer PatchGAN convolutional network used to judge whether a generated synthetic image is a real endoscopic cell nucleus image.
4. The deep learning-oriented microscopic endoscope image data enhancement method according to claim 1, characterized in that if the discriminator D judges that the synthetic image generated by the generator G is not the real endoscopic image corresponding to the cell nucleus mask image, the generator G is optimized according to a loss function L(G, D); the loss function L(G, D) is:
L(G, D) = E[log D(x, y)] + E[log(1 - D(x, G(x)))]
where x is a cell nucleus mask image in the training set, y is the corresponding endoscopic image in the training set, D(·) denotes the probability, output by the discriminator, that its input is a real endoscopic cell nucleus image, D(x, y) is the probability that the discriminator judges the real endoscopic cell nucleus image to be real, D(x, G(x)) is the probability that the discriminator judges the synthetic image produced by the generator to be real, and E denotes the expectation of the discriminator's judgment over the whole training set.
5. The deep learning-oriented microscopic endoscope image data enhancement method according to claim 1, characterized in that the process of randomly adjusting the staining ratio in step S5 is:
random numbers α_k, β_k and γ_k are generated to obtain the adjusted hematoxylin single-stain component image I'_H^k(u, v), the adjusted eosin single-stain component image I'_E^k(u, v) and the adjusted developer single-stain component image I'_DAB^k(u, v):
I'_H^k(u, v) = α_k · I_H^k(u, v)
I'_E^k(u, v) = β_k · I_E^k(u, v)
I'_DAB^k(u, v) = γ_k · I_DAB^k(u, v)
where k denotes the k-th composite image in the synthetic data set, u denotes the width coordinate and v the height coordinate of the composite image, and I_H^k(u, v), I_E^k(u, v) and I_DAB^k(u, v) denote the hematoxylin, eosin and developer single-stain component images, respectively.
CN202110030166.8A 2021-01-11 2021-01-11 Deep learning-oriented microscopic endoscope image data enhancement method Active CN112767266B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110030166.8A CN112767266B (en) 2021-01-11 2021-01-11 Deep learning-oriented microscopic endoscope image data enhancement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110030166.8A CN112767266B (en) 2021-01-11 2021-01-11 Deep learning-oriented microscopic endoscope image data enhancement method

Publications (2)

Publication Number Publication Date
CN112767266A true CN112767266A (en) 2021-05-07
CN112767266B CN112767266B (en) 2022-08-30

Family

ID=75701297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110030166.8A Active CN112767266B (en) 2021-01-11 2021-01-11 Deep learning-oriented microscopic endoscope image data enhancement method

Country Status (1)

Country Link
CN (1) CN112767266B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114530250A (en) * 2022-04-24 2022-05-24 广东工业大学 Wearable blood glucose detection method and system based on data enhancement and storage medium
CN117095395A (en) * 2023-10-19 2023-11-21 北京智源人工智能研究院 Model training method and device for heart ultrasonic image segmentation and segmentation method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107563493A (en) * 2017-07-17 2018-01-09 华南理工大学 An adversarial network algorithm for multi-generator convolutional composite images
US20190333199A1 (en) * 2018-04-26 2019-10-31 The Regents Of The University Of California Systems and methods for deep learning microscopy
US20200167914A1 (en) * 2017-07-19 2020-05-28 Altius Institute For Biomedical Sciences Methods of analyzing microscopy images using machine learning
CN111524138A (en) * 2020-07-06 2020-08-11 湖南国科智瞳科技有限公司 Microscopic image cell identification method and device based on multitask learning
CN111784596A (en) * 2020-06-12 2020-10-16 北京理工大学 General endoscope image enhancement method and device based on a generative adversarial neural network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107563493A (en) * 2017-07-17 2018-01-09 华南理工大学 An adversarial network algorithm for multi-generator convolutional composite images
US20200167914A1 (en) * 2017-07-19 2020-05-28 Altius Institute For Biomedical Sciences Methods of analyzing microscopy images using machine learning
US20190333199A1 (en) * 2018-04-26 2019-10-31 The Regents Of The University Of California Systems and methods for deep learning microscopy
CN111784596A (en) * 2020-06-12 2020-10-16 北京理工大学 General endoscope image enhancement method and device based on a generative adversarial neural network
CN111524138A (en) * 2020-07-06 2020-08-11 湖南国科智瞳科技有限公司 Microscopic image cell identification method and device based on multitask learning

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114530250A (en) * 2022-04-24 2022-05-24 广东工业大学 Wearable blood glucose detection method and system based on data enhancement and storage medium
CN117095395A (en) * 2023-10-19 2023-11-21 北京智源人工智能研究院 Model training method and device for heart ultrasonic image segmentation and segmentation method
CN117095395B (en) * 2023-10-19 2024-02-09 北京智源人工智能研究院 Model training method and device for heart ultrasonic image segmentation and segmentation method

Also Published As

Publication number Publication date
CN112767266B (en) 2022-08-30

Similar Documents

Publication Publication Date Title
Lal et al. NucleiSegNet: Robust deep learning architecture for the nuclei segmentation of liver cancer histopathology images
CN108364288B (en) Segmentation method and device for breast cancer pathological image
CN110705425B (en) Tongue picture multi-label classification method based on graph convolution network
Bai et al. Liver tumor segmentation based on multi-scale candidate generation and fractal residual network
CN111931811B (en) Calculation method based on super-pixel image similarity
Yan et al. Automatic segmentation of high-throughput RNAi fluorescent cellular images
CN107256558A (en) The cervical cell image automatic segmentation method and system of a kind of unsupervised formula
CN108428229A (en) It is a kind of that apparent and geometric properties lung's Texture Recognitions are extracted based on deep neural network
CN112767266B (en) Deep learning-oriented microscopic endoscope image data enhancement method
CN105893925A (en) Human hand detection method based on complexion and device
US20220335600A1 (en) Method, device, and storage medium for lesion segmentation and recist diameter prediction via click-driven attention and dual-path connection
CN111667491B (en) Breast tumor block diagram generation method with boundary marking information based on depth countermeasure network
CN107784319A (en) A kind of pathological image sorting technique based on enhancing convolutional neural networks
CN108010013A (en) A kind of lung CT image pulmonary nodule detection methods
Poon et al. Efficient interactive 3D Livewire segmentation of complex objects with arbitrary topology
CN108629762B (en) Image preprocessing method and system for reducing interference characteristics of bone age evaluation model
CN111563563B (en) Method for enhancing combined data of handwriting recognition
US20200193139A1 (en) Systems and methods for automated cell segmentation and labeling in immunofluorescence microscopy
CN116580203A (en) Unsupervised cervical cell instance segmentation method based on visual attention
CN111476794B (en) Cervical pathological tissue segmentation method based on UNET
CN116630971B (en) Wheat scab spore segmentation method based on CRF_Resunate++ network
CN111047559A (en) Method for rapidly detecting abnormal area of digital pathological section
Wang et al. Automatic segmentation of concrete aggregate using convolutional neural network
CN109919216B (en) Counterlearning method for computer-aided diagnosis of prostate cancer
CN113538422B (en) Pathological image automatic classification method based on dyeing intensity matrix

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant