CN113256572A - Gastroscope image analysis system, method and equipment based on restoration and selective enhancement - Google Patents


Info

Publication number
CN113256572A
CN113256572A (application CN202110517412.2A; granted as CN113256572B)
Authority
CN
China
Prior art keywords
image
detected
gastroscope
reflective
restoration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110517412.2A
Other languages
Chinese (zh)
Other versions
CN113256572B (en)
Inventor
田捷
董迪
巩立鑫
胡朝恩
杨鑫
操润楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation, Chinese Academy of Sciences
Original Assignee
Institute of Automation, Chinese Academy of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation, Chinese Academy of Sciences
Priority to CN202110517412.2A
Publication of CN113256572A
Application granted
Publication of CN113256572B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/0012 — Biomedical image inspection
    • G06F 18/24 — Classification techniques
    • G06N 3/045 — Combinations of networks
    • G06N 3/08 — Learning methods
    • G06T 5/77 — Retouching; inpainting; scratch removal
    • G06T 2207/20076 — Probabilistic image processing
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30092 — Stomach; gastric
    • G06T 2207/30096 — Tumor; lesion
    • G06V 2201/031 — Recognition of patterns in medical or anatomical images of internal organs


Abstract

The invention belongs to the field of image recognition and particularly relates to a gastroscope image analysis system, method and equipment based on restoration and selective enhancement, aiming to solve the problem that existing gastroscope image recognition systems cannot accurately identify early gastric cancer images. The invention comprises the following steps: acquiring a narrow-band-imaging gastroscope image as the image to be detected; preprocessing it to obtain an image containing only the gastric mucosa; obtaining a non-reflective image to be detected through a reflection processing module; generating synthetic images through a trained generative adversarial network and automatically selecting the more realistic ones; and obtaining the early gastric cancer probability of the image to be detected through a trained gastroscope image recognition network, together with the region image of suspected early gastric cancer through a gradient-weighted class activation mapping method. By selectively enhancing the data features with the generative adversarial network and automatically learning the feature information most relevant to the classification task with the recognition model, the invention improves the accuracy of gastroscope image analysis.

Description

Gastroscope image analysis system, method and equipment based on restoration and selective enhancement
Technical Field
The invention belongs to the field of image recognition, and particularly relates to a gastroscope image analysis system, method and equipment based on restoration and selective enhancement.
Background
Gastric cancer (GC) is one of the most common malignancies in the world, with an estimated one million new cases per year. The prognosis of advanced gastric cancer is poor, whereas early gastric cancer (EGC), owing to its low rate of lymph node metastasis, has a high rate of curative endoscopic resection and a 5-year survival rate above 90%, so timely and accurate identification of EGC is essential. However, identifying EGC is quite challenging: the average detection rate of EGC in China is only about 2%-5%, and roughly 10% of EGCs are missed during gastroscopy [3]. Given the high incidence of gastric cancer, the number of missed EGC diagnoses is substantial.
To address this situation, magnifying endoscopy with narrow-band imaging (ME-NBI), which can visualize the microstructure and microvasculature of EGC, is currently the most powerful tool for evaluating EGC. In practice, however, EGC often lacks typical features under ME-NBI and is easily confused with gastritis, so visual identification of EGC demands extensive experience and is achievable only by qualified experts. In daily practice the results are therefore often disappointing, and classification results vary widely between endoscopists, and even for the same endoscopist over time. At present there are few intelligent methods for ME-NBI-based EGC diagnosis, and diagnosis is easily disturbed by reflective regions in gastroscope images. To better identify gastroscope images, an automatic and efficient classification method is therefore urgently needed.
Disclosure of Invention
In order to solve the above problem in the prior art, namely that existing gastroscope image analysis technology cannot accurately and quickly identify gastroscope images, the invention provides a gastroscope image analysis system based on restoration and selective enhancement, comprising an image acquisition module, a preprocessing module, a reflection processing module, an image adversarial synthesis and selective enhancement module, and an image recognition module;
the image acquisition module is configured to acquire a gastroscope image of narrow-band imaging and serve as an image to be detected;
the preprocessing module is configured to obtain an image to be detected only containing the gastric mucosa by an Otsu threshold method and a contour line detection method based on the image to be detected;
the reflection processing module is configured to obtain a non-reflection image to be detected through reflection detection, image restoration and Gaussian blur and weighted superposition based on the image to be detected only containing the gastric mucosa;
the image adversarial synthesis and selective enhancement module is configured to generate realistic synthetic images through a trained generative adversarial network, based on the non-reflective image to be detected that contains only the gastric mucosa;
the image identification module comprises a probability prediction unit and a region display unit;
the probability prediction unit obtains the early gastric cancer probability of the image to be detected through a trained gastroscope image recognition network, based on the realistic synthetic images;
and the region display unit obtains a region image of suspected early gastric cancer through a gradient-weighted class activation mapping method, based on the realistic synthetic images and the early gastric cancer probability of the image to be detected.
In some preferred embodiments, the reflection processing module includes a reflection detection unit, a reflection restoration and Gaussian blur unit, and a weighted superposition unit;
the reflection detection unit is configured to detect the highlight portion of the image to be detected I_ori (containing only the gastric mucosa) through a color-balance adaptive threshold;
the image to be detected is first converted into a gray image:
Gray = 0.2989·R + 0.5870·G + 0.1140·B,
where R, G and B are the red, green and blue channels of the image to be detected I_ori, and Gray is the corresponding gray image;
the 95th-percentile ratios of the green-channel and blue-channel intensities to the gray-image intensity are then computed as the reflection detection thresholds:
R_{g-gray} = P_95(G) / P_95(Gray),  R_{b-gray} = P_95(B) / P_95(Gray),
where P_95(·) denotes the 95th percentile of pixel intensity;
a pixel n of the image to be detected I_ori is marked as a reflection point if it satisfies any of the following conditions:
condition one: G(n) > T·R_{g-gray}; condition two: B(n) > T·R_{b-gray}; condition three: R(n) > T;
where G(n), B(n) and R(n) are the values of pixel n in the green, blue and red channels respectively, and T is a hyper-parameter; pixels marked as reflection points are set to 0 and all other pixels to 1, yielding the reflection-region mask M_s of the image to be detected I_ori;
the reflection restoration unit is configured to obtain the reflective regions from the mask, fill each reflection point from the average of the surrounding non-reflective pixels to obtain the filled image to be detected I_p, and apply a Gaussian convolution kernel to I_p to obtain the smoothed image I_s:
I_s = F_gaussian * I_p,
where F_gaussian denotes the Gaussian blur kernel;
the weighted superposition unit is configured to mean-filter the reflection-region mask to obtain the smooth-image weight w, and then to take the weighted sum of the original image to be detected I_ori and the smoothed image I_s to obtain the non-reflective image to be detected:
w = M_s * F_mean,  I_inpainted = w·I_s + (1 − w)·I_ori,
where F_mean is the mean-filter kernel, I_s is the smoothed image, I_ori is the original image to be detected, and w is the weight of the smoothed image.
In some preferred embodiments, the image adversarial synthesis and selective enhancement module is configured to input the non-reflective image to be detected and noise into a generative adversarial network to obtain synthetic images:
min_G max_D V(D, G) = E_{(x,y)~P_data}[log D(x, y)] + E_{z~N}[log(1 − D(G(z, y), y))],
where V(D, G) represents the degree of difference between real and generated samples, G is the generator, D is the discriminator, (x, y) is a real image and its label, (z, y) is random noise and its label, E denotes the expectation over the probability distribution, x~P_data means the real data x obeys the distribution P_data, and z~N means the noise obeys a uniform distribution;
based on the synthetic images, the feature extraction network of the pre-trained network is run K times with Monte Carlo dropout, and the normalized Euclidean distance D_f between the features of a synthetic image and the feature centroid of the non-reflective images to be detected is computed as
D_f(x) = (1/K) Σ_{k=1..K} Σ_{l=1..L} (1/(H_l·W_l)) ‖(â_l^k(x) − c_i) / σ_l^k‖,
where H_l×W_l is the size of the l-th feature map, x is the synthetic image, c_i is the centroid of class i, â_l^k(x) is the normalized activation of layer l of the feature extraction network in the k-th run, σ_l^k is the activation variance of layer l in the k-th run, and K is the total number of runs of the feature extraction network;
the feature centroid c_i of the non-reflective images to be detected is computed as
c_i = (1/N_i) Σ_{j=1..N_i} (1/L) Σ_{l=1..L} â_l(x_j),
where â_l(x_j) is the activation of layer l of the feature extraction network, L is the total number of layers of the feature network, N_i is the number of training samples of class i, and x_j is the j-th sample;
synthetic images whose Euclidean distance D_f is below a preset realism threshold are selected as realistic synthetic images.
In some preferred embodiments, the trained gastroscope image recognition network is obtained by:
step S10, acquiring training set images;
step S20, obtaining the realistic synthetic images of the training set through the methods of the preprocessing module, the reflection processing module, and the image adversarial synthesis and selective enhancement module;
step S30, inputting the reflection-processed real images of the training set and their realistic synthetic images into a VGG-19 pre-trained on ILSVRC-2012 to generate discriminative feature vectors, obtaining the classification result through a fully connected layer, iteratively training on the first training set by transfer learning, and updating the model parameters until the cross-entropy loss of the model on the second training set no longer decreases for a preset number of consecutive iterations, yielding the trained gastroscope image recognition network.
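The training schedule and early-stopping criterion above can be sketched in plain Python. This is an illustrative sketch, not the patent's code: the function name and the loss sequence passed to it are assumptions; in the real pipeline each "check" would be a validation pass on the second training set.

```python
def train_with_early_stop(losses_per_check, patience=10):
    """Return the index of the validation check at which training stops.

    losses_per_check: validation cross-entropy measured once every
    10 training iterations (per the schedule described above);
    training halts after `patience` consecutive checks without
    improvement, otherwise it runs through the whole sequence.
    """
    best = float("inf")
    since_improve = 0
    for step, loss in enumerate(losses_per_check):
        if loss < best:
            best = loss
            since_improve = 0
        else:
            since_improve += 1
            if since_improve >= patience:
                return step  # stop: tolerance exhausted
    return len(losses_per_check) - 1
```

With this policy, a plateau of ten non-improving checks ends training, while a monotonically decreasing loss runs to the end of the schedule.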
In some preferred embodiments, the region display unit specifically: calculates, with the Grad-CAM algorithm (gradient-weighted class activation mapping), the weight of each feature activation map for the corresponding class:
α_k^c = (1/Z) Σ_i Σ_j ∂y^c/∂A_ij^k,
where α_k^c is the weight of the k-th feature activation map for class c, Z is the number of pixels of the feature activation map, y^c is the model score for class c, and A_ij^k is the pixel value at position (i, j) in the k-th feature activation map;
the feature activation maps are then weighted by these class weights, summed, and passed through a ReLU activation function to obtain the class activation map:
L^c = ReLU(Σ_k α_k^c · A^k),
where L^c denotes the activation map for class c;
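As a minimal sketch, the two Grad-CAM formulas above can be applied to precomputed arrays. In the patent's pipeline the activations and gradients would come from a convolutional layer of the recognition network via autograd; here they are synthetic inputs, and the function name is illustrative.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heat map from one convolutional layer.

    activations: (K, H, W) feature activation maps A^k
    gradients:   (K, H, W) gradients dy^c / dA^k (assumed precomputed)
    """
    # alpha_k^c = (1/Z) * sum_ij dy^c/dA^k_ij  (global average pooling)
    alphas = gradients.mean(axis=(1, 2))
    # L^c = ReLU(sum_k alpha_k^c * A^k)
    cam = np.maximum((alphas[:, None, None] * activations).sum(axis=0), 0.0)
    return cam
```

The ReLU keeps only the regions that positively influence the class score, which is what the region display unit overlays on the gastroscope image.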
in some preferred embodiments, the preprocessing module includes a text removing unit and a background removing unit;
the text removal unit is configured to remove the patient text information and irrelevant parts of the image to be detected through the Otsu threshold algorithm;
the background removal unit is configured to detect the valid gastric mucosa region through a contour detection algorithm and remove the black background of the image to be detected, obtaining a low-interference image to be detected containing only the gastric mucosa;
In some preferred embodiments, the analysis system runs on Ubuntu 16.04 with Python 3.8.3 and PyTorch 1.4.0.
In some preferred embodiments, during training the gastroscope image recognition network divides the training-set images into a first training set and a second training set; in each training iteration 16 images of the first training set are input into the pre-trained VGG-19, an Adam optimizer with an initial learning rate of 10^-6 and a weight decay of 0.01 is used, the second training set is validated once every 10 iterations, and an early-stopping strategy with a tolerance of 10 is applied.
In another aspect of the present invention, a gastroscope image analysis method based on restoration and selective enhancement is presented, the analysis method comprising:
step S100, acquiring a narrow-band-imaging gastroscope image as the image to be detected;
step S200, obtaining an image to be detected only containing the gastric mucosa by an Otsu threshold method and a contour line detection method based on the image to be detected;
step S300, based on the image to be detected only containing the gastric mucosa, obtaining a non-reflective image to be detected through reflective detection, image restoration and Gaussian blur and weighted superposition;
step S400, generating realistic synthetic images through a trained generative adversarial network, based on the non-reflective image to be detected;
step S500, obtaining the early gastric cancer probability of the image to be detected and the suspected early gastric cancer region image, based on the realistic synthetic images; specifically:
step S510, obtaining the early gastric cancer probability of the image to be detected through the trained gastroscope image recognition network, based on the realistic synthetic images;
and step S520, obtaining the suspected early gastric cancer region image through the gradient-weighted class activation mapping method, based on the realistic synthetic images and the early gastric cancer probability of the image to be detected.
In a third aspect of the present invention, an electronic device is provided, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the processor, the instructions being executed by the processor to implement the above gastroscope image analysis method based on restoration and selective enhancement.
In a fourth aspect of the present invention, a computer-readable storage medium is provided, storing computer instructions for execution by a computer to implement the above gastroscope image analysis method based on restoration and selective enhancement.
The invention has the beneficial effects that:
(1) The gastroscope image analysis system based on restoration and selective enhancement removes the unavoidable reflections in gastroscope images through reflection restoration, selectively enhances the data through a generative adversarial network, and automatically learns the feature information most relevant to the classification task through the recognition model, improving the accuracy of gastroscope image analysis.
(2) Based on the gastroscope image analysis system with restoration and selective enhancement, the method automatically displays the region contributing most to the classification result and, combined with the probability of the image being classified as early gastric cancer, localizes the suspected early gastric cancer region, assisting expert judgment.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a block diagram of a gastroscopic image analysis system based on restoration and selective enhancement in an embodiment of the present invention;
FIG. 2 is a schematic diagram of the effects of the preprocessing module, the reflection processing module, and the image adversarial synthesis and selective enhancement module of the present invention;
FIG. 3 is a schematic diagram of a gastroscopic image recognition model training process and early stop strategy;
FIG. 4 is a schematic diagram illustrating the effect of the area display unit according to the present invention;
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
The invention provides a gastroscope image analysis system based on restoration and selective enhancement. Based on a deep-learning method with a convolutional neural network at its core, the invention synthesizes magnified narrow-band-imaging gastroscope images to enhance the input, automatically mines and analyzes image features, outputs the probability that a gastroscope image belongs to the early gastric cancer class, and can highlight the suspected early gastric cancer lesion region on the gastroscope image, assisting expert judgment.
The gastroscope image analysis system based on restoration and selective enhancement of the present invention comprises: an image acquisition module, a preprocessing module, a reflection processing module, an image adversarial synthesis and selective enhancement module, and an image recognition module;
the image acquisition module is configured to acquire a gastroscope image of narrow-band imaging and serve as an image to be detected;
the preprocessing module is configured to obtain an image to be detected only containing the gastric mucosa by an Otsu threshold method and a contour line detection method based on the image to be detected;
the reflection processing module is configured to obtain a non-reflective image to be detected through reflection detection, image restoration, Gaussian blur and weighted superposition, based on the image to be detected containing only the gastric mucosa;
the image adversarial synthesis and selective enhancement module is configured to generate realistic synthetic images through a trained generative adversarial network based on the non-reflective image to be detected, and to select the more realistic synthetic images based on a normalized Euclidean distance;
the image identification module comprises a probability prediction unit and a region display unit;
the probability prediction unit obtains the early gastric cancer probability of the image to be detected through a trained gastroscope image recognition network, based on the realistic synthetic images;
the region display unit obtains a region image of suspected early gastric cancer through a gradient-weighted class activation mapping method, based on the realistic synthetic images and the early gastric cancer probability of the image to be detected.
In order to more clearly illustrate the gastroscopic image analysis system based on restoration and selective enhancement of the present invention, the functional modules in the embodiment of the present invention are described in detail below with reference to fig. 1.
The gastroscope image analysis system based on restoration and selective enhancement comprises an image acquisition module, a preprocessing module, a reflection processing module, an image adversarial synthesis and selective enhancement module, and an image recognition module; the functional modules are described in detail as follows:
As shown in FIG. 2, A is the image to be detected; B is the image with patient text information and irrelevant parts removed; C is the image, with the black background removed, containing only the gastric mucosa; D is the non-reflective image to be detected; E shows synthetic images after data enhancement; and F shows the realistic synthetic images selected by Euclidean distance.
In this embodiment, the analysis system runs on Ubuntu 16.04 with Python 3.8.3 and PyTorch 1.4.0.
The image acquisition module is configured to acquire a gastroscope image of narrow-band imaging and serve as an image to be detected;
the preprocessing module is configured to obtain an image to be detected only containing the gastric mucosa by an Otsu threshold method and a contour line detection method based on the image to be detected;
in this embodiment, the preprocessing module includes a text removal unit and a background removal unit;
the text removal unit is configured to remove the patient text information and irrelevant parts of the image to be detected through the Otsu threshold algorithm;
the background removal unit is configured to detect the valid gastric mucosa region through a contour detection algorithm and remove the black background of the image to be detected, obtaining a low-interference image to be detected containing only the gastric mucosa;
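A dependency-free sketch of these two preprocessing steps follows. In practice a library routine (e.g. OpenCV's Otsu thresholding and contour detection) would likely be used; here Otsu's method is written out directly, and a bounding-box crop of the above-threshold region stands in for the contour-detection step. Function names are illustrative.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method on an 8-bit grayscale array: choose the threshold
    that maximizes the between-class variance of the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)                       # cumulative counts
    cum_mean = np.cumsum(hist * np.arange(256)) # cumulative intensity sums
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0 = cum[t] / total       # background weight
        w1 = 1.0 - w0             # foreground weight
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = cum_mean[t] / cum[t]
        mu1 = (cum_mean[-1] - cum_mean[t]) / (total - cum[t])
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def crop_foreground(gray, thresh):
    """Bounding box of above-threshold pixels: a crude stand-in for the
    contour detection that isolates the mucosa from the black background."""
    ys, xs = np.nonzero(gray > thresh)
    return gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

On a gastroscope frame with a black border, the crop discards the background and keeps the bright mucosa region for the downstream modules.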
the reflection processing module comprises a reflection detection unit, an image restoration and Gaussian blur unit, and a weighted superposition unit;
the reflection detection unit is configured to detect the highlight portion of the image to be detected I_ori (containing only the gastric mucosa) through a color-balance adaptive threshold; specifically: the image to be detected is converted into a gray image:
Gray = 0.2989·R + 0.5870·G + 0.1140·B,
where R, G and B are the red, green and blue channels of the image to be detected I_ori, and Gray is the corresponding gray image;
the 95th-percentile ratios of the green-channel and blue-channel intensities to the gray-image intensity are computed as the reflection detection thresholds:
R_{g-gray} = P_95(G) / P_95(Gray),  R_{b-gray} = P_95(B) / P_95(Gray),
where P_95(·) denotes the 95th percentile of pixel intensity;
a pixel n of the image to be detected I_ori is marked as a reflection point if it satisfies any of the following conditions:
condition one: G(n) > T·R_{g-gray}; condition two: B(n) > T·R_{b-gray}; condition three: R(n) > T;
where G(n), B(n) and R(n) are the values of pixel n in the green, blue and red channels respectively, and T is a hyper-parameter set empirically; pixels marked as reflection points are set to 0 and all other pixels to 1, yielding the reflection-region mask M_s of the image to be detected I_ori;
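The three detection conditions can be sketched in NumPy as follows. The default T = 230 is an assumed placeholder for the empirically set hyper-parameter, and the function name is illustrative.

```python
import numpy as np

def specular_mask(img, T=230.0):
    """Detect specular highlights with the three threshold conditions.

    img: float array (H, W, 3), RGB in [0, 255].
    T:   intensity-scale hyper-parameter (230 is an assumed value).
    Returns a boolean mask, True at reflective pixels.
    """
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    gray = 0.2989 * R + 0.5870 * G + 0.1140 * B
    # color-balance adaptive thresholds: 95th-percentile ratios
    r_g = np.percentile(G, 95) / np.percentile(gray, 95)
    r_b = np.percentile(B, 95) / np.percentile(gray, 95)
    # a pixel is reflective if ANY of the three conditions holds
    return (G > T * r_g) | (B > T * r_b) | (R > T)
```

On a well-balanced image the percentile ratios stay near 1, so the conditions reduce to "brighter than T in any channel"; a color cast shifts the green/blue thresholds accordingly.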
the reflection restoration unit is configured to obtain the reflective regions from the mask, fill each reflection point from the average of the surrounding non-reflective pixels to obtain the filled image to be detected I_p, and apply a Gaussian convolution kernel to I_p to obtain the smoothed image I_s:
I_s = F_gaussian * I_p,
where F_gaussian denotes the Gaussian blur kernel;
the weighted superposition unit is configured to mean-filter the reflection-region mask to obtain the smooth-image weight w, and then to take the weighted sum of the original image to be detected I_ori and the smoothed image I_s to obtain the non-reflective image to be detected:
w = M_s * F_mean,
I_inpainted = w·I_s + (1 − w)·I_ori,
where F_mean is the mean-filter kernel, I_s is the smoothed image, I_ori is the original image to be detected, and w is the weight of the smoothed image.
The unit can effectively reduce the image of the inevitable reflected light to the system judgment precision during shooting.
The image adversarial synthesis and selective enhancement module is configured to generate realistic synthetic images through a trained generative adversarial network based on the non-reflective image to be detected, and to select the more realistic synthetic images by normalized Euclidean distance (to improve feature robustness and reduce uncertainty, the feature extraction network of the pre-trained network is run K times with Monte Carlo dropout);
In this embodiment, the image adversarial synthesis and selective enhancement module specifically comprises: inputting the non-reflective image to be detected together with noise into the generative adversarial network to obtain a synthetic image:
$\min_G \max_D V(D,G)=\mathbb{E}_{(x,y)\sim P_{data}}[\log D(x,y)]+\mathbb{E}_{z\sim N}[\log(1-D(G(z,y),y))]$
where $V(D,G)$ represents the degree of difference between the real and generated samples, G denotes the generator, D the discriminator, (x, y) the real image and its label, (z, y) the random noise and its label, and E the expectation over the probability distribution; $x\sim P_{data}$ means the real data x obey the distribution $P_{data}$, and $z\sim N$ means the noise obeys a uniform distribution;
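Read as binary cross-entropy, the two sides of this objective can be sketched as follows; splitting out the non-saturating generator loss is common GAN practice and an assumption here, not stated in the text:

```python
import numpy as np

def cgan_losses(d_real, d_fake):
    """Value of the conditional-GAN objective split into two losses.

    d_real: discriminator outputs D(x, y) on real image/label pairs.
    d_fake: discriminator outputs D(G(z, y), y) on generated pairs.
    The discriminator maximizes V(D, G); the generator uses the common
    non-saturating surrogate -E[log D(G(z, y), y)] (an assumption).
    """
    d_real = np.asarray(d_real, dtype=np.float64)
    d_fake = np.asarray(d_fake, dtype=np.float64)
    eps = 1e-12  # numerical guard for log(0)
    v = np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))
    d_loss = -v                              # discriminator ascends V
    g_loss = -np.mean(np.log(d_fake + eps))  # non-saturating generator loss
    return d_loss, g_loss
```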
Based on the synthetic image, the feature-extraction network of the pre-trained network is run K times with a Monte Carlo dropout method, and the normalized Euclidean distance $D_f$ between the features of the synthetic image and the feature centroid of the non-reflective images to be detected is computed:
$D_f(x)=\frac{1}{K}\sum_{k=1}^{K}\sum_{l=1}^{L}\frac{1}{H_l W_l}\left\|\hat{a}_l^k(x)-c_i\right\|_2$
where $H_l\times W_l$ is the size of layer l of the feature-extraction network, x is the synthetic image, $c_i$ is the centroid of the i-th class, $\hat{a}_l^k(x)$ is the normalized activation of layer l in the k-th run of the feature-extraction network, and K is the total number of runs;
The feature centroid $c_i$ of the non-reflective images to be detected is computed as:
$c_i=\frac{1}{N_i}\sum_{j=1}^{N_i}\hat{a}_l(x_j)$
where $\hat{a}_l(x_j)$ is the activation of layer l of the feature-extraction network, L is the total number of layers of the feature network, $N_i$ is the number of class-i training samples, and $x_j$ is the j-th sample;
the synthetic images whose Euclidean distance $D_f$ is smaller than a preset realism threshold are selected as the realistic synthetic images.
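The Monte-Carlo-dropout distance selection above can be sketched as follows; the list-of-passes layout, the layer shapes and the threshold value are illustrative assumptions:

```python
import numpy as np

def feature_distance(acts, centroid):
    """Normalized Euclidean distance D_f between one image and a class centroid.

    acts: list over K stochastic (Monte Carlo dropout) passes; each pass is a
    list over L layers of feature maps shaped (H_l, W_l, C_l).
    centroid: matching list over L layers (class-wise mean activations).
    Shapes and normalization are illustrative, not the patent's exact ones.
    """
    K = len(acts)
    d = 0.0
    for pass_acts in acts:
        for a, c in zip(pass_acts, centroid):
            h, w = a.shape[:2]
            d += np.linalg.norm(a - c) / (h * w)  # per-layer normalized norm
    return d / K

def select_realistic(images_acts, centroid, threshold):
    """Keep indices of images whose distance to the centroid is below threshold."""
    return [i for i, acts in enumerate(images_acts)
            if feature_distance(acts, centroid) < threshold]
```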
In this embodiment, the generator G and the discriminator D of the generative adversarial network are trained adversarially, so that the images produced by the generator become more realistic.
The image identification module comprises a probability prediction unit and a region display unit;
the probability prediction unit obtains the early gastric cancer probability of the image to be detected through a trained gastroscope image recognition network based on the realistic synthetic images;
in this embodiment, the trained gastroscope image recognition network is obtained by the following steps:
step S10, acquiring training set images;
step S20, obtaining the realistic synthetic images of the training set by the methods of the preprocessing module, the reflection processing module and the image adversarial synthesis and selective enhancement module;
step S30, inputting the reflection-processed real images of the training set and their realistic synthetic images into a VGG-19 pre-trained on ILSVRC-2012 to generate discriminative feature vectors, obtaining classification results through a fully connected layer, iteratively training on the first training set by transfer learning, and updating the model parameters until the cross-entropy loss on the second training set has not decreased for a preset number of consecutive iterations, yielding the trained gastroscope image recognition network.
In this embodiment, during training of the gastroscope image recognition network, the training-set images are divided into a first training set and a second training set; each training iteration feeds 16 images of the first training set into the pre-trained VGG-19, using an Adam optimizer with an initial learning rate of 10^-6 and a weight decay of 0.01; the second training set is validated once every 10 iterations, and a model-training early-stopping strategy with a tolerance of 10 is applied. The early-stopping strategy is shown in fig. 3. The output probability is obtained from the most reliable classification model, and the decision threshold for early gastric cancer is derived from the Youden index; the output probability (a continuous decimal value) is compared with this preset threshold to obtain the binary classification result (yes or no, i.e. 0 or 1). In this embodiment, the ratio of the first training set to the second training set is approximately 6:1.
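The early-stopping rule and the Youden-index cutoff described above can be sketched as follows (a hypothetical minimal implementation; scanning the observed probabilities as candidate cutoffs is an assumption):

```python
import numpy as np

def youden_threshold(y_true, y_prob):
    """Cutoff maximizing Youden's index J = sensitivity + specificity - 1."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    best_t, best_j = 0.5, -1.0
    for t in np.unique(y_prob):          # candidate cutoffs: observed probs
        pred = y_prob >= t
        tp = np.sum(pred & (y_true == 1))
        fn = np.sum(~pred & (y_true == 1))
        tn = np.sum(~pred & (y_true == 0))
        fp = np.sum(pred & (y_true == 0))
        sn = tp / max(tp + fn, 1)
        sp = tn / max(tn + fp, 1)
        if sn + sp - 1 > best_j:
            best_j, best_t = sn + sp - 1, float(t)
    return best_t

class EarlyStopper:
    """Stop when validation loss has not improved for `patience` checks."""
    def __init__(self, patience=10):
        self.patience, self.best, self.bad = patience, float("inf"), 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.bad = val_loss, 0
        else:
            self.bad += 1
        return self.bad >= self.patience  # True -> stop training
```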
In this example, based on the early-stopping strategy, the optimal model was obtained in the 7th training round, with ACC of 0.843, Sn of 0.870 and Sp of 0.819 on the training set.
On the internal test set the optimal model achieved good predictive performance (ACC 0.770, Sn 0.792, Sp 0.745), indicating good robustness; tested on the external test set, it achieved AUC 0.813, ACC 0.763, Sn 0.782 and Sp 0.741, indicating good generalization.
These verification results show that the model of the invention performs well in both robustness and generalization.
The classification-performance comparison between the endoscopists and the model is shown in Table 3. The results show that the three senior endoscopists have good classification ability, with an average classification accuracy of 0.755, while the average accuracy of the junior endoscopists is lower. Notably, the model achieved predictive performance similar to the senior endoscopy experts (ACC: 0.770 vs. 0.755, p = 0.355; Sn: 0.792 vs. 0.767, p = 0.183; Sp: 0.745 vs. 0.742, p = 0.931; PPV: 0.772 vs. 0.764, p = 0.515; NPV: 0.767 vs. 0.745, p = 0.162; supplementary Table S2), with ACC, Sn and NPV significantly higher than the junior group and all experts (p < 0.05). Specifically, as shown in Table 2, the mean performance of the endoscopists improved significantly (p < 0.05) after referring to the model's results. A subgroup analysis of the different endoscopist groups (Table 2) further showed that, with the model's assistance, the mean ACC, Sp and PPV of the junior endoscopists increased significantly to 0.747 (p < 0.05), 0.813 (p < 0.05) and 0.800 (p < 0.05), comparable to the classification performance of senior endoscopists without model assistance. The senior group also improved significantly with the model's help (ACC: 0.755 vs. 0.789, p < 0.05; Sn: 0.767 vs. 0.874, p < 0.05; NPV: 0.745 vs. 0.836, p < 0.05), performing significantly better than the model itself in ACC, Sn, Sp and NPV (p < 0.05).
a significant difference exists between the corresponding group and the model; *: a significant difference exists between the results without model prompts and with model prompts.
The region display unit acquires the image of the suspected early gastric cancer region by a gradient-weighted class activation mapping method, based on the realistic synthetic images and the gastric cancer probability of the image to be detected.
In this embodiment, the region display unit specifically computes, using the Grad-CAM algorithm (gradient-weighted class activation mapping), the weight α of each feature map for the corresponding class of the image:
$\alpha_k^c=\frac{1}{Z}\sum_{i}\sum_{j}\frac{\partial y^c}{\partial A_{ij}^k}$
where $\alpha_k^c$ is the weight of the k-th feature map for class c, Z is the number of pixels in the feature map, $y^c$ is the predicted probability of the corresponding class c, and $A_{ij}^k$ is the pixel at position (i, j) in the k-th feature map;
the feature maps are weighted by the class weights, summed, and passed through a ReLU activation function to obtain the corresponding activation map:
$L_{Grad\text{-}CAM}^{c}=\mathrm{ReLU}\left(\sum_{k}\alpha_k^c A^k\right)$
where $L_{Grad\text{-}CAM}^{c}$ is the class activation map corresponding to the whole input image.
The effect of the region visualization is shown in fig. 4; the visualization results show that the model successfully distinguishes and highlights the abnormal regions, indicating that the model is interpretable and highly credible.
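The Grad-CAM computation underlying this visualization can be sketched as follows, assuming the feature maps and their class gradients have already been extracted from the network:

```python
import numpy as np

def grad_cam(feature_maps, grads):
    """Gradient-weighted class activation map.

    feature_maps: (K, H, W) activations A^k of the chosen conv layer.
    grads: (K, H, W) gradients dy^c / dA^k for the target class c.
    A sketch of the Grad-CAM formulas; the [0, 1] normalization for
    display is an added convention.
    """
    alpha = grads.mean(axis=(1, 2))                  # alpha_k^c: pooled gradients
    cam = np.tensordot(alpha, feature_maps, axes=1)  # sum_k alpha_k^c * A^k
    cam = np.maximum(cam, 0)                         # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                        # normalize for display
    return cam
```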
In this embodiment, the data are divided into a training set and an internal test set at a ratio of 7:3. The system further comprises a model verification module, which adopts multiple performance indicators: accuracy (ACC), area under the receiver operating characteristic (ROC) curve, sensitivity (Sn), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV). First, to verify the robustness of the model, testing is performed on an internal test set (ITC) from the same center as the training set; then, to verify generalization, model performance is evaluated on an external test set (ETC) from another center.
Further, in the model evaluation, the judgment abilities of the model, the 5 junior experts and the 3 senior experts are evaluated on the same test set, and the change in each expert's classification results before and after model guidance is also computed, so as to assess the model's classification ability. Specifically, the 3 endoscopists with more than 10 years of ME-NBI experience were assigned to the senior group, and the other 5 endoscopists with 12-24 months of NBI experience to the junior group. Initially, both senior and junior endoscopists were asked to make classification decisions (EGC or non-EGC) on the same ITC without knowing any patient data or pathological outcome. To explore the model's assistive value, after at least 2 weeks all endoscopists were asked to classify the same ITC again, this time referring to the predicted probabilities given by the model and without receiving feedback from the first test. The classification performance of the endoscopists with and without model prompts was then compared. The McNemar test was used to compare differences in accuracy, sensitivity and specificity between groups, while the chi-square test was used to compare PPV and NPV.
Table 3. Example fourfold table for paired dichotomous outcomes:

                    Result 2 positive   Result 2 negative
Result 1 positive          a                   b
Result 1 negative          c                   d

Table 3 tabulates the paired dichotomous-outcome problem. The null hypothesis of the McNemar test is that the marginal probabilities are equal, i.e. p_a + p_b = p_a + p_c and p_d + p_b = p_d + p_c; thus the hypotheses of the McNemar test can be written as H_0: p_b = p_c, H_1: p_b ≠ p_c, and the resulting test statistic is:
$\chi^2=\frac{(b-c)^2}{b+c}$
For the chi-square test on the fourfold table, the statistic is computed as:
$\chi^2=\frac{n(ad-bc)^2}{(a+b)(c+d)(a+c)(b+d)}$, where n = a + b + c + d.
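Both tests can be sketched with SciPy's chi-square survival function; the continuity correction in the McNemar statistic is a common convention assumed here, not stated above:

```python
from scipy.stats import chi2

def mcnemar_test(b, c, correction=True):
    """McNemar test on the discordant cells b and c of a paired fourfold table.

    Uses the continuity-corrected statistic (|b - c| - 1)^2 / (b + c);
    the correction is an assumed convention, not stated in the text.
    """
    diff = abs(b - c) - (1 if correction else 0)
    stat = max(diff, 0) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for an unpaired fourfold table [[a, b], [c, d]]."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return stat, chi2.sf(stat, df=1)
```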
it should be noted that the gastroscope image analysis system based on restoration and selective enhancement provided by the above embodiment is only exemplified by the division of the above functional modules, and in practical applications, the above functions may be allocated to different functional modules according to needs, that is, the modules or steps in the embodiment of the present invention are further decomposed or combined, for example, the modules in the above embodiment may be combined into one module, or may be further split into a plurality of sub-modules, so as to complete all or part of the above described functions. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
A second embodiment of the invention is a gastroscope image analysis method based on restoration and selective enhancement, the method comprising:
step S100, acquiring a narrow-band imaging gastroscope image as the image to be detected;
step S200, obtaining an image to be detected only containing the gastric mucosa by an Otsu threshold method and a contour line detection method based on the image to be detected;
step S300, based on the image to be detected containing only the gastric mucosa, obtaining a non-reflective image to be detected through reflection detection, image restoration, Gaussian blur and weighted superposition;
step S400, generating realistic synthetic images through a trained generative adversarial network based on the non-reflective image to be detected;
step S500, acquiring the early gastric cancer probability of the image to be detected and the image of the suspected early gastric cancer region based on the realistic synthetic images, specifically comprising:
step S510, obtaining the early gastric cancer probability of the image to be detected through a trained gastroscope image recognition network based on the realistic synthetic images;
step S520, acquiring the image of the suspected early gastric cancer region by a gradient-weighted class activation mapping method based on the realistic synthetic images and the gastric cancer probability of the image to be detected.
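Step S200's Otsu thresholding and background removal can be sketched as follows; a bounding-box crop of the foreground stands in for full contour-line detection, and an 8-bit grayscale input is assumed:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: threshold maximizing between-class variance.

    gray: 2-D uint8 (or non-negative integer) image.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    cum = np.cumsum(hist)
    cum_mean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0 = cum[t - 1] / total           # background class weight
        if w0 <= 0.0 or w0 >= 1.0:
            continue                      # skip empty classes
        m0 = cum_mean[t - 1] / cum[t - 1]
        m1 = (cum_mean[-1] - cum_mean[t - 1]) / (total - cum[t - 1])
        var = w0 * (1 - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def crop_mucosa(gray):
    """Crop the bounding box of the foreground (removes the black background).

    A simplification of the contour-detection step: assumes one bright
    foreground region on a dark background.
    """
    fg = gray >= otsu_threshold(gray)
    rows, cols = np.any(fg, axis=1), np.any(fg, axis=0)
    r0, r1 = np.argmax(rows), len(rows) - np.argmax(rows[::-1])
    c0, c1 = np.argmax(cols), len(cols) - np.argmax(cols[::-1])
    return gray[r0:r1, c0:c1]
```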
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiments, and will not be described herein again.
An electronic device according to a third embodiment of the present invention comprises: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the processor, and the instructions are executed by the processor to implement the gastroscope image analysis method based on restoration and selective enhancement described above.
A computer-readable storage medium according to a fourth embodiment of the present invention stores computer instructions for execution by a computer to implement the gastroscope image analysis method based on restoration and selective enhancement described above.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.

Claims (10)

1. A gastroscope image analysis system based on restoration and selective enhancement, the system comprising: an image acquisition module, a preprocessing module, a reflection processing module, an image adversarial synthesis and selective enhancement module, and an image recognition module;
the image acquisition module is configured to acquire a gastroscope image of narrow-band imaging and serve as an image to be detected;
the preprocessing module is configured to obtain an image to be detected only containing the gastric mucosa by an Otsu threshold method and a contour line detection method based on the image to be detected;
the reflection processing module is configured to obtain a non-reflective image to be detected through reflection detection, image restoration, Gaussian blur and weighted superposition, based on the image to be detected containing only the gastric mucosa;
the image adversarial synthesis and selective enhancement module is configured to generate realistic synthetic images through a trained generative adversarial network based on the non-reflective image to be detected;
the image identification module comprises a probability prediction unit and a region display unit;
the probability prediction unit obtains the early gastric cancer probability of the image to be detected through a trained gastroscope image recognition network based on the realistic synthetic images;
and the region display unit acquires the image of the suspected early gastric cancer region by a gradient-weighted class activation mapping method based on the realistic synthetic images and the gastric cancer probability of the image to be detected.
2. The gastroscope image analysis system based on restoration and selective enhancement according to claim 1, wherein the reflection processing module comprises a reflection detection unit, a reflection restoration and Gaussian blur unit, and a weighted superposition unit;
the reflection detection unit is configured to detect the highlight portion of the image to be detected I_ori containing only the gastric mucosa through a color-balance adaptive threshold;
converting the image to be detected into a grayscale image:
Gray = 0.2989·R + 0.5870·G + 0.1140·B,
where R, G and B are the red, green and blue channels of the image to be detected I_ori, and Gray is the corresponding grayscale image;
respectively computing the ratio of the 95th percentile of the green-channel intensity to that of the grayscale image, and the ratio of the 95th percentile of the blue-channel intensity to that of the grayscale image, to obtain the thresholds for reflection detection:
$R_{g\text{-}gray}=\frac{P_{95}(G)}{P_{95}(Gray)},\qquad R_{b\text{-}gray}=\frac{P_{95}(B)}{P_{95}(Gray)}$
setting each pixel n of the image to be detected I_ori that satisfies any one of the following reflective-point conditions as a reflective point:
condition one: G(n) > T·R_g-gray; condition two: B(n) > T·R_b-gray; condition three: R(n) > T; where G(n), B(n) and R(n) denote the pixel values of pixel n of the image to be detected on the green, blue and red channels respectively, and T is a hyper-parameter; the pixels of the image to be detected identified as reflective points are set to 0 and the remaining pixels to 1, giving the reflective-region mask M_s of the image to be detected I_ori;
the reflection restoration unit is configured to obtain the reflective region through the reflective-region mask, fill the reflective points based on the average value of the pixels surrounding the reflective region to obtain the filled image to be detected I_p, and perform Gaussian blur on I_p with a Gaussian convolution kernel to obtain the smooth image I_s:
I_s = F_gaussian * I_p, where F_gaussian is the Gaussian convolution kernel;
the weighted superposition unit is configured to mean-filter the reflective-region mask to compute the smooth-image weight w, and then take the weighted sum of the original image to be detected I_ori and the corresponding smooth image to obtain the non-reflective image to be detected:
w = M_s * F_mean
I_inpainted = w·I_s + (1 − w)·I_ori
where F_mean is the mean filter, I_s the smooth image, I_ori the original image to be detected, and w the smooth-image weight.
3. The gastroscope image analysis system based on restoration and selective enhancement according to claim 1, wherein the image adversarial synthesis and selective enhancement module specifically inputs the non-reflective image to be detected together with noise into a generative adversarial network to obtain a synthetic image:
$\min_G \max_D V(D,G)=\mathbb{E}_{(x,y)\sim P_{data}}[\log D(x,y)]+\mathbb{E}_{z\sim N}[\log(1-D(G(z,y),y))]$
where $V(D,G)$ represents the degree of difference between the real and generated samples, G denotes the generator, D the discriminator, (x, y) the real image and its label, (z, y) the random noise and its label, and E the expectation over the probability distribution; $x\sim P_{data}$ means the real data x obey the distribution $P_{data}$, and $z\sim N$ means the noise obeys a uniform distribution; based on the synthetic image, the feature-extraction network of the pre-trained network is run K times with a Monte Carlo dropout method, and the normalized Euclidean distance $D_f$ between the features of the synthetic image and the feature centroid of the non-reflective images to be detected is computed:
$D_f(x)=\frac{1}{K}\sum_{k=1}^{K}\sum_{l=1}^{L}\frac{1}{H_l W_l}\left\|\hat{a}_l^k(x)-c_i\right\|_2$
where $H_l\times W_l$ is the size of layer l of the feature-extraction network, x is the synthetic image, $c_i$ is the centroid of the i-th class, $\hat{a}_l^k(x)$ is the normalized activation of layer l in the k-th run of the feature-extraction network, and K is the total number of runs;
the feature centroid $c_i$ of the non-reflective images to be detected is computed as:
$c_i=\frac{1}{N_i}\sum_{j=1}^{N_i}\hat{a}_l(x_j)$
where $\hat{a}_l(x_j)$ is the activation of layer l of the feature-extraction network, L is the total number of layers of the feature network, $N_i$ is the number of class-i training samples, and $x_j$ is the j-th sample;
the synthetic images whose Euclidean distance $D_f$ is smaller than a preset realism threshold are selected as the realistic synthetic images.
4. The gastroscopic image analysis system based on restoration and selective enhancement according to claim 1 and characterized in that the trained gastroscopic image recognition network is obtained by:
step S10, acquiring training set images;
step S20, acquiring the realistic synthetic images of the training set by the methods of the preprocessing module, the reflection processing module and the image adversarial synthesis and selective enhancement module;
step S30, inputting the reflection-processed real images of the training set and their realistic synthetic images into a VGG-19 pre-trained on ILSVRC-2012 to generate discriminative feature vectors, obtaining classification results through a fully connected layer, iteratively training on the first training set by transfer learning, and updating the model parameters until the cross-entropy loss on the second training set has not decreased for a preset number of consecutive iterations, yielding the trained gastroscope image recognition network.
5. The gastroscope image analysis system based on restoration and selective enhancement according to claim 1, wherein the region display unit specifically computes, using the Grad-CAM algorithm (gradient-weighted class activation mapping), the weight α of each feature map for the corresponding class of the image:
$\alpha_k^c=\frac{1}{Z}\sum_{i}\sum_{j}\frac{\partial y^c}{\partial A_{ij}^k}$
where $\alpha_k^c$ is the weight of the k-th feature map for class c, Z is the number of pixels in the feature map, $y^c$ is the predicted probability of the corresponding class c, and $A_{ij}^k$ is the pixel at position (i, j) in the k-th feature map;
the feature maps are weighted by the class weights, summed, and passed through a ReLU activation function to obtain the corresponding activation map:
$L_{Grad\text{-}CAM}^{c}=\mathrm{ReLU}\left(\sum_{k}\alpha_k^c A^k\right)$
where $L_{Grad\text{-}CAM}^{c}$ is the class-c activation map of the input image.
6. the gastroscopic image analysis system based on restoration and selectivity enhancement according to claim 1 and wherein the preprocessing module includes a text removal unit, a background removal unit;
the text removal unit is configured to remove the patient text data and irrelevant parts from the image to be detected through the Otsu threshold algorithm;
the background removal unit is configured to detect the valid gastric mucosa portion through a contour detection algorithm and remove the black background of the image to be detected, obtaining a low-interference image to be detected containing only the gastric mucosa.
7. The gastroscope image analysis system based on restoration and selective enhancement according to claim 4, wherein, during training, the gastroscope image recognition network divides the training-set images into a first training set and a second training set; each training iteration feeds 16 images of the first training set into the pre-trained VGG-19, using an Adam optimizer with an initial learning rate of 10^-6 and a weight decay of 0.01; the second training set is validated once every 10 iterations, and a model-training early-stopping strategy with a tolerance of 10 is applied.
8. A gastroscopic image analysis method based on restoration and selectivity enhancement, the method comprising:
step S100, acquiring a narrow-band imaging gastroscope image as the image to be detected;
step S200, obtaining an image to be detected only containing the gastric mucosa by an Otsu threshold method and a contour line detection method based on the image to be detected;
step S300, based on the image to be detected containing only the gastric mucosa, obtaining a non-reflective image to be detected through reflection detection, image restoration, Gaussian blur and weighted superposition;
step S400, generating realistic synthetic images through a trained generative adversarial network based on the non-reflective image to be detected;
step S500, acquiring the early gastric cancer probability of the image to be detected and the image of the suspected early gastric cancer region based on the realistic synthetic images, specifically comprising:
step S510, obtaining the early gastric cancer probability of the image to be detected through a trained gastroscope image recognition network based on the realistic synthetic images;
step S520, acquiring the image of the suspected early gastric cancer region by a gradient-weighted class activation mapping method based on the realistic synthetic images and the gastric cancer probability of the image to be detected.
9. An electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the processor, and the instructions are executed by the processor to implement the gastroscope image analysis method based on restoration and selective enhancement of claim 8.
10. A computer readable storage medium storing computer instructions for execution by the computer to implement the restoration and selective enhancement based gastroscopic image analysis method of claim 8.
CN202110517412.2A 2021-05-12 2021-05-12 Gastroscope image analysis system, method and equipment based on restoration and selective enhancement Active CN113256572B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110517412.2A CN113256572B (en) 2021-05-12 2021-05-12 Gastroscope image analysis system, method and equipment based on restoration and selective enhancement


Publications (2)

Publication Number Publication Date
CN113256572A true CN113256572A (en) 2021-08-13
CN113256572B CN113256572B (en) 2023-04-07

Family

ID=77223177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110517412.2A Active CN113256572B (en) 2021-05-12 2021-05-12 Gastroscope image analysis system, method and equipment based on restoration and selective enhancement

Country Status (1)

Country Link
CN (1) CN113256572B (en)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10699163B1 (en) * 2017-08-18 2020-06-30 Massachusetts Institute Of Technology Methods and apparatus for classification
CN108852268A (en) * 2018-04-23 2018-11-23 浙江大学 Digestive endoscopy image abnormal feature real-time annotation system and method
CN109584218A (en) * 2018-11-15 2019-04-05 首都医科大学附属北京友谊医院 Construction method of a gastric cancer image recognition model and its application
CN110189303A (en) * 2019-05-07 2019-08-30 上海珍灵医疗科技有限公司 NBI image processing method based on deep learning and image enhancement, and its application
CN110472676A (en) * 2019-08-05 2019-11-19 首都医科大学附属北京朝阳医院 Early gastric cancer tissue image classification system based on deep neural network
CN111882509A (en) * 2020-06-04 2020-11-03 江苏大学 Medical image data generation and detection method based on generative adversarial network
CN111985536A (en) * 2020-07-17 2020-11-24 万达信息股份有限公司 Gastroscope pathological image classification method based on weakly supervised learning
CN112101451A (en) * 2020-09-14 2020-12-18 北京联合大学 Breast cancer histopathology classification method based on generative adversarial network screening of image patches
CN112364915A (en) * 2020-11-10 2021-02-12 浙江科技学院 Imperceptible adversarial patch generation method and application

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
IAN J. GOODFELLOW et al.: "Generative Adversarial Nets", arXiv *
WEN Tingdong et al.: "A Survey of Computer-Aided Analysis of Early Gastric Cancer in Gastroscopy", Computer Engineering and Applications *
CHENG Cheng: "A Colonoscopy Image Classification System Based on Multi-Feature Fusion and Saliency Detection", China Master's Theses Full-text Database, Medicine and Health Sciences *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115938546A (en) * 2023-02-21 2023-04-07 四川大学华西医院 Early gastric cancer image synthesis method, system, equipment and storage medium
CN116703798A (en) * 2023-08-08 2023-09-05 西南科技大学 Multi-modal esophageal endoscopic image enhancement and fusion method based on adaptive interference suppression
CN116703798B (en) * 2023-08-08 2023-10-13 西南科技大学 Multi-modal esophageal endoscopic image enhancement and fusion method based on adaptive interference suppression

Also Published As

Publication number Publication date
CN113256572B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN111310862B (en) Image enhancement-based deep neural network license plate positioning method in complex environment
CN110517256B (en) Early cancer auxiliary diagnosis system based on artificial intelligence
WO2023015743A1 (en) Lesion detection model training method, and method for recognizing lesion in image
TW201941217A (en) System and method for diagnosing gastrointestinal neoplasm
CN109191476A (en) Automatic segmentation of biomedical images based on the U-net network structure
CN110288555B (en) Low-illumination enhancement method based on improved capsule network
CN112102256B (en) Narrow-band endoscopic image-oriented cancer focus detection and diagnosis system for early esophageal squamous carcinoma
CN110060237A (en) Fault detection method, apparatus, device and system
CN110765865B (en) Underwater target detection method based on improved YOLO algorithm
CN113256572B (en) Gastroscope image analysis system, method and equipment based on restoration and selective enhancement
Xia et al. A multi-scale segmentation-to-classification network for tiny microaneurysm detection in fundus images
CN110647802A (en) Remote sensing image ship target detection method based on deep learning
CN112466466B (en) Digestive tract auxiliary detection method and device based on deep learning and computing equipment
CN111524144A (en) Intelligent pulmonary nodule diagnosis method based on GAN and Unet network
CN113610118B (en) Glaucoma diagnosis method, apparatus and device based on multi-task curriculum learning
CN114529516A (en) Pulmonary nodule detection and classification method based on multi-attention and multi-task feature fusion
CN116012291A (en) Industrial part image defect detection method and system, electronic equipment and storage medium
CN117576079A (en) Industrial product surface abnormality detection method, device and system
CN116168240A (en) Arbitrary-direction dense ship target detection method based on attention enhancement
CN117058079A (en) Thyroid imaging image automatic diagnosis method based on improved ResNet model
CN108447066B (en) Biliary tract image segmentation method, terminal and storage medium
CN115880266A (en) Intestinal polyp detection system and method based on deep learning
CN114067159A (en) EUS-based fine-granularity classification method for submucosal tumors
CN113971764A (en) Remote sensing image small target detection method based on improved YOLOv3
US20240071057A1 (en) Microscopy System and Method for Testing a Sensitivity of an Image Processing Model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant