CN110189303B - NBI image processing method based on deep learning and image enhancement and application thereof - Google Patents

NBI image processing method based on deep learning and image enhancement and application thereof

Info

Publication number
CN110189303B
CN110189303B (application CN201910375216.9A)
Authority
CN
China
Prior art keywords
image
nbi
difference
original
texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910375216.9A
Other languages
Chinese (zh)
Other versions
CN110189303A (en)
Inventor
胡珊 (Hu Shan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Endoangel Medical Technology Co Ltd
Original Assignee
Wuhan Endoangel Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Endoangel Medical Technology Co Ltd filed Critical Wuhan Endoangel Medical Technology Co Ltd
Priority to CN201910375216.9A priority Critical patent/CN110189303B/en
Publication of CN110189303A publication Critical patent/CN110189303A/en
Priority to PCT/CN2019/106030 priority patent/WO2020224153A1/en
Application granted granted Critical
Publication of CN110189303B publication Critical patent/CN110189303B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/41Analysis of texture based on statistical description of texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30092Stomach; Gastric

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Probability & Statistics with Applications (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an NBI image processing method based on deep learning and image enhancement, and an application thereof. Features of an NBI image such as microvessels and microstructures are extracted using a deep learning algorithm and image enhancement technology, and the characterized image is presented to the endoscopist, overcoming the bottleneck of the prior art and enabling the doctor, with the help of artificial intelligence, to give more accurate auxiliary diagnosis opinions for early cancer under NBI. In images processed by the method of the invention, the lesion area in an early gastric cancer image is strengthened and the boundary between the lesion and the normal area is highlighted, so the lesion becomes clearer. During diagnosis, the doctor can refer to the processed image to assist in judging whether the patient has early cancer, avoiding missed lesions caused by an overly fast gastroscopy or fatigued operation.

Description

NBI image processing method based on deep learning and image enhancement and application thereof
Technical Field
The invention belongs to the field of medical detection assistance, and particularly relates to an early gastric cancer auxiliary diagnosis method based on artificial intelligence.
Background
Gastric cancer is one of the most common malignant tumors in China, and its incidence ranks first among digestive system tumors. In 2015, China had about 679,000 new gastric cancer cases and 498,000 gastric cancer deaths, accounting for roughly one fifth of all cancer deaths. A key reason malignant tumors endanger human health is that they are difficult to detect early. If diagnosed at an early stage, digestive tract tumors can have a 5-year survival rate of more than 90%, whereas the rate drops to only 5-25% once the disease has progressed to a middle or late stage. Therefore, early diagnosis is an important strategy for improving patient survival.
Endoscopy is the most commonly used and most powerful tool for finding early gastric cancer. Ordinary white-light endoscopy with biopsy is the main means of detecting early gastric cancer and has the advantages of being simple, convenient and intuitive. However, because the changes caused by an early cancer lesion are usually subtle and have no specific appearance under white light, the lesion is difficult to distinguish from normal mucosa and from benign lesions such as erosion and ulcer, so sensitivity and specificity are low and missed diagnoses occur easily. In recent years, as key technologies such as optical filters and endoscopic magnification have matured, narrow-band imaging (NBI) and magnification endoscopy (ME) have developed rapidly. A magnifying endoscope can magnify the endoscopic image by tens to hundreds of times and clearly display changes in the fine structures of the digestive tract mucosa, such as gland duct openings. A narrow-band imaging endoscope uses a filter to remove the broadband components of the red, green and blue light emitted by the endoscope light source, retaining only narrow bands of blue light (in the 400 nm range) and green light; these wavelengths are strongly absorbed by hemoglobin, so the superficial microvasculature and fine structure of the mucosa are displayed clearly. Magnifying endoscopy combined with narrow-band imaging (ME-NBI) allows an endoscopist to observe the surface capillary morphology and fine surface structure of the gastric mucosa more clearly, greatly improving the accuracy of gastroscopic diagnosis of early gastric cancer. However, the diagnostic criteria for early gastric cancer under ME-NBI are very complex and lesion morphology varies widely, so only endoscopists with strong knowledge reserves and rich experience can use the technique to diagnose early gastric cancer. Under the current conditions of a large population base and a shortage of medical resources in China, the complexity of ME-NBI diagnosis greatly restricts its ability to find early gastric cancer.
In recent years, the rapid development of science and technology has brought a new wave of interest in artificial intelligence. With the successful testing of self-driving cars and AlphaGo defeating the world Go champion, artificial intelligence has entered the public view within just a few years. In the medical industry, artificial intelligence research has mainly focused on static image reading: a machine learns from a large number of lesion images and normal images labeled by doctors, summarizes and generalizes the features of the lesions, and then actively identifies similar lesions in unseen images. Successful cases include the classification of skin cancer, the detection of lung nodules, and so on. However, this approach has certain limitations: first, it requires the target lesion to have features that are clearly distinguishable from normal tissue and from other lesions; second, it requires that the images fed to the machine for learning be accurately classified and free of data pollution. The same approach has been used to train machines to recognize early gastric cancer; however, because the features of early gastric cancer lesions are complex and variable and resemble those of various benign lesions, misjudgments such as false alarms and misidentifications occur easily.
Based on the above, the invention proposes an artificial-intelligence-based NBI image processing method and uses the processed NBI images for the auxiliary diagnosis of early gastric cancer. The method extracts features such as microvessels and microstructures from the NBI image using a deep learning algorithm and image enhancement technology and presents the characterized image to the endoscopist, thereby overcoming the bottleneck of the prior art and enabling the doctor, with the help of artificial intelligence, to give more accurate auxiliary diagnosis opinions for early cancer under NBI.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: extract features of the NBI image such as microvessels and microstructures using a deep learning algorithm and image enhancement technology, and present the characterized image to the endoscopist, thereby overcoming the bottleneck of the prior art and enabling the doctor, with the help of artificial intelligence, to give more accurate auxiliary diagnosis opinions for early cancer under NBI.
In order to achieve the above object, the present invention adopts an NBI image processing method based on deep learning and image enhancement, which specifically includes the following steps:
step S1, collecting a large number of magnified NBI images of early gastric cancer and of non-cancerous tissue;
step S2, having a professional physician label the white areas and blood vessels in each image, converting the original NBI image, whose background and structure are complex, into a simple line-drawing-style image with clear features, thereby obtaining an annotated image;
step S3, inputting the original NBI images and the annotated images into a deep convolutional neural network model for training, wherein the model continuously computes the salient information features between the original image and the annotated image, including the texture difference L_texture, the content difference L_content, the color difference L_color and the overall difference L_tv, weights these differences to obtain a total loss function value, and thereby learns the mapping relation from the original NBI image to the annotated image;
step S4, obtaining the target image of an image to be processed based on the mapping relation, and mapping each pixel point into a one-dimensional array composed of numbers;
and step S5, adjusting the RGB color space of the target image so that different numbers in the array are displayed with different color intensities, obtaining a gastric mucosa image in which the blood vessels and surface structures are enhanced and the remaining background is hidden.
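For illustration only, the steps above could be wired together as in the following minimal Python sketch; the training loop style, the function names (train_mapping, compute_total_loss, to_array) and the tensor layout are assumptions made for readability and are not part of the claimed method.

import torch

def train_mapping(model, optimizer, pairs, compute_total_loss):
    """Steps S1-S3: learn the mapping from original NBI images to physician-annotated images."""
    for original, annotated in pairs:                    # S1: collected magnified NBI image pairs
        enhanced = model(original)                       # F_W(I_S): candidate annotation-style image
        loss = compute_total_loss(enhanced, annotated)   # weighted texture/content/color/tv loss (S3)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def to_array(model, image):
    """Step S4: map a new image with the trained model and flatten it to a 1-D numeric array."""
    with torch.no_grad():
        target = model(image)                            # target image via the learned mapping
    return target.cpu().numpy().reshape(-1)              # step S5 (recoloring) is sketched further below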
Further, the specific implementation of step S3 is as follows:
step S31, for the texture difference L_texture, a separate adversarial CNN discriminator is trained; L_texture is calculated as follows:
L_texture = -Σ_i log D(F_W(I_S), I_t)   (1)
wherein I_S is the original NBI image, I_t is the physician-annotated image, i is the number of I_S and I_t image pairs, F_W and F_W(I_S) respectively denote the image enhancement function and the enhanced image it produces, and D is the discriminator;
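As a non-authoritative illustration of formula (1), the adversarial texture loss could be computed as below, assuming a separately trained CNN discriminator D that returns, for an (enhanced, annotated) pair, the probability that the enhanced image carries annotation-style texture; the function name and the eps safeguard are illustrative additions, not prescribed by the patent.

import torch

def texture_loss(discriminator, enhanced, annotated):
    """Adversarial texture loss, roughly -sum_i log D(F_W(I_S), I_t), cf. formula (1)."""
    prob = discriminator(enhanced, annotated)   # D(F_W(I_S), I_t), values assumed in (0, 1)
    eps = 1e-8                                  # numerical safety for the logarithm
    return -torch.log(prob + eps).sum()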
step S32, the content difference L_content is defined according to the activation maps generated by a ReLU layer of a pre-trained VGG-19 network:
L_content = (1/(C_j·H_j·W_j))·||ψ_j(F_W(I_S)) - ψ_j(I_t)||   (2)
wherein C_j, H_j and W_j respectively represent the number, height and width of I_t and of the enhanced image F_W(I_S), and ψ_j is the feature map after the j-th convolution;
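A minimal sketch of formula (2), assuming the pre-trained VGG-19 from torchvision is used and the features ψ_j are taken after one of its early ReLU layers; the exact layer cut-off (here the first nine modules of vgg19.features) is an illustrative choice that the patent does not specify.

import torch
from torchvision import models

# Feature extractor psi_j: VGG-19 truncated after an early ReLU layer (illustrative choice).
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:9].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def content_loss(enhanced, annotated):
    """L_content = ||psi_j(F_W(I_S)) - psi_j(I_t)|| / (C_j * H_j * W_j), cf. formula (2)."""
    f_enh, f_ann = vgg(enhanced), vgg(annotated)   # psi_j feature maps, shape (N, C_j, H_j, W_j)
    c, h, w = f_enh.shape[1:]
    return torch.norm(f_enh - f_ann) / (c * h * w)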
step S33, for the color difference L_color, the Euclidean distance between the physician-annotated image and the original NBI image is calculated using a Gaussian blur method, with the following formula:
L_color = ||X_b - Y_b||₂²   (3)
where X_b and Y_b are respectively the Gaussian-blurred values of X (the pixels of the original NBI image) and of Y (the corresponding values in the annotated image), calculated as follows:
G(k, l) = A·exp(-((k - μ_x)²/(2σ_x) + (l - μ_y)²/(2σ_y)))   (4)
the above formula is a Gaussian filter template, where μxIs the mean value of X, σxThe variance of X, A is the weight sum of pixel points, and the obtained result G (k, l) is the filtering template values at k and l;
X_b(k, l) = Σ_(i,j) X(k + i, l + j)·G(i, j)   (5)
that is, the pixel at (k, l) in the original image and its surrounding pixel points are multiplied by the filter template values to obtain the Gaussian blur value X_b; Y_b is obtained by the same method, and substituting both into formula (3) yields L_color;
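Formulas (3) to (5) can be illustrated with NumPy/SciPy as follows: build the Gaussian filter template G(k, l), blur each single-channel image with it, and take the squared Euclidean distance of the blurred results. The template size and variance are placeholder values; A = 0.035 follows the provisional value mentioned later in the detailed description.

import numpy as np
from scipy.signal import convolve2d

def gaussian_template(size=5, variance=3.0, A=0.035):
    """G(k, l) = A * exp(-((k - mu)^2/(2*var) + (l - mu)^2/(2*var))), cf. formula (4)."""
    mu = size // 2                                   # centre of the template
    k, l = np.mgrid[0:size, 0:size]
    return A * np.exp(-((k - mu) ** 2 / (2 * variance) + (l - mu) ** 2 / (2 * variance)))

def gaussian_blur(img, template):
    """Formula (5): weight each pixel and its neighbours by the template values (single channel)."""
    return convolve2d(img, template, mode="same", boundary="symm")

def color_loss(original, annotated, template):
    """Formula (3): squared Euclidean distance between the blurred images X_b and Y_b."""
    x_b = gaussian_blur(original, template)
    y_b = gaussian_blur(annotated, template)
    return np.sum((x_b - y_b) ** 2)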
step S34, the total variation loss function is calculated to enhance the spatial smoothness of the image, with the following formula:
L_tv = (1/(C·H·W))·||∇_x F_W(I_S) + ∇_y F_W(I_S)||   (6)
step S35, finally, the color difference, texture difference, content difference and overall difference are combined to obtain the overall loss function value,
L_total = L_content + 0.4·L_texture + 0.1·L_color + 400·L_tv   (7)
wherein C, H and W represent the number, height and width of the enhanced image F_W(I_S), and ∇ is the Hamiltonian operator used to differentiate with respect to X and Y.
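For illustration, formulas (6) and (7) can be sketched in Python as follows; tv_loss operates on a batched image tensor and total_loss simply combines precomputed loss components with the weights of formula (7). The function names and the (batch, channels, height, width) tensor layout are assumptions, not prescribed by the patent.

import torch

def tv_loss(enhanced):
    """Total variation loss, cf. formula (6): discourages abrupt changes between neighbouring pixels."""
    _, c, h, w = enhanced.shape                           # enhanced = F_W(I_S), shape (N, C, H, W)
    dx = enhanced[:, :, :, 1:] - enhanced[:, :, :, :-1]   # horizontal differences (nabla_x)
    dy = enhanced[:, :, 1:, :] - enhanced[:, :, :-1, :]   # vertical differences (nabla_y)
    return (torch.norm(dx) + torch.norm(dy)) / (c * h * w)

def total_loss(l_content, l_texture, l_color, l_tv):
    """Formula (7): L_total = L_content + 0.4*L_texture + 0.1*L_color + 400*L_tv."""
    return l_content + 0.4 * l_texture + 0.1 * l_color + 400 * l_tv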
Further, the mapping in step S4 is implemented by separating the RGB channels with the Image module of the Python PIL package and then converting the image into a one-dimensional array of numbers by the reshape method.
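The step-S4 mapping described above might, for example, be carried out as in the following sketch, which separates the RGB channels with PIL's Image module and flattens each channel with reshape; the file name is a placeholder.

import numpy as np
from PIL import Image

# Load the target image produced by the model (path is a placeholder).
img = Image.open("target_image.png").convert("RGB")

# RGB channel separation using the PIL Image module.
r, g, b = img.split()

# Convert each channel into a one-dimensional array of numbers via reshape.
r_array = np.asarray(r).reshape(-1)
g_array = np.asarray(g).reshape(-1)
b_array = np.asarray(b).reshape(-1)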
The invention also provides an application, in early gastric cancer diagnosis, of the NBI image based on deep learning and image enhancement, the NBI image being obtained by the above technical scheme.
The invention has the following beneficial effects: by extracting features such as the microvessels and microstructures of lesions from the NBI image, on the one hand a reference is provided for the endoscopist to independently judge the nature of the lesion; on the other hand, the prediction problem of the artificial intelligence model is circumvented, so that, compared with the prior art, more accurate auxiliary diagnosis opinions can be given for early cancer under NBI.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Fig. 2 is a schematic diagram of the annotation by a professional physician (i.e., the original NBI image and the physician-annotated image).
Fig. 3 is a schematic diagram of the processing of the present invention (i.e., the original NBI image and the image processed by the present invention).
Detailed Description
The technical solution of the present invention is further explained with reference to the drawings and the embodiments.
As shown in Fig. 1, the invention provides an NBI image processing method based on deep learning and image enhancement, comprising the following steps:
step S1, collecting a large number of magnified NBI images of early gastric cancer and of non-cancerous tissue;
step S2, having a professional physician label the white areas and blood vessels in each image, so as to convert the original NBI image, whose background and structure are complex, into a simple line-drawing-style image with clear features (the annotated image), as shown in Fig. 2;
step S3, inputting the original NBI images and the annotated images into a deep convolutional neural network model for training, wherein the model continuously computes the salient information features between the original image and the annotated image (such as the texture difference L_texture, the content difference L_content, the color difference L_color and the overall difference L_tv); the final effect is that the model automatically completes the mapping from the original NBI image to the target image (a machine-generated image resembling the physician's annotation), and the final result is shown in Fig. 3.
Step S31, for the texture difference L_texture, a separate adversarial CNN discriminator is trained; L_texture is calculated as follows:
L_texture = -Σ_i log D(F_W(I_S), I_t)   (1)
wherein I_S is the original NBI image, I_t is the physician-annotated image, i is the number of I_S and I_t image pairs, and F_W and F_W(I_S) respectively denote the image enhancement function and the enhanced image it produces; in this embodiment F_W can be customized according to the precision requirement, and D is the discriminator;
Step S32, the content difference L_content is defined according to the activation maps generated by a ReLU layer of a pre-trained VGG-19 network [1]:
L_content = (1/(C_j·H_j·W_j))·||ψ_j(F_W(I_S)) - ψ_j(I_t)||   (2)
where C_j, H_j and W_j respectively represent the number, height and width of I_t and of the enhanced image F_W(I_S), and ψ_j is the feature map after the j-th convolution.
Step S33, for the color difference L_color, the Euclidean distance between the physician-annotated image and the original NBI image is calculated using a Gaussian blur method, with the following formula:
L_color = ||X_b - Y_b||₂²   (3)
where X_b and Y_b are respectively the Gaussian-blurred values of X (the pixels of the original NBI image) and of Y (the corresponding values in the annotated image), calculated as follows:
G(k, l) = A·exp(-((k - μ_x)²/(2σ_x) + (l - μ_y)²/(2σ_y)))   (4)
the above formula is a Gaussian filter template, where μxIs the mean value of X, σxIs the variance of X, and a is the sum of the weights of the pixels, which can be temporarily set to 0.035, in order to make the sum of the weights of the pixels in the region equal to 1, thereby keeping the image brightness unchanged. The obtained result G (k, l) is a filter template value at k, l.
X_b(k, l) = Σ_(i,j) X(k + i, l + j)·G(i, j)   (5)
The pixel at (k, l) in the source image and its surrounding pixel points are multiplied by the filter template values to obtain the Gaussian blur value X_b; Y_b can be obtained in the same way and substituted, together with X_b, into formula (3) to obtain L_color.
Step S34, the total variation loss function is calculated to enhance the spatial smoothness of the image, with the following formula:
L_tv = (1/(C·H·W))·||∇_x F_W(I_S) + ∇_y F_W(I_S)||   (6)
step S35, finally, combining the color, texture, content and overall difference to obtain the overall loss function value:
L_total = L_content + 0.4·L_texture + 0.1·L_color + 400·L_tv   (7)
wherein C, H and W represent the number, height and width of the enhanced image F_W(I_S) (note that CHW refers only to the enhanced image, whereas C_j, H_j and W_j are parameters of I_t and of the enhanced image F_W(I_S); in practice the CHW values of the original image, I_t and the enhanced image are all correspondingly the same and are distinguished only by the subscripts);
and ∇ is the Hamiltonian operator that differentiates with respect to X and Y.
Step S4, the target image of the image to be processed is obtained based on the mapping relation, and each pixel point in the target image is mapped to a number: the RGB channels are separated using the Image module of the Python PIL package, and the image is then converted into a one-dimensional array of numbers by the reshape method;
and Step S5, by adjusting the RGB mode of the picture, different numbers in the array are displayed with different color intensities, and a gastric mucosa image is obtained in which the blood vessels and surface structures are enhanced and the remaining background is hidden.
In RGB mode each color channel is represented by an 8-bit binary value in the interval [0, 255], the so-called gray scale, where 0 means no brightness (black) and 255 the maximum achievable brightness (white). The brightness of a specified pixel point can therefore be adjusted by changing its number in the array.
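As an illustration of step S5 and the 0-255 brightness adjustment just described, the following sketch rescales the numbers in a channel array so that suspected lesion pixels are pushed toward white while the background is dimmed; the threshold and the scaling factors are purely hypothetical choices, not values given by the patent.

import numpy as np
from PIL import Image

def recolor(channel_array, width, height, threshold=128):
    """Brighten values above `threshold` toward 255 and dim the rest (illustrative rule only)."""
    arr = channel_array.astype(np.float32)
    arr = np.where(arr >= threshold,
                   np.clip(arr * 1.5, 0, 255),   # enhance vessels / surface structures
                   arr * 0.3)                    # hide the remaining background
    return Image.fromarray(arr.reshape(height, width).astype(np.uint8))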
As shown in Fig. 2 and Fig. 3, the structures of gastric blood vessels and glands are highly similar, and diseased regions among them are difficult to find or delineate. After the image processing, as shown on the right of Fig. 2 and the right of Fig. 3, the lesion region in the early gastric cancer image is strengthened and the boundary between the lesion and the normal region is highlighted and becomes clearer; the white highlighted region is a possible lesion region, and the brighter the color, the more abnormal the region. During diagnosis, the doctor can refer to the processed image to help judge whether the patient has early cancer, avoiding missed lesions caused by an overly fast gastroscopy or fatigued operation.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.
References
[1] Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. ICLR 2015.

Claims (3)

1. An NBI image processing method based on deep learning and image enhancement is characterized by comprising the following steps:
step S1, collecting a large number of magnified NBI images of early gastric cancer and of non-cancerous tissue;
step S2, having a professional physician label the white areas and blood vessels in each image, converting the original NBI image, whose background and structure are complex, into a simple line-drawing-style image with clear features, thereby obtaining an annotated image;
step S3, inputting the original NBI images and the annotated images into a deep convolutional neural network model for training, wherein the model continuously computes the salient information features between the original image and the annotated image, including the texture difference L_texture, the content difference L_content, the color difference L_color and the overall difference L_tv, obtains a total loss function value by weighting these differences, and completes the mapping relation from the original NBI image to the annotated image;
the specific implementation of step S3 is as follows,
step S31, for the texture difference L_texture, a separate adversarial CNN discriminator is trained; L_texture is calculated as follows:
L_texture = -Σ_i log D(F_W(I_S), I_t)   (1)
wherein I_S is the original NBI image, I_t is the physician-annotated image, i is the number of I_S and I_t image pairs, F_W and F_W(I_S) respectively denote the image enhancement function and the enhanced image it produces, and D is the discriminator;
step S32, the content difference L_content is defined according to the activation maps generated by a ReLU layer of a pre-trained VGG-19 network:
L_content = (1/(C_j·H_j·W_j))·||ψ_j(F_W(I_S)) - ψ_j(I_t)||   (2)
wherein C_j, H_j and W_j respectively represent the number, height and width of I_t and of the enhanced image F_W(I_S), and ψ_j is the feature map after the j-th convolution;
step S33, for the color difference L_color, the Euclidean distance between the physician-annotated image and the original NBI image is calculated using a Gaussian blur method, with the following formula:
L_color = ||X_b - Y_b||₂²   (3)
where X_b and Y_b are respectively the Gaussian-blurred values of X (the pixels of the original NBI image) and of Y (the corresponding values in the annotated image), calculated as follows:
G(k, l) = A·exp(-((k - μ_x)²/(2σ_x) + (l - μ_y)²/(2σ_y)))   (4)
the above formula is the Gaussian filter template, where μ_x is the mean of X, σ_x is the variance of X, A is the sum of the pixel weights, and the obtained result G(k, l) is the filter template value at (k, l);
X_b(k, l) = Σ_(i,j) X(k + i, l + j)·G(i, j)   (5)
that is, the pixel at (k, l) in the original image and its surrounding pixel points are multiplied by the filter template values to obtain the Gaussian blur value X_b; Y_b is obtained by the same method, and substituting both into formula (3) yields L_color;
step S34, the total variation loss function is calculated to enhance the spatial smoothness of the image, with the following formula:
L_tv = (1/(C·H·W))·||∇_x F_W(I_S) + ∇_y F_W(I_S)||   (6)
step S35, finally, the color difference, texture difference, content difference and overall difference are combined to obtain the overall loss function value,
L_total = L_content + 0.4·L_texture + 0.1·L_color + 400·L_tv   (7)
wherein C, H and W represent the number, height and width of the enhanced image F_W(I_S), and ∇ is the Hamiltonian operator used to differentiate with respect to X and Y;
step S4, obtaining the target image of an image to be processed based on the mapping relation, and mapping each pixel point into a one-dimensional array composed of numbers;
and step S5, adjusting the RGB color space of the target image so that different numbers in the array are displayed with different color intensities, obtaining a gastric mucosa image in which the blood vessels and surface structures are enhanced and the remaining background is hidden.
2. The NBI image processing method based on deep learning and image enhancement according to claim 1, characterized in that: the mapping in step S4 is implemented by separating the RGB channels with the Image module of the Python PIL package and then converting the image into a one-dimensional array of numbers by the reshape method.
3. The application of an NBI image based on deep learning and image enhancement in early gastric cancer diagnosis is characterized in that: the NBI image is obtained by the method of claim 1 or 2.
CN201910375216.9A 2019-05-07 2019-05-07 NBI image processing method based on deep learning and image enhancement and application thereof Active CN110189303B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910375216.9A CN110189303B (en) 2019-05-07 2019-05-07 NBI image processing method based on deep learning and image enhancement and application thereof
PCT/CN2019/106030 WO2020224153A1 (en) 2019-05-07 2019-09-16 Nbi image processing method based on deep learning and image enhancement, and application thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910375216.9A CN110189303B (en) 2019-05-07 2019-05-07 NBI image processing method based on deep learning and image enhancement and application thereof

Publications (2)

Publication Number Publication Date
CN110189303A CN110189303A (en) 2019-08-30
CN110189303B true CN110189303B (en) 2020-12-25

Family

ID=67715784

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910375216.9A Active CN110189303B (en) 2019-05-07 2019-05-07 NBI image processing method based on deep learning and image enhancement and application thereof

Country Status (2)

Country Link
CN (1) CN110189303B (en)
WO (1) WO2020224153A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110189303B (en) * 2019-05-07 2020-12-25 武汉楚精灵医疗科技有限公司 NBI image processing method based on deep learning and image enhancement and application thereof
CN111899229A (en) * 2020-07-14 2020-11-06 武汉楚精灵医疗科技有限公司 Advanced gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology
CN112435246A (en) * 2020-11-30 2021-03-02 武汉楚精灵医疗科技有限公司 Artificial intelligent diagnosis method for gastric cancer under narrow-band imaging amplification gastroscope
CN112884777B (en) * 2021-01-22 2022-04-12 复旦大学 Multi-modal collaborative esophageal cancer lesion image segmentation system based on self-sampling similarity
CN113256572B (en) * 2021-05-12 2023-04-07 中国科学院自动化研究所 Gastroscope image analysis system, method and equipment based on restoration and selective enhancement
CN114359279B (en) * 2022-03-18 2022-06-03 武汉楚精灵医疗科技有限公司 Image processing method, image processing device, computer equipment and storage medium
CN114359280B (en) * 2022-03-18 2022-06-03 武汉楚精灵医疗科技有限公司 Gastric mucosa image boundary quantification method, device, terminal and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590786A (en) * 2017-09-08 2018-01-16 深圳市唯特视科技有限公司 A kind of image enchancing method based on confrontation learning network
CN108229526A (en) * 2017-06-16 2018-06-29 北京市商汤科技开发有限公司 Network training, image processing method, device, storage medium and electronic equipment
CN109410127A (en) * 2018-09-17 2019-03-01 西安电子科技大学 A kind of image de-noising method based on deep learning and multi-scale image enhancing
CN109447973A (en) * 2018-10-31 2019-03-08 腾讯科技(深圳)有限公司 A kind for the treatment of method and apparatus and system of polyp of colon image

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106156711B (en) * 2015-04-21 2020-06-30 华中科技大学 Text line positioning method and device
CN105962904A (en) * 2016-04-21 2016-09-28 西安工程大学 Human tissue focus detection method based on infrared thermal imaging technology
US10600185B2 (en) * 2017-03-08 2020-03-24 Siemens Healthcare Gmbh Automatic liver segmentation using adversarial image-to-image network
CN108229525B (en) * 2017-05-31 2021-12-28 商汤集团有限公司 Neural network training and image processing method and device, electronic equipment and storage medium
CN108695001A (en) * 2018-07-16 2018-10-23 武汉大学人民医院(湖北省人民医院) A kind of cancer lesion horizon prediction auxiliary system and method based on deep learning
CN108961350B (en) * 2018-07-17 2023-09-19 北京工业大学 Wind painting migration method based on saliency matching
CN110189303B (en) * 2019-05-07 2020-12-25 武汉楚精灵医疗科技有限公司 NBI image processing method based on deep learning and image enhancement and application thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229526A (en) * 2017-06-16 2018-06-29 北京市商汤科技开发有限公司 Network training, image processing method, device, storage medium and electronic equipment
CN107590786A (en) * 2017-09-08 2018-01-16 深圳市唯特视科技有限公司 A kind of image enchancing method based on confrontation learning network
CN109410127A (en) * 2018-09-17 2019-03-01 西安电子科技大学 A kind of image de-noising method based on deep learning and multi-scale image enhancing
CN109447973A (en) * 2018-10-31 2019-03-08 腾讯科技(深圳)有限公司 A kind for the treatment of method and apparatus and system of polyp of colon image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Generative Adversarial Training for MRA Image Synthesis Using Multi-Contrast MRI;Sahin Olut等;《1st Conference on Medical Imaging with Deep Learning (MIDL 2018)》;20180412;1-10 *
High-resolution medical image synthesis using progressively grown generative adversarial networks;Andrew Beers等;《arXiv》;20180509;1-8 *
Application of generative adversarial networks in medical image processing; Pan Dan et al.; Journal of Biomedical Engineering; 2018-12-31; Vol. 35, No. 6; 970-976 *

Also Published As

Publication number Publication date
CN110189303A (en) 2019-08-30
WO2020224153A1 (en) 2020-11-12

Similar Documents

Publication Publication Date Title
CN110189303B (en) NBI image processing method based on deep learning and image enhancement and application thereof
JP6657480B2 (en) Image diagnosis support apparatus, operation method of image diagnosis support apparatus, and image diagnosis support program
CN110600122B (en) Digestive tract image processing method and device and medical system
CN111899229A (en) Advanced gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology
CN109635871B (en) Capsule endoscope image classification method based on multi-feature fusion
US20220172828A1 (en) Endoscopic image display method, apparatus, computer device, and storage medium
CN112435246A (en) Artificial intelligent diagnosis method for gastric cancer under narrow-band imaging amplification gastroscope
Cui et al. Bleeding detection in wireless capsule endoscopy images by support vector classifier
CN115049666B (en) Endoscope virtual biopsy device based on color wavelet covariance depth map model
CN105657580A (en) Capsule endoscopy video summary generation method
Yuan et al. Automatic bleeding frame detection in the wireless capsule endoscopy images
Sun et al. A novel gastric ulcer differentiation system using convolutional neural networks
KR102095730B1 (en) Method for detecting lesion of large intestine disease based on deep learning
Ghosh et al. Block based histogram feature extraction method for bleeding detection in wireless capsule endoscopy
CN112419246B (en) Depth detection network for quantifying esophageal mucosa IPCLs blood vessel morphological distribution
Liedlgruber et al. A summary of research targeted at computer-aided decision support in endoscopy of the gastrointestinal tract
CN116434920A (en) Gastrointestinal epithelial metaplasia progression risk prediction method and device
CN111476312A (en) Method for classifying lesion images based on convolutional neural network
CN112734749A (en) Vocal leukoplakia auxiliary diagnosis system based on convolutional neural network model
CN112950601A (en) Method, system and storage medium for screening pictures for esophageal cancer model training
Cui et al. Detection of lymphangiectasia disease from wireless capsule endoscopy images with adaptive threshold
CN111179264A (en) Method and device for producing restored image of specimen, specimen processing system, and electronic device
Vats et al. SURF-SVM based identification and classification of gastrointestinal diseases in wireless capsule endoscopy
Shi et al. Bleeding fragment localization using time domain information for WCE videos
JP7449004B2 (en) Hyperspectral object image detection method using frequency bands

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20191204

Address after: 430014 Room 001, Building D2, 10 Building, Phase III, Huacheng Avenue, Donghu New Technology Development Zone, Wuhan City, Hubei Province

Applicant after: Wuhan Chujingling Medical Technology Co., Ltd.

Address before: Room 101-77, 1st floor, Building 5, 128 Lane, Linhong Road, Changning District, Shanghai

Applicant before: Shanghai Zhenling Medical Technology Co., Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant