WO2020224153A1 - NBI image processing method based on deep learning and image enhancement, and application thereof - Google Patents
NBI image processing method based on deep learning and image enhancement, and application thereof
- Publication number
- WO2020224153A1 (PCT/CN2019/106030)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- nbi
- difference
- original
- texture
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
- G06T7/41—Analysis of texture based on statistical description of texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30092—Stomach; Gastric
Definitions
- the invention belongs to the field of medical detection assistance, and specifically relates to an artificial intelligence-based early gastric cancer auxiliary diagnosis method.
- Gastric cancer is one of the most common malignant tumors in China, and its incidence ranks first among digestive-system tumors. In 2015 there were 679,000 new cases of gastric cancer and 498,000 deaths in China, accounting for about one fifth of all cancer deaths. The fundamental reason malignant tumors harm human health is that they are difficult to detect early. If a gastrointestinal tumor is diagnosed at an early stage, the patient's 5-year survival rate can exceed 90%; if it progresses to an advanced stage, the 5-year survival rate is only 5-25%. Early diagnosis is therefore an important strategy for improving patient survival.
- Endoscopy is the most commonly used and most powerful tool for detecting early gastric cancer. Ordinary white-light endoscopy plus biopsy is the main method for detecting early gastric cancer, with the advantages of simplicity and intuitiveness. However, because early cancerous lesions are usually mild and have no specific appearance under white light, they are difficult to distinguish from normal mucosa and from benign lesions such as erosions and ulcers; the resulting low sensitivity and specificity easily lead to missed diagnoses. In recent years, as key technologies such as optical filters and endoscopic magnification have matured, narrow-band imaging (NBI) and magnifying endoscopy (ME) have developed rapidly.
- NBI: narrow-band imaging
- ME: magnifying endoscopy
- A magnifying endoscope can magnify the endoscopic image by tens to hundreds of times, clearly showing changes in fine structures of the digestive-tract mucosa such as microvessels and gland openings.
- Narrow-band imaging endoscopy uses optical filters to remove the broadband components of the red, green, and blue light emitted by the endoscope light source, leaving only narrow blue (400-430 nm) and green (535-565 nm) bands. Because hemoglobin strongly absorbs visible light at 415 nm and 540 nm, the capillaries and surface structure of the mucosa can be displayed clearly.
- Narrow-band imaging combined with magnifying endoscopy (ME-NBI) enables endoscopists to observe the surface microvascular morphology and fine surface structure of the gastric mucosa more clearly, greatly improving the accuracy of gastrointestinal endoscopy in diagnosing early gastric cancer.
- However, the diagnostic criteria for early gastric cancer under ME-NBI are very complex, and lesions vary widely in appearance; endoscopists need a strong knowledge base and rich experience to use this technology well for early cancer diagnosis. Given China's large population and shortage of medical resources, the complexity of ME-NBI diagnosis greatly restricts its ability to detect early gastric cancer.
- Accordingly, an artificial intelligence-based NBI image processing method is proposed, with the processed NBI images used to assist the diagnosis of early gastric cancer. The method applies deep learning algorithms and image enhancement technology to extract features such as microvessels and microstructures from NBI images, and presents the characterized images to the endoscopist, overcoming the bottleneck of the existing technology and enabling doctors to give more accurate auxiliary diagnostic opinions on early cancer under NBI.
- The technical problem to be solved by the present invention is to use deep learning algorithms and image enhancement technology to extract features such as microvessels and microstructures from NBI images, present the characterized images to the endoscopist, overcome the bottleneck of the prior art, and use artificial intelligence so that doctors can give more accurate auxiliary diagnostic opinions on early cancer under NBI.
- the present invention adopts an NBI image processing method based on deep learning and image enhancement, which specifically includes the following steps:
- Step S1: collect a large number of magnified NBI images of early gastric cancer or non-cancerous tissue;
- Step S2: a professional physician annotates the white zones and blood vessels in each image, transforming the original NBI image, with its complex background and structure, into a line-drawing image with clear features, yielding the annotated image;
- Step S3: the original NBI images and the annotated images are input into a deep convolutional neural network model for training. The model continuously computes the salient feature differences between the original and annotated images, including the texture difference L_texture, the content difference L_content, the color difference L_color, and the overall (total-variation) difference L_tv, and combines them into a weighted total loss function value to learn the mapping from the original NBI image to the annotated image;
- Step S4: obtain the target image of the image to be processed via the above mapping, and map the target image's pixels into a one-dimensional array of numbers.
- Step S5: by adjusting the RGB color space of the target image, display different numbers in the array as different colors of varying intensity, yielding a gastric-mucosa image in which blood vessels and surface structure are enhanced and the remaining background is hidden.
- The specific implementation of step S3 is as follows:
- Step S31: for the texture difference L_texture, a separate adversarial CNN discriminator is trained; L_texture is computed as follows:
- Here I_S is the original NBI image, I_t is the physician-annotated image, i indexes the (I_S, I_t) image pairs, F_W and F_W(I_S) denote the image enhancement function and the enhanced image it produces, and D is the discriminator;
- Step S32: the content difference L_content is defined from the activation maps produced by the ReLU layers of a pre-trained VGG-19 network; C_j, H_j, and W_j denote the number of channels, height, and width of the feature maps of I_t and the enhanced image F_W(I_S), and ψ_j is the feature map after the j-th convolution;
- Step S33: for the color difference L_color, a Gaussian-blur method is used to compute the Euclidean distance between the physician-annotated image and the original NBI image; X_b and Y_b are the values corresponding to X and Y (the pixel coordinates of the original NBI image) after computation.
- The computation proceeds as follows: the formula above is the Gaussian filter template, where μ_x is the mean of X, σ_x is the variance of X, A is a normalization constant that makes the pixel weights sum to one, and the result G(k,l) is the filter-template value at position (k,l);
- Step S34: the spatial smoothness of the image is enhanced by computing the total variation loss function, with the following formula:
- Step S35: finally, the color, texture, content, and total-variation differences are combined into the total loss function value, L_total = L_content + 0.4·L_texture + 0.1·L_color + 400·L_tv, where C, H, and W denote the number of channels, height, and width of the enhanced image F_W(I_S), and ∇ is the Hamilton (nabla) operator used to differentiate with respect to X and Y.
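As a rough numerical sketch of the weighting in step S35 (the helper names and the placeholder per-term loss values below are illustrative assumptions, not from the patent; only the weights 1, 0.4, 0.1, and 400 come from the total-loss formula, and the total-variation term is computed on a toy array):

```python
import numpy as np

def tv_loss(img):
    """Total-variation term: summed absolute horizontal + vertical
    gradients of a C x H x W image, normalized by C*H*W."""
    c, h, w = img.shape
    dx = img[:, :, 1:] - img[:, :, :-1]   # horizontal differences
    dy = img[:, 1:, :] - img[:, :-1, :]   # vertical differences
    return (np.abs(dx).sum() + np.abs(dy).sum()) / (c * h * w)

def total_loss(l_content, l_texture, l_color, l_tv):
    """Weighted combination from step S35:
    L_total = L_content + 0.4*L_texture + 0.1*L_color + 400*L_tv."""
    return l_content + 0.4 * l_texture + 0.1 * l_color + 400.0 * l_tv

# Toy 3-channel image with one vertical edge.
img = np.zeros((3, 4, 4))
img[:, :, 2:] = 1.0
print(tv_loss(img))                                       # 0.25
print(round(total_loss(1.0, 0.5, 0.2, tv_loss(img)), 6))  # 101.22
```

The large weight on L_tv reflects that raw total-variation values are tiny relative to the content and texture terms.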
- The mapping in step S4 separates the RGB channels using the Image class of the Python PIL package and then converts the image into a one-dimensional array of numbers with a reshape operation.
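The channel separation and flattening described for step S4 can be sketched with numpy alone; the tiny 2x2 image below is a made-up stand-in, and in the patent's pipeline the array would come from a PIL `Image` (e.g. via `Image.split()` or `numpy.array`):

```python
import numpy as np

# Stand-in for a decoded RGB target image (H x W x 3, uint8); in practice
# this would come from PIL, e.g. np.array(Image.open(path).convert("RGB")).
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [10, 20, 30]]], dtype=np.uint8)

# Channel separation (the PIL equivalent is r, g, b = img.split()).
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# Flatten a channel into a one-dimensional array of numbers, one per pixel.
flat = r.reshape(-1)
print(flat.tolist())   # [255, 0, 0, 10]
```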
- the present invention also provides an application of an NBI image based on deep learning and image enhancement in the diagnosis of early gastric cancer.
- the NBI image is obtained through the above technical solution.
- The beneficial effects of the present invention are: by extracting features such as the microvessels and microstructure of lesions in NBI images, the invention, on the one hand, gives the endoscopist a reference for independently judging the nature of a lesion; on the other hand, it turns the artificial-intelligence prediction problem into a closed one, so that, compared with the prior art, more accurate auxiliary diagnostic opinions can be given for early cancer under NBI.
- FIG. 1 is a flowchart of an embodiment of the present invention.
- Figure 2 is a schematic diagram of professional physician annotation (① original NBI image; ② physician-annotated image).
- Figure 3 is a schematic diagram of processing by the method of the present invention (① original NBI image; ② image processed by the method of the present invention).
- an NBI image processing method based on deep learning and image enhancement includes the following steps
- Step S1: collect a large number of magnified NBI images of early gastric cancer or non-cancerous tissue;
- Step S2: a professional physician annotates the white zones and blood vessels in each image, transforming the original NBI image, with its complex background and structure, into a line-drawing image with clear features (the annotated image), as shown in Figure 2;
- Step S3: the original NBI images and the annotated images are input into a deep convolutional neural network model for training. Continuously computing the salient feature differences between the original and annotated images (the texture difference L_texture, the content difference L_content, the color difference L_color, and the overall difference) is essential; the final goal is a model that automatically maps an original NBI image to the target image (the machine imitating the physician-annotated image), with the final result shown in Figure 3.
- Step S31: for the texture difference L_texture, a separate adversarial CNN discriminator is trained; L_texture is computed as follows:
- Here I_S is the original NBI image, I_t is the physician-annotated image, i indexes the (I_S, I_t) image pairs, F_W and F_W(I_S) denote the image enhancement function and the enhanced image it produces (F_W can be customized according to accuracy requirements), and D is the discriminator;
- Step S32: for the content difference L_content, we define it from the activation maps produced by the ReLU layers of a pre-trained VGG-19 network [1]; C_j, H_j, and W_j denote the number of channels, height, and width of the feature maps of I_t and the enhanced image F_W(I_S), and ψ_j is the feature map after the j-th convolution.
- Step S33: for the color difference L_color, we use a Gaussian-blur method to compute the Euclidean distance between the physician-annotated image and the original NBI image; X_b and Y_b are the values corresponding to X and Y (the pixel coordinates of the original NBI image) after computation.
- The computation proceeds as follows: the formula above is the Gaussian filter template, where μ_x is the mean of X, σ_x is the variance of X, and A is a normalization constant (provisionally set to 0.035) whose purpose is to make the weights of the pixels in the region sum to 1, keeping the image brightness unchanged; the result G(k,l) is the filter-template value at position (k,l).
- Multiplying the source-image pixel at (k,l) and its surrounding pixels by the filter-template values gives the Gaussian-blur value X_b; Y_b is obtained in the same way, and substituting both into Equation 3 gives L_color.
- Step S34: the spatial smoothness of the image is enhanced by computing the total variation loss function, with the following formula:
- Step S35: finally, the color, texture, content, and total-variation differences are combined to obtain the overall loss function value:
- C, H, and W denote the number of channels, height, and width of the enhanced image F_W(I_S) (C_j, H_j, W_j are the corresponding parameters of the I_t and F_W(I_S) feature maps; in practice the C, H, W values of the original image, I_t, and F_W(I_S) are the same and are distinguished only by subscripts); ∇ is the Hamilton (nabla) operator used to differentiate with respect to X and Y.
- Step S4: obtain the target image of the image to be processed via the above mapping, and map each pixel of the target image to a number. The mapping separates the RGB channels using the Image class of the Python PIL package and then converts the image into a one-dimensional array of numbers;
- Step S5: by adjusting the RGB mode of the picture, display different numbers in the array as different shades of different colors, obtaining a gastric-mucosa image in which blood vessels and surface structure are enhanced and the remaining background is hidden.
- In RGB mode each channel is represented by an 8-bit binary number, giving a range of [0,255]; a single channel viewed alone is what is commonly called a "grayscale image", where 0 means no brightness (black) and 255 the maximum achievable brightness (white). The brightness of a given pixel can therefore be adjusted by changing the size of the corresponding number in the array.
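A minimal sketch of the shading idea in step S5, where the array values drive the intensity of one display color (the single-channel values and the choice of the red channel here are illustrative assumptions, not the patent's exact color scheme):

```python
import numpy as np

# Hypothetical single-channel output (values in [0, 255]) from step S4.
gray = np.array([[0, 64], [128, 255]], dtype=np.uint8)

# Render value differences as shades of one color: write the array into
# the red channel of an RGB image, so higher values appear as brighter red
# while zero-valued background pixels stay black (i.e. hidden).
rgb = np.zeros(gray.shape + (3,), dtype=np.uint8)
rgb[..., 0] = gray
print(rgb[1, 1].tolist())   # [255, 0, 0]
print(rgb[0, 0].tolist())   # [0, 0, 0]
```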
- In the original NBI image, the structures of gastric blood vessels and glands are very similar, making it difficult to find or delineate the lesion area. After the above image processing, as shown on the right of Figure 2 and the right of Figure 3, the lesion area in the early gastric cancer image is enhanced and the boundary between the lesion and the normal area is highlighted and becomes clearer; the white highlighted region is the possible lesion area, and the brighter the color, the higher the probability that the region is abnormal.
- When making a diagnosis, the doctor can refer to the processed images to help determine whether the patient has early cancer, avoiding a missed lesion caused by the gastroscopy being performed too quickly or by operator fatigue.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Probability & Statistics with Applications (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Public Health (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
- Image Processing (AREA)
Abstract
Provided are an NBI image processing method based on deep learning and image enhancement, and an application thereof. A deep learning algorithm and image enhancement technology are used to extract features such as microvessels and microstructures of an NBI image, and the characterized image is presented to an endoscopist, overcoming the bottleneck of the prior art. The use of artificial intelligence enables a doctor to give a more accurate supplemental diagnostic opinion on early cancer under NBI. In an image processed using the present method, a lesion area in an image of early gastric cancer is enhanced, the boundary between the lesion and the normal area is highlighted and becomes clearer. When making a diagnosis, the doctor may refer to the processed image to supplement the determination of whether the patient suffers from early cancer, preventing erroneous identification of a lesion due to the gastroscopy being too rapid or because of fatigue of the doctor when operating.
Description
The invention belongs to the field of medical detection assistance, and specifically relates to an artificial intelligence-based early gastric cancer auxiliary diagnosis method.
Gastric cancer is one of the most common malignant tumors in China, and its incidence ranks first among digestive-system tumors. In 2015 there were 679,000 new cases of gastric cancer and 498,000 deaths in China, accounting for about one fifth of all cancer deaths. The fundamental reason malignant tumors harm human health is that they are difficult to detect early. If a gastrointestinal tumor is diagnosed at an early stage, the patient's 5-year survival rate can exceed 90%; if it progresses to an advanced stage, the 5-year survival rate is only 5-25%. Early diagnosis is therefore an important strategy for improving patient survival.
Endoscopy is the most commonly used and most powerful tool for detecting early gastric cancer. Ordinary white-light endoscopy plus biopsy is the main method for detecting early gastric cancer, with the advantages of simplicity and intuitiveness. However, because early cancerous lesions are usually mild and have no specific appearance under white light, they are difficult to distinguish from normal mucosa and from benign lesions such as erosions and ulcers; the resulting low sensitivity and specificity easily lead to missed diagnoses. In recent years, as key technologies such as optical filters and endoscopic magnification have matured, narrow-band imaging (NBI) and magnifying endoscopy (ME) have developed rapidly. A magnifying endoscope can magnify the endoscopic image by tens to hundreds of times, clearly showing changes in fine structures of the digestive-tract mucosa such as microvessels and gland openings. Narrow-band imaging endoscopy uses optical filters to remove the broadband components of the red, green, and blue light emitted by the endoscope light source, leaving only narrow blue (400-430 nm) and green (535-565 nm) bands; because hemoglobin strongly absorbs visible light at 415 nm and 540 nm, the capillaries and surface structure of the mucosa can be displayed clearly. Narrow-band imaging combined with magnifying endoscopy (ME-NBI) enables endoscopists to observe the surface microvascular morphology and fine surface structure of the gastric mucosa more clearly, greatly improving the accuracy of gastrointestinal endoscopy in diagnosing early gastric cancer. However, the diagnostic criteria for early gastric cancer under ME-NBI are very complex, and lesions vary widely in appearance; endoscopists need a strong knowledge base and rich experience to use this technology well for early cancer diagnosis. Given China's large population and shortage of medical resources, the complexity of ME-NBI diagnosis greatly restricts its ability to detect early gastric cancer.
In recent years science and technology have developed rapidly, and artificial intelligence has set off a new wave of technology. With the successful testing of self-driving cars and AlphaGo defeating the world Go champion, artificial intelligence has entered the public eye in just a few years. In the medical industry, artificial intelligence research has concentrated mainly on static image reading: the machine learns from a large number of physician-annotated lesion images and normal images, summarizes the characteristics of the lesions, and then actively recognizes similar lesions in unfamiliar images. The more successful cases include the classification and diagnosis of skin cancer and the detection of lung nodules. However, this approach has limitations: it requires, first, that the target lesions be clearly distinguishable from normal tissue and other lesions, and second, that the images fed to the machine for learning be accurately classified and free of data contamination. We initially tried to use this method to train a machine to recognize early gastric cancer; however, because the features of early cancerous lesions are complex and variable and resemble those of various benign lesions, it was prone to false positives and misidentifications.
Based on this, we propose an artificial intelligence-based NBI image processing method, with the processed NBI images used to assist the diagnosis of early gastric cancer. The method applies deep learning algorithms and image enhancement technology to extract features such as microvessels and microstructures from NBI images, presents the characterized images to the endoscopist, overcomes the bottleneck of the existing technology, and uses artificial intelligence to enable doctors to give more accurate auxiliary diagnostic opinions on early cancer under NBI.
Summary of the invention
The technical problem to be solved by the present invention is: to use deep learning algorithms and image enhancement technology to extract features such as microvessels and microstructures from NBI images, present the characterized images to the endoscopist, overcome the bottleneck of the prior art, and use artificial intelligence so that doctors can give more accurate auxiliary diagnostic opinions on early cancer under NBI.
In order to achieve the above objective, the present invention adopts an NBI image processing method based on deep learning and image enhancement, which specifically includes the following steps:
Step S1: collect a large number of magnified NBI images of early gastric cancer or non-cancerous tissue;
Step S2: a professional physician annotates the white zones and blood vessels in each image, transforming the original NBI image, with its complex background and structure, into a line-drawing image with clear features, yielding the annotated image;
Step S3: the original NBI images and the annotated images are input into a deep convolutional neural network model for training. The model continuously computes the salient feature differences between the original and annotated images, including the texture difference L_texture, the content difference L_content, the color difference L_color, and the overall (total-variation) difference L_tv, and combines them into a weighted total loss function value to learn the mapping from the original NBI image to the annotated image;
Step S4: obtain the target image of the image to be processed via the above mapping, and map the target image's pixels into a one-dimensional array of numbers;
Step S5: by adjusting the RGB color space of the target image, display different numbers in the array as different colors of varying intensity, yielding a gastric-mucosa image in which blood vessels and surface structure are enhanced and the remaining background is hidden.
Further, the specific implementation of step S3 is as follows:
Step S31: for the texture difference L_texture, a separate adversarial CNN discriminator is trained; L_texture is computed as follows:
Here I_S is the original NBI image, I_t is the physician-annotated image, i indexes the (I_S, I_t) image pairs, F_W and F_W(I_S) denote the image enhancement function and the enhanced image it produces, and D is the discriminator;
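The equation itself does not survive in this extraction; as an assumed reconstruction consistent with the symbol definitions above, a standard adversarial texture loss of the kind used in DPED-style enhancement networks would be:

```latex
L_{\text{texture}} = -\sum_{i} \log D\bigl(F_{W}(I_{S})\bigr)
```

with the discriminator D trained in parallel to distinguish enhanced images F_W(I_S) from physician-annotated targets I_t.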
Step S32: the content difference L_content is defined from the activation maps produced by the ReLU layers of a pre-trained VGG-19 network;
Here C_j, H_j, and W_j denote the number of channels, height, and width of the feature maps of I_t and the enhanced image F_W(I_S), and ψ_j is the feature map after the j-th convolution;
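The content-loss equation is likewise missing from this text; an assumed reconstruction matching the symbols defined above is the standard VGG perceptual loss, a normalized distance between feature maps:

```latex
L_{\text{content}} = \frac{1}{C_{j} H_{j} W_{j}}
\bigl\| \psi_{j}\bigl(F_{W}(I_{S})\bigr) - \psi_{j}(I_{t}) \bigr\|
```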
Step S33: for the color difference L_color, a Gaussian-blur method is used to compute the Euclidean distance between the physician-annotated image and the original NBI image, with the following formula:
X_b and Y_b are the values corresponding to X and Y (the pixel coordinates of the original NBI image) after computation; the solution process is as follows:
The formula above is the Gaussian filter template, where μ_x is the mean of X, σ_x is the variance of X, A is a normalization constant that makes the pixel weights sum to one, and the result G(k,l) is the filter-template value at position (k,l);
Multiplying the source-image pixel at (k,l) and its surrounding pixels by the filter-template values gives the Gaussian-blur value X_b; Y_b is obtained in the same way, and substituting both into Equation 3 gives L_color;
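The Gaussian-template and color-loss equations are not reproduced in this text; an assumed reconstruction consistent with the description above (the μ_y term is inferred by symmetry with μ_x) would be:

```latex
G(k,l) = A \exp\!\left( -\frac{(k-\mu_{x})^{2} + (l-\mu_{y})^{2}}{2\sigma_{x}^{2}} \right),
\qquad
L_{\text{color}} = \bigl\| X_{b} - Y_{b} \bigr\|_{2}^{2}
```

i.e. the squared Euclidean distance between the Gaussian-blurred annotated image and the Gaussian-blurred original NBI image.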
Step S34: the spatial smoothness of the image is enhanced by computing the total variation loss function, with the following formula:
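The total-variation formula image is missing here; an assumed reconstruction using the C, H, W and ∇ notation defined for step S35 is the standard form:

```latex
L_{\text{tv}} = \frac{1}{C H W}
\bigl\| \nabla_{x} F_{W}(I_{S}) + \nabla_{y} F_{W}(I_{S}) \bigr\|
```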
Step S35: finally, the color, texture, content, and total-variation differences are combined to obtain the total loss function value:

L_total = L_content + 0.4·L_texture + 0.1·L_color + 400·L_tv (7)
Here C, H, and W denote the number of channels, height, and width of the enhanced image F_W(I_S), and ∇ is the Hamilton (nabla) operator used to differentiate with respect to X and Y.
Further, the mapping in step S4 separates the RGB channels using the Image class of the Python PIL package and then converts the image into a one-dimensional array of numbers with a reshape operation.
The present invention also provides an application of an NBI image based on deep learning and image enhancement in the diagnosis of early gastric cancer, the NBI image being obtained through the above technical solution.
The beneficial effects of the present invention are: by extracting features such as the microvessels and microstructure of lesions in NBI images, the invention, on the one hand, gives the endoscopist a reference for independently judging the nature of a lesion; on the other hand, it turns the artificial-intelligence prediction problem into a closed one, so that, compared with the prior art, more accurate auxiliary diagnostic opinions can be given for early cancer under NBI.
Figure 1 is a flowchart of an embodiment of the present invention.
Figure 2 is a schematic diagram of professional physician annotation (① original NBI image; ② physician-annotated image).
Figure 3 is a schematic diagram of processing by the method of the present invention (① original NBI image; ② image processed by the method of the present invention).
The technical scheme of the present invention will be further described below in conjunction with the drawings and embodiments.
As shown in Figure 1, the NBI image processing method based on deep learning and image enhancement provided by the present invention includes the following steps:
Step S1: collect a large number of magnified NBI images of early gastric cancer or non-cancerous tissue;
Step S2: a professional physician annotates the white zones and blood vessels in each image, so that the original NBI image, with its complex background and structure, is transformed into a sketch-like image with clear features (the annotated image), as shown in Figure 2.
Step S3: input the original NBI images and the annotated images into a deep convolutional neural network model for training. It is essential that the model continuously computes the salient information differences between the original image and the annotated image (the texture difference L_texture, content difference L_content, color difference L_color, and overall difference). The goal is a model that automatically maps an original NBI image to the target image (an annotation produced by the machine in imitation of the physician); a final processing result is shown in Figure 3.
Step S31: for the texture difference L_texture, a separate adversarial CNN discriminator is trained, with L_texture computed as follows:
where I_S is the original NBI image, I_t is the physician-annotated image, i indexes the (I_S, I_t) image pairs, F_W is the image enhancement function and F_W(I_S) is the enhanced image it produces (in this embodiment, F_W can be customized according to accuracy requirements), and D is the discriminator;
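The texture-loss formula itself is reproduced as an image in the original publication and does not survive extraction. Assuming the standard adversarial cross-entropy form used in DPED-style enhancement networks (an assumption, not the patent's stated formula), a numerical sketch with placeholder discriminator outputs might look like:

```python
import numpy as np

def texture_loss(d_probs):
    """Adversarial texture loss: negative log-likelihood that the
    discriminator D rates each enhanced image F_W(I_S) as target-like,
    summed over the (I_S, I_t) pairs indexed by i."""
    d_probs = np.clip(np.asarray(d_probs, dtype=np.float64), 1e-12, 1.0)
    return float(-np.sum(np.log(d_probs)))

loss_fooled = texture_loss([1.0, 1.0])   # D fully fooled: loss is 0
loss_unsure = texture_loss([0.5, 0.5])   # D undecided: 2 * ln 2
```

During training, F_W is updated to minimize this quantity while D is updated to distinguish enhanced images from annotated targets.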
Step S32: the content difference L_content is defined in terms of the activation maps generated by the ReLU layers of a pre-trained VGG-19 network [1];
where C_j, H_j, and W_j denote the number, height, and width of the I_t and F_W(I_S) feature maps, respectively, and ψ_j is the feature map after j convolutions.
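The content-loss formula is likewise an image in the original. Assuming the usual normalized squared distance between VGG-19 activation maps (an assumed form, consistent with the C_j, H_j, W_j normalization described above), a sketch might be:

```python
import numpy as np

def content_loss(psi_enhanced, psi_target):
    """Content difference between VGG-19 ReLU activation maps psi_j of the
    enhanced image F_W(I_S) and of the target I_t: squared Euclidean
    distance normalized by C_j * H_j * W_j."""
    a = np.asarray(psi_enhanced, dtype=np.float64)
    b = np.asarray(psi_target, dtype=np.float64)
    c, h, w = a.shape
    return float(np.sum((a - b) ** 2) / (c * h * w))
```

In practice psi_enhanced and psi_target would be the activations of the same VGG-19 layer evaluated on the two images; the toy arrays below merely exercise the arithmetic.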
Step S33: for the color difference L_color, the Gaussian blur method is used to compute the Euclidean distance between the physician-annotated image and the original NBI image, as follows:
where X_b and Y_b are the values in the annotated image corresponding to X and Y (the pixel coordinates of the original NBI image) after the following computation:
The above formula is the Gaussian filter template, where μ_x is the mean of X, σ_x is the variance of X, and A is the sum of the pixel weights, which can provisionally be set to 0.035; its purpose is to make the weights of the pixels in the region sum to 1, so that the image brightness remains unchanged. The result G(k, l) is the value of the filter template at position (k, l).
Multiplying the source pixels at (k, l) and their neighbours by the filter template values gives the Gaussian blur value X_b; Y_b is obtained in the same way, and substituting both into Equation (3) yields L_color.
Step S34: the spatial smoothness of the image is enhanced by computing the total variation loss, as follows:
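The total variation formula is also an image in the original publication. Assuming the standard total variation form over a (C, H, W) image, consistent with the C, H, W normalization the patent describes, a sketch might be:

```python
import numpy as np

def tv_loss(img):
    """Total variation loss over a (C, H, W) image: summed absolute
    horizontal and vertical differences, normalized by C*H*W, penalizing
    non-smooth output as in step S34."""
    img = np.asarray(img, dtype=np.float64)
    c, h, w = img.shape
    dx = np.abs(img[:, :, 1:] - img[:, :, :-1]).sum()
    dy = np.abs(img[:, 1:, :] - img[:, :-1, :]).sum()
    return float((dx + dy) / (c * h * w))
```

A perfectly flat image has zero total variation; any spatial change increases the penalty.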
Step S35: finally, the color, texture, content, and overall differences are combined to obtain the total loss value:
L_total = L_content + 0.4·L_texture + 0.1·L_color + 400·L_tv (7)
where C, H, and W denote the number, height, and width of the enhanced image F_W(I_S) (note that C, H, W are parameters of the enhanced image only, while C_j, H_j, W_j are parameters of the I_t and F_W(I_S) feature maps; in practice the C, H, W values of the original image, I_t, and F_W(I_S) all coincide, and the subscripts merely distinguish them), and ∇_x, ∇_y are the differential (nabla) operators with respect to X and Y.
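The weighted combination of equation (7) is simple enough to state directly in code; the function name is illustrative.

```python
def total_loss(l_content, l_texture, l_color, l_tv):
    """Weighted combination from equation (7):
    L_total = L_content + 0.4*L_texture + 0.1*L_color + 400*L_tv."""
    return l_content + 0.4 * l_texture + 0.1 * l_color + 400.0 * l_tv
```

The large weight on L_tv compensates for its small numeric scale after the C*H*W normalization, so that smoothness still contributes meaningfully to the total.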
Step S4: obtain the target image of the image to be processed based on the above mapping, and map each pixel of the target image to a number; the mapping separates the RGB channels with the Image module of the Python PIL package, and then converts the image into a one-dimensional array of numbers with the reshape method.
Step S5: by adjusting the RGB mode of the image, the different numbers in the array are displayed in colors of different shades, yielding a gastric mucosa image in which the blood vessels and surface structure are enhanced and the remaining background is suppressed.
In RGB mode, color is represented by 8-bit binary numbers in the range [0, 255], which is what is commonly called a "grayscale image": 0 means no brightness (black), and 255 is the maximum attainable brightness (white). The brightness of a given pixel can therefore be adjusted by changing the corresponding number in the array.
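The number-to-brightness display of step S5 can be sketched as a rescaling of the one-dimensional array into [0, 255]; the `numbers_to_grayscale` helper and min-max scaling are assumptions for illustration, since the patent does not specify how the numbers are spread across the range.

```python
import numpy as np

def numbers_to_grayscale(values, shape):
    """Rescale the one-dimensional array of per-pixel numbers into the
    8-bit [0, 255] range so that larger numbers display brighter, then
    reshape it back into an image, as in the step S5 display."""
    arr = np.asarray(values, dtype=np.float64)
    lo, hi = arr.min(), arr.max()
    if hi == lo:
        scaled = np.zeros_like(arr)
    else:
        scaled = (arr - lo) / (hi - lo) * 255.0
    return scaled.astype(np.uint8).reshape(shape)

pixels = numbers_to_grayscale([0, 2, 4, 8], (2, 2))  # brightest pixel: 255
```

The resulting uint8 array can be handed to `PIL.Image.fromarray` for display, with the brightest values marking the most suspicious regions.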
As shown on the left of Figure 2 and the left of Figure 3, gastric blood vessels and glandular structures are very similar, making it difficult to find or delineate the lesion area. After the above image processing, as shown on the right of Figure 2 and the right of Figure 3, the lesion area in the early gastric cancer image is reinforced, and the boundary between the lesion and the normal area is highlighted and becomes clearer: the white highlighted region is the possible lesion area, and the brighter the color, the higher the probability that the region is abnormal. During diagnosis, the physician can consult the processed image as an aid in judging whether the patient has early cancer, avoiding missed lesions caused by an overly fast gastroscopy or operator fatigue.
The specific embodiments described herein merely illustrate the spirit of the present invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute similar means, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.
References
[1] Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. ICLR, 2015.
Claims (4)
- An NBI image processing method based on deep learning and image enhancement, characterized in that it comprises the following steps:
Step S1: collecting a large number of magnified NBI images of early gastric cancer and of non-cancerous tissue;
Step S2: annotating the white zones and blood vessels in each image by a professional physician, so that the original NBI image with a complex background and structure is transformed into a sketch-like image with clear features, obtaining the annotated image;
Step S3: inputting the original NBI images and the annotated images into a deep convolutional neural network model for training, the model being used to continuously compute the salient information differences between the original image and the annotated image, including the texture difference L_texture, the content difference L_content, the color difference L_color, and the overall difference L_tv, and obtaining the total loss value as a weighted combination of these differences, thereby completing the mapping from the original NBI image to the annotated image;
Step S4: obtaining the target image of the image to be processed based on the above mapping, and mapping each pixel into a one-dimensional array of numbers;
Step S5: adjusting the RGB color space of the target image so that the different numbers in the array are displayed in colors of different shades, obtaining a gastric mucosa image in which the blood vessels and surface structure are enhanced and the remaining background is suppressed.
- The NBI image processing method based on deep learning and image enhancement according to claim 1, characterized in that step S3 is implemented as follows:
Step S31: for the texture difference L_texture, training a separate adversarial CNN discriminator, L_texture being computed as follows:
where I_S is the original NBI image, I_t is the physician-annotated image, i indexes the (I_S, I_t) image pairs, F_W is the image enhancement function, F_W(I_S) is the enhanced image it produces, and D is the discriminator;
Step S32: defining the content difference L_content in terms of the activation maps generated by the ReLU layers of a pre-trained VGG-19 network;
where C_j, H_j, W_j denote the number, height, and width of the I_t and F_W(I_S) feature maps, respectively, and ψ_j is the feature map after j convolutions;
Step S33: for the color difference L_color, using the Gaussian blur method to compute the Euclidean distance between the physician-annotated image and the original NBI image, as follows:
where X_b and Y_b are the values in the annotated image corresponding to X and Y (the pixel coordinates of the original NBI image) after the following computation:
the above formula being the Gaussian filter template, where μ_x is the mean of X, σ_x is the variance of X, A is the sum of the pixel weights, and the result G(k, l) is the value of the filter template at position (k, l);
multiplying the source pixels at (k, l) and their neighbours by the filter template values gives the Gaussian blur value X_b; Y_b is obtained in the same way, and substituting both into Equation (3) yields L_color;
Step S34: enhancing the spatial smoothness of the image by computing the total variation loss, as follows:
Step S35: finally combining the color difference, texture difference, content difference, and overall difference to obtain the total loss value,
L_total = L_content + 0.4·L_texture + 0.1·L_color + 400·L_tv (7)
where C, H, W denote the number, height, and width of the enhanced image F_W(I_S), and ∇_x, ∇_y are the differential (nabla) operators with respect to X and Y.
- The NBI image processing method based on deep learning and image enhancement according to claim 1 or 2, characterized in that in step S4 the mapping separates the RGB channels with the Image module of the Python PIL package, and then converts the image into a one-dimensional array of numbers with the reshape method.
- An application of NBI images based on deep learning and image enhancement in the diagnosis of early gastric cancer, characterized in that the NBI images are obtained by the method of claim 1, 2, or 3.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910375216.9A CN110189303B (en) | 2019-05-07 | 2019-05-07 | NBI image processing method based on deep learning and image enhancement and application thereof |
CN201910375216.9 | 2019-05-07 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020224153A1 true WO2020224153A1 (en) | 2020-11-12 |
Family
ID=67715784
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/106030 WO2020224153A1 (en) | 2019-05-07 | 2019-09-16 | Nbi image processing method based on deep learning and image enhancement, and application thereof |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110189303B (en) |
WO (1) | WO2020224153A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110189303B (en) * | 2019-05-07 | 2020-12-25 | 武汉楚精灵医疗科技有限公司 | NBI image processing method based on deep learning and image enhancement and application thereof |
CN111899229A (en) * | 2020-07-14 | 2020-11-06 | 武汉楚精灵医疗科技有限公司 | Advanced gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology |
CN112435246A (en) * | 2020-11-30 | 2021-03-02 | 武汉楚精灵医疗科技有限公司 | Artificial intelligent diagnosis method for gastric cancer under narrow-band imaging amplification gastroscope |
CN112884777B (en) * | 2021-01-22 | 2022-04-12 | 复旦大学 | Multi-modal collaborative esophageal cancer lesion image segmentation system based on self-sampling similarity |
CN113256572B (en) * | 2021-05-12 | 2023-04-07 | 中国科学院自动化研究所 | Gastroscope image analysis system, method and equipment based on restoration and selective enhancement |
CN114359280B (en) * | 2022-03-18 | 2022-06-03 | 武汉楚精灵医疗科技有限公司 | Gastric mucosa image boundary quantification method, device, terminal and storage medium |
CN114359279B (en) * | 2022-03-18 | 2022-06-03 | 武汉楚精灵医疗科技有限公司 | Image processing method, image processing device, computer equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106156711A (en) * | 2015-04-21 | 2016-11-23 | 华中科技大学 | The localization method of line of text and device |
CN108695001A (en) * | 2018-07-16 | 2018-10-23 | 武汉大学人民医院(湖北省人民医院) | A kind of cancer lesion horizon prediction auxiliary system and method based on deep learning |
CN108961350A (en) * | 2018-07-17 | 2018-12-07 | 北京工业大学 | One kind being based on the matched painting style moving method of significance |
CN109447973A (en) * | 2018-10-31 | 2019-03-08 | 腾讯科技(深圳)有限公司 | A kind for the treatment of method and apparatus and system of polyp of colon image |
CN110189303A (en) * | 2019-05-07 | 2019-08-30 | 上海珍灵医疗科技有限公司 | A kind of NBI image processing method and its application based on deep learning and image enhancement |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105962904A (en) * | 2016-04-21 | 2016-09-28 | 西安工程大学 | Human tissue focus detection method based on infrared thermal imaging technology |
US10600185B2 (en) * | 2017-03-08 | 2020-03-24 | Siemens Healthcare Gmbh | Automatic liver segmentation using adversarial image-to-image network |
CN108229525B (en) * | 2017-05-31 | 2021-12-28 | 商汤集团有限公司 | Neural network training and image processing method and device, electronic equipment and storage medium |
CN108229526B (en) * | 2017-06-16 | 2020-09-29 | 北京市商汤科技开发有限公司 | Network training method, network training device, image processing method, image processing device, storage medium and electronic equipment |
CN107590786A (en) * | 2017-09-08 | 2018-01-16 | 深圳市唯特视科技有限公司 | A kind of image enchancing method based on confrontation learning network |
CN109410127B (en) * | 2018-09-17 | 2020-09-01 | 西安电子科技大学 | Image denoising method based on deep learning and multi-scale image enhancement |
2019
- 2019-05-07 CN CN201910375216.9A patent/CN110189303B/en active Active
- 2019-09-16 WO PCT/CN2019/106030 patent/WO2020224153A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN110189303B (en) | 2020-12-25 |
CN110189303A (en) | 2019-08-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020224153A1 (en) | Nbi image processing method based on deep learning and image enhancement, and application thereof | |
JP2020073081A (en) | Image diagnosis assistance apparatus, learned model, image diagnosis assistance method, and image diagnosis assistance program | |
CN110600122B (en) | Digestive tract image processing method and device and medical system | |
US8131054B2 (en) | Computerized image analysis for acetic acid induced cervical intraepithelial neoplasia | |
CN109635871B (en) | Capsule endoscope image classification method based on multi-feature fusion | |
WO2021147429A9 (en) | Endoscopic image display method, apparatus, computer device, and storage medium | |
CN111899229A (en) | Advanced gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology | |
CN113496489A (en) | Training method of endoscope image classification model, image classification method and device | |
WO2020162275A1 (en) | Medical image processing device, endoscope system, and medical image processing method | |
Cui et al. | Bleeding detection in wireless capsule endoscopy images by support vector classifier | |
WO2020215810A1 (en) | Image recognition-based narrowband image detection method for colonoscopy procedure | |
Suzuki et al. | Artificial intelligence for cancer detection of the upper gastrointestinal tract | |
CN115049666B (en) | Endoscope virtual biopsy device based on color wavelet covariance depth map model | |
Bae et al. | Quantitative screening of cervical cancers for low-resource settings: pilot study of smartphone-based endoscopic visual inspection after acetic acid using machine learning techniques | |
KR101875004B1 (en) | Automated bleeding detection method and computer program in wireless capsule endoscopy videos | |
Yuan et al. | Automatic bleeding frame detection in the wireless capsule endoscopy images | |
CN111341441A (en) | Gastrointestinal disease model construction method and diagnosis system | |
TWI738367B (en) | Method for detecting image using convolutional neural network | |
CN114372951A (en) | Nasopharyngeal carcinoma positioning and segmenting method and system based on image segmentation convolutional neural network | |
CN116745861A (en) | Control method, device and program of lesion judgment system obtained through real-time image | |
KR102095730B1 (en) | Method for detecting lesion of large intestine disease based on deep learning | |
JP2024124575A (en) | A program for identifying Hanna lesions | |
US20240065540A1 (en) | Apparatus and method for detecting cervical cancer | |
JP2014124333A (en) | Medical image processor | |
CN117649373A (en) | Digestive endoscope image processing method and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19928135; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19928135; Country of ref document: EP; Kind code of ref document: A1 |