CN112435246A - Artificial intelligent diagnosis method for gastric cancer under narrow-band imaging amplification gastroscope


Info

Publication number
CN112435246A
CN112435246A
Authority
CN
China
Prior art keywords: area, image, abnormal, cancer, morphological
Prior art date
Legal status
Pending
Application number
CN202011371330.3A
Other languages
Chinese (zh)
Inventor
郑碧清
刘奇为
于天成
胡珊
Current Assignee
Wuhan Endoangel Medical Technology Co Ltd
Original Assignee
Wuhan Endoangel Medical Technology Co Ltd
Priority date
Application filed by Wuhan Endoangel Medical Technology Co Ltd
Priority to CN202011371330.3A
Publication of CN112435246A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/10068 Endoscopic image
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30092 Stomach; Gastric
    • G06T 2207/30096 Tumor; Lesion

Abstract

The invention relates to the technical field of medical technology assistance, in particular to an artificial intelligence diagnosis method for gastric cancer under a narrow-band imaging magnifying gastroscope, comprising the following steps: S1, constructing a mini-UNet neural network model; S2, constructing a UNet++ image segmentation neural network model to obtain the area ratio R_abnormal of the image feature difference region; S3, obtaining a microvessel morphology map and a microstructure morphology map of the feature-abnormal region using a generative adversarial network (GAN); S4, identifying the microvessel morphological dissimilarity and the microstructure morphological dissimilarity in the two morphology maps with trained ResNet50 neural network models; S5, using a trained random forest model to produce the final judgment of cancer or non-cancer, the cancerous position range of an image judged as cancer being the identified image feature difference region P_abnormal. The sensitivity and specificity of the invention for cancer/non-cancer identification reach about 93.4% and 90.7% respectively; it can effectively assist clinicians in discriminating cancer from non-cancer and gives the position range of the canceration.

Description

Artificial intelligent diagnosis method for gastric cancer under narrow-band imaging amplification gastroscope
Technical Field
The invention relates to the technical field of medical technology assistance, in particular to an artificial intelligent diagnosis method for gastric cancer under a narrow-band imaging amplification gastroscope.
Background
Gastric cancer is a malignant tumor of the gastric mucosal epithelium, and endoscopy is the most important means of gastric cancer screening. In particular, magnifying endoscopy with narrow-band imaging (ME-NBI) can capture high-definition magnified images of the gastric mucosa; by observing the extent of the lesion area and the shape of its microvessels and microstructures in such images, a doctor is greatly helped toward an accurate diagnosis.
In recent years, with the rapid development of computer artificial intelligence, techniques such as deep learning and convolutional neural networks have been applied increasingly in the medical field for auxiliary diagnosis, lesion target identification and similar tasks. Published results exist on applying artificial intelligence to endoscopic gastric cancer diagnosis, but classifying cancer versus non-cancer, or identifying lesions, from the whole gastric endoscope image alone performs poorly. Because the stomach contains many kinds of lesions, such as ulcers and erosions, whose characteristic boundaries are not obvious, simple classification and lesion identification are prone to missed diagnosis and misdiagnosis. No published work currently applies artificial intelligence to identify multiple features of gastric endoscope images for comprehensive judgment and identification of gastric cancer lesions.
The invention patent "NBI image processing method based on deep learning and image enhancement and application thereof" enhances the microvessels and microstructures of a gastric endoscope image using deep learning, and a doctor diagnoses gastric cancer by observing the enhanced microvessel and microstructure maps. However, the enhanced microvessel and microstructure images displayed by that patent are not clear, and a doctor is still required to interpret them.
Building on that work, the present method identifies the lesion area of an endoscope image through the application of multiple deep learning and machine learning techniques, generates microvessel and microstructure morphology maps with an improved GAN, quantifies the dissimilarity of those morphology maps, and directly provides a diagnosis result with a random forest model based on the lesion area ratio, the microvessel morphological dissimilarity and the microstructure morphological dissimilarity. This improves diagnostic accuracy and can effectively help clinicians diagnose gastric cancer. An artificial intelligence diagnosis method for gastric cancer under a narrow-band imaging magnifying gastroscope is therefore provided.
Disclosure of Invention
The invention aims to provide an artificial intelligence diagnosis method for gastric cancer under a narrow-band imaging magnifying gastroscope, to solve the problems raised in the background art.
In order to achieve this purpose, the invention provides the following technical scheme. The artificial intelligence diagnosis method of gastric cancer under a narrow-band imaging magnifying gastroscope comprises the following steps:
S1, constructing a mini-UNet neural network model for identifying the feature difference between the valid image area of a narrow-band imaging magnified gastric endoscope still image and its black and text areas, to obtain a rectangular valid image area P';
S2, constructing a UNet++ image segmentation neural network model for identifying the feature difference between the diseased region and the normal, non-diseased region in the narrow-band imaging magnified gastric endoscope image, to obtain the area ratio R_abnormal of the image feature difference region;
S3, using a generative adversarial network (GAN), with the feature-abnormal image region whose area ratio exceeds 13% as input, obtaining the microvessel morphology map and the microstructure morphology map of the feature-abnormal region;
S4, identifying the microvessel morphological dissimilarity and the microstructure morphological dissimilarity in the two morphology maps with separately trained ResNet50 neural network models, where dissimilarity is a quantitative value of morphology-map irregularity, the morphological dissimilarities of the microvessels and the microstructure being denoted V_level and S_level respectively;
S5, inputting R_abnormal, V_level and S_level to the trained random forest model for identification and judgment to obtain the final judgment of cancer or non-cancer; for an image judged as cancer, the cancerous position range is the identified image feature difference region P_abnormal.
Preferably, the feature map of the mini-UNet segmented image in step S1 is computed as:

w(x) = w_c(x) + w_0 · exp( -(d_1(x) + d_2(x))² / (2σ²) )

where w_c is a weight balancing the class frequencies, d_1 is the distance from the pixel to the nearest boundary, d_2 is the distance from the pixel to the second-nearest boundary, and w_0 and σ are empirical constants;

the loss function for training the mini-UNet neural network model is a weighted cross-entropy loss in which each pixel carries its own weight:

E = -Σ_x w(x) · log( p_ℓ(x)(x) )

The mini-UNet output is a matrix of confidences C(i,j) that each pixel of the input image belongs to the valid region; the higher the confidence, the more likely the pixel belongs to the valid region. The matrix is transformed with a 0.5 threshold:

M(i,j) = 1 if C(i,j) ≥ 0.5, otherwise 0

The result is a matrix of 0s and 1s, where 1 marks the valid region; the 1-valued pixels form one or more connected domains. The connected domain M_max with the largest area is taken and applied to the input image:

P(i,j) = P(i,j) · M_max(i,j)

This yields an irregularly shaped valid image area; its circumscribed rectangle is taken as the rectangular valid image area P', with width w and height h.
Preferably, in step S2 the matrix is transformed with a 0.5 threshold:

M_abnormal(i,j) = 1 if C(i,j) ≥ 0.5, otherwise 0

where 1 marks a feature difference region; the connected domain M_abnormal_max with the largest area among the 1-valued connected domains is taken, giving the feature-abnormal image region:

P_abnormal(i,j) = P'(i,j) · M_abnormal_max(i,j)

The connected domain area is computed:

S_abnormal = Σ_(i,j) M_abnormal_max(i,j)

and the ratio of the largest connected domain area to the valid image area, i.e. the area ratio of the image feature difference region, is:

R_abnormal = S_abnormal / (w · h)

Feature difference regions with an area ratio R_abnormal below 13% are disregarded to filter out possible noise and recognition errors.
Preferably, in step S3 the feature-abnormal image region is divided into a 3×3 grid to obtain 9 sub-regions, and the loss function of model training is:

L = Σ_{k=1}^{9} V(D, G; k)

where, for each sub-region k, the discriminator is trained to maximize and the generator to minimize:

V(D, G; k) = E_{(p,a)∈S_data}[ log D(a_k) ] + E_{(p,a)∈S_data}[ log( 1 - D(G(p_k)) ) ]

with the training set:

S_data = {(p_i, a_i) | p_i ∈ P, a_i ∈ A, i = 1, 2, ..., N}

P is the set of original images and A the set of produced morphology maps; G and D are the generative model and the discriminative model of the GAN neural network, respectively.
Preferably, the model training process in step S4 is: the gastric mucosa microvessel morphology maps are first labeled into 2 classes, normal and cancerous, and the microstructure morphology maps likewise into normal and cancerous; the deep convolutional neural network ResNet50 is then trained separately on the microvessel and the microstructure morphology maps, yielding a microvessel dissimilarity recognition model and a microstructure dissimilarity recognition model, respectively.
Compared with the prior art, the invention has the following beneficial effects. The invention flexibly combines several artificial intelligence techniques, including UNet, UNet++, generative adversarial networks (GAN), ResNet50 and random forests, for its application scenario: artificial intelligence diagnosis of gastric cancer under a narrow-band imaging magnifying gastroscope. The UNet network model is improved and optimized into a mini-UNet model; the valid image area and the feature difference area are identified via a pixel-level confidence matrix with threshold filtering, and the area ratio of the feature difference area is computed; GAN training is optimized by dividing the image into a 3×3 grid of 9 sub-regions; the GAN produces the microvessel and microstructure morphology maps of the narrow-band imaging magnified gastric endoscope image, from which the microvessel dissimilarity and microstructure dissimilarity are obtained; finally, a random forest identifies cancer or non-cancer from the area ratio of the feature difference region, the microvessel dissimilarity and the microstructure dissimilarity. Verified many times clinically, the sensitivity and specificity of the invention for cancer/non-cancer identification reach about 93.4% and 90.7% respectively; it can effectively assist clinicians in discriminating and diagnosing cancer versus non-cancer and gives the position range of the canceration.
Drawings
FIG. 1: identification and cropping of the valid region of a gastric endoscope image.
FIG. 2: mini-UNet neural network structure.
FIG. 3: valid image area identified by mini-UNet.
FIG. 4: mini-UNet training picture labeling and conversion.
FIG. 5: mini-UNet model training.
FIG. 6: UNet++ training picture labeling and conversion.
FIG. 7: UNet++ model training.
FIG. 8: GAN generation of the microvessel morphology map.
FIG. 9: GAN generation of the microstructure morphology map.
FIG. 10: microvessel morphological dissimilarity recognition model training.
FIG. 11: microstructure morphological dissimilarity recognition model training.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments; obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
The invention provides the following technical scheme. The artificial intelligence diagnosis method of gastric cancer under a narrow-band imaging magnifying gastroscope comprises the following steps:
and S1, constructing a mini-UNet neural network model. According to the characteristic that the characteristic difference between the effective image area of the static image of the gastric endoscope and the black area and the character area is very obvious through narrow-band imaging amplification, the image segmentation neural network model UNet is simplified and optimized, so that the size of a model file is reduced, the GPU space occupied by the model is reduced, the reasoning speed is accelerated, and a mini-UNet model is constructed. The schematic diagram of the identification and cutting of the effective region of the gastric endoscope image shown in fig. 1.
The valid image area of the gastric endoscope image is identified, as in the schematic of the valid image area identified by mini-UNet in FIG. 3. The improved, simplified UNet model, i.e. the mini-UNet model, is adopted; its structure is shown in FIG. 2. The image size is 256 × 256; the number of model layers is reduced from the original 226 to 28, the model file size drops from 418M to 44.7M, and inference time falls by about 30%. For training, the valid areas of various original endoscope images are first labeled manually, and the labels are converted into black-and-white 2-color annotation pictures in which white marks the valid area; the original picture and the annotation picture are then both scaled to 256 × 256, these 2 pictures form 1 piece of training data, and many pieces of training data are split into a training set and a validation set at a 4:1 ratio to train the mini-UNet model. FIG. 4 shows the mini-UNet training picture labeling and conversion, and FIG. 5 the mini-UNet model training.
The mini-UNet neural network model identifies the feature difference between the valid image area of the narrow-band imaging magnified gastric endoscope still image and its black and text areas, obtaining a rectangular valid image area P'.
the feature map calculation method of the mini-UNet segmentation image comprises the following steps:
Figure BDA0002806799140000061
wherein wcIs a weight to balance the class frequencies, d1Is the distance from the pixel to the nearest boundary, d2Is the distance, w, from the pixel to the second near boundary0And σ are empirical constants, set to 8, 6, respectively, in the present invention;
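As a minimal sketch, the weight-map formula above can be written directly in NumPy. The function name and the per-pixel inputs `wc`, `d1`, `d2` are hypothetical; the constants follow the values the invention reports (w_0 = 8, σ = 6):

```python
import numpy as np

W0, SIGMA = 8.0, 6.0  # empirical constants as set in the invention

def boundary_weight(wc, d1, d2):
    """Per-pixel training weight w(x) = w_c(x) + w0 * exp(-(d1 + d2)^2 / (2 * sigma^2)).

    wc balances class frequencies; d1 and d2 are the distances to the nearest
    and second-nearest region boundary, so pixels close to boundaries receive
    extra weight during training.
    """
    return wc + W0 * np.exp(-((d1 + d2) ** 2) / (2.0 * SIGMA ** 2))
```

Pixels sitting on a boundary (d_1 = d_2 = 0) get the full extra weight w_0, while pixels deep inside a region fall back to the class-frequency weight w_c.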
the loss function for training the mini-UNet neural network model is a weighted cross-entropy loss in which each pixel carries its own weight:

E = -Σ_x w(x) · log( p_ℓ(x)(x) )
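A minimal NumPy sketch of this weighted cross-entropy, assuming `probs` already holds each pixel's predicted probability of its true label ℓ(x) (the function name and the `eps` guard are additions of this sketch):

```python
import numpy as np

def weighted_cross_entropy(weights, probs, eps=1e-12):
    """E = -sum_x w(x) * log(p_l(x)(x)), summed over all pixels.

    weights: per-pixel weight map w(x); probs: per-pixel predicted
    probability of the pixel's true label. eps guards against log(0).
    """
    return -float(np.sum(weights * np.log(probs + eps)))
```

Higher-weighted pixels (those near boundaries, per the weight map above) contribute more to the loss, which is what pushes the segmentation toward crisp region borders.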
The mini-UNet output is a matrix of confidences C(i,j) that each pixel of the input image belongs to the valid region; the higher the confidence, the more likely the pixel belongs to the valid region. The matrix is transformed with a 0.5 threshold:

M(i,j) = 1 if C(i,j) ≥ 0.5, otherwise 0

The result is a matrix of 0s and 1s, where 1 marks the valid region; the 1-valued pixels form one or more connected domains. The connected domain M_max with the largest area is taken and applied to the input image:

P(i,j) = P(i,j) · M_max(i,j)

This yields an irregularly shaped valid image area; its circumscribed rectangle is taken as the rectangular valid image area P', with width w and height h.
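The thresholding, largest-connected-domain selection and circumscribed-rectangle crop described above can be sketched as follows. This is pure NumPy plus a small flood fill; a real pipeline would more likely use something like `scipy.ndimage.label`, so treat the helper names here as illustrative:

```python
import numpy as np

def largest_component(mask):
    """Label 4-connected components of a binary mask; return the largest as a bool mask."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    best, best_size, cur = 0, 0, 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                cur += 1
                stack, size = [(i, j)], 0
                labels[i, j] = cur
                while stack:               # iterative flood fill
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = cur
                            stack.append((ny, nx))
                if size > best_size:
                    best_size, best = size, cur
    return labels == best

def crop_valid_region(image, confidence):
    """Threshold the confidence map at 0.5, keep the largest connected domain
    M_max, mask the image with it, and crop to the circumscribed rectangle P'."""
    m = largest_component(confidence >= 0.5)
    masked = image * m
    ys, xs = np.nonzero(m)
    return masked[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

The returned array is the rectangular valid image area P'; its shape gives the width w and height h used later for the area ratio.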
S2, construct the UNet++ image segmentation neural network model. In a narrow-band imaging magnified gastric endoscope image there is a certain feature difference between the diseased region and the normal, non-diseased region, and identifying this difference greatly helps the diagnosis of gastric cancer. To improve the accuracy of image feature difference identification, segmentation of the diseased region from the normal region is performed with the neural network model UNet++, which has a strong image segmentation effect, using ResNet50 as its backbone. Experienced senior physicians label the cancerous position region, as in the UNet++ training picture labeling and conversion schematic of FIG. 6. The labels are converted into black-and-white 2-color annotation pictures in which white marks the cancer region; the original picture and the annotation picture are both scaled to 512 × 512, these 2 pictures form 1 piece of training data, and many pieces of training data are split into a training set and a validation set at a 4:1 ratio to train the UNet++ model, as in the UNet++ model training schematic of FIG. 7. The UNet++ image segmentation neural network model identifies the feature difference between the diseased region and the normal, non-diseased region in the narrow-band imaging magnified gastric endoscope image, obtaining the area ratio R_abnormal of the image feature difference region.
The cropped valid image area P' is taken as input to the UNet++ network; the output is a matrix of difference confidences for each pixel of the input image, where a pixel's difference confidence represents how distinctively that pixel differs from the others: the larger the value, the more distinctive the pixel. The matrix is transformed with a 0.5 threshold:

M_abnormal(i,j) = 1 if C(i,j) ≥ 0.5, otherwise 0

where 1 marks a feature difference region. The connected domain M_abnormal_max with the largest area among the 1-valued connected domains is taken, giving the feature-abnormal image region:

P_abnormal(i,j) = P'(i,j) · M_abnormal_max(i,j)

The connected domain area is computed:

S_abnormal = Σ_(i,j) M_abnormal_max(i,j)

and the ratio of the largest connected domain area to the valid image area, i.e. the area ratio of the image feature difference region, is:

R_abnormal = S_abnormal / (w · h)
test verification shows that the characteristic difference region with the area ratio R smaller than 13% and too small area is neglected to filter possible noise and recognition errors, so that the best recognition effect can be obtained.
S3, after obtaining the image feature difference region (a region not filtered out for a too-small R_abnormal), obtain its microvessel and microstructure morphology maps with the generative adversarial network (GAN) technique, as in the GAN-generated microvessel morphology map of FIG. 8 and the GAN-generated microstructure morphology map of FIG. 9.
Because the changes of gastric cancer microvessels and microstructures are fine, and in order to draw the morphology maps more precisely, the input picture is divided into a 3×3 grid to obtain 9 sub-regions, and the loss function is improved to:

L = Σ_{k=1}^{9} V(D, G; k)

where, for each sub-region k, the discriminator is trained to maximize and the generator to minimize:

V(D, G; k) = E_{(p,a)∈S_data}[ log D(a_k) ] + E_{(p,a)∈S_data}[ log( 1 - D(G(p_k)) ) ]

with the training set:

S_data = {(p_i, a_i) | p_i ∈ P, a_i ∈ A, i = 1, 2, ..., N}

P is the set of original images and A the set of generated morphology maps; G and D are the generative model (Generative Model) and the discriminative model (Discriminative Model) of the GAN neural network, respectively.
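The 3×3 division itself can be sketched as below; how dimensions not divisible by 3 are handled (here, the extra pixels go to the last row or column) is an assumption of this sketch:

```python
import numpy as np

def nine_grid(img):
    """Split an image into a 3x3 grid of 9 sub-regions, as done before GAN
    training so the per-sub-region losses can be summed."""
    h, w = img.shape[:2]
    hs = [0, h // 3, 2 * h // 3, h]
    ws = [0, w // 3, 2 * w // 3, w]
    return [img[hs[r]:hs[r + 1], ws[c]:ws[c + 1]]
            for r in range(3) for c in range(3)]
```

Each of the 9 sub-regions is then fed through the generator and discriminator, and the 9 per-region loss terms are summed into the improved loss above.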
S4, identify the microvessel morphological dissimilarity and the microstructure morphological dissimilarity in the microvessel and microstructure morphology maps with separately trained ResNet50 neural network models, where dissimilarity is a quantitative value of morphology-map irregularity; the morphological dissimilarities of the microvessels and the microstructure are denoted V_level and S_level respectively.
The model training process is: the gastric mucosa microvessel morphology maps are first labeled into 2 classes, normal and cancerous, and the microstructure morphology maps likewise into normal and cancerous; the deep convolutional neural network ResNet50 is then trained separately on the microvessel and the microstructure morphology maps, yielding a microvessel dissimilarity recognition model and a microstructure dissimilarity recognition model. FIG. 10 shows the training of the microvessel morphological dissimilarity recognition model, and FIG. 11 the training of the microstructure morphological dissimilarity recognition model.
S5, using the random forest machine learning technique, train a random forest model on the area ratio R_abnormal of the image feature difference region, the microvessel morphological dissimilarity V_level of that region, and the microstructure morphological dissimilarity S_level of that region. At inference time, the previously obtained R_abnormal, V_level and S_level are input to the trained random forest model to obtain the cancer or non-cancer judgment for the picture; the previously obtained image feature difference region P_abnormal is the range of the cancerous region.
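As an illustrative stand-in for the trained random forest (the real model would be learned from labeled data, e.g. with scikit-learn's RandomForestClassifier; the thresholds below are hypothetical), the final majority vote over the 3-feature vector (R_abnormal, V_level, S_level) can be sketched as:

```python
def forest_predict(trees, features):
    """Majority vote over an ensemble: each 'tree' is a callable mapping the
    feature vector (R_abnormal, V_level, S_level) to 0 (non-cancer) or
    1 (cancer). Toy stand-in for a trained random forest."""
    votes = sum(t(features) for t in trees)
    return int(votes * 2 > len(trees))

# Toy threshold stumps standing in for learned trees (illustrative only).
trees = [
    lambda f: int(f[0] > 0.2),   # large lesion area ratio
    lambda f: int(f[1] > 0.5),   # irregular microvessel morphology
    lambda f: int(f[2] > 0.5),   # irregular microstructure morphology
]
```

For example, `forest_predict(trees, (0.3, 0.8, 0.7))` votes cancer, while a small, regular region such as `(0.05, 0.1, 0.2)` votes non-cancer; a real forest would simply replace the stumps with trained decision trees.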
The invention flexibly combines several artificial intelligence techniques, including UNet, UNet++, generative adversarial networks (GAN), ResNet50 and random forests, for its application scenario: artificial intelligence diagnosis of gastric cancer under a narrow-band imaging magnifying gastroscope. The UNet network model is improved and optimized into a mini-UNet model; the valid image area and the feature difference area are identified via a pixel-level confidence matrix with threshold filtering, and the area ratio of the feature difference area is computed; GAN training is optimized by dividing the image into a 3×3 grid of 9 sub-regions; the GAN produces the microvessel and microstructure morphology maps of the narrow-band imaging magnified gastric endoscope image, from which the microvessel dissimilarity and microstructure dissimilarity are obtained; finally, a random forest identifies cancer or non-cancer from the area ratio of the feature difference region, the microvessel dissimilarity and the microstructure dissimilarity. Verified many times clinically, the sensitivity and specificity of the invention for cancer/non-cancer identification reach about 93.4% and 90.7% respectively; it can effectively assist clinicians in discriminating and diagnosing cancer versus non-cancer and gives the position range of the canceration.
The above describes only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any equivalent substitution or change made by a person skilled in the art within the technical scope disclosed by the present invention, according to its technical solutions and inventive concept, shall fall within the protection scope of the present invention.

Claims (5)

1. An artificial intelligence diagnosis method of gastric cancer under a narrow-band imaging magnifying gastroscope, characterized by comprising the following steps:
S1, constructing a mini-UNet neural network model for identifying the feature difference between the valid image area of a narrow-band imaging magnified gastric endoscope still image and its black and text areas, to obtain a rectangular valid image area P';
S2, constructing a UNet++ image segmentation neural network model for identifying the feature difference between the diseased region and the normal, non-diseased region in the narrow-band imaging magnified gastric endoscope image, to obtain the area ratio R_abnormal of the image feature difference region;
S3, using a generative adversarial network (GAN), with the feature-abnormal image region whose area ratio exceeds 13% as input, obtaining the microvessel morphology map and the microstructure morphology map of the feature-abnormal region;
S4, identifying the microvessel morphological dissimilarity and the microstructure morphological dissimilarity in the two morphology maps with separately trained ResNet50 neural network models, where dissimilarity is a quantitative value of morphology-map irregularity, the morphological dissimilarities of the microvessels and the microstructure being denoted V_level and S_level respectively;
S5, inputting R_abnormal, V_level and S_level to the trained random forest model for identification and judgment to obtain the final judgment of cancer or non-cancer, the cancerous position range of an image judged as cancer being the identified image feature difference region P_abnormal.
2. The method of artificial intelligence diagnosis of gastric cancer under narrow band imaging magnification gastroscope of claim 1, characterized in that: the feature map calculation method of the mini-UNet segmented image in the step S1 is as follows:
Figure FDA0002806799130000011
where wc is the weight of the balanced class frequency, d1Is the distance from the pixel to the nearest boundary, d2Is the distance, w, from the pixel to the second near boundary0And σ is an empirical constant;
the loss function of the mini-UNet neural network model training is a weighted cross entropy loss function, and each pixel point has a weight:
Figure FDA0002806799130000021
the output of the mini-UNet is a matrix formed by the confidences of whether each pixel of the input image belongs to the effective region; the higher the confidence, the more likely the pixel belongs to the effective region; the matrix is transformed with a threshold of 0.5:
M(i,j) = 1 if the confidence at (i,j) ≥ 0.5, otherwise M(i,j) = 0;
the transformed result is a matrix of 0s and 1s, where 1 denotes the effective region; the pixels with value 1 form one or more connected domains; the connected domain with the largest area, M(i,j)max, is taken and the following operation is performed with the input image:
P(i,j) = P(i,j) · M(i,j)max
which yields an effective image area of irregular shape; the circumscribed rectangle of this area is taken to obtain a rectangular effective image area P′ of width w and height h.
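The post-processing of claim 2 can be sketched in plain numpy. This is an illustrative implementation, not the patent's code: the function names are invented, and a simple breadth-first flood fill stands in for whatever connected-component routine the patent assumes.

```python
# Sketch of claim-2 post-processing: threshold the confidence matrix at 0.5,
# keep the largest 4-connected domain, mask the input image with it, and crop
# the circumscribed (bounding) rectangle. Function names are illustrative.
from collections import deque
import numpy as np

def largest_connected_domain(mask):
    """Return a 0/1 array keeping only the largest 4-connected component."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    best, best_size = np.zeros_like(mask), 0
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and not seen[si, sj]:
                comp, q = [], deque([(si, sj)])
                seen[si, sj] = True
                while q:  # breadth-first flood fill of one component
                    i, j = q.popleft()
                    comp.append((i, j))
                    for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                        if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            q.append((ni, nj))
                if len(comp) > best_size:
                    best_size = len(comp)
                    best = np.zeros_like(mask)
                    for i, j in comp:
                        best[i, j] = 1
    return best

def effective_rectangle(confidence, image):
    """Threshold at 0.5, mask with the largest domain, crop its bounding box."""
    m = (confidence >= 0.5).astype(image.dtype)
    m_max = largest_connected_domain(m)
    masked = image * m_max                # P(i,j) = P(i,j) . M(i,j)max
    ys, xs = np.nonzero(m_max)
    return masked[ys.min():ys.max() + 1, xs.min():xs.max() + 1]  # P'

conf = np.zeros((6, 6))
conf[1:3, 1:3] = 0.9     # small blob, discarded
conf[3:6, 3:6] = 0.8     # larger blob, kept
img = np.ones((6, 6))
p_prime = effective_rectangle(conf, img)
```

In production one would typically use `scipy.ndimage.label` or OpenCV's `connectedComponents` instead of the hand-written flood fill.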
3. The method of artificial intelligence diagnosis of gastric cancer under narrow band imaging magnification gastroscope of claim 1, characterized in that: in step S2, the matrix is transformed with a threshold of 0.5:
M_abnormal(i,j) = 1 if the confidence at (i,j) ≥ 0.5, otherwise M_abnormal(i,j) = 0;
a value of 1 denotes the feature difference region; the connected domain with the largest area among those formed by the value 1, M_abnormal(i,j)max, is taken, and the image feature abnormal area is:
P_abnormal(i,j) = P′(i,j) · M_abnormal(i,j)max
the area of the connected domain is calculated as:
S_abnormal = ∑ M_abnormal(i,j)max
the ratio of the largest connected domain area to the effective image area, i.e. the area ratio of the image feature difference region, is calculated as:
R_abnormal = S_abnormal / (w · h)
feature difference regions whose area ratio R_abnormal is less than 13% are disregarded, so as to filter out possible noise and recognition errors.
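The area-ratio filter of claim 3 reduces to a few lines. This is an illustrative sketch (function names invented); the 13% threshold is the value stated in the claim.

```python
# Sketch of the claim-3 filter: R_abnormal = S_abnormal / (w * h), and
# candidate regions below the stated 13% threshold are discarded.
import numpy as np

def area_ratio(m_abnormal_max, w, h):
    """R_abnormal = S_abnormal / (w * h)."""
    s_abnormal = int(m_abnormal_max.sum())   # S_abnormal = sum of the mask
    return s_abnormal / (w * h)

def keep_region(m_abnormal_max, w, h, threshold=0.13):
    """Keep the feature difference region only if its area ratio >= 13%."""
    return area_ratio(m_abnormal_max, w, h) >= threshold

m = np.zeros((10, 10), dtype=int)
m[0:5, 0:4] = 1                  # 20 of 100 pixels -> R_abnormal = 0.20
r = area_ratio(m, w=10, h=10)
```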
4. The method of artificial intelligence diagnosis of gastric cancer under narrow band imaging magnification gastroscope of claim 1, characterized in that: in step S3, the image feature abnormal area is divided by a nine-square (3×3) grid to obtain 9 sub-areas, and the loss function of model training is:
min_G max_D V(D, G) = E_{a∼S_data}[log D(a)] + E_{p∼S_data}[log(1 − D(G(p)))]
wherein the discriminator and generator losses are:
L_D = −E_{a∼S_data}[log D(a)] − E_{p∼S_data}[log(1 − D(G(p)))]
L_G = −E_{p∼S_data}[log D(G(p))]
wherein:
S_data = {(p_i, a_i) | p_i ∈ P, a_i ∈ A, i = 1, 2, ..., N}
P is the set of original images and A is the set of generated morphological maps; G and D are respectively the generation model and the discrimination model of the GAN neural network.
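Numerically, the standard GAN losses referenced in claim 4 can be sketched as below. This assumes the usual Goodfellow-style formulation (with the non-saturating generator loss); the discriminator output values are synthetic, not patent data.

```python
# Numeric sketch of the GAN losses in claim 4: the discriminator D scores
# real morphological maps a_i and generated maps G(p_i) in [0, 1].
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-12):
    """L_D = -E[log D(a)] - E[log(1 - D(G(p)))]."""
    return float(-np.mean(np.log(d_real + eps))
                 - np.mean(np.log(1.0 - d_fake + eps)))

def generator_loss(d_fake, eps=1e-12):
    """Non-saturating form: L_G = -E[log D(G(p))]."""
    return float(-np.mean(np.log(d_fake + eps)))

d_real = np.array([0.9, 0.8, 0.95])   # D's scores on real maps a_i
d_fake = np.array([0.1, 0.2, 0.05])   # D's scores on generated maps G(p_i)

l_d = discriminator_loss(d_real, d_fake)
l_g = generator_loss(d_fake)
```

A confident discriminator (scores near 1 on real, near 0 on fake) drives L_D toward 0, while an undecided one (all scores 0.5) sits at 2·log 2 ≈ 1.386.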
5. The method of artificial intelligence diagnosis of gastric cancer under narrow band imaging magnification gastroscope of claim 1, characterized in that: the model training process in step S4 is as follows: the gastric mucosa microvascular morphological maps are first labeled into 2 classes, normal and cancerous, and the microstructure morphological maps are likewise labeled into 2 classes, normal and cancerous; the deep convolutional neural network ResNet50 is then used to perform learning and training on the microvascular and the microstructure morphological maps separately, obtaining a microvascular abnormality recognition model and a microstructure abnormality recognition model, respectively.
CN202011371330.3A 2020-11-30 2020-11-30 Artificial intelligent diagnosis method for gastric cancer under narrow-band imaging amplification gastroscope Pending CN112435246A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011371330.3A CN112435246A (en) 2020-11-30 2020-11-30 Artificial intelligent diagnosis method for gastric cancer under narrow-band imaging amplification gastroscope

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011371330.3A CN112435246A (en) 2020-11-30 2020-11-30 Artificial intelligent diagnosis method for gastric cancer under narrow-band imaging amplification gastroscope

Publications (1)

Publication Number Publication Date
CN112435246A true CN112435246A (en) 2021-03-02

Family

ID=74698409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011371330.3A Pending CN112435246A (en) 2020-11-30 2020-11-30 Artificial intelligent diagnosis method for gastric cancer under narrow-band imaging amplification gastroscope

Country Status (1)

Country Link
CN (1) CN112435246A (en)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018008593A1 (en) * 2016-07-04 2018-01-11 日本電気株式会社 Image diagnosis learning device, image diagnosis device, image diagnosis method, and recording medium for storing program
WO2019223147A1 (en) * 2018-05-23 2019-11-28 平安科技(深圳)有限公司 Liver canceration locating method and apparatus, and storage medium
WO2019245009A1 (en) * 2018-06-22 2019-12-26 株式会社Aiメディカルサービス Method of assisting disease diagnosis based on endoscope image of digestive organ, diagnosis assistance system, diagnosis assistance program, and computer-readable recording medium having said diagnosis assistance program stored thereon
WO2020071678A2 (en) * 2018-10-02 2020-04-09 한림대학교 산학협력단 Endoscopic apparatus and method for diagnosing gastric lesion on basis of gastroscopy image obtained in real time
CN109871630A (en) * 2019-02-28 2019-06-11 浙江工业大学 A kind of manually generated method of capilary tree construction
CN110189303A (en) * 2019-05-07 2019-08-30 上海珍灵医疗科技有限公司 A kind of NBI image processing method and its application based on deep learning and image enhancement
CN111354002A (en) * 2020-02-07 2020-06-30 天津大学 Kidney and kidney tumor segmentation method based on deep neural network
CN111583246A (en) * 2020-05-11 2020-08-25 北京小白世纪网络科技有限公司 Method for classifying liver tumors by utilizing CT (computed tomography) slice images
CN111833343A (en) * 2020-07-23 2020-10-27 北京小白世纪网络科技有限公司 Coronary artery stenosis degree estimation method system and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LU Bo; WU Mingbo; ZHOU Tian; ZHANG Shuangshuang; WANG Yan; XIONG Xuanxuan; ZHANG Haiyan: "Analysis of the clinical value of narrow-band imaging combined with endoscopic ultrasonography in the diagnosis of rectal cancer", Chinese Journal of Gastroenterology and Hepatology (胃肠病学和肝病学杂志), no. 10 *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926478A (en) * 2021-03-08 2021-06-08 新疆爱华盈通信息技术有限公司 Gender identification method, system, electronic device and storage medium
CN113012140A (en) * 2021-03-31 2021-06-22 武汉楚精灵医疗科技有限公司 Digestive endoscopy video frame effective information region extraction method based on deep learning
CN113344859B (en) * 2021-05-17 2022-04-26 武汉大学 Method for quantifying capillary surrounding degree of gastric mucosa staining amplification imaging
CN113344859A (en) * 2021-05-17 2021-09-03 武汉大学 Method for quantifying capillary surrounding degree of gastric mucosa staining amplification imaging
CN113344860A (en) * 2021-05-17 2021-09-03 武汉大学 Abnormal degree quantification method for microstructure of gastric mucosa staining and amplifying image
CN113344860B (en) * 2021-05-17 2022-04-29 武汉大学 Abnormal degree quantification method for microstructure of gastric mucosa staining and amplifying image
CN113393425A (en) * 2021-05-19 2021-09-14 武汉大学 Microvessel distribution symmetry quantification method for gastric mucosa staining amplification imaging
CN113393425B (en) * 2021-05-19 2022-04-26 武汉大学 Microvessel distribution symmetry quantification method for gastric mucosa staining amplification imaging
CN113269747A (en) * 2021-05-24 2021-08-17 浙江大学医学院附属第一医院 Pathological picture liver cancer diffusion detection method and system based on deep learning
CN113469941A (en) * 2021-05-27 2021-10-01 武汉楚精灵医疗科技有限公司 Method for measuring width of bile-pancreatic duct in ultrasonic bile-pancreatic duct examination
CN113469941B (en) * 2021-05-27 2022-11-08 武汉楚精灵医疗科技有限公司 Method for measuring width of bile-pancreatic duct in ultrasonic bile-pancreatic duct examination
CN113706539A (en) * 2021-10-29 2021-11-26 南京裕隆生物医学发展有限公司 Artificial intelligence auxiliary system for identifying tumors
CN113989496B (en) * 2021-11-22 2022-07-12 杭州艾名医学科技有限公司 Cancer organoid recognition method
CN113989496A (en) * 2021-11-22 2022-01-28 杭州艾名医学科技有限公司 Cancer organoid recognition method
CN114266794B (en) * 2022-02-28 2022-06-10 华南理工大学 Pathological section image cancer region segmentation system based on full convolution neural network
CN114266794A (en) * 2022-02-28 2022-04-01 华南理工大学 Pathological section image cancer region segmentation system based on full convolution neural network
CN114359280A (en) * 2022-03-18 2022-04-15 武汉楚精灵医疗科技有限公司 Gastric mucosa image boundary quantification method, device, terminal and storage medium
CN115393356A (en) * 2022-10-27 2022-11-25 武汉楚精灵医疗科技有限公司 Target part abnormal form recognition method and device and computer readable storage medium
CN115393356B (en) * 2022-10-27 2023-02-03 武汉楚精灵医疗科技有限公司 Target part abnormal form recognition method and device and computer readable storage medium
CN115393230A (en) * 2022-10-28 2022-11-25 武汉楚精灵医疗科技有限公司 Ultrasonic endoscope image standardization method and device and related device thereof
CN115393230B (en) * 2022-10-28 2023-02-03 武汉楚精灵医疗科技有限公司 Ultrasonic endoscope image standardization method and device and related device thereof

Similar Documents

Publication Publication Date Title
CN112435246A (en) Artificial intelligent diagnosis method for gastric cancer under narrow-band imaging amplification gastroscope
JP6657480B2 (en) Image diagnosis support apparatus, operation method of image diagnosis support apparatus, and image diagnosis support program
CN110189303B (en) NBI image processing method based on deep learning and image enhancement and application thereof
CN109598709B (en) Mammary gland auxiliary diagnosis system and method based on fusion depth characteristic
JP4409166B2 (en) Image processing device
CN111899229A (en) Advanced gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology
TWI673683B (en) System and method for identification of symptom image
CN114372951A (en) Nasopharyngeal carcinoma positioning and segmenting method and system based on image segmentation convolutional neural network
CN113837989B (en) Large intestine endoscope polyp detection and pathological classification method based on anchor-free frame
CN105512612A (en) SVM-based image classification method for capsule endoscope
Maghsoudi et al. A computer aided method to detect bleeding, tumor, and disease regions in Wireless Capsule Endoscopy
CN112529892A (en) Digestive tract endoscope lesion image detection method, digestive tract endoscope lesion image detection system and computer storage medium
CN114897094A (en) Esophagus early cancer focus segmentation method based on attention double-branch feature fusion
CN114708258B (en) Eye fundus image detection method and system based on dynamic weighted attention mechanism
CN114155202A (en) Thyroid nodule ultrasonic image classification method based on feature fusion and transfer learning
CN113344859A (en) Method for quantifying capillary surrounding degree of gastric mucosa staining amplification imaging
Naz et al. Segmentation and Classification of Stomach Abnormalities Using Deep Learning.
CN114494106A (en) Deep learning multi-feature fusion-based oral mucosal disease identification method
CN112419246B (en) Depth detection network for quantifying esophageal mucosa IPCLs blood vessel morphological distribution
Liedlgruber et al. A summary of research targeted at computer-aided decision support in endoscopy of the gastrointestinal tract
CN116434920A (en) Gastrointestinal epithelial metaplasia progression risk prediction method and device
CN111476312A (en) Method for classifying lesion images based on convolutional neural network
CN116228709A (en) Interactive ultrasonic endoscope image recognition method for pancreas solid space-occupying focus
CN114581408A (en) Gastroscope polyp detection method based on YOLOV5
CN112950601B (en) Picture screening method, system and storage medium for esophageal cancer model training

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination