CN111899229A - Early gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology - Google Patents

Early gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology

Info

Publication number
CN111899229A
CN111899229A
Authority
CN
China
Prior art keywords
model
image
deep learning
diagnosis
lesions
Prior art date
Legal status
Pending
Application number
CN202010675476.0A
Other languages
Chinese (zh)
Inventor
胡孝
刘奇为
于天成
胡珊
李超
Current Assignee
Wuhan Endoangel Medical Technology Co Ltd
Original Assignee
Wuhan Endoangel Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Endoangel Medical Technology Co Ltd
Priority to CN202010675476.0A
Publication of CN111899229A
Legal status: Pending

Classifications

    • G06T7/0012 Biomedical image inspection
    • G06F18/24 Classification techniques
    • G06T7/12 Edge-based segmentation
    • G06T7/13 Edge detection
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10068 Endoscopic image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30092 Stomach; Gastric
    • G06T2207/30096 Tumor; Lesion
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular
    • G06V2201/031 Recognition of patterns in medical or anatomical images of internal organs

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Endoscopes (AREA)

Abstract

The invention relates to the technical field of medical image processing, and in particular to an early gastric cancer auxiliary diagnosis method based on a deep learning multi-model fusion technology, comprising the following steps: S1, construct the multiple models; S2, collect gastroscope images to form a continuous sequence of image frames and identify the light-source mode of the current frame with image classification model 1; when the white-light mode is identified, proceed to step S3, in which target detection model 2 marks lesion positions and image classification model 3 flags high-risk lesions; when the staining-magnification mode is identified, proceed to step S4, in which the instance segmentation model group extracts boundary-range, microvascular-morphology and microstructure feature maps from the image frame in real time, and decision model 7 outputs whether the image shows cancer, the degree of confidence and the differentiation type. The invention constructs multiple deep learning models for different tasks and, through a parallel-cascade model fusion technique, provides endoscopists with a full-workflow intelligent auxiliary diagnosis function during early gastric cancer screening.

Description

Early gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology
Technical Field
The invention relates to the technical field of medical image processing, in particular to an early gastric cancer auxiliary diagnosis method based on a deep learning multi-model fusion technology.
Background
Gastric cancer is a common malignant tumor; the 2018 global cancer statistics show that its incidence and mortality rank 5th and 3rd among all cancers, respectively. It seriously threatens patients' quality of life and safety and imposes an enormous health burden. The 5-year survival rate of early gastric cancer exceeds 90%, so early detection, early diagnosis and early treatment of gastric cancer are of great significance: they can markedly improve patient prognosis, save lives, spare families and conserve medical resources, with great social and economic value.
Electronic endoscopy is currently the most mature and effective means of screening for early gastric cancer worldwide; by carefully observing subtle changes of the gastric mucosa under white light and staining-magnification endoscopy, an endoscopist can diagnose whether early gastric cancer is present. However, the endoscopic mucosal appearance of early gastric cancer varies widely, and endoscopists differ in experience and diagnostic skill, so lesions are commonly either missed altogether or found but misdiagnosed.
With the maturation and wide application of artificial intelligence, AI has already been introduced into early gastric cancer screening. For example, the system of patent 1, "CN107967946A - Gastroscope operation real-time auxiliary system and method based on deep learning", helps the endoscopist observe the entire gastric cavity comprehensively and thoroughly so that lesions are not missed; the method of patent 2, "An NBI image processing method based on deep learning and image enhancement and application thereof", uses a deep learning algorithm and image enhancement to extract features such as microvessels and microstructures from NBI images and presents the characterized images to the endoscopist, accurately and effectively assisting diagnosis; and patent 3, "CN110974179A - Auxiliary diagnosis system for early gastric cancer under electronic staining endoscopy based on deep learning", marks suspected early gastric cancer lesions with rectangular boxes under electronic staining endoscopy and prompts the endoscopist to observe them carefully.
Patent 1 listed above uses deep learning image classification only for quality control and does not address lesion diagnosis. Patent 2 uses a deep learning style-transfer algorithm to extract microvessel and microstructure features from NBI pictures and presents them to the endoscopist, but it does not diagnose or analyze the changes in those features, such as whether the microvessels are distorted or deformed and whether the microstructures are regular or have disappeared, and it offers no diagnostic suggestion about the nature of the lesion, so its assistance to the endoscopist is relatively limited. Patent 3 uses only a deep learning target detection algorithm to mark suspected early gastric cancer lesions with rectangular boxes under electronic staining endoscopy; it extracts and presents only the rough boundary of the lesion and likewise offers no diagnostic suggestion about the lesion's nature, so its assistance is also limited. In short, the prior art can neither identify lesions accurately nor provide diagnostic suggestions; its auxiliary effect is limited, and the risks of missed diagnosis and misdiagnosis remain high. The present early gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology is therefore proposed.
Disclosure of Invention
In view of the technical problems in the background art, the invention provides an early gastric cancer auxiliary diagnosis method based on a deep learning multi-model fusion technology. It can deliver, quickly and in real time, expert-level diagnostic opinions that conform to diagnostic standards and fit the endoscopist's diagnostic habits, enabling more accurate judgments and reducing the risks of missed diagnosis and misdiagnosis; it thereby solves the problems of the prior art, which cannot accurately identify lesions or provide diagnostic suggestions, offers limited assistance, and carries a high risk of missed diagnosis and misdiagnosis.
The invention provides the following technical scheme: an early gastric cancer auxiliary diagnosis method based on a deep learning multi-model fusion technology, comprising the following steps:
S1, construct image classification model 1 for identifying the white-light and electronic staining-magnification light-source modes; target detection model 2 for marking and tracking suspicious lesions under white light; image classification model 3 for high/low-risk analysis of lesions under white light; an instance segmentation model group for extracting the boundary range, microvascular morphology and microstructure features of lesions under staining magnification; and decision model 7 for analyzing lesion properties by integrating the multiple key features;
S2, the image acquisition module samples the continuous gastroscope video at a fixed time interval by frame skipping to form a continuous sequence of image frames, and image classification model 1 identifies the light-source mode of the current frame; if the white-light mode is identified, proceed to step S3; if the staining-magnification mode is identified, proceed to step S4;
S3, target detection model 2 detects suspicious lesions in the image frame in real time and marks their positions; each lesion is extracted from the original image with a conventional image processing algorithm, and image classification model 3 predicts its risk level and marks high-risk lesions;
S4, the instance segmentation model group extracts boundary-range, microvascular-morphology and microstructure feature maps from the image frame in real time; the three feature maps are fed in parallel into the constructed decision model 7, which outputs whether the image shows cancer, the degree of confidence and the differentiation type; if cancer is diagnosed, its extent is marked in the original image.
Preferably, in step S3 the lesion position is marked with a blue rectangular box; if the lesion is judged high-risk, it is re-marked with a red rectangular box. The blue or red box can be displayed to the endoscopist in real time for diagnostic reference.
Preferably, the lesions in step S3 are divided into low-risk and high-risk lesions: low-risk lesions include inflammatory lesion signs such as gastric polyp, superficial gastritis, hemorrhagic gastritis, erosive gastritis, atrophic gastritis, bile reflux, benign ulcer and intestinal metaplasia; high-risk lesions include neoplastic lesion signs such as low-grade intraepithelial neoplasia, high-grade intraepithelial neoplasia, advanced carcinoma and intramucosal carcinoma.
Preferably, the instance segmentation model group consists of deep learning model 4 for extracting boundaries, deep learning model 5 for extracting microvessels and deep learning model 6 for extracting microstructures; the three models share exactly the same network structure and work in parallel to achieve complete extraction of the key features of early gastric cancer under staining magnification.
Preferably, in step S4 the extent of the cancer is outlined with a red polygon in the original image, and a composite feature map, formed by superimposing the three feature maps, is displayed together with the diagnostic information to the endoscopist in real time for diagnostic reference.
The invention provides an early gastric cancer auxiliary diagnosis method based on a deep learning multi-model fusion technology that is both novel and practical. On the one hand, to cover the entire early gastric cancer diagnosis workflow, the scheme provides a full-process diagnosis method under white light and electronic staining magnification in accordance with clinical guidelines and expert consensus, fully matching the endoscopist's diagnostic habits and logic. On the other hand, according to the task characteristics of each link in the workflow, the scheme builds: a cascaded structure of deep learning target detection and image classification under white light, used for lesion tracking and high/low-risk analysis to avoid missed diagnoses; a parallel structure of multiple deep learning instance segmentation models under staining magnification, which accurately extracts boundary, microvessel and microstructure features and presents them to the endoscopist for auxiliary judgment; and a cascaded structure of instance segmentation and deep decision models, which comprehensively analyzes those features and issues a clear diagnostic opinion, covering whether cancer is present, the degree of confidence and the differentiation type, for the endoscopist's reference.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a process description diagram of an embodiment of the invention;
FIG. 3 is an annotation example for the target detection model of the present invention;
FIG. 4 is an annotation example for the instance segmentation models of the present invention;
FIG. 5 is a flow chart of early gastric cancer diagnosis under staining magnification according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort shall fall within the scope of protection of the present invention.
Referring to FIGS. 1-2, the present invention provides a technical solution: an early gastric cancer auxiliary diagnosis method based on a deep learning multi-model fusion technology, comprising the following steps:
S1, construct image classification model 1 for identifying the white-light and electronic staining-magnification light-source modes; target detection model 2 for marking and tracking suspicious lesions under white light; image classification model 3 for high/low-risk analysis of lesions under white light; an instance segmentation model group for extracting the boundary range, microvascular morphology and microstructure features of lesions under staining magnification; and decision model 7 for analyzing lesion properties by integrating the multiple key features;
S2, the image acquisition module samples the continuous gastroscope video at a fixed time interval by frame skipping to form a continuous sequence of image frames, and image classification model 1 identifies the light-source mode of the current frame; if the white-light mode is identified, proceed to step S3; if the staining-magnification mode is identified, proceed to step S4;
S3, target detection model 2 detects suspicious lesions in the image frame in real time and marks their positions; each lesion is extracted from the original image with a conventional image processing algorithm, and image classification model 3 predicts its risk level and marks high-risk lesions;
S4, the instance segmentation model group extracts boundary-range, microvascular-morphology and microstructure feature maps from the image frame in real time; the three feature maps are fed in parallel into the constructed decision model 7, which outputs whether the image shows cancer, the degree of confidence and the differentiation type; if cancer is diagnosed, its extent is marked in the original image.
Example (b):
S1, the construction of the multiple models proceeds as follows:
First, construct deep learning image classification model DCNN1 for identifying the white-light and electronic staining-magnification light-source modes; deep learning target detection model DCNN2 for marking and tracking suspicious lesions under white light; deep learning image classification model DCNN3 for high/low-risk analysis of lesions under white light; a deep learning instance segmentation model group (DCNNS, composed of DCNN4, DCNN5 and DCNN6) for extracting the boundary range, microvascular morphology and microstructure features of lesions under staining magnification; and deep learning decision model DCNN7 for analyzing lesion properties by integrating the multiple key features. The instance segmentation model group consists of deep learning model 4 for extracting boundaries, deep learning model 5 for extracting microvessels and deep learning model 6 for extracting microstructures; the three models share exactly the same network structure and work in parallel to achieve complete extraction of the key features of early gastric cancer under staining magnification.
DCNN1-7 correspond respectively to image classification model 1, target detection model 2, image classification model 3, deep learning model 4, deep learning model 5, deep learning model 6 and decision model 7. DCNNS corresponds to the instance segmentation model group. DCNN stands for deep convolutional neural network.
S2, a front-end image acquisition module built on the general-purpose image processing library OpenCV samples the continuous gastroscope video at a fixed time interval by frame skipping, forming a continuous sequence of image frames. The length of the acquisition interval depends on how long the deep learning models in the subsequent steps need to complete one full diagnosis pass: the faster the models predict, the shorter the interval can be, and the better the continuity and real-time performance of the diagnosis.
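As a concrete illustration, below is a minimal Python sketch of such a frame-skipping acquisition loop, assuming an OpenCV-readable video source; the device index, the interval value and the dcnn1_predict wrapper are illustrative assumptions rather than details taken from the patent.

    import time
    import cv2

    def acquire_frames(source=0, interval_s=0.2):
        """Yield one gastroscope frame every interval_s seconds (frame skipping)."""
        cap = cv2.VideoCapture(source)        # camera index or video file path
        last = 0.0
        try:
            while True:
                ok, frame = cap.read()        # read the next raw frame
                if not ok:
                    break
                now = time.time()
                if now - last >= interval_s:  # keep only one frame per interval
                    last = now
                    yield frame               # hand the frame to the diagnosis pipeline
        finally:
            cap.release()

    # Usage (hypothetical): classify each sampled frame's light-source mode
    # for frame in acquire_frames("gastroscopy.mp4", interval_s=0.1):
    #     mode = dcnn1_predict(frame)   # hypothetical wrapper around DCNN1

A shorter interval tightens real-time behavior, but only pays off if every model in the pipeline can finish one diagnosis pass within it, which is why the interval is tied to the models' prediction speed above.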
The factors that mainly influence model prediction speed are the processing performance of the hardware platform's GPU and CPU and the size of the model's network parameters. The preferred hardware configuration of this embodiment is: CPU: Intel Core i7-9700K; GPU: RTX 2080 Ti.
The light-source mode of the current image frame is identified with the constructed classification model DCNN1, which distinguishes white light from electronic staining magnification. This task requires training an image classification model; because white-light and electronically stained images are extremely easy for endoscopists to distinguish, this embodiment prefers classification models with few network parameters pre-trained on ImageNet, the largest global public data set, such as MobileNet or Inception. The advantages of such small models are that their accuracy meets the requirement, their network parameters are few, and their prediction speed is high.
A convolutional neural network for image classification generally consists of the following parts (a minimal code sketch follows this list):
(1) Input layer: a pixel matrix, usually from a color or grayscale picture; the length and width of the three-dimensional matrix give the image size, and the depth gives the color channels;
(2) Convolutional layers: each node takes as input only a small patch of the previous layer, typically 3x3 or 5x5, and extracts features of higher abstraction;
(3) Pooling layers: these keep the depth of the three-dimensional matrix but reduce its size, converting a high-resolution image into a lower-resolution one; this further reduces the number of nodes in the fully connected layers and thus the number of parameters;
(4) Fully connected layers: the final result is generally produced by 1 to 2 fully connected layers; after several rounds of convolution and pooling, the information in the image has been abstracted into highly informative features, and the fully connected layers complete the classification task;
(5) Softmax layer: yields the probability distribution of the current sample over the different classes.
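The following minimal TensorFlow/Keras sketch instantiates the five-part structure just listed for the two-class light-source task; the input size, filter counts and layer depths are illustrative assumptions, not the patent's actual DCNN1 configuration.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    model = models.Sequential([
        # (1) input: 224x224 RGB pixel matrix (depth = color channels)
        layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),  # (2) 3x3 convolutions
        layers.MaxPooling2D(),                   # (3) pooling halves the matrix size
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),    # (4) fully connected layer
        layers.Dense(2, activation="softmax"),   # (5) softmax: P(white light), P(staining)
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])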
Senior endoscopists sort the labeled pictures into two categories, white light and electronic staining magnification, to build the data set, and the DCNN1 model is trained until the expected accuracy index is reached: the required accuracy is above 99%.
S3, when DCNN1 identifies the white-light mode, the constructed deep learning target detection model DCNN2 detects suspicious lesions in the image frame in real time and marks each lesion's position with a blue rectangular box.
Deep-learning-based object detection algorithms have accumulated long-term improvements and their performance has stabilized; the current mainstream algorithms include SSD, YOLO and RetinaNet. Given the high real-time requirement, this embodiment prefers the latest version of the YOLO (You Only Look Once) series, YOLOv4, whose detection speed reaches 40 fps (video above 20 fps shows no perceptible delay to the human eye) with good accuracy. The basic principle of YOLO is to divide the image into candidate bounding boxes, run recognition on all boxes in parallel to determine which object class each belongs to, and then merge the boxes to form an optimal bounding box around each object. An expert endoscopist builds the lesion detection data set by marking the lesion position in each picture with an annotation tool, and the DCNN2 model is trained until the expected accuracy index is reached; an annotation example is shown in FIG. 3.
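The box-merging step described above can be illustrated with standard non-maximum suppression (NMS), the post-processing commonly used in YOLO-family detectors; this is a generic NumPy sketch, not necessarily the patent's exact procedure.

    import numpy as np

    def nms(boxes, scores, iou_thresh=0.5):
        """boxes: (N, 4) array of [x1, y1, x2, y2]; returns indices of kept boxes."""
        order = scores.argsort()[::-1]            # highest-confidence boxes first
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(int(i))
            # intersection of the kept box with all remaining boxes
            x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
            y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
            x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
            y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
            inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
            area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
            areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                    (boxes[order[1:], 3] - boxes[order[1:], 1])
            iou = inter / (area_i + areas - inter)
            order = order[1:][iou <= iou_thresh]  # drop boxes overlapping the kept one
        return keep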
The lesion is then extracted from the original image with a conventional image processing algorithm, and the constructed deep learning classification model DCNN3 predicts its risk level; if it is judged high-risk, the lesion is re-marked with a red rectangular box. The blue or red box can be displayed to the endoscopist in real time for diagnostic reference.
This step also requires training a classification model; because it involves disease-type analysis, the task is harder and the accuracy requirement higher. This embodiment therefore prefers pre-trained classification models with larger network parameters, such as VGG, ResNet or DenseNet. A training sample set is built with the pathological result as the gold standard: low-risk lesions include inflammatory lesion signs such as gastric polyp, superficial gastritis, hemorrhagic gastritis, erosive gastritis, atrophic gastritis, bile reflux, benign ulcer and intestinal metaplasia; high-risk lesions include neoplastic lesion signs such as low-grade intraepithelial neoplasia, high-grade intraepithelial neoplasia, advanced carcinoma and intramucosal carcinoma. The DCNN3 model is trained until the expected accuracy index is reached.
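As one possible realization of such transfer learning, the sketch below puts a new two-class head on an ImageNet-pretrained ResNet50 (one of the families named above) and trains only the head first; the hyperparameters are illustrative assumptions.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    base = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet",
        input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False                      # freeze the pretrained backbone initially

    risk_model = models.Sequential([
        base,
        layers.Dense(2, activation="softmax"),  # low-risk vs high-risk lesion
    ])
    risk_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                       loss="sparse_categorical_crossentropy",
                       metrics=["accuracy"])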
The input to classification model DCNN3 in this step is the lesion image extracted from the prediction result of target detection model DCNN2: DCNN2 performs lesion tracking and DCNN3 performs lesion-property analysis. This cascade of two deep learning models with different network structures implements intelligent diagnosis of early gastric cancer under white light and matches the clinical white-light diagnosis workflow. The performance requirements of this stage are: for the lesion detection model, sensitivity above 99%, ensuring no lesion is missed; for the disease classification model, sensitivity above 95% and specificity above 80%, minimizing the rate of missed neoplastic and more advanced lesions.
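A hedged sketch of this white-light cascade at inference time follows: DCNN2 proposes lesion boxes, each lesion is cropped from the original frame, and DCNN3 scores its risk. Here detect_lesions and risk_model are hypothetical stand-ins for the trained models, and the threshold and colors are illustrative.

    import cv2
    import numpy as np

    def diagnose_white_light(frame, detect_lesions, risk_model, high_risk_thresh=0.5):
        annotated = frame.copy()
        for (x1, y1, x2, y2) in detect_lesions(frame):   # DCNN2: lesion boxes
            crop = frame[y1:y2, x1:x2]                   # extract lesion from original image
            crop = cv2.resize(crop, (224, 224))          # match classifier input size
            p_high = float(risk_model(crop[np.newaxis] / 255.0)[0, 1])  # DCNN3: P(high risk)
            color = (0, 0, 255) if p_high >= high_risk_thresh else (255, 0, 0)  # red vs blue (BGR)
            cv2.rectangle(annotated, (x1, y1), (x2, y2), color, 2)
        return annotated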
S4, when DCNN1 identifies the staining-magnification mode, the constructed deep learning instance segmentation model group extracts boundary-range, microvascular-morphology and microstructure feature maps from the image frame in real time. Instance segmentation belongs to image segmentation, a very important and fundamental research direction of digital image processing and computer vision. Briefly, instance segmentation detects object contours or boundaries in an image and then labels every pixel.
Instance segmentation algorithms have a long history of development and have passed through the following stages:
(1) Threshold-based segmentation methods.
(2) Edge-based segmentation methods.
(3) Region-based segmentation methods.
(4) Segmentation methods based on cluster analysis.
(5) Segmentation methods based on wavelet transform.
(6) Methods based on mathematical morphology.
(7) Methods based on convolutional neural networks.
Deep-learning-based instance segmentation obtains a decision function by training a multilayer network and then uses it to classify pixels, achieving segmentation. Its performance on complex tasks is clearly superior to that of traditional algorithms, and it has become the main research direction; current mainstream algorithms include SegNet, RefineNet, Mask R-CNN and U-Net. U-Net is the network structure most widely used in medical image processing and in medical AI research papers, so this embodiment prefers U-Net. The algorithm is a fully convolutional neural network whose architecture resembles the letter U, hence the name; its input and output are both images, it has no fully connected layers, and it combines low-level information, which helps improve precision, with high-level information, which extracts complex features. Its performance characteristics are fast training convergence and strong network generalization.
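A compact functional-API sketch of the U-Net idea follows: an encoder-decoder whose skip connections concatenate low-level detail with high-level features, exactly the combination described above. The depth and channel counts are illustrative and far smaller than a production DCNN4-6 would use.

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def tiny_unet(input_shape=(256, 256, 3)):
        inp = layers.Input(shape=input_shape)
        c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)  # encoder level 1
        p1 = layers.MaxPooling2D()(c1)
        c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)   # encoder level 2
        p2 = layers.MaxPooling2D()(c2)
        b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)    # bottleneck
        u2 = layers.UpSampling2D()(b)
        u2 = layers.Concatenate()([u2, c2])       # skip connection: reuse low-level detail
        c3 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)
        u1 = layers.UpSampling2D()(c3)
        u1 = layers.Concatenate()([u1, c1])       # skip connection at the finest scale
        c4 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)
        out = layers.Conv2D(1, 1, activation="sigmoid")(c4)  # per-pixel mask (feature map)
        return Model(inp, out)

    unet = tiny_unet()
    unet.compile(optimizer="adam", loss="binary_crossentropy")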
As with the target detection model's training set, an expert endoscopist must build the data sets by annotating the feature information in each picture with a labeling tool. This step requires 3 separate data sets, whose annotation tasks outline the boundary range, the microvascular structure and the microstructure respectively; an annotation example is shown in FIG. 4. The three deep learning models in this step all belong to the instance segmentation category and share exactly the same network structure; working in parallel, they achieve complete extraction of the key features of early gastric cancer under staining magnification, matching the clinical staining-magnification diagnosis workflow. The performance requirement is that the overlap (IoU, intersection over union) between the feature region extracted by the AI and the manually annotated region exceed 0.7, which ensures the completeness of the feature information.
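That acceptance check can be written directly; the sketch below computes the IoU between the region extracted by the model and the expert annotation, both given as binary masks of the same shape.

    import numpy as np

    def mask_iou(pred_mask, true_mask):
        """IoU of two boolean masks of identical shape."""
        inter = np.logical_and(pred_mask, true_mask).sum()
        union = np.logical_or(pred_mask, true_mask).sum()
        return float(inter) / float(union) if union else 1.0

    # A model meets the stated requirement when mask_iou(pred, gt) > 0.7
    # on the annotated evaluation set.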
The three feature maps extracted above are fed in parallel into the constructed deep learning decision model DCNN7, which outputs whether cancer is present, the degree of confidence and the differentiation type. If cancer is diagnosed, its extent is outlined with a red polygon in the original image, and the composite feature map (the superposition of the three feature maps) is displayed together with the diagnostic information to the endoscopist in real time for diagnostic reference.
This step involves an artificial intelligence decision algorithm, which suits problems where a complex data law or logical relation links the decision result to several related input features. The most widely used decision models are decision trees, random forests and deep forests. Random forests, which belong to traditional machine learning and are heavily used in real-world data analysis, greatly improve accuracy over single decision trees and alleviate their tendency to overfit, although overfitting remains a concern. The deep forest algorithm builds a multilayer structure through fine-grained scanning and cascading; this structure has adaptive model complexity and achieves competitive performance on many kinds of tasks. Following the staining-magnification diagnosis workflow for early gastric cancer described in FIG. 5, in this embodiment the boundary range, microvascular structure and microstructure under staining magnification are the main features for diagnosing gastric cancer; the pixel matrices of the three feature maps serve as the input of a deep forest model, and multiple rounds of training with randomly drawn training and validation samples yield a decision model that meets the performance requirements.
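The sketch below illustrates the decision stage's input format: the three feature maps are flattened and concatenated into one vector per image. A scikit-learn random forest stands in for the deep forest model named above, since deep forest implementations and APIs vary; the data here are synthetic placeholders, and the label coding is an assumption.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def stack_features(boundary_map, vessel_map, structure_map):
        """Concatenate the three feature-map pixel matrices into one vector."""
        return np.concatenate([m.ravel() for m in (boundary_map, vessel_map, structure_map)])

    rng = np.random.default_rng(0)
    maps = rng.integers(0, 2, size=(20, 3, 64, 64))  # placeholder: 20 samples, three 64x64 masks
    X = np.stack([stack_features(*m) for m in maps])
    y = rng.integers(0, 3, size=20)  # assumed coding: 0 = non-cancerous, 1/2 = differentiation types

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, y)
    print(clf.predict_proba(X[:1]))  # class probabilities serve as the degree of confidence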
The input of the deep decision model DCNN7 is the boundary, microvessel and microstructure feature maps extracted by the instance segmentation model group DCNNS: DCNNS performs key-feature extraction, and DCNN7 performs lesion-property analysis over the multiple key features. This cascade of two deep learning models with different network structures implements intelligent diagnosis of early gastric cancer under staining magnification, and the extraction and display of the feature maps help the endoscopist understand the models' diagnostic reasoning, giving the deep learning models strong interpretability.
Early gastric cancer screening is generally divided into two stages: primary screening and fine examination. Primary screening covers patients undergoing routine gastroscopy or symptom-driven gastroscopic review. It uses an ordinary white-light gastroscope, whose advantage is that the entire gastric mucosa can be observed comprehensively and clearly; an endoscopist familiar with the mucosal characteristics of early gastric cancer can spot suspicious lesions such as local changes in mucosal color or surface structure. The whole examination takes about 4-6 minutes; the endoscopist observes the gastric cavity comprehensively under white light, and when a suspicious lesion (one for which early cancer cannot be ruled out) is found, the patient is advised to arrange the next stage of endoscopic fine examination as soon as possible. Because an ordinary white-light gastroscope can only assess a lesion as a whole (its position and extent) and cannot examine its details (precise boundary, microvessels and microstructure), its key purpose is to ensure that every suspicious lesion is found and none is missed.
The fine examination is different: it uses a staining-magnification gastroscope, which highlights the mucosal fine structure by altering the light spectrum and, under magnification, reveals the lesion's microstructure and microvessels, compensating for the ordinary white-light endoscope's weakness in local-feature observation and improving the accuracy of early gastric cancer screening. Related research shows that the sensitivity of ordinary white-light endoscopy for diagnosing early gastric cancer is only 40%, but combined with electronic staining magnification it can rise to 95%. A fine examination typically takes about 30 minutes, costs over 4 times as much as an ordinary white-light endoscopy, and must be completed by a more experienced, specialized endoscopist, who carefully observes each suspicious lesion under staining magnification and comprehensively weighs feature factors such as whether a clear boundary exists, whether the microstructure has disappeared and whether the microvessels are abnormal, as shown in FIG. 5. If cancer is suspected, the differentiation type of the cancerous area is further diagnosed, then several tissue samples inside and outside the area are biopsied and sent to the pathology department for final pathological diagnosis (pathology being the gold standard for diagnosing gastric cancer). The key purpose of the fine examination is thus to judge the lesion's nature more accurately from its appearance under the staining-magnification endoscope, guiding pathological biopsy more precisely, improving biopsy efficiency and the early-cancer detection rate, and greatly reducing the risks of missed diagnosis and misdiagnosis.
The diagnosis of early gastric cancer is extremely complex and tedious: the changes of an early gastric cancer lesion are usually subtle and have no specific appearance under white light, so missed diagnosis is likely; and the diagnostic criteria for early gastric cancer under electronic staining magnification are very complex and the lesion appearances vary widely, so an endoscopist needs strong knowledge reserves and rich experience to complete an effective diagnosis.
Therefore, although deep learning has strong technical advantages and mature applications in digestive-tract image processing, no single type of deep learning algorithm can solve the complex task of early gastric cancer auxiliary diagnosis. The core of the invention is to construct multiple deep learning models for different tasks (covering classification, target detection, instance segmentation and deep decision) and, through a parallel-and-cascade model fusion technique, provide endoscopists with a full-workflow intelligent auxiliary diagnosis function during early gastric cancer screening. The deep learning models involved in the invention are trained under TensorFlow 2.0, the most widely used open-source deep learning framework; compared with other frameworks, its advantages are high training efficiency, process visualization, and intuitive, convenient parameter tuning.
The above description covers only preferred embodiments of the present invention, but the scope of protection of the present invention is not limited thereto; any equivalent substitution or modification of the technical solutions and inventive concepts of the present invention that a person skilled in the art could conceive within the technical scope disclosed herein shall fall within the scope of protection of the present invention.

Claims (5)

1. An early gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology, characterized in that it comprises the following steps:
S1, construct image classification model 1 for identifying the white-light and electronic staining-magnification light-source modes; target detection model 2 for marking and tracking suspicious lesions under white light; image classification model 3 for high/low-risk analysis of lesions under white light; an instance segmentation model group for extracting the boundary range, microvascular morphology and microstructure features of lesions under staining magnification; and decision model 7 for analyzing lesion properties by integrating the multiple key features;
S2, the image acquisition module samples the continuous gastroscope video at a fixed time interval by frame skipping to form a continuous sequence of image frames, and image classification model 1 identifies the light-source mode of the current frame; if the white-light mode is identified, proceed to step S3; if the staining-magnification mode is identified, proceed to step S4;
S3, target detection model 2 detects suspicious lesions in the image frame in real time and marks their positions; each lesion is extracted from the original image with a conventional image processing algorithm, and image classification model 3 predicts its risk level and marks high-risk lesions;
S4, the instance segmentation model group extracts boundary-range, microvascular-morphology and microstructure feature maps from the image frame in real time; the three feature maps are fed in parallel into the constructed decision model 7, which outputs whether the image shows cancer, the degree of confidence and the differentiation type; if cancer is diagnosed, its extent is marked in the original image.
2. The early gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology according to claim 1, characterized in that: in step S3 the lesion position is marked with a blue rectangular box, and if the lesion is judged high-risk it is re-marked with a red rectangular box; the blue or red box can be displayed to the endoscopist in real time for diagnostic reference.
3. The early gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology according to claim 1, characterized in that: the lesions in step S3 are divided into low-risk and high-risk lesions, wherein low-risk lesions include the inflammatory lesion signs of gastric polyp, superficial gastritis, hemorrhagic gastritis, erosive gastritis, atrophic gastritis, bile reflux, benign ulcer and intestinal metaplasia, and high-risk lesions include the neoplastic lesion signs of low-grade intraepithelial neoplasia, high-grade intraepithelial neoplasia, advanced carcinoma and intramucosal carcinoma.
4. The early gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology according to claim 1, characterized in that: the instance segmentation model group consists of deep learning model 4 for extracting boundaries, deep learning model 5 for extracting microvessels and deep learning model 6 for extracting microstructures; the three models share exactly the same network structure and work in parallel to achieve complete extraction of the key features of early gastric cancer under staining magnification.
5. The early gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology according to claim 1, characterized in that: in step S4 the extent of the cancer is outlined with a red polygon in the original image, and a composite feature map, formed by superimposing the three feature maps, is displayed together with the diagnostic information to the endoscopist in real time for diagnostic reference.
CN202010675476.0A 2020-07-14 2020-07-14 Early gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology Pending CN111899229A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010675476.0A CN111899229A (en) 2020-07-14 2020-07-14 Early gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010675476.0A CN111899229A (en) 2020-07-14 2020-07-14 Early gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology

Publications (1)

Publication Number Publication Date
CN111899229A 2020-11-06

Family

ID=73191723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010675476.0A Pending CN111899229A (en) Early gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology

Country Status (1)

Country Link
CN (1) CN111899229A (en)



Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013063097A (en) * 2011-09-15 2013-04-11 Fujifilm Corp Endoscope system and light source device
WO2017042812A2 (en) * 2015-09-10 2017-03-16 Magentiq Eye Ltd. A system and method for detection of suspicious tissue regions in an endoscopic procedure
WO2018120942A1 (en) * 2016-12-31 2018-07-05 西安百利信息科技有限公司 System and method for automatically detecting lesions in medical image by means of multi-model fusion
US10127665B1 (en) * 2017-07-31 2018-11-13 Hefei University Of Technology Intelligent assistant judgment system for images of cervix uteri and processing method thereof
CN107967946A (en) * 2017-12-21 2018-04-27 武汉大学 Operating gastroscope real-time auxiliary system and method based on deep learning
CN108852268A (en) * 2018-04-23 2018-11-23 浙江大学 A kind of digestive endoscopy image abnormal characteristic real-time mark system and method
WO2019245009A1 (en) * 2018-06-22 2019-12-26 株式会社Aiメディカルサービス Method of assisting disease diagnosis based on endoscope image of digestive organ, diagnosis assistance system, diagnosis assistance program, and computer-readable recording medium having said diagnosis assistance program stored thereon
CN109118485A (en) * 2018-08-13 2019-01-01 复旦大学 Digestive endoscope image classification based on multitask neural network cancer detection system early
CN109584218A (en) * 2018-11-15 2019-04-05 首都医科大学附属北京友谊医院 A kind of construction method of gastric cancer image recognition model and its application
CN110136106A (en) * 2019-05-06 2019-08-16 腾讯科技(深圳)有限公司 Recognition methods, system, equipment and the endoscopic images system of medical endoscope image
CN110189303A (en) * 2019-05-07 2019-08-30 上海珍灵医疗科技有限公司 A kind of NBI image processing method and its application based on deep learning and image enhancement
CN110495847A (en) * 2019-08-23 2019-11-26 重庆天如生物科技有限公司 Alimentary canal morning cancer assistant diagnosis system and check device based on deep learning
CN110974179A (en) * 2019-12-20 2020-04-10 山东大学齐鲁医院 Auxiliary diagnosis system for stomach precancer under electronic staining endoscope based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
林建姣; 黄思霖; 马丽华; 刘艳; 肖卉; 胡红松; 项立: "Clinical diagnostic value of narrow-band imaging magnifying endoscopy combined with the AI sandwich staining method for flat-type gastric cancer", Jiangxi Medical Journal (江西医药), no. 05, 20 May 2017 *
钟碧莹: "Diagnostic value of blue laser imaging in early cancers and precancerous lesions of the upper gastrointestinal tract", Journal of Taishan Medical College (泰山医学院学报), no. 07, 17 July 2018 *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634261A (en) * 2020-12-30 2021-04-09 上海交通大学医学院附属瑞金医院 Stomach cancer focus detection method and device based on convolutional neural network
CN112990267A (en) * 2021-02-07 2021-06-18 哈尔滨医科大学 Breast ultrasonic imaging method and device based on style migration model and storage medium
CN112990267B (en) * 2021-02-07 2022-06-28 哈尔滨医科大学 Breast ultrasonic imaging method and device based on style migration model and storage medium
CN112991325A (en) * 2021-04-14 2021-06-18 上海孚慈医疗科技有限公司 Intelligent coding-based speckled red-emitting image acquisition and processing method and system
CN113116305A (en) * 2021-04-20 2021-07-16 深圳大学 Nasopharyngeal endoscope image processing method and device, electronic equipment and storage medium
CN113205492A (en) * 2021-04-26 2021-08-03 武汉大学 Microvessel distortion degree quantification method for gastric mucosa staining amplification imaging
CN113205492B (en) * 2021-04-26 2022-05-13 武汉大学 Microvessel distortion degree quantification method for gastric mucosa staining amplification imaging
CN113284613A (en) * 2021-05-24 2021-08-20 暨南大学 Face diagnosis system based on deep learning
CN113313203A (en) * 2021-06-22 2021-08-27 哈尔滨工程大学 Medical image classification method based on extension theory and deep learning
CN113313203B (en) * 2021-06-22 2022-11-01 哈尔滨工程大学 Medical image classification method based on extension theory and deep learning
CN113743463A (en) * 2021-08-02 2021-12-03 中国科学院计算技术研究所 Tumor benign and malignant identification method and system based on image data and deep learning
CN113743463B (en) * 2021-08-02 2023-09-26 中国科学院计算技术研究所 Tumor benign and malignant recognition method and system based on image data and deep learning
CN113610847B (en) * 2021-10-08 2022-01-04 武汉楚精灵医疗科技有限公司 Method and system for evaluating stomach markers in white light mode
CN113610847A (en) * 2021-10-08 2021-11-05 武汉楚精灵医疗科技有限公司 Method and system for evaluating stomach markers in white light mode
CN113972004A (en) * 2021-10-20 2022-01-25 华中科技大学同济医学院附属协和医院 Deep learning-based multi-model fusion musculoskeletal ultrasonic diagnosis system
CN113989236A (en) * 2021-10-27 2022-01-28 北京医院 Gastroscope image intelligent target detection system and method
CN113920309A (en) * 2021-12-14 2022-01-11 武汉楚精灵医疗科技有限公司 Image detection method, image detection device, medical image processing equipment and storage medium
CN113935993B (en) * 2021-12-15 2022-03-01 武汉楚精灵医疗科技有限公司 Enteroscope image recognition system, terminal device, and storage medium
CN113935993A (en) * 2021-12-15 2022-01-14 武汉楚精灵医疗科技有限公司 Enteroscope image recognition system, terminal device, and storage medium
CN116433552A (en) * 2021-12-27 2023-07-14 深圳开立生物医疗科技股份有限公司 Method and related device for constructing focus image detection model in dyeing scene
CN114359279B (en) * 2022-03-18 2022-06-03 武汉楚精灵医疗科技有限公司 Image processing method, image processing device, computer equipment and storage medium
CN114359279A (en) * 2022-03-18 2022-04-15 武汉楚精灵医疗科技有限公司 Image processing method, image processing device, computer equipment and storage medium
CN114511556A (en) * 2022-04-02 2022-05-17 武汉大学 Gastric mucosa bleeding risk early warning method and device and medical image processing equipment
CN117152509A (en) * 2023-08-28 2023-12-01 北京透彻未来科技有限公司 Stomach pathological diagnosis and typing system based on cascade deep learning
CN117152509B (en) * 2023-08-28 2024-04-30 北京透彻未来科技有限公司 Stomach pathological diagnosis and typing system based on cascade deep learning
CN117238532A (en) * 2023-11-10 2023-12-15 武汉楚精灵医疗科技有限公司 Intelligent follow-up method and device
CN117238532B (en) * 2023-11-10 2024-01-30 武汉楚精灵医疗科技有限公司 Intelligent follow-up method and device
CN117747121A (en) * 2023-12-19 2024-03-22 首都医科大学宣武医院 Diabetes risk prediction system based on multiple models

Similar Documents

Publication Publication Date Title
CN111899229A (en) Early gastric cancer auxiliary diagnosis method based on deep learning multi-model fusion technology
JP7335552B2 (en) Diagnostic imaging support device, learned model, operating method of diagnostic imaging support device, and diagnostic imaging support program
CN110600122B (en) Digestive tract image processing method and device and medical system
Pogorelov et al. Deep learning and hand-crafted feature based approaches for polyp detection in medical videos
Igarashi et al. Anatomical classification of upper gastrointestinal organs under various image capture conditions using AlexNet
CN110189303B (en) NBI image processing method based on deep learning and image enhancement and application thereof
CN111798425B (en) Intelligent detection method for mitotic image in gastrointestinal stromal tumor based on deep learning
Souaidi et al. A new automated polyp detection network MP-FSSD in WCE and colonoscopy images based fusion single shot multibox detector and transfer learning
JP7550409B2 (en) Image diagnosis device, image diagnosis method, and image diagnosis program
CN111862090B (en) Method and system for esophageal cancer preoperative management based on artificial intelligence
Noor et al. GastroNet: A robust attention‐based deep learning and cosine similarity feature selection framework for gastrointestinal disease classification from endoscopic images
Ghosh et al. Block based histogram feature extraction method for bleeding detection in wireless capsule endoscopy
Sun et al. Channel separation-based network for the automatic anatomical site recognition using endoscopic images
CN112419246B (en) Depth detection network for quantifying esophageal mucosa IPCLs blood vessel morphological distribution
Lin et al. Lesion-decoupling-based segmentation with large-scale colon and esophageal datasets for early cancer diagnosis
CN117649373A (en) Digestive endoscope image processing method and storage medium
KR20110130288A (en) Diagnosis of submucosal tumor using endoscopic ultrasonic image analysis
Pozdeev et al. Anatomical landmarks detection for laparoscopic surgery based on deep learning technology
Wang et al. Three feature streams based on a convolutional neural network for early esophageal cancer identification
JP2019013461A (en) Probe type confocal laser microscopic endoscope image diagnosis support device
Bernal et al. Towards intelligent systems for colonoscopy
CN112734749A (en) Vocal leukoplakia auxiliary diagnosis system based on convolutional neural network model
Chuquimia et al. Polyp follow-up in an intelligent wireless capsule endoscopy
Yin et al. Hybrid regional feature cutting network for thyroid ultrasound images classification
Yan Intelligent diagnosis of precancerous lesions in gastrointestinal endoscopy based on advanced deep learning techniques and limited data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20240927