CN117132840B - Peptic ulcer classification method and system based on AHS classification and Forrest classification - Google Patents
- Publication number: CN117132840B (application CN202311394585.5A)
- Authority
- CN
- China
- Prior art keywords
- ulcer
- classification
- model
- forrest
- ahs
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V10/764: Image or video recognition using pattern recognition or machine learning, using classification (e.g. of video objects)
- G06T7/0012: Biomedical image inspection
- G06V10/20: Image preprocessing
- G06V10/25: Determination of region of interest [ROI] or volume of interest [VOI]
- G06V10/30: Noise filtering
- G06V10/32: Normalisation of the pattern dimensions
- G06V10/72: Data preparation, e.g. statistical preprocessing of image or video features
- G06V10/82: Recognition using neural networks
- G06T2207/10068: Endoscopic image
- G06T2207/20081: Training; learning
- G06T2207/20084: Artificial neural networks [ANN]
- G06T2207/30092: Stomach; gastric
- G06V2201/03: Recognition of patterns in medical or anatomical images
- Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change
Abstract
The invention belongs to the field of computer vision and medical image analysis, and in particular relates to a peptic ulcer classification method and system based on AHS (active/healing/scar) staging and Forrest classification. An upper digestive tract detection image is acquired and input into an upper digestive tract disease detection model to obtain an ulcer lesion detection result; the ulcer lesion detection result is cropped to its bounding box, and it is judged whether the cropped detection result meets the gastric ulcer classification condition; if it does, image scaling, normalization and data enhancement are applied to the ulcer lesion detection result to obtain an ulcer classification target image; the ulcer classification target image is then input into an ulcer classification model, and an ulcer classification result is output. The invention assists physicians' diagnosis by classifying ulcers, effectively improving the efficiency and accuracy of peptic ulcer diagnosis.
Description
Technical Field
The invention belongs to the technical field of computer vision and medical image analysis, and particularly relates to a peptic ulcer classification method and system based on AHS (active/healing/scar) staging and Forrest classification.
Background
With the development of endoscopy and growing public awareness of digestive-tract cancers, attention to peptic ulcers has gradually increased; without intervention and treatment, an ulcer carries a risk of later malignant transformation.
AHS staging describes the developmental stage of an ulcer, whereas the Forrest classification is a method of assessing the rebleeding risk of a bleeding peptic ulcer, which helps to judge the likelihood of recurrent bleeding and thereby select the best treatment. According to the Forrest classification, ulcers can be divided into two types: high-risk ulcers and low-risk ulcers.
In both AHS staging and the Forrest classification, the features of neighbouring categories are relatively close despite the existence of strict criteria, and the heavy workload of endoscopy easily impairs physicians' judgment. It is therefore desirable to design a deep-learning-based computer-aided diagnosis technique.
Disclosure of Invention
According to a first aspect of the present invention, the invention claims a peptic ulcer classification method based on AHS staging and Forrest classification, comprising:
acquiring an upper digestive tract detection image, and inputting it into an upper digestive tract disease detection model to obtain an ulcer lesion detection result;
cropping the ulcer lesion detection result to its bounding box, and judging whether the cropped detection result meets the gastric ulcer classification condition;
if the cropped ulcer lesion detection result meets the gastric ulcer classification condition, performing image scaling, normalization and data enhancement on the ulcer lesion detection result to obtain an ulcer classification target image;
and inputting the ulcer classification target image into an ulcer classification model, and outputting an ulcer classification result.
Further, acquiring an upper digestive tract detection image and inputting it into an upper digestive tract disease detection model to obtain an ulcer lesion detection result specifically includes:
acquiring an upper digestive tract detection image;
extracting features of the upper digestive tract detection image, and outputting detection feature maps at a plurality of granularities;
integrating the detection feature maps of each granularity, and outputting an integrated detection feature map;
performing feature statistics on the integrated detection feature map, and establishing a feature set;
based on the feature set, applying a dynamic convolution containing a self-attention mechanism to the integrated feature map, and outputting an ulcer lesion detection result conforming to the feature set.
Further, cropping the ulcer lesion detection result to its bounding box and judging whether the cropped detection result meets the gastric ulcer classification condition specifically includes:
cropping out the ulcer lesion detection result through a bounding box, and applying preset pixel padding to the bounding box to obtain the ulcer lesion detection cropping result;
judging whether the shortest side of the ulcer lesion detection cropping result reaches a threshold ulcer_flag; if so, the gastric ulcer classification condition is met.
Further, if the cropped ulcer lesion detection result meets the gastric ulcer classification condition, performing image scaling, normalization and data enhancement on it to obtain an ulcer classification target image specifically includes:
realizing centering through mean removal, and, in the data enhancement, applying spatial-class and color-transformation-class augmentations;
the spatial class comprises translation, flipping and rotation operations on the ulcer lesion detection result;
the color transformation class comprises noise, brightness, contrast and blur operations.
Further, inputting the ulcer classification target image into the ulcer classification model and outputting the ulcer classification result specifically includes:
the ulcer classification model comprises a gastric ulcer AHS staging model and a Forrest classification model;
inputting the ulcer classification target image into the gastric ulcer AHS staging model to obtain a first classification result;
when the first classification result is stage A, inputting the ulcer classification target image into the Forrest classification model to obtain a second classification result;
if the first classification result is stage H or stage S, the ulcer classification target image need not be input into the Forrest classification model, because the probability of rebleeding is low for ulcers in the healing and scar stages;
outputting the first classification result and/or the second classification result as the ulcer classification result;
the gastric ulcer AHS staging model and the Forrest classification model adopt MobileNetV3, a multilayer classification model based on deep learning;
both the gastric ulcer AHS staging model and the Forrest classification model are composed of a plurality of inverted residual blocks;
inputting the ulcer classification target image into the gastric ulcer AHS staging model or the Forrest classification model comprises:
refining the features through a conv layer with an HSwish activation function;
inputting the ulcer classification target image into N inverted residual blocks, extracting the features of the ulcer, and separating the ulcer from the background;
outputting the classification labels and scores of the gastric ulcer AHS staging model and the Forrest classification model through average pooling and a fully connected layer;
the inverted residual block comprises a depthwise separable convolution, an SE block and a 1×1 convolution connected in sequence;
the SE block comprises a global average pooling layer, a 1×1 convolution layer, a ReLU activation layer, a 1×1 convolution layer and an HSwish activation layer connected in sequence;
the multilayer classification model of the gastric ulcer AHS staging model and the Forrest classification model comprises a shallow network and a deep network;
in the shallow network, both the gastric ulcer AHS staging model and the Forrest classification model focus on low-level natural characteristics of the ulcer such as color change and texture;
in the deep network, the gastric ulcer AHS staging model focuses on information such as the yellow/white coating (slough) area, layering, annular dyke, scar shape and mucosal folds of the ulcer, while the Forrest classification model highlights factors such as bleeding points, clots and scabs, visible blood vessels, and features of the ulcer base.
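The inverted residual structure above ends in an SE (squeeze-and-excitation) block built from the hard-activation family used by MobileNetV3 (the text's "HSwick" is rendered here as HSwish). A minimal NumPy sketch follows; the weight matrices are illustrative placeholders, and the gate uses the hard-sigmoid conventional in MobileNetV3, which is an assumption where the text names a final HSwish layer:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hsigmoid(x):
    # hard sigmoid: piecewise-linear approximation of the sigmoid
    return np.clip(x + 3.0, 0.0, 6.0) / 6.0

def hswish(x):
    # hard swish ("HSwish"): x * hsigmoid(x), used throughout MobileNetV3
    return x * hsigmoid(x)

def se_block(feat, w_reduce, w_expand):
    """Squeeze-and-Excitation: global average pooling -> 1x1 conv (reduce)
    -> ReLU -> 1x1 conv (expand) -> hard gate -> channel-wise rescaling.
    feat has shape (C, H, W); on the pooled channel vector the 1x1
    convolutions act as plain matrices."""
    squeeze = feat.mean(axis=(1, 2))                        # (C,) pooled descriptor
    excite = hsigmoid(w_expand @ relu(w_reduce @ squeeze))  # (C,) gates in [0, 1]
    return feat * excite[:, None, None]                     # rescale each channel
```

With zero placeholder weights the gate is hsigmoid(0) = 0.5, so every channel is simply halved; trained weights would instead emphasise informative channels and suppress the rest.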
According to a second aspect of the present invention, the invention claims a peptic ulcer classification system based on AHS staging and Forrest classification, comprising:
a lesion detection module, configured to acquire an upper digestive tract detection image and input it into an upper digestive tract disease detection model to obtain an ulcer lesion detection result;
an ulcer lesion cropping module, configured to crop the ulcer lesion detection result to its bounding box and judge whether the cropped detection result meets the gastric ulcer classification condition;
an ulcer classification preprocessing module, configured to perform image scaling, normalization and data enhancement on the ulcer lesion detection result if the cropped result meets the gastric ulcer classification condition, obtaining an ulcer classification target image;
and an ulcer classification output module, configured to input the ulcer classification target image into the ulcer classification model and output an ulcer classification result.
Further, the lesion detection module is specifically configured to:
acquire an upper digestive tract detection image;
extract features of the upper digestive tract detection image, and output detection feature maps at a plurality of granularities;
integrate the detection feature maps of each granularity, and output an integrated detection feature map;
perform feature statistics on the integrated detection feature map, and establish a feature set;
based on the feature set, apply a dynamic convolution containing a self-attention mechanism to the integrated feature map, and output an ulcer lesion detection result conforming to the feature set.
Further, the ulcer lesion cropping module is specifically configured to:
crop out the ulcer lesion detection result through a bounding box, and apply preset pixel padding to the bounding box to obtain the ulcer lesion detection cropping result;
judge whether the shortest side of the ulcer lesion detection cropping result reaches a threshold ulcer_flag; if so, the gastric ulcer classification condition is met.
Further, the ulcer classification preprocessing module is specifically configured to:
realize centering through mean removal, and, in the data enhancement, apply spatial-class and color-transformation-class augmentations;
the spatial class comprises translation, flipping and rotation operations on the ulcer lesion detection result;
the color transformation class comprises noise, brightness, contrast and blur operations.
Further, the ulcer classification output module specifically includes:
the ulcer classification model comprises a gastric ulcer AHS staging model and a Forrest classification model;
inputting the ulcer classification target image into the gastric ulcer AHS staging model to obtain a first classification result;
when the first classification result is stage A, inputting the ulcer classification target image into the Forrest classification model to obtain a second classification result;
if the first classification result is stage H or stage S, the ulcer classification target image need not be input into the Forrest classification model, because the probability of rebleeding is low for ulcers in the healing and scar stages;
outputting the first classification result and/or the second classification result as the ulcer classification result;
the gastric ulcer AHS staging model and the Forrest classification model adopt MobileNetV3, a multilayer classification model based on deep learning;
both the gastric ulcer AHS staging model and the Forrest classification model are composed of a plurality of inverted residual blocks;
inputting the ulcer classification target image into the gastric ulcer AHS staging model or the Forrest classification model comprises:
refining the features through a conv layer with an HSwish activation function;
inputting the ulcer classification target image into N inverted residual blocks, extracting the features of the ulcer, and separating the ulcer from the background;
outputting the classification labels and scores of the gastric ulcer AHS staging model and the Forrest classification model through average pooling and a fully connected layer;
the inverted residual block comprises a depthwise separable convolution, an SE block and a 1×1 convolution connected in sequence;
the SE block comprises a global average pooling layer, a 1×1 convolution layer, a ReLU activation layer, a 1×1 convolution layer and an HSwish activation layer connected in sequence;
the multilayer classification model of the gastric ulcer AHS staging model and the Forrest classification model comprises a shallow network and a deep network;
in the shallow network, both the gastric ulcer AHS staging model and the Forrest classification model focus on low-level natural characteristics of the ulcer such as color change and texture;
in the deep network, the gastric ulcer AHS staging model focuses on information such as the yellow/white coating (slough) area, layering, annular dyke, scar shape and mucosal folds of the ulcer, while the Forrest classification model highlights factors such as bleeding points, clots and scabs, visible blood vessels, and features of the ulcer base.
Drawings
FIG. 1 is a workflow diagram of a method of classifying peptic ulcers based on AHS classification and Forrest classification as claimed in the present invention;
FIG. 2 is a schematic diagram of lesion detection according to an AHS stage and Forrest stage based peptic ulcer classification method of the present invention;
FIG. 3 is a schematic view of lesion clipping according to an AHS stage and Forrest stage based peptic ulcer classification method of the present invention;
FIG. 4 is a schematic diagram of a classification model of a classification method of peptic ulcers based on AHS classification and Forrest classification as claimed in the present invention;
FIG. 5 is a diagram of the inverted residual block of the peptic ulcer classification method based on AHS classification and Forrest classification as claimed in the present invention;
FIG. 6 is a schematic diagram of the SE block of the peptic ulcer classification method based on AHS classification and Forrest classification as claimed in the present invention;
FIG. 7 is a schematic diagram of a feature extraction process of a method for classifying peptic ulcers based on AHS classification and Forrest classification as claimed in the present invention;
fig. 8 is a block diagram of a peptic ulcer classification system based on AHS classification and Forrest classification as claimed in the present invention.
Detailed Description
At present, during upper gastrointestinal endoscopy, interference such as reflections from the endoscope, camera shake, gastrointestinal peristalsis and sensor noise greatly affects the medical examination of the digestive tract.
The present scheme aims to assist physicians' diagnosis by classifying ulcers, effectively improving the efficiency and accuracy of peptic ulcer diagnosis.
According to a first embodiment of the present invention, referring to fig. 1, the invention claims a peptic ulcer classification method based on AHS staging and Forrest classification, comprising:
acquiring an upper digestive tract detection image, and inputting it into an upper digestive tract disease detection model to obtain an ulcer lesion detection result;
cropping the ulcer lesion detection result to its bounding box, and judging whether the cropped detection result meets the gastric ulcer classification condition;
if the cropped ulcer lesion detection result meets the gastric ulcer classification condition, performing image scaling, normalization and data enhancement on the ulcer lesion detection result to obtain an ulcer classification target image;
and inputting the ulcer classification target image into an ulcer classification model, and outputting an ulcer classification result.
Currently, according to the Sakita-Miwa classification (the internationally used endoscopic AHS staging of ulcers), the development of an ulcer can be divided into three stages; the aim is to determine which stage the ulcer is in so as to plan the next step of treatment. The specific stages are as follows:
Stage A (active stage): the initial stage of onset; inflammation and edema at the ulcer margin are obvious, tissue repair has not yet begun, and the mucosal defect is ongoing. Depending on severity, it can be subdivided into stages A1 and A2.
Stage H (healing stage): the ulcer shrinks, the inflammation subsides, and regenerating epithelium and converging mucosal folds are obvious; this is the period in which the ulcer is healing, and it can be divided into stages H1 and H2.
Stage S (scar stage): the period after healing, when the ulcer is completely repaired and only a scar remains. It can likewise be subdivided into stages S1 and S2.
Further, acquiring an upper digestive tract detection image and inputting it into an upper digestive tract disease detection model to obtain an ulcer lesion detection result specifically includes:
acquiring an upper digestive tract detection image;
extracting features of the upper digestive tract detection image, and outputting detection feature maps at a plurality of granularities;
integrating the detection feature maps of each granularity, and outputting an integrated detection feature map;
performing feature statistics on the integrated detection feature map, and establishing a feature set;
based on the feature set, applying a dynamic convolution containing a self-attention mechanism to the integrated feature map, and outputting an ulcer lesion detection result conforming to the feature set.
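The "dynamic convolution containing a self-attention mechanism" can be read as attention over candidate kernels, in the spirit of published dynamic-convolution designs (e.g. CondConv-style kernel mixing). The NumPy sketch below is an interpretation under that assumption, with illustrative shapes and placeholder weights, not the patent's exact operator:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_conv_1x1(feat, kernels, w_attn):
    """Attention-based dynamic 1x1 convolution: attention scores computed
    from the globally pooled feature select a mixture of K candidate
    kernels, and the mixed, input-dependent kernel is applied to the map.
    feat: (C, H, W); kernels: (K, C_out, C); w_attn: (K, C)."""
    context = feat.mean(axis=(1, 2))             # (C,) global average pool
    attn = softmax(w_attn @ context)             # (K,) kernel attention weights
    mixed = np.tensordot(attn, kernels, axes=1)  # (C_out, C) mixed kernel
    C, H, W = feat.shape
    return (mixed @ feat.reshape(C, H * W)).reshape(-1, H, W)
```

Because the kernel mixture depends on the pooled input, the effective convolution adapts per image, which is the property the description appears to rely on for fitting detections to the feature set.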
Further, cropping the ulcer lesion detection result to its bounding box and judging whether the cropped detection result meets the gastric ulcer classification condition specifically includes:
cropping out the ulcer lesion detection result through a bounding box, and applying preset pixel padding to the bounding box to obtain the ulcer lesion detection cropping result;
judging whether the shortest side of the ulcer lesion detection cropping result reaches a threshold ulcer_flag; if so, the gastric ulcer classification condition is met.
In this embodiment, referring to fig. 2, the rectangular box represents the ulcer lesion area obtained by the upper gastrointestinal disease detection model, i.e. the bounding box;
referring to fig. 3, cropping the ulcer lesion through a bounding box refers to cropping the input image. To preserve the integrity of the ulcer's global features, the inner bounding box is expanded by a margin of 32 pixels to obtain a peripheral (outer) bounding box;
in this embodiment, the image within the peripheral bounding box is cropped out and used as the input image of the classification model. At the same time, it is judged whether the gastric ulcer classification criterion is met, specifically whether the shortest side of the input image reaches the threshold ulcer_flag; if it does, the classification condition is satisfied. The value of ulcer_flag may be adjusted for different scenes: under a magnifying endoscope, ulcer_flag = 336; under normal white-light endoscopy, ulcer_flag = 224.
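The cropping-and-threshold step just described can be sketched as follows. The function names are illustrative; the 32-pixel margin and the ulcer_flag values 336 (magnifying endoscope) and 224 (white-light endoscopy) are the ones given in the description:

```python
def crop_with_padding(img_w, img_h, box, pad=32):
    """Expand a detection bounding box by `pad` pixels on each side,
    clamped to the image bounds, giving the peripheral (outer) box."""
    x1, y1, x2, y2 = box
    return (max(0, x1 - pad), max(0, y1 - pad),
            min(img_w, x2 + pad), min(img_h, y2 + pad))

def meets_classification_condition(crop_box, magnifying=False):
    """A crop qualifies for classification when its shortest side
    reaches the scene-dependent threshold ulcer_flag."""
    ulcer_flag = 336 if magnifying else 224  # thresholds from the description
    x1, y1, x2, y2 = crop_box
    return min(x2 - x1, y2 - y1) >= ulcer_flag
```

For example, a 250×220 detection box padded by 32 pixels yields a 314×284 crop, which qualifies under white light but not under a magnifying endoscope.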
Further, if the cropped ulcer focus detection result meets the gastric ulcer classification condition, performing image scaling, normalization and data enhancement on the ulcer focus detection result to obtain an ulcer classification target image, which specifically includes:
centering is achieved through mean removal, and the data enhancement applies both spatial-class and color-transform-class augmentations;
the spatial class comprises translation, flipping and rotation operations on the ulcer focus detection result;
the color-transform class comprises noise, brightness, contrast and blur operations.
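A minimal sketch of this preprocessing, operating on a single-channel image given as nested lists (a real pipeline would use NumPy/OpenCV). The horizontal flip and brightness shift stand in for one spatial-class and one color-transform-class augmentation; the function names and parameter values are illustrative assumptions, not the patent's code.

```python
def mean_normalize(img):
    """Centering by mean removal: subtract the global mean pixel value."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    return [[p - mean for p in row] for row in img]


def hflip(img):
    """A spatial-class augmentation: horizontal flip."""
    return [row[::-1] for row in img]


def adjust_brightness(img, delta):
    """A color-transform-class augmentation: additive brightness shift."""
    return [[p + delta for p in row] for row in img]
```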
Further, inputting the ulcer classification target image into the ulcer classification model and outputting the ulcer classification result specifically includes:
the ulcer classification model comprises a gastric ulcer AHS staging model and a Forrest classification model;
inputting the ulcer classification target image into the gastric ulcer AHS staging model to obtain a first classification result;
when the first classification result is stage A, inputting the ulcer classification target image into the Forrest classification model to obtain a second classification result;
if the first classification result is stage H or stage S, the ulcer classification target image need not be input into the Forrest classification model, because the probability of ulcer re-bleeding in the healing and scar stages is low;
outputting the first classification result and/or the second classification result as the ulcer classification result;
the gastric ulcer AHS staging model and the Forrest classification model both adopt MobileNetV3, a multilayer classification model based on deep learning;
both models are composed of a plurality of inverted residual blocks;
inputting the ulcer classification target image into the gastric ulcer AHS staging model or the Forrest classification model comprises:
refining features through a conv layer with an HSwish activation function;
inputting the result into N inverted residual blocks to extract the ulcer features and separate the ulcer from the background;
outputting the classification labels and scores of the gastric ulcer AHS staging model and the Forrest classification model through an average pooling layer and a fully connected layer;
the inverted residual block comprises a depthwise separable convolution, an SEblock and a 1*1 convolution connected in sequence;
the SEblock comprises a global average pooling layer, a 1*1 convolution layer, a ReLU activation layer, a 1*1 convolution layer and an HSwish activation layer connected in sequence;
the multilayer classification model of the gastric ulcer AHS staging model and the Forrest classification model comprises a shallow network and a deep network;
in the shallow network, both the AHS staging model and the Forrest classification model focus on low-level natural characteristics of the ulcer such as color change and texture;
in the deep network, the AHS staging model focuses on information such as the yellow/white coating area, layering, annular dyke, scar shape and mucosal folds of the ulcer, while the Forrest classification model highlights factors such as bleeding points, scabs, blood vessels and basal features of the ulcer.
In this embodiment, since Forrest classification is a way to assess the re-bleeding risk of a bleeding ulcer, and in AHS staging only the active stage (stage A) carries a meaningful chance of bleeding, the logic of the ulcer classification model is as follows: the peptic ulcer is first passed through the AHS staging model to determine whether it is in the active stage (stage A). Only stage-A ulcers are then input into the Forrest classification model to assess the probability of re-bleeding. If the ulcer is in stage A, the classification output includes both the AHS stage and the Forrest class; otherwise, only the AHS stage result is output.
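The cascade logic above can be sketched as follows. `ahs_model` and `forrest_model` are hypothetical callables returning a (label, score) pair; they stand in for the two MobileNetV3 classifiers described in the text and are not part of the patent's actual API.

```python
def classify_ulcer(image, ahs_model, forrest_model):
    """Run AHS staging first; only active-stage (A) ulcers are forwarded
    to Forrest classification, since H/S-stage ulcers rarely re-bleed."""
    ahs_label, ahs_score = ahs_model(image)
    result = {"ahs": (ahs_label, ahs_score)}
    if ahs_label == "A":  # active stage: assess the re-bleeding risk
        result["forrest"] = forrest_model(image)
    return result
```

With stub models, a stage-A input yields both results, while stage H or S yields only the AHS result, matching the decision rule described above.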
The AHS staging model and the Forrest classification model are implemented with a deep-learning classification model; the model currently used is MobileNetV3, whose main structure is composed of a plurality of inverted residual blocks (Inverted Residual). The network structure is shown in fig. 4;
first, features are refined through a conv layer with an HSwish activation function; second, the result is input into N inverted residual blocks to extract the ulcer features and separate them from the background; finally, the classification labels and scores are output through an average pooling layer and a fully connected layer, where N is 15. The specific operations are shown in table 1:
TABLE 1 Classification model layering table
Here s denotes the stride, HSwish and ReLU denote the different activation functions, and √ denotes whether the inverted residual block carries an SEblock. Table 1 details the convolutions more fully.
Specifically, in this embodiment, referring to fig. 5, the inverted residual block (Inverted Residual) is composed of a depthwise separable convolution, an SEblock and a 1*1 convolution. The SEblock appears only in the inverted residual blocks of specific layers, namely layers 5, 6, 7, 12, 13, 14, 15 and 16, where it is introduced to extract channel and spatial features; the SEblock is shown in fig. 6.
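The squeeze-and-excitation idea behind the SEblock can be illustrated with a simplified sketch on a channels x H x W nested-list feature map. This is a didactic reduction, not the block in fig. 6: the two 1*1 convolution layers described above are omitted (the pooled value is fed straight through the ReLU and HSwish activations as a per-channel gate), and all names here are assumptions for the example. The `hswish` helper follows the usual MobileNetV3 definition x * ReLU6(x + 3) / 6.

```python
def relu(x):
    return max(0.0, x)


def hswish(x):
    """h-swish activation used by MobileNetV3: x * ReLU6(x + 3) / 6."""
    return x * min(max(x + 3.0, 0.0), 6.0) / 6.0


def se_gate(features):
    """Squeeze: global average pool per channel. Excite (simplified): map
    the pooled value through ReLU then h-swish to get a channel weight,
    then rescale that channel's features by the weight."""
    gates = []
    for ch in features:
        pooled = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        gates.append(hswish(relu(pooled)))
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(features, gates)]
```

The point of the design is that each channel is reweighted by a statistic of its own global content, which is how the SEblock lets the network emphasize informative channels.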
As shown in fig. 7, in the shallow network both the AHS staging and Forrest classification models focus on low-level natural characteristics of the ulcer such as color change and texture. In the layer-16 network, the AHS staging model focuses on the boundary information of the ulcer. In the layer-17 network, the Forrest classification model highlights the scab area of the ulcer.
Through the above steps, the deep-learning-based peptic ulcer AHS staging and Forrest classification method provided by the invention divides the ulcer into its development stages and then, by judging the bleeding possibility of the ulcer, realizes Forrest assessment of the re-bleeding risk, thereby solving the problems of noise interference and hard-to-guarantee precision mentioned in the technical background.
According to a second embodiment of the present invention, referring to fig. 8, the present invention provides a peptic ulcer classification system based on AHS staging and Forrest classification, comprising:
the focus detection module is used for acquiring an upper digestive tract detection image, inputting the upper digestive tract detection image into an upper digestive tract disease detection model and obtaining an ulcer focus detection result;
the ulcer focus cropping module is used for performing bounding-box cropping on the ulcer focus detection result and judging whether the cropped ulcer focus detection result meets the gastric ulcer classification condition;
the ulcer classification preprocessing module is used for performing image scaling, normalization and data enhancement on the ulcer focus detection result, if the cropped ulcer focus detection result meets the gastric ulcer classification condition, to obtain an ulcer classification target image;
the ulcer classification output module is used for inputting the ulcer classification target image into the ulcer classification model and outputting an ulcer classification result.
Further, the focus detection module specifically includes:
acquiring an upper digestive tract detection image;
extracting features from the upper digestive tract detection image and outputting detection feature maps at a plurality of granularities;
integrating the detection feature maps of each granularity and outputting an integrated detection feature map;
performing feature statistics on the integrated detection feature map and establishing a feature set;
based on the feature set, applying dynamic convolution with a self-attention mechanism to the integrated feature map, and outputting an ulcer focus detection result consistent with the feature set.
Further, the ulcer focus cropping module specifically includes:
cropping the ulcer focus detection result with a bounding box, and applying a preset pixel-padding treatment to the bounding box to obtain the cropped ulcer focus detection result;
judging whether the shortest side of the cropped result reaches the threshold ulcer_flag; if so, the gastric ulcer classification condition is met.
Further, the ulcer classification pretreatment module specifically includes:
centering is achieved through mean removal, and the data enhancement applies both spatial-class and color-transform-class augmentations;
the spatial class comprises translation, flipping and rotation operations on the ulcer focus detection result;
the color-transform class comprises noise, brightness, contrast and blur operations.
Further, the ulcer classification output module specifically includes:
the ulcer classification model comprises a gastric ulcer AHS staging model and a Forrest classification model;
inputting the ulcer classification target image into the gastric ulcer AHS staging model to obtain a first classification result;
when the first classification result is stage A, inputting the ulcer classification target image into the Forrest classification model to obtain a second classification result;
if the first classification result is stage H or stage S, the ulcer classification target image need not be input into the Forrest classification model, because the probability of ulcer re-bleeding in the healing and scar stages is low;
outputting the first classification result and/or the second classification result as the ulcer classification result;
the gastric ulcer AHS staging model and the Forrest classification model both adopt MobileNetV3, a multilayer classification model based on deep learning;
both models are composed of a plurality of inverted residual blocks;
inputting the ulcer classification target image into the gastric ulcer AHS staging model or the Forrest classification model comprises:
refining features through a conv layer with an HSwish activation function;
inputting the result into N inverted residual blocks to extract the ulcer features and separate the ulcer from the background;
outputting the classification labels and scores of the gastric ulcer AHS staging model and the Forrest classification model through an average pooling layer and a fully connected layer;
the inverted residual block comprises a depthwise separable convolution, an SEblock and a 1*1 convolution connected in sequence;
the SEblock comprises a global average pooling layer, a 1*1 convolution layer, a ReLU activation layer, a 1*1 convolution layer and an HSwish activation layer connected in sequence;
the multilayer classification model of the gastric ulcer AHS staging model and the Forrest classification model comprises a shallow network and a deep network;
in the shallow network, both models focus on low-level natural characteristics of the ulcer such as color change and texture;
in the deep network, the AHS staging model focuses on information such as the yellow/white coating area, layering, annular dyke, scar shape and mucosal folds of the ulcer, while the Forrest classification model highlights factors such as bleeding points, scabs, blood vessels and basal features of the ulcer.
Those skilled in the art will appreciate that various modifications and improvements can be made to the disclosure. For example, the various devices or components described above may be implemented in hardware, or may be implemented in software, firmware, or a combination of some or all of the three.
A flowchart is used in this disclosure to describe the steps of a method according to an embodiment of the present disclosure. It should be understood that the steps need not be performed in the exact order shown; rather, the various steps may be processed in reverse order or simultaneously, and other operations may be added to these processes.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the methods described above may be implemented by a computer program to instruct related hardware, and the program may be stored in a computer readable storage medium, such as a read only memory, a magnetic disk, or an optical disk. Alternatively, all or part of the steps of the above embodiments may be implemented using one or more integrated circuits. Accordingly, each module/unit in the above embodiment may be implemented in the form of hardware, or may be implemented in the form of a software functional module. The present disclosure is not limited to any specific form of combination of hardware and software.
Unless defined otherwise, all terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present disclosure and is not to be construed as limiting thereof. Although a few exemplary embodiments of this disclosure have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this disclosure. Accordingly, all such modifications are intended to be included within the scope of this disclosure as defined in the claims. It is to be understood that the foregoing is illustrative of the present disclosure and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The disclosure is defined by the claims and their equivalents.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.
Claims (8)
1. A method for classifying peptic ulcers based on AHS classification and Forrest classification, comprising:
acquiring an upper digestive tract detection image, and inputting the upper digestive tract detection image into an upper digestive tract disease detection model to obtain an ulcer focus detection result;
performing bounding-box cropping on the ulcer focus detection result, and judging whether the cropped ulcer focus detection result meets the gastric ulcer classification condition;
if the cut ulcer focus detection result meets the gastric ulcer classification condition, performing image scaling, normalization and data enhancement on the ulcer focus detection result to obtain an ulcer classification target image;
inputting the ulcer classification target image into an ulcer classification model, and outputting an ulcer classification result;
inputting the ulcer classification target image into an ulcer classification model, and outputting an ulcer classification result, wherein the method specifically comprises the following steps of:
the ulcer classification model comprises a gastric ulcer AHS classification model and a Forrest classification model;
inputting the ulcer classification target image into the gastric ulcer AHS stage model to obtain a first classification result;
when the first classification result is the stage A, inputting the ulcer classification target image into the Forrest classification model to obtain a second classification result;
if the first classification result is H phase or S phase, the ulcer classification target image is not required to be input into the Forrest classification model because the ulcer re-bleeding probability in the healing phase and the scar phase is lower;
outputting the first classification result and/or the second classification result as an ulcer classification result;
the gastric ulcer AHS stage model and the Forrest stage model adopt MobileNet v3, a multilayer classification model based on deep learning;
the gastric ulcer AHS stage model and the Forrest stage model are both composed of a plurality of inverted residual blocks;
inputting the ulcer classification target image into the gastric ulcer AHS classification model or the Forrest classification model comprises:
refining features through a conv layer with an HSwish activation function;
inputting the ulcer classification target image into N inverted residual blocks, extracting the characteristics of the ulcer, and separating the ulcer from the background;
outputting the classification labels and scores of the gastric ulcer AHS classification model and the Forrest classification model through an average pooling layer and a fully connected layer;
the inverted residual block comprises a depthwise separable convolution, an SEblock and a 1*1 convolution which are connected in sequence;
the SEblock comprises a global average pooling layer, a 1*1 convolution layer, a ReLU activation function layer, a 1*1 convolution layer and an HSwish activation function layer which are sequentially connected;
the multilayer classification model of the gastric ulcer AHS classification model and the Forrest classification model comprises a shallow network and a deep network;
in the shallow network, the gastric ulcer AHS stage model and the Forrest stage model both focus on low-level natural characteristics of the ulcer such as color change and texture;
in the deep network, the gastric ulcer AHS staging model focuses on the yellow/white coating area, layering, dyke, scar shape, and mucosal fold information of the ulcer, and the Forrest staging model highlights bleeding points, scabs, blood vessels, and basal characteristic factors of the ulcer.
2. The method for classifying peptic ulcer based on AHS stage and Forrest stage according to claim 1, wherein said obtaining an upper digestive tract detection image, inputting said upper digestive tract detection image into an upper digestive tract disease detection model, obtaining an ulcer focus detection result, specifically comprises:
acquiring an upper digestive tract detection image;
extracting features of the upper digestive tract detection image, and outputting detection feature images with a plurality of granularities;
integrating the detection feature graphs of each granularity and outputting the integrated detection feature graphs;
carrying out feature statistics on the integrated detection feature images, and establishing a feature set;
based on the feature set, applying dynamic convolution with a self-attention mechanism to the integrated feature map, and outputting an ulcer focus detection result consistent with the feature set.
3. The method for classifying peptic ulcer based on AHS stage and Forrest stage according to claim 1, wherein said performing a bounding box clipping treatment on the ulcer focus detection result, judging whether the clipped ulcer focus detection result meets the gastric ulcer classification condition, specifically comprises:
cutting out the ulcer focus detection result through a boundary frame, and performing preset pixel filling treatment on the boundary frame to obtain the ulcer focus detection cutting result;
judging whether the shortest length of the ulcer focus detection cutting result reaches a threshold value ulcer_flag, and if so, meeting the gastric ulcer classification condition.
4. The method for classifying peptic ulcer based on AHS classification and Forrest classification according to claim 1, wherein if the cut ulcer focus detection result meets the gastric ulcer classification condition, performing image scaling, normalization and data enhancement on the ulcer focus detection result to obtain an ulcer classification target image, specifically comprising:
centering is achieved through mean removal, and the data enhancement applies both spatial-class and color-transform-class augmentations;
the spatial class comprises translation, flipping and rotation operations on the ulcer focus detection result;
the color-transform class comprises noise, brightness, contrast and blur operations.
5. A peptic ulcer classification system based on AHS staging and Forrest staging, comprising:
the focus detection module is used for acquiring an upper digestive tract detection image, inputting the upper digestive tract detection image into an upper digestive tract disease detection model and obtaining an ulcer focus detection result;
the ulcer focus cropping module is used for performing bounding-box cropping on the ulcer focus detection result and judging whether the cropped ulcer focus detection result meets the gastric ulcer classification condition;
the ulcer classification preprocessing module is used for performing image scaling, normalization and data enhancement on the ulcer focus detection result if the cut ulcer focus detection result meets the gastric ulcer classification condition to obtain an ulcer classification target image;
the ulcer classification output module inputs the ulcer classification target image into an ulcer classification model and outputs an ulcer classification result;
the ulcer classification output module specifically comprises:
the ulcer classification model comprises a gastric ulcer AHS classification model and a Forrest classification model;
inputting the ulcer classification target image into the gastric ulcer AHS stage model to obtain a first classification result;
when the first classification result is the stage A, inputting the ulcer classification target image into the Forrest classification model to obtain a second classification result;
if the first classification result is H phase or S phase, the ulcer classification target image is not required to be input into the Forrest classification model because the ulcer re-bleeding probability in the healing phase and the scar phase is lower;
outputting the first classification result and/or the second classification result as an ulcer classification result;
the gastric ulcer AHS stage model and the Forrest stage model adopt MobileNet v3, a multilayer classification model based on deep learning;
the gastric ulcer AHS stage model and the Forrest stage model are both composed of a plurality of inverted residual blocks;
inputting the ulcer classification target image into the gastric ulcer AHS classification model or the Forrest classification model comprises:
refining features through a conv layer with an HSwish activation function;
inputting the ulcer classification target image into N inverted residual blocks, extracting the characteristics of the ulcer, and separating the ulcer from the background;
outputting the classification labels and scores of the gastric ulcer AHS classification model and the Forrest classification model through an average pooling layer and a fully connected layer;
the inverted residual block comprises a depthwise separable convolution, an SEblock and a 1*1 convolution which are connected in sequence;
the SEblock comprises a global average pooling layer, a 1*1 convolution layer, a ReLU activation function layer, a 1*1 convolution layer and an HSwish activation function layer which are sequentially connected;
the multilayer classification model of the gastric ulcer AHS classification model and the Forrest classification model comprises a shallow network and a deep network;
in the shallow network, the gastric ulcer AHS stage model and the Forrest stage model both focus on low-level natural characteristics of the ulcer such as color change and texture;
in the deep network, the gastric ulcer AHS staging model focuses on the yellow/white coating area, layering, dyke, scar shape, and mucosal fold information of the ulcer, and the Forrest staging model highlights bleeding points, scabs, blood vessels, and basal characteristic factors of the ulcer.
6. The peptic ulcer classification system based on AHS classification and Forrest classification according to claim 5, wherein the focus detection module specifically comprises:
acquiring an upper digestive tract detection image;
extracting features of the upper digestive tract detection image, and outputting detection feature images with a plurality of granularities;
integrating the detection feature graphs of each granularity and outputting the integrated detection feature graphs;
carrying out feature statistics on the integrated detection feature images, and establishing a feature set;
based on the feature set, applying dynamic convolution with a self-attention mechanism to the integrated feature map, and outputting an ulcer focus detection result consistent with the feature set.
7. The peptic ulcer classification system based on AHS stage and Forrest stage according to claim 6, wherein the ulcer focus clipping module specifically comprises:
cutting out the ulcer focus detection result through a boundary frame, and performing preset pixel filling treatment on the boundary frame to obtain the ulcer focus detection cutting result;
judging whether the shortest length of the ulcer focus detection cutting result reaches a threshold value ulcer_flag, and if so, meeting the gastric ulcer classification condition.
8. The peptic ulcer classification system based on AHS classification and Forrest classification according to claim 7, wherein the ulcer classification preprocessing module specifically comprises:
centering is achieved through mean removal, and the data enhancement applies both spatial-class and color-transform-class augmentations;
the spatial class comprises translation, flipping and rotation operations on the ulcer focus detection result;
the color-transform class comprises noise, brightness, contrast and blur operations.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311394585.5A CN117132840B (en) | 2023-10-26 | 2023-10-26 | Peptic ulcer classification method and system based on AHS classification and Forrest classification |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117132840A CN117132840A (en) | 2023-11-28 |
CN117132840B true CN117132840B (en) | 2024-01-26 |
CN113705595A (en) | Method, device and storage medium for predicting degree of abnormal cell metastasis | |
CN117237371A (en) | Colon histological image gland segmentation method based on example perception diffusion model | |
CN113920099B (en) | Polyp segmentation method based on non-local information extraction and related components | |
CN116310535A (en) | Multi-scale multi-region thyroid nodule prediction method | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||