CN114693598A - Capsule endoscope gastrointestinal tract organ image automatic identification method - Google Patents

Capsule endoscope gastrointestinal tract organ image automatic identification method

Info

Publication number
CN114693598A
CN114693598A (application CN202210157114.1A)
Authority
CN
China
Prior art keywords
image
capsule endoscope
frame
gastrointestinal tract
small intestine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210157114.1A
Other languages
Chinese (zh)
Inventor
杨衍荣
朱芳芳
马悦
徐智星
郭桂宝
李胜
何熊熊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Ada Technology Co ltd
Original Assignee
Zhejiang Ada Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Ada Technology Co ltd filed Critical Zhejiang Ada Technology Co ltd
Priority to CN202210157114.1A priority Critical patent/CN114693598A/en
Publication of CN114693598A publication Critical patent/CN114693598A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/00002 Operational features of endoscopes
    • A61B 1/00004 Operational features of endoscopes characterised by electronic signal processing
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/00002 Operational features of endoscopes
    • A61B 1/00004 Operational features of endoscopes characterised by electronic signal processing
    • A61B 1/00009 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B 1/041 Capsule endoscopes for imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10068 Endoscopic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30028 Colon; Small intestine
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30092 Stomach; Gastric

Landscapes

  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Surgery (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Radiology & Medical Imaging (AREA)
  • General Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Endoscopes (AREA)
  • Image Processing (AREA)

Abstract

A capsule endoscope gastrointestinal tract organ image automatic identification method comprises the following steps: 1) standardizing and labeling a gastrointestinal tract image sequence acquired by a capsule endoscope; 2) extracting key frames using hash similarity; 3) determining the times at which the capsule endoscope enters the esophagus and the cardia using a four-class model combining multi-head attention with a separable convolution framework; 4) determining the times at which the capsule endoscope enters the small intestine and the large intestine using a multi-head-attention deep-learning binary classification network fused with edge information. The invention provides an automatic identification method for capsule endoscope gastrointestinal tract organ images that improves the accuracy of gastrointestinal tract segmentation of capsule endoscopy images, helps shorten clinicians' image-reading time, and helps improve diagnostic efficiency.

Description

Capsule endoscope gastrointestinal tract organ image automatic identification method
Technical Field
The invention belongs to the field of image processing, and particularly relates to an automatic capsule endoscope gastrointestinal tract organ image identification method.
Background
The wireless capsule endoscope is a gastrointestinal tract examination technology developed at the beginning of the 21st century. It overcomes the limitations of traditional endoscopy in the small intestine, and the examination process is convenient, painless, and safe. The examinee swallows the capsule endoscope, which travels under gravity and gastrointestinal peristalsis while capturing color images of the entire gastrointestinal tract, passing in temporal order through the exterior of the body, the esophagus, the stomach, the small intestine, and the large intestine. When reviewing a capsule endoscopy video, the doctor first locates the junctions between organs, for example: the junction of the esophagus and the stomach is the cardia, and the junction of the stomach and the small intestine is the pylorus. The doctor then determines the times at which the capsule endoscope enters the different organs. Finally, a different viewing mode is adopted for each organ, and a diagnosis is made by combining an observed lesion with its organ location. Because a capsule endoscopy recording lasts about 8-13 hours and examinees' conditions vary widely, manually determining the times at which the capsule endoscope enters the different organs takes a doctor a long time.
In recent years, deep learning has made great breakthroughs in the field of image recognition, and deep-learning-based gastrointestinal tract segment recognition methods exist. They generally classify each organ (esophagus, stomach, small intestine, and large intestine) directly, but the accuracy of direct classification is not high. Therefore, given the clinical requirements of capsule endoscopy, accurate automatic identification of gastrointestinal tract organs in capsule endoscopy images is a problem that urgently needs to be solved.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an automatic capsule endoscope gastrointestinal tract organ image identification method, which improves the accuracy of gastrointestinal tract segmentation of capsule endoscopy images, helps shorten clinicians' image-reading time, and helps improve diagnostic efficiency.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a capsule endoscope gastrointestinal tract organ image automatic identification method comprises the following steps:
1) standardizing and marking a gastrointestinal tract image sequence acquired by a capsule endoscope;
2) extracting the key frame by utilizing the Hash similarity;
3) determining the time when the capsule endoscope enters the esophagus and the cardia by adopting a four-classification model combining multi-head attention and a separable convolution framework;
4) and determining the time when the capsule endoscope enters the small intestine and the large intestine by adopting a multi-head attention deep learning two-classification network fused with edge information.
Further, in the step 1), the time information of each frame is calculated from the frame rate of the video captured by the capsule endoscope, each frame is named in the format "time_frame value", and this standard labeling allows the position of an image in the video to be located quickly.
Still further, the process of step 2) is as follows:
2.1) initializing a key frame, and setting a first frame of the acquired capsule endoscopy image sequence as a current key frame;
2.2) calculating a similarity value of the key frame and the next frame image by using a difference hash algorithm, and setting a threshold T1; if the similarity value is smaller than the threshold value T1, setting the next frame image as the current key frame; otherwise, judging that the next frame of image is a redundant image, deleting the image from the image sequence, and keeping the current key frame unchanged;
2.3) judging whether the image reading is finished or not, if so, terminating the comparison, and otherwise, repeating the step 2.2);
2.4) the key frame set is a preprocessed image.
Further, the process of step 3) is as follows:
3.1) training a four-class network on in-vitro, cardia, pylorus, and other capsule endoscopy images; the training pictures undergo data augmentation, including flipping and rotating the images in the horizontal and vertical directions, and then enter a multi-head self-attention module, which improves the feature expression of key regions by generating a weight mask for each position and producing a weighted output, thereby enhancing the specific target regions of interest and weakening irrelevant background regions; depthwise separable convolution, comprising channel-by-channel convolution and point-by-point convolution, is then adopted to reduce the number of parameters and the computational cost;
3.2) in the capsule endoscopy image sequence, the images pass through the in-vitro region and then the cardia, and the cardia is reached quickly; a threshold T2 is therefore set, and in-vitro and cardia images are classified only within the first T2 frames, which improves classification accuracy;
3.3) in the classification result, the last frame of the in vitro image is the starting position of the organ esophagus, and the time when the capsule endoscope enters the esophagus and the cardia can be determined by combining the classification result of the cardia image;
3.4) taking the last pyloric image as the junction of the stomach and the small intestine.
The process of the step 4) is as follows:
4.1) the difference between the small intestine and the large intestine is obvious; in the network structure, the edge information of the image is fused, and multi-head attention enhances the network's ability to recognize small-intestine villi;
4.2) classifying images behind the pylorus at the junction of the stomach and the small intestine in the capsule endoscope image sequence into the large intestine and the small intestine, setting a threshold T3, and stopping classification when the number of continuously classified large intestine pictures is more than T3; and (3) continuously classifying the first frame of the large intestine image as a large intestine and small intestine organ demarcation point, and determining the moment when the capsule endoscope enters the small intestine and the large intestine by combining the determination of the pylorus in the step 3).
The invention has the following beneficial effects: the accuracy of gastrointestinal tract segmentation of capsule endoscopy images is improved, the time clinicians spend reading capsule endoscopy images is reduced, and diagnostic efficiency is improved.
Drawings
Fig. 1 is a schematic view of a sequence of gastrointestinal tract images taken by a capsule endoscope.
Fig. 2 is a flow chart of a capsule endoscope gastrointestinal tract organ image automatic identification method.
FIG. 3 is a flowchart of a key frame image extraction process of an endoscope.
Fig. 4 is a schematic diagram of a deep learning four-classification network structure.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1 to 4, a method for automatically identifying images of gastrointestinal tract organs by using a capsule endoscope comprises the following steps:
1) standard marking capsule endoscope collected gastrointestinal tract image sequence
Fig. 1 is a schematic diagram of the sequence of gastrointestinal tract organ images taken by the capsule endoscope. The examinee swallows a capsule carrying a camera, which automatically photographs the entire human gastrointestinal tract as it moves under gravity and gastrointestinal peristalsis, passing in order through the exterior of the body, the esophagus, the stomach, the small intestine, and the large intestine. The capsule endoscope typically shoots at 5 fps, runs in the body for about 8-13 hours, and captures tens of thousands of images. When the video is converted into images, the shooting time of each endoscope frame is calculated and the image is named in a time-frame-number format, so that a doctor can quickly locate the image in the video.
shooting time = frame value / fps
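The naming rule above can be sketched as follows. This is an illustrative sketch, not the patent's code: the exact string layout of the "time_frame value" name is not specified in the document, so the HH-MM-SS form and the function name are assumptions; the 5 fps default comes from the description.

```python
def frame_name(frame_index: int, fps: float = 5.0) -> str:
    """Name a frame as 'time_frame value' (hypothetical HH-MM-SS_frame layout).

    shooting time = frame value / fps, per the description.
    """
    seconds = frame_index / fps
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}-{m:02d}-{s:02d}_{frame_index}"

# frame 90000 at 5 fps corresponds to the 5-hour mark of the recording
print(frame_name(90000))
```

A doctor (or a script) can then recover the shooting time of any image directly from its filename without reopening the video.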
2) Extracting the key frame by utilizing the Hash similarity;
the key frame extraction aims to remove a large number of similar capsule endoscopy images and reduce the network classification burden, and as shown in fig. 3, the specific steps are as follows:
2.1) initializing a key frame, and setting a first frame of the acquired capsule endoscopy image sequence as a current key frame;
2.2) calculate the similarity value between the key frame and the next frame using a difference hash algorithm and set a threshold T1; if the similarity value is less than T1, the next frame becomes the current key frame; otherwise the next frame is judged redundant, deleted from the image sequence, and the current key frame is kept unchanged. The difference hash algorithm first shrinks the picture to 9x8 (72 pixels in total), discarding picture differences caused by size and aspect ratio; the shrunken picture is then converted to a 256-level grayscale image; next, adjacent elements in each row of the grayscale matrix are subtracted (left element minus right element), yielding 8 difference values per row and 64 in total; each difference is recorded as 1 if it is positive or 0, and as 0 if it is negative; finally, the 64 results are combined into a hash value representing the image. The similarity of two pictures is judged by the distance between their hash values, generally the Hamming distance, i.e., a bit-by-bit comparison of whether the two hash values agree;
2.3) judging whether the image reading is finished or not, if so, terminating the comparison, and otherwise, repeating the step 2.2);
2.4) the key frame set is a preprocessed image.
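Steps 2.1)-2.4), together with the difference hash of step 2.2), can be sketched as follows. This is an illustrative NumPy implementation, not the patent's code: the resize is a crude nearest-neighbour sampling rather than a proper rescale, similarity is taken as the number of matching hash bits out of 64, and the default T1 is an arbitrary assumption.

```python
import numpy as np

def dhash(gray: np.ndarray) -> np.ndarray:
    """Difference hash: shrink to 8 rows x 9 columns (72 pixels), then
    record 1 where a pixel is >= its right-hand neighbour, else 0 (64 bits)."""
    h, w = gray.shape
    rows = np.arange(8) * h // 8          # crude nearest-neighbour resize
    cols = np.arange(9) * w // 9
    small = gray[np.ix_(rows, cols)].astype(np.int64)
    diff = small[:, :-1] - small[:, 1:]   # left element minus right element
    return (diff >= 0).astype(np.uint8).ravel()

def similarity(h1: np.ndarray, h2: np.ndarray) -> int:
    """Matching bits out of 64, i.e. 64 minus the Hamming distance."""
    return 64 - int(np.count_nonzero(h1 != h2))

def extract_keyframes(frames, t1: int = 54):
    """Steps 2.1)-2.4): a frame becomes the new key frame only when its
    similarity to the current key frame drops below T1; otherwise it is
    treated as redundant and dropped."""
    keyframes = [frames[0]]               # 2.1) first frame is the key frame
    key_hash = dhash(frames[0])
    for frame in frames[1:]:
        h = dhash(frame)
        if similarity(key_hash, h) < t1:  # dissimilar enough -> new key frame
            keyframes.append(frame)
            key_hash = h
        # else: redundant image, removed from the sequence
    return keyframes                      # 2.4) key frame set
```

With a near-duplicate frame and a clearly different frame, only the different one survives as a second key frame, which is exactly the redundancy removal the step describes.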
3) Determining the time when the capsule endoscope enters the esophagus and the cardia by adopting a four-classification model combining multi-head attention and a separable convolution framework;
in vitro, cardia and pyloric capsule endoscopic images all have significant features: the in-vitro image mainly comprises the face information of an examiner and has obvious difference with the gastrointestinal tract image; the cardia image is characterized in that the wall of the esophagus around contracts to the center, the periphery of the image is pink, and the center of the image is white; when no object passes through, the periphery of the pylorus image shows that the stomach wall contracts to the center, when an object passes through, the stomach wall expands, and the center of the image is in a circular hole shape.
According to the differences among in-vitro, cardia, pylorus, and other capsule endoscopy images, the invention trains the four-class network whose structure is shown in fig. 4. The training set comprises 4 classes: in-vitro, cardia, and pylorus capsule endoscopy pictures, with all remaining capsule endoscopy pictures as one class. First, data augmentation, including flipping and rotating the images in the horizontal and vertical directions, is applied to the capsule endoscopy training set to improve the model's classification precision and generalization ability. The augmented training image then passes through two layers of 3x3 convolution, and the resulting feature map is fed into a Multi-Head Self-Attention (MHSA) module. The feature map produced by the MHSA undergoes an Inverted Residual Bottleneck (IRB) operation, in which the 3x3 convolution is a depthwise separable convolution and the activation function is SiLU; ×Li indicates that the MHSA + IRB module is repeated Li times. Finally, the classification result is obtained through global average pooling and a fully connected layer.
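The multi-head self-attention step can be illustrated with a minimal NumPy sketch operating on a flattened feature map. This shows only the attention arithmetic: the feature-map size, channel count, head count, and random weights are arbitrary assumptions, and the surrounding convolutions, IRB, and SiLU of the patent's network are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, wq, wk, wv, n_heads):
    """x: (positions, channels) — a feature map flattened over space.
    Each head attends over all positions, re-weighting regions of interest."""
    p, c = x.shape
    d = c // n_heads
    q = (x @ wq).reshape(p, n_heads, d).transpose(1, 0, 2)  # (heads, p, d)
    k = (x @ wk).reshape(p, n_heads, d).transpose(1, 0, 2)
    v = (x @ wv).reshape(p, n_heads, d).transpose(1, 0, 2)
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d))   # (heads, p, p)
    out = (attn @ v).transpose(1, 0, 2).reshape(p, c)       # merge heads
    return out

rng = np.random.default_rng(0)
feat = rng.standard_normal((49, 32))            # e.g. a 7x7 map, 32 channels
wq, wk, wv = (rng.standard_normal((32, 32)) * 0.1 for _ in range(3))
out = multi_head_self_attention(feat, wq, wk, wv, n_heads=4)
print(out.shape)
```

The output keeps the input shape, so the module can be dropped between convolutional stages, which is how the description positions it before the IRB operation.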
The classification performance metrics are accuracy, precision, and recall. Accuracy is the percentage of correctly classified samples among all samples; precision is defined with respect to the classification result and is the probability that a sample classified as positive is actually positive; recall is defined with respect to the original samples and is the probability that an actual positive sample is classified as positive;
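The three metrics above can be computed directly from per-frame labels; the sketch below is illustrative (class names are made up) and treats one chosen class as "positive" for precision and recall.

```python
def classification_metrics(y_true, y_pred, positive):
    """Accuracy over all samples; precision/recall for the given positive class."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    accuracy = sum(1 for t, p in pairs if t == p) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0   # of those called positive
    recall = tp / (tp + fn) if tp + fn else 0.0      # of those truly positive
    return accuracy, precision, recall

acc, prec, rec = classification_metrics(
    ["cardia", "other", "cardia", "other"],
    ["cardia", "cardia", "other", "other"],
    positive="cardia",
)
print(acc, prec, rec)
```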
in the classification process, considering that the capsule endoscopy image sequence sequentially passes through the body and the cardia and the time consumption for reaching the cardia is short, a threshold value T2 is set, the body and the cardia images are classified before a T2 frame, and the classification accuracy is improved. In clinic, the operation conditions of the capsule endoscope in different examinees are greatly different. After entering the stomach, the capsule endoscope can pass through the pylorus and enter the small intestine at one time; the capsule endoscope can be repeatedly shot for a long time near the pylorus under the influence of the peristalsis of the stomach, and the number of the shot pylorus images is large and discontinuous. For the latter, multiple pyloric images are classified, and we only focus on the last pyloric classified image in consideration of the practical significance of the last pyloric image in the clinic.
In the classification result, the last in-vitro image marks the starting position of the esophagus, and combining this with the cardia classification results determines the times at which the capsule endoscope enters the esophagus and the cardia.
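The boundary rules of step 3) can be sketched as a scan over the per-keyframe class labels. This is an illustrative reading of the text: the class names are hypothetical, and taking the frame *after* the last in-vitro image as the esophagus start is an assumption about how "the last in-vitro image marks the starting position" is applied.

```python
def organ_entry_points(labels):
    """labels: per-keyframe class names from the four-class model
    ('in_vitro', 'cardia', 'pylorus', 'other' — hypothetical names).

    Returns (esophagus_start, cardia_index, last_pylorus_index)."""
    last_in_vitro = max(i for i, l in enumerate(labels) if l == "in_vitro")
    esophagus_start = last_in_vitro + 1            # first frame inside the body
    cardia = next(i for i, l in enumerate(labels)
                  if l == "cardia" and i > last_in_vitro)
    # only the last pylorus frame matters clinically (stomach/small-intestine junction)
    last_pylorus = max(i for i, l in enumerate(labels) if l == "pylorus")
    return esophagus_start, cardia, last_pylorus
```

Combined with the "time_frame value" names of step 1), these indices translate directly into the entry times the doctor needs.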
4) And determining the time when the capsule endoscope enters the small intestine and the large intestine by adopting a multi-head attention deep learning two-classification network fused with edge information.
During shooting, the capsule endoscope passes in order through the exterior of the body, the esophagus, the stomach, the small intestine, and the large intestine. Step 3) locates the pylorus, and the images after the pylorus in the capsule endoscopy image sequence (comprising the small intestine and the large intestine) are then classified. A binary classification network is trained on the salient differences between the small intestine and the large intestine, for example: the surface of the small-intestine wall carries abundant villi, while the surface of the large intestine has no villi and appears smoother. The network comprises two branches. In branch one, the augmented training image data is fed directly into the network structure of fig. 4, yielding feature map 1. Considering that small-intestine endoscopy images contain richer edge information than large-intestine images, in branch two the augmented training image data undergoes edge detection, and the resulting edge image is fed into the network of fig. 4, yielding feature map 2. Feature maps 1 and 2 are combined into feature map 3, and the classification result is finally obtained through global average pooling and a fully connected layer.
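The edge-detection branch can be illustrated as follows. The patent does not name a specific edge detector or fusion rule, so Sobel gradients and channel concatenation are assumptions chosen for the sketch.

```python
import numpy as np

def sobel_edges(gray: np.ndarray) -> np.ndarray:
    """Edge map for branch two; Sobel is one common choice (assumed here)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    padded = np.pad(gray.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):                      # correlate with the 3x3 kernels
        for j in range(3):
            win = padded[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)                 # gradient magnitude

def fuse(feat1: np.ndarray, feat2: np.ndarray) -> np.ndarray:
    """Combine the two branch feature maps (channel concatenation assumed)."""
    return np.concatenate([feat1, feat2], axis=-1)

step = np.zeros((8, 8))
step[:, 4:] = 255.0                          # vertical intensity step
print(sobel_edges(step).argmax(axis=1)[0])   # strongest response at the step
```

Villi-rich small-intestine frames produce dense, high-magnitude edge maps, while smooth large-intestine frames produce sparse ones, which is exactly the cue the second branch feeds to the classifier.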
During classification, to reduce the computational load and improve classification accuracy, a threshold T3 is set, and classification stops once the number of consecutive pictures classified as large intestine exceeds T3. The first frame of this consecutive run of large-intestine images is the boundary point between the large intestine and the small intestine, from which the times at which the capsule endoscope enters the small intestine and the large intestine can be determined.
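The T3 early-stopping rule can be sketched as follows; `classify` stands in for the trained binary network, and the names and default threshold are illustrative.

```python
def small_large_boundary(frame_iter, classify, t3=50):
    """Scan post-pylorus frames in order; stop once more than t3 consecutive
    frames are classified as large intestine. The boundary is the first
    frame of that run. Returns None if no large-intestine frame is seen."""
    run_start, run_len = None, 0
    for idx, frame in enumerate(frame_iter):
        if classify(frame) == "large_intestine":
            if run_len == 0:
                run_start = idx          # candidate boundary frame
            run_len += 1
            if run_len > t3:             # confident: stop classifying early
                return run_start
        else:
            run_len = 0                  # run broken; candidate discarded
    return run_start                     # best effort if the sequence ends first
```

Requiring a run longer than T3 makes a single misclassified small-intestine frame unable to trigger the boundary, which is the accuracy benefit the description claims.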
In order to implement the method of the embodiment, an automatic capsule endoscope gastrointestinal tract organ image recognition device is designed, which comprises:
a memory for storing a computer program;
a processor for implementing the steps of the capsule endoscope gastrointestinal tract organ image automatic identification method when the computer program is executed.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be substantially or partially embodied in the form of a software product, which is stored in a computer readable memory (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling an intelligent capsule endoscopic image gastrointestinal tract organ image automatic identification device (which may be a mobile phone, a computer, a capsule endoscopic image gastrointestinal tract organ image automatic identification device, etc.) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (5)

1. A capsule endoscope gastrointestinal tract organ image automatic identification method is characterized by comprising the following steps:
1) standardizing and marking a gastrointestinal tract image sequence acquired by a capsule endoscope;
2) extracting the key frame by utilizing the Hash similarity;
3) determining the time when the capsule endoscope enters the esophagus and the cardia by adopting a four-classification model combining multi-head attention and a separable convolution framework;
4) and determining the time when the capsule endoscope enters the small intestine and the large intestine by adopting a multi-head attention deep learning two-classification network fused with edge information.
2. The method according to claim 1, wherein in step 1), the time information of each frame of image is calculated according to the frame rate of the video captured by the capsule endoscope, each frame of image is named in "time _ frame value" format, and the position of the image in the video is rapidly located by a standard labeling method.
3. The method for automatically recognizing the image of the gastrointestinal tract organ with the capsule endoscope according to claim 1 or 2, wherein the process of the step 2) is as follows:
2.1) initializing a key frame, and setting a first frame of the acquired capsule endoscopy image sequence as a current key frame;
2.2) calculating a similarity value of the key frame and the next frame image by using a difference hash algorithm, and setting a threshold T1; if the similarity value is smaller than the threshold value T1, setting the next frame image as the current key frame; otherwise, judging that the next frame of image is a redundant image, deleting the image from the image sequence, and keeping the current key frame unchanged;
2.3) judging whether the image reading is finished or not, if so, terminating the comparison, and otherwise, repeating 2.2);
2.4) the key frame set is a preprocessed image.
4. The method for automatically recognizing the image of the gastrointestinal tract organ with the capsule endoscope according to claim 1 or 2, wherein the process of the step 3) is as follows:
3.1) training a four-class network on in-vitro, cardia, pylorus, and other capsule endoscopy images; the training pictures undergo data augmentation, including flipping and rotating the images in the horizontal and vertical directions, and then enter a multi-head self-attention module, which improves the feature expression of key regions by generating a weight mask for each position and producing a weighted output, thereby enhancing the specific target regions of interest and weakening irrelevant background regions; depthwise separable convolution, comprising channel-by-channel convolution and point-by-point convolution, is then adopted to reduce the number of parameters and the computational cost;
3.2) in the capsule endoscopy image sequence, the images sequentially pass through the extracorporeal region and the cardia, the time for reaching the cardia is short, a threshold value T2 is set, the extracorporeal and cardia images are classified before the T2 frame, and the classification accuracy is improved;
3.3) in the classification result, the last frame of the in vitro image is the starting position of the organ esophagus, and the time when the capsule endoscope enters the esophagus and the cardia can be determined by combining the classification result of the cardia image;
3.4) taking the last pyloric image as the junction of the stomach and the small intestine.
5. The method for automatically recognizing the images of gastrointestinal tract organs by using the capsule endoscope as claimed in claim 4, wherein the process of the step 4) is as follows:
4.1) the difference between the small intestine and the large intestine is obvious; in the network structure, the edge information of the image is fused, and multi-head attention enhances the network's ability to recognize small-intestine villi;
4.2) classifying images behind the pylorus at the junction of the stomach and the small intestine in the capsule endoscope image sequence into the large intestine and the small intestine, setting a threshold T3, and stopping classification when the number of continuously classified large intestine pictures is more than T3; and (3) continuously classifying the first frame of the large intestine image as a large intestine and small intestine organ demarcation point, and determining the moment when the capsule endoscope enters the small intestine and the large intestine by combining the determination of the pylorus in the step 3).
CN202210157114.1A 2022-02-21 2022-02-21 Capsule endoscope gastrointestinal tract organ image automatic identification method Pending CN114693598A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210157114.1A CN114693598A (en) 2022-02-21 2022-02-21 Capsule endoscope gastrointestinal tract organ image automatic identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210157114.1A CN114693598A (en) 2022-02-21 2022-02-21 Capsule endoscope gastrointestinal tract organ image automatic identification method

Publications (1)

Publication Number Publication Date
CN114693598A true CN114693598A (en) 2022-07-01

Family

ID=82137997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210157114.1A Pending CN114693598A (en) 2022-02-21 2022-02-21 Capsule endoscope gastrointestinal tract organ image automatic identification method

Country Status (1)

Country Link
CN (1) CN114693598A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116229095A (en) * 2022-12-30 2023-06-06 北京百度网讯科技有限公司 Model training method, visual task processing method, device and equipment
CN116758058A (en) * 2023-08-10 2023-09-15 泰安市中心医院(青岛大学附属泰安市中心医院、泰山医养中心) Data processing method, device, computer and storage medium
CN116758058B (en) * 2023-08-10 2023-11-03 泰安市中心医院(青岛大学附属泰安市中心医院、泰山医养中心) Data processing method, device, computer and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination