CN112022065A - Method and system for quickly positioning time point of capsule entering duodenum


Info

Publication number
CN112022065A
Authority
CN
China
Prior art keywords
duodenum
capsule
images
image
stomach
Prior art date
2020-09-24
Legal status
Pending
Application number
CN202011016314.2A
Other languages
Chinese (zh)
Inventor
万思琦
杨国强
喻雷
刘帅成
甘涛
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date: 2020-09-24
Filing date: 2020-09-24
Publication date: 2020-12-04
Application filed by University of Electronic Science and Technology of China
Priority to CN202011016314.2A
Publication of CN112022065A


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B 1/041: Capsule endoscopes for imaging
    • A61B 1/00002: Operational features of endoscopes
    • A61B 1/00004: Operational features of endoscopes characterised by electronic signal processing
    • A61B 1/00009: Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B 1/273: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the upper alimentary canal, e.g. oesophagoscopes, gastroscopes
    • A61B 1/2736: Gastroscopes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Theoretical Computer Science (AREA)
  • Pathology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Optics & Photonics (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Veterinary Medicine (AREA)
  • Radiology & Medical Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Gastroenterology & Hepatology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Signal Processing (AREA)
  • Endoscopes (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a method and a system for quickly positioning the time point at which a capsule enters the duodenum. The method comprises the following steps: S1, acquiring an initial image set; S2, removing repeated images to obtain a preprocessed image set; S3, acquiring a training set; S4, training a LeNet model; S5, classifying the images taken by the target capsule; S6, starting from the first image classified as located in the duodenum, judging whether N consecutive subsequent images are classified as located in the stomach; if so, entering step S7, and if not, entering step S8; S7, discarding the images classified as located in the duodenum that precede the N or more consecutive images classified as located in the stomach, and returning to step S6; and S8, taking the shooting time of the first image classified as located in the duodenum as the time point at which the target capsule enters the duodenum, completing the positioning. The invention solves the problem that manual review of the images is time-consuming and labor-intensive.

Description

Method and system for quickly positioning time point of capsule entering duodenum
Technical Field
The invention relates to the field of computer vision, in particular to a method and a system for quickly positioning the time point at which a capsule enters the duodenum.
Background
Because it is painless and non-invasive, capsule endoscopy is currently one of the best clinical diagnostic tools for examining diseases of the small intestine. One of its disadvantages, however, is that after the capsule is swallowed, its movement through the digestive tract depends entirely on gastrointestinal peristalsis. If gastric peristalsis and emptying are slow, the capsule may be retained in the stomach for a long time and consume too much of its battery energy, so that after it enters the small intestine the remaining energy is insufficient to complete the examination of the whole small intestine. At present, however, there is no good solution for determining the time point at which the capsule enters the duodenum.
During a capsule endoscopy, the capsule captures tens of thousands of images at a rate of 2 frames per second (OMOM capsule endoscope, Jinshan, China), and generally hundreds of images are obtained before the capsule enters the descending duodenum. In the existing approach to determining when the capsule first reaches the duodenum, a doctor views and manually judges the images captured by the capsule; because the number of images is large, this is time-consuming and labor-intensive.
Disclosure of Invention
Aiming at the above defects in the prior art, the method and system for quickly positioning the time point at which a capsule enters the duodenum provided by the invention solve the problem that determining this time point by manual review is time-consuming and labor-intensive.
In order to achieve the above purpose, the invention adopts the following technical scheme:
A method for rapidly locating the time point at which a capsule enters the duodenum is provided, comprising the following steps:
S1, labeling the images taken while the capsule is located in the stomach and the images taken while the capsule is located in the duodenum, to serve as an initial image set;
S2, screening the images in the initial image set by a residual method and removing repeated images, to obtain a preprocessed image set (an illustrative sketch of this step is given after this list);
S3, increasing the number of images in the preprocessed image set that were taken by the capsule in the duodenum, using an image augmentation method, to obtain a training set;
S4, performing classification training on a LeNet model with the training set, so that the trained LeNet model can classify images as taken in the stomach or in the duodenum;
S5, classifying the images taken by the target capsule with the trained LeNet model, obtaining for each image whether it was taken in the stomach or in the duodenum;
S6, starting from the first image classified as located in the duodenum, judging whether N consecutive subsequent images are classified as located in the stomach; if so, entering step S7, and if not, entering step S8;
S7, discarding the images classified as located in the duodenum that precede the N or more consecutive images classified as located in the stomach, and returning to step S6;
and S8, taking the shooting time of the first image classified as located in the duodenum as the time point at which the target capsule enters the duodenum, completing the localization.
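The patent does not spell out the residual method of step S2; the following is a minimal sketch of one plausible reading, in which the residual between a frame and the last kept frame is their mean absolute pixel difference, and a frame whose residual falls below a threshold is treated as a repeat. The function name remove_repeated_images and the threshold value are illustrative assumptions, not part of the patent.

```python
import numpy as np

def remove_repeated_images(frames, threshold=8.0):
    """Drop near-duplicate consecutive frames (sketch of step S2).

    frames: list of H x W x 3 uint8 arrays in shooting order.
    threshold: minimum mean absolute residual (on a 0-255 scale) between
        a frame and the last kept frame for the new frame to be retained;
        the value 8.0 is an illustrative assumption.
    """
    if not frames:
        return []
    kept = [frames[0]]
    for frame in frames[1:]:
        # Cast to a signed type so the subtraction does not wrap around.
        residual = np.abs(frame.astype(np.int16) - kept[-1].astype(np.int16))
        if residual.mean() > threshold:  # sufficiently different: keep it
            kept.append(frame)
    return kept
```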
Further, the specific method of step S1 is:
labeling the images taken while the capsule is in the stomach and the images taken while the capsule is in the duodenum with their shooting positions, and normalizing the labeled images to a size of 3 × 240 × 256 (channels × height × width), obtaining the initial image set.
Further, the image augmentation method in step S3 includes a RandomFlip method and a RandomCrop method.
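By way of illustration, the size normalization of step S1 and the augmentation of step S3 can be written with torchvision transforms. torchvision provides RandomHorizontalFlip and RandomVerticalFlip rather than a single RandomFlip, and the flip probabilities and the padding passed to RandomCrop are assumptions; the patent only names the two augmentation methods.

```python
import torchvision.transforms as T

# Normalization applied to every image (step S1): resize to 240 x 256 and
# convert to a 3 x 240 x 256 tensor with values in [0, 1].
base_transform = T.Compose([
    T.Resize((240, 256)),
    T.ToTensor(),
])

# Augmentation applied to duodenum images (step S3) to balance the two
# classes; the probabilities and the 8-pixel padding are assumptions.
augment_transform = T.Compose([
    T.Resize((240, 256)),
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),
    T.RandomCrop((240, 256), padding=8),
    T.ToTensor(),
])
```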
Further, the LeNet model in step S4 includes four groups of alternately connected convolutional and pooling layers, and three fully-connected layers sequentially connected after the last pooling layer; the first group of convolutional and pooling layers operates on a feature map of size 240 × 256 × 3, the second group on 118 × 126 × 32, the third group on 28 × 30 × 64, and the fourth group on 6 × 6 × 128; the first fully-connected layer has 512 nodes, the second 256 nodes, and the third 2 nodes.
Further, the value of N in step S6 is 3.
The system for rapidly positioning the time point at which the capsule enters the duodenum comprises an initial image set acquisition module, a preprocessing module, a data equalization module, a training module and an identification module;
the initial image set acquisition module is used for labeling the images taken by the capsule in the stomach and the images taken by the capsule in the duodenum, to serve as an initial image set;
the preprocessing module is used for screening the images in the initial image set by a residual method and removing repeated images, to obtain a preprocessed image set;
the data equalization module is used for increasing the number of images in the preprocessed image set that were taken by the capsule in the duodenum, using an image augmentation method, to obtain a training set;
the training module is used for performing classification training on the LeNet model with the training set, so that the trained LeNet model can classify images as taken in the stomach or in the duodenum;
the identification module is used for performing the following operations:
P1, classifying the images taken by the target capsule with the trained LeNet model, obtaining for each image whether it was taken in the stomach or in the duodenum;
P2, starting from the first image classified as located in the duodenum, determining whether N consecutive subsequent images are classified as located in the stomach; if so, proceeding to operation P3, otherwise proceeding to operation P4;
P3, discarding the images classified as located in the duodenum that precede the N or more consecutive images classified as located in the stomach, and returning to operation P2;
and P4, taking the shooting time of the first image classified as located in the duodenum as the time point at which the target capsule enters the duodenum, completing the localization.
Further, the LeNet model comprises four groups of alternately connected convolutional and pooling layers, and three fully-connected layers sequentially connected after the last pooling layer; the first group of convolutional and pooling layers operates on a feature map of size 240 × 256 × 3, the second group on 118 × 126 × 32, the third group on 28 × 30 × 64, and the fourth group on 6 × 6 × 128; the first fully-connected layer has 512 nodes, the second 256 nodes, and the third 2 nodes.
Further, in operation P2, the value of N is 3.
The invention has the following beneficial effects:
1. The invention trains a LeNet model on processed images taken by the capsule in the stomach and in the duodenum, and automatically obtains the time at which the capsule enters the duodenum through the trained LeNet model, thereby solving the time- and labor-consuming problem of manual review.
2. The residual method solves the problem that a large number of repeated images are produced because the capsule moves slowly in the human body, and the image augmentation method solves the problem that images taken in the duodenum are few; the training data of the model are therefore more balanced, the trained model recognizes the two sites better, and the time point at which the capsule enters the duodenum can be located more quickly.
3. The method uses a threshold of N consecutive images classified as located in the stomach to correct possibly erroneous recognition results, which improves the recognition accuracy.
Drawings
FIG. 1 is a schematic flow diagram of the method;
FIG. 2 is a schematic structural diagram of the LeNet model.
Detailed Description
The following description of the embodiments of the invention is provided to facilitate understanding by those skilled in the art, but it should be understood that the invention is not limited to the scope of the embodiments. To those skilled in the art, various changes are apparent without departing from the spirit and scope of the invention as defined in the appended claims, and all matters produced using the inventive concept of the present invention are protected.
As shown in FIG. 1, the method for rapidly locating the time point at which the capsule enters the duodenum comprises the following steps:
S1, labeling the images taken while the capsule is located in the stomach and the images taken while the capsule is located in the duodenum, to serve as an initial image set;
S2, screening the images in the initial image set by a residual method and removing repeated images, to obtain a preprocessed image set;
S3, increasing the number of images in the preprocessed image set that were taken by the capsule in the duodenum, using an image augmentation method, to obtain a training set;
S4, performing classification training on a LeNet model with the training set, so that the trained LeNet model can classify images as taken in the stomach or in the duodenum;
S5, classifying the images taken by the target capsule with the trained LeNet model, obtaining for each image whether it was taken in the stomach or in the duodenum;
S6, starting from the first image classified as located in the duodenum, judging whether N consecutive subsequent images are classified as located in the stomach; if so, entering step S7, and if not, entering step S8;
S7, discarding the images classified as located in the duodenum that precede the N or more consecutive images classified as located in the stomach, and returning to step S6;
and S8, taking the shooting time of the first image classified as located in the duodenum as the time point at which the target capsule enters the duodenum, completing the localization.
The specific method of step S1 is: labeling the images taken while the capsule is in the stomach and the images taken while the capsule is in the duodenum with their shooting positions, and normalizing the labeled images to a size of 3 × 240 × 256, obtaining the initial image set. The image augmentation method in step S3 includes a RandomFlip method and a RandomCrop method.
As shown in FIG. 2, the LeNet model in step S4 includes four groups of alternately connected convolutional and pooling layers, and three fully-connected layers sequentially connected after the last pooling layer; the first group of convolutional and pooling layers operates on a feature map of size 240 × 256 × 3, the second group on 118 × 126 × 32, the third group on 28 × 30 × 64, and the fourth group on 6 × 6 × 128; the first fully-connected layer has 512 nodes, the second 256 nodes, and the third 2 nodes.
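As one concrete reading of this architecture, the sketch below reproduces the stated feature-map sizes in PyTorch. The 5 × 5 kernels follow the embodiment described below; the pooling windows (2, 4, 4, 2), the ReLU activations, and the 128 output channels of the fourth group are assumptions, chosen so that a 3 × 240 × 256 input passes through feature maps of 32 × 118 × 126, 64 × 28 × 30 and 128 × 6 × 6, matching the sizes given in the text.

```python
import torch
import torch.nn as nn

class CapsuleLeNet(nn.Module):
    """Sketch of the four-group LeNet variant described in the patent."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5), nn.ReLU(), nn.MaxPool2d(2),    # -> 32 x 118 x 126
            nn.Conv2d(32, 64, 5), nn.ReLU(), nn.MaxPool2d(4),   # -> 64 x 28 x 30
            nn.Conv2d(64, 128, 5), nn.ReLU(), nn.MaxPool2d(4),  # -> 128 x 6 x 6
            nn.Conv2d(128, 128, 5), nn.ReLU(), nn.MaxPool2d(2), # -> 128 x 1 x 1
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, num_classes),  # stomach vs. duodenum logits
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Shape check: a batch of 3 x 240 x 256 images yields one logit pair each.
logits = CapsuleLeNet()(torch.zeros(1, 3, 240, 256))
assert logits.shape == (1, 2)
```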
The system for rapidly positioning the time point at which the capsule enters the duodenum comprises an initial image set acquisition module, a preprocessing module, a data equalization module, a training module and an identification module;
the initial image set acquisition module is used for labeling the images taken by the capsule in the stomach and the images taken by the capsule in the duodenum, to serve as an initial image set;
the preprocessing module is used for screening the images in the initial image set by a residual method and removing repeated images, to obtain a preprocessed image set;
the data equalization module is used for increasing the number of images in the preprocessed image set that were taken by the capsule in the duodenum, using an image augmentation method, to obtain a training set;
the training module is used for performing classification training on the LeNet model with the training set, so that the trained LeNet model can classify images as taken in the stomach or in the duodenum;
the identification module is used for performing the following operations:
P1, classifying the images taken by the target capsule with the trained LeNet model, obtaining for each image whether it was taken in the stomach or in the duodenum;
P2, starting from the first image classified as located in the duodenum, determining whether N consecutive subsequent images are classified as located in the stomach; if so, proceeding to operation P3, otherwise proceeding to operation P4;
P3, discarding the images classified as located in the duodenum that precede the N or more consecutive images classified as located in the stomach, and returning to operation P2;
and P4, taking the shooting time of the first image classified as located in the duodenum as the time point at which the target capsule enters the duodenum, completing the localization.
The LeNet model comprises four groups of alternately connected convolutional and pooling layers, and three fully-connected layers sequentially connected after the last pooling layer; the first group of convolutional and pooling layers operates on a feature map of size 240 × 256 × 3, the second group on 118 × 126 × 32, the third group on 28 × 30 × 64, and the fourth group on 6 × 6 × 128; the first fully-connected layer has 512 nodes, the second 256 nodes, and the third 2 nodes.
In one embodiment of the invention, the value of N is 3 and the convolution kernels are 5 × 5. The training set comprises mucosa images of the stomach, the duodenal bulb and the descending duodenum, as well as the various interference images encountered during capsule endoscopy, such as saliva, gastric juice, intestinal fluid, blood, gastric chyme, bile, dark-field images and blurred images.
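A minimal training sketch for step S4 follows, under common assumptions: cross-entropy loss over the two classes and an Adam optimizer. The learning rate, batch size and epoch count are illustrative values, since the patent does not specify the training hyperparameters.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train(model, train_set, epochs=10, batch_size=32, lr=1e-3, device="cpu"):
    """Two-class training loop (label 0 = stomach, 1 = duodenum).

    train_set: a Dataset yielding (3 x 240 x 256 tensor, label) pairs,
    e.g. built from the augmented training set of step S3.
    """
    model = model.to(device)
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        model.train()
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```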
The method relies on the fact that a capsule that has entered the descending duodenum does not return to the stomach or the duodenal bulb. Once the trained LeNet model confirms that the capsule endoscope has entered the descending duodenum, all images after the confirmation time should therefore be images of the descending duodenum; if images after the confirmation time are judged to be mucosa images of the stomach or the duodenal bulb, in particular when 3 or more such images appear, the confirmation point is judged to be wrong and is corrected. This improves the recognition accuracy of the LeNet model and the accuracy of locating the capsule's entry into the descending duodenum.
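The correction rule of steps S6 to S8 can be expressed as a short post-processing function over the per-image classifications. The sketch below assumes the model outputs are given as a chronologically ordered list of labels (0 = stomach, 1 = duodenum) with matching shooting timestamps; the function name and this encoding are illustrative.

```python
def locate_duodenum_entry(labels, timestamps, n=3):
    """Return the time point at which the capsule enters the duodenum.

    labels: chronologically ordered classifications from step S5,
        with 0 = stomach and 1 = duodenum.
    timestamps: shooting time of each image.
    n: run length of stomach images that invalidates a candidate entry
        point (the embodiment uses n = 3).
    """
    start = 0
    while True:
        # First image at or after `start` classified as duodenum (step S6).
        first = next((i for i in range(start, len(labels)) if labels[i] == 1), None)
        if first is None:
            return None  # no duodenum image survives the correction
        # Look for n consecutive stomach images after the candidate.
        run_end, count = None, 0
        for i in range(first + 1, len(labels)):
            count = count + 1 if labels[i] == 0 else 0
            if count >= n:
                run_end = i
                break
        if run_end is None:
            return timestamps[first]  # entry point confirmed (step S8)
        # The duodenum images before the run were misclassified; discard
        # them and search again after the run (step S7).
        start = run_end + 1

# A spurious duodenum frame at index 1 is rejected by the three stomach
# frames that follow it; the true entry is at t = 5.
assert locate_duodenum_entry([0, 1, 0, 0, 0, 1, 1, 1], list(range(8))) == 5
```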
In conclusion, the invention trains a LeNet model on processed images taken by the capsule in the stomach and in the duodenum, and automatically obtains the time at which the capsule enters the duodenum through the trained model, thereby solving the time- and labor-consuming problem of manual review.

Claims (8)

1. A method for rapidly locating the time point at which a capsule enters the duodenum, comprising the following steps:
S1, labeling the images taken while the capsule is located in the stomach and the images taken while the capsule is located in the duodenum, to serve as an initial image set;
S2, screening the images in the initial image set by a residual method and removing repeated images, to obtain a preprocessed image set;
S3, increasing the number of images in the preprocessed image set that were taken by the capsule in the duodenum, using an image augmentation method, to obtain a training set;
S4, performing classification training on a LeNet model with the training set, so that the trained LeNet model can classify images as taken in the stomach or in the duodenum;
S5, classifying the images taken by the target capsule with the trained LeNet model, obtaining for each image whether it was taken in the stomach or in the duodenum;
S6, starting from the first image classified as located in the duodenum, judging whether N consecutive subsequent images are classified as located in the stomach; if so, entering step S7, and if not, entering step S8;
S7, discarding the images classified as located in the duodenum that precede the N or more consecutive images classified as located in the stomach, and returning to step S6;
and S8, taking the shooting time of the first image classified as located in the duodenum as the time point at which the target capsule enters the duodenum, completing the localization.
2. The method for rapidly locating the time point at which a capsule enters the duodenum according to claim 1, wherein the specific method of step S1 is:
labeling the images taken while the capsule is in the stomach and the images taken while the capsule is in the duodenum with their shooting positions, and normalizing the labeled images to a size of 3 × 240 × 256, obtaining the initial image set.
3. The method for rapidly locating the time point at which a capsule enters the duodenum according to claim 1, wherein the image augmentation method in step S3 comprises a RandomFlip method and a RandomCrop method.
4. The method for rapidly locating the time point at which a capsule enters the duodenum according to claim 1, wherein the LeNet model in step S4 comprises four groups of alternately connected convolutional and pooling layers, and three fully-connected layers sequentially connected after the last pooling layer; wherein the first group of convolutional and pooling layers operates on a feature map of size 240 × 256 × 3, the second group on 118 × 126 × 32, the third group on 28 × 30 × 64, and the fourth group on 6 × 6 × 128; and the first fully-connected layer has 512 nodes, the second 256 nodes, and the third 2 nodes.
5. The method for rapidly locating the time point at which a capsule enters the duodenum according to claim 1, wherein the value of N in step S6 is 3.
6. A system for rapidly positioning the time point at which a capsule enters the duodenum, characterized by comprising an initial image set acquisition module, a preprocessing module, a data equalization module, a training module and an identification module;
the initial image set acquisition module is used for labeling the images taken by the capsule in the stomach and the images taken by the capsule in the duodenum, to serve as an initial image set;
the preprocessing module is used for screening the images in the initial image set by a residual method and removing repeated images, to obtain a preprocessed image set;
the data equalization module is used for increasing the number of images in the preprocessed image set that were taken by the capsule in the duodenum, using an image augmentation method, to obtain a training set;
the training module is used for performing classification training on the LeNet model with the training set, so that the trained LeNet model can classify images as taken in the stomach or in the duodenum;
the identification module is used for performing the following operations:
P1, classifying the images taken by the target capsule with the trained LeNet model, obtaining for each image whether it was taken in the stomach or in the duodenum;
P2, starting from the first image classified as located in the duodenum, determining whether N consecutive subsequent images are classified as located in the stomach; if so, proceeding to operation P3, otherwise proceeding to operation P4;
P3, discarding the images classified as located in the duodenum that precede the N or more consecutive images classified as located in the stomach, and returning to operation P2;
and P4, taking the shooting time of the first image classified as located in the duodenum as the time point at which the target capsule enters the duodenum, completing the localization.
7. The system for rapidly positioning the time point at which a capsule enters the duodenum according to claim 6, wherein the LeNet model comprises four groups of alternately connected convolutional and pooling layers, and three fully-connected layers sequentially connected after the last pooling layer; wherein the first group of convolutional and pooling layers operates on a feature map of size 240 × 256 × 3, the second group on 118 × 126 × 32, the third group on 28 × 30 × 64, and the fourth group on 6 × 6 × 128; and the first fully-connected layer has 512 nodes, the second 256 nodes, and the third 2 nodes.
8. The system for rapidly positioning the time point at which a capsule enters the duodenum according to claim 6, wherein the value of N in operation P2 is 3.
Application CN202011016314.2A, filed 2020-09-24 (priority date 2020-09-24): Method and system for quickly positioning time point of capsule entering duodenum. Status: Pending. Publication: CN112022065A.

Priority Applications (1)

Application Number: CN202011016314.2A; Priority Date: 2020-09-24; Filing Date: 2020-09-24; Title: Method and system for quickly positioning time point of capsule entering duodenum

Applications Claiming Priority (1)

Application Number: CN202011016314.2A; Priority Date: 2020-09-24; Filing Date: 2020-09-24; Title: Method and system for quickly positioning time point of capsule entering duodenum

Publications (1)

Publication Number: CN112022065A; Publication Date: 2020-12-04

Family

Family ID: 73573832

Family Applications (1)

Application Number: CN202011016314.2A; Title: Method and system for quickly positioning time point of capsule entering duodenum; Priority Date: 2020-09-24; Filing Date: 2020-09-24

Country Status (1)

Country: CN; Publication: CN112022065A

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101365136A (en) * 2008-09-09 2009-02-11 深圳市同洲电子股份有限公司 Method and apparatus for intra-frame prediction
CN102999912A (en) * 2012-11-27 2013-03-27 宁波大学 Three-dimensional image quality objective evaluation method based on distorted images
WO2018112255A1 (en) * 2016-12-14 2018-06-21 Progenity Inc. Treatment of a disease of the gastrointestinal tract with an immunosuppressant
CN108960198A (en) * 2018-07-28 2018-12-07 天津大学 A kind of road traffic sign detection and recognition methods based on residual error SSD model
CN109583325A (en) * 2018-11-12 2019-04-05 平安科技(深圳)有限公司 Face samples pictures mask method, device, computer equipment and storage medium
CN110367913A (en) * 2019-07-29 2019-10-25 杭州电子科技大学 Wireless capsule endoscope image pylorus and ileocaecal sphineter localization method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Xiangrong, Feng Jie, Liu Fang, Jiao Licheng: "Pattern Recognition" (Frontier Technologies of Artificial Intelligence Series), 30 September 2019 *

Similar Documents

Publication Publication Date Title
WO2021036863A1 (en) Deep learning-based diagnosis assistance system for early digestive tract cancer and examination apparatus
CN109146884B (en) Endoscopic examination monitoring method and device
CN106934799B (en) Capsule endoscope visual aids diagosis system and method
WO2020071677A1 (en) Method and apparatus for diagnosing gastric lesions by using deep learning on gastroscopy images
CN110367913B (en) Wireless capsule endoscope image pylorus and ileocecal valve positioning method
CN107767365A (en) A kind of endoscopic images processing method and system
CN111986196B (en) Automatic monitoring method and system for retention of gastrointestinal capsule endoscope
CN109584229A A real-time assistant diagnosis system and method for endoscopic retrograde cholangiopancreatography
CN111340094A (en) Capsule endoscope image auxiliary classification system and classification method based on deep learning
CN115564712B (en) Capsule endoscope video image redundant frame removing method based on twin network
CN114399465B (en) Benign and malignant ulcer identification method and system
CN114259197B (en) Capsule endoscope quality control method and system
CN111839428A (en) Method for improving detection rate of colonoscope adenomatous polyps based on deep learning
JP2006223376A (en) Medical image processing apparatus
AU2020101450A4 (en) Retinal vascular disease detection from retinal fundus images using machine learning
CN114782760A (en) Stomach disease picture classification system based on multitask learning
Xu et al. Upper gastrointestinal anatomy detection with multi‐task convolutional neural networks
CN111493805A (en) State detection device, method, system and readable storage medium
CN111768389A (en) Automatic timing method for digestive tract operation based on convolutional neural network and random forest
CN108937871A (en) A kind of alimentary canal micro-optics coherence tomography image analysis system and method
CN114359131A (en) Helicobacter pylori stomach video full-automatic intelligent analysis system and marking method thereof
CN112022065A (en) Method and system for quickly positioning time point of capsule entering duodenum
CN111784669A (en) Capsule endoscopy image multi-focus detection method
Patel et al. Deep learning in gastrointestinal endoscopy
CN111126474B (en) Confocal laser micro-endoscope digestive tract image identification method and system

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20201204)