WO2022149836A1 - Control method, apparatus, and program for a lesion determination system using images acquired in real time - Google Patents
Control method, apparatus, and program for a lesion determination system using images acquired in real time
- Publication number: WO2022149836A1 (application PCT/KR2022/000109)
- Authority: WIPO (PCT)
- Prior art keywords: image, lesion, server, icon, displaying
Classifications
- A61B1/00009—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
- A61B1/000094—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope extracting biological structures
- A61B1/000096—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope using artificial intelligence
- A61B1/00011—Operational features of endoscopes characterised by signal transmission
- A61B1/00016—Operational features of endoscopes characterised by signal transmission using wireless means
- A61B1/00039—Operational features of endoscopes provided with input arrangements for the user
- A61B1/0004—Operational features of endoscopes provided with input arrangements for the user for electronic operation
- A61B1/00045—Display arrangement
- A61B1/00055—Operational features of endoscopes provided with output arrangements for alerting the user
- A61B1/273—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection for the upper alimentary canal, e.g. oesophagoscopes, gastroscopes
- A61B1/2736—Gastroscopes
- A61B1/041—Capsule endoscopes for imaging
- G06T7/0012—Biomedical image inspection
- G06T2200/24—Indexing scheme for image data processing or generation involving graphical user interfaces [GUIs]
- G06T2207/10016—Video; Image sequence
- G06T2207/10068—Endoscopic image
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30092—Stomach; Gastric
- G06T2207/30096—Tumor; Lesion
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
- G06T2207/30104—Vascular flow; Blood flow; Perfusion
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06V10/945—User interactive design; Environments; Toolboxes
- G06V10/95—Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes
- G06V20/49—Segmenting video sequences
- G06V20/50—Context or environment of the image
- G06V2201/031—Recognition of patterns in medical or anatomical images of internal organs
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
- G16H50/20—ICT specially adapted for medical diagnosis for computer-aided diagnosis, e.g. based on medical expert systems
- G16H50/70—ICT specially adapted for medical diagnosis for mining of medical data, e.g. analysing previous cases of other patients
Definitions
- The present invention relates to a method, apparatus, and program for controlling a lesion determination system that operates on images acquired in real time, and more particularly to an endoscopy decision-support apparatus and method that uses deep learning to automatically detect and classify lesions in upper gastrointestinal endoscopy images and to provide information on the depth of gastric wall invasion.
- The diagnosis of gastric neoplasia, including gastric cancer, is made by pathological reading of tissue collected by forceps biopsy when the endoscopist suspects a lesion on gross findings during the examination. Biopsies are also performed to confirm that lesions difficult to assess with the naked eye are not carcinomas or neoplasms.
- However, the concordance between gross findings and biopsy results is low (the endoscopist's visual estimate often does not match the actual diagnosis), and the rate of discrepancy between the initial judgment based on biopsy (which examines only part of the lesion) and the final pathological findings of the whole tissue after surgical or endoscopic resection is reported variously as 20-60%, depending on the experience and skill of the physician. This low initial accuracy causes confusion in changing the treatment policy and in predicting the patient's prognosis.
- If a lesion is missed during endoscopy, this directly affects the patient's prognosis.
- The possibility of missing a lesion is also constantly raised where a thorough or intensive examination is difficult for physical reasons. In other words, there is an unmet medical need in the detection and visual assessment of gastric neoplasms.
- In addition, the depth of invasion of the lesion into the gastric wall must be predicted. This is judged mainly by the physician's macroscopic findings and by examinations such as endoscopic ultrasound; endoscopic resection is indicated only when the gastric cancer or gastric neoplasm is confined to the mucosa or the depth of submucosal infiltration is less than 500 μm.
- An object of the present invention is to provide a method of controlling a lesion determination system that operates on images acquired in real time.
- A method for controlling a lesion determination system operating on real-time images, for solving the above problem, comprises: acquiring, by an endoscope apparatus, an upper gastrointestinal endoscopy image; transmitting, by the endoscope apparatus, the acquired gastroscopy image to a server; determining, by the server, a lesion included in the upper endoscopy image by inputting the upper endoscopy image to a first artificial intelligence model; capturing an image of the lesion and transmitting it to a database of the server; determining, by the server, the type of the lesion included in the image by inputting the image to a second artificial intelligence model; and displaying, by a display device, a UI for guiding the location of the lesion in the upper endoscopy image when a lesion is determined in the upper endoscopy image.
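- As a minimal sketch of the claimed two-model flow, the function below wires the steps together; `detect_lesions`, `classify_lesion`, `database`, and `display` are hypothetical placeholders, not components named by the patent.

```python
def process_frame(frame, detect_lesions, classify_lesion, database, display):
    """One pass of the claimed flow for a single upper endoscopy frame."""
    regions = detect_lesions(frame)                   # first AI model: find lesions
    for region in regions:
        database.save(frame, region)                  # store the captured image
        lesion_type = classify_lesion(frame, region)  # second AI model: lesion type
        display.show_lesion_ui(region, lesion_type)   # UI guiding the lesion location
```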
- The step of determining the lesion may include: when an image reading command is received from the operator while a real-time image is being captured by the endoscope apparatus, acquiring, by the server, a plurality of images covering a preset time before the point at which the image reading command was input; inputting, by the server, the plurality of images to the second artificial intelligence model and determining whether the plurality of images include a lesion and, if so, its type; determining, by the server, whether the lesion is included in the real-time image when it is determined that the lesion is included in the plurality of images; and, when the lesion is included, displaying, by the display device, a UI for guiding the location of the lesion in the real-time image.
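- One way to realize the "preset time before the read command" is a fixed-length frame buffer; the sketch below assumes a hypothetical 30 fps capture rate and a two-second window.

```python
from collections import deque

FPS = 30             # assumed capture rate
PRESET_SECONDS = 2   # assumed "preset time" before the read command

frame_buffer = deque(maxlen=FPS * PRESET_SECONDS)  # discards frames older than the window

def on_frame(frame):
    frame_buffer.append(frame)  # called for every live frame

def on_read_command(second_model):
    # Judge the frames captured during the preset interval before the command.
    return [second_model(f) for f in list(frame_buffer)]
```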
- The method may further include: when the endoscopy image is input to the first artificial intelligence model, determining, by the server, whether the image is an upper endoscopy image; and displaying, by the display device, a UI for inputting new patient information when the image is not an upper endoscopy image.
- The step of determining whether the image is an upper endoscopy image may include: acquiring, by the server, data corresponding to the average contrast of the endoscopy room in which the image is captured; and determining, by the server, whether the endoscope apparatus is located outside the human body based on that data.
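- A simple reading of this step is to compare each frame's mean intensity against the endoscopy room's average; the function name and tolerance below are illustrative assumptions.

```python
import numpy as np

def is_outside_body(frame, room_average, tolerance=20.0):
    """Heuristic sketch: a frame whose mean intensity is close to the
    endoscopy room's average brightness was likely shot outside the
    human body, whereas in-body frames differ markedly."""
    return abs(float(np.mean(frame)) - room_average) < tolerance
```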
- The step of determining the type of lesion included in the image may include: determining, by the server, whether bleeding has occurred due to a biopsy; and, when it is determined that bleeding has occurred, not performing lesion determination on the location where the bleeding occurred.
- The control method may further include: dividing, by the server, the endoscopy image into a plurality of frames; and, when a lesion is determined in a number of consecutive frames equal to or greater than a preset number among the plurality of frames, determining, by the server, that the lesions in those consecutive frames are the same lesion.
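- The consecutive-frame rule can be sketched as a run-length check; the preset count of 5 is an assumed value.

```python
PRESET_COUNT = 5  # assumed "preset number" of consecutive frames

def confirm_same_lesion(frame_detections):
    """frame_detections: booleans, one per frame, True if a lesion was
    determined in that frame. Returns True once a lesion persists for
    PRESET_COUNT consecutive frames, so the run is treated as one lesion."""
    run = 0
    for detected in frame_detections:
        run = run + 1 if detected else 0
        if run >= PRESET_COUNT:
            return True
    return False
```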
- The step of displaying the UI may include: displaying a first icon for inputting patient information, a second icon for confirming an image including the determined lesion, a third icon for confirming an examination result image, a fourth icon for changing setting values, and a fifth icon for returning to the real-time image; displaying a first UI for inputting a patient name, patient chart number, gender, and year of birth when a user command for the first icon is input; displaying a second UI for guiding a list of images including a lesion when a user command for the second icon is input; displaying a third UI for guiding a list indicating the determination result for each lesion when a user command for the third icon is input; displaying a fourth UI for changing a setting value when a user command is input through the fourth icon; displaying a fifth UI for displaying the real-time image when a user command is input through the fifth icon; and, when a first user command for one of the first icon, the second icon
- The step of determining the lesion included in the upper endoscopy image may include: determining whether the determined lesion is a lesion requiring real-time treatment; when the determined lesion requires real-time treatment, calculating the difference between the time at which the endoscopy image was received from the endoscope apparatus and the time at which the lesion included in the image was determined; and, when the difference is equal to or greater than a preset value, displaying information on the lesion requiring treatment together with the difference value on the fifth UI.
- FIG. 1 is a system diagram according to an embodiment of the present invention.
- FIGS. 2 and 3 are exemplary views for explaining a pre-processing method of an artificial intelligence model according to an embodiment of the present invention.
- FIGS. 4 to 6 are exemplary views for explaining a UI screen display method according to an embodiment of the present invention.
- FIG. 7 is a flowchart according to an embodiment of the present invention.
- FIG. 8 is a block diagram of an apparatus according to an embodiment of the present invention.
- As used herein, “unit” or “module” refers to a software component or a hardware component such as an FPGA or ASIC, and a “unit” or “module” performs certain roles.
- However, “unit” or “module” is not limited to software or hardware.
- A “unit” or “module” may be configured to reside on an addressable storage medium or to drive one or more processors.
- Thus, as an example, “unit” or “module” includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
- The components and functionality provided within “units” or “modules” may be combined into a smaller number of components and “units” or “modules”, or further separated into additional components and “units” or “modules”.
- Spatially relative terms such as “below”, “beneath”, “lower”, “above”, and “upper” can be used to easily describe the relationship between one component and other components.
- Spatially relative terms should be understood to include different orientations of a component in use or operation in addition to the orientation shown in the drawings. For example, if a component shown in a drawing is turned over, a component described as “below” or “beneath” another component may be placed “above” the other component. Accordingly, the exemplary term “below” can encompass both an orientation of below and above. Components may also be oriented in other directions, and spatially relative terms may be interpreted according to orientation.
- In this specification, a computer means any type of hardware device including at least one processor, and, depending on the embodiment, may be understood to encompass software configurations running on that hardware device.
- For example, a computer may be understood to include, but is not limited to, smartphones, tablet PCs, desktops, notebooks, and user clients and applications running on each of these devices.
- Each step described in this specification is described as being performed by a computer, but the subject of each step is not limited thereto; depending on the embodiment, at least part of each step may be performed by a different device.
- FIG. 1 is a system diagram according to an embodiment of the present invention.
- Referring to FIG. 1, the lesion determination system operating on real-time images may include a server 100, an endoscope apparatus 200, and a display apparatus 300.
- For information sharing between the server 100, the endoscope apparatus 200, and the display apparatus 300, various networks may be used, such as a 3GPP (3rd Generation Partnership Project) network, an LTE (Long Term Evolution) network, a 5G (5th Generation) network, a WIMAX (Worldwide Interoperability for Microwave Access) network, wired or wireless Internet, a LAN (Local Area Network), a Wireless LAN (Wireless Local Area Network), a WAN (Wide Area Network), a PAN (Personal Area Network), a Bluetooth network, a Wifi network, an NFC (Near Field Communication) network, a satellite broadcasting network, an analog broadcasting network, or a DMB (Digital Multimedia Broadcasting) network.
- the server 100 is configured to determine a gastric lesion by acquiring an image from the endoscope apparatus 200 .
- the server 100 may include an image acquiring unit, a data generating unit, a preprocessing unit, a learning unit, and a lesion determining unit.
- the configuration of the server 100 is not limited to those disclosed above.
- the server 100 may further include a database for storing information.
- the image acquisition unit may acquire a plurality of gastric lesion images.
- the image acquisition unit may receive a gastric lesion image from a photographing device provided in the endoscope apparatus 200 .
- the image acquisition unit may acquire a gastric lesion image acquired with an endoscope imaging device (digital camera) used for gastroendoscopic treatment.
- the image acquisition unit may collect an endoscopic white light image of the pathologically confirmed gastric lesion.
- the image acquisition unit may receive a plurality of gastric lesion images from image storage devices and database systems of a plurality of hospitals.
- the apparatus for storing images of a plurality of hospitals may be a device storing gastric lesion images obtained when performing gastroscopy in a plurality of hospitals.
- the image acquisition unit may acquire images photographed while changing any one of the angle, direction, and distance with respect to a first region of the subject's stomach.
- the image acquisition unit may acquire a gastric lesion image in JPEG format.
- the gastric lesion image may have a field of view of 35 degrees and a resolution of 1280 x 640 pixels.
- the image acquisition unit may acquire an image from which individual identifier information for each gastric lesion image is removed.
- the image acquisition unit may acquire an image in which the lesion is located at the center and the black frame area has been removed from the gastric lesion image.
- the image acquisition unit may exclude an image of low quality or low resolution, such as one that is out of focus or contains artifacts, during the image acquisition process.
- likewise, the image acquisition unit may exclude an image that is not applicable to the deep learning algorithm.
- the data generator may generate a data set by associating a plurality of gastric lesion images with patient information.
- the patient information may include various information about the subject (patient), such as gender, age, height, weight, race, nationality, smoking amount, alcohol consumption, and family history.
- the patient information may include clinical information.
- Clinical information may mean all data used by a physician in a hospital to make a specific decision; in particular, it may be electronic medical record data including gender and age, specific treatment status data, claim data, and prescription data generated in the course of treatment.
- clinical information may include biological data such as genetic information.
- the biological data may include personal health information with numerical values such as heart rate, electrocardiogram, amount of exercise, oxygen saturation, blood pressure, weight, and blood glucose.
- Patient information is input to the fully connected neural network together with the output of the convolutional neural network structure in the learning unit described below; by providing the artificial neural network with information beyond the gastric lesion image, further improvement in accuracy can be expected.
- the data generator may generate a training data set and a verification data set for applying the deep learning algorithm.
- the data sets may be generated by dividing the data into a training data set required for training the artificial neural network and a validation data set for verifying the training progress of the artificial neural network.
- the data generating unit may randomly assign each gastric lesion image obtained from the image acquisition unit to either the training data set or the verification data set.
- the data generator may use the images remaining after selecting the verification data set as the training data set.
- the data set for verification may be randomly selected.
- the ratio of the verification data set to the training data set may be determined by a preset reference value; for example, the verification data set may be set to 10% and the training data set to 90%, but the ratio is not limited thereto.
- the data generator may separate the training data set from the verification data set in order to prevent overfitting. For example, since the neural network could overfit the training data set due to its learning characteristics, the data generator can use the verification data set to keep the artificial neural network from overfitting.
- the verification data set may be a data set that does not overlap the training data set. Since the verification data are not used in building the artificial neural network, they are encountered by the network for the first time during verification. Therefore, the verification data set is suitable for evaluating the performance of the artificial neural network on new images (images not used for training).
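- As a minimal sketch, a non-overlapping random 90/10 split as described above could be implemented as follows; the seed and function name are illustrative assumptions, not part of the patent.

```python
import random

def split_dataset(images, verification_ratio=0.10, seed=0):
    """Randomly hold out `verification_ratio` of the gastric lesion images
    for verification and use the remainder for training; the two sets do
    not overlap."""
    shuffled = list(images)
    random.Random(seed).shuffle(shuffled)
    n_verify = int(len(shuffled) * verification_ratio)
    return shuffled[n_verify:], shuffled[:n_verify]  # (training, verification)
```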
- the preprocessor may preprocess the data set to be applicable to the deep learning algorithm.
- the preprocessor can preprocess the data set to improve the recognition performance of the deep learning algorithm and to minimize similarity between images from different patients.
- a deep learning algorithm can consist of two parts: a structure of convolutional neural networks and a structure of fully-connected neural networks.
- the artificial intelligence model according to an embodiment of the present invention may be a Modified UNet++, in which an edge smoothing algorithm is applied to UNet++, itself a modified version of UNet.
- the backbone of Modified UNet++ may be DenseNet121.
- the artificial intelligence model according to the present invention may be a five-class classification model covering advanced gastric cancer, early gastric cancer, high-grade dysplasia, low-grade dysplasia, and normal lesions.
- this AI model may use the following training parameters: a) weight initialization with ImageNet; b) augmentation: horizontal/vertical flip, rotation (-10° to +10°); c) batch size: 6; d) learning rate: 4e-04; e) epochs: 100; f) optimizer: Adam; g) loss function: categorical cross-entropy.
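- These parameters map onto a standard Keras training loop roughly as sketched below; `build_modified_unetpp`, `x_train`, and `y_train` are hypothetical placeholders, and the patent's actual Modified UNet++ architecture is not reproduced here.

```python
import tensorflow as tf

# Placeholder for the patent's Modified UNet++ with a DenseNet121 backbone;
# a) ImageNet weight initialization is assumed to happen inside the backbone.
model = build_modified_unetpp(backbone="densenet121", num_classes=5)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=4e-4),  # d) learning rate
    loss="categorical_crossentropy",                         # g) loss function
    metrics=["accuracy"],
)

augment = tf.keras.preprocessing.image.ImageDataGenerator(
    horizontal_flip=True, vertical_flip=True, rotation_range=10,  # b) augmentation
)

model.fit(augment.flow(x_train, y_train, batch_size=6), epochs=100)  # c), e)
```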
- the AI model according to the present invention may also be a two-class classification model distinguishing mucosa-confined lesions from submucosa-invaded lesions.
- this AI model may use the following training parameters: a) augmentation: horizontal/vertical flip, rotation (-5° to +5°), horizontal/vertical shift (-10% to +10%), zoom (0.8 to 1.2); b) batch size: 2; c) learning rate: 1.25e-4; d) epochs: 200; e) dropout: 0.4; f) learning rate scheduler with 0.97 decay.
- the artificial intelligence model according to the present invention may also be a four-class classification model covering advanced gastric cancer, early gastric cancer, adenoma (dysplasia), and the normal category.
- the pre-processing unit may include an amplifying unit (not shown) that amplifies image data to increase the number of gastric lesion images.
- the amplifying unit may perform a data amplification process based on the training data set.
- the amplification unit may perform a data amplification process by applying at least one of rotation, flipping, cropping, and noise mixing to the gastric lesion image.
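- A toy version of such an amplification step, using only the transformations named above, might look like the following (a production pipeline would typically rely on an augmentation library):

```python
import numpy as np

def amplify(image, rng=None):
    """Return extra training samples derived from one gastric lesion image
    via rotation, flipping, cropping, and noise mixing."""
    rng = rng or np.random.default_rng()
    return [
        np.rot90(image),                            # rotation
        np.fliplr(image),                           # horizontal flip
        np.flipud(image),                           # vertical flip
        image[10:-10, 10:-10],                      # cropping
        image + rng.normal(0.0, 5.0, image.shape),  # noise mixing
    ]
```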
- the preprocessor may perform a preprocessing process to correspond to a preset reference value.
- the preset reference value may be a value arbitrarily designated by a user.
- the preset reference value may be a value determined by the average value of the acquired gastric lesion images.
- the data set passed through the preprocessor may be provided to the learning unit.
- the learning unit can build an artificial neural network through learning that takes the preprocessed data set as an input and outputs items related to the classification result of the stomach lesion as an output.
- the learning unit applies a deep learning algorithm consisting of two parts, a convolutional neural network structure and a fully connected neural network structure, and outputs the gastric lesion classification result.
- a fully connected deep neural network is a neural network in which nodes are connected two-dimensionally across layers, with no connections between nodes in the same layer and connections only between nodes in adjacent layers.
- the learning unit can build a training model through learning in which the preprocessed training data set is input to a convolutional neural network, and a fully connected deep neural network takes the output of the convolutional neural network as its input.
- the convolutional neural network may extract a plurality of specific feature patterns for analyzing a gastric lesion image.
- the extracted specific feature pattern can be used for final classification in a fully connected deep neural network.
- Convolutional neural networks are a type of neural network used mainly in speech and image recognition. They are designed to process multidimensional array data and are specialized for multidimensional arrays such as color images; accordingly, most deep learning techniques in the image recognition field are based on convolutional neural networks.
- a convolutional neural network processes an image by dividing it into multiple patches rather than treating it as a single piece of data. In this way, partial features of the image can be extracted even when the image is distorted, yielding correct performance.
- the convolutional neural network may have a plurality of layer structures.
- the elements constituting each layer may be composed of a convolutional layer, an activation function, a max pooling layer, an activation function, and a dropout layer.
- the convolutional layer applies a filter called a kernel over parts of the entire image (or of a previously generated feature pattern), extracting a new feature pattern of the same size as the image.
- the convolutional layer can then adjust the values of the feature pattern through an activation function.
- the max pooling layer can reduce the image size by downsampling parts of the gastric lesion image.
- the convolutional neural network goes through the convolutional layer and the max pooling layer, so that the size of the feature pattern is reduced, but a plurality of feature patterns can be extracted by utilizing a plurality of kernels.
- the dropout layer may be a method of intentionally ignoring some weights while training the weights of the convolutional neural network, for more efficient training. On the other hand, the dropout layer is not applied when an actual test is performed with the trained model.
- a plurality of feature patterns extracted from the convolutional neural network can be transferred to the next step, a fully connected deep neural network, and used for classification.
- the number of layers in a convolutional neural network can be controlled; by adjusting the number of layers to the amount of training data available, a more stable model can be built.
- the learning unit can build a judgment (training) model through learning in which the preprocessed training data set is input to the convolutional neural network, and the output of the convolutional neural network together with patient information is input to the fully connected deep neural network.
- the learning unit may allow the image data that has undergone the preprocessing process to enter the convolutional neural network preferentially, and the result from the convolutional neural network to enter the fully connected deep neural network.
- the learning unit may allow arbitrarily extracted features to enter the fully connected deep neural network without going through the convolutional neural network.
- the patient information may include various information about the subject (patient), such as gender, age, height, weight, race, nationality, smoking amount, alcohol consumption, and family history.
- the patient information may include clinical information.
- Clinical information may mean all data used by a physician in a hospital to make a specific decision.
- for example, it may be electronic medical record data including gender and age, specific treatment status data, claim data, and prescription data generated in the course of treatment.
- clinical information may include biological data such as genetic information.
- the biological data may include personal health information with numerical values such as heart rate, electrocardiogram, amount of exercise, oxygen saturation, blood pressure, weight, and blood glucose.
- Patient information is input to the fully connected neural network together with the output of the convolutional neural network structure in the learning unit; by providing information beyond the gastric lesion image, the artificial neural network can be expected to achieve higher accuracy.
- for example, results can be derived reflecting relationships such as elderly patients being more likely to have cancer.
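- The described combination of CNN image features with tabular patient information in a fully connected head can be sketched in Keras as below; the layer sizes, input shapes, and the eight-feature patient vector are illustrative assumptions rather than the patent's actual architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

image_in = tf.keras.Input(shape=(224, 224, 3))   # gastric lesion image
patient_in = tf.keras.Input(shape=(8,))          # e.g. age, sex, ... (assumed size)

# Convolutional part: extracts feature patterns from the image.
x = layers.Conv2D(32, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# Fully connected part: CNN features concatenated with patient information.
h = layers.Concatenate()([x, patient_in])
h = layers.Dense(64, activation="relu")(h)
h = layers.Dropout(0.4)(h)
out = layers.Dense(5, activation="softmax")(h)   # five-class lesion output

model = tf.keras.Model(inputs=[image_in, patient_in], outputs=out)
```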
- the learning unit compares the result obtained by applying the training data to the deep learning algorithm structure (a convolutional neural network feeding a fully connected deep neural network) with the actual result, and learns by feeding the error back through a backpropagation algorithm that gradually changes the weights of the neural network structure in proportion to the error.
- the backpropagation algorithm adjusts the weight from each node to the next node so as to reduce the error of the result (the difference between the actual value and the predicted value).
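- In standard notation, the weight update described here is gradient descent on the error E with a learning rate η:

$$ w_{ij} \leftarrow w_{ij} - \eta \, \frac{\partial E}{\partial w_{ij}} $$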
- the learning unit may derive a final judgment model by training the neural network with the training data set and the verification data set to obtain the weight parameters.
- the lesion determination unit may preprocess the new data set and then perform gastric lesion determination through an artificial neural network.
- the lesion determination unit may derive a determination on the new data using the final determination model derived from the above-described learning unit.
- the new data may be data including an image of a stomach lesion that the user wants to determine.
- the new data set may be a data set generated by associating a new gastric lesion image with patient information.
- the new data set can be preprocessed into a state applicable to the deep learning algorithm through the preprocessing process of the preprocessor. Thereafter, the preprocessed new data set may be input to the model built by the learning unit, and the gastric lesion image may be determined based on the learned parameters.
- the lesion determination unit may classify the gastric lesion into at least one of advanced gastric cancer, early gastric cancer, high-grade dysplasia, low-grade dysplasia, and non-tumor.
- alternatively, the lesion determination unit may classify lesions into cancer and non-cancer.
- the lesion determination unit may also perform gastric lesion classification by dividing lesions into two categories, neoplastic and non-neoplastic.
- Neoplasia classification may include AGC, EGC, HGD and LGD.
- Non-tumor categories may include lesions such as gastritis, benign ulcers, malformations, polyps, intestinal metaplasia, or subepithelial tumors.
- in order to reduce the side effects caused by unnecessary biopsy or endoscopic resection performed to classify ambiguous lesions, the server 100 automatically classifies and determines ambiguous lesions by analyzing the images acquired with the endoscope apparatus 200; in the case of a neoplasm (a dangerous tumor), endoscopic resection may then be performed.
- the endoscopy device 200 may be a device used during gastroscopy.
- the endoscope apparatus 200 may include a manipulation unit and a body unit.
- the endoscope apparatus 200 may include a body to be inserted into the body and a manipulation unit provided at the rear end of the body.
- the front end of the body part may include a photographing unit for imaging the inside of the body, a lighting unit for irradiating light onto the imaged area, a water spraying unit for washing the inside of the body to ease imaging, and a suction unit for sucking foreign substances or air out of the body.
- a channel corresponding to each of the plurality of units (parts) may be provided in the body portion.
- a biopsy channel may be provided in the insertion unit, and the endoscopic operator may insert a scalpel into the biopsy channel to collect tissue inside the body.
- the photographing unit (i.e., camera) provided in the endoscope apparatus 200 for imaging the inside of the body may be a small camera.
- the imaging device may acquire a white light gastroscopic image.
- the photographing unit of the endoscope apparatus 200 may transmit the acquired gastric lesion image to the server 100 through a network.
- the server 100 may generate a control signal for controlling the biopsy unit based on the determination result of the gastric lesion.
- the biopsy unit may be a unit for collecting tissue inside the body. By collecting the tissue inside the body, it is possible to determine whether the tissue is positive or negative. In addition, by excising the tissue inside the body, it is possible to remove the cancerous tissue.
- the server 100 may be included in the endoscope apparatus 200 for acquiring an upper endoscopy image and collecting tissue inside the body.
- the upper endoscopy image acquired in real time from the endoscope apparatus 200 may be input to the artificial neural network built through learning and classified into at least one of the items related to the gastric lesion determination result, so that determination and prediction of the gastric lesion can be performed.
- the endoscope apparatus 200 may be formed in a capsule form.
- the endoscope apparatus 200 may be formed in a capsule form and inserted into a body of a subject to acquire an upper endoscopy image.
- the capsule-type endoscope apparatus 200 may also provide location information indicating whether it is in the subject's esophagus, stomach, small intestine, or large intestine.
- the capsule-type endoscope apparatus 200 may be located in the body of a subject (patient), and may provide an image (image) acquired in real time to the server 100 through a network.
- since the capsule-type endoscope apparatus 200 provides not only the upper endoscopy image but also the location at which that image was obtained, if the classification result of the server 100 belongs to at least one of advanced gastric cancer, early gastric cancer, high-grade dysplasia, and low-grade dysplasia, in other words, if it is a dangerous tumor, the user (physician) can identify the location of the lesion so that resection can be performed immediately.
- the server 100 performs gastric lesion determination by inputting the gastric lesion endoscopy image acquired in real time by the endoscope apparatus 200 to an algorithm generated through learning, and a lesion suspected of neoplasia can be excised with the endoscope apparatus 200 using endoscopic mucosal resection or endoscopic submucosal dissection.
- the endoscope apparatus 200 may control the photographing unit by using the manipulation unit.
- the manipulation unit may receive a manipulation input signal from the user in order for the imaging unit to put the position of the target lesion in the field of view.
- the manipulation unit may control the position of the photographing unit based on a manipulation input signal received from the user.
- the manipulation unit may receive a manipulation input signal for capturing the corresponding image and generate a signal for capturing the corresponding gastric lesion image.
- the endoscope device 200 may be a device formed in the form of a capsule.
- the capsule endoscope apparatus 200 may be inserted into a human body of a subject (subject) and operated remotely.
- the gastric lesion images obtained from the capsule endoscope apparatus 200 may include not only images of the regions the user wants to capture but also all frames obtained by recording video.
- the capsule endoscope apparatus 200 may include a photographing unit and a manipulation unit. The photographing unit may be inserted into the human body and controlled inside the human body based on a manipulation signal of the manipulation unit.
- the display device 300 may include, for example, a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, or a microelectromechanical system (MEMS) display.
- the display apparatus 300 may display the gastroscopic image obtained from the endoscope apparatus 200 and gastric lesion determination information determined from the server 100 to the user.
- the display apparatus 300 may include a touch screen and may receive, for example, touch, gesture, proximity, or hovering input made with an electronic pen or a part of the user's body.
- the display apparatus 300 may output an image of a gastric lesion obtained by the endoscope apparatus 200 . Also, the display apparatus 300 may output a result of determining the gastric lesion.
- the endoscope apparatus 200 may include a manipulation unit, a body unit, a processor, a lesion position obtaining unit, and a display unit.
- the endoscope apparatus 200 may itself include an artificial intelligence model and perform lesion determination. Specifically, to prevent the real-time gastric endoscopy from being disturbed by the latency that arises while the AI model determines a lesion, the detection algorithm, whose delay could interfere with the examiner, may run on the endoscope apparatus 200, while the remaining algorithms related to preprocessing and reading may be performed by the server 100.
- the manipulation unit may be provided at the rear end of the body unit to be manipulated based on user input information.
- the manipulation unit is a part gripped by the endoscopic operator, and may manipulate the body part inserted into the body of the subject.
- the manipulation unit may operate the plurality of unit devices accommodated in the body portion and required during the endoscopic procedure.
- the manipulation unit may include a rotation processor.
- the rotation processor may include a part responsible for a function of generating a control signal and a function of providing a rotational force (eg, a motor).
- the operation unit may include a button for operating the photographing unit (not shown).
- the button is a button for controlling the position of the photographing unit (not shown), and may be for the user to change the position of the body part, such as up, down, left and right, forward, backward, and the like.
- the body portion is a portion inserted into the body of the examinee, and may accommodate a plurality of unit devices.
- the plurality of unit devices include a photographing unit (not shown) for photographing the body of the subject, an air supply unit for supplying air into the body, a water supply unit for supplying water into the body, a lighting unit for irradiating light into the body, and tissue in the body. It may include at least one of a biopsy unit for collecting or treating a portion of the body and a suction unit for sucking air or foreign substances from the body.
- the biopsy unit may include various medical instruments, such as a scalpel and a needle, for collecting a part of tissue from a living body; a biopsy instrument such as a scalpel or needle may be inserted into the body through the biopsy channel by the endoscopic operator, so that cells in the body can be harvested.
- the photographing unit may accommodate a camera having a size corresponding to the diameter of the body.
- a photographing unit may be provided at the front end of the body unit to photograph gastric lesion images, and may provide the photographed gastric lesion images to the lesion determining unit and the display unit through a network.
- the processor may generate a control signal for controlling the operation of the body unit based on the user's input information provided from the manipulation unit and the determination result of the server 100 .
- the processor may generate a control signal for controlling the operation of the body unit to correspond to the corresponding button.
- the processor may generate a motion control signal so that the body unit moves forward through the body of the subject (patient) at a constant speed. The body unit may advance through the subject's body based on the control signal of the processor.
- the processor may generate a control signal for controlling the operation of the photographing unit (not shown).
- the control signal for controlling the operation of the imaging unit (not shown) may be a signal for the imaging unit (not shown) located in the lesion area to capture the gastric lesion image.
- the user may click the capture acquisition button.
- the processor may generate a control signal so that a photographing unit (not shown) may acquire an image from the corresponding lesion region based on input information provided from the manipulation unit.
- the processor may generate a control signal for obtaining a specific gastric lesion image from an image being photographed by a photographing unit (not shown).
- the processor may generate a control signal for controlling the operation of the biopsy unit to collect a portion of the subject's tissue based on the determination result of the server 100.
- the processor may generate a control signal to control the operation of the biopsy unit so that resection is performed.
- the processor may generate a control signal for controlling the operation of the biopsy unit based on a user input signal provided from the manipulation unit. The operations of collecting, excising, and removing cells in the body may be performed by the user via the manipulation unit.
- the lesion location acquisition unit may generate gastric lesion information by linking the gastric lesion image provided from the photographing unit (not shown) with the location information.
- the location information may indicate where the body unit is currently located within the body. In other words, when the body unit is located at a first point in the stomach of the subject (patient) and a gastric lesion image is obtained at that point, the lesion location acquisition unit generates gastric lesion information by linking the gastric lesion image with the location information.
- the lesion location acquisition unit may provide the user (doctor) with gastric lesion information generated by linking the acquired gastric lesion image and location information.
- the processor may generate a control signal for controlling the position of the biopsy unit when the biopsy unit is not located at the lesion position by using the position information provided by the lesion position obtaining unit.
- a faster biopsy may be performed.
- rapid treatment may be possible by immediately removing cells determined to be cancerous during the endoscopy determination process.
- in step S110, the endoscope apparatus 200 may acquire an upper endoscopy image.
- in step S120, the endoscope apparatus 200 may transmit the acquired gastroscopic image to the server 100.
- in step S130, the server 100 may input the upper endoscopy image into the first artificial intelligence model to determine a lesion included in the upper endoscopy image.
- the server 100 may store the video from a preset time before the point at which an image reading command is input, and may acquire a plurality of images from that stored video.
- the server 100 may receive an image reading command from the operator.
- the image reading command may be a voice command.
- the endoscope apparatus 200 may recognize the command as an image reading command and transmit a control signal for reading the image to the server 100 .
- the image reading command may be an operation of pushing a preset button included in the endoscope apparatus 200 or an operation of touching a specific UI of the display apparatus 300 .
- the server 100 may acquire a plurality of image frames from the video covering a preset time before the point at which the image reading command was input, and perform lesion determination analysis on those frames.
- the server 100 may input a plurality of images into the third artificial intelligence model, and determine whether or not a lesion is included in the plurality of images and the type of the lesion.
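The buffering described in the preceding bullets can be sketched as a time-based ring buffer; `PRESET_SECONDS` and the model callable are assumptions for illustration, since the disclosure only speaks of "a preset time":

```python
import time
from collections import deque

PRESET_SECONDS = 5.0  # assumed value; the disclosure only says "a preset time"

buffer = deque()  # (timestamp, frame) pairs for the recent video

def on_new_frame(frame, now=None):
    now = time.time() if now is None else now
    buffer.append((now, frame))
    # Drop frames older than the preset window.
    while buffer and now - buffer[0][0] > PRESET_SECONDS:
        buffer.popleft()

def on_read_command(precise_model, now=None):
    now = time.time() if now is None else now
    frames = [f for t, f in buffer if now - t <= PRESET_SECONDS]
    return [precise_model(f) for f in frames]  # e.g. the third AI model

# Usage with a dummy "model":
for i in range(10):
    on_new_frame(f"frame-{i}", now=float(i))
print(on_read_command(lambda f: {"frame": f, "lesion": None}, now=9.0))
```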
- the first artificial intelligence model and the second artificial intelligence model may be lightweight models for outputting quick results, whereas the third artificial intelligence model may be a model for deriving accurate results. That is, the first and second models, which derive fast results, can assist the gastroscopy in real time, and when the doctor issues an image reading command, the third model, which takes longer but derives precise results, can be used for the analysis instead.
- the server 100 may determine whether a lesion is included in the real-time image.
- the server 100 may display a UI for guiding the location of the lesion in the real-time image.
- in step S140, when a lesion is determined in the upper endoscopy image, the server 100 may acquire an image including the lesion and transmit it to the database of the server 100.
- the image including the lesion stored in the database may be checked in the second UI displayed through the second icon, as will be described later.
- in step S150, the server 100 may input the image into the second artificial intelligence model to determine the type of lesion included in the image.
- the second artificial intelligence model may employ a detection algorithm that combines YOLO_v3 and EfficientNet.
- the second artificial intelligence model can recognize the lesion, remove the patient text and light reflections from the image including the lesion, read the lesion type, and display the result on the display device 300 in real time during the examination.
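As a hedged sketch of such a two-stage pipeline, a detector proposes lesion boxes and a classifier labels each cropped region; `yolo_detect` and `efficientnet_classify` are placeholder functions standing in for YOLO_v3 and EfficientNet, whose real APIs are not part of this disclosure:

```python
# Hypothetical wrappers; the disclosure names YOLO_v3 and EfficientNet
# but does not define their interfaces.
def yolo_detect(image):
    """Return candidate lesion boxes as (x, y, w, h) tuples."""
    return [(40, 60, 32, 32)]  # placeholder box

def efficientnet_classify(patch):
    """Return a lesion class for one cropped region."""
    return "LGD"  # placeholder class

def read_lesions(image):
    results = []
    for (x, y, w, h) in yolo_detect(image):
        patch = [row[x:x + w] for row in image[y:y + h]]  # crop the box
        results.append(((x, y, w, h), efficientnet_classify(patch)))
    return results

image = [[0] * 128 for _ in range(128)]  # stand-in for an endoscopy frame
print(read_lesions(image))
```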
- the server 100 may determine whether bleeding has occurred due to the biopsy.
- the server 100 may include a fourth artificial intelligence model for detecting bleeding.
- the server 100 may input the video image into the fourth artificial intelligence model before inputting it into the first artificial intelligence model and the second artificial intelligence model.
- the server 100 may not perform lesion determination on a location where bleeding occurs.
- when the fourth artificial intelligence model determines that the input video image shows bleeding, the server 100 does not input that video image into the first and second artificial intelligence models; only when the fourth artificial intelligence model determines that no bleeding has occurred in the input video image does the server 100 input it into the first and second artificial intelligence models to perform lesion determination.
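The gating logic can be summarized in a few lines; the model callables and the `red_ratio` feature are illustrative assumptions, not elements defined in this disclosure:

```python
def bleeding_gate(frame, bleeding_model, lesion_models):
    """Run lesion determination only when the bleeding model sees no bleeding."""
    if bleeding_model(frame):          # fourth AI model: True = bleeding present
        return None                    # skip models 1 and 2 entirely
    return [m(frame) for m in lesion_models]  # first and second AI models

# Dummy models for illustration:
is_bleeding = lambda f: f.get("red_ratio", 0) > 0.6
detect = lambda f: {"lesion": f.get("red_ratio", 0) > 0.2}
classify = lambda f: {"type": "dysplasia"}

print(bleeding_gate({"red_ratio": 0.7}, is_bleeding, [detect, classify]))  # None
print(bleeding_gate({"red_ratio": 0.3}, is_bleeding, [detect, classify]))
```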
- in step S160, when a lesion is determined in the upper endoscopy image, the server 100 may display a UI for guiding the location of the lesion within the upper endoscopy image.
- a UI for guiding the location of the lesion may be displayed by the display apparatus 300 .
- a first icon 410 for inputting patient information, a second icon 420 for confirming an image including the determined lesion, a third icon 430 for confirming an examination result image, a fourth icon 440 for changing a setting value, and a fifth icon 450 for returning to the real-time image may be displayed.
- the first icon 410 is an icon for inputting patient information. As shown in FIG. 4, when a user command for the first icon 410 is input, the display device 300 may display a first UI for inputting the patient name, patient chart number, gender, and birth year.
- patient information can be modified while the endoscopy is in progress, so the gastroscopy can proceed without interruption. Furthermore, while the first UI is displayed on the display apparatus 300, the first icon may be highlighted in a color different from the other icons, and further user commands through the first icon may not be accepted.
- the year of birth may be input only as a number, such as "1988" or "88", and only a year before the current time may be input. Furthermore, the year of birth may be input only within a preset range based on the current time point. For example, if the current year is 2020 and the preset range is 200 years, a year before 1820 cannot be input.
- when a birth year outside this range is input, the display apparatus 300 may output a warning message such as "Please check your birth year".
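A sketch of this validation, assuming the 200-year window from the example above and a conventional two-digit-year expansion (the expansion rule is an assumption, not stated in the disclosure):

```python
from datetime import date

MAX_YEARS_BACK = 200  # assumed preset range; 200 is the example given above

def validate_birth_year(text, today=None):
    today = today or date.today()
    if not text.isdigit():
        return "Please check your birth year"      # non-numeric input rejected
    year = int(text)
    if len(text) == 2:                             # "88" -> 1988-style expansion
        year += (today.year // 100 - (0 if year <= today.year % 100 else 1)) * 100
    if year > today.year or year < today.year - MAX_YEARS_BACK:
        return "Please check your birth year"      # future or out-of-range year
    return year

print(validate_birth_year("1988", today=date(2020, 1, 1)))  # 1988
print(validate_birth_year("88", today=date(2020, 1, 1)))    # 1988
print(validate_birth_year("1700", today=date(2020, 1, 1)))  # warning message
```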
- the server 100 may perform indexing on the performed endoscopy.
- a case in which patient information is insufficiently input may mean a case in which the examination cannot be distinguished from other endoscopy results.
- the server 100 may perform indexing to distinguish the same patient data.
- the server 100 may generate an arbitrary chart number and add it to the patient information. An arbitrary chart number generated by the server 100 may have a format different from normally recorded chart numbers, or may be displayed in a different color.
- the display apparatus 300 may display a fifth UI displaying a real-time image.
- the second icon is an icon for checking an image stored in the database of the server 100 .
- the display apparatus 300 may display a second UI for guiding the image list including the lesion.
- the second icon may display the number of images in the list together with it, and when the image list exceeds a preset number (e.g., 99), only the preset number (e.g., 99) may be displayed together with the second icon.
- the third icon is an icon for displaying the determination result for each lesion.
- the display apparatus 300 may display a third UI for guiding a list indicating a determination result for each lesion, as shown in FIG. 6 .
- the fourth icon is an icon for changing a setting value.
- when a user command is input through the fourth icon, a fourth UI for changing a setting value may be displayed.
- the fifth icon is an icon for returning to the real-time image.
- the display apparatus 300 may display a fifth UI for displaying a real-time image.
- while the fifth UI is displayed, when a first user command for one of the first icon, the second icon, the third icon, and the fourth icon is input, the display device 300 may display the UI corresponding to the first user command on a first layer.
- meanwhile, the server 100 and the display device 300 may continue to analyze and display the image screen in real time.
- the first UI to the fourth UI may be implemented in a layer different from the fifth UI.
- the display device 300 may display a sixth icon 510, a seventh icon 520, an eighth icon 530, a ninth icon 540, and a tenth icon 550.
- the sixth icon 510 may be generated when the server 100 is analyzing a lesion.
- the operator can thereby recognize that the server 100 has found a suspicious part and is reading it.
- the seventh icon 520 is an icon for displaying the location and state of the lesion when the server 100 acquires a lesion from the image.
- the seventh icon may be generated to correspond to the size of the lesion, and further, may be displayed in a different color according to the state of the lesion. For example, normal may be displayed in green, LGD in yellow, HGD in dark yellow, EGC in orange, and AGC in red.
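A minimal mapping from lesion class to the seventh icon's style, with illustrative RGB values (the disclosure names only the colors, not their exact values):

```python
# Color scheme as described above; the RGB triples are illustrative assumptions.
LESION_COLORS = {
    "normal": (0, 200, 0),      # green
    "LGD":    (255, 255, 0),    # yellow (low-grade dysplasia)
    "HGD":    (204, 172, 0),    # dark yellow (high-grade dysplasia)
    "EGC":    (255, 165, 0),    # orange (early gastric cancer)
    "AGC":    (255, 0, 0),      # red (advanced gastric cancer)
}

def icon_style(lesion_class, box):
    """Seventh-icon style: sized to the lesion box, colored by its class."""
    x, y, w, h = box
    return {"rect": (x, y, w, h),
            "color": LESION_COLORS.get(lesion_class, (128, 128, 128))}

print(icon_style("EGC", (40, 60, 32, 32)))
```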
- the eighth icon 530 may be related to displaying the number of image lists, as described above.
- the ninth icon 540 is configured to display patient information obtained through the first UI.
- the patient information may include name, gender, age, and chart information, and when patient information is not input, it may be displayed as "-".
- the tenth icon 550 is configured to display the number of lesions determined during endoscopy. When the number of images including the suspicious lesion is added, the number of numbers displayed on the tenth icon 550 may be changed.
- the fifth icon 450 may not be displayed.
- the server 100 may determine whether the determined lesion is a lesion requiring real-time treatment.
- the server 100 may determine whether the determined lesion is a treatable lesion.
- when the determined lesion is a lesion requiring real-time treatment, the server 100 may calculate the difference between the time at which the upper endoscopy image was received from the endoscope apparatus 200 and the time at which the lesion included in the upper endoscopy image was determined.
- when the difference value is equal to or greater than a preset value, the display apparatus 300 may display information on the lesion requiring treatment and the difference value on the fifth UI.
- in other words, by calculating the difference between the time at which the upper endoscopy image was received and the time at which the lesion included in the image was determined, and notifying the operator, the server 100 makes it possible to proceed with treatment of the lesion in real time during the gastroscopy.
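A sketch of this latency notice, assuming a one-second threshold for the "preset value" (the disclosure does not give a concrete number):

```python
THRESHOLD_SECONDS = 1.0  # assumed preset value

def latency_notice(received_at, determined_at, lesion_info):
    """Show lesion info plus the delay on the fifth UI when the delay is large."""
    delay = determined_at - received_at
    if delay >= THRESHOLD_SECONDS:
        return {"ui": "fifth", "lesion": lesion_info, "delay_s": round(delay, 3)}
    return None  # small delay: no extra notice needed

print(latency_notice(10.00, 11.25, {"type": "EGC", "box": (40, 60, 32, 32)}))
```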
- the server 100 may determine whether the image is an upper endoscopic image.
- when the image is determined not to be an upper endoscopy image, the server 100 may display a UI for inputting new patient information.
- in that case, the server 100 may determine that the previous gastroscopy is complete and that a gastroscopy for a new patient has started, and may store the new images separately from the previously captured gastroscopic images.
- the server 100 may classify the upper endoscopy image according to the patient even when patient information is not input.
- the server 100 may form one class for non-gastric parts by extracting, from the gastroendoscopic images, images in which the esophagus appears and images of the tongue in the patient's mouth. Furthermore, the server 100 may form another class from images of the inside of the stomach. The server 100 may then train a two-step classification artificial intelligence algorithm on the two classes.
- even when the server 100 receives a request to read an image, if the above-described two-step classification artificial intelligence algorithm determines that the requested image shows a part other than the stomach, the server 100 may not perform the lesion determination process.
- the server 100 may obtain data corresponding to the average contrast of the endoscopy room in which the endoscopy image is captured, and determine, based on the obtained data, whether the endoscope apparatus 200 is located outside the human body. Specifically, the server 100 may determine whether the photographed image was taken outside the human body based on the illuminance, chroma, and contrast information of the image acquired by the endoscope apparatus 200: the server 100 obtains the average contrast value of the endoscopy room environment and compares it with the contrast value of the image obtained through the endoscope apparatus 200. For example, when the image acquired through the endoscope apparatus 200 falls within a specific brightness range, the server 100 may determine it to be an image outside the human body and turn off the artificial intelligence algorithm system.
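A simplified version of this out-of-body check, assuming a precomputed room-brightness calibration value and tolerance band (both values are assumptions for illustration):

```python
ROOM_MEAN_BRIGHTNESS = 180.0   # assumed calibration value for the endoscopy room
TOLERANCE = 25.0               # assumed band around the room average

def outside_body(frame):
    """Compare the frame's mean brightness with the room average.

    Inside the body the scene is lit only by the scope's lamp, so a frame
    whose brightness matches the ambient room suggests the scope is outside.
    """
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return abs(mean - ROOM_MEAN_BRIGHTNESS) <= TOLERANCE

frame = [[178, 183], [181, 176]]       # stand-in 2x2 grayscale frame
if outside_body(frame):
    print("scope outside the body: AI reading switched off")
```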
- when the endoscopy of one patient ends and the endoscopy of another patient begins, the server 100 may automatically distinguish and recognize the patients, and the reading findings can be saved per patient to the storage device.
- the server 100 may automatically recognize and distinguish, in the endoscopic image, the esophagus, the mouth, and the parts outside the human body adjoining the stomach, and skip artificial intelligence reading for those parts, so that the resources of the server 100 are focused only on the detection and judgment of gastric lesions.
- the server 100 may divide the endoscopic image into a plurality of frames.
- when lesions are determined in a preset number or more of consecutive frames, the server 100 may determine the lesions corresponding to those consecutive frames to be the same lesion.
- the server 100 may determine the lesion as the same lesion when the lesion is read for the same region in a plurality of image frames included in the upper endoscopy image.
- the server 100 may take, as the determined lesion, the lesion class (e.g., LGD) detected with the highest frequency across the image frames corresponding to the same lesion.
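The same-lesion merging and highest-frequency labeling can be sketched as a run-length grouping with a majority vote; the minimum run length of 3 is an assumed "preset number":

```python
from collections import Counter

MIN_CONSECUTIVE = 3  # assumed "preset number" of consecutive frames

def merge_same_lesion(frame_classes):
    """frame_classes: per-frame lesion class or None, in frame order.

    Runs of >= MIN_CONSECUTIVE consecutive detections are treated as one
    lesion, labeled with the most frequent class in the run.
    """
    lesions, run = [], []
    for cls in frame_classes + [None]:       # sentinel flushes the last run
        if cls is not None:
            run.append(cls)
        else:
            if len(run) >= MIN_CONSECUTIVE:
                lesions.append(Counter(run).most_common(1)[0][0])
            run = []
    return lesions

print(merge_same_lesion(["LGD", "LGD", "HGD", "LGD", None, "EGC"]))  # ['LGD']
```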
- the server 100 may determine whether bleeding due to a biopsy has occurred during the endoscopy. When bleeding occurs, the server 100 may not determine it as a lesion. Specifically, the server 100 may learn a plurality of images in which bleeding has occurred as learning data to distinguish them from images of the stomach lesion.
- the server 100 may detect and remove noise of various kinds, other than lesions, seen during the upper endoscopy. For example, when patient text is detected on the upper endoscopy screen, the server 100 may remove the detected patient text. As another example, when light reflection on the screen, caused when air ejected from the endoscope device meets the liquid in the stomach, is detected, the server 100 may correct the light reflection so that it is not misrecognized. As another example, the server 100 may store in advance data on biopsy instruments configured separately from the gastroscopic apparatus 200, and when a biopsy instrument is photographed, the corresponding image may be excluded from lesion determination. As another example, the server 100 may perform color correction on the image captured by the endoscope apparatus 200.
- the server 100 may use a segmentation artificial intelligence algorithm to recognize and remove all patient text so that only the image itself is read; light reflection from the endoscope apparatus 200 may be classified by converting the image into average-contrast data, and if the value is above or below a certain intensity threshold, the corresponding image is not read even if reading is requested. Furthermore, when an external instrument such as a biopsy instrument is photographed, the external instrument is recognized by a detection algorithm modified from YOLO_v3, and the corresponding image can be controlled so that it is not read even if reading is requested.
- in order to prevent a detected lesion from being missed while the operator is not looking at the AI device screen during the upper endoscopy, the server 100 may re-display a lesion it has already discovered, and the seventh icon 520 may be displayed on the display device 300 to call the operator's attention.
- the server 100 may simultaneously generate a computer notification sound and display an icon including a color specified for each lesion on the UI of the display apparatus 300 .
- the server 100 may maintain the lesion position of the previous frame even if no lesion is detected in the current frame.
- the location of the lesion may be displayed in real time in a rectangular shape like the seventh icon 520 . Meanwhile, if no lesion is detected, the seventh icon 520 may be removed.
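One way to reconcile holding the previous position with eventually removing the icon is a short grace period; `MAX_MISSES` is an assumed parameter, not a value given in the disclosure:

```python
MAX_MISSES = 5  # assumed grace period before the icon is removed

_state = {"box": None, "misses": 0}

def seventh_icon_box(detected_box):
    """Hold the previous box over short detection gaps, then remove the icon."""
    if detected_box is not None:
        _state.update(box=detected_box, misses=0)
    else:
        _state["misses"] += 1
        if _state["misses"] > MAX_MISSES:
            _state["box"] = None          # sustained absence: remove the icon
    return _state["box"]

print(seventh_icon_box((40, 60, 32, 32)))   # detection: box shown
print(seventh_icon_box(None))               # brief gap: previous box is held
```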
- for four-class classification into advanced gastric cancer, early gastric cancer, dysplasia, and the normal category, the Max Overall Accuracy was 89.67%, with a sensitivity of 98.62% and a specificity of 85.28%. For five-class classification into advanced gastric cancer, early gastric cancer, high-grade dysplasia, low-grade dysplasia (low-grade adenoma), and the normal category, the Max Overall Accuracy was 77%, with a sensitivity of 85.56% and a specificity of 94.17%.
- for binary classification into mucosa-confined lesions and submucosa-invaded lesions, the Max Overall Accuracy was 89.9%, with a sensitivity of 93.13% and a specificity of 89.08%.
- FIG. 8 is a block diagram of an apparatus according to an embodiment.
- the processor 102 may include one or more cores (not shown) and a graphic processing unit (not shown), and/or a connection path (e.g., a bus) for transmitting and receiving signals to and from other components.
- the processor 102 performs the method described with respect to FIG. 7 by executing one or more instructions stored in the memory 104 .
- the processor 102 may, by executing one or more instructions stored in the memory, obtain new training data, run the trained model on the new training data as a test, extract the first training data for which the labeled information was reproduced with an accuracy equal to or greater than a predetermined first reference value, delete the extracted first training data from the new training data, and retrain the trained model using the new training data from which the extracted data has been deleted.
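A sketch of this retraining-data filter; the 0.95 threshold and the `(label, confidence)` model output format are assumptions for illustration:

```python
FIRST_REFERENCE = 0.95  # assumed accuracy threshold ("first reference value")

def retraining_subset(model, new_data):
    """Drop samples the trained model already labels confidently and correctly,
    keeping only the remainder for retraining, as described above."""
    keep = []
    for features, label in new_data:
        predicted_label, confidence = model(features)
        if predicted_label == label and confidence >= FIRST_REFERENCE:
            continue                      # "first training data": remove it
        keep.append((features, label))
    return keep

# Dummy model: predicts class 1 with high confidence for sums > 10.
model = lambda x: (1, 0.99) if sum(x) > 10 else (0, 0.60)
data = [([5, 9], 1), ([1, 2], 1), ([2, 3], 0)]
print(retraining_subset(model, data))  # first sample dropped, rest kept
```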
- the processor 102 may further include RAM (Random Access Memory, not shown) and ROM (Read-Only Memory, not shown) for temporarily and/or permanently storing signals (or data) processed inside the processor 102.
- the processor 102 may be implemented in the form of a system on chip (SoC) including at least one of a graphic processing unit, a RAM, and a ROM.
- the memory 104 may store programs (one or more instructions) for processing and controlling the processor 102 .
- Programs stored in the memory 104 may be divided into a plurality of modules according to functions.
- a software module may reside in random access memory (RAM), read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other type of computer-readable recording medium well known in the art to which the present invention pertains.
- the components of the present invention may be implemented as a program (or application) to be executed in combination with a computer, which is hardware, and stored in a medium.
- components of the present invention may be implemented as software programming or software components; similarly, embodiments may include various algorithms implemented as data structures, processes, routines, or combinations of other programming constructs, and may be implemented in a programming or scripting language such as C, C++, Java, or assembler. Functional aspects may be implemented as algorithms running on one or more processors.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Medical Informatics (AREA)
- Physics & Mathematics (AREA)
- Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Biomedical Technology (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Theoretical Computer Science (AREA)
- Pathology (AREA)
- General Physics & Mathematics (AREA)
- Heart & Thoracic Surgery (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Optics & Photonics (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Primary Health Care (AREA)
- Epidemiology (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Artificial Intelligence (AREA)
- Computer Networks & Wireless Communication (AREA)
- Computational Linguistics (AREA)
- Quality & Reliability (AREA)
- Gastroenterology & Hepatology (AREA)
- Endoscopes (AREA)
Abstract
Description
Claims (10)
- A method of controlling a lesion determination system operating on images acquired in real time, the method comprising: acquiring, by an endoscope apparatus, an upper endoscopy image; transmitting, by the endoscope apparatus, the acquired upper endoscopy image to a server; inputting, by the server, the upper endoscopy image into a first artificial intelligence model to determine a lesion included in the upper endoscopy image; when a lesion is determined in the upper endoscopy image, acquiring, by the server, an image including the lesion and transmitting it to a database of the server; inputting, by the server, the image into a second artificial intelligence model to determine the type of the lesion included in the image; and, when a lesion is determined in the upper endoscopy image, displaying, by a display apparatus, a UI for guiding the location of the lesion within the upper endoscopy image.
- The method of claim 1, wherein determining the lesion comprises: when an image reading command is received from an operator while a real-time video is being captured by the endoscope apparatus, acquiring, by the server, a plurality of images of the video from a preset time before the point at which the image reading command was input; inputting, by the server, the plurality of images into the second artificial intelligence model to determine whether a lesion is included in the plurality of images and the type of the lesion; when it is determined that a lesion is included in the plurality of images, determining, by the server, whether the lesion is included in the real-time video; and, when the lesion is included, displaying, by the display apparatus, a UI for guiding the location of the lesion within the real-time video.
- The method of claim 1, comprising: when the endoscopy image is input into the first artificial intelligence model, determining, by the server, whether the image is an upper endoscopy image; and, when the image is not an upper endoscopy image, displaying, by the display apparatus, a UI for inputting new patient information.
- The method of claim 3, wherein determining whether the image is an upper endoscopy image comprises: acquiring, by the server, data corresponding to the average contrast of the endoscopy room in which the endoscopy image is captured; and determining, by the server, based on the data, whether the endoscope apparatus is located outside the human body.
- The method of claim 1, wherein determining the type of the lesion included in the image comprises: determining, by the server, whether bleeding has occurred due to a biopsy; and, when it is determined that bleeding has occurred, not performing lesion determination for the location where the bleeding occurred.
- The method of claim 1, further comprising: dividing, by the server, the endoscopy image into a plurality of frames; and, when lesions are determined in a preset number or more of consecutive frames among the plurality of frames, determining, by the server, the lesions corresponding to the consecutive frames to be the same lesion.
- The method of claim 1, wherein displaying the UI comprises: displaying a first icon for inputting patient information, a second icon for confirming an image including the determined lesion, a third icon for confirming an examination result image, a fourth icon for changing a setting value, and a fifth icon for returning to the real-time video; when a user command for the first icon is input, displaying a first UI for inputting a patient name, patient chart number, gender, and birth year; when a user command for the second icon is input, displaying a second UI for guiding an image list including lesions; when a user command for the third icon is input, displaying a third UI for guiding a list indicating the determination result for each lesion; when a user command is input through the fourth icon, displaying a fourth UI for changing a setting value; when a user command is input through the fifth icon, displaying a fifth UI for displaying a real-time video; and, while the fifth UI is displayed, when a first user command for one of the first icon, the second icon, the third icon, and the fourth icon is input, displaying a UI corresponding to the first user command on a first layer.
- The method of claim 7, wherein determining the lesion included in the upper endoscopy image comprises: determining whether the determined lesion is a lesion requiring real-time treatment; when the determined lesion is a lesion requiring real-time treatment, calculating the difference between the time at which the upper endoscopy image was received from the endoscope apparatus and the time at which the lesion included in the upper endoscopy image was determined; and, when the difference is equal to or greater than a preset value, displaying, on the fifth UI, information on the lesion requiring treatment together with the difference.
- An apparatus comprising: a memory storing one or more instructions; and a processor that executes the one or more instructions stored in the memory, wherein the processor performs the method of claim 1 by executing the one or more instructions.
- A computer program stored in a computer-readable recording medium, the program being combined with a computer, which is hardware, to perform the method of claim 1.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202280009042.4A CN116745861A (zh) | 2021-01-11 | 2022-01-05 | 通过实时影像获得的病变判断系统的控制方法、装置及程序 |
US18/260,245 US11935239B2 (en) | 2021-01-11 | 2022-01-05 | Control method, apparatus and program for system for determining lesion obtained via real-time image |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020210003315A KR102505791B1 (ko) | 2021-01-11 | 2021-01-11 | 실시간 영상을 통해 획득되는 병변 판단 시스템의 제어 방법, 장치 및 프로그램 |
KR10-2021-0003315 | 2021-01-11 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022149836A1 true WO2022149836A1 (ko) | 2022-07-14 |
Family
ID=82358186
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2022/000109 WO2022149836A1 (ko) | 2021-01-11 | 2022-01-05 | 실시간 영상을 통해 획득되는 병변 판단 시스템의 제어 방법, 장치 및 프로그램 |
Country Status (4)
Country | Link |
---|---|
US (1) | US11935239B2 (ko) |
KR (1) | KR102505791B1 (ko) |
CN (1) | CN116745861A (ko) |
WO (1) | WO2022149836A1 (ko) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102623871B1 (ko) * | 2023-07-27 | 2024-01-12 | 주식회사 인트로메딕 | 의료영상의 처리시스템 및 처리 방법 |
CN117788964B (zh) * | 2024-02-28 | 2024-05-07 | 苏州凌影云诺医疗科技有限公司 | 一种针对病变识别的跳变控制方法和系统 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20150085943A (ko) * | 2014-01-17 | 2015-07-27 | 주식회사 인피니트헬스케어 | 의료 영상 판독 과정에서 구조화된 관심 영역 정보 생성 방법 및 그 장치 |
KR20190103937A (ko) * | 2018-02-28 | 2019-09-05 | 이화여자대학교 산학협력단 | 뉴럴 네트워크를 이용하여 캡슐 내시경 영상으로부터 병변 판독 방법 및 장치 |
JP2020073081A (ja) * | 2017-10-30 | 2020-05-14 | 公益財団法人がん研究会 | 画像診断支援装置、学習済みモデル、画像診断支援方法および画像診断支援プログラム |
JP2020089710A (ja) * | 2018-12-04 | 2020-06-11 | Hoya株式会社 | 情報処理装置、内視鏡用プロセッサ、情報処理方法およびプログラム |
KR102185886B1 (ko) * | 2020-05-22 | 2020-12-02 | 주식회사 웨이센 | 인공지능을 이용한 대장 내시경 영상 분석 방법 및 장치 |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017073338A1 (ja) * | 2015-10-26 | 2017-05-04 | オリンパス株式会社 | 内視鏡画像処理装置 |
JP6890184B2 (ja) * | 2017-09-15 | 2021-06-18 | 富士フイルム株式会社 | 医療画像処理装置及び医療画像処理プログラム |
CN107658028A (zh) * | 2017-10-25 | 2018-02-02 | 北京华信佳音医疗科技发展有限责任公司 | 一种获取病变数据的方法、识别病变方法及计算机设备 |
US11100633B2 (en) * | 2018-06-13 | 2021-08-24 | Cosmo Artificial Intelligence—Al Limited | Systems and methods for processing real-time video from a medical image device and detecting objects in the video |
JP7017198B2 (ja) * | 2018-06-22 | 2022-02-08 | 株式会社Aiメディカルサービス | 消化器官の内視鏡画像による疾患の診断支援方法、診断支援システム、診断支援プログラム及びこの診断支援プログラムを記憶したコンピュータ読み取り可能な記録媒体 |
CN108695001A (zh) | 2018-07-16 | 2018-10-23 | 武汉大学人民医院(湖北省人民医院) | 一种基于深度学习的癌症病灶范围预测辅助系统及方法 |
KR102058884B1 (ko) * | 2019-04-11 | 2019-12-24 | 주식회사 홍복 | 치매를 진단을 하기 위해 홍채 영상을 인공지능으로 분석하는 방법 |
US11191423B1 (en) * | 2020-07-16 | 2021-12-07 | DOCBOT, Inc. | Endoscopic system and methods having real-time medical imaging |
WO2021205778A1 (ja) * | 2020-04-08 | 2021-10-14 | 富士フイルム株式会社 | 医療画像処理システム、認識処理用プロセッサ装置、及び医療画像処理システムの作動方法 |
2021
- 2021-01-11 KR KR1020210003315A patent/KR102505791B1/ko active IP Right Grant
2022
- 2022-01-05 US US18/260,245 patent/US11935239B2/en active Active
- 2022-01-05 CN CN202280009042.4A patent/CN116745861A/zh active Pending
- 2022-01-05 WO PCT/KR2022/000109 patent/WO2022149836A1/ko active Application Filing
Also Published As
Publication number | Publication date |
---|---|
KR102505791B1 (ko) | 2023-03-03 |
US11935239B2 (en) | 2024-03-19 |
KR20220102172A (ko) | 2022-07-20 |
US20240037733A1 (en) | 2024-02-01 |
CN116745861A (zh) | 2023-09-12 |
Legal Events
Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22736813; Country of ref document: EP; Kind code of ref document: A1
WWE | Wipo information: entry into national phase | Ref document number: 18260245; Country of ref document: US
WWE | Wipo information: entry into national phase | Ref document number: 202280009042.4; Country of ref document: CN
NENP | Non-entry into the national phase | Ref country code: DE
122 | Ep: pct application non-entry in european phase | Ref document number: 22736813; Country of ref document: EP; Kind code of ref document: A1