WO2023039493A1 - System and methods for aggregating features in video frames to improve accuracy of AI-based detection algorithms - Google Patents

System and methods for aggregating features in video frames to improve accuracy of AI-based detection algorithms

Info

Publication number
WO2023039493A1
Authority
WO
WIPO (PCT)
Prior art keywords
tissue abnormality
video
video frames
reconstructed image
tissue
Prior art date
Application number
PCT/US2022/076142
Other languages
English (en)
Inventor
Gabriele Zingaretti
James Requa
Original Assignee
Satisfai Health Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US17/473,775 (granted as US11423318B2)
Application filed by Satisfai Health Inc.
Publication of WO2023039493A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002 Operational features of endoscopes
    • A61B1/00004 Operational features of endoscopes characterised by electronic signal processing
    • A61B1/00009 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B1/000094 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope extracting biological structures
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002 Operational features of endoscopes
    • A61B1/00004 Operational features of endoscopes characterised by electronic signal processing
    • A61B1/00009 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B1/000096 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope using artificial intelligence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10068 Endoscopic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30028 Colon; Small intestine
    • G06T2207/30032 Colon polyp
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images

Definitions

  • This invention relates generally to the field of real-time imaging of a body cavity, with particular application to endoscopy such as colonoscopy and upper endoscopy.
  • Endoscopy refers to a medical procedure in which an instrument is used for visual examination of an internal body part.
  • a common example of endoscopy is colonoscopy, during which a flexible tube with imaging apparatus at the distal end is inserted into a person’s colon.
  • the purpose of colonoscopy is to search for and identify abnormalities in the internal wall of the colon and, in some cases, remove them. Such abnormalities include polyps and adenomas of several types.
  • Barrett's esophagus is a condition in which the lining of the esophagus changes, becoming more like the lining of the small intestine rather than the esophagus. This occurs in the area where the esophagus is joined to the stomach. Endoscopy is used in the esophagus as part of the clinical examination in cases of suspected Barrett’s esophagus.
  • Endoscopic procedures for other organs have similar characteristics, and the invention disclosed herein has applicability to other endoscopic procedures.
  • Screening colonoscopy remains the best proven method to prevent colon cancer. Clinical guidelines typically suggest that a first colonoscopy be performed at age 50. In screening colonoscopy, the colonoscopist performs a rigorous visual examination of the entire internal lining of the colon, looking for abnormalities such as polyps and adenomas. Polyps within certain parameters are often removed during the same procedure.
  • Endoscopy such as colonoscopy is typically performed by a fellowship-trained gastroenterologist. Colonoscopy also is performed by primary care physicians (PCP), general surgeons, nurse practitioners and physician assistants. In this disclosure, each person performing a colonoscopy is referred to as an endoscopist.
  • a well-accepted measure of quality of colonoscopy is the so-called “adenoma detection rate” (or ADR). This is a measure of the proportion of patients receiving a colonoscopy in whom an adenoma is detected.
  • ADR is a proven measure of risk of colorectal cancer between screenings (“interval colorectal cancer”), and ADR is inversely associated with the risk of interval cancer (Kaminski M. F. et al., “Quality Indicators for Colonoscopy and the Risk of Interval Cancer,” NEJM 2010; 362:1795-803).
  • Another factor that contributes to the lower than ideal ADR is the difficulty of ensuring that the entire internal surface of the colon has been imaged. It is difficult for a colonoscopist to remember what has been imaged, and “integrate” those images mentally to conclude that the entire internal surface has been looked at, and thus it is extremely challenging for the endoscopist to assure that the entire internal surface of the colon has been visualized. Failure to visualize the entire internal surface incurs a risk of missing potentially harmful polyps or cancers.
  • AI (artificial intelligence) systems process the video feed in real time, and thus operate prospectively during a procedure. Accordingly, the AI can analyze the data only as it is fed to its algorithms, i.e., process information on a per-frame basis.
  • The AI has no historical memory of the frames before the frame currently being analyzed, but instead processes each frame independently.
  • Some of the frames may contain only partial information, which limits the extraction capability of the AI algorithms.
  • The quality of the procedure is highly influenced by the dexterity of the endoscopist.
  • Temporal information means the area of interest may be visible only for a short period of time.
  • Spatial information means the entirety of the area of interest may not be visible and/or may be partially obstructed.
  • Such spatial and temporal information may be available at different times, e.g., portion P1 of an adenoma is visible only at time T1 and portion P2 of the adenoma is visible only at time T2.
  • Although the AI may be programmed to try to best characterize the information provided at T1 and T2, at neither time would it have the complete image of P1+P2. Accordingly, the AI may not be able to detect the entire abnormality in a single frame.
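  • The P1/P2 example can be made concrete with a small sketch. Everything below is illustrative (the lesion model, the 70% visibility threshold, and the frame contents are invented); it only shows why aggregating partial views across time can enable a detection that no single frame supports.

```python
# Hypothetical sketch: an adenoma is modeled as a set of surface points;
# each video frame reveals only a subset of those points.

adenoma = {(x, y) for x in range(10) for y in range(10)}  # full extent, 100 points

frame_t1 = {p for p in adenoma if p[0] < 5}   # portion P1 visible at time T1
frame_t2 = {p for p in adenoma if p[0] >= 5}  # portion P2 visible at time T2

def coverage(view, lesion):
    """Fraction of the lesion visible in a given view."""
    return len(view & lesion) / len(lesion)

# A detector requiring (say) 70% of a lesion in view fails on either frame alone...
DETECTION_THRESHOLD = 0.7
assert coverage(frame_t1, adenoma) < DETECTION_THRESHOLD
assert coverage(frame_t2, adenoma) < DETECTION_THRESHOLD

# ...but succeeds once the partial views are aggregated (P1 + P2).
aggregated = frame_t1 | frame_t2
assert coverage(aggregated, adenoma) >= DETECTION_THRESHOLD
```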
  • Image stacking has been used in many different disciplines to provide higher resolution and quality images from a single source.
  • One example of image stacking arises in microscopic photography where, to capture very small details of a subject, special lenses are used that provide macro-level imaging with a concomitantly narrow depth of field. In this case, to capture an entire subject of interest, multiple pictures are taken of several areas of the subject. Portions of the pictures that are out of focus then are removed, and the resulting subpictures are stitched together to ultimately compile the final macro picture.
  • Panoramic pictures are yet another example in which multiple images are stitched together.
  • While a wide-angle lens provides a wide field of view, e.g., suited for outdoor photography, such lenses also introduce a high degree of distortion at the periphery. It is common practice to use a lens with very minimal distortion, pan the camera along an axis and then stitch the images together to compile a large panoramic image.
  • Another example in which multiple slices are stitched together is 3D volume reconstruction, or volume rendering, to create a 3D volume.
  • One drawback of this approach is that the algorithm has no knowledge if it is stitching together images that belong to the same object or different objects. It is therefore up to the operator to make sure the stitching is done properly, with all the images belonging to the same object.
  • none of the foregoing methods operate in real time, but rather require post-processing of the information. Accordingly, none are suitable for real time applications, such as endoscopy.
  • U.S. Patent Application Publication No. US 2010/0194851 to Pasupaleti et al. describes a system and method of stitching together multiple images to create a panoramic image by registering the images by spatial relationship.
  • This application describes that the images are preferably taken on the same plane and stitched together by overlapping common portions of adjacent images. This application does not address the problems that arise when attempting to stitch together images taken on different focal planes that provide only partial information of an object.
  • U.S. Patent No. 9,224,193 to Tsujimoto et al. describes an image processing apparatus for stacking images on the Z axis. This method employs specialized hardware as well as image processing algorithms for computing depth of field, focus and blur detection. The patent does not address feature extraction or stacking images based on similarity of the extracted features.
  • In view of the foregoing drawbacks of previously known systems, it would be desirable to provide a method of recognizing that a portion of an area of interest in a current frame belongs to the same area of interest imaged at a previous time, such that the method sums all of the subareas and analyzes the subareas together.
  • Furthermore, as the endoscopist continues to examine the area of interest, the AI algorithm may analyze additional information to ultimately compile a full data picture for the tissue under examination, as opposed to an instantaneous partial picture.
  • The systems and methods of the present invention enable an AI system to recognize and group portions of an area of interest in multiple video frames generated by an endoscope, thereby enabling analysis of the subareas of the multiple video frames together. In this manner, as an endoscopist continues to examine an area of interest, the AI algorithm is able to analyze additional information to ultimately compile a full data picture for the tissue under examination.
  • The inventive systems and methods further provide an AI system for use with endoscopic modalities, such as colonoscopy or upper endoscopy, wherein the AI system is directed to combine multiple portions of an area of interest for analysis in real time. While this disclosure describes the present invention in the context of colonoscopy, as just one example of its application in the field of endoscopy, it should be appreciated by persons of skill in the art that the invention described herein has applicability to multiple other forms of endoscopy.
  • Systems and methods are provided for generating high quality images for submission to AI detection algorithms used in endoscopic medical procedures, to thereby yield better outcomes.
  • The inventive systems and methods are expected to provide essentially seamless performance, as if the AI detection algorithms were running in their canonical form.
  • the system provides multiple display windows, preferably at least two display windows.
  • The first display window displays real-time images of the procedure to the endoscopist as the examination is being performed, for example, as in conventional colonoscopy.
  • the first display window also displays information from an automatic detection system, for example, bounding boxes, overlaid on real-time images of polyps and other abnormalities detected in the video stream images from the endoscopy machine.
  • The second display window displays an evolving view of a stitched area of interest. As the AI module detects an area of interest shown in the first monitor display and the endoscopist explores that area, the second screen updates the information in real time by stitching together multiple images and features of the area of interest.
  • A visual indicator displays the updated information regarding detected tissue features or abnormalities. For example, as information is added to the stitched image, a red indicator may slowly transition to green (or any other color) as the accumulated information (or features) is adjudged by the AI module to become less likely to contain areas of concern.
  • The inventive software may guide the endoscopist where to next move the endoscope to collect additional information for processing by the AI module and to further visualize the area of interest.
  • Display of the first and the second display windows may be performed as a parallel or multi-threaded process.
  • Parallel processing advantageously allows the system to display the video data received from the endoscope in real-time, and also display the graphical indications in the second window at a frame rate that may be lower than or equal to the frame rate of the first window.
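  • A minimal sketch of this parallel arrangement, assuming a simple producer/consumer design (the queue, the half-rate subsampling, and all names are illustrative, not the patent's implementation):

```python
# Sketch: one path displays every frame in real time, while a worker thread
# consumes frames at a lower rate for the AI-overlay window.
import queue
import threading

live_frames, ai_frames = [], []
ai_queue = queue.Queue()
STOP = object()  # sentinel to shut the worker down

def ai_worker():
    """Second-window path: may run at a frame rate <= the live path."""
    while True:
        frame = ai_queue.get()
        if frame is STOP:
            break
        ai_frames.append(frame)  # stand-in for stitching/overlay rendering

worker = threading.Thread(target=ai_worker)
worker.start()

for frame_id in range(30):          # simulated 30-frame video feed
    live_frames.append(frame_id)    # first window: every frame, in real time
    if frame_id % 2 == 0:           # second window: subsampled, every other frame
        ai_queue.put(frame_id)

ai_queue.put(STOP)
worker.join()

assert len(live_frames) == 30
assert len(ai_frames) == 15  # the AI window ran at half the live frame rate
```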
  • the present invention provides visual clues that improve the quality and quantity of the information provided to the detection algorithms.
  • Systems constructed in accordance with the inventive principles also enable the detection algorithm to determine whether enough features have been extracted, based on the real time images available, to assess an area under examination, or whether more data is required, thereby greatly improving the efficacy of the detection algorithms.
  • FIG. 1 is a schematic depicting an exemplary configuration of a system incorporating the principles of the present invention.
  • FIG. 2 is an exemplary flowchart depicting data processing in the inventive system.
  • FIG. 3 is a schematic depicting how an endoscopist might see a display screen showing just frame-based AI module detection predictions.
  • FIG. 4 depicts how an AI module configured in accordance with the principles of the present invention combines information for a single lesion over multiple frames.
  • FIG. 5 is a schematic depicting a two display screen arrangement showing how an endoscopist might see the outcome on the monitor for a system employing the multiple frame AI module of the present invention.
  • The present invention is directed to systems and methods for analyzing multiple video frames imaged by an endoscope with an artificial intelligence (“AI”) software module running on a general purpose or purpose-built computer to aggregate information about a potential tissue feature or abnormality, and to indicate to the endoscopist the location and extent of that feature or abnormality on a display viewed by the endoscopist.
  • The AI module is programmed to make a preliminary prediction based on initially available information within a video frame, to aggregate additional information for a feature from additional frames, and preferably, to provide guidance to the endoscopist to direct him or her to move the imaging end of the endoscope to gather additional video frames that will enhance the AI module detection prediction.
  • Referring to FIG. 1, exemplary colonoscopy system 10 configured in accordance with the principles of the present invention is described.
  • Patient P may be lying on an examination table (not shown) for a colonoscopy procedure using conventional colonoscope 11 and associated colonoscope CPU 12, which receives the image signals from the camera on board colonoscope 11 and generates video output 13, which may be displayed on monitor 14 located so as to be visible to the endoscopist.
  • Video output 13 also is provided to computer 15, which is programmed with an AI module configured in accordance with the principles of the present invention as described below.
  • Computer 15, which may be a general purpose or purpose-built computer, includes one or more processors, volatile and non-volatile memory, and input and output ports, and is programmed to process video output 13 to generate AI augmented video output 16.
  • the details of a colonoscopy procedure, including patient preparation and examination, and manipulation of colonoscope are well known to those skilled in the art.
  • Colonoscope 11 acquires real-time video of the interior of the patient’s colon and large intestine from a camera disposed at the distal tip of the colonoscope once it is inserted in the patient.
  • Data from colonoscope 11, including real-time video, is processed by colonoscope CPU 12 to generate video output 13.
  • As shown in FIG. 1, video output 13 is displayed in a first window on monitor 14 as real-time video of the colonoscopy procedure.
  • Video output 13 also is provided to computer 15, which preferably generates an overlay on the video indicating areas of interest detected in the displayed image identified by the inventive AI module running on computer 15, e.g., a polyp, lesion or tissue abnormality.
  • Computer 15 also may display in a second window on monitor 14 information about the area of interest and the quality of the aggregated frames analyzed by the AI module to identify the area of interest.
  • The AI software module running on computer 15 may be of many types, but preferably includes artificial intelligence decision-making ability and machine learning capability.
  • Video data captured by a colonoscope of the interior of the colon and large intestine of patient P is processed by colonoscopy computer 21 (corresponding to components 11 and 12 of FIG. 1).
  • Each video frame from the live video feed is sent to computer 15 of FIG. 1, which performs steps 22-29 of FIG. 2.
  • Each video frame, labelled FQ, from colonoscopy machine 21 is acquired at step 22 and analyzed by the processor of computer 15 at step 23. If the AI module detects a lesion at step 24 (“Yes” branch from decision box 24), additional frames of the video stream are analyzed, at step 25, to determine if the lesion is the same lesion as identified in the previous video frame.
  • a new identifier (“ID”) is assigned to that new lesion at step 28 and additional frames are analyzed to extract data for that new lesion.
  • features for the lesion are extracted and aggregated by combining information from the previous frame with information from the new frame at step 26.
  • The AI module then reanalyzes the aggregated data for the lesion and updates its detection prediction analysis, at step 27. Specifically, at step 26, the software extracts features from the current video frame and compares that data with previously detected features for that same lesion.
  • The AI module may issue directions, via the second window, to reposition the colonoscope camera to obtain additional video frames for analysis at step 29. Further details of that process are described below with respect to FIG. 4.
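  • The FIG. 2 control flow (steps 22-29) can be sketched in a few lines of code. The detector, the same-lesion test, and the confidence formula below are hypothetical stand-ins for the patent's trained AI models, shown only to make the branch structure concrete:

```python
# Sketch of the per-frame loop: detect (steps 23-24), match against known
# lesions (step 25), assign a new ID (step 28) or aggregate (step 26), then
# re-run the prediction over all aggregated data (step 27).
import itertools

_ids = itertools.count(1)
lesions = {}  # lesion_id -> list of aggregated per-frame features

def detect(frame):
    """Steps 23-24 stand-in: return lesion features in the frame, or None."""
    return frame.get("lesion")

def match_existing(features):
    """Step 25 stand-in: same lesion as a previous frame? (here: by label)."""
    for lid, agg in lesions.items():
        if agg and agg[-1]["label"] == features["label"]:
            return lid
    return None

def process_frame(frame):
    features = detect(frame)
    if features is None:
        return None
    lid = match_existing(features)
    if lid is None:                  # step 28: new lesion, new identifier
        lid = next(_ids)
        lesions[lid] = []
    lesions[lid].append(features)    # step 26: aggregate with prior frames
    # Step 27 stand-in: confidence grows as more frames are aggregated.
    confidence = min(1.0, 0.5 * len(lesions[lid]))
    return lid, confidence

video = [
    {"lesion": {"label": "polyp-A"}},
    {"lesion": {"label": "polyp-A"}},   # same lesion seen again -> aggregated
    {},                                  # no detection in this frame
    {"lesion": {"label": "polyp-B"}},   # a new lesion gets a new identifier
]
results = [process_frame(f) for f in video]
assert results[0] == (1, 0.5)
assert results[1] == (1, 1.0)  # confidence updated from aggregated frames
assert results[2] is None
assert results[3] == (2, 0.5)  # new ID assigned at step 28
```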
  • The foregoing process described with respect to FIG. 2 is analogous to stitching together multiple adjacent or overlapping images to form a panoramic image.
  • The aggregation is done algorithmically, using the AI module, to analyze images derived from different planes and/or different angles, rather than a single plane as would commonly be the case for panoramic imaging or macroscopic photography.
  • The AI module does not simply analyze the new information from the newly acquired frame, but instead preferably reanalyzes the lesion detection prediction using all of the available information, including the current and past video frames, and thus is expected to provide greater detection accuracy.
  • The AI module may display in the second display window a progress indicator that informs the endoscopist regarding how much data has been aggregated and analyzed. This indicator will aid the endoscopist in assessing whether additional effort should be made to examine an area of interest, thus yielding more data for the AI module and potentially improving the examination procedure.
  • The AI module, at step 29, also could suggest a direction to move the endoscope to collect additional information needed to complete the analysis of an area of interest, for example, by displaying directional arrows or text.
  • The AI module may use landmarks identified by a machine learning algorithm to provide registration of images between multiple frames.
  • Such anatomical landmarks may include tissue folds, discolored areas of tissue, blood vessels, polyps, ulcers or scars.
  • Such landmarks may be used by the feature extraction algorithms, at step 26, to help determine whether the new image(s) provide additional information for analysis, or may be used at step 25 to determine whether a current lesion is the same lesion as in a previous frame or a new lesion, which is assigned a new identifier at step 28.
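  • A hedged sketch of landmark-based registration between two frames. The landmark names, coordinates, and the pure-translation camera model are assumptions for illustration; a real system would detect anatomical landmarks (folds, vessels, scars) with a learned model and would likely fit a richer transform:

```python
# Sketch: estimate camera motion from landmarks visible in both frames, then
# decide whether two detections are the same lesion after registration.

def estimate_shift(landmarks_a, landmarks_b):
    """Average displacement of landmarks shared by the two frames."""
    common = landmarks_a.keys() & landmarks_b.keys()
    if not common:
        return None  # cannot register: no shared landmarks
    dx = sum(landmarks_b[k][0] - landmarks_a[k][0] for k in common) / len(common)
    dy = sum(landmarks_b[k][1] - landmarks_a[k][1] for k in common) / len(common)
    return (dx, dy)

def same_lesion(pos_a, pos_b, shift, tol=5.0):
    """After registration, nearby detections are treated as one lesion."""
    moved = (pos_a[0] + shift[0], pos_a[1] + shift[1])
    return abs(moved[0] - pos_b[0]) <= tol and abs(moved[1] - pos_b[1]) <= tol

frame1 = {"fold": (10.0, 20.0), "vessel": (40.0, 50.0)}
frame2 = {"fold": (13.0, 18.0), "vessel": (43.0, 48.0)}  # camera moved (+3, -2)

shift = estimate_shift(frame1, frame2)
assert shift == (3.0, -2.0)

# A lesion at (25, 30) in frame 1 and (28, 28) in frame 2 registers as the same
# lesion (step 25); a far-away detection would get a new identifier (step 28).
assert same_lesion((25.0, 30.0), (28.0, 28.0), shift)
assert not same_lesion((25.0, 30.0), (90.0, 90.0), shift)
```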
  • Monitor 31 displays a live feed from the colonoscope along with a real time frame-based AI module detection prediction 32, as described, for example, in commonly assigned U.S. Patent No. 10,67,934, the entirety of which is incorporated herein by reference.
  • The display shows the real time video output of the colonoscope including bounding box 33, determined as an output of an AI module, that highlights an area of interest as potentially including a tissue feature or lesion for the endoscopist's attention.
  • The AI module prediction accuracy is enhanced by including multiple video frames of the same tissue feature or lesion in the analysis, and by directing the endoscopist to redirect the camera of the endoscope to obtain further images of an area of interest.
  • A lesion in real life is a three-dimensional body. Due to the limitations of camera technology, the three-dimensional interior tissue wall of the colon and large intestine of a patient will be seen projected into a two-dimensional space. The type of image acquired by the colonoscope camera therefore is highly dependent on the ability of the endoscopist to manipulate the colonoscope. Accordingly, a single lesion may be only partially visible in one or multiple frames.
  • The AI module is programmed to analyze each frame of the video stream to extract particular features of an area of interest, e.g., a lesion or polyp, to reconstruct a higher quality representation of the lesion that then may be analyzed by detection and characterization algorithms of the AI module.
  • Three-dimensional lesion 41 is located on the interior wall of a patient's colon or large intestine.
  • the endoscopist manipulates the proximal end of the colonoscope to redirect the camera at the distal tip of the colonoscope to image adjacent portions of the organ wall.
  • video frames 42, 43, 44 and 45 are generated, each of which frames includes a partial view of lesion 41.
  • Image frames I, I+1, I+2, I+3 are analyzed by partial lesion/feature detector AI module 46.
  • Module 46 analyzes the partial views of the lesion in each of the multiple frames to determine whether the lesions are separate and unrelated or form part of a larger lesion, e.g., by matching up adjacent tissue boundaries in the various frames to piece together an aggregate image of the lesion. This aggregation process is concluded when, as indicated at step 47, feature boundaries in multiple images can be matched and stitched together with a degree of confidence greater than a threshold value to generate a reconstructed lesion.
  • Techniques for matching features from adjacent video frames may include color matching, matching of adjacent tissue boundaries or tissue textures, or other techniques known to those of skill in the art of image manipulation. If during this assembly process the AI module determines, e.g., by disrupted boundary profiles, that one or more portions of the whole image are missing, the AI module may compute an estimate of the completeness of the image, and/or prompt the endoscopist to reposition the colonoscope to acquire additional image frames.
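  • The step-47 stitching rule can be illustrated with a toy boundary-matching sketch. The grid representation of a view, the exact-match scoring, and the 0.8 threshold are all invented for illustration; a real system would compare colors and textures along tissue boundaries:

```python
# Sketch: stitch two partial views only when their adjacent boundaries match
# with confidence above a threshold; otherwise report the reconstruction as
# incomplete (the AI module would then prompt for more frames).

CONFIDENCE_THRESHOLD = 0.8

def boundary_confidence(edge_a, edge_b):
    """Fraction of boundary samples that agree (a color-matching stand-in)."""
    matches = sum(1 for a, b in zip(edge_a, edge_b) if a == b)
    return matches / len(edge_a)

def try_stitch(view_a, view_b):
    """Return the stitched view, or None if confidence is below threshold."""
    conf = boundary_confidence(view_a[-1], view_b[0])  # a's last row, b's first
    if conf < CONFIDENCE_THRESHOLD:
        return None  # incomplete: do not merge unrelated partial views
    return view_a + view_b[1:]  # merge, keeping the shared boundary row once

# Two partial views whose shared boundary row agrees exactly are merged:
left  = [["t1", "t2"], ["edge", "edge2"]]
right = [["edge", "edge2"], ["t3", "t4"]]
assert try_stitch(left, right) == [["t1", "t2"], ["edge", "edge2"], ["t3", "t4"]]

# Views with disagreeing boundaries are not merged.
assert try_stitch(left, [["x", "y"], ["t3", "t4"]]) is None
```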
  • Monitor 50 is similar to the monitor of FIG. 3, and displays the real time image from the colonoscope 51 on which bounding box 52 is overlaid, indicating the presence of a potential lesion. If the entire lesion, as determined by the AI module, is not visible in the current video frame displayed on monitor 50, bounding box 52 is overlaid on as much of the potential lesion as is visible in the displayed video frame.
  • Second monitor 55 includes a display that may include a partial view of area of interest 56 and text 57 indicating the AI module's estimate of the completeness of the area of interest. If the AI module determines that additional information is required to assess an area of interest, it may overlay arrow 58 on the real time video image 51 to prompt the endoscopist to obtain additional video frames in that direction.
  • Second monitor 55 may include, as an indicator of the completeness of the image acquisition, a progress bar or other visual form of progress report, informing the endoscopist about the quality and quantity of data analyzed by the detection and characterization algorithms of the AI module.
  • Second monitor 55 also may include a display including an updated textual classification of an area highlighted in bounding box 52, including a confidence level of that prediction based on the aggregated image data. For example, in FIG. 5, the second monitor reports that the feature located within bounding box 52 is concluded by the AI module to be a malignant adenoma with 60% confidence, based on the estimated 50% of the lesion that is observable in the acquired video stream.
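  • This second-window readout can be sketched as a simple formatting function; the function name and message format are invented, and the numbers mirror the FIG. 5 example (60% confidence, 50% of the lesion observed):

```python
# Sketch: compose the second-window text from the AI module's classification
# output, qualifying the confidence by how much of the lesion has been seen.

def report(label, model_confidence, fraction_observed):
    """Format a classification with its confidence and coverage estimate."""
    pct_conf = round(model_confidence * 100)
    pct_seen = round(fraction_observed * 100)
    return (f"{label}: {pct_conf}% confidence "
            f"(based on {pct_seen}% of lesion observed)")

# The FIG. 5 example: malignant adenoma, 60% confidence, 50% of lesion seen.
assert report("malignant adenoma", 0.6, 0.5) == \
    "malignant adenoma: 60% confidence (based on 50% of lesion observed)"
```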

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Surgery (AREA)
  • Radiology & Medical Imaging (AREA)
  • General Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Optics & Photonics (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Signal Processing (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Multimedia (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Endoscopes (AREA)

Abstract

Methods and systems are provided for aggregating features in multiple video frames (42, 43, 44, 45) to improve tissue abnormality detection algorithms, wherein a first detection algorithm identifies an abnormality (41) and aggregates adjacent video frames to create a more complete image (47) for analysis by an artificial intelligence-based detection algorithm (48), the aggregation occurring in real time while the medical procedure is being performed.
PCT/US2022/076142 2021-09-13 2022-09-08 System and methods for aggregating features in video frames to improve accuracy of AI-based detection algorithms WO2023039493A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US17/473,775 US11423318B2 (en) 2019-07-16 2021-09-13 System and methods for aggregating features in video frames to improve accuracy of AI detection algorithms
US17/473,775 2021-09-13
US202217821453A 2022-08-22 2022-08-22
US17/821,453 2022-08-22

Publications (1)

Publication Number Publication Date
WO2023039493A1 (fr) 2023-03-16

Family

ID=83689338

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/076142 WO2023039493A1 (fr) 2021-09-13 2022-09-08 System and methods for aggregating features in video frames to improve accuracy of AI-based detection algorithms

Country Status (1)

Country Link
WO (1) WO2023039493A1 (fr)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US1067934A (en) 1912-10-26 1913-07-22 Mary E Hartman Accessory for automobiles.
EP2054852B1 (fr) * 2006-08-21 2010-06-23 STI Medical Systems, LLC Analyse assistée par ordinateur utilisant une vidéo issue d'endoscopes
US20100194851A1 (en) 2009-02-03 2010-08-05 Aricent Inc. Panorama image stitching
US9224193B2 (en) 2011-07-14 2015-12-29 Canon Kabushiki Kaisha Focus stacking image processing apparatus, imaging system, and image processing system
US20180225820A1 (en) * 2015-08-07 2018-08-09 Arizona Board Of Regents On Behalf Of Arizona State University Methods, systems, and media for simultaneously monitoring colonoscopic video quality and detecting polyps in colonoscopy
US20180253839A1 (en) * 2015-09-10 2018-09-06 Magentiq Eye Ltd. A system and method for detection of suspicious tissue regions in an endoscopic procedure
US10682108B1 (en) * 2019-07-16 2020-06-16 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for three-dimensional (3D) reconstruction of colonoscopic surfaces for determining missing regions
US20210406737A1 (en) * 2019-07-16 2021-12-30 DOCBOT, Inc. System and methods for aggregating features in video frames to improve accuracy of ai detection algorithms


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CORLEY D.A. ET AL.: "Adenoma Detection Rate and Risk of Colorectal Cancer and Death", NEJM, vol. 370, 2014, pages 1298-1306
KAMINSKI M.F. ET AL.: "Quality Indicators for Colonoscopy and the Risk of Interval Cancer", NEJM, vol. 362, 2010, pages 1795-1803
MA RUIBIN ET AL.: "Real-Time 3D Reconstruction of Colonoscopic Surfaces for Determining Missing Regions", 16th European Conference on Computer Vision (ECCV 2020), Cornell University Library, Ithaca, NY 14853, 10 October 2019, pages 573-582, XP047522978 *
QADIR HEMIN ALI ET AL.: "Improving Automatic Polyp Detection Using CNN by Exploiting Temporal Dependency in Colonoscopy Video", IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 1, 1 January 2020, pages 180-193, XP011764467, ISSN: 2168-2194, DOI: 10.1109/JBHI.2019.2907434 *

Similar Documents

Publication Publication Date Title
US11423318B2 (en) System and methods for aggregating features in video frames to improve accuracy of AI detection algorithms
US11191423B1 (en) Endoscopic system and methods having real-time medical imaging
JP7346285B2 (ja) Medical image processing device, endoscope system, operation method of medical image processing device, and program
US20150313445A1 (en) System and Method of Scanning a Body Cavity Using a Multiple Viewing Elements Endoscope
US20220254017A1 (en) Systems and methods for video-based positioning and navigation in gastroenterological procedures
JP6967602B2 (ja) Examination support device, endoscope device, operation method of endoscope device, and examination support program
CN113573654A (zh) AI system for detecting and measuring lesion size
JP2013524988A (ja) System and method for displaying a portion of a plurality of in vivo images
JP2017534322A (ja) Bladder diagnostic mapping method and system
US20210274089A1 (en) Method and Apparatus for Detecting Missed Areas during Endoscopy
WO2020054543A1 (fr) Medical image processing device and method, endoscope system, processor device, diagnosis support device, and program
WO2009102984A2 (fr) System and method for virtually augmented reality endoscopy
WO2023024701A1 (fr) Panoramic endoscope and image processing method thereof
KR20220130855A (ko) Artificial intelligence-based colonoscopy image diagnosis assistance system and method
JP4686279B2 (ja) Medical diagnostic apparatus and diagnosis support apparatus
WO2021171465A1 (fr) Endoscope system and light scanning method using the endoscope system
US11219358B2 (en) Method and apparatus for detecting missed areas during endoscopy
JP6840263B2 (ja) Endoscope system and program
AU2021337847A1 (en) Devices, systems, and methods for identifying unexamined regions during a medical procedure
CN113331769A (zh) Method and apparatus for detecting missed areas during endoscopy
JP2011135936A (ja) Image processing device, medical image diagnostic device, and image processing program
WO2023039493A1 (fr) System and methods for aggregating features in video frames to improve accuracy of AI detection algorithms
JP6745748B2 (ja) Endoscope position specifying device, operation method thereof, and program
US20230013884A1 (en) Endoscope with synthetic aperture multispectral camera array
WO2021176852A1 (fr) Image selection support device, method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22786867

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE