WO2022194126A1 - Method, Device and Medium for Constructing a Reading Model Based on a Capsule Endoscope - Google Patents

Method, Device and Medium for Constructing a Reading Model Based on a Capsule Endoscope

Info

Publication number
WO2022194126A1
Authority
WO
WIPO (PCT)
Prior art keywords
capsule endoscope
image
sub
model
images
Prior art date
Application number
PCT/CN2022/080840
Other languages
English (en)
French (fr)
Inventor
杨戴天杙
Original Assignee
安翰科技(武汉)股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 安翰科技(武汉)股份有限公司 filed Critical 安翰科技(武汉)股份有限公司
Priority to US18/551,297 priority Critical patent/US20240188791A1/en
Publication of WO2022194126A1 publication Critical patent/WO2022194126A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002Operational features of endoscopes
    • A61B1/00004Operational features of endoscopes characterised by electronic signal processing
    • A61B1/00009Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B1/041Capsule endoscopes for imaging
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B1/045Control thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/06Ray-tracing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • G06T7/0014Biomedical image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10068Endoscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30092Stomach; Gastric
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/21Collision detection, intersection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/41Medical
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/56Particle system, point based geometry or rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/70Labelling scene content, e.g. deriving syntactic or semantic representations

Definitions

  • the present invention relates to the field of medical equipment, in particular to a method for constructing a reading model based on a capsule endoscope, an electronic device and a readable storage medium.
  • Capsule endoscopy is increasingly used for gastrointestinal examination.
  • Capsule endoscopes are taken orally and pass through the mouth, esophagus, stomach, small intestine, and large intestine before finally being excreted from the body.
  • the capsule endoscope runs passively with the peristalsis of the digestive tract, and captures images at a certain frame rate during the process, so that doctors can check the health of each section of the patient's digestive tract.
  • the capsule endoscope can adjust the position and posture under the control of an external magnetic control device, so as to better inspect the stomach.
  • usually, the stomach examination takes 10 to 20 minutes.
  • at a shooting frame rate of 4 fps, there will be 2400 to 4800 images.
  • after the examination, the image data are uploaded for reference by medical staff. Typically, the medical staff read the uploaded images passively: they can only view the images in the order in which they were captured, or select them via a progress bar. There is no positional correspondence between the images and the actual anatomy, so the medical staff must think through and mentally reconstruct the spatial structure. This process interferes with understanding the images and judging the completeness of the examination; inexperienced reviewers in particular may find image data displayed in this way difficult to review.
  • the purpose of the present invention is to provide a method, electronic device and readable storage medium for constructing a reading model based on a capsule endoscope.
  • an embodiment of the present invention provides a method for constructing a reading model based on a capsule endoscope, the method comprising: driving the capsule endoscope to move within a working area, sequentially recording, at a predetermined first frequency, the position coordinates and field-of-view orientation of the capsule endoscope at each positioning point it reaches, and driving the capsule endoscope to sequentially capture and record images at a predetermined second frequency;
  • the recorded images are mapped onto the 3D model to form a reading model.
  • constructing a 3D model corresponding to the outer contour of the working area includes:
  • the 3D model is represented by Ω(p),
  • mapping the recorded image onto the 3D model to form a reading model includes:
  • the 3D model is divided into a plurality of sub-regions
  • the sub-region image sets are stitched on the 3D model to form the reading model.
  • mapping the recorded images to each sub-region to form sub-region image sets includes:
  • the method further includes: configuring the first frequency to be higher than the second frequency.
  • interpolation filtering is performed on the existing positioning points, so as to supplement missing positioning points on the basis of the existing ones.
  • a cross-validation set is constructed
  • if an image does not belong to the set to which it is currently attributed, and/or its image quality score is lower than a preset score, the current image is transferred to the cross-validation set.
  • the method further includes:
  • a mapping identifier is generated for each group of images bearing the same kind of annotation.
  • an embodiment of the present invention provides an electronic device, including a memory and a processor, the memory storing a computer program that can be executed on the processor, wherein the processor, when executing the program, implements the steps in the method for constructing a reading model based on a capsule endoscope as described above.
  • an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps in the method for constructing a reading model based on a capsule endoscope as described above.
  • the beneficial effects of the present invention are as follows: the method, device and medium for constructing a reading model based on a capsule endoscope of the present invention map the acquired images onto the 3D model of the working area, thereby improving inspection visibility, facilitating observation, saving reading time, and improving detection efficiency.
  • FIG. 1 is a schematic flowchart of a method for constructing an image interpretation model based on a capsule endoscope according to an embodiment of the present invention.
  • FIG. 2 is a schematic structural diagram of a specific example of the present invention.
  • FIG. 3 is a schematic structural diagram of sub-region division in a specific example of the present invention.
  • FIG. 4 is a schematic structural diagram of a specific example of matching an image with an anchor point.
  • FIG. 5 is a schematic structural diagram of an image generation map identification.
  • the first embodiment of the present invention provides a method for constructing an image reading model based on a capsule endoscope, and the method includes the following steps.
  • after the capsule endoscope enters the working area, each working point is recorded at the predetermined frequency, and, according to specific requirements, the spatial coordinates P(x, y, z) and field-of-view orientation V of each working point are recorded.
  • the field-of-view orientation here is the posture of the capsule endoscope, for example Euler angles (yaw, pitch, roll); it can also be a quaternion or an orientation vector. Through the field-of-view orientation, the field of view captured by the capsule endoscope in direction V at the current coordinate point can be determined.
  • the field of view is a cone starting from the current coordinate point, and its vector direction is the extension direction of the axis of the cone.
  • capturing images with a capsule endoscope, locating its position coordinates, and recording its field-of-view orientation are all prior art.
  • for example, the present invention incorporates by reference the entire contents of Chinese patent application 201911188050.6, titled "Positioning System and Method for Swallowable Device", to locate the capsule endoscope and obtain its position coordinates and field-of-view orientation, which is not described in detail here.
  • for step S2, as shown in FIG. 2, after step S1 is completed, the position coordinates, in the three-dimensional coordinate system, of all the positioning points at which the capsule endoscope stayed in the working area form the original point cloud data.
  • step S2 specifically includes: acquiring all position coordinates of the capsule endoscope to form original point cloud data;
  • the original point cloud data is sequentially subjected to Gaussian filtering, voxelization, voxel shell extraction, smoothing filtering, and surface reconstruction to form a 3D model corresponding to the outer contour of the working area.
  • the 3D model is represented by Ω(p),
  • the working area is described taking the stomach space as an example.
  • when the capsule endoscope operates in the working area, it may float in the liquid in the gastric cavity, or stay on the inner wall while rotating or rolling, etc.
  • a very dense point cloud is obtained, as shown in the leftmost image in Figure 2.
  • the original point cloud data is huge and relatively noisy.
  • the outer contour of the working area generally refers to the largest outer contour of the working area, as shown in the middle diagram in FIG. 2 .
  • after voxelization, voxel shell extraction (edge extraction) is performed on the data to filter out outliers.
  • after further smoothing filtering (meshing the data), a relatively dense surface image can be obtained, as shown in the rightmost figure in FIG. 2, i.e., the 3D model of the outer contour of the working area described in the present invention.
  • the 3D model is visualized on the front-end display interface of the computer, and the viewing angle of the 3D model can be changed through an external input device, such as a mouse, a touch screen, and the like.
  • the reconstructed 3D model only includes the surface data of the working area; that is, as shown in the formula below, the data Ω of the 3D model only includes the data of the model surface, i.e., the surface data of the working area.
  • there are various methods for implementing step S2; that is, given known point cloud data, there are various methods for processing it to form a 3D surface model, which are not repeated here.
  • step S3 specifically includes the following steps.
  • when a sub-region of the reading model is selected, the associated sub-region image set is opened, so that any image in the current sub-region image set can be selectively opened.
  • the number of the sub-regions can be specifically set as required.
  • the sub-regions can be divided according to the specific classification of the anatomical structures: anatomical structures with the same attributes can be divided into one sub-region, or an anatomical structure with the same attributes can be divided into multiple sub-regions.
  • this makes the classification more specific, which facilitates the later application of the reading model.
  • the stomach cavity is divided according to the attributes of its anatomical structures, which usually include the fundus, the greater curvature, and other anatomical structures of finer classification granularity.
  • the gastric cavity is divided into 12 sub-regions according to attributes.
  • the number of sub-regions can be larger; the more sub-regions there are, the more computation is required, but the classification becomes more detailed and the results more accurate, for example, dividing the stomach cavity into 20 sub-regions.
  • the present invention takes the division of the stomach cavity into 12 sub-regions by attribute as its specific example.
  • although FIG. 3 is in the form of a schematic plan view, in practical applications the model may be a three-dimensional 3D model.
  • the 12 sub-regions are connected in sequence according to the attributes of the anatomical structure.
  • the properties of sub-region 2 and sub-region 3 are fundus, and the properties of sub-region 5 and sub-region 6 are greater curvature.
  • step S32 includes: S321, matching by time order, traversing each of the images, and obtaining the positioning point closest in time to the acquisition time of the current image; S322, taking the position coordinates of the obtained positioning point as a starting point, casting a virtual ray whose extension direction is the corresponding field-of-view orientation, and obtaining the intersection point of the virtual ray with the 3D model; S323, obtaining the sub-region to which the position coordinates of the current intersection point belong, and mapping the current image to that sub-region to form a sub-region image set.
  • the first frequency is configured to be higher than the second frequency, that is, the frequency of positioning is configured to be higher than the frequency of image capture.
  • the first frequency is set to be 40-100 Hz
  • the second frequency is set to be lower than 30 fps.
  • interpolation filtering processing is performed on the existing positioning points, so as to supplement the missing positioning points on the basis of the existing positioning points.
  • in this way, the matching between images and positioning points is more accurate.
  • for step S322, each positioning point matched to an image has corresponding position coordinates and a field-of-view orientation.
  • taking the positioning point P matched to the current image as an example: P, which lies on or inside the 3D model Ω(p), is taken as the starting point of the ray, and V is the view direction, i.e., the extension direction of the ray. From this ray, the intersection point Q of the ray from P in direction V with the 3D model Ω(p) can be obtained.
  • for step S323, based on the region division of the 3D model, the sub-region to which the intersection point Q belongs can be obtained.
  • the Q point belongs to the sub-area 11.
  • the image corresponding to the P point is mapped to the sub-area 11 to form one of the sub-area image sets corresponding to the sub-area 11.
  • the method further includes: constructing a cross-validation set; verifying the attribution of the images in each sub-region image set, and/or verifying the image quality of the images in each sub-region image set.
  • if an image does not belong to the set to which it is currently attributed, and/or its image quality score is lower than a preset score, the current image is transferred to the cross-validation set.
  • various errors, such as unreasonable placement of sub-region boundaries, may cause an image to be attributed to the wrong set.
  • in addition, low-quality images adversely affect subsequent retrieval. Therefore, cross-validation is performed on the images between steps S32 and S33, and bad data in each sub-region image set is eliminated.
  • the present invention incorporates by reference the entire contents of Chinese Patent Application Publication No. CN106934799A, titled "Capsule Endoscope Image-Assisted Reading System and Method", to verify the images in each sub-region image set and determine whether they belong to the set to which they are currently attributed.
  • in addition, the entire contents of Chinese Patent Application Publication No. CN111932532A, titled "No-Reference Image Evaluation Method for Capsule Endoscope, Electronic Device and Medium", are incorporated to verify the image quality of the images in each sub-region image set and judge whether it is suitable; images with too low a score are transferred to the cross-validation set.
  • the score in the present invention may be the image quality evaluation score and/or the image content evaluation score and/or the comprehensive score of the referenced patents, which is not elaborated here.
  • for step S33, the reading model formed is visualized on the front-end display interface of the computer; when a sub-region of the reading model is selected with assistance, the associated sub-region image set is opened, so that any image in the current sub-region image set can be selectively opened.
  • the method further includes: performing attribute identification and labeling on each image in each sub-region image set;
  • the images with the same kind of annotation are divided into a group; on the reading model, one mapping identifier is generated for each group of images bearing the same kind of annotation.
  • the lesions in the image are identified, and their types are specifically marked.
  • the annotation can be done manually or automatically by an AI (Artificial Intelligence) system.
  • images with the same annotation are grouped together, and an additional mapping identifier is generated on the reading model. When the mapping identifier is selected, the corresponding images can be opened accordingly, thereby facilitating subsequent centralized queries.
  • an embodiment of the present invention provides an electronic device, including a memory and a processor, the memory storing a computer program that can run on the processor, wherein the processor, when executing the program, implements the steps in the above method for constructing a reading model based on a capsule endoscope.
  • an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps in the above method for constructing a reading model based on a capsule endoscope.
  • the method, device and medium for constructing a reading model based on a capsule endoscope of the present invention map the acquired images onto the 3D model of the working area to form a reading model, improving the visibility of the inspection.
  • through the various mappings, the required images can be conveniently obtained from the simulated reading model in subsequent use, which enhances interactive initiative and operability, facilitates observation, saves reading time, and improves detection efficiency.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Optics & Photonics (AREA)
  • Biophysics (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Endoscopes (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a method, device, and medium for constructing a reading model based on a capsule endoscope. The method includes: driving the capsule endoscope to move within a working area, sequentially recording, at a predetermined first frequency, the position coordinates and field-of-view orientation of the capsule endoscope at each positioning point it reaches, and sequentially capturing and recording images at a predetermined second frequency; constructing, from the recorded position coordinates of the capsule endoscope at each positioning point, a 3D model corresponding to the outer contour of the working area; and mapping the recorded images onto the 3D model to form a reading model. The present invention maps the acquired images onto the 3D model of the working area, improving inspection visibility, facilitating observation, saving reading time, and improving detection efficiency.

Description

Method, Device and Medium for Constructing a Reading Model Based on a Capsule Endoscope
This application claims priority to the Chinese patent application filed on March 19, 2021 with application number 202110296737.2, entitled "Method, Device and Medium for Constructing a Reading Model Based on a Capsule Endoscope", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of medical devices, and in particular to a method for constructing a reading model based on a capsule endoscope, an electronic device, and a readable storage medium.
Background
Capsule endoscopes are increasingly used for gastrointestinal examination. A capsule endoscope is taken orally and passes through the mouth, esophagus, stomach, small intestine, and large intestine before finally being excreted from the body. Typically, the capsule endoscope moves passively with the peristalsis of the digestive tract, capturing images at a certain frame rate along the way so that doctors can assess the health of each section of the patient's digestive tract.
Taking gastric examination with a capsule endoscope as an example, the capsule endoscope can adjust its position and posture under the control of an external magnetic control device so as to examine the stomach more thoroughly. A gastric examination usually takes 10 to 20 minutes; at a capture frame rate of 4 fps, this yields 2400 to 4800 images. After the examination, the image data are uploaded for reference by medical staff. Typically, the medical staff read the uploaded images passively: they can only view the images in the order in which they were captured, or select them via a progress bar. There is no positional correspondence between the images and the actual anatomy, so the medical staff must think through and mentally reconstruct the spatial structure. This process interferes with understanding the images and judging the completeness of the examination; inexperienced reviewers in particular may find image data displayed in this way difficult to review.
Summary of the Invention
To solve the above technical problem, an object of the present invention is to provide a method for constructing a reading model based on a capsule endoscope, an electronic device, and a readable storage medium.
To achieve one of the above objects, an embodiment of the present invention provides a method for constructing a reading model based on a capsule endoscope, the method comprising: driving the capsule endoscope to move within a working area, sequentially recording, at a predetermined first frequency, the position coordinates and field-of-view orientation of the capsule endoscope at each positioning point it reaches, and driving the capsule endoscope to sequentially capture and record images at a predetermined second frequency;
constructing, from the recorded position coordinates of the capsule endoscope at each positioning point, a 3D model corresponding to the outer contour of the working area;
and mapping the recorded images onto the 3D model to form a reading model.
As a further improvement of an embodiment of the present invention, constructing, from the recorded position coordinates of the capsule endoscope at each positioning point, a 3D model corresponding to the outer contour of the working area comprises:
acquiring all position coordinates of the capsule endoscope to form original point cloud data;
sequentially performing Gaussian filtering, voxelization, voxel shell extraction, smoothing filtering, and surface reconstruction on the original point cloud data to form the 3D model corresponding to the outer contour of the working area;
the 3D model being represented by Ω(p),
[formula image PCTCN2022080840-appb-000001 — not reproduced here]
As a further improvement of an embodiment of the present invention, mapping the recorded images onto the 3D model to form a reading model comprises:
dividing the 3D model into a plurality of sub-regions according to the structure of the working area;
mapping the recorded images to each sub-region to form sub-region image sets, each image being mapped to exactly one sub-region;
and stitching the sub-region image sets on the 3D model to form the reading model.
As a further improvement of an embodiment of the present invention, mapping the recorded images to each sub-region to form sub-region image sets comprises:
traversing each of the images and obtaining the positioning point closest in time to the acquisition time of the current image;
taking the position coordinates of the obtained positioning point as a starting point, casting a virtual ray whose extension direction is the corresponding field-of-view orientation, and obtaining the intersection point of the virtual ray with the 3D model;
and obtaining the sub-region to which the position coordinates of the current intersection point belong, and mapping the current image to that sub-region to form a sub-region image set.
As a further improvement of an embodiment of the present invention, the method further comprises: configuring the first frequency to be higher than the second frequency.
As a further improvement of an embodiment of the present invention, interpolation filtering is performed on the existing positioning points along the time sequence, so as to supplement missing positioning points on the basis of the existing ones.
As a further improvement of an embodiment of the present invention, a cross-validation set is constructed;
the images in each sub-region image set are verified for attribution, and/or the images in each sub-region image set are verified for image quality,
and if an image does not belong to the set to which it is currently attributed, and/or its image quality score is lower than a preset score, the current image is transferred to the cross-validation set.
As a further improvement of an embodiment of the present invention, after all images have been attributed to their corresponding sub-region image sets, the method further comprises:
performing attribute recognition and annotation on each image in each sub-region image set;
grouping the images with the same kind of annotation in each sub-region image set;
and generating, on the reading model, one mapping identifier for each group of images bearing the same kind of annotation.
To achieve one of the above objects, an embodiment of the present invention provides an electronic device including a memory and a processor, the memory storing a computer program that can run on the processor, wherein the processor, when executing the program, implements the steps in the above method for constructing a reading model based on a capsule endoscope.
To achieve one of the above objects, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps in the above method for constructing a reading model based on a capsule endoscope.
Compared with the prior art, the beneficial effects of the present invention are as follows: the method, device and medium for constructing a reading model based on a capsule endoscope of the present invention map the acquired images onto the 3D model of the working area, improving inspection visibility, facilitating observation, saving reading time, and improving detection efficiency.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a method for constructing a reading model based on a capsule endoscope according to an embodiment of the present invention.
FIG. 2 is a schematic structural diagram of a specific example of the present invention.
FIG. 3 is a schematic structural diagram of sub-region division in a specific example of the present invention.
FIG. 4 is a schematic structural diagram of a specific example of matching images with positioning points.
FIG. 5 is a schematic structural diagram of generating mapping identifiers for images.
Detailed Description
The present invention is described in detail below with reference to the specific embodiments shown in the accompanying drawings. These embodiments do not limit the present invention, however, and structural, methodological, or functional changes made by those of ordinary skill in the art on the basis of these embodiments all fall within the scope of protection of the present invention.
As shown in FIG. 1, a first embodiment of the present invention provides a method for constructing a reading model based on a capsule endoscope, the method comprising the following steps.
S1: driving the capsule endoscope to move within the working area, sequentially recording, at a predetermined first frequency, the position coordinates and field-of-view orientation of the capsule endoscope at each positioning point it reaches, and driving the capsule endoscope to sequentially capture and record images at a predetermined second frequency.
S2: constructing, from the recorded position coordinates of the capsule endoscope at each positioning point, a 3D model corresponding to the outer contour of the working area.
S3: mapping the recorded images onto the 3D model to form a reading model.
After the capsule endoscope enters the working area, each working point is recorded at the predetermined frequency, and, according to specific requirements, the spatial coordinates P(x, y, z) and field-of-view orientation V of each working point are recorded. The field-of-view orientation here is the posture of the capsule endoscope, for example Euler angles (yaw, pitch, roll); it can also be a quaternion or an orientation vector. Through the field-of-view orientation, the field of view captured by the capsule endoscope in direction V at the current coordinate point can be determined; this field of view is a cone starting from the current coordinate point, whose vector direction
[formula image PCTCN2022080840-appb-000002 — not reproduced here]
is the extension direction of the axis of the cone. Capturing images with a capsule endoscope, locating its position coordinates, and recording its field-of-view orientation are all prior art.
For example, the present invention incorporates by reference the entire contents of Chinese patent application 201911188050.6, entitled "Positioning System and Method for Swallowable Device", to locate the capsule endoscope and thereby obtain its position coordinates and field-of-view orientation, which is not described in detail here.
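For illustration only, the sketch below shows one way to turn recorded Euler angles into the unit view direction V later used for ray casting. The patent fixes neither the rotation convention nor the camera's local optical axis; the z-y-x (yaw-pitch-roll) convention, the +x optical axis, and the use of Python are all assumptions here.

```python
import numpy as np

def view_direction(yaw: float, pitch: float, roll: float) -> np.ndarray:
    """Unit vector V of the capsule's optical axis from Euler angles (radians).

    Assumes an intrinsic z-y-x (yaw-pitch-roll) rotation and that the camera
    looks along the local +x axis; both are conventions not set by the patent.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about z
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    v = rz @ ry @ rx @ np.array([1.0, 0.0, 0.0])            # rotate optical axis
    return v / np.linalg.norm(v)
```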
For step S2, as shown in FIG. 2, after step S1 is completed, the position coordinates, in the three-dimensional coordinate system, of all the positioning points at which the capsule endoscope stayed in the working area form the original point cloud data. Accordingly, step S2 specifically includes: acquiring all position coordinates of the capsule endoscope to form the original point cloud data;
sequentially performing Gaussian filtering, voxelization, voxel shell extraction, smoothing filtering, and surface reconstruction on the original point cloud data to form the 3D model corresponding to the outer contour of the working area.
The 3D model is represented by Ω(p),
[formula image PCTCN2022080840-appb-000003 — not reproduced here]
In this specific example, the working area is the stomach space. When the capsule endoscope operates in the working area, it may float in the liquid in the gastric cavity, or stay on the inner wall while rotating or rolling, and so on; a very dense point cloud is therefore obtained, as shown in the leftmost image of FIG. 2. Further, the original point cloud data is huge and relatively cluttered; Gaussian filtering and voxelization make the contour of the working area clearer. Here, the outer contour of the working area generally refers to its largest outer contour, as shown in the middle image of FIG. 2. Further, after voxelization, voxel shell extraction (edge extraction) is performed on the data to filter out outliers. Further, smoothing filtering (meshing the data) then yields a relatively dense surface image, as shown in the rightmost image of FIG. 2, i.e., the 3D model of the outer contour of the working area described in the present invention.
In specific applications, the 3D model is visualized on the front-end display interface of a computer, and its viewing angle can be changed via an external input device such as a mouse or a touch screen. The reconstructed 3D model includes only the surface data of the working area; that is, as shown in the following formula, the data Ω of the 3D model contains only the data of the model surface, i.e., the surface data of the working area.
[formula image PCTCN2022080840-appb-000004 — not reproduced here]
In practical applications, there are various ways to implement step S2; that is, given known point cloud data, there are many methods for processing it into a 3D surface model, which are not repeated here.
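As one of those many methods, the sketch below strings together an approximation of the pipeline using the open-source Open3D library. Statistical outlier removal stands in for the Gaussian-filtering step, and Poisson reconstruction plus Taubin smoothing stands in for the shell-extraction, smoothing, and surface-reconstruction chain; every parameter value is an illustrative assumption, not taken from the patent.

```python
import numpy as np
import open3d as o3d

def reconstruct_outer_contour(points: np.ndarray) -> o3d.geometry.TriangleMesh:
    """Approximate pipeline: denoise -> voxelize -> mesh. points is (N, 3)."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    # Statistical outlier removal stands in for the Gaussian-filtering step.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=30, std_ratio=2.0)
    # Voxelization: regularize the very dense trajectory cloud.
    pcd = pcd.voxel_down_sample(voxel_size=2.0)  # assumed units: mm
    # Normals are needed for Poisson surface reconstruction.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=10.0, max_nn=30))
    pcd.orient_normals_consistent_tangent_plane(k=30)
    # Surface reconstruction keeps only an outer surface, akin to the voxel shell.
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=7)
    # Taubin smoothing stands in for the smoothing-filter step.
    return mesh.filter_smooth_taubin(number_of_iterations=10)
```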
Preferably, step S3 specifically includes the following steps.
S31: dividing the 3D model into a plurality of sub-regions according to the structure of the working area.
S32: mapping the recorded images to each sub-region to form sub-region image sets, each image being mapped to exactly one sub-region.
S33: stitching the sub-region image sets on the 3D model to form the reading model.
Here, on the front-end display interface of the computer, when a sub-region of the reading model is selected with assistance, the associated sub-region image set is opened, so that any image in the current sub-region image set can be selectively opened.
For step S31, the number of sub-regions can be set as required. In a preferred embodiment of the present invention, since the environment in which the capsule endoscope works is usually a cavity formed by anatomical structures, the sub-regions can be divided according to the specific classification of the anatomical structures: anatomical structures with the same attributes may be divided into one sub-region, or an anatomical structure with the same attributes may be divided into multiple sub-regions. Preferably, when dividing sub-regions, each sub-region should belong to only one anatomical structure as far as possible. This makes the classification more specific, which facilitates the later application of the reading model.
As shown in FIG. 3, continuing with the gastric cavity as the working area: divided by the attributes of its anatomical structures, the gastric cavity usually includes the fundus, the greater curvature, and other anatomical structures of finer classification granularity. In the specific example of the present invention, for step S31, the gastric cavity is divided into 12 sub-regions by attribute. Of course, the number of sub-regions can be larger; more sub-regions increase the amount of computation accordingly, but the classification becomes more detailed and the results more accurate, for example, dividing the gastric cavity into 20 sub-regions. The present invention takes the division of the gastric cavity into 12 sub-regions by attribute as its specific example. Here, although FIG. 3 is a schematic plan view, in practical applications the model may be a three-dimensional 3D model. Accordingly, the 12 sub-regions adjoin one another in sequence according to the attributes of the anatomical structures; for example, the attribute of sub-regions 2 and 3 is the fundus, and the attribute of sub-regions 5 and 6 is the greater curvature.
Preferably, step S32 includes: S321, matching by time order, traversing each of the images, and obtaining the positioning point closest in time to the acquisition time of the current image; S322, taking the position coordinates of the obtained positioning point as a starting point, casting a virtual ray whose extension direction is the corresponding field-of-view orientation, and obtaining the intersection point of the virtual ray with the 3D model; S323, obtaining the sub-region to which the position coordinates of the current intersection point belong, and mapping the current image to that sub-region to form a sub-region image set.
Preferably, for step S321, during localization of the capsule endoscope, environmental factors such as signal interference and motion interference reduce the number of positioning points. The first frequency is therefore configured to be higher than the second frequency, i.e., the localization frequency is configured to be higher than the image capture frequency. For example, in a specific example of the present invention, the first frequency is set to 40-100 Hz and the second frequency is set to below 30 fps. Localization results denser than the image acquisitions are thus still obtained, so that each image can be matched to a positioning point whose acquisition time is close to its own.
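A minimal sketch of that nearest-in-time matching is given below, assuming timestamps in seconds and positioning-point times sorted in ascending order; the function name and array layout are illustrative, not from the patent.

```python
import numpy as np

def match_images_to_fixes(img_times: np.ndarray, fix_times: np.ndarray) -> np.ndarray:
    """For each image timestamp, return the index of the positioning point
    nearest in time. fix_times must be sorted ascending."""
    right = np.searchsorted(fix_times, img_times)          # first fix at/after image
    left = np.clip(right - 1, 0, len(fix_times) - 1)
    right = np.clip(right, 0, len(fix_times) - 1)
    pick_right = (np.abs(fix_times[right] - img_times)
                  < np.abs(fix_times[left] - img_times))
    return np.where(pick_right, right, left)
```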
Preferably, as shown in the example of FIG. 4, interpolation filtering is performed on the existing positioning points along the time sequence, so as to supplement missing positioning points on the basis of the existing ones. This makes the matching between images and positioning points more accurate.
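The patent does not specify the interpolation scheme; as a sketch, dropped positioning points can be filled by per-axis linear interpolation onto a uniform time grid, with the 50 Hz rate below chosen arbitrarily from the stated 40-100 Hz range.

```python
import numpy as np

def fill_missing_fixes(t: np.ndarray, coords: np.ndarray, rate_hz: float = 50.0):
    """Resample sparse positioning points onto a uniform grid by linear
    interpolation (one simple form of 'interpolation filtering').

    t: (N,) timestamps of surviving fixes; coords: (N, 3) positions P(x, y, z).
    """
    grid = np.arange(t[0], t[-1], 1.0 / rate_hz)
    filled = np.column_stack([np.interp(grid, t, coords[:, k]) for k in range(3)])
    return grid, filled
```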
Continuing with the example shown in FIG. 3, for step S322, each positioning point matched to an image has corresponding position coordinates and a field-of-view orientation. Taking the positioning point P matched to the current image as an example: P, which lies on or inside the 3D model Ω(p), is taken as the starting point of the ray, and V is the field-of-view orientation, i.e., the direction in which the ray extends. From this ray, the intersection point Q of the ray from P in direction V with the 3D model Ω(p) can be obtained.
Further, for step S323, based on the region division of the 3D model, the sub-region to which the intersection point Q belongs can be obtained. In this example, Q belongs to sub-region 11, so the image corresponding to P is mapped to sub-region 11 and becomes one image of the sub-region image set corresponding to sub-region 11.
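One way to realize steps S322-S323 is ray casting against the reconstructed triangle mesh, for example with the trimesh library as sketched below. The per-face face_labels array encoding the 12 sub-regions is an assumed data structure; the patent does not say how the region division is stored.

```python
import numpy as np
import trimesh

def map_image_to_subregion(mesh: trimesh.Trimesh, face_labels: np.ndarray,
                           p: np.ndarray, v: np.ndarray) -> int:
    """Cast a ray from positioning point p along view direction v and return
    the sub-region label of the face containing the nearest hit (-1 if none).

    face_labels: assumed (n_faces,) array assigning each face to a sub-region.
    """
    locations, _, tri_idx = mesh.ray.intersects_location(
        ray_origins=p[None, :], ray_directions=v[None, :])
    if len(locations) == 0:
        return -1
    # Several faces may be hit; take the nearest intersection as point Q.
    nearest = np.argmin(np.linalg.norm(locations - p, axis=1))
    return int(face_labels[tri_idx[nearest]])
```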
Preferably, between steps S32 and S33, the method further includes: constructing a cross-validation set; verifying the attribution of the images in each sub-region image set, and/or verifying the image quality of the images in each sub-region image set; and, if an image does not belong to the set to which it is currently attributed, and/or its image quality score is lower than a preset score, transferring the current image to the cross-validation set.
Here, various errors, such as unreasonable placement of sub-region boundaries, may cause an image to be attributed to the wrong set. In addition, low-quality images adversely affect subsequent retrieval. Therefore, cross-validation is performed on the images between steps S32 and S33 to eliminate bad data from each sub-region image set. In the specific example of the present invention, there are various methods for verifying the images: the present invention incorporates by reference the entire contents of Chinese Patent Application Publication No. CN106934799A, entitled "Capsule Endoscope Image-Assisted Reading System and Method", to verify the images in each sub-region image set and judge whether they belong to the set to which they are currently attributed. In addition, the entire contents of Chinese Patent Application Publication No. CN111932532A, entitled "No-Reference Image Evaluation Method for Capsule Endoscope, Electronic Device and Medium", are incorporated to verify the image quality of the images in each sub-region image set and judge whether it is suitable; images scoring too low are transferred to the cross-validation set. The score in the present invention may be the image quality evaluation score and/or the image content evaluation score and/or the comprehensive score of the referenced patents, which is not elaborated here.
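The bookkeeping for this step can be as simple as the sketch below. The predicates predict_region and quality_score are hypothetical stand-ins for the verification methods of the two referenced patents, and min_score is an arbitrary placeholder for the preset score.

```python
from dataclasses import dataclass, field

@dataclass
class SubRegionSets:
    """Sub-region image sets plus the cross-validation set built between S32 and S33."""
    regions: dict[int, list]                     # sub-region id -> images
    cross_validation: list = field(default_factory=list)

def cross_validate(sets: SubRegionSets, predict_region, quality_score,
                   min_score: float = 0.5) -> None:
    for region_id, images in sets.regions.items():
        kept = []
        for img in images:
            wrong_set = predict_region(img) != region_id   # attribution check
            low_quality = quality_score(img) < min_score   # quality check
            if wrong_set or low_quality:
                sets.cross_validation.append(img)          # set aside, not discarded
            else:
                kept.append(img)
        sets.regions[region_id] = kept
```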
It should be noted that errors may also occur when verifying the attribution of the images in each sub-region image set and/or verifying their image quality. The cross-validation set is therefore retained; the data in it can later be selectively consulted or re-classified, which is not elaborated further here.
For step S33, the reading model thus formed is visualized on the front-end display interface of the computer. When a sub-region of the reading model is selected with assistance, the associated sub-region image set is opened, so that any image in the current sub-region image set can be selectively opened.
Preferably, after all images have been attributed to their corresponding sub-region image sets, the method further includes: performing attribute recognition and annotation on each image in each sub-region image set; grouping the images with the same kind of annotation in each sub-region image set; and generating, on the reading model, one mapping identifier for each group of images bearing the same kind of annotation.
Here, as shown in FIG. 5, lesions in the images are recognized and their types specifically annotated; the annotation can be done manually or automatically by an AI (Artificial Intelligence) system. Further, images bearing the same annotation are grouped together, and an additional mapping identifier is generated on the reading model; when the mapping identifier is selected, the corresponding images can be opened accordingly, which facilitates subsequent centralized queries.
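A possible sketch of the grouping and identifier step follows; the "marker:" naming scheme is an invented convention for illustration, since the patent only requires one identifier per group of like-annotated images.

```python
from collections import defaultdict

def build_mapping_identifiers(annotated_images):
    """Group images by lesion annotation and mint one identifier per group.

    annotated_images: iterable of (image_id, annotation) pairs, where the
    annotation strings are whatever the (manual or AI) labeling step emits.
    """
    groups = defaultdict(list)
    for image_id, annotation in annotated_images:
        groups[annotation].append(image_id)
    # One selectable identifier per annotation class, shown on the reading
    # model; selecting it opens every image in the group.
    return {f"marker:{annotation}": ids for annotation, ids in groups.items()}
```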
Further, an embodiment of the present invention provides an electronic device including a memory and a processor, the memory storing a computer program that can run on the processor, wherein the processor, when executing the program, implements the steps in the above method for constructing a reading model based on a capsule endoscope.
Further, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps in the above method for constructing a reading model based on a capsule endoscope.
In summary, the method, device and medium for constructing a reading model based on a capsule endoscope of the present invention map the acquired images onto the 3D model of the working area to form a reading model, improving the visibility of the inspection. Through the various mappings, the required images can be conveniently obtained from the simulated reading model in subsequent use, enhancing interactive initiative and operability; this facilitates observation, saves reading time, and improves detection efficiency.
It should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution; this manner of description is adopted merely for clarity. Those skilled in the art should regard the specification as a whole, and the technical solutions in the embodiments may also be appropriately combined to form other embodiments understandable to those skilled in the art.
The detailed descriptions set out above are merely specific explanations of feasible embodiments of the present invention and are not intended to limit its scope of protection; all equivalent embodiments or modifications that do not depart from the technical spirit of the present invention shall be included within its scope of protection.

Claims (10)

  1. A method for constructing a reading model based on a capsule endoscope, wherein the method comprises:
    driving the capsule endoscope to move within a working area, sequentially recording, at a predetermined first frequency, the position coordinates and field-of-view orientation of the capsule endoscope at each positioning point it reaches, and driving the capsule endoscope to sequentially capture and record images at a predetermined second frequency;
    constructing, from the recorded position coordinates of the capsule endoscope at each positioning point, a 3D model corresponding to the outer contour of the working area;
    and mapping the recorded images onto the 3D model to form a reading model.
  2. The method for constructing a reading model based on a capsule endoscope according to claim 1, wherein constructing, from the recorded position coordinates of the capsule endoscope at each positioning point, a 3D model corresponding to the outer contour of the working area comprises:
    acquiring all position coordinates of the capsule endoscope to form original point cloud data;
    sequentially performing Gaussian filtering, voxelization, voxel shell extraction, smoothing filtering, and surface reconstruction on the original point cloud data to form the 3D model corresponding to the outer contour of the working area;
    the 3D model being represented by Ω(p),
    [formula image PCTCN2022080840-appb-100001 — not reproduced here]
  3. The method for constructing a reading model based on a capsule endoscope according to claim 1, wherein mapping the recorded images onto the 3D model to form a reading model comprises:
    dividing the 3D model into a plurality of sub-regions according to the structure of the working area;
    mapping the recorded images to each sub-region to form sub-region image sets, each image being mapped to exactly one sub-region;
    and stitching the sub-region image sets on the 3D model to form the reading model.
  4. The method for constructing a reading model based on a capsule endoscope according to claim 3, wherein mapping the recorded images to each sub-region to form sub-region image sets comprises:
    traversing each of the images and obtaining the positioning point closest in time to the acquisition time of the current image;
    taking the position coordinates of the obtained positioning point as a starting point, casting a virtual ray whose extension direction is the corresponding field-of-view orientation, and obtaining the intersection point of the virtual ray with the 3D model;
    and obtaining the sub-region to which the position coordinates of the current intersection point belong, and mapping the current image to that sub-region to form a sub-region image set.
  5. The method for constructing a reading model based on a capsule endoscope according to claim 1, wherein the method further comprises: configuring the first frequency to be higher than the second frequency.
  6. The method for constructing a reading model based on a capsule endoscope according to claim 1, wherein interpolation filtering is performed on the existing positioning points along the time sequence, so as to supplement missing positioning points on the basis of the existing ones.
  7. The method for constructing a reading model based on a capsule endoscope according to claim 4, wherein, after all images have been attributed to their corresponding sub-region image sets, the method further comprises:
    constructing a cross-validation set;
    verifying the attribution of the images in each sub-region image set, and/or verifying the image quality of the images in each sub-region image set,
    and, if an image does not belong to the set to which it is currently attributed, and/or its image quality score is lower than a preset score, transferring the current image to the cross-validation set.
  8. The method for constructing a reading model based on a capsule endoscope according to claim 4, wherein, after all images have been attributed to their corresponding sub-region image sets, the method further comprises:
    performing attribute recognition and annotation on each image in each sub-region image set;
    grouping the images with the same kind of annotation in each sub-region image set;
    and generating, on the reading model, one mapping identifier for each group of images bearing the same kind of annotation.
  9. An electronic device comprising a memory and a processor, the memory storing a computer program that can run on the processor, wherein the processor, when executing the program, implements the steps in a method for constructing a reading model based on a capsule endoscope, the method comprising:
    driving the capsule endoscope to move within a working area, sequentially recording, at a predetermined first frequency, the position coordinates and field-of-view orientation of the capsule endoscope at each positioning point it reaches, and driving the capsule endoscope to sequentially capture and record images at a predetermined second frequency;
    constructing, from the recorded position coordinates of the capsule endoscope at each positioning point, a 3D model corresponding to the outer contour of the working area;
    and mapping the recorded images onto the 3D model to form a reading model.
  10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps in a method for constructing a reading model based on a capsule endoscope, the method comprising:
    driving the capsule endoscope to move within a working area, sequentially recording, at a predetermined first frequency, the position coordinates and field-of-view orientation of the capsule endoscope at each positioning point it reaches, and driving the capsule endoscope to sequentially capture and record images at a predetermined second frequency;
    constructing, from the recorded position coordinates of the capsule endoscope at each positioning point, a 3D model corresponding to the outer contour of the working area;
    and mapping the recorded images onto the 3D model to form a reading model.
PCT/CN2022/080840 2021-03-19 2022-03-15 Method, device and medium for constructing a reading model based on a capsule endoscope WO2022194126A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/551,297 US20240188791A1 (en) 2021-03-19 2022-03-15 Method for building image reading model based on capsule endoscope, device, and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110296737.2 2021-03-19
CN202110296737.2A CN113052956B (zh) Method, device and medium for constructing a reading model based on a capsule endoscope

Publications (1)

Publication Number Publication Date
WO2022194126A1 true WO2022194126A1 (zh) 2022-09-22

Family

ID=76513877

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/080840 WO2022194126A1 (zh) Method, device and medium for constructing a reading model based on a capsule endoscope 2021-03-19 2022-03-15

Country Status (3)

Country Link
US (1) US20240188791A1 (zh)
CN (1) CN113052956B (zh)
WO (1) WO2022194126A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052956B (zh) 2021-03-19 2023-03-10 安翰科技(武汉)股份有限公司 Method, device and medium for constructing a reading model based on a capsule endoscope
CN114429458A (zh) 2022-01-21 2022-05-03 小荷医疗器械(海南)有限公司 Endoscope image processing method and apparatus, readable medium, and electronic device
CN114637871A (zh) 2022-03-23 2022-06-17 安翰科技(武汉)股份有限公司 Method and apparatus for establishing a digestive tract database, and storage medium
CN116721175B (zh) 2023-08-09 2023-10-10 安翰科技(武汉)股份有限公司 Image display method, image display apparatus, and capsule endoscope system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105580019B (zh) 2013-07-30 2018-09-14 哈特弗罗公司 Method and system for modeling blood flow with boundary conditions for optimized diagnostic performance
JP6633383B2 (ja) 2015-12-17 2020-01-22 株式会社Aze Image diagnosis support apparatus, control method therefor, program, and storage medium
CN110033465B (zh) 2019-04-18 2023-04-25 天津工业大学 Real-time three-dimensional reconstruction method applied to binocular endoscopic medical images
CN110051434A (zh) 2019-04-25 2019-07-26 厦门强本科技有限公司 Surgical navigation method and terminal combining AR with an endoscope
JP7506565B2 (ja) 2020-09-14 2024-06-26 株式会社Screenホールディングス Image processing apparatus, inspection apparatus, and program
CN112075914B (zh) 2020-10-14 2023-06-02 深圳市资福医疗技术有限公司 Capsule endoscopy system
CN112261399B (zh) 2020-12-18 2021-03-16 安翰科技(武汉)股份有限公司 Three-dimensional reconstruction method for capsule endoscope images, electronic device, and readable storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103300862A (zh) * 2013-05-24 2013-09-18 浙江大学宁波理工学院 Method for measuring the depth and three-dimensional size of lesion tissue with a capsule endoscope
US20170046833A1 (en) * 2015-08-10 2017-02-16 The Board Of Trustees Of The Leland Stanford Junior University 3D Reconstruction and Registration of Endoscopic Data
CN108430373A (zh) * 2015-10-28 2018-08-21 安多卓思公司 Apparatus and method for tracking the position of an endoscope within a patient's body
CN105942959A (zh) * 2016-06-01 2016-09-21 安翰光电技术(武汉)有限公司 Capsule endoscope system and three-dimensional imaging method thereof
CN106934799A (zh) * 2017-02-24 2017-07-07 安翰光电技术(武汉)有限公司 Capsule endoscope image-assisted reading system and method
CN112089392A (zh) * 2020-10-14 2020-12-18 深圳市资福医疗技术有限公司 Capsule endoscope control method, apparatus, device, system, and storage medium
CN113052956A (zh) 2021-03-19 2021-06-29 安翰科技(武汉)股份有限公司 Method, device and medium for constructing a reading model based on a capsule endoscope

Also Published As

Publication number Publication date
US20240188791A1 (en) 2024-06-13
CN113052956A (zh) 2021-06-29
CN113052956B (zh) 2023-03-10

Similar Documents

Publication Publication Date Title
WO2022194126A1 (zh) Method, device and medium for constructing a reading model based on a capsule endoscope
JP4631057B2 (ja) Endoscope system
US10970862B1 (en) Medical procedure using augmented reality
JP7127785B2 (ja) Information processing system, endoscope system, trained model, information storage medium, and information processing method
US20110032347A1 (en) Endoscopy system with motion sensors
EP2452649A1 (en) Visualization of anatomical data by augmented reality
WO2022194014A1 (zh) Completeness self-checking method for capsule endoscope, electronic device, and readable storage medium
JP2006519631A (ja) System and method for performing virtual endoscopy
Liu et al. Global and local panoramic views for gastroscopy: an assisted method of gastroscopic lesion surveillance
WO2022194015A1 (zh) Sub-region completeness self-checking method for capsule endoscope, device, and readable storage medium
CN103402434A (zh) Medical image diagnosis apparatus, medical image display apparatus, medical image processing apparatus, and medical image processing program
Merritt et al. Real-time CT-video registration for continuous endoscopic guidance
JP6493885B2 (ja) Image registration apparatus, operating method of image registration apparatus, and image registration program
Wang et al. 3-D tracking for augmented reality using combined region and dense cues in endoscopic surgery
CN113197665A (zh) Virtual-reality-based minimally invasive surgery simulation method and system
CN111477318B (zh) Virtual ultrasound probe tracking method for remote manipulation
US10102638B2 (en) Device and method for image registration, and a nontransitory recording medium
CN113786229B (zh) AR-augmented-reality-based assisted puncture navigation system
JP7388648B2 (ja) Endoscopic diagnosis support method and endoscopic diagnosis support system
US11601732B2 (en) Display system for capsule endoscopic image and method for generating 3D panoramic view
US11657547B2 (en) Endoscopic surgery support apparatus, endoscopic surgery support method, and endoscopic surgery support system
WO2024028934A1 (ja) Endoscopy support apparatus, endoscopy support method, and recording medium
Lin et al. Augmented‐reality‐based surgical navigation for endoscope retrograde cholangiopancreatography: A phantom study
US20240016365A1 (en) Image processing device, method, and program
Chen QUiLT (Quantitative Ultrasound in Longitudinal Tissue Tracking): Stitching 2D images into 3D Volumes for Organ Health Monitoring

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 22770481; Country of ref document: EP; Kind code of ref document: A1
WWE Wipo information: entry into national phase
    Ref document number: 18551297; Country of ref document: US
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 22770481; Country of ref document: EP; Kind code of ref document: A1
Kind code of ref document: A1