CN117204791A - Endoscopic instrument guidance method and system - Google Patents

Endoscopic instrument guidance method and system

Info

Publication number
CN117204791A
Authority
CN
China
Prior art keywords
instrument
camera
endoscopic
plane
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311308866.4A
Other languages
Chinese (zh)
Inventor
洪凯程
郭运博
范梅生
李俊博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Hi Tech Achievements Transformation Brokerage Co ltd
Suzhou Institute of Biomedical Engineering and Technology of CAS
Original Assignee
Jinan Hi Tech Achievements Transformation Brokerage Co ltd
Suzhou Institute of Biomedical Engineering and Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Hi Tech Achievements Transformation Brokerage Co ltd, Suzhou Institute of Biomedical Engineering and Technology of CAS filed Critical Jinan Hi Tech Achievements Transformation Brokerage Co ltd
Priority to CN202311308866.4A priority Critical patent/CN117204791A/en
Publication of CN117204791A publication Critical patent/CN117204791A/en
Pending legal-status Critical Current

Landscapes

  • Endoscopes (AREA)

Abstract

The invention discloses an endoscopic instrument guidance method and system, belonging to the field of medical image processing. Feature extraction and matching are performed on the collected sequence of endoscopic image frames, and the real-time position of the camera is obtained by calculation; then, while the doctor operates the instrument, the angle and extension distance of the instrument are obtained through sensors, and the relative positional relationship between the camera and the instrument is calculated; finally, the overlap between the instrument and the camera's field of view is obtained in three-dimensional space and projected onto the corresponding camera image to achieve real-time guidance. Through the above steps, only two additional sensors are needed to obtain the extended position of the endoscopic instrument and achieve real-time guidance. The system is small, adds no extra burden to the equipment, has fast processing speed and good real-time performance, is compatible with image resolutions from 480p to 1080p, and supports frame-rate output of up to 60 fps. Position detection accuracy is high and consistent, and high instrument positioning and guidance accuracy can be maintained during long examinations/operations (more than 2 hours).

Description

Endoscopic instrument guidance method and system

Technical field

The present invention relates to the field of medical image processing, and in particular to an endoscopic instrument guidance method and system.

Background art

A medical endoscope is a minimally invasive medical device for examination or surgical assistance. An endoscope includes an optical lens, a camera, and mechanical auxiliary devices. It enters the human body through natural orifices or small external incisions, allowing observation of internal diseased tissue and assisting in diagnosis, treatment, and surgical procedures. Compared with traditional medical methods, endoscopic systems offer significant advantages such as less trauma, ease of use, and shorter operation time, and are widely used across departments.

The endoscope enters the human body through a natural orifice. The insertion section at the front end of the endoscope body is a soft, flexible structure containing a camera, a light source, a water/air channel, and an instrument channel; the rear end of the endoscope body is the operating handle, which is connected to the host. When using an endoscope for biopsy or surgery, the doctor controls the direction of the distal end of the scope through the handle. After the scope is fixed, the surgical instrument is inserted into the instrument channel, and the extension angle of the instrument is controlled by operating the elevator wire inside the channel to complete the operation. Since the endoscope camera and the exit of the instrument channel are not on the same plane, the instrument often falls outside the camera's field of view. In this case the doctor cannot see the instrument on the screen and must adjust the instrument to a suitable angle by experience, which is inconvenient.

Currently, commonly used endoscopic surgery/examination guidance methods mostly require the assistance of large external equipment, such as CT, ultrasound, and MRI imaging devices. Intraoperative guidance is carried out through multi-modal image registration or three-dimensional modeling, which requires a large amount of computation, while the large equipment occupies considerable space and is expensive.

Summary of the invention

In order to overcome the shortcomings of the prior art, one object of the present invention is to provide an endoscopic instrument guidance method that realizes instrument guidance during surgery/examination without the need for large equipment, with a small computational load, high processing speed, good real-time performance, and strong scalability.

In order to overcome the shortcomings of the prior art, the second object of the present invention is to provide an endoscopic instrument guidance system that realizes instrument guidance during surgery/examination without the need for large equipment, with a small computational load, high processing speed, good real-time performance, and strong scalability.

One of the objects of the present invention is achieved by the following technical solution:

An endoscopic instrument guidance method includes the following steps:

S1: The camera acquires multiple endoscopic images, and feature extraction and matching are performed on adjacent image frames. The camera pose is solved from the matching results; after the camera pose is obtained, the spatial information of all feature points is computed, and the camera pose is optimized according to this spatial information;

S2: When the instrument is operated, sensors measure the angle and distance of the instrument's movement relative to the instrument channel, from which the position of the instrument tip relative to the instrument channel is obtained and the relative positional relationship between the camera and the instrument is calculated;

S3: The instrument tip and the instrument channel plane are connected in three-dimensional space to construct an extension line, the instrument is extended into the camera's field of view, and the extension line is projected onto the camera image to predict and guide the instrument position.

Further, steps S1 and S2 operate in parallel, and step S1 runs throughout the entire endoscopic instrument guidance process.

Further, step S1 specifically includes the following steps:

S11: Camera initialization: rotate and translate the initial lens to acquire two initial image frames;

S12: Feature extraction: extract feature points from two adjacent image frames and compute BRIEF descriptors;

S13: Perform descriptor matching on all feature points to obtain feature point pairs;

S14: Camera motion solution: when the number of feature point pairs is less than a preset value, return to step S11; when the number of feature point pairs is greater than or equal to the preset value, solve the camera motion from the feature point pairs to obtain the camera rotation matrix R and translation matrix t, yielding the latest camera position;

S15: Obtain the spatial information of the feature points: through triangulation, using the different 2D projections of the same spatial feature point in two consecutive frames, solve for the spatial position of the feature point by least squares;

S16: Camera pose optimization: store the obtained three-dimensional feature point information; when the number of stored image frames is less than 10, the camera pose remains unchanged; when the number of stored image frames is greater than or equal to 10, optimize the camera pose;

S17: Continue acquiring images and repeat steps S12 to S16 until the endoscopic instrument guidance is completed.

Further, in step S11, the camera coordinate system at the time the first image frame is acquired is taken as the world coordinate system and used as the coordinate system for all calculations in the endoscopic instrument guidance method.

Further, in step S14, the preset value is 8, and the camera motion is solved using the 2D-2D epipolar constraint.

Further, step S2 specifically includes the following steps:

S21: Align the instrument with the initial calculation plane: insert the instrument into the instrument channel of the endoscope while reading the position sensor; stop when the reading reaches the internal length of the endoscope's instrument channel and zero the position sensor reading, ensuring that the instrument tip is aligned with the exit plane of the channel;

S22: Operate the instrument and acquire motion parameters: operate the instrument, and obtain the distance moved and the deflection angle of the instrument relative to its initial position from the readings of the position sensor and the rotary encoder, respectively;

S23: Instrument tip position calculation: since the deflection of the instrument is a single-degree-of-freedom motion, compute the positional relationship of the instrument tip relative to the instrument channel plane from the movement distance and the deflection angle;

S24: Determine the positional relationship between the instrument and the camera: the positional relationship between the instrument channel plane and the camera is fixed and known, so the positional relationship between the instrument and the camera is obtained from the positional relationship between the instrument tip and the instrument channel plane.

Further, in step S23, the positional relationship of the instrument tip relative to the instrument channel plane is expressed as a rotation matrix R and a translation vector t.

Further, step S3 specifically includes the following steps:

S31: Find the nearest and farthest feature points: read the camera position at the current moment, obtain the set of feature points extracted during camera localization, compute the distance from every feature point to the camera, and obtain the points farthest from and nearest to the camera, respectively;

S32: Obtain the camera view frustum;

S33: Find the intersection points of the instrument extension line: connect the instrument tip with the point on the channel end plane and extend a ray that intersects the camera view frustum at two points; solve for the spatial coordinates of the two intersection points;

S34: Construct the guide line: according to the camera projection equation, project the three-dimensional coordinates of the two intersection points onto the camera's pixel plane to obtain two-dimensional pixel coordinates, then connect the two points in the camera image to obtain the instrument guide line and achieve guidance.

Further, step S32 specifically includes the following steps:

S321: Connect the camera point to the nearest point and to the farthest point, and take the planes through the nearest and farthest points perpendicular to the respective connecting lines as the near and far frustum planes;

S322: Calculate the angle θ between the near frustum plane and the far frustum plane, and rotate the two planes simultaneously in opposite directions by θ/2 to obtain mutually parallel frustum planes;

S323: Cast rays according to the camera's horizontal and vertical fields of view, intersecting the near/far frustum planes to obtain the near clipping plane and the far clipping plane; the region enclosed between the two clipping planes is the camera's visible view frustum.

The second object of the present invention is achieved by the following technical solution:

An endoscopic instrument guidance system for implementing the above endoscopic instrument guidance method includes a camera localization module, an instrument position detection module, and an instrument guidance module. The camera localization module calculates and optimizes the camera pose from the images acquired by the camera. The instrument position detection module includes an encoder and a position sensor: the encoder obtains the angle of the instrument's movement relative to the instrument channel, the position sensor obtains the distance of the instrument's movement relative to the instrument channel, and the position of the instrument tip relative to the instrument channel is calculated from the angle and the distance. The instrument guidance module connects the instrument tip and the channel plane in three-dimensional space, constructs an extension line, extends the instrument into the camera's field of view, and projects the extension line onto the camera image to predict and guide the instrument position.

Compared with the prior art, the endoscopic instrument guidance method of the present invention performs feature extraction and matching on the collected sequence of endoscopic image frames and obtains the real-time position of the camera by calculation; then, while the doctor operates the instrument, sensors provide the angle and extension distance of the instrument, and the relative positional relationship between the camera and the instrument is calculated; finally, the overlap between the instrument and the camera's field of view is obtained in three-dimensional space and projected onto the corresponding camera image to achieve real-time guidance. Through the above steps, only two additional sensors are needed to obtain the extended position of the endoscopic instrument and achieve real-time guidance. The system is small, adds no extra burden to the equipment, has fast processing speed and good real-time performance, is compatible with image resolutions from 480p to 1080p, and supports frame-rate output of up to 60 fps. Position detection accuracy is high and consistent, and high instrument positioning and guidance accuracy can be maintained during long examinations/operations (more than 2 hours).

Description of the drawings

Figure 1 is a flow chart of the endoscopic instrument guidance method of the present invention;

Figure 2 is a flow chart of camera pose optimization in the endoscopic instrument guidance method of Figure 1;

Figure 3 is a flow chart of instrument position detection and instrument guidance in the endoscopic instrument guidance method of Figure 1;

Figure 4 is a schematic diagram of the front-end structure of the endoscope of the present invention;

Figure 5 is a schematic diagram of instrument guidance in the endoscopic instrument guidance method of the present invention.

Detailed description of the embodiments

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.

It should be noted that when a component is said to be "fixed to" another component, it may be directly on the other component or an intermediate component may be present through which it is fixed. When a component is said to be "connected to" another component, it may be directly connected to the other component or an intermediate component may be present. When a component is said to be "disposed on" another component, it may be directly on the other component or an intermediate component may be present. The terms "vertical", "horizontal", "left", "right", and similar expressions used herein are for illustrative purposes only.

Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the technical field to which the present invention belongs. The terminology used in the description of the present invention is for the purpose of describing specific embodiments only and is not intended to limit the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.

Figure 1 shows the endoscopic instrument guidance method of the present invention, which includes the following steps:

S1: The camera acquires multiple endoscopic images, and feature extraction and matching are performed on adjacent image frames. The camera pose is solved from the matching results; after the camera pose is obtained, the spatial information of all feature points is computed, and the camera pose is optimized according to this spatial information;

S2: When the instrument is operated, sensors measure the angle and distance of the instrument's movement relative to the instrument channel, from which the position of the instrument tip relative to the instrument channel is obtained and the relative positional relationship between the camera and the instrument is calculated;

S3: The instrument tip and the instrument channel plane are connected in three-dimensional space to construct an extension line, the instrument is extended into the camera's field of view, and the extension line is projected onto the camera image to predict and guide the instrument position.

Steps S1 and S2 operate in parallel, and step S1 runs throughout the entire endoscopic instrument guidance process, achieving real-time guidance and correction of the endoscopic instrument.

Referring to Figure 2, step S1 specifically includes the following steps:

S11: Camera initialization: rotate and translate the initial lens to acquire two initial image frames, taking the camera coordinate system at the time the first frame is acquired as the world coordinate system, which serves as the coordinate system for all calculations in the endoscopic instrument guidance method;

S12: Feature extraction: use the ORB algorithm to extract feature points from two adjacent image frames and compute BRIEF descriptors;

S13: Use the fast approximate nearest-neighbor algorithm (FLANN) to perform descriptor matching on all feature points and obtain feature point pairs;
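
As an illustration of steps S12 and S13, the following minimal Python sketch (assuming OpenCV is available; function and parameter names are illustrative rather than taken from the patent) extracts ORB keypoints with their BRIEF-style binary descriptors and matches them with FLANN using an LSH index and a ratio test:

```python
import cv2
import numpy as np

def match_adjacent_frames(prev_gray, curr_gray, n_features=1000, ratio=0.75):
    """Return matched pixel coordinates (Nx2, Nx2) between two adjacent grayscale frames."""
    orb = cv2.ORB_create(nfeatures=n_features)        # ORB keypoints + rBRIEF descriptors
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.empty((0, 2)), np.empty((0, 2))

    # FLANN with an LSH index is the usual configuration for binary descriptors.
    index_params = dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1)
    flann = cv2.FlannBasedMatcher(index_params, dict(checks=50))
    knn = flann.knnMatch(des1, des2, k=2)

    pts1, pts2 = [], []
    for pair in knn:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:  # Lowe ratio test
            pts1.append(kp1[pair[0].queryIdx].pt)
            pts2.append(kp2[pair[0].trainIdx].pt)
    return np.asarray(pts1, dtype=np.float64), np.asarray(pts2, dtype=np.float64)
```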

S14: Camera motion solution: when the number of feature point pairs is less than the preset value, return to step S11; when the number of feature point pairs is greater than or equal to the preset value, solve the camera motion from the feature point pairs using the 2D-2D epipolar constraint to obtain the camera rotation matrix R and translation matrix t, yielding the latest camera position;

S15: Obtain the spatial information of the feature points: through triangulation, using the different 2D projections of the same spatial feature point in two consecutive frames, solve for the spatial position of the feature point by least squares;
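
Steps S14 and S15 can be sketched as follows, assuming a calibrated intrinsic matrix K, the matched points from the previous sketch, and the preset minimum of 8 pairs; the essential matrix provides the 2D-2D epipolar solution for R and t, and the inlier matches are then triangulated:

```python
import cv2
import numpy as np

def solve_motion_and_triangulate(pts1, pts2, K, min_pairs=8):
    """2D-2D epipolar solution for (R, t), followed by triangulation of the inlier matches."""
    if len(pts1) < min_pairs:
        return None                                   # too few pairs: caller returns to S11

    E, inliers = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # Projection matrices of the two views; the first frame is the reference.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])

    good = pose_mask.ravel() > 0
    if not np.any(good):
        return R, t, np.empty((0, 3))
    X_h = cv2.triangulatePoints(P1, P2, pts1[good].T, pts2[good].T)   # 4xN homogeneous
    X = (X_h[:3] / X_h[3]).T                                          # Nx3 spatial positions
    return R, t, X
```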

S16: Camera pose optimization: store the obtained three-dimensional feature point information; when the number of stored image frames is less than 10, the camera pose remains unchanged; when the number of stored image frames is greater than or equal to 10, optimize the camera pose using a bundle-adjustment (BA) cost function;
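
The BA cost function of step S16 can be illustrated by the reprojection-error minimization below. This is a deliberately simplified sketch that refines only the latest camera pose against the stored 3D feature points (full bundle adjustment would jointly refine earlier poses and the points as well) and assumes SciPy is available:

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

def refine_pose(R, t, points_3d, points_2d, K):
    """Refine (R, t) by minimizing the reprojection error of the stored 3D feature points."""
    points_3d = np.asarray(points_3d, dtype=np.float64)
    points_2d = np.asarray(points_2d, dtype=np.float64)
    if len(points_3d) < 3:                           # not enough observations to refine
        return R, t

    rvec0, _ = cv2.Rodrigues(R)                      # axis-angle parameterization of R
    x0 = np.hstack([rvec0.ravel(), np.asarray(t).ravel()])

    def residuals(x):
        proj, _ = cv2.projectPoints(points_3d, x[:3].reshape(3, 1), x[3:].reshape(3, 1), K, None)
        return (proj.reshape(-1, 2) - points_2d).ravel()

    sol = least_squares(residuals, x0, method="lm")  # Levenberg-Marquardt on the BA-style cost
    R_opt, _ = cv2.Rodrigues(sol.x[:3])
    return R_opt, sol.x[3:].reshape(3, 1)
```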

S17: Continue acquiring images and repeat steps S12 to S16 until the endoscopic instrument guidance is completed.

Specifically, in step S14, the preset value is 8, and the camera motion is solved using the 2D-2D epipolar constraint.

Referring to Figures 3 and 4, step S2 specifically includes the following steps:

S21: Align the instrument with the initial calculation plane: insert the instrument into the instrument channel of the endoscope while reading the position sensor; stop when the reading reaches the internal length of the endoscope's instrument channel and zero the position sensor reading, ensuring that the instrument tip is aligned with the exit plane of the channel;

S22: Operate the instrument and acquire motion parameters: operate the instrument, and obtain the distance moved and the deflection angle of the instrument relative to its initial position from the readings of the position sensor and the rotary encoder, respectively;

S23: Instrument tip position calculation: since the deflection of the instrument is a single-degree-of-freedom motion, compute the positional relationship of the instrument tip relative to the instrument channel plane from the movement distance and the deflection angle;

S24: Determine the positional relationship between the instrument and the camera: the positional relationship between the instrument channel plane and the camera is fixed and known, so the positional relationship between the instrument and the camera is obtained from the positional relationship between the instrument tip and the instrument channel plane.

Specifically, in step S23, the positional relationship of the instrument tip relative to the instrument channel plane is expressed as a rotation matrix R and a translation vector t.
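
The single-degree-of-freedom model of steps S23 and S24 can be illustrated as below; the choice of deflection axis, the straight-extension model, and the names alpha, d, R_cc and t_cc (the fixed, known channel-to-camera transform) are assumptions made for this sketch, not the patent's exact kinematics:

```python
import numpy as np

def tip_pose_in_channel_frame(d, alpha):
    """Rotation R and translation t of the instrument tip relative to the channel exit plane."""
    R = np.array([[1.0, 0.0,            0.0],
                  [0.0, np.cos(alpha), -np.sin(alpha)],
                  [0.0, np.sin(alpha),  np.cos(alpha)]])   # single-DOF deflection about x
    t = R @ np.array([0.0, 0.0, d])                        # extend length d along the deflected axis
    return R, t.reshape(3, 1)

def tip_pose_in_camera_frame(d, alpha, R_cc, t_cc):
    """Compose with the fixed, known channel-to-camera transform (step S24)."""
    R_tip, t_tip = tip_pose_in_channel_frame(d, alpha)
    return R_cc @ R_tip, R_cc @ t_tip + t_cc
```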

Referring to Figure 5, step S3 specifically includes the following steps:

S31: Find the nearest and farthest feature points: read the camera position at the current moment, obtain the set of feature points extracted during camera localization, compute the distance from every feature point to the camera, and obtain the points farthest from and nearest to the camera, respectively;

S32: Obtain the camera view frustum;

S33: Find the intersection points of the instrument extension line: connect the instrument tip with the point on the channel end plane and extend a ray that intersects the camera view frustum at two points; solve for the spatial coordinates of the two intersection points;

S34: Construct the guide line: according to the camera projection equation, project the three-dimensional coordinates of the two intersection points onto the camera's pixel plane to obtain two-dimensional pixel coordinates, then connect the two points in the camera image to obtain the instrument guide line and achieve guidance.
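
The projection of step S34 is sketched below, assuming the two frustum intersection points P1 and P2 are already expressed in the camera coordinate frame, K is the intrinsic matrix, and lens distortion is ignored:

```python
import cv2
import numpy as np

def project_point(K, X_cam):
    """Pinhole projection u = K·X / Z for a point already expressed in the camera frame."""
    uvw = K @ np.asarray(X_cam, dtype=np.float64)
    return uvw[:2] / uvw[2]

def draw_guide_line(frame, K, P1, P2, color=(0, 255, 0)):
    """Connect the two projected intersection points to form the instrument guide line."""
    u1 = tuple(int(round(v)) for v in project_point(K, P1))
    u2 = tuple(int(round(v)) for v in project_point(K, P2))
    cv2.line(frame, u1, u2, color, thickness=2)       # guide-line overlay on the camera image
    return frame
```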

Specifically, step S32 includes the following steps:

S321: Connect the camera point to the nearest point and to the farthest point, and take the planes through the nearest and farthest points perpendicular to the respective connecting lines as the near and far frustum planes;

S322: Calculate the angle θ between the near frustum plane and the far frustum plane, and rotate the two planes simultaneously in opposite directions by θ/2 to obtain mutually parallel frustum planes;

S323: Cast rays according to the camera's horizontal and vertical fields of view, intersecting the near/far frustum planes to obtain the near clipping plane and the far clipping plane; the region enclosed between the two clipping planes is the camera's visible view frustum.
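
The frustum-plane construction of steps S321-S322 and the ray/plane intersection needed in step S33 can be sketched as follows; planes are represented as a (unit normal, point) pair, and the θ/2 counter-rotation is implemented with the Rodrigues rotation formula:

```python
import numpy as np

def rotate_about_axis(v, axis, angle):
    """Rodrigues rotation of vector v about a unit axis by the given angle."""
    axis = axis / np.linalg.norm(axis)
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

def parallel_clip_planes(cam, p_near, p_far):
    """Near/far frustum planes through the nearest/farthest points, rotated to be parallel."""
    n_near = (p_near - cam) / np.linalg.norm(p_near - cam)
    n_far = (p_far - cam) / np.linalg.norm(p_far - cam)
    theta = np.arccos(np.clip(np.dot(n_near, n_far), -1.0, 1.0))
    axis = np.cross(n_near, n_far)
    if np.linalg.norm(axis) > 1e-9:                   # rotate each normal by theta/2 toward the other
        n_near = rotate_about_axis(n_near, axis, theta / 2.0)
        n_far = rotate_about_axis(n_far, axis, -theta / 2.0)
    return (n_near, p_near), (n_far, p_far)

def ray_plane_intersection(origin, direction, plane):
    """Intersection of the instrument extension ray with a clipping plane (used in step S33)."""
    n, p = plane
    denom = np.dot(n, direction)
    if abs(denom) < 1e-9:
        return None                                   # ray parallel to the plane
    s = np.dot(n, p - origin) / denom
    return origin + s * direction if s >= 0 else None
```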

The present invention also relates to an endoscopic instrument guidance system for implementing the above endoscopic instrument guidance method, including a camera localization module, an instrument position detection module, and an instrument guidance module. The camera localization module calculates and optimizes the camera pose from the images acquired by the camera. The instrument position detection module includes an encoder and a position sensor: the encoder obtains the angle of the instrument's movement relative to the instrument channel, the position sensor obtains the distance of the instrument's movement relative to the instrument channel, and the position of the instrument tip relative to the instrument channel is calculated from the angle and the distance. The instrument guidance module connects the instrument tip and the channel plane in three-dimensional space, constructs an extension line, extends the instrument into the camera's field of view, and projects the extension line onto the camera image to predict and guide the instrument position.
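
A purely structural sketch of how the three modules of the system could be composed per frame; the class and method names are hypothetical placeholders, not the patent's implementation:

```python
from dataclasses import dataclass

@dataclass
class InstrumentReading:
    distance: float        # extension distance read from the position sensor (step S22)
    angle: float           # deflection angle read from the rotary encoder (step S22)

class EndoscopeGuidanceSystem:
    """Composition of camera localization (S1), instrument position detection (S2) and guidance (S3)."""

    def __init__(self, camera_localizer, instrument_tracker, guide_renderer):
        self.camera_localizer = camera_localizer      # computes and optimizes the camera pose
        self.instrument_tracker = instrument_tracker  # reads the sensors and computes the tip pose
        self.guide_renderer = guide_renderer          # builds the extension line and draws the overlay

    def process_frame(self, frame):
        camera_pose = self.camera_localizer.update(frame)
        reading = self.instrument_tracker.read()                  # -> InstrumentReading
        tip_pose = self.instrument_tracker.tip_pose(reading)
        return self.guide_renderer.overlay(frame, camera_pose, tip_pose)
```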

The endoscopic instrument guidance method of the present invention performs feature extraction and matching on the collected sequence of endoscopic image frames and obtains the real-time position of the camera by calculation; then, while the doctor operates the instrument, sensors provide the angle and extension distance of the instrument, and the relative positional relationship between the camera and the instrument is calculated; finally, the overlap between the instrument and the camera's field of view is obtained in three-dimensional space and projected onto the corresponding camera image to achieve real-time guidance. Through the above steps, only two additional sensors are needed to obtain the extended position of the endoscopic instrument and achieve real-time guidance. The system is small, adds no extra burden to the equipment, has fast processing speed and good real-time performance, is compatible with image resolutions from 480p to 1080p, and supports frame-rate output of up to 60 fps. Position detection accuracy is high and consistent, and high instrument positioning and guidance accuracy can be maintained during long examinations/operations (more than 2 hours).

The above embodiments express only several implementations of the present invention, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present invention; such equivalent modifications and evolutions based on the essential technology of the present invention all fall within the scope of protection of the present invention.

Claims (10)

1. An endoscopic instrument guide method, comprising the steps of:
s1: the camera acquires a plurality of endoscope images, performs feature extraction and matching on the acquired adjacent frame images respectively, solves the pose of the camera according to the matching result, solves the spatial information of all feature points after acquiring the pose of the camera, and optimizes the pose of the camera according to the spatial information;
s2: when the instrument is operated, the angle and the distance of movement of the instrument relative to the instrument channel are obtained through the sensor, so that the position of the head end of the instrument relative to the instrument channel is obtained, and the relative position relation between the camera and the instrument is calculated;
s3: and connecting the instrument head end and the instrument channel plane in the three-dimensional space, constructing an extension line, extending the instrument to the field of view of the camera, and projecting the extension line to the camera image to realize the prediction and the guidance of the instrument position.
2. The endoscopic instrument guide method according to claim 1, wherein: steps S1 and S2 operate in parallel and step S1 extends through the entire endoscopic instrument guide procedure.
3. The endoscopic instrument guide method according to claim 1, wherein: the step S1 specifically comprises the following steps:
s11: camera initialization: rotating and translating the initial lens to obtain two frames of initial images;
s12: feature extraction: extracting feature points in two adjacent frames of images, and calculating BRIEF descriptors;
s13: carrying out descriptor matching on all the characteristic points to obtain characteristic point pairs;
s14: camera motion solution: when the number of feature point pairs is smaller than a preset value, returning to step S11; when the number of feature point pairs is larger than or equal to the preset value, solving the motion of the camera according to the feature point pairs to obtain a camera rotation matrix R and a translation matrix t, thereby obtaining the latest position of the camera;
s15: acquiring spatial information of feature points: by means of triangulation, different 2D projections of the same spatial feature points in the front frame and the rear frame are utilized, and the spatial positions of the feature points are obtained through a least square method;
s16: camera pose optimization: storing the obtained three-dimensional space feature point information, and keeping the pose of the camera unchanged when the number of stored image frames is smaller than 10; when the number of stored image frames is greater than or equal to 10, performing pose optimization of the camera;
s17: continuing to acquire images, repeating steps S12 to S16 until the guiding of the endoscopic instrument is completed.
4. The endoscopic instrument guide method according to claim 3, wherein: in step S11, the camera coordinate system in which the first frame image is acquired is used as the world coordinate system, and is used as the coordinate system calculated in the endoscopic instrument guidance method.
5. The endoscopic instrument guide method according to claim 3, wherein: in step S14, the preset value is 8, and the 2D-2D epipolar constraint is adopted to solve the motion of the camera.
6. The endoscopic instrument guide method according to claim 1, wherein: the step S2 specifically comprises the following steps:
s21: alignment of the instrument with the initial calculation plane: operating the instrument to enter the instrument channel of the endoscope, reading the reading of the position sensor during insertion, stopping the operation when the reading reaches the inner length of the instrument channel of the endoscope, and zeroing the reading of the position sensor to ensure that the head end of the instrument is aligned with the extending plane of the channel;
s22: operating the instrument to obtain motion parameters: operating the instrument, and respectively obtaining the moving distance and the deflection angle of the instrument relative to the initial position through reading the readings of the position sensor and the rotary encoder;
s23: calculating the position of the instrument head end: because the deflection of the instrument is single-degree-of-freedom motion, the position relation of the head end of the instrument relative to the plane of the instrument channel is obtained according to the moving distance and the deflection angle;
s24: obtaining the position relation between the instrument and the camera: the positional relationship between the instrument channel plane and the camera is fixedly known, and the positional relationship between the instrument and the camera is obtained through the positional relationship between the instrument head end and the instrument channel plane.
7. The endoscopic instrument guide method according to claim 6, wherein: in step S23, the positional relationship of the instrument tip with respect to the instrument channel plane is the rotation matrix R and the translation vector t.
8. The endoscopic instrument guide method according to claim 1, wherein: the step S3 specifically comprises the following steps:
s31: the nearest and farthest feature points are obtained: reading the camera position at the current moment, acquiring a feature point set extracted from camera positioning, solving the distances between all feature points and the camera, and respectively acquiring the farthest point and the nearest point from the camera;
s32: acquiring a camera vision cone:
s33: solving the intersection point of the extension lines of the instrument: connecting the instrument head end with the tail end plane point, leading out rays to intersect with the camera view cone, obtaining two intersection points, and solving the spatial position coordinates of the two intersection points;
s34: constructing a guide line: according to a camera projection equation, the three-dimensional coordinates of the two intersection points are projected onto the pixel plane of the camera to obtain two-dimensional pixel coordinates, and the two intersection points are connected in the imaging picture of the camera to obtain the instrument guide line, so that guidance is achieved.
9. The endoscopic instrument guide method according to claim 8, wherein: the step S32 specifically includes the following steps:
s321: connecting the camera points with the nearest and farthest points respectively, and taking the surfaces of the nearest and farthest points perpendicular to the connecting lines as nearest and farthest viewing cone planes;
s322: calculating the included angle θ between the near frustum plane and the far frustum plane, and rotating the two planes simultaneously in opposite directions by θ/2 to obtain mutually parallel frustum planes;
s323: casting rays according to the transverse and longitudinal field angles of the camera, intersecting with the near/far frustum planes to obtain a near clipping surface and a far clipping surface, wherein the middle part surrounded by the two clipping surfaces is the visible view frustum space of the camera.
10. An endoscopic instrument guide system for performing the endoscopic instrument guide method of any of claims 1-9, comprising a camera positioning module, an instrument position detection module and an instrument guidance module, wherein: the camera positioning module calculates the pose of the camera according to the images acquired by the camera and optimizes the pose of the camera; the instrument position detection module comprises an encoder and a position sensor, wherein the encoder obtains the movement angle of the instrument relative to the instrument channel, the position sensor obtains the movement distance of the instrument relative to the instrument channel, and the position of the tip of the instrument relative to the instrument channel is calculated from the angle and the distance; and the instrument guidance module connects the instrument head end and the channel plane in three-dimensional space, constructs an extension line, extends the instrument into the field of view of the camera, and projects the extension line onto the camera image to realize prediction and guidance of the instrument position.
CN202311308866.4A 2023-10-10 2023-10-10 Endoscopic instrument guidance method and system Pending CN117204791A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311308866.4A CN117204791A (en) 2023-10-10 2023-10-10 Endoscopic instrument guidance method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311308866.4A CN117204791A (en) 2023-10-10 2023-10-10 Endoscopic instrument guidance method and system

Publications (1)

Publication Number Publication Date
CN117204791A true CN117204791A (en) 2023-12-12

Family

ID=89042503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311308866.4A Pending CN117204791A (en) 2023-10-10 2023-10-10 Endoscopic instrument guidance method and system

Country Status (1)

Country Link
CN (1) CN117204791A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117942170A (en) * 2024-03-26 2024-04-30 北京云力境安科技有限公司 Control method, equipment and storage medium for instrument conveying length

Similar Documents

Publication Publication Date Title
JP5153620B2 (en) System for superimposing images related to a continuously guided endoscope
EP2043499B1 (en) Endoscopic vision system
CN106535806B (en) The quantitative three-dimensional imaging of surgical scene from multiport visual angle
JP5836267B2 (en) Method and system for markerless tracking registration and calibration for an electromagnetic tracking endoscope system
US20130281821A1 (en) Intraoperative camera calibration for endoscopic surgery
US20150313503A1 (en) Electromagnetic sensor integration with ultrathin scanning fiber endoscope
US20070161854A1 (en) System and method for endoscopic measurement and mapping of internal organs, tumors and other objects
WO2013111535A1 (en) Endoscopic image diagnosis assistance device, method, and program
US20220012954A1 (en) Generation of synthetic three-dimensional imaging from partial depth maps
Noonan et al. A stereoscopic fibroscope for camera motion and 3D depth recovery during minimally invasive surgery
ITUB20155830A1 (en) "NAVIGATION, TRACKING AND GUIDE SYSTEM FOR THE POSITIONING OF OPERATOR INSTRUMENTS"
WO2016009339A1 (en) Image integration and robotic endoscope control in x-ray suite
JP5750669B2 (en) Endoscope system
CN103948432A (en) Algorithm for augmented reality of three-dimensional endoscopic video and ultrasound image during operation
Sun et al. Surface reconstruction from tracked endoscopic video using the structure from motion approach
CN110051434A (en) AR operation piloting method and terminal in conjunction with endoscope
Lapeer et al. Image‐enhanced surgical navigation for endoscopic sinus surgery: evaluating calibration, registration and tracking
Gu et al. Vision–kinematics interaction for robotic-assisted bronchoscopy navigation
CN101889853B (en) Three-dimensional endoscope system capable of rotating freely for angles
CN110288653A (en) A multi-angle ultrasonic image fusion method, system and electronic equipment
CN117204791A (en) Endoscopic instrument guidance method and system
JP6145870B2 (en) Image display apparatus and method, and program
CN114126527B (en) Composite medical imaging system and method
US20240115338A1 (en) Endoscope master-slave motion control method and surgical robot system
CN109068035A (en) A kind of micro- camera array endoscopic imaging system of intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination