WO2017211087A1 - Endoscopic surgical navigation method and system - Google Patents

Endoscopic surgical navigation method and system

Info

Publication number
WO2017211087A1
WO2017211087A1 (PCT/CN2017/071006)
Authority
WO
WIPO (PCT)
Prior art keywords
image
image data
endoscope
patient
registered
Prior art date
Application number
PCT/CN2017/071006
Other languages
English (en)
French (fr)
Inventor
杨健
王涌天
梁萍
艾丹妮
楚亚奎
陈雷
丛伟建
陈钢
Original Assignee
Beijing Institute of Technology (北京理工大学)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology (北京理工大学)
Publication of WO2017211087A1

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 — Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 — Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B17/00 — Surgical instruments, devices or methods, e.g. tourniquets
    • A61B17/00234 — Surgical instruments, devices or methods, e.g. tourniquets for minimally invasive surgery

Definitions

  • the present invention relates to the field of surgical navigation, and in particular to an endoscopic surgical navigation method and system.
  • Because of its deep location, a skull base tumor is difficult to distinguish from the complex adjacent structures.
  • The diagnosis and treatment process involves multidisciplinary techniques from neurosurgery, otolaryngology, and head and neck surgery, and complete tumor resection is difficult.
  • The diagnosis and treatment of skull base tumors has progressed from naked-eye open craniotomy to endoscopic minimally invasive surgery.
  • Endoscopic minimally invasive techniques are procedurally simple with quick postoperative recovery. Endoscopic image guidance avoids the damage to facial skin structures caused by the surgical approach and reduces the probability of various complications.
  • The tissue-structure information of single-modality medical images is relatively limited.
  • Image accuracy and imaging quality have a great influence on surgical navigation.
  • The results are unsatisfactory when such images are used for surgical navigation.
  • Virtual scene reconstruction using a single rendering mode is time-consuming; the displayed anatomy conveys a weak sense of distance, easily causing judgment errors; structures are unclear; and the heavy computation stalls the navigation process, all of which greatly limit the usefulness of the navigation system.
  • The accuracy of real-time tracking and registration based on artificial marker points is affected by image quality and also by the physician's registration procedure, which adds a human source of error.
  • The present invention provides an endoscopic surgical navigation method and system, which increase image rendering speed and improve navigation accuracy.
  • an endoscopic surgical navigation method including:
  • Gaussian function attenuation is applied to the edges of the real-time endoscope image, which is then fused with the virtual scene view of the endoscope to achieve layered rendering of the scene.
  • Performing hybrid rendering of the reconstructed scene on the fully affine-matched image data to obtain a virtual scene specifically includes:
  • the CUDA acceleration mode is used to reconstruct the scene and the virtual scene is obtained.
  • the method further includes:
  • the real-time point cloud data is quickly registered based on the 3PCHM method, and the registration of the navigation image and the patient pose is corrected.
  • Performing full affine image matching, with any one kind of the multimodal medical image data as the reference and the other medical image data as images to be registered, specifically includes:
  • A rotation and translation matrix between the reference image and the image to be registered is computed to achieve full affine matching of the two images.
  • Computing, from the correspondence set, the rotation and translation matrix between the reference image and the image to be registered, and performing full affine matching of the two images, specifically includes:
  • the invention also provides an endoscopic surgical navigation system, comprising:
  • The computer is configured to read multimodal medical image data; use any one kind of medical image data in the multimodal medical image data as the reference image and the other medical image data as images to be registered, performing full affine image matching; perform hybrid rendering of the reconstructed scene on the fully affine-matched image data to obtain a virtual scene; select reference points based on the CT image data in the multimodal medical image data, select marker points on the patient's body corresponding to the reference points, and complete the registration of the CT navigation image with the patient pose using the 3-point convex hull fast registration method (3PCHM) or the ICP fast registration method;
  • The binocular camera is configured to track the endoscope and the surgical tools and acquire the pose relationship between the endoscope and surgical tools and the patient's body; according to the obtained pose relationship, the virtual scene view of the endoscope is acquired in the virtual scene;
  • The computer is further configured to locate the endoscope for the binocular camera and thereby obtain the virtual scene view of the endoscope, apply Gaussian function attenuation to the edges of the real-time endoscope image, and fuse it with the virtual scene view of the endoscope to achieve layered rendering.
  • The computer's hybrid rendering of the reconstructed scene on the fully affine-matched image data to obtain the virtual scene includes:
  • the CUDA acceleration mode is used to reconstruct the scene and the virtual scene is obtained.
  • system further includes:
  • a depth camera for acquiring real-time point cloud data of a patient's face when the patient moves during the surgery
  • the computer is further configured to quickly register the real-time point cloud data acquired by the depth camera based on the 3PCHM method, and correct registration of the navigation image with the patient pose.
  • the computer uses any one of the multi-modal medical image data as a reference image, and uses other medical image data as a to-be-registered image to perform image affine matching, which specifically includes:
  • a rotation and translation matrix between the reference image and the image to be registered is calculated to achieve full affine matching of the two images.
  • the calculating, according to the corresponding set, the rotation and translation matrix between the reference image and the image to be registered, and implementing full affine matching of the two images specifically: according to the 3PCHM method Calculating the corresponding set to obtain a rotation and translation matrix between the reference image and the image to be registered, and achieving full affine matching of the two images.
  • The endoscopic surgical navigation method and system first read multimodal medical image data, take any one kind of medical image data in the multimodal medical image data as the reference image and the other medical image data as images to be registered, and perform full affine image matching.
  • Hybrid rendering of the reconstructed scene is then performed on the fully affine-matched image data to obtain the virtual scene.
  • Reference points are selected based on the CT image data in the multimodal medical image data.
  • Marker points corresponding to the reference points are selected on the patient's body, and the 3PCHM method or the ICP fast registration method is used to complete the registration of the navigation image collected by the endoscope with the patient pose. After the patient-pose registration is completed, the endoscope and surgical tools are tracked, the pose relationship between the endoscope and surgical tools and the patient's body is acquired, and the virtual scene view of the endoscope is acquired in the virtual scene according to that pose relationship. Finally, for the virtual scene view of the endoscope, Gaussian function attenuation is applied to the view edges to achieve layered rendering.
  • This scheme not only increases rendering speed but also improves navigation accuracy through the registration of the patient pose, reducing error and improving the safety of endoscopic minimally invasive surgery.
  • FIG. 1 is a flowchart of a method for navigating an endoscopic operation according to an embodiment of the present invention
  • FIG. 2 is a flowchart of a hybrid scene rendering process according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of an endoscopic surgical navigation method based on surface point cloud fast registration according to an embodiment of the present invention
  • FIG. 4 is a schematic diagram of an application scenario and a navigation diagram of an endoscopic surgical navigation system according to an embodiment of the present invention
  • FIG. 5 is a diagram of a CPU and GPU processing module of an endoscopic surgical navigation system according to an embodiment of the present invention.
  • An embodiment of the present invention provides an endoscopic surgical navigation method. As shown in FIG. 1, the method includes:
  • Multimodal medical images are image data produced by different imaging principles, including nuclear magnetic resonance (NMR) image data, CT scan image data, X-ray image data, and ultrasound image data.
  • Full affine matching of the images specifically includes:
  • NMR or CT image data can be chosen as the image to be registered.
  • The correspondence set is processed with the 3-point convex hull matching (3PCHM) algorithm or the iterative closest point (ICP) registration algorithm to obtain the rotation and translation matrix between the two images, and the Affine transform method is used to register the two images, achieving full affine matching.
  • The key structures include the human tissue structures that are most important during surgery, such as blood vessels, nerves, and tumors.
  • CUDA (Compute Unified Device Architecture)
  • the hybrid rendering scene reconstruction method based on CUDA acceleration can greatly improve the rendering efficiency, reduce the calculation amount, and shorten the reconstruction time.
  • the positional relationship between the endoscope and the surgical tool and the patient's body is calculated according to the registration.
  • GED (Gaussian edge attenuation)
  • Semi-automatic registration of multimodal images is performed with an Affine-transform-based registration method. Exploiting the different imaging properties of the same tissue across modalities, which display the characteristics of different tissue structures, invariant image matching is achieved through full affine transformation, making it possible to exploit the large amount of anatomical information in multiple modalities simultaneously;
  • Layered rendering of the region of interest provides augmented reality guidance for the observation area; in the display and rendering region, the position of the moving cube follows changes in the endoscope pose, and with CUDA acceleration the endoscope image and the virtual scene are simultaneously rendered with different operations for that region, increasing rendering speed and improving both distance perception and scene immersion;
  • An embodiment of the present invention further provides an endoscopic surgical navigation method. As shown in FIG. 3, the method adds steps 105a and 105b to the method of FIG. 1.
  • The real-time point cloud data is quickly registered, and the registration of the navigation image with the patient pose is corrected.
  • Steps 105a and 105b further improve the real-time registration of the patient's face during surgery.
  • This process mainly accomplishes intraoperative tracking of the patient pose, in order to overcome the tracking inaccuracy caused by patient movement. If the patient pose does not move during navigation, 105a and 105b are not used.
  • When the patient pose does move during surgery, the scheme has important clinical and practical significance: it helps the real-time display during system tracking, and no image misalignment or rendering errors occur during guidance.
  • the embodiment of the invention further provides an endoscopic surgical navigation system, the system comprising:
  • the computer is used for reading multimodal medical image data, using any medical image data in the multimodal medical image data as a reference image, and using other medical image data as a to-be-registered image to perform image full affine matching;
  • Hybrid rendering of the reconstructed scene is performed on the fully affine-matched image data to obtain a virtual scene;
  • Reference points are selected based on the CT image data in the multimodal medical image data, and marker points corresponding to the reference points are selected on the patient's body.
  • The 3PCHM method or the ICP fast registration method is used to complete the registration of the CT image with the patient pose;
  • the binocular camera is used to track the endoscope and the surgical tool, and obtain the posture relationship between the endoscope and the surgical tool and the patient's body; and obtain the virtual scene view of the endoscope in the virtual scene according to the obtained posture relationship ;
  • the computer is also used to locate the endoscope for the binocular camera, and then obtain the virtual scene view of the endoscope, perform Gaussian function attenuation on the edge of the real-time image of the endoscope, and fuse with the virtual scene view of the endoscope to realize layering. Rendering.
  • the computer performs a hybrid scene reconstruction on the image data of the image with full affine matching and obtains a virtual scene, which specifically includes:
  • the CUDA acceleration method is used to reconstruct the scene and obtain the virtual scene.
  • the system also includes a depth camera.
  • a depth camera is used to acquire real-time point cloud data of a patient's face as the patient moves during the procedure.
  • the computer is further configured to quickly register the real-time point cloud data acquired by the depth camera based on the 3PCHM method, and correct the registration of the navigation image with the patient pose.
  • the computer performs image full affine matching, which specifically includes:
  • the corresponding set is calculated, and the rotation and translation matrix between the reference image and the image to be registered are obtained, and the full affine matching of the two images is realized.
  • FIG. 4 is a schematic diagram of an application scenario and a navigation diagram of an endoscopic surgical navigation system according to an embodiment of the present invention.
  • the figure includes a computer 41, a binocular camera 42, an endoscope 43 and a surgical tool 44, a depth camera 45, and a patient body 46.
  • Mark points 47 are provided on the endoscope 43 and the surgical tool 44 to facilitate binocular camera acquisition and to know the pose relationship.
  • the computer 41 includes a central processing unit (CPU) for performing functions such as mathematical calculation and image configuration.
  • a graphics processing unit (GPU) may also be included.
  • the GPU primarily performs functions related to graphics processing.
  • Figure 5 shows a CPU and GPU processing block diagram of an endoscopic surgical navigation system.
  • the main functions of the CPU include: reading multimodal medical image data; segmentation and labeling of key structures in image data; multimodal image registration based on Affine transform and 3PCHM or ICP fast registration algorithm.
  • the main functions of the GPU include: CUDA-based accelerated hybrid rendering reconstruction; 3D volume data image and patient registration; real-time tracking and registration based on depth camera and 3PCHM fast registration method; positional relationship between surgical tools and patient entities; Obtain the relative relationship between the surgical tool and the human body in any pose and the virtual perspective; enhance the hierarchical rendering information of the region of interest.
  • The present invention provides an endoscopic surgical navigation method and system, the method comprising: reading multimodal medical image data; taking any one kind of medical image data in the multimodal medical image data as the reference image and the other medical image data as images to be registered, performing full affine image matching; performing hybrid rendering of the reconstructed scene to obtain a virtual scene; completing the registration of the CT navigation image with the patient pose using the 3PCHM method or the ICP fast registration method; acquiring the virtual scene view of the endoscope in the virtual scene according to the pose relationship between the endoscope and surgical tools and the patient's body; and applying Gaussian function attenuation to the edges of the real-time endoscope image and fusing it with the virtual scene view of the endoscope to achieve layered scene rendering.
  • the invention improves the image rendering speed and improves the navigation precision.
  • the invention has industrial applicability.

Abstract

An endoscopic surgical navigation method and system, including: reading multimodal medical image data (101); taking any one kind of image data in the multimodal medical image data as the reference image and the other image data as images to be registered, performing full affine image matching (102); performing hybrid rendering of the reconstructed scene on the fully affine-matched image data to obtain a virtual scene (103); completing the registration of the CT image with the patient pose using a fast convex-hull registration method; performing fast correction via convex-hull-optimized surface point clouds; tracking the endoscope and surgical tools, and acquiring the pose relationship between the endoscope and surgical tools and the patient's body (105); according to the obtained pose relationship, acquiring the virtual scene view of the endoscope in the virtual scene (106); applying Gaussian function attenuation to the edges of the real-time endoscope image and fusing it with the virtual scene view of the endoscope to achieve layered scene rendering (107). The method and system increase image rendering speed and improve navigation accuracy.

Description

Endoscopic surgical navigation method and system — Technical Field
The present invention relates to the field of surgical navigation, and in particular to an endoscopic surgical navigation method and system.
Background Art
Because skull base tumors are deeply located and their complex neighboring structures are hard to distinguish, and because diagnosis and treatment involve multidisciplinary techniques from neurosurgery, otolaryngology, and head and neck surgery, complete tumor resection is difficult. After more than a century of development, skull base tumor diagnosis and treatment has progressed from naked-eye open craniotomy to endoscopic minimally invasive surgery. Endoscopic minimally invasive techniques offer simple procedures and fast postoperative recovery; endoscopic image guidance avoids the damage to facial skin structures caused by the surgical approach and reduces the probability of various complications.
At present, conventional surgery for malignant nasal and sinus tumors and for skull base tumors uses plain nasal-endoscope video navigation. Most surgical navigation systems guided by medical images can provide fairly accurate three-view information while displaying the endoscope image or the relative pose of the current surgical tool and the human body, but several defects remain:
1. The tissue-structure information of single-modality medical images is limited; image accuracy and imaging quality strongly affect surgical navigation, and results are unsatisfactory when such images are used for navigation;
2. During navigation, the relative position and distance between surgical instruments and the human body are not represented accurately, so precise guidance cannot be achieved;
3. Virtual scene reconstruction with a single rendering mode is time-consuming, the displayed anatomy conveys a weak sense of distance, judgment errors are easily induced, structures are unclear, and the heavy computation causes the navigation process to stall, all of which greatly limit the usefulness of the navigation system;
4. The accuracy of real-time tracking and registration based on artificial marker points is affected not only by image quality but also by the physician's registration procedure, which introduces an additional, human source of error.
A new endoscopic surgical navigation scheme is therefore urgently needed.
Summary of the Invention
To overcome the above technical problems, the present invention provides an endoscopic surgical navigation method and system that increase image rendering speed and improve navigation accuracy.
To achieve this, the present invention provides an endoscopic surgical navigation method, comprising:
reading multimodal medical image data;
taking any one kind of medical image data in the multimodal medical image data as a reference image and the other medical image data as images to be registered, and performing full affine image matching;
performing hybrid rendering of the reconstructed scene on the fully affine-matched image data to obtain a virtual scene;
selecting reference points based on the CT image data in the multimodal medical image data, selecting marker points on the patient's body corresponding to the reference points, and completing the registration of the CT navigation image with the patient pose using the 3-point convex hull fast registration algorithm 3PCHM or the ICP fast registration method;
after the patient-pose registration is completed, tracking the endoscope and surgical tools, and acquiring the pose relationship between the endoscope and surgical tools and the patient's body;
according to the obtained pose relationship, acquiring the virtual scene view of the endoscope in the virtual scene;
applying Gaussian function attenuation to the edges of the real-time endoscope image and fusing it with the virtual scene view of the endoscope to achieve layered scene rendering.
Further, performing hybrid rendering of the reconstructed scene on the fully affine-matched image data to obtain the virtual scene specifically comprises:
segmenting and labeling the key structures in the fully affine-matched image data;
performing fast rendering on the segmented and labeled image data;
performing marching-cubes-based volume rendering on the fully affine-matched image data;
for the fast-rendered and volume-rendered image data, performing CUDA-accelerated hybrid rendering of the reconstructed scene to obtain the virtual scene.
Further, before tracking the surgical tools and acquiring the pose relationship between the surgical tools and the patient's body, the method further comprises:
when the patient moves during surgery, acquiring real-time point cloud data of the patient's face;
quickly registering the real-time point cloud data based on the 3PCHM method, and correcting the registration of the navigation image with the patient pose.
Further, taking any one kind of medical image data in the multimodal medical image data as the reference and the other medical image data as images to be registered, and performing full affine image matching, specifically comprises:
selecting marker points in the image to be registered;
selecting reference points in the reference image in a preset order, and building a correspondence set between the marker points of the image to be registered and the reference points of the reference image;
computing, from the correspondence set, the rotation and translation matrix between the reference image and the image to be registered, thereby achieving full affine matching of the two images.
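The patent leaves the solver for this rotation-and-translation estimate unspecified. A standard least-squares approach for a point correspondence set is the SVD-based (Kabsch) rigid fit, sketched below in NumPy purely as an illustration; all function names are ours, not the patent's:

```python
import numpy as np

def rigid_transform(P, Q):
    """Estimate R, t such that R @ P_i + t ≈ Q_i for corresponding points.

    P, Q: (N, 3) arrays of corresponding marker/reference points.
    Classic SVD-based least-squares (Kabsch) solution.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)      # centroids
    H = (P - cP).T @ (Q - cQ)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Toy check: recover a known rotation about z plus a translation.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -4.0, 2.5])
P = np.random.default_rng(0).random((8, 3))
Q = P @ R_true.T + t_true                        # Q_i = R_true @ P_i + t_true
R, t = rigid_transform(P, Q)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

With noisy correspondences the same fit returns the least-squares optimum rather than an exact recovery.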
Further, computing, from the correspondence set, the rotation and translation matrix between the reference image and the image to be registered, thereby achieving full affine matching of the two images, specifically comprises:
processing the correspondence set with the 3PCHM method to obtain the rotation and translation matrix between the reference image and the image to be registered, thereby achieving full affine matching of the two images.
The present invention also provides an endoscopic surgical navigation system, comprising:
a computer, a binocular camera, an endoscope, and surgical tools;
the computer is used to read multimodal medical image data; take any one kind of medical image data in the multimodal medical image data as a reference image and the other medical image data as images to be registered, and perform full affine image matching; perform hybrid rendering of the reconstructed scene on the fully affine-matched image data to obtain a virtual scene; select reference points based on the CT image data in the multimodal medical image data, select marker points on the patient's body corresponding to the reference points, and complete the registration of the CT navigation image with the patient pose using the 3-point convex hull fast registration method 3PCHM or the ICP fast registration method;
the binocular camera is used to track the endoscope and surgical tools, and to acquire the pose relationship between the endoscope and surgical tools and the patient's body; according to the obtained pose relationship, the virtual scene view of the endoscope is acquired in the virtual scene;
the computer is further used to locate the endoscope for the binocular camera and thereby obtain the virtual scene view of the endoscope, apply Gaussian function attenuation to the edges of the real-time endoscope image, and fuse it with the virtual scene view of the endoscope to achieve layered rendering.
Further, the computer's hybrid rendering of the reconstructed scene on the fully affine-matched image data to obtain the virtual scene specifically comprises:
segmenting and labeling the key structures in the fully affine-matched image data;
performing fast rendering on the segmented and labeled image data;
performing marching-cubes-based volume rendering on the fully affine-matched image data;
for the fast-rendered and volume-rendered image data, performing CUDA-accelerated hybrid rendering of the reconstructed scene to obtain the virtual scene.
Further, the system also comprises:
a depth camera, used to acquire real-time point cloud data of the patient's face when the patient moves during surgery;
the computer is further used to quickly register, based on the 3PCHM method, the real-time point cloud data acquired by the depth camera, and to correct the registration of the navigation image with the patient pose.
Further, the computer's taking any one kind of medical image data in the multimodal medical image data as a reference image and the other medical image data as images to be registered, and performing full affine image matching, specifically comprises:
selecting marker points in the image to be registered;
selecting reference points in the reference image in a preset order, and building a correspondence set between the marker points of the image to be registered and the reference points of the reference image;
computing, from the correspondence set, the rotation and translation matrix between the reference image and the image to be registered, thereby achieving full affine matching of the two images.
Further, computing, from the correspondence set, the rotation and translation matrix between the reference image and the image to be registered, thereby achieving full affine matching of the two images, specifically comprises: processing the correspondence set with the 3PCHM method to obtain the rotation and translation matrix between the reference image and the image to be registered, thereby achieving full affine matching of the two images.
In the endoscopic surgical navigation method and system of the embodiments of the present invention, multimodal medical image data is first read; any one kind of medical image data in the multimodal medical image data is taken as the reference image and the other medical image data as images to be registered, and full affine image matching is performed; hybrid rendering of the reconstructed scene is then performed on the fully affine-matched image data to obtain a virtual scene. Reference points are selected based on the CT image data in the multimodal medical image data, marker points corresponding to the reference points are selected on the patient's body, and the 3PCHM method or the ICP fast registration method is used to complete the registration of the navigation image collected by the endoscope with the patient pose. After the patient-pose registration is completed, the endoscope and surgical tools are tracked, the pose relationship between the endoscope and surgical tools and the patient's body is acquired, and the virtual scene view of the endoscope is acquired in the virtual scene according to that pose relationship. Finally, for the virtual scene view of the endoscope, Gaussian function attenuation is applied to the view edges to achieve layered rendering. This scheme not only increases rendering speed but also improves navigation accuracy through the registration of the patient pose, reducing error and improving the safety of endoscopic minimally invasive surgery.
Brief Description of the Drawings
FIG. 1 is a flowchart of the endoscopic surgical navigation method provided by an embodiment of the present invention;
FIG. 2 is a flowchart of hybrid rendering of the reconstructed scene provided by an embodiment of the present invention;
FIG. 3 is a flowchart of the endoscopic surgical navigation method based on fast surface point cloud registration provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of the application scenario and navigation of the endoscopic surgical navigation system of an embodiment of the present invention;
FIG. 5 is a diagram of the CPU and GPU processing modules of the endoscopic surgical navigation system of an embodiment of the present invention.
Detailed Description of the Embodiments
The following embodiments illustrate the present invention but are not intended to limit its scope.
The present invention is further described below with reference to the drawings.
An embodiment of the present invention provides an endoscopic surgical navigation method. As shown in FIG. 1, the method includes:
101. Read multimodal medical image data.
Multimodal medical images are image data produced by different imaging principles, including nuclear magnetic resonance (NMR) image data, CT scan image data, X-ray image data, ultrasound image data, and so on.
102. Take any one kind of medical image data in the multimodal medical image data as the reference image and the other medical image data as images to be registered, and perform full affine image matching.
Full affine matching of the images specifically includes:
(1) Selecting marker points in the image to be registered.
Specifically, NMR or CT image data can be chosen as the image to be registered.
(2) Selecting reference points in the reference image in a preset order, and building a correspondence set between the marker points of the image to be registered and the reference points of the reference image.
(3) Computing, from the correspondence set, the rotation and translation matrix between the reference image and the image to be registered, achieving full affine matching of the two images.
Specifically, the correspondence set is processed with the 3-point convex hull fast registration algorithm (3Points Convex Hull Matching, 3PCHM) or the iterative closest point (ICP) registration algorithm to compute the rotation and transformation matrix between the two images, and the Affine transform method is used to register the two images to be registered, achieving full affine matching.
103. Perform hybrid rendering of the reconstructed scene on the fully affine-matched image data to obtain a virtual scene.
The specific process of hybrid rendering of the reconstructed scene is shown in FIG. 2 and includes:
201. Segment and label the key structures in the fully affine-matched image data.
The key structures include the human tissue structures that are most important during surgery, such as blood vessels, nerves, and tumors.
The Otsu threshold method or region growing is used to directly extract locations in CT and MRI with high intensity values, clear target-tissue information, and distinct anatomical structure (for example, bone), yielding multi-point-segmented three-dimensional structures; anatomical structures whose target-tissue information is indistinct are first segmented coarsely, after which the fast marching method re-segments the initial region to obtain a more accurate result. The segmentation data obtained in this step, rendered with color mapping and attenuation weighting, guarantees the speed-up and the distance-perception accuracy of the final fused display.
202. Perform fast rendering on the segmented and labeled image data.
Rapidly completing the reconstruction rendering of precise structures provides a high-speed path for virtual scene rendering.
203. Perform marching-cubes (Marching Cubes)-based volume rendering on the fully affine-matched image data.
Through this step, the front-to-back occlusion relationships of skull base structures can be represented.
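For the marching-cubes stage, isosurface extraction can be sketched with scikit-image's `measure.marching_cubes` (assuming that library is available; this is an illustrative mesh extraction, not the patent's CUDA-accelerated implementation):

```python
import numpy as np
from skimage import measure

# Synthetic volume: the squared distance from the center of a 32^3 grid.
x, y, z = np.mgrid[-16:16, -16:16, -16:16]
vol = (x**2 + y**2 + z**2).astype(float)

# Extract the isosurface at r^2 = 100 (a sphere of radius ~10 voxels).
# verts/faces define the triangle mesh a renderer would then draw.
verts, faces, normals, values = measure.marching_cubes(vol, level=100.0)

# Vertices are in index coordinates; the grid center sits at index 16,
# so all vertices should lie near radius 10 around (16, 16, 16).
radii = np.linalg.norm(verts - 16.0, axis=1)
assert 9.0 < radii.min() and radii.max() < 11.0
```

In a navigation pipeline the same call would run on the binary key-structure masks from step 201, with the resulting mesh uploaded to the GPU for rendering.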
204. For the fast-rendered and volume-rendered image data, perform CUDA (Compute Unified Device Architecture)-accelerated hybrid rendering of the reconstructed scene to obtain the virtual scene.
CUDA-accelerated hybrid rendering of the reconstructed scene can greatly improve rendering efficiency, reduce the amount of computation, and shorten reconstruction time.
104. Select reference points based on the CT image data in the multimodal medical image data, select marker points on the patient's body corresponding to the reference points, and complete the registration of the CT navigation image with the patient pose using the 3PCHM method or the ICP fast registration method.
105. Track the endoscope and surgical tools, and acquire the pose relationship between the endoscope and surgical tools and the patient's body.
After the patient-pose registration is completed, the pose relationship between the endoscope and surgical tools and the patient's body is computed from that registration.
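This pose computation amounts to composing homogeneous transforms: the tracker reports the endoscope and the patient markers in the camera frame, and the CT registration links patient to image space. A minimal sketch (all frame names and numeric poses below are illustrative assumptions, not values from the patent):

```python
import numpy as np

def hom(R, t):
    """Pack a 3x3 rotation R and translation t into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Poses reported by the binocular tracker (illustrative identity rotations):
T_cam_endo = hom(np.eye(3), np.array([100.0, 20.0, 300.0]))  # endoscope in camera frame
T_cam_pat  = hom(np.eye(3), np.array([80.0, 0.0, 280.0]))    # patient markers in camera frame
T_pat_ct   = hom(np.eye(3), np.array([5.0, 5.0, 5.0]))       # patient -> CT registration

# Endoscope pose expressed in the CT / virtual-scene frame:
T_ct_endo = T_pat_ct @ np.linalg.inv(T_cam_pat) @ T_cam_endo
tip_ct = (T_ct_endo @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]    # endoscope origin in CT space
```

`T_ct_endo` is what the virtual camera in step 106 would be placed at to render the endoscope's virtual scene view.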
106. According to the obtained pose relationship, acquire the virtual scene view of the endoscope in the virtual scene.
107. Combining the virtual scene view of the endoscope, apply Gaussian function attenuation to the edges of the real-time endoscope image to achieve layered rendering.
Through step 107, a rendering with a more realistic sense of distance is obtained for the virtual view at any angle. While the human body is reconstructed in the augmented scene, the real endoscope image processed by Gaussian edge attenuation (GED) is fused in, enhancing the displayed anatomical information at the current pose while specifically highlighting the paths and courses of key structures (vessels, tumors, etc.), overcoming the distance-perception deficiency of common systems.
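The Gaussian edge attenuation and fusion can be illustrated with a small NumPy sketch; the mask shape, σ, and blending rule are our assumptions, since the patent gives no explicit formulas:

```python
import numpy as np

def gaussian_edge_mask(h, w, sigma_frac=0.35):
    """Weight map that is ~1 at the image center and decays toward the
    edges with a Gaussian profile (illustrating the GED step)."""
    ys = np.linspace(-1.0, 1.0, h)[:, None]
    xs = np.linspace(-1.0, 1.0, w)[None, :]
    r2 = ys**2 + xs**2                      # squared normalized radius
    return np.exp(-r2 / (2.0 * sigma_frac**2))

def fuse(endo_rgb, virtual_rgb):
    """Alpha-blend the live endoscope frame over the rendered virtual view
    using the edge-attenuation mask, giving a seamless transition."""
    a = gaussian_edge_mask(*endo_rgb.shape[:2])[..., None]
    return a * endo_rgb + (1.0 - a) * virtual_rgb

endo = np.full((64, 64, 3), 1.0)            # stand-in live frame (white)
virt = np.zeros((64, 64, 3))                # stand-in virtual render (black)
out = fuse(endo, virt)
assert out[32, 32, 0] > 0.9                 # center dominated by endoscope image
assert out[0, 0, 0] < 0.1                   # corners dominated by virtual scene
```

The smooth falloff is what avoids a hard seam between the real image and the reconstructed anatomy around it.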
The endoscopic surgical navigation method provided by the embodiment of the present invention has the following advantages:
a. Combining several segmentation methods based on region growing and fast marching copes with variable medical image quality and with the different imaging properties of the same tissue structure across modalities; segmentation of key structures (vessels, nerves, tumors, etc.) is more accurate, and preoperative segmentation is completed faster;
b. The hybrid rendering mode, combining surface rendering with marching-cubes volume rendering, reduces computational complexity and speeds up rendering, while the display provides more accurate depth perception, giving physicians more accurate diagnostic and therapeutic assistance;
c. Semi-automatic multimodal registration with an Affine-transform-based method exploits the fact that the same tissue images differently across modalities and displays different tissue structures; full affine transformation achieves invariant image matching, making it possible to exploit the large amount of anatomical information in multiple modalities simultaneously;
d. Layered rendering of the region of interest provides augmented reality guidance for the observation area; in the display and rendering region, the position of the moving cube follows changes in the endoscope pose, and with CUDA acceleration the endoscope image and the virtual scene are simultaneously rendered with different operations for that region, increasing rendering speed and improving both distance perception and scene immersion;
e. The Gaussian edge attenuation algorithm processes the endoscope image in real time, achieving a seamless, visually smooth transition between the endoscope image and the virtual scene; structures visible to the naked eye in the endoscope image match and blend well with the reconstructed structures, clearly improving the cueing effect of real-time images during surgical navigation;
f. Registration of depth-camera surface data to three-dimensional volume data quickly completes intraoperative patient-pose tracking and registration, avoiding the inconvenience of placing artificial markers and the occlusion problems they bring, while raising registration and tracking efficiency and reducing the physician's intraoperative workload.
An embodiment of the present invention also provides an endoscopic surgical navigation method. As shown in FIG. 3, this method adds steps 105a and 105b to the method of FIG. 1.
105a. When the patient moves during surgery, acquire real-time point cloud data of the patient's face;
105b. Quickly register the real-time point cloud data based on the 3PCHM method, and correct the registration of the navigation image with the patient pose.
Steps 105a and 105b further improve the real-time registration of the patient's face during surgery. This process mainly accomplishes intraoperative tracking of the patient pose, in order to overcome the tracking inaccuracy caused by patient movement. If the patient pose does not move during navigation, 105a and 105b are not used. When the patient pose does move during surgery, the scheme has important clinical and practical significance: it helps the real-time display during system tracking, and no image misalignment or rendering errors occur during guidance.
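Re-registering the facial point cloud in steps 105a/105b follows the general shape of an ICP loop. The sketch below is a brute-force illustration of that loop, not the 3PCHM algorithm itself (which the patent does not detail); a k-d tree would replace the exhaustive nearest-neighbor search in practice:

```python
import numpy as np

def best_rigid(P, Q):
    """SVD-based rigid fit mapping P onto Q (corresponding rows)."""
    cP, cQ = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def icp(src, dst, iters=20):
    """Iterative closest point: align the live face cloud (src) to the
    preoperative surface (dst). Returns accumulated R, t with dst ≈ R src + t."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbor in dst for every current point.
        nn = dst[np.argmin(((cur[:, None, :] - dst[None, :, :])**2).sum(-1), axis=1)]
        R, t = best_rigid(cur, nn)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Toy check: dst is src under a small rigid motion; ICP should recover it.
rng = np.random.default_rng(2)
src = rng.random((60, 3))
ang = 0.05
Rt = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
dst = src @ Rt.T + np.array([0.02, -0.01, 0.03])
R, t = icp(src, dst)
assert np.abs(src @ R.T + t - dst).max() < 1e-2
```

The recovered transform is what updates the navigation-image-to-patient registration after the patient moves.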
An embodiment of the present invention also provides an endoscopic surgical navigation system, including:
a computer, a binocular camera, an endoscope, and surgical tools.
The computer is used to read multimodal medical image data; take any one kind of medical image data in the multimodal medical image data as the reference image and the other medical image data as images to be registered, and perform full affine image matching; perform hybrid rendering of the reconstructed scene on the fully affine-matched image data to obtain a virtual scene; select reference points based on the CT image data in the multimodal medical image data, select marker points on the patient's body corresponding to the reference points, and complete the registration of the CT image with the patient pose using the 3PCHM method or the ICP fast registration method.
The binocular camera is used to track the endoscope and surgical tools, and to acquire the pose relationship between the endoscope and surgical tools and the patient's body; according to the obtained pose relationship, the virtual scene view of the endoscope is acquired in the virtual scene.
The computer is also used to locate the endoscope for the binocular camera and thereby obtain the virtual scene view of the endoscope, apply Gaussian function attenuation to the edges of the real-time endoscope image, and fuse it with the virtual scene view of the endoscope to achieve layered rendering.
Further, the computer's hybrid rendering of the reconstructed scene on the fully affine-matched image data to obtain the virtual scene specifically includes:
segmenting and labeling the key structures in the fully affine-matched image data;
performing fast rendering on the segmented and labeled image data;
performing marching-cubes-based volume rendering on the fully affine-matched image data;
for the fast-rendered and volume-rendered image data, performing CUDA-accelerated hybrid rendering of the reconstructed scene to obtain the virtual scene.
Further, the system also includes a depth camera.
The depth camera is used to acquire real-time point cloud data of the patient's face when the patient moves during surgery. The computer is also used to quickly register, based on the 3PCHM method, the real-time point cloud data acquired by the depth camera, correcting the registration of the navigation image with the patient pose.
Further, the computer's full affine image matching specifically includes:
selecting marker points in the image to be registered;
selecting reference points in the reference image in a preset order, and building a correspondence set between the marker points of the image to be registered and the reference points of the reference image;
processing the correspondence set with the 3PCHM method to obtain the rotation and translation matrix between the reference image and the image to be registered, achieving full affine matching of the two images.
FIG. 4 is a schematic diagram of the application scenario and navigation of the endoscopic surgical navigation system of an embodiment of the present invention. The figure includes a computer 41, a binocular camera 42, an endoscope 43 and surgical tools 44, a depth camera 45, and the patient's body 46. Marker points 47 are placed on the endoscope 43 and the surgical tools 44 so that the binocular camera can capture them and determine the pose relationship.
The computer 41 includes a central processing unit (CPU) used to perform functions such as mathematical computation and image configuration. Optionally, it may also include a graphics processing unit (GPU). The GPU mainly performs functions related to graphics processing.
FIG. 5 shows the CPU and GPU processing block diagram of the endoscopic surgical navigation system.
The main functions of the CPU include: reading multimodal medical image data; segmenting and labeling key structures in the image data; and multimodal image registration based on the Affine transform and the 3PCHM or ICP fast registration algorithms.
The main functions of the GPU include: CUDA-based accelerated hybrid rendering reconstruction; registration of the three-dimensional volume data image with the patient; real-time tracking and registration based on the depth camera and the 3PCHM fast registration method; the pose relationship between surgical tools and the patient; obtaining the relative relationship between the surgical tools and the human body at any pose, and the virtual view; and enhanced display of the layered rendering information of the region of interest.
Although the present invention has been described in detail above with general descriptions and specific embodiments, modifications and improvements based on the present invention will be apparent to those skilled in the art. Such modifications and improvements made without departing from the spirit of the present invention all fall within the scope claimed by the present invention.
Industrial Applicability
The present invention provides an endoscopic surgical navigation method and system, the method including: reading multimodal medical image data; taking any one kind of medical image data in the multimodal medical image data as the reference image and the other medical image data as images to be registered, performing full affine image matching; performing hybrid rendering of the reconstructed scene to obtain a virtual scene; completing the registration of the CT navigation image with the patient pose using the 3PCHM method or the ICP fast registration method; acquiring the virtual scene view of the endoscope in the virtual scene according to the pose relationship between the endoscope and surgical tools and the patient's body; and applying Gaussian function attenuation to the edges of the real-time endoscope image and fusing it with the virtual scene view of the endoscope to achieve layered scene rendering. The present invention increases image rendering speed, improves navigation accuracy, and has industrial applicability.

Claims (10)

  1. An endoscopic surgical navigation method, characterized by comprising:
    reading multimodal medical image data;
    taking any one kind of medical image data in the multimodal medical image data as a reference image and the other medical image data as images to be registered, and performing full affine image matching;
    performing hybrid rendering of the reconstructed scene on the fully affine-matched image data to obtain a virtual scene;
    selecting reference points based on the CT image data in the multimodal medical image data, selecting marker points on the patient's body corresponding to the reference points, and completing the registration of the CT navigation image with the patient pose using the 3-point convex hull fast registration algorithm 3PCHM or the ICP fast registration method;
    after the patient-pose registration is completed, tracking the endoscope and surgical tools, and acquiring the pose relationship between the endoscope and surgical tools and the patient's body;
    according to the obtained pose relationship, acquiring the virtual scene view of the endoscope in the virtual scene;
    applying Gaussian function attenuation to the edges of the real-time endoscope image and fusing it with the virtual scene view of the endoscope to achieve layered scene rendering.
  2. The method according to claim 1, characterized in that performing hybrid rendering of the reconstructed scene on the fully affine-matched image data to obtain the virtual scene specifically comprises:
    segmenting and labeling the key structures in the fully affine-matched image data;
    performing fast rendering on the segmented and labeled image data;
    performing marching-cubes-based volume rendering on the fully affine-matched image data;
    for the fast-rendered and volume-rendered image data, performing CUDA-accelerated hybrid rendering of the reconstructed scene to obtain the virtual scene.
  3. The method according to claim 1, characterized in that, before tracking the surgical tools and acquiring the pose relationship between the surgical tools and the patient's body, the method further comprises:
    when the patient moves during surgery, acquiring real-time point cloud data of the patient's face;
    quickly registering the real-time point cloud data based on the 3PCHM method, and correcting the registration of the navigation image with the patient pose.
  4. The method according to claim 1, characterized in that taking any one kind of medical image data in the multimodal medical image data as the reference and the other medical image data as images to be registered, and performing full affine image matching, specifically comprises:
    selecting marker points in the image to be registered;
    selecting reference points in the reference image in a preset order, and building a correspondence set between the marker points of the image to be registered and the reference points of the reference image;
    computing, from the correspondence set, the rotation and translation matrix between the reference image and the image to be registered, thereby achieving full affine matching of the two images.
  5. The method according to claim 4, characterized in that computing, from the correspondence set, the rotation and translation matrix between the reference image and the image to be registered, thereby achieving full affine matching of the two images, specifically comprises:
    processing the correspondence set with the 3PCHM method to obtain the rotation and translation matrix between the reference image and the image to be registered, thereby achieving full affine matching of the two images.
  6. An endoscopic surgical navigation system, characterized by comprising:
    a computer, a binocular camera, an endoscope, and surgical tools;
    the computer is used to read multimodal medical image data; take any one kind of medical image data in the multimodal medical image data as a reference image and the other medical image data as images to be registered, and perform full affine image matching; perform hybrid rendering of the reconstructed scene on the fully affine-matched image data to obtain a virtual scene; select reference points based on the CT image data in the multimodal medical image data, select marker points on the patient's body corresponding to the reference points, and complete the registration of the CT navigation image with the patient pose using the 3-point convex hull fast registration method 3PCHM or the ICP fast registration method;
    the binocular camera is used to track the endoscope and surgical tools, and to acquire the pose relationship between the endoscope and surgical tools and the patient's body; according to the obtained pose relationship, the virtual scene view of the endoscope is acquired in the virtual scene;
    the computer is further used to locate the endoscope for the binocular camera and thereby obtain the virtual scene view of the endoscope, apply Gaussian function attenuation to the edges of the real-time endoscope image, and fuse it with the virtual scene view of the endoscope to achieve layered rendering.
  7. The system according to claim 6, characterized in that the computer's hybrid rendering of the reconstructed scene on the fully affine-matched image data to obtain the virtual scene specifically comprises:
    segmenting and labeling the key structures in the fully affine-matched image data;
    performing fast rendering on the segmented and labeled image data;
    performing marching-cubes-based volume rendering on the fully affine-matched image data;
    for the fast-rendered and volume-rendered image data, performing CUDA-accelerated hybrid rendering of the reconstructed scene to obtain the virtual scene.
  8. The system according to claim 6, characterized in that the system further comprises:
    a depth camera, used to acquire real-time point cloud data of the patient's face when the patient moves during surgery;
    the computer is further used to quickly register, based on the 3PCHM method, the real-time point cloud data acquired by the depth camera, and to correct the registration of the navigation image with the patient pose.
  9. The system according to claim 6, characterized in that the computer's taking any one kind of medical image data in the multimodal medical image data as a reference image and the other medical image data as images to be registered, and performing full affine image matching, specifically comprises:
    selecting marker points in the image to be registered;
    selecting reference points in the reference image in a preset order, and building a correspondence set between the marker points of the image to be registered and the reference points of the reference image;
    computing, from the correspondence set, the rotation and translation matrix between the reference image and the image to be registered, thereby achieving full affine matching of the two images.
  10. The system according to claim 9, characterized in that computing, from the correspondence set, the rotation and translation matrix between the reference image and the image to be registered, thereby achieving full affine matching of the two images, specifically comprises:
    processing the correspondence set with the 3PCHM method to obtain the rotation and translation matrix between the reference image and the image to be registered, thereby achieving full affine matching of the two images.
PCT/CN2017/071006 2016-06-06 2017-01-12 Endoscopic surgical navigation method and system WO2017211087A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610392750.7A CN107456278B (zh) 2016-06-06 2016-06-06 Endoscopic surgical navigation method and system
CN201610392750.7 2016-06-06

Publications (1)

Publication Number Publication Date
WO2017211087A1 true WO2017211087A1 (zh) 2017-12-14

Family

ID=60544598

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/071006 WO2017211087A1 (zh) 2016-06-06 2017-01-12 一种内窥镜手术导航方法和系统

Country Status (2)

Country Link
CN (1) CN107456278B (zh)
WO (1) WO2017211087A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111581710A (zh) * 2020-05-19 2020-08-25 北京数字绿土科技有限公司 架空输电线路杆塔挠度自动获取方法及装置
CN113012126A (zh) * 2021-03-17 2021-06-22 武汉联影智融医疗科技有限公司 标记点重建方法、装置、计算机设备和存储介质
CN113521499A (zh) * 2020-04-22 2021-10-22 西门子医疗有限公司 用于产生控制信号的方法
CN114145846A (zh) * 2021-12-06 2022-03-08 北京理工大学 基于增强现实辅助的手术导航方法及系统
CN114191078A (zh) * 2021-12-29 2022-03-18 上海复旦数字医疗科技股份有限公司 一种基于混合现实的内窥镜手术导航机器人系统
CN114581635A (zh) * 2022-03-03 2022-06-03 上海涞秋医疗科技有限责任公司 一种基于HoloLens眼镜的定位方法及系统
CN114191078B (zh) * 2021-12-29 2024-04-26 上海复旦数字医疗科技股份有限公司 一种基于混合现实的内窥镜手术导航机器人系统

Families Citing this family (12)

Publication number Priority date Publication date Assignee Title
CN108272513B (zh) * 2018-01-26 2021-03-16 智美康民(珠海)健康科技有限公司 临床定位方法、装置、计算机设备和存储介质
CN108324369B (zh) * 2018-02-01 2019-11-22 艾瑞迈迪医疗科技(北京)有限公司 基于面的术中配准方法及神经导航设备
CN111166473A (zh) * 2018-12-04 2020-05-19 艾瑞迈迪科技石家庄有限公司 一种髋膝关节置换手术的导航方法和系统
CN112315582B (zh) * 2019-08-05 2022-03-25 罗雄彪 一种手术器械的定位方法、系统及装置
CN110368089A (zh) * 2019-08-07 2019-10-25 湖南省华芯医疗器械有限公司 一种支气管内窥镜三维导航方法
CN110522516B (zh) * 2019-09-23 2021-02-02 杭州师范大学 一种用于手术导航的多层次交互可视化方法
CN111784664B (zh) * 2020-06-30 2021-07-20 广州柏视医疗科技有限公司 肿瘤淋巴结分布图谱生成方法
CN113808181A (zh) * 2020-10-30 2021-12-17 上海联影智能医疗科技有限公司 医学图像的处理方法、电子设备和存储介质
CN113077433B (zh) * 2021-03-30 2023-04-07 山东英信计算机技术有限公司 基于深度学习的肿瘤靶区云检测装置、系统、方法及介质
CN114305684B (zh) * 2021-12-06 2024-04-12 南京航空航天大学 一种自主多自由度扫描型内窥镜微创手术导航装置及其系统
CN116416414B (zh) * 2021-12-31 2023-09-22 杭州堃博生物科技有限公司 肺部支气管镜导航方法、电子装置及计算机可读存储介质
CN115281584B (zh) * 2022-06-30 2023-08-15 中国科学院自动化研究所 柔性内窥镜机器人控制系统及柔性内窥镜机器人模拟方法

Citations (7)

Publication number Priority date Publication date Assignee Title
US6167296A (en) * 1996-06-28 2000-12-26 The Board Of Trustees Of The Leland Stanford Junior University Method for volumetric image navigation
CN101797182A (zh) * 2010-05-20 2010-08-11 北京理工大学 一种基于增强现实技术的鼻内镜微创手术导航系统
US20120046521A1 (en) * 2010-08-20 2012-02-23 Mark Hunter Systems, instruments, and methods for four dimensional soft tissue navigation
CN102999902A (zh) * 2012-11-13 2013-03-27 上海交通大学医学院附属瑞金医院 基于ct配准结果的光学导航定位系统及其导航方法
CN103371870A (zh) * 2013-07-16 2013-10-30 深圳先进技术研究院 一种基于多模影像的外科手术导航系统
CN103356155B (zh) * 2013-06-24 2014-12-31 清华大学深圳研究生院 虚拟内窥镜辅助的腔体病灶检查系统
CN104434313A (zh) * 2013-09-23 2015-03-25 中国科学院深圳先进技术研究院 一种腹部外科手术导航方法及系统

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US20080287909A1 (en) * 2007-05-17 2008-11-20 Viswanathan Raju R Method and apparatus for intra-chamber needle injection treatment
US9439623B2 (en) * 2012-05-22 2016-09-13 Covidien Lp Surgical planning system and navigation system
CN103040525B (zh) * 2012-12-27 2016-08-03 深圳先进技术研究院 一种多模医学影像手术导航方法及系统
GB2524498A (en) * 2014-03-24 2015-09-30 Scopis Gmbh Electromagnetic navigation system for microscopic surgery

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113521499A (zh) * 2020-04-22 2021-10-22 西门子医疗有限公司 Method for generating a control signal
CN113521499B (zh) * 2020-04-22 2024-02-13 西门子医疗有限公司 Method for generating a control signal
CN111581710A (zh) * 2020-05-19 2020-08-25 北京数字绿土科技有限公司 Method and device for automatically acquiring tower deflection of overhead transmission lines
CN111581710B (zh) * 2020-05-19 2021-04-13 北京数字绿土科技有限公司 Method and device for automatically acquiring tower deflection of overhead transmission lines
CN113012126A (zh) * 2021-03-17 2021-06-22 武汉联影智融医疗科技有限公司 Marker point reconstruction method and device, computer equipment and storage medium
CN113012126B (zh) * 2021-03-17 2024-03-22 武汉联影智融医疗科技有限公司 Marker point reconstruction method and device, computer equipment and storage medium
CN114145846A (zh) * 2021-12-06 2022-03-08 北京理工大学 Augmented-reality-assisted surgical navigation method and system
CN114145846B (zh) * 2021-12-06 2024-01-09 北京理工大学 Augmented-reality-assisted surgical navigation method and system
CN114191078A (zh) * 2021-12-29 2022-03-18 上海复旦数字医疗科技股份有限公司 Mixed-reality-based endoscopic surgical navigation robot system
CN114191078B (zh) * 2021-12-29 2024-04-26 上海复旦数字医疗科技股份有限公司 Mixed-reality-based endoscopic surgical navigation robot system
CN114581635A (zh) * 2022-03-03 2022-06-03 上海涞秋医疗科技有限责任公司 HoloLens-glasses-based positioning method and system
CN114581635B (zh) * 2022-03-03 2023-03-24 上海涞秋医疗科技有限责任公司 HoloLens-glasses-based positioning method and system

Also Published As

Publication number Publication date
CN107456278B (zh) 2021-03-05
CN107456278A (zh) 2017-12-12

Similar Documents

Publication Publication Date Title
WO2017211087A1 (zh) Endoscopic surgical navigation method and system
US11883118B2 (en) Using augmented reality in surgical navigation
CN103040525B (zh) Multimodal medical image surgical navigation method and system
Chu et al. Registration and fusion quantification of augmented reality based nasal endoscopic surgery
Collins et al. Augmented reality guided laparoscopic surgery of the uterus
CN107067398B (zh) Method and device for completing missing blood vessels in a three-dimensional medical model
WO2013111535A1 (ja) Endoscopic image diagnosis support device, method and program
WO2015161728A1 (zh) Three-dimensional model construction method and device, and image monitoring method and device
CN114145846B (zh) Augmented-reality-assisted surgical navigation method and system
US20160228075A1 (en) Image processing device, method and recording medium
JP2016511049A (ja) Re-localization of an anatomical site position using dual data synchronization
CN107689045B (zh) Image display method, device and system for endoscopic minimally invasive surgical navigation
CN101797182A (zh) Augmented-reality-based navigation system for minimally invasive nasal endoscopic surgery
US11961193B2 (en) Method for controlling a display, computer program and mixed reality display device
CN112641514B (zh) Minimally invasive interventional navigation system and method
EP3110335B1 (en) Zone visualization for ultrasound-guided procedures
Liu et al. Intraoperative image‐guided transoral robotic surgery: pre‐clinical studies
JP2014064722A (ja) Virtual endoscopic image generation device, method and program
Li et al. A fully automatic surgical registration method for percutaneous abdominal puncture surgical navigation
JP2022517807A (ja) System and method for medical navigation
CN115105204A (zh) Laparoscopic augmented reality fusion display method
Wu et al. Process analysis and application summary of surgical navigation system
CN111743628A (zh) Computer-vision-based path planning method for an automatic puncture robotic arm
Lange et al. Development of navigation systems for image-guided laparoscopic tumor resections in liver surgery
Bartholomew et al. Surgical navigation in the anterior skull base using 3-dimensional endoscopy and surface reconstruction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17809528

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17809528

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29/07/2019)