WO2017211087A1 - Endoscopic surgical navigation method and system - Google Patents
Endoscopic surgical navigation method and system
- Publication number
- WO2017211087A1 (PCT/CN2017/071006)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- image data
- endoscope
- patient
- registered
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods, e.g. tourniquets
- A61B17/00234—Surgical instruments, devices or methods, e.g. tourniquets for minimally invasive surgery
Definitions
- the present invention relates to the field of surgical navigation, and in particular to an endoscopic surgical navigation method and system.
- because of its deep location, a skull base tumor is difficult to distinguish from adjacent structures.
- the diagnosis and treatment process involves multidisciplinary techniques such as neurosurgery, otolaryngology and head and neck surgery. Complete tumor resection is difficult.
- the diagnosis and treatment of skull base tumors has progressed from open-hole craniotomy to endoscopic minimally invasive surgery.
- Endoscopic minimally invasive techniques are simple and allow quick postoperative recovery. Endoscopic image guidance avoids the damage to facial skin structures caused by the surgical approach and reduces the probability of various complications.
- the organizational structure information of single-modal medical images is relatively simple.
- the image accuracy and imaging quality have a great influence on the surgical navigation.
- the effect is not satisfactory when used for surgical navigation.
- virtual scene reconstruction using a single rendering mode takes a long time, depicts anatomical structures weakly and unclearly, easily causes judgment errors, and involves a heavy computational load that can stall the navigation process, greatly limiting the usefulness of the navigation system;
- the accuracy of real-time tracking and registration methods based on artificial marker points is affected by image quality, and also depends on the doctor's registration technique, which introduces additional sources of error.
- the present invention provides an endoscopic surgical navigation method and system, which improves image rendering speed and improves navigation accuracy.
- an endoscopic surgical navigation method including:
- Gaussian function attenuation is applied to the edges of the endoscope's real-time image, and the result is merged with the virtual scene view of the endoscope to realize layered rendering of the scene.
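The edge-attenuation-and-fusion step can be sketched as follows. This is a minimal illustration only: the patent does not specify the exact attenuation profile or fusion formula, so a radially symmetric Gaussian weight mask and a linear blend are assumed here.

```python
import numpy as np

def gaussian_edge_weights(h, w, sigma_frac=0.35):
    """Per-pixel weight that is ~1 at the image centre and decays
    toward the borders with a Gaussian profile."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # squared distance from the centre, normalised to the half-extent
    r2 = ((ys - cy) / (h / 2.0)) ** 2 + ((xs - cx) / (w / 2.0)) ** 2
    return np.exp(-r2 / (2.0 * sigma_frac ** 2))

def fuse_layers(endoscope_img, virtual_view, sigma_frac=0.35):
    """Blend the real-time endoscope image over the rendered virtual
    view: the endoscope image dominates at the centre, while the
    virtual scene shows through at the attenuated edges."""
    h, w = endoscope_img.shape[:2]
    wts = gaussian_edge_weights(h, w, sigma_frac)[..., None]
    return wts * endoscope_img + (1.0 - wts) * virtual_view
```

In a navigation loop, `fuse_layers` would be called per frame with the live endoscope image and the virtual scene view rendered from the same pose.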
- the image data after full affine matching is subjected to hybrid scene-reconstruction rendering to obtain a virtual scene, which specifically includes:
- the CUDA acceleration mode is used to reconstruct the scene and the virtual scene is obtained.
- the method further includes:
- the real-time point cloud data is quickly registered based on the 3PCHM method, and the registration of the navigation image and the patient pose is corrected.
- full affine matching of the images, using any one set of medical image data from the multimodal medical image data as the reference and the other medical image data as the images to be registered, specifically includes:
- a rotation and translation matrix between the reference image and the image to be registered is calculated to achieve full affine matching of the two images.
- calculating, from the correspondence set, the rotation and translation matrix between the reference image and the image to be registered to achieve full affine matching of the two images specifically includes:
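The final step, recovering a rotation and translation from point correspondences, is commonly solved in closed form via SVD (the Kabsch / orthogonal Procrustes method). The patent delegates this to the 3PCHM or ICP method, so the following is a generic sketch under that closed-form assumption:

```python
import numpy as np

def rigid_transform(ref_pts, mov_pts):
    """Least-squares rotation R and translation t mapping mov_pts onto
    ref_pts (both (N, 3) arrays of corresponding points), via SVD."""
    ref_c = ref_pts.mean(axis=0)
    mov_c = mov_pts.mean(axis=0)
    H = (mov_pts - mov_c).T @ (ref_pts - ref_c)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ref_c - R @ mov_c
    return R, t
```

A point `p` of the image to be registered is then mapped into the reference frame as `R @ p + t`.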
- the invention also provides an endoscopic surgical navigation system, comprising:
- the computer is configured to: read multimodal medical image data; use any one set of medical image data from the multimodal medical image data as a reference image and the other medical image data as images to be registered, and perform full affine image matching; perform hybrid scene-reconstruction rendering on the image data after full affine matching to obtain a virtual scene; select reference points based on the CT image data in the multimodal medical image data, select marker points corresponding to the reference points on the patient's body, and complete registration of the CT navigation image with the patient pose using the 3-point convex hull fast registration method (3PCHM) or the ICP fast registration method;
- the binocular camera is configured to track the endoscope and the surgical tool and acquire the pose relationship between the endoscope, the surgical tool, and the patient's body; according to the obtained pose relationship, the virtual scene view of the endoscope is acquired in the virtual scene;
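The pose relationship the binocular camera provides can be expressed with homogeneous transforms: if the tracker reports the endoscope pose and the patient pose, the endoscope pose in patient (and hence virtual-scene) coordinates follows by composition. A sketch of that composition, with illustrative frame names not taken from the patent:

```python
import numpy as np

def make_pose(R, t):
    """4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def endoscope_in_patient(T_tracker_endo, T_tracker_patient):
    """Pose of the endoscope expressed in the patient coordinate frame:
    T_patient<-endo = inv(T_tracker<-patient) @ T_tracker<-endo."""
    return np.linalg.inv(T_tracker_patient) @ T_tracker_endo
```

After the image-to-patient registration is applied, this pose places the virtual camera in the virtual scene so its view matches the real endoscope.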
- the computer is further configured to locate the endoscope via the binocular camera, obtain the corresponding virtual scene view of the endoscope, perform Gaussian function attenuation on the edges of the endoscope's real-time image, and fuse it with the virtual scene view of the endoscope to realize layered rendering.
- the computer performing hybrid scene-reconstruction rendering on the image data after full affine matching to obtain the virtual scene includes:
- the CUDA acceleration mode is used to reconstruct the scene and the virtual scene is obtained.
- system further includes:
- a depth camera for acquiring real-time point cloud data of a patient's face when the patient moves during the surgery
- the computer is further configured to quickly register the real-time point cloud data acquired by the depth camera based on the 3PCHM method, and correct registration of the navigation image with the patient pose.
- the computer uses any one of the multi-modal medical image data as a reference image, and uses other medical image data as a to-be-registered image to perform image affine matching, which specifically includes:
- a rotation and translation matrix between the reference image and the image to be registered is calculated to achieve full affine matching of the two images.
- calculating, from the correspondence set, the rotation and translation matrix between the reference image and the image to be registered to achieve full affine matching of the two images specifically includes: computing the correspondence set according to the 3PCHM method to obtain the rotation and translation matrix between the reference image and the image to be registered, thereby achieving full affine matching of the two images.
- the endoscopic surgical navigation method and system first read multimodal medical image data, use any one set of medical image data from the multimodal data as the reference image and the other medical image data as the images to be registered, and perform full affine image matching.
- the image data after full affine matching is reconstructed and rendered, and the virtual scene is obtained.
- reference points are selected based on the CT image data in the multimodal medical image data, and marker points corresponding to the reference points are selected on the patient's body; the 3PCHM method or the ICP fast registration method is then used to complete registration of the navigation image collected by the endoscope with the patient's pose. After the patient-pose registration is complete, the endoscope and surgical tools are tracked to obtain the pose relationship between the endoscope, the surgical tools, and the patient's body, and the virtual scene view of the endoscope is acquired in the virtual scene according to this pose relationship. Finally, Gaussian function attenuation is applied to the edges of the endoscope's real-time image, which is fused with the virtual scene view to realize layered rendering.
- this scheme not only improves rendering speed but also improves navigation accuracy by correcting the registration of the patient's pose, reducing error and improving the safety of endoscopic minimally invasive surgery.
- FIG. 1 is a flowchart of a method for navigating an endoscopic operation according to an embodiment of the present invention
- FIG. 2 is a flowchart of a hybrid scene rendering process according to an embodiment of the present invention.
- FIG. 3 is a flowchart of an endoscopic surgical navigation method based on surface point cloud fast registration according to an embodiment of the present invention
- FIG. 4 is a schematic diagram of an application scenario and a navigation diagram of an endoscopic surgical navigation system according to an embodiment of the present invention
- FIG. 5 is a diagram of a CPU and GPU processing module of an endoscopic surgical navigation system according to an embodiment of the present invention.
- An embodiment of the present invention provides an endoscopic surgical navigation method. As shown in FIG. 1, the method includes:
- Multimodal medical images refer to image data with different imaging principles, including Nuclear Magnetic Resonance (NMR) image data, CT scan image data, X-ray image data, and ultrasound image data.
- NMR: Nuclear Magnetic Resonance.
- the full affine matching of the image specifically includes:
- NMR or CT image data can be selected for the registration image.
- the correspondence set is computed according to a 3-Points Convex Hull Matching (3PCHM) or an Iterative Closest Point (ICP) registration algorithm, and the rotation and translation matrix between the two images is obtained.
- the transformation matrix is applied via the affine transformation method to register the two images, achieving full affine matching.
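As a rough illustration of how an ICP-style fast registration iterates, the sketch below alternates nearest-neighbour matching with a closed-form SVD pose update. This is a generic textbook ICP, not the patent's 3PCHM variant, which differs in how correspondences are seeded:

```python
import numpy as np

def icp(ref_pts, mov_pts, iters=20):
    """Minimal ICP: returns R, t mapping mov_pts onto ref_pts."""
    R, t = np.eye(3), np.zeros(3)
    cur = mov_pts.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small point sets)
        d = np.linalg.norm(cur[:, None, :] - ref_pts[None, :, :], axis=2)
        matched = ref_pts[d.argmin(axis=1)]
        # closed-form (Kabsch) update on the matched pairs
        mc, rc = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mc).T @ (matched - rc)
        U, _, Vt = np.linalg.svd(H)
        s = np.sign(np.linalg.det(Vt.T @ U.T))
        dR = Vt.T @ np.diag([1.0, 1.0, s]) @ U.T
        dt = rc - dR @ mc
        cur = cur @ dR.T + dt
        R, t = dR @ R, dR @ t + dt   # accumulate the incremental pose
    return R, t
```

The brute-force matching is quadratic in the number of points; production implementations substitute a k-d tree for real-time use.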
- This key structure includes important human tissue structures such as blood vessels, nerves, and tumors during surgery.
- CUDA: Compute Unified Device Architecture.
- the hybrid rendering scene reconstruction method based on CUDA acceleration can greatly improve the rendering efficiency, reduce the calculation amount, and shorten the reconstruction time.
- the positional relationship between the endoscope and the surgical tool and the patient's body is calculated according to the registration.
- GED: Gaussian edge attenuation.
- semi-automatic registration of multimodal images is performed by the affine-transform-based registration method. By combining the different imaging properties of the same tissue across modalities, the characteristics of different tissue structures are displayed, and full affine transformation realizes invariant matching of the images, making it possible to exploit the rich anatomical information of multiple modalities simultaneously;
- the layered rendering method for the region of interest implements augmented reality guidance for the observation area: the position of the moving cube changes with the endoscope's pose in the display and rendering area, and, with CUDA acceleration, different rendering operations are applied simultaneously to the endoscope image and the virtual scene for that area, improving rendering speed, distance perception, and scene immersion;
- the embodiment of the present invention further provides an endoscopic surgical navigation method. As shown in FIG. 3, this method adds steps 105a and 105b to the method of FIG. 1.
- the real-time point cloud data is quickly registered, and the registration of the navigation image and the patient posture is corrected.
- the real-time registration of the patient's face during surgery is further improved by steps 105a and 105b.
- This process mainly completes the tracking of the patient's posture during the operation in order to overcome the shortcomings of tracking inaccuracy caused by the patient's movement. If the patient pose does not move during navigation, 105a and 105b will not be used.
- this scheme has important clinical and practical significance: it improves the real-time display during system tracking and prevents image misalignment and rendering errors during guidance.
- the embodiment of the invention further provides an endoscopic surgical navigation system, the system comprising:
- the computer is used for reading multimodal medical image data, using any medical image data in the multimodal medical image data as a reference image, and using other medical image data as a to-be-registered image to perform image full affine matching;
- the image data after full affine matching undergoes hybrid scene-reconstruction rendering to obtain a virtual scene;
- reference points are selected based on the CT image data in the multimodal medical image data, and marker points corresponding to the reference points are selected on the patient's body.
- the 3PCHM method or the ICP rapid registration calculation method is used to complete the registration of the CT image and the patient's posture;
- the binocular camera is used to track the endoscope and the surgical tool and obtain the pose relationship between the endoscope, the surgical tool, and the patient's body; the virtual scene view of the endoscope is obtained in the virtual scene according to this pose relationship;
- the computer is also used to locate the endoscope via the binocular camera, obtain the virtual scene view of the endoscope, perform Gaussian function attenuation on the edges of the endoscope's real-time image, and fuse it with the virtual scene view of the endoscope to realize layered rendering.
- the computer performs a hybrid scene reconstruction on the image data of the image with full affine matching and obtains a virtual scene, which specifically includes:
- the CUDA acceleration method is used to reconstruct the scene and obtain the virtual scene.
- the system also includes a depth camera.
- a depth camera is used to acquire real-time point cloud data of a patient's face as the patient moves during the procedure.
- the computer is further configured to quickly register the real-time point cloud data acquired by the depth camera based on the 3PCHM method, and correct the registration of the navigation image with the patient pose.
- the computer performs image full affine matching, which specifically includes:
- the corresponding set is calculated, and the rotation and translation matrix between the reference image and the image to be registered are obtained, and the full affine matching of the two images is realized.
- FIG. 4 is a schematic diagram of an application scenario and a navigation diagram of an endoscopic surgical navigation system according to an embodiment of the present invention.
- the figure includes a computer 41, a binocular camera 42, an endoscope 43 and a surgical tool 44, a depth camera 45, and a patient body 46.
- Marker points 47 are provided on the endoscope 43 and the surgical tool 44 so that the binocular camera can acquire them and determine the pose relationship.
- the computer 41 includes a central processing unit (CPU) for performing functions such as mathematical computation and image registration.
- a graphics processing unit (GPU) may also be included.
- the GPU primarily performs functions related to graphics processing.
- Figure 5 shows a CPU and GPU processing block diagram of an endoscopic surgical navigation system.
- the main functions of the CPU include: reading multimodal medical image data; segmentation and labeling of key structures in image data; multimodal image registration based on Affine transform and 3PCHM or ICP fast registration algorithm.
- the main functions of the GPU include: CUDA-accelerated hybrid rendering reconstruction; registration of the 3D volume data image with the patient; real-time tracking and registration based on the depth camera and the 3PCHM fast registration method; computing the positional relationship between the surgical tools and the patient; obtaining the relative relationship between the surgical tool, the human body, and the virtual view at any pose; and enhancing the layered rendering information of the region of interest.
- the present invention provides an endoscopic surgical navigation method and system. The method comprises: reading multimodal medical image data; using any one set of medical image data from the multimodal medical image data as a reference image and the other medical image data as images to be registered, and performing full affine image matching; performing hybrid scene-reconstruction rendering to obtain a virtual scene; completing registration of the CT navigation image with the patient pose using the 3PCHM method or the ICP fast registration method; acquiring the virtual scene view of the endoscope in the virtual scene according to the pose relationship between the endoscope, the surgical tools, and the patient's body; and performing Gaussian function attenuation on the edges of the endoscope's real-time image and fusing it with the virtual scene view of the endoscope to realize layered scene rendering.
- the invention improves the image rendering speed and improves the navigation precision.
- the invention has industrial applicability.
Claims (10)
- 1. An endoscopic surgical navigation method, characterized by comprising: reading multimodal medical image data; using any one set of medical image data from the multimodal medical image data as a reference image and the other medical image data as images to be registered, and performing full affine image matching; performing hybrid scene-reconstruction rendering on the image data after full affine matching to obtain a virtual scene; selecting reference points based on the CT image data in the multimodal medical image data, selecting marker points corresponding to the reference points on the patient's body, and completing registration of the CT navigation image with the patient pose using the 3-point convex hull fast registration algorithm (3PCHM) or the ICP fast registration method; after completing the patient-pose registration, tracking the endoscope and surgical tools and acquiring the pose relationship between the endoscope, the surgical tools, and the patient's body; acquiring the virtual scene view of the endoscope in the virtual scene according to the obtained pose relationship; and performing Gaussian function attenuation on the edges of the endoscope's real-time image and fusing it with the virtual scene view of the endoscope to realize layered scene rendering.
- 2. The method according to claim 1, wherein performing hybrid scene-reconstruction rendering on the image data after full affine matching to obtain a virtual scene specifically comprises: segmenting and labeling key structures in the image data after full affine matching; performing fast rendering on the segmented and labeled image data; performing marching-cubes-based volume rendering on the image data after full affine matching; and, for the fast-rendered and volume-rendered image data, performing CUDA-accelerated hybrid scene-reconstruction rendering to obtain the virtual scene.
- 3. The method according to claim 1, wherein, before tracking the surgical tools and acquiring the pose relationship between the surgical tools and the patient's body, the method further comprises: acquiring real-time point cloud data of the patient's face when the patient moves during surgery; and quickly registering the real-time point cloud data based on the 3PCHM method to correct the registration of the navigation image with the patient pose.
- 4. The method according to claim 1, wherein using any one set of medical image data from the multimodal medical image data as the reference and the other medical image data as images to be registered to perform full affine image matching specifically comprises: selecting marker points in the image to be registered; selecting reference points in the reference image in a preset order, and establishing a correspondence set between the marker points of the image to be registered and the reference points of the reference image; and calculating, from the correspondence set, the rotation and translation matrix between the reference image and the image to be registered to achieve full affine matching of the two images.
- 5. The method according to claim 4, wherein calculating, from the correspondence set, the rotation and translation matrix between the reference image and the image to be registered to achieve full affine matching of the two images specifically comprises: computing the correspondence set according to the 3PCHM method to obtain the rotation and translation matrix between the reference image and the image to be registered, achieving full affine matching of the two images.
- 6. An endoscopic surgical navigation system, characterized by comprising a computer, a binocular camera, an endoscope, and surgical tools; the computer is configured to read multimodal medical image data; use any one set of medical image data from the multimodal medical image data as a reference image and the other medical image data as images to be registered, and perform full affine image matching; perform hybrid scene-reconstruction rendering on the image data after full affine matching to obtain a virtual scene; and select reference points based on the CT image data in the multimodal medical image data, select marker points corresponding to the reference points on the patient's body, and complete registration of the CT navigation image with the patient pose using the 3-point convex hull fast registration method (3PCHM) or the ICP fast registration method; the binocular camera is configured to track the endoscope and the surgical tools and acquire the pose relationship between the endoscope, the surgical tools, and the patient's body, the virtual scene view of the endoscope being acquired in the virtual scene according to the obtained pose relationship; the computer is further configured to locate the endoscope via the binocular camera, obtain the corresponding virtual scene view of the endoscope, perform Gaussian function attenuation on the edges of the endoscope's real-time image, and fuse it with the virtual scene view of the endoscope to realize layered rendering.
- 7. The system according to claim 6, wherein the computer performing hybrid scene-reconstruction rendering on the image data after full affine matching to obtain a virtual scene specifically comprises: segmenting and labeling key structures in the image data after full affine matching; performing fast rendering on the segmented and labeled image data; performing marching-cubes-based volume rendering on the image data after full affine matching; and, for the fast-rendered and volume-rendered image data, performing CUDA-accelerated hybrid scene-reconstruction rendering to obtain the virtual scene.
- 8. The system according to claim 6, further comprising: a depth camera configured to acquire real-time point cloud data of the patient's face when the patient moves during surgery; the computer is further configured to quickly register the real-time point cloud data acquired by the depth camera based on the 3PCHM method, correcting the registration of the navigation image with the patient pose.
- 9. The system according to claim 6, wherein the computer using any one set of medical image data from the multimodal medical image data as a reference image and the other medical image data as images to be registered to perform full affine image matching specifically comprises: selecting marker points in the image to be registered; selecting reference points in the reference image in a preset order, and establishing a correspondence set between the marker points of the image to be registered and the reference points of the reference image; and calculating, from the correspondence set, the rotation and translation matrix between the reference image and the image to be registered to achieve full affine matching of the two images.
- 10. The system according to claim 9, wherein calculating, from the correspondence set, the rotation and translation matrix between the reference image and the image to be registered to achieve full affine matching of the two images specifically comprises: computing the correspondence set according to the 3PCHM method to obtain the rotation and translation matrix between the reference image and the image to be registered, achieving full affine matching of the two images.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610392750.7A CN107456278B (zh) | 2016-06-06 | 2016-06-06 | 一种内窥镜手术导航方法和系统 |
CN201610392750.7 | 2016-06-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017211087A1 (zh) | 2017-12-14 |
Family
ID=60544598
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/071006 WO2017211087A1 (zh) | 2016-06-06 | 2017-01-12 | 一种内窥镜手术导航方法和系统 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107456278B (zh) |
WO (1) | WO2017211087A1 (zh) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111581710A (zh) * | 2020-05-19 | 2020-08-25 | 北京数字绿土科技有限公司 | 架空输电线路杆塔挠度自动获取方法及装置 |
CN113012126A (zh) * | 2021-03-17 | 2021-06-22 | 武汉联影智融医疗科技有限公司 | 标记点重建方法、装置、计算机设备和存储介质 |
CN113521499A (zh) * | 2020-04-22 | 2021-10-22 | 西门子医疗有限公司 | 用于产生控制信号的方法 |
CN114145846A (zh) * | 2021-12-06 | 2022-03-08 | 北京理工大学 | 基于增强现实辅助的手术导航方法及系统 |
CN114191078A (zh) * | 2021-12-29 | 2022-03-18 | 上海复旦数字医疗科技股份有限公司 | 一种基于混合现实的内窥镜手术导航机器人系统 |
CN114581635A (zh) * | 2022-03-03 | 2022-06-03 | 上海涞秋医疗科技有限责任公司 | 一种基于HoloLens眼镜的定位方法及系统 |
CN114191078B (zh) * | 2021-12-29 | 2024-04-26 | 上海复旦数字医疗科技股份有限公司 | 一种基于混合现实的内窥镜手术导航机器人系统 |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108272513B (zh) * | 2018-01-26 | 2021-03-16 | 智美康民(珠海)健康科技有限公司 | 临床定位方法、装置、计算机设备和存储介质 |
CN108324369B (zh) * | 2018-02-01 | 2019-11-22 | 艾瑞迈迪医疗科技(北京)有限公司 | 基于面的术中配准方法及神经导航设备 |
CN111166473A (zh) * | 2018-12-04 | 2020-05-19 | 艾瑞迈迪科技石家庄有限公司 | 一种髋膝关节置换手术的导航方法和系统 |
CN112315582B (zh) * | 2019-08-05 | 2022-03-25 | 罗雄彪 | 一种手术器械的定位方法、系统及装置 |
CN110368089A (zh) * | 2019-08-07 | 2019-10-25 | 湖南省华芯医疗器械有限公司 | 一种支气管内窥镜三维导航方法 |
CN110522516B (zh) * | 2019-09-23 | 2021-02-02 | 杭州师范大学 | 一种用于手术导航的多层次交互可视化方法 |
CN111784664B (zh) * | 2020-06-30 | 2021-07-20 | 广州柏视医疗科技有限公司 | 肿瘤淋巴结分布图谱生成方法 |
CN113808181A (zh) * | 2020-10-30 | 2021-12-17 | 上海联影智能医疗科技有限公司 | 医学图像的处理方法、电子设备和存储介质 |
CN113077433B (zh) * | 2021-03-30 | 2023-04-07 | 山东英信计算机技术有限公司 | 基于深度学习的肿瘤靶区云检测装置、系统、方法及介质 |
CN114305684B (zh) * | 2021-12-06 | 2024-04-12 | 南京航空航天大学 | 一种自主多自由度扫描型内窥镜微创手术导航装置及其系统 |
CN116416414B (zh) * | 2021-12-31 | 2023-09-22 | 杭州堃博生物科技有限公司 | 肺部支气管镜导航方法、电子装置及计算机可读存储介质 |
CN115281584B (zh) * | 2022-06-30 | 2023-08-15 | 中国科学院自动化研究所 | 柔性内窥镜机器人控制系统及柔性内窥镜机器人模拟方法 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6167296A (en) * | 1996-06-28 | 2000-12-26 | The Board Of Trustees Of The Leland Stanford Junior University | Method for volumetric image navigation |
CN101797182A (zh) * | 2010-05-20 | 2010-08-11 | 北京理工大学 | 一种基于增强现实技术的鼻内镜微创手术导航系统 |
US20120046521A1 (en) * | 2010-08-20 | 2012-02-23 | Mark Hunter | Systems, instruments, and methods for four dimensional soft tissue navigation |
CN102999902A (zh) * | 2012-11-13 | 2013-03-27 | 上海交通大学医学院附属瑞金医院 | 基于ct配准结果的光学导航定位系统及其导航方法 |
CN103371870A (zh) * | 2013-07-16 | 2013-10-30 | 深圳先进技术研究院 | 一种基于多模影像的外科手术导航系统 |
CN103356155B (zh) * | 2013-06-24 | 2014-12-31 | 清华大学深圳研究生院 | 虚拟内窥镜辅助的腔体病灶检查系统 |
CN104434313A (zh) * | 2013-09-23 | 2015-03-25 | 中国科学院深圳先进技术研究院 | 一种腹部外科手术导航方法及系统 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080287909A1 (en) * | 2007-05-17 | 2008-11-20 | Viswanathan Raju R | Method and apparatus for intra-chamber needle injection treatment |
US9439623B2 (en) * | 2012-05-22 | 2016-09-13 | Covidien Lp | Surgical planning system and navigation system |
CN103040525B (zh) * | 2012-12-27 | 2016-08-03 | 深圳先进技术研究院 | 一种多模医学影像手术导航方法及系统 |
GB2524498A (en) * | 2014-03-24 | 2015-09-30 | Scopis Gmbh | Electromagnetic navigation system for microscopic surgery |
- 2016-06-06: CN application CN201610392750.7A filed; patent CN107456278B granted (status: active)
- 2017-01-12: PCT application PCT/CN2017/071006 (WO2017211087A1) filed (active application filing)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6167296A (en) * | 1996-06-28 | 2000-12-26 | The Board Of Trustees Of The Leland Stanford Junior University | Method for volumetric image navigation |
CN101797182A (zh) * | 2010-05-20 | 2010-08-11 | 北京理工大学 | 一种基于增强现实技术的鼻内镜微创手术导航系统 |
US20120046521A1 (en) * | 2010-08-20 | 2012-02-23 | Mark Hunter | Systems, instruments, and methods for four dimensional soft tissue navigation |
CN102999902A (zh) * | 2012-11-13 | 2013-03-27 | 上海交通大学医学院附属瑞金医院 | 基于ct配准结果的光学导航定位系统及其导航方法 |
CN103356155B (zh) * | 2013-06-24 | 2014-12-31 | 清华大学深圳研究生院 | 虚拟内窥镜辅助的腔体病灶检查系统 |
CN103371870A (zh) * | 2013-07-16 | 2013-10-30 | 深圳先进技术研究院 | 一种基于多模影像的外科手术导航系统 |
CN104434313A (zh) * | 2013-09-23 | 2015-03-25 | 中国科学院深圳先进技术研究院 | 一种腹部外科手术导航方法及系统 |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113521499A (zh) * | 2020-04-22 | 2021-10-22 | 西门子医疗有限公司 | 用于产生控制信号的方法 |
CN113521499B (zh) * | 2020-04-22 | 2024-02-13 | 西门子医疗有限公司 | 用于产生控制信号的方法 |
CN111581710A (zh) * | 2020-05-19 | 2020-08-25 | 北京数字绿土科技有限公司 | 架空输电线路杆塔挠度自动获取方法及装置 |
CN111581710B (zh) * | 2020-05-19 | 2021-04-13 | 北京数字绿土科技有限公司 | 架空输电线路杆塔挠度自动获取方法及装置 |
CN113012126A (zh) * | 2021-03-17 | 2021-06-22 | 武汉联影智融医疗科技有限公司 | 标记点重建方法、装置、计算机设备和存储介质 |
CN113012126B (zh) * | 2021-03-17 | 2024-03-22 | 武汉联影智融医疗科技有限公司 | 标记点重建方法、装置、计算机设备和存储介质 |
CN114145846A (zh) * | 2021-12-06 | 2022-03-08 | 北京理工大学 | 基于增强现实辅助的手术导航方法及系统 |
CN114145846B (zh) * | 2021-12-06 | 2024-01-09 | 北京理工大学 | 基于增强现实辅助的手术导航方法及系统 |
CN114191078A (zh) * | 2021-12-29 | 2022-03-18 | 上海复旦数字医疗科技股份有限公司 | 一种基于混合现实的内窥镜手术导航机器人系统 |
CN114191078B (zh) * | 2021-12-29 | 2024-04-26 | 上海复旦数字医疗科技股份有限公司 | 一种基于混合现实的内窥镜手术导航机器人系统 |
CN114581635A (zh) * | 2022-03-03 | 2022-06-03 | 上海涞秋医疗科技有限责任公司 | 一种基于HoloLens眼镜的定位方法及系统 |
CN114581635B (zh) * | 2022-03-03 | 2023-03-24 | 上海涞秋医疗科技有限责任公司 | 一种基于HoloLens眼镜的定位方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
CN107456278B (zh) | 2021-03-05 |
CN107456278A (zh) | 2017-12-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017211087A1 (zh) | 一种内窥镜手术导航方法和系统 | |
US11883118B2 (en) | Using augmented reality in surgical navigation | |
CN103040525B (zh) | 一种多模医学影像手术导航方法及系统 | |
Chu et al. | Registration and fusion quantification of augmented reality based nasal endoscopic surgery | |
Collins et al. | Augmented reality guided laparoscopic surgery of the uterus | |
CN107067398B (zh) | 用于三维医学模型中缺失血管的补全方法及装置 | |
WO2013111535A1 (ja) | 内視鏡画像診断支援装置および方法並びにプログラム | |
WO2015161728A1 (zh) | 三维模型的构建方法及装置、图像监控方法及装置 | |
CN114145846B (zh) | 基于增强现实辅助的手术导航方法及系统 | |
US20160228075A1 (en) | Image processing device, method and recording medium | |
JP2016511049A (ja) | デュアルデータ同期を用いた解剖学的部位の位置の再特定 | |
CN107689045B (zh) | 内窥镜微创手术导航的图像显示方法、装置及系统 | |
CN101797182A (zh) | 一种基于增强现实技术的鼻内镜微创手术导航系统 | |
US11961193B2 (en) | Method for controlling a display, computer program and mixed reality display device | |
CN112641514B (zh) | 一种微创介入导航系统与方法 | |
EP3110335B1 (en) | Zone visualization for ultrasound-guided procedures | |
Liu et al. | Intraoperative image‐guided transoral robotic surgery: pre‐clinical studies | |
JP2014064722A (ja) | 仮想内視鏡画像生成装置および方法並びにプログラム | |
Li et al. | A fully automatic surgical registration method for percutaneous abdominal puncture surgical navigation | |
JP2022517807A (ja) | 医療用ナビゲーションのためのシステムおよび方法 | |
CN115105204A (zh) | 一种腹腔镜增强现实融合显示方法 | |
Wu et al. | Process analysis and application summary of surgical navigation system | |
CN111743628A (zh) | 一种基于计算机视觉的自动穿刺机械臂路径规划的方法 | |
Lange et al. | Development of navigation systems for image-guided laparoscopic tumor resections in liver surgery | |
Bartholomew et al. | Surgical navigation in the anterior skull base using 3-dimensional endoscopy and surface reconstruction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17809528 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17809528 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29/07/2019) |
|