WO2017036023A1 - Positioning system for use in surgical operation - Google Patents

Positioning system for use in surgical operation

Info

Publication number
WO2017036023A1
WO2017036023A1 (PCT/CN2015/099144)
Authority
WO
WIPO (PCT)
Prior art keywords
module
image
processing module
data
central processing
Prior art date
Application number
PCT/CN2015/099144
Other languages
French (fr)
Chinese (zh)
Inventor
樊昊
Original Assignee
北京医千创科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京医千创科技有限公司
Publication of WO2017036023A1 publication Critical patent/WO2017036023A1/en

Definitions

  • The present invention relates to the field of medical device technology and, in particular, to a surgical positioning system.
  • Laparoscopes, endoscopes (e.g., gastroscopes, colonoscopes, bronchoscopes), and surgical robots are all representative minimally invasive technologies. In these technologies, various cameras are the main observation tools. They replace the human eye and chiefly perform two tasks: 1. identifying lesions and their locations in the body; 2. identifying surgical instruments and their positions in the body.
  • Surgical instruments are relatively large, so a camera has no difficulty recognizing them. Identifying lesions, however, especially early-stage lesions, is difficult for a camera.
  • The reasons are: 1. Camera imaging relies on visible light. Visible light reveals lesions or tissue structures at the surface but not those hidden in deeper layers. In laparoscopic surgery, for example, the camera can see a large tumor but not the deep blood vessels supplying it.
  • 2. The technique that finds a lesion before surgery is not necessarily camera-based; it may be another imaging examination such as ultrasound, MRI, or CT. These modalities acquire raw signals differently from cameras, and each excels at detecting different kinds of lesions.
  • Some early lesions can be found early with those other modalities but only much later with a camera. For example, some early breast cancers detected by MRI or molybdenum-target mammography look little different from normal tissue under a camera and are hard to distinguish.
  • The patent CN200680020112 describes a technique that equips a surgical robot with a laparoscopic ultrasound probe used specifically during surgery. The probe produces 2D images from which a processor can generate at least part of a 3D anatomical image. That image and the camera image are both sent to the processor and, after processing, shown on the display with the camera image as the main picture and the ultrasound image as an auxiliary picture. The design can also compare the 3D camera view against 2D ultrasound image slices.
  • The patent CN201310298142 describes another technique: a preoperative 3D image is converted into a virtual ultrasound image and registered against intraoperative ultrasound; the resulting image is then fused with the intraoperative endoscope image, and postoperative evaluation is finally performed on a cloud platform.
  • The CN201310298142 patent has further problems: 1) its cloud-server function comes last, placed at the final step of the workflow, the postoperative evaluation stage; 2) the cloud-server function runs in parallel with the other registration functions, reducing the user's dependence on the cloud server. A doctor can use the system without the cloud server and still complete preoperative 3D image acquisition, fusion with the intraoperative real-time ultrasound image, and fusion of the newly fused image with the intraoperative camera image. These two points force a large amount of computation onto the local processor, imposing nontrivial requirements on its configuration. Mobile and wearable devices are inherently size-constrained and, compared with desktops or workstations, cannot easily meet those requirements.
  • What is needed, therefore, is a system that requires no dedicated intraoperative ultrasound device yet can compare preoperative non-real-time imaging data against real-time intraoperative visible-light images; whose software-environment requirements are low enough that mobile or wearable devices can read the 3D images and even display the fusion results; and that theoretically guarantees that, as long as an early examination can locate a lesion, the lesion can be found during surgery. Such a positioning system has important medical significance for truly popularizing early detection and early treatment.
  • An object of the present invention is to provide a surgical positioning system that addresses the deficiencies of the positioning systems used with existing laparoscopes, medical endoscopes, and surgical robots: imaging-examination data, acquired with raw-signal methods different from the optical camera's, are first rendered into a 3D visualization on the cloud server and then fused with the optical camera's video or image data, improving the intraoperative lesion discovery rate.
  • A surgical positioning system that performs positioning by directly comparing real-time visible-light images against non-real-time imaging-examination images; the system comprises a DICOM data input module, a data visualization computation module, a visible-light image input module, a central processing module, and an image display output module.
  • The data visualization computation module resides in the cloud and is connected to the central processing module; it receives data from the DICOM data input module, renders the received data into a 3D visualization, and passes the patient's 3D model data to the central processing module and/or the image display output module.
  • The DICOM data input module is connected to the data visualization computation module and uploads the examination data in DICOM file format.
  • The visible-light image input module is connected to the central processing module and transmits real-time intraoperative image data to it.
  • The central processing module receives the image data from the visible-light image input module and the 3D model data from the visualization module.
  • The image display output module is divided into a pre-central-processing display output module and a post-central-processing display output module; the two exist independently and run separately. The pre-central-processing display output module is connected to the cloud visualization module and displays the 3D model; the post-central-processing display output module is connected to the central processing module and displays the optical image together with the 3D model.
  • Figure 1 is a schematic view showing the structure of a surgical positioning system.
  • As shown in FIG. 1, a structural diagram of the surgical positioning system, the system performs positioning by directly comparing real-time visible-light images against non-real-time imaging-examination images; it comprises a DICOM data input module 100, a data visualization computation module 200, a visible-light image input module 300, a central processing module 400, and an image display output module.
  • The data visualization computation module resides in the cloud and is connected to the central processing module; it receives data from the DICOM data input module, renders the received data into a 3D visualization, and passes the patient's 3D model data to the central processing module and/or the image display output module.
  • The DICOM data input module is connected to the data visualization computation module and uploads the examination data in DICOM file format.
  • The visible-light image input module is connected to the central processing module and transmits real-time intraoperative image data to it.
  • The central processing module receives the image data from the visible-light image input module and the 3D model data from the visualization module.
  • The image display output module is divided into a pre-central-processing display output module 501 and a post-central-processing display output module 502; the two exist independently and run separately.
  • The pre-central-processing display output module 501 is connected to the cloud visualization module and displays the 3D model.
  • The post-central-processing display output module 502 is connected to the central processing module and displays the optical image together with the 3D model.
  • The doctor uploads the patient's CT data to the cloud server in DICOM file format.
  • After visualization, the 3D model data of the patient's kidney and the position of the stone within the kidney are passed to the central processing module.
  • The central processing module receives the image data from the flexible ureteroscope camera and the 3D data from the visualization module. Through registration and fusion, it determines the relative position of the scene in the lens within the patient's 3D model and the path that must be traveled to reach the stone buried in the diverticulum. By advancing along the path indicated by the central processing module, the diverticular stone buried in the tissue can be found.
  • A patient with a kidney tumor needs laparoscopic partial nephrectomy.
  • The doctor uploads the patient's kidney CT data to the cloud server in DICOM file format.
  • After visualization, the 3D model data of the patient's kidney, the position of the tumor within the kidney, and the locations of the renal blood vessels are passed to the central processing module.
  • The central processing module, which is also located in the cloud, receives the image data from the laparoscope camera and the 3D data from the visualization module.
  • Through registration and fusion, it determines the course of the blood vessels beneath the renal capsule that supply the tumor, and displays it on the wearable device (glasses).
  • The doctor can thus selectively block only the vessels supplying the kidney tumor and complete the operation. This avoids the conventional approach, which requires blocking larger arteries and veins and so causes more extensive renal ischemia and impaired renal function.
  • A patient with peripheral lung cancer needs a bronchoscopic biopsy.
  • The doctor uploads the patient's DICOM files to the cloud server.
  • After visualization, the 3D model data of the patient's lungs and bronchi at all levels, the position of the tumor in the lungs, and the locations of the blood vessels around the tumor are passed to the central processing module.
  • The central processing module receives the image data from the bronchoscope camera and the 3D data from the visualization module. Through registration and fusion, it determines the relative position of the bronchoscope lens within the patient's 3D model, which bronchial branches must be traversed to reach the tumor, and which blood vessels around the tumor must be avoided during biopsy. It can even help identify and select a biopsy site at the tumor margin, because the detection rate of cancer cells is higher at the tumor's edge than at its center (the proportion of necrotic cells at the tumor center is too high).
  • A breast cancer patient needs a total endoscopic mastectomy. Before surgery, tiny early breast cancer lesions were found by MRI; during surgery the camera alone can hardly identify the cancerous lesions.
  • The doctor sends the patient's MRI DICOM files to the medical data visualization module. After visualization, the 3D model data of the patient's breast together with the tumor are passed to the central processing module.
  • The central processing module receives the image data from the endoscope camera and the 3D data from the visualization module. Through registration and fusion, it determines the relative position of the surgical instruments within the patient's 3D model and the direction in which they must move to reach the tumor, so that the tumor the camera cannot easily recognize is finally reached and removed.
  • A patient needs robot-assisted partial hepatectomy.
  • The patient's color Doppler ultrasound shows the location of the liver cancer and of the abnormally proliferating tumor vessels.
  • The doctor sends the patient's preoperative color ultrasound DICOM files to the medical data visualization module.
  • After visualization, the 3D model data of the patient's liver, tumor, and blood vessels are transmitted to the central processing module and to a mobile phone.
  • Before surgery, the doctor gains a general understanding of the vessel distribution at the surgical site from the phone.
  • During surgery, the central processing module receives the image data from the robot's camera and the 3D data from the visualization module. Through registration and fusion, it determines the relative position of the surgical instruments within the patient's 3D model, the direction in which they must move to reach the tumor, and where the abnormally proliferating vessels are buried, helping the doctor find a surgical path that avoids those vessels and finally remove the tumor with ease.

Landscapes

  • Endoscopes (AREA)

Abstract

A positioning system for use in a surgical operation, which performs positioning by directly comparing a real-time visible-light image against a non-real-time imaging-examination image. The system comprises: a DICOM data input module (100), a data visualization computation and processing module (200), a visible-light image input module (300), a central processing module (400), and an image display output module. The system generates a non-real-time 3D imaging model from imaging-examination data and then fuses that model with the real-time camera image taken during the operation, thereby reducing the equipment required in the operating room: no dedicated laparoscopic or endoscopic ultrasound equipment is needed, only standard preoperative examination results. Theoretically, the system can ensure that the location of a pathological change appears in the 3D model, so the camera need only track the relative positions of the surgical instruments and the major anatomical structures on the 3D map.

Description

Surgical positioning system

Technical field

The present invention relates to the field of medical device technology and, in particular, to a surgical positioning system.

Background art
Medical examination and treatment has entered the era of minimal invasiveness. Laparoscopes, endoscopes (e.g., gastroscopes, colonoscopes, bronchoscopes), and surgical robots are all representative minimally invasive technologies. In these technologies, various cameras are the main observation tools. They replace the human eye and chiefly perform two tasks: 1. identifying lesions and their locations in the body; 2. identifying surgical instruments and their positions in the body.

Surgical instruments are relatively large, so a camera has no difficulty recognizing them. Identifying lesions, however, especially early-stage lesions, is difficult for a camera. The reasons are: 1. Camera imaging relies on visible light, which reveals lesions or tissue structures at the surface but not those hidden in deeper layers. In laparoscopic surgery, for example, the camera can see a large tumor but not the deep blood vessels supplying it. 2. The technique that finds a lesion before surgery is not necessarily camera-based; it may be another imaging examination such as ultrasound, MRI, or CT. These modalities acquire raw signals differently from cameras, and each excels at detecting different kinds of lesions. Some early lesions can be found early with those other modalities but only much later with a camera. For example, some early breast cancers detected by MRI or molybdenum-target mammography look little different from normal tissue under a camera and are hard to distinguish.

At present the common approaches are: 1. Operate early: before surgery the doctor combines the imaging findings and manually estimates an approximate lesion region (for example, the outer quadrant of the left breast) to narrow the intraoperative search, hoping to raise the probability of finding the lesion. 2. Keep observing, and operate later once the lesion has grown and is no longer so early. Both approaches still have shortcomings. The latter clearly prolongs the disease and delays treatment. The former may somewhat increase the chance of early discovery, but localization based on human estimation is still imprecise, a great deal of intraoperative time and effort must still be spent on localization, and detection of the lesion still cannot be guaranteed even in theory.
Some have proposed new solutions. The patent CN200680020112 describes a technique that equips a surgical robot with a laparoscopic ultrasound probe used specifically during surgery. The probe produces 2D images from which a processor can generate at least part of a 3D anatomical image. That image and the camera image are both sent to the processor and, after processing, shown on the display with the camera image as the main picture and the ultrasound image as an auxiliary picture. The design can also compare the 3D camera view against 2D ultrasound image slices.

The patent CN201310298142 describes another technique: a preoperative 3D image is converted into a virtual ultrasound image and registered against intraoperative ultrasound; the resulting image is then fused with the intraoperative endoscope image, and postoperative evaluation is finally performed on a cloud platform.

Both patents compare real-time ultrasound images against real-time camera images, and both introduce the concept of an endoscopic or laparoscopic ultrasound probe. Because the images are captured at the same time and place, the processor is spared the problem of deciding which ultrasound image should be compared against the current camera picture, which simplifies the software; the price is the purchase of dedicated hardware, an endoscopic or laparoscopic ultrasound probe. Neither scheme can be used in hospitals that lack such a probe, which limits its range of application. Real clinical environments need a solution that reduces dependence on hardware by improving the software.

The CN201310298142 patent has further problems: 1) its cloud-server function comes last, placed at the final step of the workflow, the postoperative evaluation stage; 2) the cloud-server function runs in parallel with the other registration functions, reducing the user's dependence on the cloud server. A doctor can use the system without the cloud server and still complete preoperative 3D image acquisition, fusion with the intraoperative real-time ultrasound image, and fusion of the newly fused image with the intraoperative camera image. These two points force a large amount of computation onto the local processor, imposing nontrivial requirements on its configuration. Mobile and wearable devices are inherently size-constrained and, compared with desktops or workstations, cannot easily meet those requirements.

What is needed, therefore, is a system that requires no dedicated intraoperative ultrasound device yet can compare preoperative non-real-time imaging data against real-time intraoperative visible-light images; whose software-environment requirements are low enough that mobile or wearable devices can read the 3D images and even display the fusion results; and that theoretically guarantees that, as long as an early examination can locate a lesion, the lesion can be found during surgery. Such a system has important medical significance for truly popularizing early detection and early treatment.
Summary of the invention

To solve the above technical problems, an object of the present invention is to provide a surgical positioning system that addresses the deficiencies of the positioning systems used with existing laparoscopes, medical endoscopes, and surgical robots: imaging-examination data, acquired with raw-signal methods different from the optical camera's, are first rendered into a 3D visualization on the cloud server and then fused with the optical camera's video or image data, improving the intraoperative lesion discovery rate.

The object of the invention is achieved by the following technical solutions:

A surgical positioning system that performs positioning by directly comparing real-time visible-light images against non-real-time imaging-examination images. The system comprises a DICOM data input module, a data visualization computation module, a visible-light image input module, a central processing module, and an image display output module.
The data visualization computation module resides in the cloud and is connected to the central processing module; it receives data from the DICOM data input module, renders the received data into a 3D visualization, and passes the patient's 3D model data to the central processing module and/or the image display output module.

The DICOM data input module is connected to the data visualization computation module and uploads the examination data in DICOM file format.
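The uploaded DICOM files carry the geometric tags the cloud module needs to place each CT voxel in patient space. As a minimal illustration (pure Python, not part of the patent), the standard DICOM mapping from a pixel index to patient coordinates uses the Image Position (Patient), Image Orientation (Patient), and Pixel Spacing attributes:

```python
# Map a pixel index (row, col) in one DICOM slice to patient coordinates,
# using the standard DICOM attributes Image Position (Patient),
# Image Orientation (Patient), and Pixel Spacing. A real system would
# read these tags from the uploaded DICOM files.

def pixel_to_patient(ipp, iop, pixel_spacing, row, col):
    """ipp: Image Position (Patient), 3 floats.
    iop: Image Orientation (Patient), 6 floats: row direction cosines
         (along increasing column index) then column direction cosines
         (along increasing row index).
    pixel_spacing: (spacing between rows, spacing between columns) in mm.
    Returns the (x, y, z) patient coordinate of the pixel centre."""
    rx, ry, rz, cx, cy, cz = iop
    dr, dc = pixel_spacing           # row spacing, column spacing
    return (
        ipp[0] + rx * dc * col + cx * dr * row,
        ipp[1] + ry * dc * col + cy * dr * row,
        ipp[2] + rz * dc * col + cz * dr * row,
    )

# An axial slice: rows run along +y, columns along +x, 0.5 mm spacing.
p = pixel_to_patient(ipp=(-100.0, -100.0, 50.0),
                     iop=(1, 0, 0, 0, 1, 0),
                     pixel_spacing=(0.5, 0.5),
                     row=10, col=20)
print(p)  # -> (-90.0, -95.0, 50.0)
```

This is the Image Plane Module geometry defined by the DICOM standard; the cloud visualization step would apply it slice by slice to assemble the 3D volume.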
The visible-light image input module is connected to the central processing module and transmits real-time intraoperative image data to it.

The central processing module receives the image data from the visible-light image input module and the 3D model data from the visualization module.

The image display output module is divided into a pre-central-processing display output module and a post-central-processing display output module; the two exist independently and run separately. The pre-central-processing display output module is connected to the cloud visualization module and displays the 3D model; the post-central-processing display output module is connected to the central processing module and displays the optical image together with the 3D model.
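A rough sketch of the serial module chain just described (class and method names are invented for illustration; the patent specifies only the modules and dataflow). The key design point is that the cloud visualization step sits in series between the DICOM input and the central processing module, so it cannot be bypassed:

```python
# Illustrative sketch of the serial module chain; names are hypothetical.

class DicomInputModule:                      # module 100
    def upload(self, dicom_files):
        return {"format": "DICOM", "files": dicom_files}

class CloudVisualizationModule:              # module 200, cloud-side
    def build_3d_model(self, dicom_payload):
        # Stand-in for the heavy cloud-side 3D reconstruction.
        return {"model": "3D", "source": dicom_payload["files"]}

class CentralProcessingModule:               # module 400
    def fuse(self, model, camera_frame):
        # Stand-in for registration + fusion of model and live image.
        return {"fused": True, "model": model, "frame": camera_frame}

def run_pipeline(dicom_files, camera_frame):
    payload = DicomInputModule().upload(dicom_files)
    model = CloudVisualizationModule().build_3d_model(payload)  # mandatory cloud step
    return CentralProcessingModule().fuse(model, camera_frame)

out = run_pipeline(["ct_001.dcm", "ct_002.dcm"], camera_frame="frame_0")
print(out["fused"])  # -> True
```

Because the stages are chained rather than parallel, the local device only ever handles the finished 3D model, which is what lets low-end mobile or wearable hardware participate.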
Compared with the prior art, one or more embodiments of the present invention may have the following advantages:

1. A non-real-time preoperative 3D imaging model is generated from imaging-examination data and then fused with the real-time intraoperative camera image. This lowers the requirements on intraoperative equipment: no dedicated laparoscopic or endoscopic ultrasound device is needed, only routine preoperative imaging results. The method theoretically ensures that 100% of lesion locations appear on the 3D model; the camera need only track the corresponding positions of the surgical instruments and landmark anatomical structures on the 3D map.

2. The bulk of the complex computation is first completed in the cloud, producing the 3D model, and the cloud computing module is connected in series with the other modules. This enforces use of the cloud module, lowers the system's demands on the local hardware environment, and makes it easy to display the 3D images on low-end mobile or wearable devices during preoperative planning.
Brief description of the drawings

Figure 1 is a schematic diagram of the structure of the surgical positioning system.

Detailed description

To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the embodiments and the accompanying drawing.
As shown in FIG. 1, a structural diagram of the surgical positioning system, the system performs positioning by directly comparing real-time visible-light images against non-real-time imaging-examination images. It comprises a DICOM data input module 100, a data visualization computation module 200, a visible-light image input module 300, a central processing module 400, and an image display output module.

The data visualization computation module resides in the cloud and is connected to the central processing module; it receives data from the DICOM data input module, renders the received data into a 3D visualization, and passes the patient's 3D model data to the central processing module and/or the image display output module.

The DICOM data input module is connected to the data visualization computation module and uploads the examination data in DICOM file format.

The visible-light image input module is connected to the central processing module and transmits real-time intraoperative image data to it.

The central processing module receives the image data from the visible-light image input module and the 3D model data from the visualization module.

The image display output module is divided into a pre-central-processing display output module 501 and a post-central-processing display output module 502; the two exist independently and run separately. The pre-central-processing display output module 501 is connected to the cloud visualization module and displays the 3D model; the post-central-processing display output module 502 is connected to the central processing module and displays the optical image together with the 3D model.
The implementation of the above embodiment is described in detail through the following examples:

Example 1

A plain CT scan of a patient with a renal diverticular stone revealed a stone buried in the renal parenchyma (a diverticular stone); contrast-enhanced CT showed the internal luminal structure of the kidney and ureter (the collecting system). The doctor uploads the patient's CT data to the cloud server in DICOM file format. After visualization, the 3D model data of the patient's kidney and the position of the stone within the kidney are passed to the central processing module. The central processing module receives the image data from the flexible ureteroscope camera and the 3D data from the visualization module; through registration and fusion, it determines the relative position of the scene in the lens within the patient's 3D model and the path that must be traveled to reach the stone buried in the diverticulum. By advancing along the path indicated by the central processing module, the diverticular stone buried in the tissue can be found.
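Once registration has estimated a rigid transform from camera coordinates into model coordinates, any point seen through the scope can be located in the patient's 3D model. A minimal sketch under the simplifying assumption of a known rotation about the z axis plus a translation (the patent does not specify the registration algorithm, so the transform parameters here are hypothetical):

```python
import math

# Apply a rigid transform (rotation about z, then translation) to place a
# camera-frame point -- e.g. the scope tip -- in 3D-model coordinates.
# The transform would come from the registration/fusion step; here its
# parameters are made up for illustration.

def rigid_transform(point, angle_z_rad, translation):
    x, y, z = point
    c, s = math.cos(angle_z_rad), math.sin(angle_z_rad)
    xr = c * x - s * y            # rotate in the xy plane
    yr = s * x + c * y
    tx, ty, tz = translation
    return (xr + tx, yr + ty, z + tz)

# Scope tip at (10, 0, 5) in camera coordinates; suppose registration
# found a 90-degree rotation about z and a translation of (1, 2, 3).
tip_model = rigid_transform((10.0, 0.0, 5.0), math.pi / 2, (1.0, 2.0, 3.0))
print([round(v, 6) for v in tip_model])  # -> [1.0, 12.0, 8.0]
```

With the tip expressed in model coordinates, the path to the stone reduces to geometry inside the 3D model rather than anything the camera must see directly.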
Example 2

A patient with a kidney tumor needs laparoscopic partial nephrectomy. The doctor uploads the patient's kidney CT data to the cloud server in DICOM file format. After visualization, the 3D model data of the patient's kidney, the position of the tumor within the kidney, and the locations of the renal blood vessels are passed to the central processing module. The central processing module, which is also located in the cloud, receives the image data from the laparoscope camera and the 3D data from the visualization module; through registration and fusion, it determines the course of the blood vessels beneath the renal capsule that supply the tumor and displays it on the wearable device (glasses). The doctor can thus selectively block only the vessels supplying the kidney tumor and complete the operation, avoiding the conventional approach of blocking larger arteries and veins, which causes more extensive renal ischemia and impaired renal function.
Example 3
A patient with peripheral lung cancer requires a bronchoscopic biopsy. The doctor uploads the patient's DICOM files to the cloud server. After data-visualization processing, the 3D model of the patient's lungs and bronchial tree, the position of the tumor in the lung, and the positions of the blood vessels around the tumor are passed to the central processing module. The central processing module receives the image data from the bronchoscope camera and the 3D data from the data-visualization processing module; through registration and fusion, it determines where in the patient's 3D model the bronchoscope tip lies, which bronchial branches must be traversed to reach the tumor, and whether there are vessels near the tumor that must be avoided during biopsy. It can even help identify and target the tumor margin for biopsy, because the cancer-cell detection rate is higher at the tumor margin than at its center (the proportion of necrotic cells at the tumor center is too high).
Example 4
A breast cancer patient requires a totally endoscopic mastectomy. Preoperatively, tiny early breast cancer lesions were found by MRI; intraoperatively, the camera alone cannot identify the cancerous lesions. The doctor sends the patient's MRI DICOM files to the medical data-visualization processing module. After data-visualization processing, the 3D model of the patient's breast together with the tumor is passed to the central processing module. The central processing module receives the image data from the endoscopic camera and the 3D data from the data-visualization processing module; through registration and fusion, it determines where in the patient's 3D model the surgical instrument lies and in which direction it must move to reach the tumor, so that the tumor the camera cannot easily identify is finally reached and resected.
Example 5
A liver cancer patient requires robot-assisted partial hepatectomy. Color Doppler ultrasound revealed the location of the liver tumor and the abnormally proliferating vessels feeding it. The doctor sends the patient's preoperative color ultrasound DICOM files to the medical data-visualization processing module. After data-visualization processing, the 3D model of the patient's liver, the tumor, and the vessels is passed both to the central processing module and to the doctor's mobile phone. Before surgery, the doctor can gain a general picture of the vascular distribution at the operative site on the phone. Intraoperatively, the central processing module receives the image data from the robot's camera and the 3D data from the data-visualization processing module; through registration and fusion, it determines where in the patient's 3D model the surgical instrument lies, in which direction it must move to reach the tumor, and where the abnormally proliferating vessels are buried, helping the doctor find an operative path that avoids those aberrant vessels and finally resect the tumor with ease.
While the embodiments of the invention are disclosed above, they are described only to facilitate understanding of the invention and are not intended to limit it. Any person skilled in the art to which the invention pertains may make modifications and variations in form and detail without departing from the spirit and scope disclosed herein; however, the scope of patent protection of the invention remains defined by the appended claims.

Claims (4)

  1. A surgical positioning system, characterized in that the system completes positioning through direct comparison between real-time visible-light images and non-real-time radiological images; the system comprises a DICOM data input module, a data-visualization processing module, a visible-light image input module, a central processing module, and an image display output module; wherein
    the data-visualization processing module is located in the cloud, is connected to the central processing module, receives data from the DICOM data input module, performs visualization processing on the received data, and transmits the patient's 3D model data to the central processing module and/or the image display output module;
    the DICOM data input module is connected to the data-visualization processing module and is used to upload the detected data in DICOM file format;
    the visible-light image input module is connected to the central processing module and is used to transmit intraoperative real-time image data to the central processing module;
    the central processing module is used to receive the image data from the visible-light image input module and the 3D model data from the visualization processing module;
    the image display output module is divided into a pre-central-processing display output module and a post-central-processing display output module; the two display output modules exist independently and run separately; the pre-central-processing display output module is connected to the cloud data-visualization processing module and is used to display the 3D model; the post-central-processing display output module is connected to the central processing module and is used to display the optical image and the 3D model.
  2. The surgical positioning system of claim 1, wherein the image signal source of the visible-light image input module is a camera, whose imaging principle differs from that of the examination equipment from which the DICOM data originates.
  3. The surgical positioning system of claim 1, wherein the central processing module performs direct comparison between the non-real-time radiological image and the real-time visible-light image.
  4. The surgical positioning system of claim 1, wherein the central processing module may be located at the local end or in the cloud.
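The module topology recited in claim 1 can be sketched as a few cooperating objects. This is a hypothetical illustration of the wiring only (every class name and return value below is invented for the sketch, not taken from the patent): DICOM data flows into the cloud-side visualization module, its 3D model output feeds the central processing module alongside the live camera frames, and the two display output modules run independently, one fed by the visualization module and one by the central processing module.

```python
class DataVisualizationModule:
    """Cloud-side module of claim 1: turns uploaded DICOM data into a 3D model."""
    def process(self, dicom_files):
        return {"model_3d": f"model from {len(dicom_files)} DICOM file(s)"}

class CentralProcessingModule:
    """Receives the live visible-light image and the 3D model, then registers/fuses them."""
    def fuse(self, camera_frame, model_3d):
        return {"overlay": (camera_frame, model_3d)}

class DisplayOutputModule:
    """Claim 1 splits display into pre- and post-processing outputs that run separately."""
    def __init__(self, stage):
        self.stage = stage
    def show(self, content):
        return f"[{self.stage}] {content}"

# Wiring that mirrors the claimed topology.
viz = DataVisualizationModule()
cpu = CentralProcessingModule()
pre_display = DisplayOutputModule("pre")    # connected to the cloud visualization module
post_display = DisplayOutputModule("post")  # connected to the central processing module

model = viz.process(["ct_series.dcm"])
fused = cpu.fuse("endoscope_frame_0", model["model_3d"])
shown = post_display.show(fused["overlay"])
print(shown)
```

Note that the pre-processing display (here `pre_display`) could show `model["model_3d"]` directly without the central processing module ever running, matching the claim's requirement that the two display outputs operate independently.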
PCT/CN2015/099144 2015-09-06 2015-12-28 Positioning system for use in surgical operation WO2017036023A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510559675.4A CN105213032B (en) 2015-09-06 2015-09-06 Location of operation system
CN201510559675.4 2015-09-06

Publications (1)

Publication Number Publication Date
WO2017036023A1 true WO2017036023A1 (en) 2017-03-09

Family

ID=54982566

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/099144 WO2017036023A1 (en) 2015-09-06 2015-12-28 Positioning system for use in surgical operation

Country Status (2)

Country Link
CN (1) CN105213032B (en)
WO (1) WO2017036023A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326856A (en) * 2016-08-18 2017-01-11 厚凯(天津)医疗科技有限公司 Surgery image processing method and surgery image processing device
CN112237477B (en) * 2019-07-17 2021-11-16 杭州三坛医疗科技有限公司 Fracture reduction closed operation positioning navigation device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002102249A (en) * 2000-09-29 2002-04-09 Olympus Optical Co Ltd Device and method for operation navigation
WO2005055008A2 (en) * 2003-11-26 2005-06-16 Viatronix Incorporated Automated segmentation, visualization and analysis of medical images
CN1874734A (en) * 2003-09-01 2006-12-06 西门子公司 Method and device for visually assisting the electrophysiological use of a catheter in the heart
CN102811655A (en) * 2010-03-17 2012-12-05 富士胶片株式会社 System, method, device, and program for supporting endoscopic observation
US8348831B2 (en) * 2009-12-15 2013-01-08 Zhejiang University Device and method for computer simulated marking targeting biopsy
CN103793915A (en) * 2014-02-18 2014-05-14 上海交通大学 Low-cost mark-free registration system and method in neurosurgery navigation
CN104757951A (en) * 2014-04-11 2015-07-08 京东方科技集团股份有限公司 Display system and data processing method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080147086A1 (en) * 2006-10-05 2008-06-19 Marcus Pfister Integrating 3D images into interventional procedures
US8235530B2 (en) * 2009-12-07 2012-08-07 C-Rad Positioning Ab Object positioning with visual feedback
US10561861B2 (en) * 2012-05-02 2020-02-18 Viewray Technologies, Inc. Videographic display of real-time medical treatment
CN203195768U (en) * 2013-03-15 2013-09-18 应瑛 Operation guidance system
CN103371870B (en) * 2013-07-16 2015-07-29 深圳先进技术研究院 A kind of surgical navigation systems based on multimode images


Also Published As

Publication number Publication date
CN105213032B (en) 2017-12-15
CN105213032A (en) 2016-01-06

Similar Documents

Publication Publication Date Title
JP7133474B2 (en) Image-based fusion of endoscopic and ultrasound images
US20220192611A1 (en) Medical device approaches
Okamoto et al. Clinical application of navigation surgery using augmented reality in the abdominal field
CN106236006B (en) 3D optical molecular image laparoscope imaging systems
CN107456278B (en) Endoscopic surgery navigation method and system
RU2556593C2 (en) Image integration based superposition and navigation for endoscopic surgery
Reynisson et al. Navigated bronchoscopy: a technical review
JP5486432B2 (en) Image processing apparatus, operating method thereof, and program
KR20130108320A (en) Visualization of registered subsurface anatomy reference to related applications
JP2013517909A (en) Image-based global registration applied to bronchoscopy guidance
KR20130015146A (en) Method and apparatus for processing medical image, robotic surgery system using image guidance
Onda et al. Short rigid scope and stereo-scope designed specifically for open abdominal navigation surgery: clinical application for hepatobiliary and pancreatic surgery
Kriegmair et al. Digital mapping of the urinary bladder: potential for standardized cystoscopy reports
Bertrand et al. A case series study of augmented reality in laparoscopic liver resection with a deformable preoperative model
Amir-Khalili et al. Automatic segmentation of occluded vasculature via pulsatile motion analysis in endoscopic robot-assisted partial nephrectomy video
WO2019047820A1 (en) Image display method, device and system for minimally invasive endoscopic surgical navigation
Ma et al. Surgical navigation system for laparoscopic lateral pelvic lymph node dissection in rectal cancer surgery using laparoscopic-vision-tracked ultrasonic imaging
Nagelhus Hernes et al. Computer‐assisted 3D ultrasound‐guided neurosurgery: technological contributions, including multimodal registration and advanced display, demonstrating future perspectives
Konishi et al. Augmented reality navigation system for endoscopic surgery based on three-dimensional ultrasound and computed tomography: Application to 20 clinical cases
WO2017036023A1 (en) Positioning system for use in surgical operation
Galloway et al. Image‐Guided Abdominal Surgery and Therapy Delivery
Bartholomew et al. Surgical navigation in the anterior skull base using 3-dimensional endoscopy and surface reconstruction
Ong et al. A novel method for texture-mapping conoscopic surfaces for minimally invasive image-guided kidney surgery
CN115375595A (en) Image fusion method, device, system, computer equipment and storage medium
Thapa et al. A novel augmented reality for hidden organs visualisation in surgery: enhanced super-pixel with sub sampling and variance adaptive algorithm

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15902828

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15902828

Country of ref document: EP

Kind code of ref document: A1