WO2021114226A1 - Surgical navigation system based on intrahepatic vessel registration - Google Patents

Surgical navigation system based on intrahepatic vessel registration

Info

Publication number
WO2021114226A1
WO2021114226A1 PCT/CN2019/125138 CN2019125138W WO2021114226A1 WO 2021114226 A1 WO2021114226 A1 WO 2021114226A1 CN 2019125138 W CN2019125138 W CN 2019125138W WO 2021114226 A1 WO2021114226 A1 WO 2021114226A1
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate system
image
dimensional
cloud data
intraoperative
Prior art date
Application number
PCT/CN2019/125138
Other languages
English (en)
French (fr)
Inventor
温铁祥
王澄
周寿军
李迟迟
王磊
张毅
陆建
Original Assignee
珠海横乐医学科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 珠海横乐医学科技有限公司
Publication of WO2021114226A1

Links

Images

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36Image-producing devices or illumination devices not otherwise provided for
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2055Optical tracking systems
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2063Acoustic tracking systems, e.g. using ultrasound
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2065Tracking using image or pattern recognition
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2068Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis using pointers, e.g. pointers having reference marks for determining coordinates of body points
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/364Correlation of different images or relation of image positions in respect to the body
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Definitions

  • IGS Image guided surgery
  • Surgical navigation refers to tracking and displaying the positional relationship of surgical instruments relative to the diseased tissue, on the basis of medical image data such as CT and MR, by means of virtual reality technology together with an optical/magnetic locator, so as to guide the surgical procedure in real time. It is of great significance for improving positioning accuracy, reducing intraoperative trauma, and lowering the surgical error rate.
  • To register the image space with the patient coordinate space, existing surgical navigation and positioning methods are usually the body-surface marker method or the anatomical-feature method.
  • The body-surface marker method requires markers to be attached to the patient's body surface during the preoperative CT/MR scan. During the operation the markers may shift because of movement of the body, which degrades
  • the accuracy of the surgical navigation system and is unfavourable for minimally invasive interventional surgery. Existing surgical navigation systems also require the external markers to be easy to identify in the preoperative CT/MR scan, otherwise the accuracy of navigation and positioning is likewise affected. In addition, during intraoperative registration the algorithm must identify the markers automatically, and the accuracy of the recognition algorithm also affects the accuracy of registration.
  • The anatomical-feature method requires corresponding feature points to be selected manually through an interactive procedure.
  • The main disadvantage of this method is that the same structure looks different in preoperative CT/MR and intraoperative ultrasound images, so a particularly experienced radiologist is needed to select the feature points;
  • the operation is therefore difficult and the selection of feature points is time-consuming.
  • The present invention provides a surgical navigation system based on intrahepatic vessel registration, which does not require placing markers on the body surface or manually selecting feature points, is convenient to operate, and improves the accuracy of registration.
  • the specific technical solution proposed by the present invention is to provide a surgical navigation system based on intrahepatic blood vessel registration, and the surgical navigation system includes:
  • an ultrasound probe, used to acquire intraoperative ultrasound images;
  • a positioning device, used to acquire the position of the surgical instrument in a first coordinate system;
  • a point cloud data extraction module, used to obtain first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system from the preoperative three-dimensional image, and to obtain second three-dimensional point cloud data of the intrahepatic vessels in a second coordinate system from the intraoperative ultrasound image;
  • a registration module, configured to register the first three-dimensional point cloud data with the second three-dimensional point cloud data to obtain the transformation relationship between the first coordinate system and the second coordinate system;
  • a first coordinate conversion module, configured to convert the spatial position of the intraoperative ultrasound image into the first coordinate system according to the transformation relationship, to obtain the intraoperative ultrasound image in the first coordinate system;
  • a second coordinate conversion module, configured to convert the spatial position of the surgical instrument into the second coordinate system according to the transformation relationship, to obtain the position of the surgical instrument in the intraoperative ultrasound image;
  • a fusion display module, used to fuse and display the preoperative three-dimensional image, the intraoperative ultrasound image in the first coordinate system, and the position of the surgical instrument in the intraoperative ultrasound image for surgical navigation.
  • The point cloud data extraction module is specifically used to obtain the first three-dimensional point cloud data from the preoperative three-dimensional image and the second three-dimensional point cloud data from the intraoperative ultrasound image.
  • The point cloud data extraction module is further specifically used to sample the intraoperative ultrasound image, construct a kernel function and a regression function from the sampling points, and optimize the regression function to obtain an intraoperative three-dimensional image.
  • The kernel function is as given by the formulas in the Description.
  • G_x(X_i), G_y(X_i) and G_z(X_i) are the gradients of the sampling point X_i in the x, y and z directions, respectively.
  • The first three-dimensional point cloud data and the second three-dimensional point cloud data are registered by the formulas given in the Description to obtain the transformation relationship between the first coordinate system and the second coordinate system.
  • The first three-dimensional point cloud data is {x_i, i ∈ m}.
  • T_k is the transformation matrix of the k-th iteration.
  • a marking point is configured on the ultrasound probe, and the positioning device is also used to obtain the position of the marking point in the first coordinate system.
  • the first coordinate conversion module is specifically configured to:
  • the spatial position of the intraoperative ultrasound image is converted into the first coordinate system according to the conversion relationship and the transformation relationship, and the intraoperative ultrasound image in the first coordinate system is obtained.
  • the positioning device is an optical positioner or a magnetic positioner.
  • The surgical navigation system based on intrahepatic vessel registration takes the body's own intrahepatic vessels as intrinsic anatomical landmarks: the point cloud data extraction module extracts the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system
  • and the second three-dimensional point cloud data in the second coordinate system, and the registration module then registers the first three-dimensional point cloud data with the second three-dimensional point cloud data, so that there is no need to place markers on the body surface or to select feature points manually; this makes operation convenient and improves the accuracy of registration.
  • Figure 1 is a schematic diagram of the structure of the surgical navigation system;
  • Figure 2 is a schematic diagram of the structure of the image processing system 3;
  • Figure 3 is a flowchart of the surgical navigation method;
  • Figure 4 is a flowchart of obtaining the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system;
  • Figure 5 is a flowchart of obtaining the second three-dimensional point cloud data of the intrahepatic vessels in the second coordinate system;
  • Figure 6 is a flowchart of three-dimensional volume reconstruction from intraoperative ultrasound images;
  • Figure 7 is a flowchart of converting the spatial position of the intraoperative ultrasound image into the first coordinate system according to the transformation relationship.
  • the surgical navigation system based on intrahepatic vessel registration proposed in this application includes an ultrasound probe, a positioning device, a point cloud data extraction module, a registration module, a first coordinate conversion module, a second coordinate conversion module, and a fusion display module.
  • the ultrasound probe is used to obtain intraoperative ultrasound images
  • the positioning device is used to obtain the position of the surgical instrument in the first coordinate system
  • the point cloud data extraction module is used to obtain the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system from the preoperative three-dimensional image and the second three-dimensional point cloud data in the second coordinate system from the intraoperative ultrasound image;
  • the registration module is used to register the first three-dimensional point cloud data with the second three-dimensional point cloud data to obtain the transformation relationship between the first coordinate system and the second coordinate system;
  • the first coordinate conversion module is used to convert the spatial position of the intraoperative ultrasound image into the first coordinate system according to the transformation relationship, to obtain the intraoperative ultrasound image in the first coordinate system;
  • the second coordinate conversion module is used to convert the spatial position of the surgical instrument into the second coordinate system according to the transformation relationship, to obtain the position of the surgical instrument in the intraoperative ultrasound image;
  • the fusion display module is used to fuse and display the preoperative three-dimensional image, the intraoperative ultrasound image in the first coordinate system, and the position of the surgical instrument in the intraoperative ultrasound image for surgical navigation.
  • The surgical navigation system based on intrahepatic vessel registration uses the body's own intrahepatic vessels as intrinsic anatomical landmarks: the point cloud data extraction module extracts the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system
  • and the second three-dimensional point cloud data in the second coordinate system, and the registration module then registers the two point clouds. No markers need to be placed on the body surface and no feature points need to be selected manually, which avoids the influence of intraoperative marker shift, difficult marker recognition and recognition-algorithm accuracy on the registration accuracy, and improves the accuracy, convenience and safety of surgical navigation and positioning.
  • the surgical navigation system based on intrahepatic blood vessel registration includes an ultrasound probe 1, a positioning device 2, and an image processing system 3.
  • the ultrasound probe 1 is used to obtain real-time intraoperative ultrasound images of the patient 8 lying on the surgical bed 7.
  • The ultrasound probe 1 is selected as the real-time imaging modality in this embodiment because ultrasound is reasonably priced, free of X-ray radiation and convenient to use. Of course, in actual practice other imaging modalities may be chosen according to clinical needs, for example real-time magnetic resonance imaging (MR) or real-time X-ray fluoroscopy.
  • MR: magnetic resonance imaging
  • the intraoperative ultrasound image acquired by the ultrasound probe 1 is located in the ultrasound image coordinate system.
  • the ultrasound probe 1 may be a two-dimensional ultrasound probe or a three-dimensional ultrasound probe.
  • the real-time ultrasound image of the liver can be acquired through the movement of the ultrasound probe 1.
  • the positioning device 2 is used to obtain the position of the surgical instrument 4 in a first coordinate system, and the first coordinate system is a spatial coordinate system with the positioning device 2 as a reference.
  • a marking point 5 is configured on the surface of the ultrasonic probe 1, and the positioning device 2 can track and locate the position of the ultrasonic probe 1 in the spatial coordinate system in real time by tracking the position of the marking point 5 in real time.
  • the real-time intraoperative images are transmitted to the image processing system 3 through the data line.
  • the positioning device 2 in this embodiment is an optical positioner or a magnetic positioner.
  • The marking point 5 is an optical marking point or a magnetic marking point. That is, if the positioning device 2 is an optical positioner, the marking point 5 is an optical marking point; for example, the marking point 5 is a light-emitting device, and the positioning device 2 obtains the position of the ultrasound probe 1 in the spatial coordinate system by receiving the optical signal emitted by the marking point 5. If the positioning device 2 is a magnetic positioner, the marking point 5 is a magnetic marking point, and the positioning device 2 obtains the position of the ultrasound probe 1 in the spatial coordinate system by receiving the electromagnetic signal emitted by the marking point 5.
  • In this embodiment the surgical instrument 4 and the ultrasound probe 1 are fixed together by the fixing frame 6, so that the surgical instrument 4 and the ultrasound probe 1 lie in the same plane;
  • the spatial orientation of the surgical instrument 4 is therefore consistent with the spatial orientation of the ultrasound probe 1.
  • the surgical instrument 4 can move relative to a certain coordinate axis of its own coordinate system.
  • The surgical instrument 4 in this embodiment is a puncture needle;
  • the puncture needle can advance along the Z axis of its own coordinate system to perform the puncture.
  • The surgical instrument 4 in this embodiment can be chosen according to the specific surgical operation; it is shown here only as an example and is not intended as a limitation.
  • The image processing system 3 in this embodiment is used to fuse and display the ultrasound image and the preoperative tomographic image in real time according to the coordinate mapping relationship determined before the operation, and to give the position of the surgical instrument 4 in the patient model coordinate system in real time, thereby achieving three-dimensional real-time rendering.
  • the image processing system 3 includes a point cloud data extraction module 31, a registration module 32, a first coordinate conversion module 33, a second coordinate conversion module 34, and a fusion display module 35.
  • the point cloud data extraction module 31 is used to obtain the first three-dimensional point cloud data of the intrahepatic blood vessel in the first coordinate system according to the preoperative three-dimensional image and obtain the second three-dimensional point cloud data of the intrahepatic blood vessel in the second coordinate system according to the intraoperative ultrasound image.
  • It should be noted here that the second coordinate system is the coordinate system that takes the ultrasound probe 1 as its reference; the registration module 32 is used to register the first three-dimensional point cloud data with the second three-dimensional point cloud data to obtain the transformation relationship between the first coordinate system and the second coordinate system.
  • The first coordinate conversion module 33 is used to convert the spatial position of the intraoperative ultrasound image into the first coordinate system according to the transformation relationship, to obtain the intraoperative ultrasound image in the first coordinate system;
  • the second coordinate conversion module 34 is used to convert the spatial position of the surgical instrument 4 into the second coordinate system according to the transformation relationship, to obtain the position of the surgical instrument in the intraoperative ultrasound image;
  • the fusion display module 35 is used to fuse and display the preoperative three-dimensional image, the intraoperative ultrasound image in the first coordinate system, and the position of the surgical instrument in the intraoperative ultrasound image for surgical navigation.
  • When obtaining the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system from the preoperative three-dimensional image, the point cloud data extraction module 31 in this embodiment is specifically used to:
  • receive the preoperative tomographic image acquired by a tomographic device, which may be an MR scanner or a CT scanner (i.e. the preoperative tomographic image is an MR or CT tomographic image), and perform three-dimensional volume reconstruction on the preoperative tomographic image to obtain the preoperative three-dimensional image. Since the preoperative tomographic image consists of regular slice data, it can be reconstructed with a conventional three-dimensional volume reconstruction method, which is not repeated here.
  • the point cloud data extraction module 31 is specifically used to obtain the second three-dimensional point cloud data of the intrahepatic blood vessel in the second coordinate system according to the intraoperative ultrasound image:
  • the point cloud data extraction module 31 performs the three-dimensional volume reconstruction of the intraoperative ultrasound image.
  • Performing the three-dimensional volume reconstruction specifically includes: the point cloud data extraction module 31 samples the intraoperative ultrasound image to obtain multiple sampling points, then constructs a kernel function and a regression function from the multiple sampling points, and finally optimizes the regression function to obtain the intraoperative three-dimensional image.
  • the kernel function K constructed by the point cloud data extraction module 31 in this embodiment is:
  • G_x(X_i), G_y(X_i) and G_z(X_i) are the gradients of the sampling point X_i in the x, y and z directions, respectively.
  • The regression function constructed by the point cloud data extraction module 31 from the kernel function is given in the Description.
  • n denotes the order of the Taylor expansion.
  • By constructing the smoothing matrix in the kernel function as described, the speckle noise in the sampled data can be filtered out effectively while feature information such as image edges is preserved.
  • The point cloud data extraction module 31 performs three-dimensional surface reconstruction and extracts the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system and the second three-dimensional point cloud data in the second coordinate system
  • using existing three-dimensional surface reconstruction and image segmentation methods, for example performing the three-dimensional surface reconstruction by isosurface extraction.
  • After obtaining the first and second three-dimensional point cloud data, the point cloud data extraction module 31 transmits them to the registration module 32.
  • The registration module 32 then registers the first three-dimensional point cloud data with the second three-dimensional point cloud data to obtain the transformation relationship between the first coordinate system and the second coordinate system.
  • the registration module 32 uses the following formula to register the first three-dimensional point cloud data and the second three-dimensional point cloud data to obtain the transformation relationship between the first coordinate system and the second coordinate system:
  • {x_i, i ∈ m} is the first three-dimensional point cloud data,
  • m denotes the number of data points in the first three-dimensional point cloud data,
  • n denotes the number of data points in the second three-dimensional point cloud data,
  • m ≠ n, and k denotes the iteration index,
  • T_k is the transformation matrix of the k-th iteration.
  • The registration process in this embodiment can automatically find the correspondence between the first and second three-dimensional point cloud data, does not require the two point clouds to contain the same number of data points,
  • and is highly robust.
  • the registration module 32 obtains the transformation relationship between the first three-dimensional point cloud data and the second three-dimensional point cloud data, and then transmits the transformation relationship to the first coordinate transformation module 33.
  • The first coordinate conversion module 33 then converts the spatial position of the intraoperative ultrasound image into the first coordinate system according to the transformation relationship
  • and obtains the intraoperative ultrasound image in the first coordinate system.
  • To do so, the first coordinate conversion module 33 obtains the conversion relationship C_21 from the intraoperative ultrasound image coordinate system C_2 to the ultrasound probe coordinate system C_1 according to the calibrated imaging parameters of the ultrasound probe;
  • the calibrated imaging parameters are obtained by calibrating the ultrasound probe 1 with an existing calibration method, which is not repeated here.
  • The point cloud data extraction module 31, the first coordinate conversion module 33 and the second coordinate conversion module 34 respectively transmit the preoperative three-dimensional image, the intraoperative ultrasound image in the first coordinate system and the position of the surgical instrument in the intraoperative ultrasound image to the fusion display module 35.
  • The fusion display module 35 fuses and displays the preoperative three-dimensional image, the intraoperative ultrasound image in the first coordinate system and the position of the surgical instrument in the intraoperative ultrasound image, so as to provide the doctor with a three-dimensional real-time rendering for surgical navigation.
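As a minimal, hypothetical illustration of the fusion-display idea (not the patent's implementation), the sketch below overlays a tracked instrument-tip position on a single ultrasound slice with matplotlib. The image array, the tip coordinates already expressed in the ultrasound image frame, and the pixel spacing are all assumed inputs.

```python
import numpy as np
import matplotlib.pyplot as plt

def show_overlay(us_slice, tip_xy_mm, pixel_spacing_mm=(0.2, 0.2)):
    """Display an ultrasound slice with the instrument tip drawn on top.

    us_slice         : 2D numpy array (rows, cols) of gray values.
    tip_xy_mm        : (x, y) tip position in image-plane millimetres,
                       already converted into the ultrasound image frame.
    pixel_spacing_mm : physical size of one pixel (dx, dy) in mm (assumed).
    """
    col = tip_xy_mm[0] / pixel_spacing_mm[0]   # mm -> pixel column
    row = tip_xy_mm[1] / pixel_spacing_mm[1]   # mm -> pixel row
    plt.imshow(us_slice, cmap="gray")
    plt.scatter([col], [row], marker="+", s=120, c="red", label="instrument tip")
    plt.legend(loc="lower right")
    plt.title("Intraoperative ultrasound with projected instrument position")
    plt.show()

if __name__ == "__main__":
    fake_slice = np.random.rand(400, 500)      # synthetic stand-in image
    show_overlay(fake_slice, tip_xy_mm=(40.0, 30.0))
```

A real system would render the full preoperative volume and update the overlay at the tracker's frame rate; this 2D snippet only shows the coordinate bookkeeping.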
  • the surgical navigation method in this embodiment mainly includes the following steps:
  • S5 Convert the spatial position of the intraoperative ultrasound image to the first coordinate system according to the transformation relationship, and obtain the intraoperative ultrasound image in the first coordinate system;
  • step S1 acquiring the first three-dimensional point cloud data of the intrahepatic blood vessel in the first coordinate system according to the preoperative three-dimensional image specifically includes:
  • S11 Receive a preoperative tomographic image obtained by a tomographic device, where the tomographic device may be an MR scanner or a CT scanner, that is, the preoperative tomographic image is an MR tomographic image or a CT tomographic image;
  • S14 Perform a three-dimensional surface reconstruction on the first image to obtain a three-dimensional image of the surface of the blood vessel in the liver before the operation;
  • S15 Extract the first three-dimensional point cloud data of the intrahepatic blood vessel in the first coordinate system from the three-dimensional image of the intrahepatic blood vessel surface before the operation.
  • step S3 acquiring the second three-dimensional point cloud data of the intrahepatic blood vessel in the second coordinate system according to the intraoperative ultrasound image specifically includes:
  • S35 Extract the second three-dimensional point cloud data of the intrahepatic blood vessel in the second coordinate system from the three-dimensional image of the intrahepatic blood vessel surface during the operation.
  • step S32 specifically includes:
  • S322 Construct a kernel function and a regression function according to multiple sampling points
  • In step S322, the constructed kernel function K is as given in the Description.
  • G_x(X_i), G_y(X_i) and G_z(X_i) are the gradients of the sampling point X_i in the x, y and z directions, respectively.
  • In step S322, the regression function constructed from the kernel function is as given in the Description.
  • y_i is the observed data at the i-th sampling point X_i,
  • β_i is the Taylor expansion coefficient of y_i at the point X_i,
  • P is the number of sampling points X_i.
  • In step S323, the regression function is optimized to obtain the intraoperative three-dimensional image, i.e. the following optimization problem is solved:
  • n denotes the order of the Taylor expansion.
  • By constructing the smoothing matrix in the kernel function as described, the speckle noise in the sampled data can be filtered out effectively while feature information such as image edges is preserved.
  • In steps S13-S15 and S33-S35, the extraction of the first and second images of the intrahepatic vessels, the three-dimensional surface reconstruction, and the extraction of the first three-dimensional point cloud data in the first coordinate system
  • and of the second three-dimensional point cloud data in the second coordinate system all use existing image segmentation, three-dimensional surface reconstruction and point cloud extraction methods;
  • for example, the three-dimensional surface reconstruction is performed by isosurface extraction, which is not repeated here.
  • In step S4, the first three-dimensional point cloud data and the second three-dimensional point cloud data are registered to obtain the transformation relationship between the first coordinate system and the second coordinate system; specifically, the two point clouds are registered with the following formulas:
  • {x_i, i ∈ m} is the first three-dimensional point cloud data,
  • m denotes the number of data points in the first three-dimensional point cloud data,
  • n denotes the number of data points in the second three-dimensional point cloud data,
  • m ≠ n, and k denotes the iteration index,
  • T_k is the transformation matrix of the k-th iteration.
  • The registration process in this embodiment can automatically find the correspondence between the first and second three-dimensional point cloud data, does not require the two point clouds to contain the same number of data points,
  • and is highly robust.
  • In step S5, converting the spatial position of the intraoperative ultrasound image into the first coordinate system according to the transformation relationship and obtaining the intraoperative ultrasound image in the first coordinate system specifically includes:
  • obtaining the conversion relationship C_21 from the intraoperative ultrasound image coordinate system C_2 to the ultrasound probe coordinate system C_1 according to the calibrated imaging parameters of the ultrasound probe;
  • the calibrated imaging parameters are obtained by calibrating the ultrasound probe 1 with an existing calibration method and are not repeated here.
  • In step S6, converting the spatial position of the surgical instrument 4 into the second coordinate system according to the transformation relationship can be understood as the inverse problem of the registration process:
  • the transformation relationship between the first coordinate system and the second coordinate system is known, and when the position of the surgical instrument 4 in the first coordinate system changes, its new position in the second coordinate system is computed from that transformation relationship.
  • In step S7, the images in the second and first coordinate systems and the position of the surgical instrument 4 are displayed in real time in one and the same three-dimensional scene, namely the first coordinate system, completing the tracking process of surgical navigation; the preoperative
  • high-resolution tomographic images then guide the surgeon to perform a safe and precise operation.

Abstract

The present invention provides a surgical navigation system based on intrahepatic vessel registration. The surgical navigation system comprises: an ultrasound probe, a positioning device, a point cloud data extraction module, a registration module, a first coordinate conversion module, a second coordinate conversion module and a fusion display module. The surgical navigation system based on intrahepatic vessel registration provided by the present invention takes the body's own intrahepatic vessels as intrinsic anatomical landmarks: the point cloud data extraction module extracts first three-dimensional point cloud data of the intrahepatic vessels in a first coordinate system and second three-dimensional point cloud data in a second coordinate system, and the registration module then registers the first three-dimensional point cloud data with the second three-dimensional point cloud data, so that there is no need to place markers on the body surface or to select feature points manually; this makes operation convenient and improves the accuracy of registration.

Description

Surgical navigation system based on intrahepatic vessel registration
Technical Field
The present invention relates to the technical field of surgical navigation, and in particular to a surgical navigation system based on intrahepatic vessel registration.
Background
In traditional surgery, the surgeon places the patient's preoperative images, as fixed films, on a light box away from the operating field; the spatial relationship between the surgical tools and the patient's anatomy has to be imagined subjectively by the surgeon, without objective image guidance. With the development of science and technology, computer-assisted surgery has emerged, of which surgical navigation is a very useful and important assistive technique. Image-guided surgery (IGS) was first applied in neurosurgery at the end of the 1980s and was then gradually extended to spinal surgery, plastic surgery, knee surgery and even abdominal surgery. The use of IGS extends the surgeon's limited field of view and has updated the concepts of surgical operations and surgical instruments; by introducing image guidance into surgery, it can effectively improve surgical accuracy, shorten operation time, and reduce surgical wounds and the occurrence of complications.
Surgical navigation means tracking and displaying the positional relationship of surgical instruments relative to the diseased tissue, on the basis of medical image data such as CT and MR, by means of virtual reality technology and an optical/magnetic locator, so as to guide the surgical procedure in real time. The technique is of great significance for improving positioning accuracy, reducing intraoperative trauma and lowering the surgical error rate.
To register the image space with the patient coordinate space, existing surgical navigation and positioning methods are usually the body-surface marker method and the anatomical-feature method. The body-surface marker method requires markers to be attached to the patient's body surface during the preoperative CT/MR scan; during the operation the markers may shift because of movement of the body, which degrades the accuracy of the surgical navigation system and is unfavourable for minimally invasive interventional surgery. Existing surgical navigation systems also require the external markers to be easy to identify in the preoperative CT/MR scan, otherwise the accuracy of navigation and positioning is likewise affected. In addition, during intraoperative registration the algorithm must identify the markers automatically, and the accuracy of the recognition algorithm also affects the accuracy of registration. The anatomical-feature method requires corresponding feature points to be selected manually through an interactive procedure; its main disadvantage is that the same structure looks different in preoperative CT/MR and intraoperative ultrasound images, so a particularly experienced radiologist is needed to select the feature points, the operation is difficult, and the selection of feature points is time-consuming.
Summary of the Invention
To overcome the shortcomings of the prior art, the present invention provides a surgical navigation system based on intrahepatic vessel registration, which does not require placing markers on the body surface or manually selecting feature points, is convenient to operate, and improves the accuracy of registration.
The specific technical solution proposed by the present invention is to provide a surgical navigation system based on intrahepatic vessel registration, the surgical navigation system comprising:
an ultrasound probe, used to acquire intraoperative ultrasound images;
a positioning device, used to acquire the position of a surgical instrument in a first coordinate system;
a point cloud data extraction module, used to obtain first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system from a preoperative three-dimensional image, and to obtain second three-dimensional point cloud data of the intrahepatic vessels in a second coordinate system from the intraoperative ultrasound image;
a registration module, used to register the first three-dimensional point cloud data with the second three-dimensional point cloud data to obtain a transformation relationship between the first coordinate system and the second coordinate system;
a first coordinate conversion module, used to convert the spatial position of the intraoperative ultrasound image into the first coordinate system according to the transformation relationship, to obtain the intraoperative ultrasound image in the first coordinate system;
a second coordinate conversion module, used to convert the spatial position of the surgical instrument into the second coordinate system according to the transformation relationship, to obtain the position of the surgical instrument in the intraoperative ultrasound image;
a fusion display module, used to fuse and display the preoperative three-dimensional image, the intraoperative ultrasound image in the first coordinate system, and the position of the surgical instrument in the intraoperative ultrasound image for surgical navigation.
Further, the point cloud data extraction module is specifically used to:
perform three-dimensional volume reconstruction from the preoperative tomographic image to obtain the preoperative three-dimensional image;
extract a first image of the intrahepatic vessels from the preoperative three-dimensional image;
perform three-dimensional surface reconstruction on the first image to obtain a preoperative three-dimensional image of the intrahepatic vessel surface;
extract the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system from the preoperative three-dimensional image of the intrahepatic vessel surface.
Further, the point cloud data extraction module is specifically used to:
perform three-dimensional volume reconstruction from the intraoperative ultrasound image to obtain an intraoperative three-dimensional image;
extract a second image of the intrahepatic vessels from the intraoperative three-dimensional image;
perform three-dimensional surface reconstruction on the second image to obtain an intraoperative three-dimensional image of the intrahepatic vessel surface;
extract the second three-dimensional point cloud data of the intrahepatic vessels in the second coordinate system from the intraoperative three-dimensional image of the intrahepatic vessel surface.
Further, the point cloud data extraction module is also specifically used to:
sample the intraoperative ultrasound image to obtain multiple sampling points;
construct a kernel function and a regression function from the multiple sampling points;
optimize the regression function to obtain the intraoperative three-dimensional image.
Further, the kernel function is:
[Formula image PCTCN2019125138-appb-000001]
where
[Formula image PCTCN2019125138-appb-000002]
is the smoothing matrix, X_i is the i-th sampling point, h is the global smoothing parameter, μ_i is the local sampling density parameter corresponding to the i-th sampling point, and C_i is the covariance matrix based on the local gray-level distribution corresponding to the i-th sampling point.
Further,
[Formula image PCTCN2019125138-appb-000003]
is obtained by the following formula:
[Formula image PCTCN2019125138-appb-000004]
where
[Formula image PCTCN2019125138-appb-000005]
[Formula image PCTCN2019125138-appb-000006]
G_x(X_i), G_y(X_i) and G_z(X_i) are the gradients of the sampling point X_i in the x, y and z directions, respectively.
Further, the first three-dimensional point cloud data and the second three-dimensional point cloud data are registered with the following formulas to obtain the transformation relationship between the first coordinate system and the second coordinate system:
[Formula image PCTCN2019125138-appb-000007]
[Formula image PCTCN2019125138-appb-000008]
where the first three-dimensional point cloud data is {x_i, i ∈ m}, the second three-dimensional point cloud data is N = {y_i, i ∈ n}, m ≠ n, and T_k is the transformation matrix of the k-th iteration.
Further, a marking point is arranged on the ultrasound probe, and the positioning device is also used to acquire the position of the marking point in the first coordinate system.
Further, the first coordinate conversion module is specifically used to:
establish a coordinate system that takes the ultrasound probe as its reference, to obtain an ultrasound probe coordinate system;
obtain a conversion relationship from the intraoperative ultrasound image coordinate system to the ultrasound probe coordinate system according to the calibrated imaging parameters of the ultrasound probe;
convert the spatial position of the intraoperative ultrasound image into the first coordinate system according to the conversion relationship and the transformation relationship, to obtain the intraoperative ultrasound image in the first coordinate system.
Further, the positioning device is an optical positioner or a magnetic positioner.
The surgical navigation system based on intrahepatic vessel registration provided by the present invention takes the body's own intrahepatic vessels as intrinsic anatomical landmarks: the point cloud data extraction module extracts the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system and the second three-dimensional point cloud data in the second coordinate system, and the registration module then registers the first three-dimensional point cloud data with the second three-dimensional point cloud data, so that there is no need to place markers on the body surface or to select feature points manually; this makes operation convenient and improves the accuracy of registration.
Brief Description of the Drawings
The technical solution of the present invention and its other beneficial effects will become apparent from the detailed description of specific embodiments of the present invention given below with reference to the accompanying drawings.
Figure 1 is a schematic diagram of the structure of the surgical navigation system;
Figure 2 is a schematic diagram of the structure of the image processing system 3;
Figure 3 is a flowchart of the surgical navigation method;
Figure 4 is a flowchart of obtaining the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system;
Figure 5 is a flowchart of obtaining the second three-dimensional point cloud data of the intrahepatic vessels in the second coordinate system;
Figure 6 is a flowchart of three-dimensional volume reconstruction from intraoperative ultrasound images;
Figure 7 is a flowchart of converting the spatial position of the intraoperative ultrasound image into the first coordinate system according to the transformation relationship.
Detailed Description of the Embodiments
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as limited to the specific embodiments set forth here; rather, these embodiments are provided to explain the principles of the present invention and its practical application, so that others skilled in the art can understand the various embodiments of the present invention and the various modifications suited to the particular intended application. In the drawings, the same reference numerals are always used to denote the same elements.
The surgical navigation system based on intrahepatic vessel registration proposed in this application comprises an ultrasound probe, a positioning device, a point cloud data extraction module, a registration module, a first coordinate conversion module, a second coordinate conversion module and a fusion display module.
The ultrasound probe is used to acquire intraoperative ultrasound images; the positioning device is used to acquire the position of the surgical instrument in the first coordinate system; the point cloud data extraction module is used to obtain the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system from the preoperative three-dimensional image and the second three-dimensional point cloud data of the intrahepatic vessels in the second coordinate system from the intraoperative ultrasound image; the registration module is used to register the first three-dimensional point cloud data with the second three-dimensional point cloud data to obtain the transformation relationship between the first coordinate system and the second coordinate system; the first coordinate conversion module is used to convert the spatial position of the intraoperative ultrasound image into the first coordinate system according to the transformation relationship, to obtain the intraoperative ultrasound image in the first coordinate system; the second coordinate conversion module is used to convert the spatial position of the surgical instrument into the second coordinate system according to the transformation relationship, to obtain the position of the surgical instrument in the intraoperative ultrasound image; the fusion display module is used to fuse and display the preoperative three-dimensional image, the intraoperative ultrasound image in the first coordinate system, and the position of the surgical instrument in the intraoperative ultrasound image for surgical navigation.
The surgical navigation system based on intrahepatic vessel registration provided in this application takes the body's own intrahepatic vessels as intrinsic anatomical landmarks: the point cloud data extraction module extracts the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system and the second three-dimensional point cloud data in the second coordinate system, and the registration module then registers the two point clouds, so that no markers need to be placed on the body surface and no feature points need to be selected manually, avoiding the influence of intraoperative marker shift, difficult marker recognition and recognition-algorithm accuracy on the registration accuracy, and improving the accuracy, convenience and safety of surgical navigation and positioning.
The surgical navigation system based on intrahepatic vessel registration of this application is described in detail below by means of specific embodiments with reference to the drawings.
Referring to Figures 1 and 2, the surgical navigation system based on intrahepatic vessel registration provided in this embodiment comprises an ultrasound probe 1, a positioning device 2 and an image processing system 3.
Specifically, the ultrasound probe 1 is used to acquire, in real time, intraoperative ultrasound images of a patient 8 lying on the operating table 7. The ultrasound probe 1 is chosen as the real-time imaging modality in this embodiment because ultrasound is reasonably priced, free of X-ray radiation and convenient to use; of course, in actual practice other imaging modalities may be chosen according to clinical needs, for example real-time magnetic resonance imaging (MR) or real-time X-ray fluoroscopy. The intraoperative ultrasound image acquired by the ultrasound probe 1 lies in the ultrasound image coordinate system; the ultrasound probe 1 may be a two-dimensional or a three-dimensional ultrasound probe, and real-time ultrasound images of the liver can be acquired by moving the ultrasound probe 1.
The positioning device 2 is used to acquire the position of the surgical instrument 4 in the first coordinate system, the first coordinate system being the spatial coordinate system that takes the positioning device 2 as its reference. In this embodiment a marking point 5 is arranged on the surface of the ultrasound probe 1; by tracking the position of the marking point 5 in real time, the positioning device 2 can track and locate the position of the ultrasound probe 1 in the spatial coordinate system in real time, and the real-time intraoperative images acquired by the ultrasound probe 1 are transmitted to the image processing system 3 through a data line.
Preferably, the positioning device 2 in this embodiment is an optical positioner or a magnetic positioner and, correspondingly, the marking point 5 is an optical marking point or a magnetic marking point. That is, if the positioning device 2 is an optical positioner, the marking point 5 is an optical marking point; for example, the marking point 5 is a light-emitting device, and the positioning device 2 obtains the position of the ultrasound probe 1 in the spatial coordinate system by receiving the optical signal emitted by the marking point 5. If the positioning device 2 is a magnetic positioner, the marking point 5 is a magnetic marking point, and the positioning device 2 obtains the position of the ultrasound probe 1 in the spatial coordinate system by receiving the electromagnetic signal emitted by the marking point 5.
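For illustration only, and not part of the patent text: one common way a tracking workstation turns several tracked marker positions into a rigid probe pose is a least-squares (Kabsch/SVD) fit between marker coordinates known in the probe's own frame and the same markers as reported by the locator. The sketch below shows such a fit under those assumptions; all names are hypothetical.

```python
import numpy as np

def rigid_pose_from_markers(markers_probe, markers_locator):
    """Least-squares rigid transform (R, t) with locator ≈ R @ probe + t.

    markers_probe   : (N, 3) marker coordinates in the probe's own frame.
    markers_locator : (N, 3) the same markers as measured by the locator.
    """
    cp = markers_probe.mean(axis=0)                          # centroids
    cl = markers_locator.mean(axis=0)
    H = (markers_probe - cp).T @ (markers_locator - cl)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                                 # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cl - R @ cp
    return R, t
```

The same least-squares building block reappears later inside the iterative point cloud registration sketch; here the point correspondences are known in advance because each tracked marker is identified individually.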
To facilitate the positioning of the surgical instrument 4, in this embodiment the surgical instrument 4 and the ultrasound probe 1 are fixed together by a fixing frame 6, so that the surgical instrument 4 and the ultrasound probe 1 lie in the same plane; the spatial orientation of the surgical instrument 4 is therefore consistent with that of the ultrasound probe 1. Of course, the surgical instrument 4 can still move along one coordinate axis of its own coordinate system; for example, assuming that the surgical instrument 4 in this embodiment is a puncture needle, the puncture needle can advance along the Z axis of its own coordinate system to perform the puncture. The surgical instrument 4 in this embodiment can be chosen according to the specific surgical operation; it is shown here only as an example and is not intended as a limitation.
The image processing system 3 in this embodiment is used to fuse and display the ultrasound image and the preoperative tomographic image in real time according to the coordinate mapping relationship determined before the operation, and to give the position of the surgical instrument 4 in the patient model coordinate system in real time, thereby achieving three-dimensional real-time rendering. Specifically, as shown in Figure 2, the image processing system 3 comprises a point cloud data extraction module 31, a registration module 32, a first coordinate conversion module 33, a second coordinate conversion module 34 and a fusion display module 35. The point cloud data extraction module 31 is used to obtain the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system from the preoperative three-dimensional image and the second three-dimensional point cloud data of the intrahepatic vessels in the second coordinate system from the intraoperative ultrasound image; it should be noted here that the second coordinate system is the coordinate system that takes the ultrasound probe 1 as its reference. The registration module 32 is used to register the first three-dimensional point cloud data with the second three-dimensional point cloud data to obtain the transformation relationship between the first coordinate system and the second coordinate system; the first coordinate conversion module 33 is used to convert the spatial position of the intraoperative ultrasound image into the first coordinate system according to the transformation relationship, to obtain the intraoperative ultrasound image in the first coordinate system; the second coordinate conversion module 34 is used to convert the spatial position of the surgical instrument 4 into the second coordinate system according to the transformation relationship, to obtain the position of the surgical instrument in the intraoperative ultrasound image; the fusion display module 35 is used to fuse and display the preoperative three-dimensional image, the intraoperative ultrasound image in the first coordinate system, and the position of the surgical instrument in the intraoperative ultrasound image for surgical navigation.
When obtaining the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system from the preoperative three-dimensional image, the point cloud data extraction module 31 in this embodiment is specifically used to:
receive the preoperative tomographic image acquired by a tomographic device, which may be an MR scanner or a CT scanner (i.e. the preoperative tomographic image is an MR or CT tomographic image), and perform three-dimensional volume reconstruction from the preoperative tomographic image to obtain the preoperative three-dimensional image; since the preoperative tomographic image consists of regular slice data, it can be reconstructed with a conventional three-dimensional volume reconstruction method, which is not repeated here. A first image of the intrahepatic vessels is then extracted from the preoperative three-dimensional image, three-dimensional surface reconstruction is performed on the first image to obtain a preoperative three-dimensional image of the intrahepatic vessel surface, and finally the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system is extracted from the preoperative three-dimensional image of the intrahepatic vessel surface.
During the operation, when obtaining the second three-dimensional point cloud data of the intrahepatic vessels in the second coordinate system from the intraoperative ultrasound image, the point cloud data extraction module 31 is specifically used to:
receive the intraoperative ultrasound image acquired by the ultrasound probe 1, perform three-dimensional volume reconstruction from the intraoperative ultrasound image to obtain an intraoperative three-dimensional image, then extract a second image of the intrahepatic vessels from the intraoperative three-dimensional image, perform three-dimensional surface reconstruction on the second image to obtain an intraoperative three-dimensional image of the intrahepatic vessel surface, and finally extract the second three-dimensional point cloud data of the intrahepatic vessels in the second coordinate system from the intraoperative three-dimensional image of the intrahepatic vessel surface.
Because the intraoperative ultrasound image consists of irregular slice images, the conventional volume reconstruction method for regular slices cannot be used for it. In this embodiment, the three-dimensional volume reconstruction of the intraoperative ultrasound image by the point cloud data extraction module 31 specifically includes: the point cloud data extraction module 31 samples the intraoperative ultrasound image to obtain multiple sampling points, then constructs a kernel function and a regression function from the multiple sampling points, and finally optimizes the regression function to obtain the intraoperative three-dimensional image.
Preferably, the kernel function K constructed by the point cloud data extraction module 31 in this embodiment is:
[Formula image PCTCN2019125138-appb-000009]
where
[Formula image PCTCN2019125138-appb-000010]
is the smoothing matrix, X_i is the i-th sampling point, h is the global smoothing parameter, μ_i is the local sampling density parameter corresponding to the i-th sampling point, and C_i is the covariance matrix based on the local gray-level distribution corresponding to the i-th sampling point.
Specifically,
[Formula image PCTCN2019125138-appb-000011]
is obtained by the following formula:
[Formula image PCTCN2019125138-appb-000012]
where
[Formula image PCTCN2019125138-appb-000013]
[Formula image PCTCN2019125138-appb-000014]
G_x(X_i), G_y(X_i) and G_z(X_i) are the gradients of the sampling point X_i in the x, y and z directions, respectively.
The regression function constructed by the point cloud data extraction module 31 from the kernel function
[Formula image PCTCN2019125138-appb-000015]
is:
[Formula image PCTCN2019125138-appb-000016]
where y_i is the observed data at the i-th sampling point X_i, β_i is the Taylor expansion coefficient of y_i at the point X_i, and P is the number of sampling points X_i.
Optimizing the regression function to obtain the intraoperative three-dimensional image, as described in this embodiment, means solving the following optimization problem:
[Formula image PCTCN2019125138-appb-000017]
where n denotes the order of the Taylor expansion.
The traditional kernel regression method constructs the smoothing matrix as H_i = hI (where I is the identity matrix and h is the global smoothing factor), which causes blurring of edge features in the reconstructed image and low-pass filtering problems. In this embodiment the smoothing matrix in the kernel function is instead constructed as
[Formula image PCTCN2019125138-appb-000018]
, which effectively filters out the speckle noise in the sampled data while preserving feature information such as image edges.
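The exact kernel and regression formulas are only available as equation images in the filing, so they are not reproduced here. Purely as a simplified, hypothetical illustration of the general idea, scattering irregular ultrasound samples onto a regular voxel grid by kernel-weighted regression, the sketch below uses a zeroth-order (Nadaraya-Watson) estimate with an isotropic Gaussian kernel; the patent's method additionally steers the smoothing matrix with the local sampling density and gray-level covariance, which is omitted here.

```python
import numpy as np
from scipy.spatial import cKDTree

def kernel_regression_volume(sample_xyz, sample_val, grid_shape, voxel_size, h=2.0):
    """Zeroth-order kernel regression of scattered samples onto a regular grid.

    sample_xyz : (N, 3) sample positions in mm (tracked ultrasound pixels).
    sample_val : (N,)   gray values at those positions.
    grid_shape : (nx, ny, nz) output volume size in voxels.
    voxel_size : edge length of a voxel in mm.
    h          : global Gaussian bandwidth in mm (isotropic simplification).
    """
    tree = cKDTree(sample_xyz)
    xs, ys, zs = (np.arange(n) * voxel_size for n in grid_shape)
    grid = np.stack(np.meshgrid(xs, ys, zs, indexing="ij"), axis=-1).reshape(-1, 3)

    volume = np.zeros(len(grid))
    # only samples within 3*h of each voxel centre contribute noticeably
    neighbours = tree.query_ball_point(grid, r=3.0 * h)
    for v, idx in enumerate(neighbours):
        if not idx:
            continue                                   # no data near this voxel
        d2 = np.sum((sample_xyz[idx] - grid[v]) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * h * h))                # Gaussian kernel weights
        volume[v] = np.dot(w, sample_val[idx]) / w.sum()
    return volume.reshape(grid_shape)
```

A production implementation would vectorise the loop and replace the isotropic kernel with the anisotropic, data-steered smoothing matrix described above, which is what preserves vessel edges while suppressing speckle.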
In this embodiment, the three-dimensional surface reconstruction performed by the point cloud data extraction module 31 and the extraction of the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system and of the second three-dimensional point cloud data in the second coordinate system all use existing three-dimensional surface reconstruction and image segmentation methods; for example, the three-dimensional surface reconstruction is performed by isosurface extraction.
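Purely as an illustration of the kind of off-the-shelf isosurface extraction referred to above (the patent does not name a specific tool), the hypothetical sketch below thresholds a reconstructed volume into a crude vessel mask and extracts the surface vertices as a point cloud with scikit-image's marching cubes; the threshold value and voxel spacing are assumed.

```python
import numpy as np
from skimage.measure import marching_cubes

def vessel_surface_point_cloud(volume, vessel_threshold=0.5, spacing=(1.0, 1.0, 1.0)):
    """Extract a surface point cloud of the (assumed) vessel structures.

    volume           : 3D numpy array, reconstructed pre- or intraoperative volume.
    vessel_threshold : gray-value threshold standing in for a real segmentation step.
    spacing          : physical voxel spacing in mm along each axis.
    Returns an (N, 3) array of surface vertex coordinates in mm.
    """
    mask = (volume > vessel_threshold).astype(np.float32)   # crude vessel mask
    # marching cubes on the binary mask at level 0.5 yields the vessel surface
    verts, faces, normals, values = marching_cubes(mask, level=0.5, spacing=spacing)
    return verts    # the mesh vertices serve directly as the 3D point cloud
```

In practice the segmentation step would be a dedicated vessel-segmentation algorithm rather than a single threshold; only the isosurface-to-point-cloud part is the point of this sketch.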
After obtaining the first and second three-dimensional point cloud data, the point cloud data extraction module 31 transmits them to the registration module 32, and the registration module 32 then registers the first three-dimensional point cloud data with the second three-dimensional point cloud data to obtain the transformation relationship between the first coordinate system and the second coordinate system.
Specifically, the registration module 32 registers the first three-dimensional point cloud data with the second three-dimensional point cloud data using the following formulas to obtain the transformation relationship between the first coordinate system and the second coordinate system:
[Formula image PCTCN2019125138-appb-000019]
[Formula image PCTCN2019125138-appb-000020]
where {x_i, i ∈ m} is the first three-dimensional point cloud data, m denotes the number of data points in the first three-dimensional point cloud data, N = {z_i, i ∈ n} is the second three-dimensional point cloud data, n denotes the number of data points in the second three-dimensional point cloud data, m ≠ n, k denotes the iteration index, and T_k is the transformation matrix of the k-th iteration.
Each data point x_i in the first three-dimensional point cloud data is transformed by the current transformation matrix T_k, the point in the second three-dimensional point cloud data closest to T_k(x_i) is then found, and this point is marked as the corresponding point at the k-th iteration
[Formula image PCTCN2019125138-appb-000021]
. The result after the k-th iteration is a set of corresponding point pairs
[Formula image PCTCN2019125138-appb-000022]
. The transformation matrix that best describes or explains this set of correspondences is the transformation relationship T_US-CT between the first coordinate system and the second coordinate system.
The registration process in this embodiment can automatically find the correspondence between the first and second three-dimensional point cloud data, does not require the two point clouds to contain the same number of data points, and is highly robust.
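The procedure described above is an iterative closest-point style loop: transform one cloud, find nearest neighbours in the other, and re-estimate the rigid transform that best explains the pairs. Since the patent's exact objective is only given as equation images, the sketch below is a generic, hypothetical ICP implementation (nearest neighbours via a k-d tree, rigid update via an SVD-based least-squares fit), not the patent's specific formula; depending on the direction convention chosen, the patent's T_US-CT corresponds either to the matrix returned here or to its inverse.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares R, t with dst ≈ R @ src + t (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp_register(cloud_src, cloud_dst, max_iter=50, tol=1e-6):
    """Rigidly register cloud_src (e.g. the preoperative vessel cloud) onto
    cloud_dst (e.g. the intraoperative vessel cloud).  The point counts m and n
    may differ; correspondences are re-estimated every iteration by
    nearest-neighbour search.  Returns a 4x4 homogeneous matrix."""
    tree = cKDTree(cloud_dst)
    R, t = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        moved = cloud_src @ R.T + t                # T_k applied to every x_i
        _, nn = tree.query(moved)                  # closest z_i for each point
        R, t = best_rigid_transform(cloud_src, cloud_dst[nn])
        err = np.mean(np.linalg.norm(cloud_src @ R.T + t - cloud_dst[nn], axis=1))
        if abs(prev_err - err) < tol:              # converged
            break
        prev_err = err
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

As the text notes, nothing in this scheme requires m and n to be equal, because the nearest-neighbour step builds a fresh correspondence set at every iteration.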
After obtaining the transformation relationship between the first and second three-dimensional point cloud data, the registration module 32 transmits it to the first coordinate conversion module 33, and the first coordinate conversion module 33 converts the spatial position of the intraoperative ultrasound image into the first coordinate system according to the transformation relationship, obtaining the intraoperative ultrasound image in the first coordinate system.
Specifically, as shown in Figure 1, when converting the spatial position of the intraoperative ultrasound image into the first coordinate system according to the transformation relationship and obtaining the intraoperative ultrasound image in the first coordinate system, the first coordinate conversion module 33 is specifically used to:
establish a coordinate system that takes the ultrasound probe 1 as its reference, obtaining the ultrasound probe coordinate system C_1, where C_1 is the second coordinate system;
obtain, from the calibrated imaging parameters of the ultrasound probe, the conversion relationship C_21 from the intraoperative ultrasound image coordinate system C_2 to the ultrasound probe coordinate system C_1; the calibrated imaging parameters are obtained by calibrating the ultrasound probe 1 with an existing calibration method and are not repeated here;
convert the spatial position of the intraoperative ultrasound image into the first coordinate system according to the conversion relationship C_21 and the transformation relationship T_US-CT, thereby obtaining the intraoperative ultrasound image located in the first coordinate system; for example, for any point z'_i in the intraoperative ultrasound image, after conversion into the first coordinate system the corresponding coordinate of that point is x'_i = T_US-CT(C_21(z'_i)). (A small sketch of composing these two transforms is given after this list.)
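To make the chain x'_i = T_US-CT(C_21(z'_i)) concrete, here is a minimal, hypothetical sketch that represents both the calibration result C_21 and the registration result T_US-CT as 4x4 homogeneous matrices and composes them to map ultrasound-image points into the first coordinate system; the actual matrices would come from the probe calibration and from the registration module.

```python
import numpy as np

def to_homogeneous(points_xyz):
    """(N, 3) -> (N, 4) with a trailing 1 for homogeneous transforms."""
    return np.hstack([points_xyz, np.ones((len(points_xyz), 1))])

def map_us_points_to_first_frame(points_us_image, C21, T_us_ct):
    """Map points from the ultrasound image frame C_2 into the first
    (locator) coordinate system: x' = T_US-CT(C_21(z')).

    points_us_image : (N, 3) points in the ultrasound image coordinate system.
    C21             : 4x4 calibration matrix, image frame C_2 -> probe frame C_1.
    T_us_ct         : 4x4 registration matrix, probe frame -> first coordinate system.
    """
    combined = T_us_ct @ C21                       # compose once, apply to all points
    mapped = (combined @ to_homogeneous(points_us_image).T).T
    return mapped[:, :3]

# Hypothetical usage: identity calibration, registration matrix from the ICP sketch.
# pts_first = map_us_points_to_first_frame(pts_image, np.eye(4), T_from_icp)
```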
The point cloud data extraction module 31, the first coordinate conversion module 33 and the second coordinate conversion module 34 respectively transmit the preoperative three-dimensional image, the intraoperative ultrasound image located in the first coordinate system, and the position of the surgical instrument in the intraoperative ultrasound image to the fusion display module 35, which fuses and displays them so as to provide the doctor with a three-dimensional real-time rendering for surgical navigation.
The surgical navigation method of the surgical navigation system based on intrahepatic vessel registration in this embodiment is described in detail below.
Referring to Figure 3, the surgical navigation method in this embodiment mainly comprises the following steps:
S1. Obtain the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system from the preoperative three-dimensional image;
S2. Acquire the position of the surgical instrument 4 in the first coordinate system and the intraoperative ultrasound image. The positioning device 2 tracks the position of the marking point 5 in real time so as to track and locate the position of the ultrasound probe 1 in the spatial coordinate system in real time; since the surgical instrument 4 is fixed to the ultrasound probe 1 by the fixing frame 6 and lies in the same plane as the ultrasound probe 1, the spatial orientation of the surgical instrument 4 is consistent with that of the ultrasound probe 1, i.e. once the positioning device 2 has acquired the position of the marking point 5 in the first coordinate system, the positions of the ultrasound probe 1 and of the surgical instrument 4 in the first coordinate system can be obtained;
S3. Obtain the second three-dimensional point cloud data of the intrahepatic vessels in the second coordinate system from the intraoperative ultrasound image;
S4. Register the first three-dimensional point cloud data with the second three-dimensional point cloud data to obtain the transformation relationship between the first coordinate system and the second coordinate system;
S5. Convert the spatial position of the intraoperative ultrasound image into the first coordinate system according to the transformation relationship, obtaining the intraoperative ultrasound image in the first coordinate system;
S6. Convert the spatial position of the surgical instrument 4 into the second coordinate system according to the transformation relationship, obtaining the position of the surgical instrument in the intraoperative ultrasound image;
S7. Fuse and display the preoperative three-dimensional image, the intraoperative ultrasound image in the first coordinate system, and the position of the surgical instrument in the intraoperative ultrasound image for surgical navigation. (A schematic end-to-end sketch of steps S1-S7 is given below for illustration.)
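Finally, as a non-authoritative overview, the sketch below strings the preceding hypothetical pieces together in the order of steps S1-S7; the module names, call signatures and the direction convention of the registration matrix are all assumptions made for this illustration, not the patent's API.

```python
import numpy as np

def navigation_pipeline(preop_volume, us_frames, instrument_pos_first,
                        preop_cloud_fn, intraop_cloud_fn, register_fn, display_fn):
    """Schematic glue code for steps S1-S7; every callable is an assumed
    stand-in for one of the patent's modules.

    preop_cloud_fn(preop_volume)  -> (M, 3) vessel cloud in the first frame    (S1)
    intraop_cloud_fn(us_frames)   -> (N, 3) vessel cloud in the second frame   (S3)
    register_fn(src, dst)         -> 4x4 matrix mapping src's frame into dst's
                                     frame, e.g. the ICP sketch above          (S4)
    display_fn(volume, us_pts, instr) renders the fused scene                  (S7)
    instrument_pos_first          : tracked instrument position, first frame   (S2)
    (The image-to-probe calibration C_21 of step S5 is omitted here; see the
    composition sketch above.)
    """
    cloud_first = preop_cloud_fn(preop_volume)                    # S1
    cloud_second = intraop_cloud_fn(us_frames)                    # S3
    T = register_fn(cloud_second, cloud_first)                    # S4: second -> first

    # S5: bring the intraoperative vessel cloud into the first coordinate system
    hom = np.hstack([cloud_second, np.ones((len(cloud_second), 1))])
    us_in_first = (T @ hom.T).T[:, :3]

    # S6: inverse problem, express the tracked instrument in the second frame
    instrument_in_second = (np.linalg.inv(T) @ np.append(instrument_pos_first, 1.0))[:3]

    display_fn(preop_volume, us_in_first, instrument_in_second)   # S7
    return T
```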
Referring to Figure 4, in step S1, obtaining the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system from the preoperative three-dimensional image specifically comprises:
S11. Receive the preoperative tomographic image acquired by a tomographic device, where the tomographic device may be an MR scanner or a CT scanner, i.e. the preoperative tomographic image is an MR or CT tomographic image;
S12. Perform three-dimensional volume reconstruction from the preoperative tomographic image to obtain the preoperative three-dimensional image; since the preoperative tomographic image consists of regular slice data, it can be reconstructed with a conventional three-dimensional volume reconstruction method;
S13. Extract a first image of the intrahepatic vessels from the preoperative three-dimensional image;
S14. Perform three-dimensional surface reconstruction on the first image to obtain a preoperative three-dimensional image of the intrahepatic vessel surface;
S15. Extract the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system from the preoperative three-dimensional image of the intrahepatic vessel surface.
Referring to Figure 5, in step S3, obtaining the second three-dimensional point cloud data of the intrahepatic vessels in the second coordinate system from the intraoperative ultrasound image specifically comprises:
S31. Receive the intraoperative ultrasound image acquired by the ultrasound probe 1;
S32. Perform three-dimensional volume reconstruction from the intraoperative ultrasound image to obtain an intraoperative three-dimensional image;
S33. Extract a second image of the intrahepatic vessels from the intraoperative three-dimensional image;
S34. Perform three-dimensional surface reconstruction on the second image to obtain an intraoperative three-dimensional image of the intrahepatic vessel surface;
S35. Extract the second three-dimensional point cloud data of the intrahepatic vessels in the second coordinate system from the intraoperative three-dimensional image of the intrahepatic vessel surface.
Referring to Figure 6, since the intraoperative ultrasound image consists of irregular slice images, the conventional volume reconstruction method for regular slices cannot be used; in this embodiment, performing three-dimensional volume reconstruction from the intraoperative ultrasound image to obtain the intraoperative three-dimensional image, i.e. step S32, specifically comprises:
S321. Sample the intraoperative ultrasound image to obtain multiple sampling points;
S322. Construct a kernel function and a regression function from the multiple sampling points;
S323. Optimize the regression function to obtain the intraoperative three-dimensional image.
Preferably, in step S322, the constructed kernel function K is:
[Formula image PCTCN2019125138-appb-000023]
where
[Formula image PCTCN2019125138-appb-000024]
is the smoothing matrix, X_i is the i-th sampling point, h is the global smoothing parameter, μ_i is the local sampling density parameter corresponding to the i-th sampling point, and C_i is the covariance matrix based on the local gray-level distribution corresponding to the i-th sampling point.
Specifically,
[Formula image PCTCN2019125138-appb-000025]
is obtained by the following formula:
[Formula image PCTCN2019125138-appb-000026]
where
[Formula image PCTCN2019125138-appb-000027]
[Formula image PCTCN2019125138-appb-000028]
G_x(X_i), G_y(X_i) and G_z(X_i) are the gradients of the sampling point X_i in the x, y and z directions, respectively.
In step S322, the regression function constructed from the kernel function
[Formula image PCTCN2019125138-appb-000029]
is:
[Formula image PCTCN2019125138-appb-000030]
where y_i is the observed data at the i-th sampling point X_i, β_i is the Taylor expansion coefficient of y_i at the point X_i, and P is the number of sampling points X_i.
In step S323, optimizing the regression function to obtain the intraoperative three-dimensional image means solving the following optimization problem:
[Formula image PCTCN2019125138-appb-000031]
where n denotes the order of the Taylor expansion.
The traditional kernel regression method constructs the smoothing matrix as H_i = hI (where I is the identity matrix and h is the global smoothing factor), which causes blurring of edge features in the reconstructed image and low-pass filtering problems. In this embodiment the smoothing matrix in the kernel function is instead constructed as
[Formula image PCTCN2019125138-appb-000032]
, which effectively filters out the speckle noise in the sampled data while preserving feature information such as image edges.
In this embodiment, in steps S13-S15 and S33-S35, the extraction of the first and second images of the intrahepatic vessels, the three-dimensional surface reconstruction, and the extraction of the first three-dimensional point cloud data in the first coordinate system and of the second three-dimensional point cloud data in the second coordinate system all use existing image segmentation, three-dimensional surface reconstruction and point cloud extraction methods, for example three-dimensional surface reconstruction by isosurface extraction, and are not repeated here.
In step S4, the first three-dimensional point cloud data and the second three-dimensional point cloud data are registered to obtain the transformation relationship between the first coordinate system and the second coordinate system; specifically, the two point clouds are registered with the following formulas:
[Formula image PCTCN2019125138-appb-000033]
[Formula image PCTCN2019125138-appb-000034]
where {x_i, i ∈ m} is the first three-dimensional point cloud data, m denotes the number of data points in the first three-dimensional point cloud data, N = {z_i, i ∈ n} is the second three-dimensional point cloud data, n denotes the number of data points in the second three-dimensional point cloud data, m ≠ n, k denotes the iteration index, and T_k is the transformation matrix of the k-th iteration.
Each data point x_i in the first three-dimensional point cloud data is transformed by the current transformation matrix T_k, the point in the second three-dimensional point cloud data closest to T_k(x_i) is then found, and this point is marked as the corresponding point at the k-th iteration
[Formula image PCTCN2019125138-appb-000035]
. The result after the k-th iteration is a set of corresponding point pairs
[Formula image PCTCN2019125138-appb-000036]
. The transformation matrix that best describes or explains this set of correspondences is the transformation relationship T_US-CT between the first coordinate system and the second coordinate system.
The registration process in this embodiment can automatically find the correspondence between the first and second three-dimensional point cloud data, does not require the two point clouds to contain the same number of data points, and is highly robust.
Referring to Figure 7, in step S5, converting the spatial position of the intraoperative ultrasound image into the first coordinate system according to the transformation relationship and obtaining the intraoperative ultrasound image in the first coordinate system specifically comprises:
S51. Establish a coordinate system that takes the ultrasound probe 1 as its reference, obtaining the ultrasound probe coordinate system C_1, where C_1 is the second coordinate system;
S52. Obtain, from the calibrated imaging parameters of the ultrasound probe, the conversion relationship C_21 from the intraoperative ultrasound image coordinate system C_2 to the ultrasound probe coordinate system C_1; the calibrated imaging parameters are obtained by calibrating the ultrasound probe 1 with an existing calibration method and are not repeated here;
S53. Convert the spatial position of the intraoperative ultrasound image into the first coordinate system according to the conversion relationship C_21 and the transformation relationship T_US-CT, thereby obtaining the intraoperative ultrasound image located in the first coordinate system; for example, for any point z'_i in the intraoperative ultrasound image, after conversion into the first coordinate system the corresponding coordinate of that point is x'_i = T_US-CT(C_21(z'_i)).
In step S6, converting the spatial position of the surgical instrument 4 into the second coordinate system according to the transformation relationship can be understood as the inverse problem of the registration process: the transformation relationship between the first coordinate system and the second coordinate system is known, and when the position of the surgical instrument 4 in the first coordinate system changes, its new position in the second coordinate system is calculated from that transformation relationship.
In step S7, the images in the second coordinate system and in the first coordinate system, together with the position of the surgical instrument 4, are displayed in real time in one and the same three-dimensional scene, namely the first coordinate system, thereby completing the tracking process of surgical navigation; the preoperative high-resolution tomographic images then guide the surgeon to perform a safe and precise operation.
What has been described above is merely a specific embodiment of the present application. It should be pointed out that a person of ordinary skill in the art can make several improvements and refinements without departing from the principle of the present application, and these improvements and refinements shall also be regarded as falling within the scope of protection of the present application.

Claims (20)

  1. A surgical navigation system based on intrahepatic vessel registration, wherein the surgical navigation system comprises:
    an ultrasound probe, used to acquire intraoperative ultrasound images;
    a positioning device, used to acquire the position of a surgical instrument in a first coordinate system;
    a point cloud data extraction module, used to obtain first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system from a preoperative three-dimensional image, and to obtain second three-dimensional point cloud data of the intrahepatic vessels in a second coordinate system from the intraoperative ultrasound image;
    a registration module, used to register the first three-dimensional point cloud data with the second three-dimensional point cloud data to obtain a transformation relationship between the first coordinate system and the second coordinate system;
    a first coordinate conversion module, used to convert the spatial position of the intraoperative ultrasound image into the first coordinate system according to the transformation relationship, to obtain the intraoperative ultrasound image in the first coordinate system;
    a second coordinate conversion module, used to convert the spatial position of the surgical instrument into the second coordinate system according to the transformation relationship, to obtain the position of the surgical instrument in the intraoperative ultrasound image;
    a fusion display module, used to fuse and display the preoperative three-dimensional image, the intraoperative ultrasound image in the first coordinate system, and the position of the surgical instrument in the intraoperative ultrasound image for surgical navigation.
  2. The surgical navigation system according to claim 1, wherein the point cloud data extraction module is specifically used to:
    perform three-dimensional volume reconstruction from a preoperative tomographic image to obtain the preoperative three-dimensional image;
    extract a first image of the intrahepatic vessels from the preoperative three-dimensional image;
    perform three-dimensional surface reconstruction on the first image to obtain a preoperative three-dimensional image of the intrahepatic vessel surface;
    extract the first three-dimensional point cloud data of the intrahepatic vessels in the first coordinate system from the preoperative three-dimensional image of the intrahepatic vessel surface.
  3. The surgical navigation system according to claim 1, wherein the point cloud data extraction module is specifically used to:
    perform three-dimensional volume reconstruction from the intraoperative ultrasound image to obtain an intraoperative three-dimensional image;
    extract a second image of the intrahepatic vessels from the intraoperative three-dimensional image;
    perform three-dimensional surface reconstruction on the second image to obtain an intraoperative three-dimensional image of the intrahepatic vessel surface;
    extract the second three-dimensional point cloud data of the intrahepatic vessels in the second coordinate system from the intraoperative three-dimensional image of the intrahepatic vessel surface.
  4. The surgical navigation system according to claim 3, wherein the point cloud data extraction module is further specifically used to:
    sample the intraoperative ultrasound image to obtain multiple sampling points;
    construct a kernel function and a regression function from the multiple sampling points;
    optimize the regression function to obtain the intraoperative three-dimensional image.
  5. The surgical navigation system according to claim 4, wherein the kernel function is:
    [Formula image PCTCN2019125138-appb-100001]
    where
    [Formula image PCTCN2019125138-appb-100002]
    is the smoothing matrix, X_i is the i-th sampling point, h is the global smoothing parameter, μ_i is the local sampling density parameter corresponding to the i-th sampling point, and C_i is the covariance matrix based on the local gray-level distribution corresponding to the i-th sampling point.
  6. The surgical navigation system according to claim 5, wherein
    [Formula image PCTCN2019125138-appb-100003]
    is obtained by the following formula:
    [Formula image PCTCN2019125138-appb-100004]
    where
    [Formula image PCTCN2019125138-appb-100005]
    [Formula image PCTCN2019125138-appb-100006]
    G_x(X_i), G_y(X_i) and G_z(X_i) are the gradients of the sampling point X_i in the x, y and z directions, respectively.
  7. The surgical navigation system according to claim 1, wherein the first three-dimensional point cloud data and the second three-dimensional point cloud data are registered with the following formulas to obtain the transformation relationship between the first coordinate system and the second coordinate system:
    [Formula image PCTCN2019125138-appb-100007]
    [Formula image PCTCN2019125138-appb-100008]
    where the first three-dimensional point cloud data is {x_i, i ∈ m}, the second three-dimensional point cloud data is N = {y_i, i ∈ n}, m ≠ n, and T_k is the transformation matrix of the k-th iteration.
  8. The surgical navigation system according to claim 1, wherein a marking point is arranged on the ultrasound probe, and the positioning device is also used to acquire the position of the marking point in the first coordinate system.
  9. The surgical navigation system according to claim 2, wherein a marking point is arranged on the ultrasound probe, and the positioning device is also used to acquire the position of the marking point in the first coordinate system.
  10. The surgical navigation system according to claim 3, wherein a marking point is arranged on the ultrasound probe, and the positioning device is also used to acquire the position of the marking point in the first coordinate system.
  11. The surgical navigation system according to claim 4, wherein a marking point is arranged on the ultrasound probe, and the positioning device is also used to acquire the position of the marking point in the first coordinate system.
  12. The surgical navigation system according to claim 5, wherein a marking point is arranged on the ultrasound probe, and the positioning device is also used to acquire the position of the marking point in the first coordinate system.
  13. The surgical navigation system according to claim 6, wherein a marking point is arranged on the ultrasound probe, and the positioning device is also used to acquire the position of the marking point in the first coordinate system.
  14. The surgical navigation system according to claim 8, wherein the first coordinate conversion module is specifically used to:
    establish a coordinate system that takes the ultrasound probe as its reference, to obtain an ultrasound probe coordinate system;
    obtain a conversion relationship from the intraoperative ultrasound image coordinate system to the ultrasound probe coordinate system according to calibrated imaging parameters of the ultrasound probe;
    convert the spatial position of the intraoperative ultrasound image into the first coordinate system according to the conversion relationship and the transformation relationship, to obtain the intraoperative ultrasound image in the first coordinate system.
  15. The surgical navigation system according to claim 9, wherein the first coordinate conversion module is specifically used to:
    establish a coordinate system that takes the ultrasound probe as its reference, to obtain an ultrasound probe coordinate system;
    obtain a conversion relationship from the intraoperative ultrasound image coordinate system to the ultrasound probe coordinate system according to calibrated imaging parameters of the ultrasound probe;
    convert the spatial position of the intraoperative ultrasound image into the first coordinate system according to the conversion relationship and the transformation relationship, to obtain the intraoperative ultrasound image in the first coordinate system.
  16. The surgical navigation system according to claim 10, wherein the first coordinate conversion module is specifically used to:
    establish a coordinate system that takes the ultrasound probe as its reference, to obtain an ultrasound probe coordinate system;
    obtain a conversion relationship from the intraoperative ultrasound image coordinate system to the ultrasound probe coordinate system according to calibrated imaging parameters of the ultrasound probe;
    convert the spatial position of the intraoperative ultrasound image into the first coordinate system according to the conversion relationship and the transformation relationship, to obtain the intraoperative ultrasound image in the first coordinate system.
  17. The surgical navigation system according to claim 11, wherein the first coordinate conversion module is specifically used to:
    establish a coordinate system that takes the ultrasound probe as its reference, to obtain an ultrasound probe coordinate system;
    obtain a conversion relationship from the intraoperative ultrasound image coordinate system to the ultrasound probe coordinate system according to calibrated imaging parameters of the ultrasound probe;
    convert the spatial position of the intraoperative ultrasound image into the first coordinate system according to the conversion relationship and the transformation relationship, to obtain the intraoperative ultrasound image in the first coordinate system.
  18. The surgical navigation system according to claim 12, wherein the first coordinate conversion module is specifically used to:
    establish a coordinate system that takes the ultrasound probe as its reference, to obtain an ultrasound probe coordinate system;
    obtain a conversion relationship from the intraoperative ultrasound image coordinate system to the ultrasound probe coordinate system according to calibrated imaging parameters of the ultrasound probe;
    convert the spatial position of the intraoperative ultrasound image into the first coordinate system according to the conversion relationship and the transformation relationship, to obtain the intraoperative ultrasound image in the first coordinate system.
  19. The surgical navigation system according to claim 13, wherein the first coordinate conversion module is specifically used to:
    establish a coordinate system that takes the ultrasound probe as its reference, to obtain an ultrasound probe coordinate system;
    obtain a conversion relationship from the intraoperative ultrasound image coordinate system to the ultrasound probe coordinate system according to calibrated imaging parameters of the ultrasound probe;
    convert the spatial position of the intraoperative ultrasound image into the first coordinate system according to the conversion relationship and the transformation relationship, to obtain the intraoperative ultrasound image in the first coordinate system.
  20. The surgical navigation system according to claim 8, wherein the positioning device is an optical positioner or a magnetic positioner.
PCT/CN2019/125138 2019-12-12 2019-12-13 Surgical navigation system based on intrahepatic vessel registration WO2021114226A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911273460.0A CN112971982B (zh) 2019-12-12 2019-12-12 Surgical navigation system based on intrahepatic vessel registration
CN201911273460.0 2019-12-12

Publications (1)

Publication Number Publication Date
WO2021114226A1 true WO2021114226A1 (zh) 2021-06-17

Family

ID=76328798

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/125138 WO2021114226A1 (zh) 2019-12-12 2019-12-13 Surgical navigation system based on intrahepatic vessel registration

Country Status (2)

Country Link
CN (1) CN112971982B (zh)
WO (1) WO2021114226A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113855240A (zh) * 2021-09-30 2021-12-31 上海寻是科技有限公司 A magnetic-navigation-based medical image registration system and method
CN114145846A (zh) * 2021-12-06 2022-03-08 北京理工大学 Augmented-reality-assisted surgical navigation method and system
CN114948199A (zh) * 2022-05-17 2022-08-30 天津大学 A surgical assistance system and surgical path planning method
CN115762722A (zh) * 2022-11-22 2023-03-07 南方医科大学珠江医院 An artificial-intelligence-based image review system
CN117204951A (zh) * 2023-09-22 2023-12-12 上海睿触科技有限公司 An X-ray-based surgical positioning and navigation device and positioning method therefor
CN117204951B (zh) * 2023-09-22 2024-04-30 上海睿触科技有限公司 An X-ray-based surgical positioning and navigation device and positioning method therefor

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113558765B (zh) * 2021-07-09 2023-03-21 北京罗森博特科技有限公司 Navigation and reduction operation control system and method
CN113974830A (zh) * 2021-11-02 2022-01-28 中国人民解放军总医院第一医学中心 A surgical navigation system for ultrasound-guided thermal ablation of thyroid tumors
CN115311407B (zh) * 2022-04-19 2023-09-12 北京和华瑞博医疗科技有限公司 A feature point marking method, apparatus, device and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102266250A (zh) * 2011-07-19 2011-12-07 中国科学院深圳先进技术研究院 Ultrasound surgical navigation system and ultrasound surgical navigation method
WO2014174069A1 (en) * 2013-04-26 2014-10-30 Sonowand As Stand-alone ultrasound unit for image guided surgery
CN106687048A (zh) * 2014-09-08 2017-05-17 皇家飞利浦有限公司 Medical imaging apparatus
CN106890025A (zh) * 2017-03-03 2017-06-27 浙江大学 A minimally invasive surgical navigation system and navigation method
CN108210024A (zh) * 2017-12-29 2018-06-29 威朋(苏州)医疗器械有限公司 Surgical navigation method and system
CN110025379A (zh) * 2019-05-07 2019-07-19 新博医疗技术有限公司 A real-time navigation system and method based on fusion of ultrasound and CT images
US20190223958A1 (en) * 2018-01-23 2019-07-25 Inneroptic Technology, Inc. Medical image guidance
CN110537961A (zh) * 2019-08-01 2019-12-06 中国人民解放军总医院 A minimally invasive interventional guidance system and method based on fusion of CT and ultrasound images

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102402796B (zh) * 2011-10-26 2013-09-18 重庆大学 Three-dimensional structured description method for the hepatic vascular system
CN103040525B (zh) * 2012-12-27 2016-08-03 深圳先进技术研究院 A multimodal medical image surgical navigation method and system
JP7233841B2 (ja) * 2017-01-18 2023-03-07 ケービー メディカル エスアー Robotic navigation of robotic surgical systems
CN108198235B (zh) * 2017-12-25 2022-03-04 中国科学院深圳先进技术研究院 A three-dimensional ultrasound reconstruction method, apparatus, device and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102266250A (zh) * 2011-07-19 2011-12-07 中国科学院深圳先进技术研究院 Ultrasound surgical navigation system and ultrasound surgical navigation method
WO2014174069A1 (en) * 2013-04-26 2014-10-30 Sonowand As Stand-alone ultrasound unit for image guided surgery
CN106687048A (zh) * 2014-09-08 2017-05-17 皇家飞利浦有限公司 Medical imaging apparatus
CN106890025A (zh) * 2017-03-03 2017-06-27 浙江大学 A minimally invasive surgical navigation system and navigation method
CN108210024A (zh) * 2017-12-29 2018-06-29 威朋(苏州)医疗器械有限公司 Surgical navigation method and system
US20190223958A1 (en) * 2018-01-23 2019-07-25 Inneroptic Technology, Inc. Medical image guidance
CN110025379A (zh) * 2019-05-07 2019-07-19 新博医疗技术有限公司 A real-time navigation system and method based on fusion of ultrasound and CT images
CN110537961A (zh) * 2019-08-01 2019-12-06 中国人民解放军总医院 A minimally invasive interventional guidance system and method based on fusion of CT and ultrasound images

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113855240A (zh) * 2021-09-30 2021-12-31 上海寻是科技有限公司 A magnetic-navigation-based medical image registration system and method
CN114145846A (zh) * 2021-12-06 2022-03-08 北京理工大学 Augmented-reality-assisted surgical navigation method and system
CN114145846B (zh) * 2021-12-06 2024-01-09 北京理工大学 Augmented-reality-assisted surgical navigation method and system
CN114948199A (zh) * 2022-05-17 2022-08-30 天津大学 A surgical assistance system and surgical path planning method
CN114948199B (zh) * 2022-05-17 2023-08-18 天津大学 A surgical assistance system and surgical path planning method
CN115762722A (zh) * 2022-11-22 2023-03-07 南方医科大学珠江医院 An artificial-intelligence-based image review system
CN115762722B (zh) * 2022-11-22 2023-05-09 南方医科大学珠江医院 An artificial-intelligence-based image review system
CN117204951A (zh) * 2023-09-22 2023-12-12 上海睿触科技有限公司 An X-ray-based surgical positioning and navigation device and positioning method therefor
CN117204951B (zh) * 2023-09-22 2024-04-30 上海睿触科技有限公司 An X-ray-based surgical positioning and navigation device and positioning method therefor

Also Published As

Publication number Publication date
CN112971982B (zh) 2022-08-19
CN112971982A (zh) 2021-06-18

Similar Documents

Publication Publication Date Title
WO2021114226A1 (zh) 基于肝内血管配准的手术导航系统
US10762627B2 (en) Method and a system for registering a 3D pre acquired image coordinates system with a medical positioning system coordinate system and with a 2D image coordinate system
CN110946654B (zh) 一种基于多模影像融合的骨科手术导航系统
CN103040525B (zh) 一种多模医学影像手术导航方法及系统
US8781186B2 (en) System and method for abdominal surface matching using pseudo-features
Huang et al. Dynamic 2D ultrasound and 3D CT image registration of the beating heart
JP2009078144A (ja) 副鼻洞形成術ナビゲーションのためのフルオロスコープと計算機式断層写真法との位置揃えのシステム及び利用方法
Alam et al. A review on extrinsic registration methods for medical images
Nimmagadda et al. Patient-specific, touch-based registration during robotic, image-guided partial nephrectomy
CN113229937A (zh) 一种利用实时结构光技术实现手术导航的方法和系统
Xiao et al. User-friendly freehand ultrasound calibration using Lego bricks and automatic registration
US11950951B2 (en) Systems and methods for C-arm fluoroscope camera pose refinement with secondary movement compensation
WO2022165112A1 (en) Systems and methods for c-arm fluoroscope camera pose refinement with secondary movement compensation
Vemuri et al. Interoperative biopsy site relocalization in endoluminal surgery
Wang et al. Towards video guidance for ultrasound, using a prior high-resolution 3D surface map of the external anatomy
Wu et al. Process analysis and application summary of surgical navigation system
CN111714203A (zh) 一种光学定位与电磁定位结合的手术导航方法
Ponzio et al. A Multi-modal Brain Image Registration Framework for US-guided Neuronavigation Systems-Integrating MR and US for Minimally Invasive Neuroimaging
Li et al. Comparison of 2d and 3d ultrasound guided percutaneous renal puncture
Lu et al. Virtual-real registration of augmented reality technology used in the cerebral surgery lesion localization
Manning et al. Surgical navigation
Estépar et al. Multimodality guidance in endoscopic and laparoscopic abdominal procedures
Li et al. Augmented reality using 3D shape model for ultrasound-guided percutaneous renal access: A pig model study
Clements Salient anatomical features for robust surface registration and atlas-based model updating in image-guided liver surgery
Sadeghi-Neshat et al. Design and Implementation of a 3D Ultrasound System for Image Guided Liver Interventions

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19955687

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22/11/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19955687

Country of ref document: EP

Kind code of ref document: A1