CN115089293A - Calibration method for spinal endoscopic surgical robot - Google Patents
- Publication number
- CN115089293A (application CN202210780332.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- binocular camera
- rigid body
- tracking device
- spine
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A61B34/10 — Computer-aided planning, simulation or modelling of surgical operations
- A61B34/20 — Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B34/30 — Surgical robots
- A61B90/361 — Image-producing devices, e.g. surgical cameras
- A61B90/37 — Surgical systems with images on a monitor during operation
- G06T3/14 — Transformations for image registration, e.g. adjusting or mapping for alignment of images
- G06T7/70 — Determining position or orientation of objects or cameras
- G06T7/85 — Stereo camera calibration
- A61B2034/101 — Computer-aided simulation of surgical operations
- A61B2034/105 — Modelling of the patient, e.g. for ligaments or bones
- A61B2034/107 — Visualisation of planned trajectories or target regions
- A61B2034/108 — Computer-aided selection or customisation of medical implants or cutting guides
- A61B2034/2055 — Optical tracking systems
- A61B2034/2065 — Tracking using image or pattern recognition
- A61B2034/301 — Surgical robots for introducing or steering flexible instruments inserted into the body, e.g. catheters or endoscopes
- A61B2090/373 — Surgical systems with images on a monitor during operation using light, e.g. by using optical scanners
- G06T2207/10004 — Still image; photographic image
- G06T2207/10012 — Stereo images
- G06T2207/10048 — Infrared image
- G06T2207/10068 — Endoscopic image
- G06T2207/10081 — Computed x-ray tomography [CT]
- G06T2207/20221 — Image fusion; image merging
- G06T2207/30096 — Tumor; lesion
Abstract
The invention relates to a calibration method for a spinal endoscopic surgical robot, comprising the following steps: calibrate the binocular camera with a calibration device and determine the camera's intrinsic, extrinsic, and distortion parameters; attach a tracking device to each rigid body (the patient's spine, the robot end-effector, and the spinal endoscope) and use the binocular camera to track the devices and acquire the rigid bodies' pose-change information; fuse and register the preoperative CT images with the spinal endoscope images to obtain the correspondence between the lesion-site image and the actual physical lesion region; and locate the local lesion target in the surgical area from the rigid-body pose information acquired by the binocular camera together with the fusion and registration data, completing the calibration. The method achieves hand-eye calibration and rigid-body tracking, raises the degree of automation of the spinal endoscopic surgical robot, improves surgical accuracy and stability, reduces intraoperative risk and postoperative complications, and greatly reduces the radiation exposure of medical staff from CT fluoroscopic guidance.
Description
Technical Field
The invention relates to the technical field of surgical robots, and in particular to a calibration method for a spinal endoscopic surgical robot.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
Compared with traditional open surgery, minimally invasive spinal endoscopic surgery offers smaller wounds, faster postoperative recovery, and reliable surgical outcomes. Most such procedures are performed either directly by surgeons or by surgeons operating a spinal endoscopic surgical robot.
A spinal endoscopic surgical robot must be calibrated before surgery so that it can obtain a reference position for the coordinate system of its end-effector instrument. In the prior art, calibration relies on optical instruments combined with auxiliary equipment, making the procedure complex to operate, inefficient, inaccurate, and unstable. Moreover, multiple intraoperative CT scans are needed to obtain the exact position of the spine, exposing both patients and medical staff to additional radiation.
Disclosure of Invention
To solve the technical problems described in the background, the invention provides a calibration method for a spinal endoscopic surgical robot that achieves hand-eye calibration and rigid-body tracking, raises the robot's degree of automation, improves surgical accuracy and stability, reduces intraoperative risk and postoperative complications, and greatly reduces the radiation exposure of medical staff from CT fluoroscopic guidance.
To achieve this purpose, the invention adopts the following technical scheme.
In a first aspect, the invention provides a calibration method for a spinal endoscopic surgical robot, comprising the following steps:
Step 1: calibrate the binocular camera with a calibration device and determine the camera's intrinsic, extrinsic, and distortion parameters.
Step 2: attach a tracking device to each rigid body. The rigid bodies include the patient's spine, the robot end-effector, the spinal endoscope, the flange connecting the robot and the endoscope, the connecting tool or clamp, and surgical instruments. The binocular camera tracks the devices and acquires the rigid bodies' pose-change information.
Step 3: fuse and register the preoperative CT images with the spinal endoscope images to obtain the correspondence between the lesion-site image and the actual physical lesion region.
Step 4: locate the local lesion target in the surgical area from the rigid-body pose information acquired by the binocular camera together with the fusion and registration data of step 3, completing the calibration.
In step 1, the two cameras of the binocular rig each emit infrared light toward the calibration device; the left and right cameras receive the infrared light reflected back by the device, yielding binary images from which the camera's intrinsic, extrinsic, and distortion parameters are computed.
In step 1, the calibration device consists of four rigidly connected small spheres that lie in one plane and are not collinear. Each sphere carries a retro-reflective coating, so the infrared light emitted by the binocular camera is reflected straight back to it, producing a binary image.
In step 2, the tracking device likewise consists of four rigidly connected, coplanar, non-collinear spheres with retro-reflective coatings. It is attached to a rigid body, and the reflected infrared light yields a binary image of that rigid body.
The spheres of the tracking device and the calibration device may have the same or different diameters.
In step 2, a three-dimensional reconstruction is performed from the binary images acquired by the binocular camera, the tracking device is calibrated and its pose information computed, and the pose of the rigid body is then obtained from the pose of the tracking device.
In step 2, while tracking rigid-body pose changes, tracking devices are attached to the bone tissue of the patient's surgical area and to the distal end of the spinal endoscope, one fixed to each, and two-dimensional image information on the intraoperative position and attitude of the endoscope inside the patient is obtained.
In step 3, tomographic images of the patient's organs and target tissue are reconstructed from the preoperative CT scan to obtain a three-dimensional visual image model; the positional correspondence between this model and the two-dimensional image information is then derived from the spinal endoscope images, achieving image fusion and registration.
In steps 1 and 2, the camera's intrinsic and extrinsic parameters give the spatial transformation from the camera coordinate system to the pixel coordinate system and from the world coordinate system to the camera coordinate system; composing them yields the transformation from the world coordinate system to the pixel coordinate system, from which the pose of a rigid body seen in the image is recovered in actual physical space.
In step 4, local lesion-target data for the surgical area are obtained from the fusion and registration data of step 3; the pose of the lesion target is recovered by inverting the world-to-pixel transformation matrix obtained in steps 1 and 2; and, combining this with the known fixed coupling matrix between the patient's spine and its tracking device, the required pose of the spine tracking device is obtained.
Compared with the prior art, the technical schemes above have the following beneficial effects:
1. The system is simple and practical; it achieves hand-eye calibration and rigid-body tracking and raises the degree of automation and accuracy of the spinal endoscopic surgical robot.
2. It overcomes the complexity and instability of traditional calibration procedures, improving surgical accuracy and stability and reducing intraoperative risk and postoperative complications.
3. Using the preoperative CT images and the binocular camera to guide endoscope tracking and positioning with visual images greatly reduces the number of intraoperative CT scans, lowering the intraoperative radiation dose to patients and medical staff and greatly reducing the radiation harm of CT fluoroscopic guidance.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, provide a further understanding of the invention; they illustrate exemplary embodiments and, together with the description, serve to explain the invention without limiting it.
FIG. 1 is a schematic view illustrating a calibration process of a spinal endoscopic surgical robotic system according to one or more embodiments of the present invention;
FIG. 2 is a schematic diagram of a binocular camera tracking rigid body pose change principle provided by one or more embodiments of the invention;
fig. 3 is a schematic structural diagram of a calibration device or a tracking device according to one or more embodiments of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for describing particular embodiments only and is not intended to limit exemplary embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and the terms "comprises" and/or "comprising" specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
As described in the Background, a spinal endoscopic surgical robot must be calibrated before surgery so that it can obtain a reference position for the coordinate system of its end-effector instrument. Prior-art calibration relies on optical instruments combined with auxiliary equipment and suffers from complex operation, low surgical efficiency, low accuracy, and poor stability; in addition, multiple intraoperative CT scans are needed to obtain the exact position of the spine, exposing patients and medical staff to additional radiation.
The following embodiments therefore provide a calibration method for a spinal endoscopic surgical robot that achieves hand-eye calibration and rigid-body tracking, raises the robot's degree of automation, improves surgical accuracy and stability, reduces intraoperative risk and postoperative complications, and greatly reduces the radiation exposure of medical staff from CT fluoroscopic guidance.
Embodiment 1:
As shown in figs. 1-3, a calibration method for a spinal endoscopic surgical robot comprises the following steps.
Step 1: calibrate the binocular camera with a calibration device to determine the camera's intrinsic, extrinsic, and distortion parameters.
During binocular camera calibration, a passive binocular camera is used: the left and right cameras each emit infrared light toward the calibration device, receive the light the device reflects back, and capture binary images from which the camera's intrinsic, extrinsic, and distortion parameters are computed.
During calibration, the determined intrinsic parameters convert coordinates from the camera coordinate system to the pixel coordinate system, giving the spatial transformation between those two systems.
The determined extrinsic parameters convert coordinates from the world coordinate system to the camera coordinate system, giving the spatial transformation between those two systems.
The determined distortion parameters quantify the distortion of the raw image, including distortion introduced by manufacturing tolerances and assembly deviations, so that it can be compensated; this yields high-precision rigid-body pose information.
Composing the camera-to-pixel and world-to-camera transformations yields the transformation between the world coordinate system and the pixel coordinate system, from which the pose of a rigid body seen in the image is recovered in actual physical space. Incorporating the distortion parameters further compensates the pose information to improve accuracy.
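The world-to-camera-to-pixel chain described above can be sketched as a plain pinhole projection (distortion omitted). All numeric values below are hypothetical placeholders, not the patent's calibration results:

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],   # intrinsics: fx, skew, cx (assumed values)
              [0.0, 800.0, 240.0],   # fy, cy
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # extrinsic rotation (world -> camera)
t = np.array([0.0, 0.0, 1000.0])     # extrinsic translation: camera 1 m from origin

def world_to_pixel(p_world):
    """Project a 3-D world point to pixel coordinates via the pinhole model."""
    p_cam = R @ p_world + t          # world -> camera coordinates (extrinsics)
    uvw = K @ p_cam                  # camera -> homogeneous pixel coordinates (intrinsics)
    return uvw[:2] / uvw[2]          # perspective divide

# A point on the optical axis projects to the principal point (cx, cy).
u, v = world_to_pixel(np.array([0.0, 0.0, 0.0]))
```

Inverting this chain (plus distortion compensation) is what lets the method recover a rigid body's physical pose from its image coordinates.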
The calibration device used here is four rigidly connected reflective spheres, coplanar and non-collinear, whose retro-reflective coatings enable reliable tracking.
In this embodiment, the calibration device is structured as shown in fig. 3: four reflective spheres fixed together in one plane and not collinear, with no restriction on their relative positions. Taking the direction of the camera's emitted light as forward, the spheres are passive reflectors whose retro-reflective surface coating returns the light backward, i.e. toward the binocular camera, where it is received.
Step 2: fixedly connecting a tracking device to a rigid body, wherein the rigid body comprises a spine part of a patient, a tail end of a robot, a spine endoscope, a connecting flange of the robot and the spine endoscope, a connecting tool or clamp of the robot and the spine endoscope, a surgical instrument and the like, and a binocular camera tracks the tracking device in real time to acquire posture change information of the rigid body;
in the process of tracking the pose change of the rigid body, performing binocular camera three-dimensional reconstruction according to the acquired binary image, calibrating the tracking device, calculating the pose information of the tracking device, and fixedly connecting the tracking device and the rigid body, namely acquiring the pose of the rigid body in real time according to the pose of the tracking device.
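For a rectified stereo pair, the three-dimensional reconstruction of a marker seen in both binary images reduces to triangulation from disparity. A minimal sketch with assumed focal length and baseline (not values from the patent):

```python
import numpy as np

f = 800.0      # focal length in pixels (assumed)
B = 120.0      # baseline between the two cameras, in mm (assumed)

def triangulate(u_left, u_right, v, cx=320.0, cy=240.0):
    """Recover the camera-frame 3-D point of a marker seen in both images."""
    d = u_left - u_right              # disparity in pixels
    Z = f * B / d                     # depth from disparity
    X = (u_left - cx) * Z / f         # back-project through the left camera
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

# A 20-pixel disparity at these settings corresponds to 4800 mm of depth.
p = triangulate(360.0, 340.0, 240.0)
```

Applying this to each of the four marker spheres gives the point set from which the tracking device's pose is computed.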
While tracking pose changes, the spatial transformation between the world and pixel coordinate systems, together with the distortion compensation, follows from the intrinsic, extrinsic, and distortion parameters obtained in step 1; applying these transformations yields the accurate pose of the tracking device in the image and calibrates the device.
The tracking device used here is four rigidly connected reflective spheres, coplanar and non-collinear, whose retro-reflective coatings enable reliable tracking.
In this embodiment, the tracking device and the calibration device are both rigid bodies carrying four small reflective spheres. Their structures, sizes, and dimensions may be the same or different, provided the spheres lie in one plane and are not collinear.
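The coplanar, non-collinear requirement on the four spheres can be checked numerically: the centred sphere positions must have rank exactly 2 (rank 1 means collinear, rank 3 means not coplanar). A small sketch with illustrative coordinates:

```python
import numpy as np

def valid_marker_geometry(points, tol=1e-6):
    """Four sphere centres must span a plane (rank 2) but not collapse to a line."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)          # remove the centroid
    rank = np.linalg.matrix_rank(centred, tol=tol)
    return rank == 2

ok = valid_marker_geometry([[0, 0, 0], [10, 0, 0], [0, 10, 0], [10, 10, 0]])
bad = valid_marker_geometry([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
```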
While tracking pose changes, tracking devices are mounted on the bone tissue of the patient's surgical area and on the distal end of the spinal endoscope, one fixed to each, and two-dimensional video images of the intraoperative position and attitude of the endoscope inside the patient are obtained in real time.
Step 3: fuse and register the preoperative CT images with the real-time spinal endoscope images to obtain the correspondence between the lesion-site image and the actual physical lesion region.
During fusion and registration, tomographic images of the patient's organs and target tissue are reconstructed from the preoperative CT scan into a three-dimensional visual image model; combining this with the real-time endoscope images establishes the positional correspondence between the three-dimensional model and the two-dimensional video, achieving image fusion and registration.
In this embodiment, the three-dimensional reconstruction of the CT images proceeds in three stages: denoising the raw CT data with Gaussian filtering, reconstructing intermediate tomographic slices by inverse-distance-weighted interpolation, and rendering the CT volume by ray casting.
For Gaussian-filtered denoising of the raw CT data, the weights are assigned by a distance-weighting function that accounts for the influence of each neighborhood pixel's distance, i.e.

f(x, y) = Σ_{(i,j)∈M} u(i, j) · v(i, j)

where the original image v(i, j) is an N × N matrix, M is the set of CT-image pixel coordinates in the neighborhood, n is the number of coordinate points in M, u(i, j) are the weights (which sum to 1), and f(x, y) is the filtered, denoised image.
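A minimal sketch of the distance-weighted Gaussian denoising described above: the weights u(i, j) are normalised to sum to 1, so a flat patch passes through unchanged. Kernel radius and sigma are assumed values:

```python
import math

def gaussian_weights(radius=1, sigma=1.0):
    """Distance-based Gaussian weights u(i, j), normalised to sum to 1."""
    raw = {}
    for i in range(-radius, radius + 1):
        for j in range(-radius, radius + 1):
            raw[(i, j)] = math.exp(-(i * i + j * j) / (2.0 * sigma * sigma))
    total = sum(raw.values())
    return {k: g / total for k, g in raw.items()}

def denoise_pixel(img, x, y, weights):
    """f(x, y) = sum over the neighborhood of u(i, j) * v(x+i, y+j)."""
    return sum(wt * img[x + i][y + j] for (i, j), wt in weights.items())

w = gaussian_weights()
flat = [[5.0] * 3 for _ in range(3)]
val = denoise_pixel(flat, 1, 1, w)    # centre of a constant patch stays 5.0
```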
Reconstruction of intermediate tomographic slices by inverse-distance-weighted interpolation fills in the pixels between CT slices, giving a smooth reconstructed surface and improving the quality and accuracy of the three-dimensional reconstruction. The value at a sample point, in terms of its neighboring data points, can be expressed as

f(M) = Σ_{i=1}^{n} μ_i · f(M_i)

and satisfies

μ_i = (1 / l_i) / Σ_{j=1}^{n} (1 / l_j),  Σ_{i=1}^{n} μ_i = 1

where M_i(x_i, y_i, z_i) are the other data points inside the voxel-cube grid containing the sample point M(x, y, z), f(M_i) are the color (or opacity) values at the neighboring data points, μ_i is the distance weight of neighboring point i, l_i is the Euclidean distance from neighboring point i to the sample point, and i = 1, 2, …, n.
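The inverse-distance-weighted interpolation follows directly from the definitions above: weights proportional to 1/l_i, normalised to sum to 1. A minimal sketch:

```python
import math

def idw_interpolate(sample_point, data_points, values, power=1):
    """Inverse-distance weights mu_i proportional to 1/l_i, normalised to 1."""
    dists = [math.dist(sample_point, p) for p in data_points]
    inv = [1.0 / (d ** power) for d in dists]   # assumes no coincident points
    total = sum(inv)
    return sum((w / total) * v for w, v in zip(inv, values))

# The midpoint between two voxel points with values 0.2 and 0.6 interpolates to 0.4.
val = idw_interpolate((0.5, 0.0, 0.0),
                      [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
                      [0.2, 0.6])
```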
In CT volume rendering by ray casting, the opacities and color values of all sample points along a ray are composited to give the final color of the corresponding pixel, which can be expressed as

S_o = S_n · T_n + S_i · (1 − T_n),  T_o = T_n + T_i · (1 − T_n)

where S_i and T_i are the chroma and opacity accumulated before the ray reaches the current sample point, S_o and T_o are the chroma and opacity after the projection ray passes through it, and S_n and T_n are the chroma and opacity of the current sample point.
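A sketch of the compositing step, using the common front-to-back accumulation of (colour, opacity) samples along one ray; the sample values are purely illustrative:

```python
def composite_ray(samples):
    """Front-to-back compositing of (colour, opacity) samples along one ray."""
    color, acc = 0.0, 0.0                 # accumulated colour and opacity
    for s_n, t_n in samples:              # samples ordered from the eye outward
        color += (1.0 - acc) * t_n * s_n  # remaining transparency scales the sample
        acc += (1.0 - acc) * t_n
        if acc >= 0.999:                  # early ray termination once opaque
            break
    return color, acc

# A half-opaque bright sample followed by a fully opaque dark one.
c, a = composite_ray([(1.0, 0.5), (0.0, 1.0)])
```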
In the image fusion and registration process, an iterative closest point (ICP) algorithm on point-cloud data is used: the three-dimensional point cloud reconstructed from the preoperative CT images and the surface point cloud scanned intraoperatively are moved iteratively, with the least-squares sum of point-to-point distances as the metric, until the best overlap is obtained, which yields the coordinate transformation between the two point clouds.
Let the point clouds to be registered be U = {u_i ∈ R³, i = 1, 2, …, m} and V = {v_i ∈ R³, i = 1, 2, …, n}, where any point u_i in U and its nearest point v_i in V form a corresponding point pair. The sum of squared Euclidean distances over the registered point clouds can then be expressed as:

D(R, T) = Σ_{i=1}^{m} ‖v_i − (R·u_i + T)‖²,

where R is the rotation matrix and T is the translation vector. R and T are solved by iteration so that D(R, T) is minimized.
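A minimal sketch of the ICP procedure described above, using the standard SVD (Kabsch) solution for the least-squares rotation and translation and brute-force nearest-neighbor pairing; the function names are illustrative, not from the patent:

```python
import numpy as np

def best_rigid_transform(U, V):
    """Least-squares R, T minimizing sum ||v_i - (R u_i + T)||^2 for
    paired point sets U, V (each shape (m, 3)); Kabsch/SVD solution."""
    cu, cv = U.mean(axis=0), V.mean(axis=0)
    H = (U - cu).T @ (V - cv)                 # 3x3 cross-covariance
    Uw, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ Uw.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ Uw.T
    T = cv - R @ cu
    return R, T

def icp(U, V, iters=20):
    """Simple ICP loop: pair each moved point of U with its nearest
    neighbor in V, re-solve R and T, and repeat."""
    R, T = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = U @ R.T + T
        # brute-force nearest-neighbor correspondences (for clarity only)
        idx = np.argmin(((moved[:, None] - V[None]) ** 2).sum(-1), axis=1)
        R, T = best_rigid_transform(U, V[idx])
    return R, T
```

A k-d tree would replace the brute-force pairing in practice; the least-squares core is unchanged.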
Step 4: comprehensively analyze the rigid-body pose information acquired by the binocular camera together with the image fusion and registration data, and locate the local lesion target in the surgical area.
From the image fusion and registration data obtained in step 3, the local lesion target data of the surgical area are obtained; the pose information of the local lesion target is then solved inversely through the spatial transformation matrix between the world coordinate system and the pixel coordinate system obtained in step 1; combining this with the known fixed-connection coupling matrix between the patient's spine and its tracking device yields the expected pose of the patient's spine tracking device.
The poses of the tracking devices fixed to the rigid bodies (the patient's spine, the robot end-effector, the spinal endoscope, the connecting flange between robot and endoscope, the connecting tool or clamp, the surgical instruments, and so on), obtained by the binocular camera in step 2, are continuously compared with the expected values; when the least-squares error reaches its minimum, the intraoperative spinal endoscopic surgical robot is considered to be positioned at the local lesion target in the surgical area.
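The comparison of measured against expected tracker poses can be reduced to a least-squares score on homogeneous transforms; a hedged sketch follows, in which the tolerance is a placeholder of ours, not a value from the patent:

```python
import numpy as np

def pose_error(T_measured, T_expected):
    """Sum of squared element-wise differences between a measured and an
    expected 4x4 homogeneous pose, used as a simple least-squares score."""
    return float(np.sum((T_measured - T_expected) ** 2))

def at_target(T_measured, T_expected, tol=1e-3):
    """Declare the robot positioned at the lesion target when the score
    falls below `tol` (an illustrative threshold)."""
    return pose_error(T_measured, T_expected) < tol
```

In a real system the rotational and translational parts would typically be weighted separately, since they carry different units.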
The system is simple and practical: it effectively realizes hand-eye calibration and rigid-body tracking and raises the degree of automation of the spinal endoscopic surgical robot. It overcomes the complexity and instability of conventional procedures, effectively improves surgical accuracy and stability, and reduces intraoperative risks and postoperative complications.
Because the endoscope is tracked and positioned under visual guidance from the preoperative CT images and the binocular camera, the number of intraoperative CT scans is greatly reduced, which lowers the intraoperative radiation dose to patients and medical staff and greatly reduces the radiation exposure that CT fluoroscopic guidance would otherwise inflict on the staff.
The principle by which the binocular camera tracks rigid-body pose changes is shown in fig. 2:
step 1: a passive binocular camera is used; the left and right cameras each emit infrared light toward the calibration device and receive the infrared light reflected back from it, a binary image is captured, and the intrinsic parameters, extrinsic parameters, and distortion parameters of the binocular camera are obtained by solving;
step 2: performing three-dimensional reconstruction with the binocular camera from the acquired binary images;
step 3: calibrating the tracking device;
step 4: calculating the pose information of the tracking device;
step 5: fixing the tracking device to the rigid body;
step 6: calculating the pose of the tracking device in real time, and thereby obtaining the pose of the rigid body.
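Steps 1-6 above rest on recovering a marker's 3D position from its pixel coordinates in the two binary images. A hedged sketch of linear (DLT) triangulation follows, assuming 3x4 projection matrices composed from the calibrated intrinsic and extrinsic parameters; the function name and test geometry are illustrative:

```python
import numpy as np

def triangulate(P_left, P_right, x_left, x_right):
    """Linear (DLT) triangulation of one reflective-marker center from
    its pixel coordinates (x, y) in the left and right images, given
    the 3x4 projection matrices of the calibrated stereo pair."""
    A = np.vstack([
        x_left[0]  * P_left[2]  - P_left[0],
        x_left[1]  * P_left[2]  - P_left[1],
        x_right[0] * P_right[2] - P_right[0],
        x_right[1] * P_right[2] - P_right[1],
    ])
    _, _, Vt = np.linalg.svd(A)        # null vector of A is the solution
    X = Vt[-1]
    return X[:3] / X[3]                # homogeneous -> Euclidean
```

With noise-free projections the reconstruction is exact; with real detections, the SVD gives the algebraic least-squares point.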
Tracking devices are mounted on the bone tissue in the patient's surgical area and on the distal end of the spinal endoscope, each fixed in place with bolts, so that a two-dimensional video image of the position and posture of the spinal endoscope inside the patient is obtained in real time during the operation.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (10)
1. A calibration method for a spinal endoscopic surgery robot, characterized by comprising the following steps:
step 1: calibrating the binocular camera by using a calibration device, and determining internal parameters, external parameters and distortion parameters of the binocular camera;
step 2: the tracking device is connected with a rigid body, the rigid body comprises a spine part of a patient, a tail end of a robot, a spine endoscope, a connecting flange of the robot and the spine endoscope, a connecting tool of the robot and the spine endoscope, a clamp or a surgical instrument, and the binocular camera is used for tracking the tracking device and acquiring pose change information of the rigid body;
and step 3: performing image fusion and registration on the preoperative CT image and the spine endoscope image to obtain the corresponding relation between the lesion part image and the actual physical lesion area;
step 4: locating the local lesion target in the surgical area according to the rigid-body pose information acquired by the binocular camera and the image fusion and registration data of step 3, thereby completing the calibration.
2. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: in the step 1, two cameras of the binocular camera respectively send infrared rays to the calibration device, and the left camera and the right camera receive the infrared rays reflected by the calibration device to obtain a binary image, so as to obtain internal parameters, external parameters and distortion parameters of the binocular camera.
3. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: in the step 1, the calibration device consists of four small balls fixedly connected together, lying in the same plane and not collinear, each small ball bearing a retro-reflective coating that reflects the infrared light emitted by the binocular camera back to the camera to obtain a binary image.
4. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: in the step 2, the tracking device consists of four small balls fixedly connected together, lying in the same plane and not collinear, each small ball bearing a retro-reflective coating and being connected to the rigid body, so that the infrared light emitted by the binocular camera is reflected back to the camera to obtain a binary image of the rigid body.
5. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: the diameters of the small balls in the tracking device and the calibration device are the same or different.
6. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: in the step 2, three-dimensional reconstruction is performed and the tracking device is calibrated according to the binary image acquired by the binocular camera so as to obtain the pose information of the tracking device, and the pose of the rigid body is obtained from the pose information of the tracking device.
7. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: in the step 2, during tracking of the rigid-body pose changes, tracking devices are connected to the bone tissue of the patient's surgical area and to the distal end of the spinal endoscope, the tracking devices being fixed on the bone tissue and the spinal endoscope respectively, and two-dimensional image information of the position and posture of the spinal endoscope inside the patient's body during the operation is obtained.
8. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: in the step 3, a tomographic image of the organ and the target tissue of the patient is reconstructed according to the preoperative CT image of the patient to obtain a three-dimensional visual image model; and obtaining the position corresponding relation between the three-dimensional visual image model and the two-dimensional image information according to the spine endoscope image, and realizing image fusion and registration.
9. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: in the steps 1 and 2, based on the internal parameters and the external parameters of the binocular camera, the spatial transformation relationship between the camera coordinate system and the pixel coordinate system and the spatial transformation relationship between the world coordinate system and the camera coordinate system are obtained, the spatial transformation relationship between the world coordinate system and the pixel coordinate system is obtained, and the pose information of the rigid body in the image in the actual physical space is obtained.
10. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: in the step 4, the local lesion target data of the surgical area are obtained according to the image fusion and registration data obtained in the step 3; the pose information of the local lesion target of the surgical area is obtained by inverse solution through the spatial transformation relation matrix between the world coordinate system and the pixel coordinate system obtained in the steps 1 and 2; and this is combined with the known fixed-connection coupling relation matrix between the patient's spine and the tracking device to obtain the required pose state of the patient's spine tracking device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210780332.0A CN115089293A (en) | 2022-07-04 | 2022-07-04 | Calibration method for spinal endoscopic surgical robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210780332.0A CN115089293A (en) | 2022-07-04 | 2022-07-04 | Calibration method for spinal endoscopic surgical robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115089293A true CN115089293A (en) | 2022-09-23 |
Family
ID=83297465
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210780332.0A Pending CN115089293A (en) | 2022-07-04 | 2022-07-04 | Calibration method for spinal endoscopic surgical robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115089293A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117679173A (en) * | 2024-01-03 | 2024-03-12 | 骨圣元化机器人(深圳)有限公司 | Robot assisted navigation spine surgical system and surgical equipment |
CN117830438A (en) * | 2024-03-04 | 2024-04-05 | 数据堂(北京)科技股份有限公司 | Laser radar and camera combined calibration method based on specific marker |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101862205A (en) * | 2010-05-25 | 2010-10-20 | 中国人民解放军第四军医大学 | Intraoperative tissue tracking method combined with preoperative image |
CN107874832A (en) * | 2017-11-22 | 2018-04-06 | 合肥美亚光电技术股份有限公司 | Bone surgery set navigation system and method |
CN109925057A (en) * | 2019-04-29 | 2019-06-25 | 苏州大学 | A kind of minimally invasive spine surgical navigation methods and systems based on augmented reality |
CN110946654A (en) * | 2019-12-23 | 2020-04-03 | 中国科学院合肥物质科学研究院 | Bone surgery navigation system based on multimode image fusion |
CN114129262A (en) * | 2021-11-11 | 2022-03-04 | 北京歌锐科技有限公司 | Method, equipment and device for tracking surgical position of patient |
CN114176772A (en) * | 2021-12-03 | 2022-03-15 | 上海由格医疗技术有限公司 | Preoperative positioning method, system, medium and computer equipment based on three-dimensional vision |
CN114224489A (en) * | 2021-12-12 | 2022-03-25 | 浙江德尚韵兴医疗科技有限公司 | Trajectory tracking system for surgical robot and tracking method using the same |
WO2022062464A1 (en) * | 2020-09-27 | 2022-03-31 | 平安科技(深圳)有限公司 | Computer vision-based hand-eye calibration method and apparatus, and storage medium |
- 2022-07-04 CN CN202210780332.0A patent/CN115089293A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110946654B (en) | Bone surgery navigation system based on multimode image fusion | |
US8064669B2 (en) | Fast 3D-2D image registration system with application to continuously guided endoscopy | |
US8108072B2 (en) | Methods and systems for robotic instrument tool tracking with adaptive fusion of kinematics information and image information | |
CN107468350B (en) | Special calibrator for three-dimensional image, operation positioning system and positioning method | |
US8792963B2 (en) | Methods of determining tissue distances using both kinematic robotic tool position information and image-derived position information | |
CN115089293A (en) | Calibration method for spinal endoscopic surgical robot | |
US6782287B2 (en) | Method and apparatus for tracking a medical instrument based on image registration | |
EP2433262B1 (en) | Marker-free tracking registration and calibration for em-tracked endoscopic system | |
CN110264504B (en) | Three-dimensional registration method and system for augmented reality | |
US11759272B2 (en) | System and method for registration between coordinate systems and navigation | |
JP6713563B2 (en) | System and method for performing local three-dimensional volume reconstruction using standard fluoroscopy equipment | |
CN112971982B (en) | Operation navigation system based on intrahepatic vascular registration | |
CN111260786A (en) | Intelligent ultrasonic multi-mode navigation system and method | |
WO2009045827A2 (en) | Methods and systems for tool locating and tool tracking robotic instruments in robotic surgical systems | |
WO2019073681A1 (en) | Radiation imaging device, image processing method, and image processing program | |
Wahle et al. | 3D heart-vessel reconstruction from biplane angiograms | |
CN114191078B (en) | Endoscope operation navigation robot system based on mixed reality | |
JP2023520618A (en) | Method and system for using multi-view pose estimation | |
CN116883471A (en) | Line structured light contact-point-free cloud registration method for chest and abdomen percutaneous puncture | |
CN110458886A (en) | A kind of surgical navigational automation registration frame of reference | |
CN114073579A (en) | Operation navigation method, device, electronic equipment and storage medium | |
CN116612166A (en) | Registration fusion algorithm for multi-mode images | |
Huang et al. | Image registration based 3D TEE-EM calibration | |
CN114886558A (en) | Endoscope projection method and system based on augmented reality | |
US20220133409A1 (en) | Method for Determining Target Spot Path |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |