CN114041876A - Augmented reality orthopedic perspective navigation method and system based on structured light

Augmented reality orthopedic perspective navigation method and system based on structured light

Info

Publication number
CN114041876A
Authority
CN
China
Prior art keywords
structured light
perspective
camera
coordinate system
augmented reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111448706.0A
Other languages
Chinese (zh)
Inventor
张峰峰 (Zhang Fengfeng)
张浩东 (Zhang Haodong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University
Original Assignee
Suzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University
Priority to CN202111448706.0A
Publication of CN114041876A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 17/00 Surgical instruments, devices or methods, e.g. tourniquets
    • A61B 17/16 Bone cutting, breaking or removal means other than saws, e.g. Osteoclasts; Drills or chisels for bones; Trepans
    • A61B 17/17 Guides or aligning means for drills, mills, pins or wires
    • A61B 17/1703 Guides or aligning means for drills, mills, pins or wires using imaging means, e.g. by X-rays
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 17/00 Surgical instruments, devices or methods, e.g. tourniquets
    • A61B 17/16 Bone cutting, breaking or removal means other than saws, e.g. Osteoclasts; Drills or chisels for bones; Trepans
    • A61B 17/17 Guides or aligning means for drills, mills, pins or wires
    • A61B 17/1717 Guides or aligning means for drills, mills, pins or wires for applying intramedullary nails or pins
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/101 Computer-aided simulation of surgical operations
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/107 Visualisation of planned trajectories or target regions
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/108 Computer aided selection or customisation of medical implants or cutting guides
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2046 Tracking techniques
    • A61B 2034/2065 Tracking using image or pattern recognition

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Robotics (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an augmented reality orthopedic perspective navigation method and system based on structured light, comprising the following steps: S1: converting an imaging source coordinate system of the X-ray perspective equipment and a coordinate system of the structured light reconstruction model into the same coordinate system by using a calibration target; S2: acquiring a perspective picture of the affected part of a patient shot by the X-ray perspective equipment and a three-dimensional reconstruction point cloud model obtained by scanning the affected part with structured light; S3: matching and fusing the perspective picture and the three-dimensional reconstruction point cloud model under the same coordinate system to obtain fusion information; S4: superimposing the fusion information on the structured light camera image and outputting the result for display. By applying augmented reality to the navigation needs of orthopedic surgery and combining it with structured light, the invention meets the doctor's need for perspective information during orthopedic operations.

Description

Augmented reality orthopedic perspective navigation method and system based on structured light
Technical Field
The invention relates to the technical field of surgical navigation, in particular to an augmented reality orthopedic perspective navigation method and system based on structured light.
Background
In the medical industry, augmented reality is becoming increasingly important for enhancing existing navigation technology, and it has developed rapidly in minimally invasive neurosurgery, plastic surgery and trauma surgery. In the field of orthopedic surgery, the relatively small intraoperative deformation of bone matches the characteristics of augmented reality technology well. Research on introducing augmented reality to realize navigation in orthopedic surgery is at present limited abroad and rarer still at home. Research in Germany, the United Kingdom and the United States leads the world: some research teams have developed advanced systems and devices that use augmented reality in orthopedic surgery of the head, lower leg, spine and so on, and some of these have succeeded in clinical trials but have not yet been put into large-scale clinical use, remaining a long way from commercialization and industrialization.
The prior art includes two augmented reality navigation technologies: markerless and marker-based registration. In marker-based registration, trackable marker points are implanted into the patient's body or attached to the surface skin, a CT scan is then performed to obtain perspective information, the real-space positions of the marker points are obtained with high-precision tracking equipment, and the marker positions in the CT scan information are registered with the real-space coordinates so that the position of the patient's affected part in the CT can be located in real space; finally a camera fuses the virtual and real information to guide the doctor's operation. An example is the HipNav surgical navigation system developed by Jaramaz et al. in 2003 on the basis of a CT machine. In markerless registration, the registration is fixed offline: the fluoroscopy equipment and the camera are registered at a certain fixed spatial position, and after registration is finished the patient can be moved to that fixed position for fluoroscopic imaging and surgical navigation. An example is the surgical navigation system fusing a double mirror with the X-ray fluoroscopic image developed by Professor Navab et al. in 1999.
The traditional marker-based augmented reality navigation system has a long registration time, a complex workflow and low integration; it needs a great deal of time and manpower to adjust the position of the devices, and the operating space is narrow. If the registration procedure goes wrong, a marker point is accidentally touched or the tracking equipment is moved during the operation, registration must be redone, which adds surgical steps, prolongs the operation and increases surgical risk;
the traditional markerless augmented reality navigation system has a complex structure and requires modification of the fluoroscopy equipment. Although offline registration reduces the intraoperative registration time, the operation is still cumbersome, the fluoroscopy equipment must remain at a fixed position, and the bulky mechanical structure fixed to it also interferes with its ordinary use, so the system is difficult for doctors to accept;
at present most augmented reality navigation platforms are complex in structure and expensive, and an ordinary hospital can hardly afford the high cost of buying one, so they are difficult to popularize. Orthopedic navigation technologies other than augmented reality, namely the traditional mechanical or electromagnetic navigation technologies, are not intuitive enough; although they simplify the operation, the workflow is still cumbersome and multiple fluoroscopic acquisitions are still needed, which still produces a large amount of radiation for patients and doctors;
most current surgical navigation systems based on X-ray fluoroscopic information need external optical positioning equipment to track the position of the patient and the surgical manipulation, which requires a high-precision preoperative registration process, and the additional tracking markers at the surgical site cause extra damage to the patient's body. Moreover, because the surgical system has many components, the interaction between the devices may reduce the efficiency and accuracy of the operation and burden the narrow surgical workspace. For doctors, a surgical navigation system should ideally be an intraoperative auxiliary device that does not change the original surgical procedure; a surgical device incompatible with the doctor's way of operating lacks clinical reliability and is difficult for doctors to accept.
Disclosure of Invention
The invention aims to provide an augmented reality orthopedic perspective navigation method and system based on structured light which, by applying augmented reality to the navigation needs of orthopedic surgery and combining it with structured light, meets the doctor's need for perspective information during orthopedic operations.
In order to solve the technical problem, the invention provides an augmented reality orthopedic perspective navigation method based on structured light, which comprises the following steps:
S1: converting an imaging source coordinate system of the X-ray perspective equipment and a coordinate system of the structured light reconstruction model into the same coordinate system by using a calibration target;
S2: acquiring a perspective picture of the affected part of a patient shot by the X-ray perspective equipment and a three-dimensional reconstruction point cloud model obtained by scanning the affected part with structured light;
S3: matching and fusing the perspective picture and the three-dimensional reconstruction point cloud model under the same coordinate system to obtain fusion information;
S4: superimposing the fusion information on the structured light camera image and outputting the result for display.
As a further improvement of the present invention, the step S1 specifically includes the following steps:
S11: manufacturing a calibration target: a round-hole calibration target based on the perspective principle is adopted, and the X-ray perspective device is treated as equivalent to a pinhole camera model;
S22: placing the calibration target under the X-ray perspective device and taking an X-ray picture, marking a chosen edge corner point of the calibration target as the origin of the world coordinate system, solving with Zhang's calibration method the rotation-translation matrix from the camera coordinate system of the ray-simulated pinhole camera model to the world coordinate system, denoted R1T1, and denoting the intrinsic matrix of the pinhole camera as Intrinsic1;
S23: keeping the calibration target in place, taking an optical picture with the structured light's optical camera, marking the same edge corner point of the calibration target as the origin of the world coordinate system, obtaining with Zhang's calibration method the rotation-translation matrix R2T2 from the structured-light optical camera coordinate system to the world coordinate system, denoting the intrinsic matrix as Intrinsic2, and then obtaining the rotation-translation identity matrix R3T3 from the structured-light camera coordinate system to the structured-light reconstruction coordinate system, so that the rotation-translation matrix from the structured-light reconstruction coordinate system to the world coordinate system is R3T3·R2T2;
S33: obtaining the translation-rotation matrix from the structured-light reconstruction coordinate system to the camera coordinate system of the simulated pinhole camera model as RT = [R1T1]^-1·R2T2, completing the calibration of the relative relationship of the two spatial positions, as sketched in code below.
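For intuition only, here is a minimal sketch of how these calibrated quantities compose; it is not the patent's implementation. It assumes each of R1T1 and R2T2 is packed as a 4×4 homogeneous matrix mapping that camera's frame to the shared world frame, as stated above, and that R3T3 is the identity:

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation R and a 3-vector translation t into a 4x4 matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t, dtype=float).ravel()
    return T

def reconstruction_to_xray_camera(R1, t1, R2, t2):
    """RT = [R1T1]^-1 . R2T2: structured-light reconstruction frame ->
    simulated pinhole (X-ray) camera frame, taking R3T3 as the identity."""
    R1T1 = to_homogeneous(R1, t1)  # X-ray pinhole camera -> world
    R2T2 = to_homogeneous(R2, t2)  # structured-light camera -> world
    return np.linalg.inv(R1T1) @ R2T2
```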
As a further improvement of the present invention, the step S3 specifically includes the following steps:
S31: using the structured-light reconstruction coordinate system as the spatial origin, rendering the reconstructed point cloud model in a virtual space;
S32: using the calibrated RT and Intrinsic1 to create, in the virtual space, a first virtual camera simulating imaging by the fluoroscopy device;
S33: creating a virtual projection plane according to the point cloud model projected by the first virtual camera, mapping the perspective picture of the patient's affected part onto the virtual plane with texture mapping, and shooting with the first virtual camera a picture in which the point cloud model and the perspective picture of the affected part are fused;
S34: carrying out region-of-interest division or drilling-planning preprocessing on the fused picture to obtain the fusion information.
As a further improvement of the present invention, the step S4 specifically includes the following steps: using Intrinsic2 and R2T2 to create a second virtual camera simulating the structured-light camera; the second virtual camera receives the processed fusion information, superimposes it on the real structured-light camera image, and outputs the result for display.
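As a rough illustration of this overlay step (assumed details, not the patent's code): fused annotations, given as 3D points in the reconstruction frame (which coincides with the world frame here), can be projected into the structured-light camera view with Intrinsic2. Because R2T2 above maps camera to world, it is inverted to obtain the world-to-camera extrinsics that cv2.projectPoints expects; the point and color inputs are illustrative.

```python
import cv2
import numpy as np

def overlay_fusion(frame, pts_world, colors, K2, dist2, R2, t2):
    """Draw fused 3D annotations onto the live structured-light camera frame.
    R2, t2 follow the text's R2T2 convention (camera -> world), so the
    world -> camera extrinsics passed to projectPoints are their inverse."""
    R_w2c = R2.T                                          # inverse rotation
    t_w2c = -R2.T @ np.asarray(t2, dtype=float).reshape(3, 1)
    rvec, _ = cv2.Rodrigues(R_w2c)
    img_pts, _ = cv2.projectPoints(np.asarray(pts_world, dtype=np.float64),
                                   rvec, t_w2c, K2, dist2)
    out = frame.copy()
    for (u, v), c in zip(img_pts.reshape(-1, 2), colors):
        cv2.circle(out, (int(round(u)), int(round(v))), 2,
                   tuple(int(x) for x in c), -1)
    return out
```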
An augmented reality orthopaedics perspective navigation system based on structured light comprises a computer workstation, wherein the computer workstation is used for executing the augmented reality orthopaedics perspective navigation method based on structured light.
As a further improvement of the invention, the calibration target is a round-hole calibration target comprising a rectangular bottom plate on which a plurality of equally spaced round holes are arranged.
As a further improvement of the invention, the structured light comprises a housing, and a projector and a camera arranged in the housing, with the relative positions of the projector and the camera fixed.
As a further improvement of the invention, the system further comprises a movable support on which a sliding frame is hinged, and the structured light is mounted on the sliding frame.
As a further improvement of the invention, the X-ray perspective equipment is a movable C-arm; a display screen is further arranged on the movable support and connected with the computer workstation.
The invention has the beneficial effects that: by applying augmented reality to navigation in orthopedic surgery, the invention meets the doctor's need for perspective information during the operation. The virtual-real fusion provides an intraoperative see-through effect without shooting X-ray images many times, greatly reducing the radiation dose to doctor and patient. The overlay accuracy between the X-ray perspective image and the real spatial position of the bones inside the patient's body is below 2 mm, meeting the requirements of most orthopedic operations. The system can be independent of the X-ray fluoroscopy equipment: no structural modification of the fluoroscopy equipment is needed, so the use of its other functions is unaffected. The invention also optimizes the calibration process, shortening the whole calibration procedure to under five minutes and simplifying the calibration method so that a doctor can quickly learn the operation without special training;
the system is highly integrated: the hardware facilities required by the structured-light-based surgical navigation system are integrated into a movable device, which can satisfy the doctor's need to move the structured-light device in all directions during the operation and position it where required, improving coordination and flexibility. The structure is simple, the operation convenient, the cost low, and manual operation errors are reduced.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic diagram of a calibration target structure employed in the present invention;
FIG. 3 is a schematic diagram of the spatial relationship between the X-ray source and the planar model in the present invention;
FIG. 4 is a diagram of the color look-up table created in the present invention for rendering and displaying a color point cloud;
FIG. 5 is a diagram illustrating the correspondence between scalar values and index values in the present invention;
FIG. 6 is an exemplary diagram of the pixel points corresponding to the point cloud in the present invention;
FIG. 7 is a schematic view of the enhancement effect of the present invention;
FIG. 8 is a diagram of a partial hardware architecture employed in an embodiment of the present invention;
The reference numbers in the figures: 1. calibration target; 2. X-ray source; 3. planar model; 4. movable support; 5. sliding frame; 6. structured light; 7. display.
Detailed Description
The present invention is further described below with reference to the accompanying drawings and specific embodiments, so that those skilled in the art can better understand and practice the invention; the embodiments, however, are not intended to limit the invention.
Referring to fig. 1, the invention provides an augmented reality orthopedic perspective navigation method based on structured light, comprising the following steps:
S1: converting an imaging source coordinate system of the X-ray perspective equipment and a coordinate system of the structured light reconstruction model into the same coordinate system by using a calibration target;
S2: acquiring a perspective picture of the affected part of a patient shot by the X-ray perspective equipment and a three-dimensional reconstruction point cloud model obtained by scanning the affected part with structured light;
S3: matching and fusing the perspective picture and the three-dimensional reconstruction point cloud model under the same coordinate system to obtain fusion information;
S4: superimposing the fusion information on the structured light camera image and outputting the result for display.
Specifically, before the method is used, the calibration target is placed under the movable X-ray perspective device and an X-ray picture is taken; without moving the calibration target, an optical photo is then taken with the structured light's optical camera. Using these two pictures, the world coordinate system of the X-ray source is converted into the reconstruction coordinate system of the structured light, which completes the calibration. During use, the structured light and the X-ray perspective equipment are kept in place; the patient's affected part is moved under the X-ray perspective equipment to take a perspective picture, the affected part is scanned with the structured light to obtain a three-dimensional reconstruction point cloud model, the picture and the reconstruction model are matched and fused, and the fusion information is then output through the structured light's optical camera to guide the doctor in performing the operation.
The movable X-ray perspective equipment and the structured light alignment calibration method comprise the following steps:
in order to fuse an X-ray image and a structured light reconstruction image, an imaging source (namely a ray source) coordinate system of a perspective device and a coordinate system of a structured light reconstruction model must be converted into the same coordinate system, so that a registration calibration method based on a camera model is designed: firstly, a C-shaped arm is equivalent to a pinhole camera model, and as the calibration target needs to be recognized by an X-ray picture and an optical camera picture, a metal round hole calibration target is designed according to the perspective principle of X-rays, as shown in figure 2;
An X-ray picture of the calibration target 1 is taken, a chosen edge corner point of the calibration target 1 is marked as the origin of the world coordinate system, and Zhang's calibration method is used to solve the rotation-translation matrix from the camera coordinate system of the ray-simulated pinhole camera model to the world coordinate system, denoted R1T1; the intrinsic matrix of the simulated camera is denoted Intrinsic1.
A picture of the calibration target 1 is then taken with the structured light's camera, the same edge corner point of the calibration target 1 is marked as the origin of the world coordinate system, and Zhang's calibration method is used to obtain the rotation-translation matrix R2T2 from the structured-light camera coordinate system to the world coordinate system; the intrinsic matrix is denoted Intrinsic2. Next, the rotation-translation matrix R3T3 from the structured-light camera coordinate system to the structured-light reconstruction coordinate system is obtained, giving the rotation-translation matrix from the structured-light reconstruction coordinate system to the world coordinate system as R3T3·R2T2.
Since R3T3 is generally an identity matrix, the translation-rotation matrix from the structured-light reconstruction coordinate system to the camera coordinate system of the simulated pinhole camera model is obtained as RT = [R1T1]^-1·R2T2, completing the calibration of the relative relationship of the two spatial positions.
The internal and external parameters of the X-rays are calibrated with an optical-camera calibration method (the X-ray system is regarded as an optical camera, and the internal and external parameters obtained are actually those of the simulated camera's imaging model). The intrinsic and distortion parameters of the structured-light camera are known from the manufacturer, and its external parameters can be obtained from the intrinsics. Because the calibration algorithm uses Zhang Zhengyou's calibration method and needs to detect feature points, and an ordinary checkerboard camera calibration board cannot be recognized by X-rays, a calibration target 1 that can be recognized simultaneously by the optical camera and by X-rays is needed to calibrate the relative position of the X-rays and the structured light. X-rays are absorbed by metal, so a hollowed-out metal plate can be used to make a calibration target 1 with characteristic points; a checkerboard obviously cannot be machined this way, so the traditional checkerboard corner detection method is abandoned in favor of circle-center mark points, and a calibration target 1 with 19×19 regular through holes is designed, recognizable by both the camera and X-rays.
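A minimal sketch of this style of calibration with OpenCV follows (an assumption-laden illustration, not the patent's code): it uses the 19×19 hole grid described here and the 7 mm center spacing given for the calibration plate later in this document, and it assumes Zhang's method is fed several views, since a single image constrains the pose but not, in general, all intrinsics.

```python
import cv2
import numpy as np

GRID = (19, 19)      # circle centers per row/column, per the target design
PITCH_MM = 7.0       # hole center spacing, per the calibration plate description

# Planar world coordinates of the circle centers (Z = 0 on the target plane).
objp = np.zeros((GRID[0] * GRID[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:GRID[0], 0:GRID[1]].T.reshape(-1, 2) * PITCH_MM

def zhang_calibrate(gray_images):
    """Zhang calibration from circle-center detections. The same routine can
    serve the optical photo and the X-ray image (thresholded so the through
    holes read as blobs)."""
    obj_pts, img_pts, size = [], [], None
    for gray in gray_images:
        ok, centers = cv2.findCirclesGrid(
            gray, GRID, flags=cv2.CALIB_CB_SYMMETRIC_GRID)
        if ok:
            obj_pts.append(objp)
            img_pts.append(centers)
            size = gray.shape[::-1]
    # Returns the intrinsic matrix plus one rotation/translation per view.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, size, None, None)
    return K, dist, rvecs, tvecs
```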
The augmented reality fusion method comprises the following steps:
Using the structured-light reconstruction coordinate system as the spatial origin, the reconstructed point cloud model is rendered in a virtual space with OpenGL, and the calibrated RT and Intrinsic1 are used to create, in the virtual space, a virtual camera simulating C-arm imaging. A virtual projection plane is created according to the OpenGL camera projection model, the X-ray picture is mapped onto the virtual plane with texture mapping, and the virtual camera shoots a picture in which the point cloud model and the X-ray picture are fused. After preprocessing such as region-of-interest division or drilling planning on the fused X-ray picture, Intrinsic2 and R2T2 are used to create a second virtual camera simulating the structured-light camera; the second virtual camera receives the processed information, superimposes it on the real structured-light camera image, and displays the output to guide the doctor in performing the operation.
The principle is as follows: after calibration is finished, as shown in FIG. 3, the transformation matrix R3T3 converts all the spatial coordinate relations into a coordinate system with the optical center of the structured-light camera as origin. The X-ray projection cone beam is reconstructed in the structured-light coordinate system from the external parameters of the X-ray source 2 and the size of the CCD imaging plane, and the camera projection model is reconstructed from the external parameters of the structured light and the size of the camera's CMOS imaging element. Target point clouds and perspective images are collected, the point cloud is rendered and displayed with VTK or another visualization library platform, and virtual cameras simulating the structured-light camera and the X-rays are constructed in the graphics space from the reconstructed projection cone beam and the camera projection parameters, yielding point clouds from different viewing angles. A planar model 3 is created according to the cone-beam correspondence, and the X-ray picture is pasted onto the plane with texture mapping, which gives a simple correspondence between the point cloud and the X-ray picture. Rendering and displaying a color point cloud in the graphics space requires the creation of a color look-up table, as shown in FIG. 4; a scalar is therefore set for each point of the point cloud, and the corresponding RGB value is found from the scalar. Since each point of the monocular structured light's point cloud corresponds one-to-one with a pixel point of the two-dimensional image, a scalar value can be defined for the point cloud and a color look-up table created according to the pixel ordering of the two-dimensional image shot by the structured light, as shown in FIG. 5.
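To make the virtual-camera construction concrete, here is a hedged sketch of configuring a VTK camera from pinhole intrinsics and a camera-to-world pose (assumed conventions, not the patent's code); it ignores the principal-point offset, which a full match would also transfer, e.g. through the camera's window-center setting.

```python
import numpy as np
import vtk

def virtual_camera(K, R_c2w, t_c2w, image_height):
    """Build a vtkCamera mimicking a pinhole camera with intrinsics K whose
    pose (camera frame -> world frame) is given by R_c2w, t_c2w."""
    cam = vtk.vtkCamera()
    pos = np.asarray(t_c2w, dtype=float).ravel()
    forward = R_c2w @ np.array([0.0, 0.0, 1.0])   # optical axis (+Z in camera frame)
    up = R_c2w @ np.array([0.0, -1.0, 0.0])       # image y points down, view-up is -y
    cam.SetPosition(*pos)
    cam.SetFocalPoint(*(pos + forward))
    cam.SetViewUp(*up)
    # Vertical field of view from the focal length in pixels: 2*atan(h / (2*fy)).
    fy = K[1, 1]
    cam.SetViewAngle(float(np.degrees(2.0 * np.arctan(image_height / (2.0 * fy)))))
    return cam
```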
According to this property, the RGB values of the point cloud and of the corresponding pixel points of the picture can be modified according to the scalar number of the point cloud. For example, for a structured-light camera with 1024×1024 resolution, if a point in the cloud has scalar number 150, its corresponding pixel coordinate in the picture is (0, 150), and if the RGB value of that point is modified, the RGB value at pixel coordinate (0, 150) in the video image is modified; if the scalar of a point is 1025, the corresponding pixel coordinate is (1, 1), as shown in FIG. 6;
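In code, this bookkeeping is plain row-major index arithmetic; a tiny sketch reproducing the example's numbers:

```python
def scalar_to_pixel(scalar, width=1024):
    """Row-major mapping from a point-cloud scalar index to (row, col)."""
    return scalar // width, scalar % width

# Reproduces the example: scalar 150 -> pixel (0, 150); scalar 1025 -> (1, 1).
assert scalar_to_pixel(150) == (0, 150)
assert scalar_to_pixel(1025) == (1, 1)
```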
Each point of the point cloud can therefore be traversed: the straight line connecting the point with the ray source is computed, and it is judged whether the line intersects the planar model 3. If there is an intersection, the pixel coordinate (i, j) of the image corresponding to the intersection coordinate is obtained, and the RGB value at pixel coordinate (i, j) is read and assigned to the point cloud. Since each point of the point cloud corresponds one-to-one with a pixel of the two-dimensional image shot by the structured-light camera, the scalar numbers of the points whose RGB values were modified and the corresponding modified RGB values are recorded; the corresponding pixel points can then be found in the two-dimensional image and their RGB values replaced, achieving the augmented reality effect in the video output, as shown in FIG. 7.
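A minimal sketch of this traversal under assumed conventions (the plane is described by an origin, a unit normal, in-plane pixel axes and a pixel size; none of these field names come from the patent):

```python
import numpy as np

def ray_plane_hit(point, source, plane_origin, plane_normal):
    """Intersection of the source->point line with the imaging plane,
    or None if the line is (numerically) parallel to the plane."""
    d = point - source
    denom = float(np.dot(plane_normal, d))
    if abs(denom) < 1e-9:
        return None
    s = float(np.dot(plane_normal, plane_origin - source)) / denom
    return source + s * d

def color_from_xray(point, source, plane, xray_img):
    """RGB of the X-ray pixel hit by the ray through this cloud point,
    or None if the ray misses the textured plane."""
    hit = ray_plane_hit(point, source, plane["origin"], plane["normal"])
    if hit is None:
        return None
    rel = hit - plane["origin"]
    i = int(np.dot(rel, plane["u_axis"]) / plane["pixel_size"])  # column index
    j = int(np.dot(rel, plane["v_axis"]) / plane["pixel_size"])  # row index
    h, w = xray_img.shape[:2]
    if 0 <= i < w and 0 <= j < h:
        return xray_img[j, i]
    return None
```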
The invention also provides an augmented reality orthopaedics perspective navigation system based on the structured light, which comprises a computer workstation, wherein the computer workstation is used for executing the augmented reality orthopaedics perspective navigation method based on the structured light. The specific implementation process is shown in the following embodiments.
Examples
As shown in fig. 8, an augmented reality orthopedic perspective navigation system based on structured light mainly comprises a display 7, a computer workstation, a C-arm, monocular structured light 6, a movable support 4, a calibration target 1, and the like. The movable support 4 can adjust the distance of the monocular structured light 6 from the C-arm and the pitch angle of the structured light 6 projector; the vertical travel ranges from 0.2 m to 1.2 m, and the projection angles of the projector in the structured light 6 can be adjusted over 0-180 degrees and 0-360 degrees respectively.
The specific hardware setting is as follows:
(1) Computer workstation: the main hardware of the computer workstation in the invention is the CPU and the GPU graphics card; the core control computing power of the computer depends mainly on the CPU, while graphics computation and display rendering depend mainly on GPU capability. During the operation, the liver surface needs to be rapidly reconstructed in three dimensions, so the computer must be able to rapidly compute the coordinates of the three-dimensional surface and render a three-dimensional point cloud, placing high demands on the computing capacity of the CPU and the rendering capacity of the GPU graphics card. To meet these requirements, the CPU of the selected computer is an i7-9700K and the graphics card is a GTX 2080 Ti. In order to connect to the binocular structured light 6 device and the interactive display 7, the computer workstation needs at least 1 DP interface, 1 HDMI interface and 3 USB interfaces.
(2) Monocular structured light 6: the three-dimensional reconstruction system mainly comprises 1 projector, 1 camera and a fixed housing. The projector projects fringe gratings and the camera shoots pictures for three-dimensional surface reconstruction; the projector's power is 165 W with a projection resolution of 1024×768, and the camera model is MER-130-30Ux(-L) with a resolution of 2048×1536. The selected projector and camera are relatively inexpensive and small, convenient to integrate into the fixed housing, and meet the basic requirements of three-dimensional surface reconstruction. During the operation, the binocular structured light 6 may need to be adjusted to the area designated by the doctor according to the doctor's standing position and changes of the operating position; to avoid recalibrating the system's parameters many times during the operation, the fixed housing is designed. Once the monocular camera and the projector have been calibrated, their relative position must not change, or they must be calibrated again. The camera and the projector are screwed into threaded holes and connected with the fixed frame by screws; after their relative positions are adjusted they are kept fixed, so that the relative position cannot change during calibration and subsequent moving operations, avoiding repeated calibration.
(3) Sliding frame 5: it ensures that the projector and camera can be adjusted in position as required; it is connected by screws and bolts to the fixed housing of the binocular structured light 6 and to the movable support 4 respectively. It can slide freely within the length allowed by the movable support 4, and by means of the hinge's rotation the structured light can be rotated through 0-180 degrees and 0-360 degrees, providing the projection and three-dimensional reconstruction field of view required by the doctor.
(4) Display 7: a Philips third-generation high-definition display is adopted, which is inexpensive and whose 4K resolution and 2K split screen simultaneously meet the requirements of split-screen use and various connection modes. Four threaded holes are drilled in the back of the display 7 and connected with the movable support 4 by screws, so that the display can be suspended within the doctor's required line of sight.
(5) Movable support 4: it supports the display 7 and guides the sliding frame. The movable support 4 is mainly made of aluminum alloy, and its appearance and size are shown in fig. 8; its main function is to provide a platform integrating the equipment required by the experiment, facilitating scanning, three-dimensional reconstruction and other operations of the structured light 6.
(6) Calibration plate: the calibration plate of the invention is the circular calibration plate shown in fig. 2; its size is 15 cm × 15 cm and the circle-center spacing is 7 mm. Compared with a checkerboard calibration plate, the circular calibration plate has high sub-pixel extraction accuracy and normally achieves better calibration accuracy.
The system carries out the augmented reality navigation method for intramedullary nail fixation in the following steps:
1) the doctor moves the structured light 6 device to the approximate location of the surgical field and fixes the device;
2) the hardware of the structured light 6 scanner is turned on; the software control system of the binocular structured light 6 scanner is opened on the computer, the movable C-arm is started, and the system is then calibrated with the calibration target 1;
3) after calibration is finished, the patient is moved to a position below the C-arm; an X-ray picture is taken after an intramedullary nail is embedded in the femur or humerus, the position of the nail hole is found, the affected part is fixed after alignment with the nail hole, and another X-ray picture is taken;
4) scanning is started with a click: the surgical area is scanned and the three-dimensional curved surface reconstructed in real time, the information fused with the perspective image is displayed on the display 7 in real time, and the perspective information and the planned drilling points can be seen in the video shot by the camera. Under the guidance of the real-time video, the doctor finds the drilling point on the patient's body surface, cuts a 1-2 cm incision with a scalpel, places a drill guide sleeve into the incision to drill, and then places fixing screws into the nail holes and bone to complete the fixation.
The system introduces structured light and can complete augmented reality registration without modifying the perspective equipment. It models the C-arm or other X-ray perspective equipment as a pinhole camera, designs a metal round-hole calibration target 1, and uses the metal calibration target 1 to complete the calibration of the spatial position between the ray source and the structured-light reconstruction coordinate system. The system consists of a display 7, a computer workstation, structured light 6, a sliding support, a movable support 4 and the like, with a simple structure, convenient operation and low cost. It fuses the perspective picture and the point cloud model by creating a virtual camera, and fuses the fusion information with the real camera image through the virtual camera to realize the augmented reality effect.
The invention provides an intraoperative navigation system that gives the orthopedic doctor accurate perspective information on the patient's body surface, laying a foundation for the future development and popularization of augmented reality technology in orthopedic surgical navigation and playing a positive promoting role. The system greatly reduces the intraoperative radiation to doctors and patients and provides more intuitive navigation of the surgical process: the doctor can find the cutting or drilling position on the patient's body surface through the camera, and the accurate positioning avoids the need for open surgery in some orthopedic operations, which benefits the development of minimally invasive orthopedics. The surgical workflow based on this system simplifies, rather than changes, the doctor's existing traditional procedure and is therefore easier for doctors to accept; with its good integration, low price and small volume, many primary hospitals can use it and carry out secondary and tertiary orthopedic operations more conveniently. It is significant for the clinical auxiliary diagnosis of orthopedic diseases, the success rate of interventional operations, clinical anatomy teaching and so on. The hardware facilities required by the structured light 6 based surgical navigation system are integrated into the movable device; at the same time, the doctor's need for all-round movement of the structured-light device during the operation can be met and the device moved to the required position, improving coordination and flexibility.
The above-mentioned embodiments are merely preferred embodiments for fully illustrating the present invention, and the protection scope of the present invention is not limited to them. Equivalent substitutions or changes made by those skilled in the art on the basis of the present invention all fall within the protection scope of the present invention, which is defined by the claims.

Claims (10)

1. An augmented reality orthopaedics perspective navigation method based on structured light is characterized in that: the method comprises the following steps:
S1: converting an imaging source coordinate system of the X-ray perspective equipment and a coordinate system of the structured light reconstruction model into the same coordinate system by using a calibration target;
S2: acquiring a perspective picture of the affected part of a patient shot by the X-ray perspective equipment and a three-dimensional reconstruction point cloud model obtained by scanning the affected part with structured light;
S3: matching and fusing the perspective picture and the three-dimensional reconstruction point cloud model under the same coordinate system to obtain fusion information;
S4: superimposing the fusion information on the structured light camera image and outputting the result for display.
2. The structured light-based augmented reality orthopedic perspective navigation method of claim 1, wherein: the step S1 specifically includes the following steps:
S11: manufacturing a calibration target: a round-hole calibration target based on the perspective principle is adopted, and the X-ray perspective device is treated as equivalent to a pinhole camera model;
S22: placing the calibration target under the X-ray perspective device and taking an X-ray picture, marking a chosen edge corner point of the calibration target as the origin of the world coordinate system, solving with Zhang's calibration method the rotation-translation matrix from the camera coordinate system of the ray-simulated pinhole camera model to the world coordinate system, denoted R1T1, and denoting the intrinsic matrix of the pinhole camera as Intrinsic1;
S23: keeping the calibration target in place, taking an optical picture with the structured light's optical camera, marking the same edge corner point of the calibration target as the origin of the world coordinate system, obtaining with Zhang's calibration method the rotation-translation matrix R2T2 from the structured-light optical camera coordinate system to the world coordinate system, denoting the intrinsic matrix as Intrinsic2, and then obtaining the rotation-translation identity matrix R3T3 from the structured-light camera coordinate system to the structured-light reconstruction coordinate system, so that the rotation-translation matrix from the structured-light reconstruction coordinate system to the world coordinate system is R3T3·R2T2;
S33: obtaining the translation-rotation matrix from the structured-light reconstruction coordinate system to the camera coordinate system of the simulated pinhole camera model as RT = [R1T1]^-1·R2T2, completing the calibration of the relative relationship of the two spatial positions.
3. The structured light-based augmented reality orthopedic perspective navigation method of claim 1, wherein: the step S2 specifically includes the following steps: keeping the positions of the structured light and the X-ray perspective equipment, moving the affected part of the patient to the X-ray perspective equipment to shoot a perspective picture of the affected part of the patient, and scanning the affected part by using the structured light to obtain a three-dimensional reconstruction point cloud model.
4. The structured light-based augmented reality orthopedic perspective navigation method of claim 2, wherein: the step S3 specifically includes the following steps:
S31: using the structured-light reconstruction coordinate system as the spatial origin, rendering the reconstructed point cloud model in a virtual space;
S32: using the calibrated RT and Intrinsic1 to create, in the virtual space, a first virtual camera simulating the imaging of the X-ray fluoroscopy device;
S33: creating a virtual projection plane according to the point cloud model projected by the first virtual camera, mapping the perspective picture of the patient's affected part onto the virtual plane with texture mapping, and shooting with the first virtual camera a picture in which the point cloud model and the perspective picture of the affected part are fused;
S34: carrying out region-of-interest division or drilling-planning preprocessing on the fused picture to obtain the fusion information.
5. The structured light-based augmented reality orthopedic perspective navigation method of claim 4, wherein: the step S4 specifically includes the following steps: using Intrinsic2 and R2T2 to create a second virtual camera simulating the structured-light camera; the second virtual camera receives the processed fusion information, superimposes it on the real structured-light camera image, and outputs the result for display.
6. An augmented reality orthopedic perspective navigation system based on structured light, characterized in that: it comprises a computer workstation for performing the structured light based augmented reality orthopedic perspective navigation method as claimed in any one of claims 1 to 5.
7. The structured-light-based augmented reality orthopedic perspective navigation system of claim 6, wherein: the calibration target is a round hole calibration target which comprises a rectangular bottom plate, and a plurality of round holes with equal intervals are arranged on the rectangular bottom plate.
8. The structured-light-based augmented reality orthopedic perspective navigation system of claim 6, wherein: the structured light comprises a shell, a projector and a camera, wherein the projector and the camera are arranged in the shell, and the relative positions of the projector and the camera are fixed.
9. The structured-light based augmented reality orthopedic perspective navigation system of claim 8, wherein: the system further comprises a movable support, a sliding frame is hinged to the movable support, and the structured light is installed on the sliding frame.
10. The structured-light based augmented reality orthopedic perspective navigation system of claim 9, wherein: the X-ray perspective equipment is a movable C-shaped arm, a display screen is further arranged on the movable support, and the display screen is connected with the computer workstation.
CN202111448706.0A 2021-11-30 2021-11-30 Augmented reality orthopedic perspective navigation method and system based on structured light Pending CN114041876A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111448706.0A CN114041876A (en) 2021-11-30 2021-11-30 Augmented reality orthopedic perspective navigation method and system based on structured light

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111448706.0A CN114041876A (en) 2021-11-30 2021-11-30 Augmented reality orthopedic perspective navigation method and system based on structured light

Publications (1)

Publication Number Publication Date
CN114041876A (en) 2022-02-15

Family

ID=80212089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111448706.0A Pending CN114041876A (en) 2021-11-30 2021-11-30 Augmented reality orthopedic perspective navigation method and system based on structured light

Country Status (1)

Country Link
CN (1) CN114041876A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114399503A (en) * 2022-03-24 2022-04-26 武汉大学 Medical image processing method, device, terminal and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102784003A (en) * 2012-07-20 2012-11-21 北京先临华宁医疗科技有限公司 Pediculus arcus vertebrae internal fixation operation navigation system based on structured light scanning
US20180153626A1 * 2010-04-28 2018-06-07 Ryerson University System and methods for intraoperative guidance feedback
US20180303558A1 (en) * 2016-08-17 2018-10-25 Monroe Milas Thomas Methods and systems for registration of virtual space with real space in an augmented reality system
CN109464196A (en) * 2019-01-07 2019-03-15 北京和华瑞博科技有限公司 Using the operation guiding system and registration signal acquisition method of structure light Image registration
CN110141363A (en) * 2019-06-17 2019-08-20 苏州大学 A kind of backbone multistage method for registering based on structure light scan
WO2021069449A1 (en) * 2019-10-06 2021-04-15 Universität Bern System and method for computation of coordinate system transformations

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180153626A1 * 2010-04-28 2018-06-07 Ryerson University System and methods for intraoperative guidance feedback
CN102784003A (en) * 2012-07-20 2012-11-21 北京先临华宁医疗科技有限公司 Pediculus arcus vertebrae internal fixation operation navigation system based on structured light scanning
US20180303558A1 (en) * 2016-08-17 2018-10-25 Monroe Milas Thomas Methods and systems for registration of virtual space with real space in an augmented reality system
CN109464196A (en) * 2019-01-07 2019-03-15 北京和华瑞博科技有限公司 Using the operation guiding system and registration signal acquisition method of structure light Image registration
CN110141363A (en) * 2019-06-17 2019-08-20 苏州大学 A kind of backbone multistage method for registering based on structure light scan
WO2021069449A1 (en) * 2019-10-06 2021-04-15 Universität Bern System and method for computation of coordinate system transformations

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIN ZHANG, LONG CHEN, FENGFENG ZHANG, LINING SUN: "Research on the Accuracy and Speed of Three-Dimensional Reconstruction of Liver Surface Based on Binocular Structured Light", Digital Object Identifier *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114399503A (en) * 2022-03-24 2022-04-26 武汉大学 Medical image processing method, device, terminal and storage medium
CN114399503B (en) * 2022-03-24 2022-07-01 武汉大学 Medical image processing method, device, terminal and storage medium

Similar Documents

Publication Publication Date Title
Gavaghan et al. A portable image overlay projection device for computer-aided open liver surgery
Chen et al. Development of a surgical navigation system based on augmented reality using an optical see-through head-mounted display
CN106687046B (en) Guidance system for positioning a patient for medical imaging
Navab et al. Merging visible and invisible: Two camera-augmented mobile C-arm (CAMC) applications
Navab et al. Camera augmented mobile C-arm (CAMC): calibration, accuracy study, and clinical applications
US20190000564A1 (en) System and method for medical imaging
Blackwell et al. An image overlay system for medical data visualization
WO2021217713A1 (en) Surgical navigation system, computer for performing surgical navigation method, and storage medium
Chu et al. Registration and fusion quantification of augmented reality based nasal endoscopic surgery
Blackwell et al. Augmented reality and its future in orthopaedics.
CN102784003B (en) Pediculus arcus vertebrae internal fixation operation navigation system based on structured light scanning
CN106691600A (en) Spine pedicle screw implanting and locating device
CN201029876Y (en) Navigation system for bone surgery
CN103211655A (en) Navigation system and navigation method of orthopedic operation
EP2438880A1 (en) Image projection system for projecting image on the surface of an object
CN100581447C (en) Orthopaedics operation navigation system
CN103519895A (en) Orthopedic operation auxiliary guide method
WO2002080773A1 (en) Augmentet reality apparatus and ct method
Blackwell et al. An image overlay system for medical data visualization
Zeng et al. A surgical robot with augmented reality visualization for stereoelectroencephalography electrode implantation
Mewes et al. Concepts for augmented reality visualisation to support needle guidance inside the MRI
Fotouhi et al. Development and pre-clinical analysis of spatiotemporal-aware augmented reality in orthopedic interventions
Ma et al. Visualization, registration and tracking techniques for augmented reality guided surgery: a review
Liao et al. Precision-guided surgical navigation system using laser guidance and 3D autostereoscopic image overlay
CN114041876A (en) Augmented reality orthopedic perspective navigation method and system based on structured light

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220215