CN110264504B - Three-dimensional registration method and system for augmented reality - Google Patents


Publication number
CN110264504B
Authority
CN
China
Prior art keywords
dimensional model
vertebra
coordinate system
spine
reference part
Prior art date
Legal status
Active
Application number
CN201910572830.4A
Other languages
Chinese (zh)
Other versions
CN110264504A (en)
Inventor
丛曰声
Current Assignee
Beijing Guorun Health Medical Investment Co ltd
Original Assignee
Beijing Guorun Health Medical Investment Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Guorun Health Medical Investment Co ltd filed Critical Beijing Guorun Health Medical Investment Co ltd
Priority to CN201910572830.4A priority Critical patent/CN110264504B/en
Publication of CN110264504A publication Critical patent/CN110264504A/en
Application granted granted Critical
Publication of CN110264504B publication Critical patent/CN110264504B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Image registration using feature-based methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30008 Bone
    • G06T 2207/30012 Spine; Backbone
    • G06T 2207/30088 Skin; Dermal

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides a three-dimensional registration method for augmented reality, which comprises the following steps: S1, acquiring CT image data of spine samples, creating three-dimensional models of the spine outer surface, and calculating a statistical shape model of the spine outer surface; S2, placing markers on the reference part, acquiring CT image data of the reference part, creating a three-dimensional model of the vertebral outer surface of the reference part, and calculating repair data for the reference part vertebrae according to the statistical shape model of the vertebral outer surface. Using the invention, with self-made markers and the NDI POLARIS high-precision optical positioning system, virtual repair of the bone data of the affected part can be realized, together with accurate three-dimensional registration between the patient's real coordinates (actual coordinate system) and the coordinates of the three-dimensional model reconstructed from CT images (virtual coordinate system) in bone and vertebra augmented reality surgery.

Description

Three-dimensional registration method and system for augmented reality
Technical Field
The invention belongs to the fields of digital medicine, precision medicine and computer graphics, and particularly relates to a three-dimensional registration method and system for augmented reality.
Background
Minimally invasive surgery is at the leading edge of surgical research, and how to make minimally invasive surgery "accurate" is a problem that urgently needs to be solved. At present, orthopedic minimally invasive surgery still depends on two-dimensional X-ray fluoroscopy guidance, which has limitations such as a long learning curve, inaccurate positioning of body-surface puncture points, prolonged operation time, damage to important structures caused by puncture errors, and the latent radiation that accompanies excessive X-ray fluoroscopy. Although navigation and robot-assisted surgery techniques have been applied clinically, the equipment is expensive and complex to operate, which hinders widespread adoption. Augmented reality, a technology that superimposes digital virtual information onto a real scene, offers an effective approach for orthopedic minimally invasive surgery. How to achieve accurate registration of the virtual and real three-dimensional data of the patient's affected part is one of the technical difficulties in the clinical application of orthopedic augmented reality surgery. The technology can be applied clinically to spinal endoscopic treatment of lumbar disc herniation, accurate minimally invasive interventional treatment of femoral head necrosis, accurate minimally invasive interventional treatment of hallux valgus, and the like.
In recent years, with the rapid development of medical acquisition equipment such as computed tomography and magnetic resonance imaging, together with technologies such as digital image processing, virtual reality and augmented reality, applying augmented reality to minimally invasive surgery has become a development trend, and a large amount of research has been carried out by scholars at home and abroad. Yuichiro Abe et al. in Japan constructed an augmented reality protractor system for guiding percutaneous vertebroplasty. Elmi-Terander et al. at the Karolinska medical university in Sweden used an augmented reality intraoperative navigation system provided by Philips to guide the placement of thoracic pedicle screws. Jan Fritz et al. at the department of radiology of the Johns Hopkins school of medicine in the USA developed a 2D augmented reality system based on a 1.5 T MRI device for spinal interventional therapy and joint imaging of the shoulder and hip.
Accurate registration of the virtual and real three-dimensional data of the patient's affected part is the basis of orthopedic augmented reality surgery; the aim is to accurately place the virtual three-dimensional data of the affected part in the real world so as to assist the doctor in diagnosis and treatment. At present, registration technology based on multi-source, multi-modal medical images has become the mainstream of research. The registration of virtual three-dimensional data mainly comprises two classes of methods, based on electromagnetic tracking and on computer vision:
(1) An electromagnetic tracker generally consists of a transmitter, a receiver and a calculation module, and calculates relative position from the coupling relation between the transmitted and induced signals. Komagacho et al. propose a method based on a fuzzy-system BP neural network and a least-squares support vector machine, realizing accurate virtual-real registration based on an electromagnetic tracker. Zhang et al. realize position tracking of surgical tools through an electromagnetic tracking system to guide doctors in interventional minimally invasive surgery. Although electromagnetic trackers are calibrated according to electromagnetic principles, in actual use the environment, working distance, magnetic field interference and the like, together with the drift and accumulated errors of the gyroscope and accelerometer inside the inertial sensor, cause errors between the observed and actual values of the six degrees of freedom, which cannot meet the requirements of precise medical surgery.
(2) Vision-based registration methods mainly use marker points or a reference object to calculate the position and posture of a camera or depth sensor so as to track the camera or surgical instrument, and have advantages such as high speed and high precision. Shelten et al. use an Optotrak 3020 optical tracking device to calculate the position and posture parameters of a surgical instrument, enabling positioning, tracking and registration of medical images with the surgical instrument. However, this method is susceptible to occlusion and to mismatching under high-speed motion. Other researchers propose fusing optical and inertial tracking information, which can obtain accurate position and attitude information under partial occlusion and accurately feed back virtual-real fused surgical navigation images to the user. However, because a head-mounted optical-inertial hybrid tracking system is relatively complex, virtual-real registration requires a large amount of computation and is easily affected by computational errors.
NDI mainly provides three-dimensional measurement technology and services for the medical, industrial and scientific fields. The NDI Polaris is an optical positioning and tracking system developed by NDI, with advantages such as high speed and high precision; however, how to complete the registration and fusion of virtual and real models on the NDI Polaris, achieving registration among the local coordinate system of the patient's body, the spatial coordinate system of the real surgical environment and the inertial-sensor coordinate system so as to realize real-time tracking and display during surgery, is a problem that urgently needs to be solved.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a three-dimensional registration method for the virtual and real coordinate systems in spinal augmented reality surgery using the NDI POLARIS high-precision optical positioning system, so as to realize virtual repair of the bone data of the affected part and three-dimensional registration of the virtual and real coordinate systems between the patient's real coordinates (actual coordinate system) and the coordinates of the three-dimensional model reconstructed from CT images (virtual coordinate system) in spinal augmented reality surgery.
In order to achieve the above object, the present invention is achieved by the following technical solutions.
A method of three-dimensional registration of virtual and real coordinate systems, the method comprising:
s1, acquiring CT image data of a spine sample, creating a three-dimensional model of the outer surface of the spine, and calculating to obtain a statistical shape model of the outer surface of the spine;
s2, placing a marker on the reference part, obtaining CT image data of the reference part, and creating a three-dimensional model of the external surface of the vertebra of the reference part; and calculating and obtaining the repair data of the reference part vertebra according to the statistical shape model of the external surface of the vertebra.
Preferably, the step S1 includes:
s11, calculating a threshold value by adopting an Otsu method based on a genetic algorithm, realizing the segmentation of the vertebra and the skin, and creating first three-dimensional models of the inner surface and the outer surface of the vertebra sample by adopting a Marching Cubes algorithm based on the segmentation;
s12, calculating the gravity center of the first three-dimensional model, establishing a covariance matrix formed by point clouds of the first three-dimensional model, calculating three orthogonal principal components of the covariance matrix by using a characteristic decomposition algorithm, establishing a first local coordinate system, wherein the origin of coordinates of the first local coordinate system is the same as the origin of coordinates of the gravity center, the coordinate axes of the first local coordinate system respectively correspond to the three principal components, the Z axis in the first local coordinate system is the direction of the first three-dimensional model with the maximum change of point cloud data distribution, the X axis is the direction of the first three-dimensional model with the minimum change of point cloud data distribution, and the first local coordinate system accords with the right-hand rule;
s13, geometrically transforming the first three-dimensional model of each sample in the spine samples into the first local coordinate system by adopting a geometric transformation method to form a second three-dimensional model;
s14, performing equidistant equal-angle cylindrical sampling along the Z axis on the second three-dimensional model, and calculating three-dimensional point cloud on the outer surface of the second three-dimensional model vertebra by adopting a ray and second three-dimensional model intersection algorithm;
s15, calibrating characteristic points on the external surface of the second three-dimensional model vertebra, wherein the characteristic points are defined at the bending part of the vertebra;
s16, according to the characteristic points, carrying out non-rigid registration between the second three-dimensional models of different spine samples; taking the closest point of the two registered second three-dimensional models as a corresponding point, and establishing dense corresponding point cloud of the external surface of the second three-dimensional model spine;
and S17, applying a principal component analysis method to the dense corresponding point clouds to calculate an average model of the spine outer-surface three-dimensional model, principal components and principal component coefficients, thereby establishing a statistical shape model of the spine outer surface, wherein the number of point clouds of the average model in the statistical shape model is l.
Preferably, the step S15 of calibrating the feature points on the external surface of the second three-dimensional model vertebra includes:
picking up a two-dimensional pixel on the second three-dimensional model and calculating the coordinates of the pixel through an inverse projection transformation;
defining the intersection points of the pixel with the near clipping plane and the far clipping plane as A and B respectively, and calculating the intersection point of the ray from A to B with the second three-dimensional model by adopting a ray/three-dimensional-model intersection algorithm, the intersection point being a feature point. Preferably, the second three-dimensional model is represented by an octree structure, which reduces the number of triangular plates tested in the ray/model intersection process and increases the calculation speed.
Preferably, the step S11 includes:
step 1.1.1: aiming at CT image data, calculating a threshold value by adopting an Otsu method based on a genetic algorithm, and segmenting image data of vertebra and skin;
step 1.1.2: for the segmented CT image data, sequentially extracting corresponding pixels from the CT image data of two adjacent layers as voxels;
step 1.1.3: judging whether the color values of two end points of an edge in each voxel are the same or not so as to determine whether an intersection point exists between the edge and an isosurface in the current voxel or not;
step 1.1.4: calculating the intersection points of the edges of the voxel and the isosurface by a linear interpolation method, and constructing a triangular plate by the intersection points on the three edges of the voxel obtained by calculation; detecting each triangular plate, triangulating the triangular plates when obtuse angles exist in the triangular plates, namely selecting the middle points of the sides corresponding to the obtuse angles as new intersection points, connecting the vertexes corresponding to the obtuse angles with the new intersection points, and regenerating two new triangular plates;
step 1.1.5: solving a normal vector at the vertex of a voxel edge by using a central difference method, and then solving a normal vector at the vertex of a triangular patch by using a linear interpolation method; thereby obtaining a first three-dimensional model of the inner and outer surfaces of the spine.
Preferably, the step S14 includes:
step 1.4.1: the ray is expressed parametrically by the equation r(k) = o' + k·d, where o' is the starting point, d is the vector corresponding to the equal-angle sampling direction, and k is a parameter;
step 1.4.2: expressing the second three-dimensional model with an octree structure, and expressing any point in each triangular plate in the barycentric coordinate system as T(u, v) = (1 - u - v)·v0 + u·v1 + v·v2, with u ≥ 0, v ≥ 0 and u + v ≤ 1, where v0, v1 and v2 are the three vertices of the triangular plate and (u, v) are the coefficients corresponding to the vertices v1 and v2 in the barycentric coordinate system;
step 1.4.3: calculating k, u and v by simultaneously solving a ray equation and a parameter equation of a triangular plate, and taking an intersection point with the maximum k value as a vertex of the outer surface of the vertebral model; preferably, in the solving process, whether the ray direction is consistent with the direction of the normal of the triangular plate or not is judged through the octree structure so as to reduce the number of the triangular plates.
Preferably, the step S2 is:
s21, placing a marker on the skin of the reference part to obtain CT image data of the reference part;
s22, creating a first three-dimensional model of the spine, the markers and the skin of the reference part according to the CT image data of the reference part, wherein the coordinate system of the first three-dimensional model is a global coordinate system; calculating a local first coordinate system of the first three-dimensional model of the reference part vertebra by using a principal component analysis method, and transforming the first three-dimensional model of the reference part vertebra into the local first coordinate system by using geometric transformation to form a second three-dimensional model of the reference part vertebra;
s23, calibrating feature points on the second three-dimensional model of the reference part vertebra, preferably not less than three feature points in each vertebral segment;
s24, rigid registration is carried out on the second three-dimensional model of the vertebra of the reference position to the average model of the statistical shape model of the vertebra by adopting a closest point iterative algorithm, and a rotation transformation R1 and a translation transformation t1 are obtained through calculation; performing non-rigid registration on the second three-dimensional model of the reference part vertebra to the average model of the vertebra statistical shape model by adopting a non-rigid closest point iterative algorithm; searching a closest point in the registration results of the two models so as to obtain a corresponding point cloud between the second three-dimensional model of the reference part spine and the average model, preferably, improving the speed of searching the closest point by adopting a method of searching the adjacent point on a K-d tree structure, and reducing the backtracking times through a priority queue;
s25, calculating the coefficient QBonecoff of the reference part vertebra three-dimensional model in the vertebra statistical shape model space according to the statistical shape model of the vertebral outer surface, by minimizing E(QBonecoff) = ||BoneVector·QBonecoff - (QBone - MeanBone)||^2 + λ||QBonecoff||^2, wherein BoneVector represents the principal components of the statistical shape model, MeanBone represents the average model of the statistical shape model, QBone represents the second three-dimensional model point cloud of the reference part vertebra in correspondence with MeanBone, and λ represents a regularization coefficient;
s26: calculating first spine repair data of the reference part by using the coefficient QBonecoff, denoted QBoneRecover, with the calculation formula QBoneRecover = MeanBone + BoneVector·QBonecoff, where QBoneRecover = {p0, p1, ..., pl} is the point cloud of the average model in the statistical shape model and l represents the number of points;
s27: transforming the first vertebra repair data QBoneRecover of the reference part into the global coordinate system by using the rotation transformation R1 and translation transformation t1 to generate second vertebra repair data newQBoneRecover, with the calculation formula newQBoneRecover = QBoneRecover·inv(R1) - t1, where inv(R1) represents the inverse matrix of R1;
preferably, the method further comprises:
step S3, acquiring actual coordinate data of a reference part by using a positioning system, performing virtual-real registration with the global coordinate system, and displaying volume rendering data of the reference part, second spine repair data and reference part data obtained by the positioning system in the global coordinate system; wherein the step S3 includes:
s31, measuring the point cloud coordinates of the markers on the reference part by using a positioning system, recorded as RefLandmark = {rlandmark_i, i = 1, 2, ..., d}, where d is the number of feature points measured on the marker surfaces and rlandmark_i = (x_i, y_i, z_i) are the three-dimensional coordinates of the i-th point; preferably, the number of feature points is not less than 10;
s32, calibrating d feature points on the three-dimensional model of the marker of the reference part by the method of step S15, their geometric coordinates being recorded as TarLandmark = {tlandmark_i, i = 1, 2, ..., d}, where tlandmark_i = (x_i, y_i, z_i) are the three-dimensional coordinates of the i-th feature point; preferably, the positions of these feature points coincide with the positions of the feature points in S31;
s33, defining, according to the coordinates RefLandmark and TarLandmark, a distance function for the geometric transformation from the coordinate system of the positioning system to the global coordinate system: E(R, t) = Σ_i ||R·rlandmark_i + t - tlandmark_i||^2 (i = 1, ..., m), wherein R and t are the rotation and translation transformations respectively and m represents the number of feature points; R and t are solved by minimizing the distance function E; preferably, a random sampling consensus algorithm and a singular value decomposition algorithm are adopted for the optimization, and the RefLandmark and TarLandmark points selected when E reaches its minimum are taken as corresponding feature points to calculate the rotation transformation R and translation transformation t;
s34, the positioning system measures data of the reference part in the actual coordinate system, expressed as NDI_Cor = (x_i, y_i, z_i); its coordinates in the global coordinate system are NDI_Cor' = R·NDI_Cor_i + t;
S35, converting the second spine repair data, the volume rendering data of the reference part and the data collected by the positioning system into the global coordinate system and displaying them; the volume rendering data of the reference part is the volume rendering result formed by a volume rendering algorithm and is displayed in the global coordinate system.
A three-dimensional registration system for the virtual and real coordinate systems comprises a statistical shape model generation module for the vertebral outer surface, a repair module, an NDI POLARIS high-precision optical positioning system and a registration module, wherein:
the statistical shape model generation module of the external surface of the vertebra is used for generating a statistical shape model of the external surface of the vertebra, which comprises an average model of the external surface of the vertebra, a principal component and a principal component coefficient, by utilizing CT image data of a vertebra sample;
the repairing module is used for establishing a global coordinate system, generating a first spine three-dimensional model and a second spine three-dimensional model of a reference part according to CT image data of the reference part, obtaining first spine repairing data after fitting calculation with the statistical shape model of the outer surface of the spine, and forming second spine repairing data after coordinate system conversion;
and the registration module is used for registering the data acquired by the positioning system with the global coordinate system and displaying the acquired data, the datum part data and the repair data in the global coordinate system through a display device.
Preferably, the statistical shape model generation module of the external surface of the spine includes a segmentation unit, a first three-dimensional model generation unit, a first local coordinate system generation unit, a second three-dimensional model generation unit, a registration unit, and a statistical shape model generation unit, wherein,
the segmentation unit is used for segmenting the spine and the skin in the CT image data of the spine sample;
the first three-dimensional model generating unit is used for generating a first three-dimensional model of the spine in the spine sample;
the first local coordinate system generation unit is used for generating a first local coordinate system;
the second three-dimensional model generating unit is used for transforming the first three-dimensional model of the spine sample into the first local coordinate system to form a second three-dimensional model;
the registration unit is used for establishing dense corresponding point clouds between the second three-dimensional models of the vertebras;
and the statistical shape model generating unit is used for generating a statistical shape model of the external surface of the vertebra, which comprises an average model, principal components and principal component coefficients, according to the dense corresponding point cloud.
Preferably, the repair module includes a reference portion modeling unit, a global coordinate system generation unit, and a repair data generation unit, wherein:
the reference part modeling unit is used for creating a first three-dimensional model of the vertebra of the reference part for the CT image data of the reference part and generating a second three-dimensional model of the vertebra;
the global coordinate system generating unit is used for taking a coordinate system where the first three-dimensional model of the reference part is located as a global coordinate system;
and the repair data generation unit is used for generating repair data of the spine in the reference part.
Preferably, the registration module comprises a marker calibration unit, a coordinate transformation unit and a display unit, wherein:
the marker calibration unit is used for obtaining point cloud coordinates of marker data placed on the reference part transmitted by the positioning module and characteristic point coordinates calibrated on a three-dimensional model of the marker of the reference part generated in the reference part modeling unit;
the coordinate transformation unit is used for calculating rotation and translation transformation according to the point cloud coordinates and the feature point coordinates, and performing three-dimensional registration on the repair data of the spine in the reference part and the data acquired by the positioning module aiming at the reference part under the real environment coordinates to a global coordinate system;
and the display unit is used for displaying the registered data.
The invention has the beneficial effects that:
(1) based on CT image data of the spine, adopting a segmented Marching Cubes-based three-dimensional reconstruction technology to realize the reconstruction of a spine outer surface three-dimensional model, and adopting a non-rigid registration and principal component analysis algorithm to establish a statistical shape model of the spine outer surface so as to be used for comparing with diseased spine data to generate virtual repair data;
(2) the self-made tower-tip-shaped circular marker is adhered to the outer surface of a patient part to serve as a marker point, and compared with a plastic spherical marker provided by NDI, the self-made marker has the advantages of small volume, low cost, easiness in long-term adhesion, easiness in identification in a registration process and the like;
(3) marking the geometric coordinates of the markers of the patient part by using an NDI POLARIS high-precision optical positioning system, and calculating the geometric coordinates of the markers on the surface of the patient by using a quick intersection algorithm of rays and a triangular plate; aiming at the obtained geometric coordinates of the markers, rotation and translation transformation are calculated by using a singular value decomposition method, and registration of a virtual coordinate system and a real coordinate system is realized; and finally, displaying three-dimensional data of patient affected part volume rendering, bones, bone restoration results, skin and the like on a screen.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow diagram of a method according to one embodiment of the invention;
FIG. 2 is a flow diagram of a method of creating a statistical shape model of an outer surface of a vertebra according to one embodiment of the present invention;
FIG. 3 is a flow diagram of a method of obtaining virtual repair data according to one embodiment of the invention;
FIG. 4 is a flow diagram of a virtual-real three-dimensional coordinate system registration method according to one embodiment of the invention;
FIG. 5 is a schematic view of a marker and its application to a patient's affected area in accordance with one embodiment of the present invention;
fig. 6 is a schematic structural diagram of a registration system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, a technical solution in an embodiment of the present invention will be described in detail and completely with reference to the accompanying drawings in the embodiment of the present invention, and it is obvious that the described embodiment is a part of embodiments of the present invention, but not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
As shown in fig. 1, the three-dimensional registration method of the virtual and real coordinate system provided by the present invention includes the following steps:
s1, acquiring CT image data of a spine sample, creating a three-dimensional model of the outer surface of the spine, and calculating to obtain a statistical shape model of the outer surface of the spine;
s2, placing a marker on the reference part, obtaining CT image data of the reference part, and creating a three-dimensional model of the external surface of the vertebra of the reference part; and calculating and obtaining the repair data of the reference part vertebra according to the statistical shape model of the external surface of the vertebra.
In step S1, the spine three-dimensional model is reconstructed with a segmentation-based Marching Cubes algorithm: the spine and skin are accurately segmented using the Otsu method based on a genetic algorithm, and the color values of the two end-point pixels of each voxel edge are compared directly, which reduces the time for determining the topological structure in the Marching Cubes algorithm and improves the three-dimensional reconstruction speed. For the spine three-dimensional model, a principal component analysis method is used for geometric correction of the coordinates, and an equidistant, equal-angle cylindrical sampling method is then used to compute the point cloud of the spine outer surface. During this computation, the speed of the ray/three-dimensional-model intersection algorithm is improved through the octree structure and by judging the consistency of the normals. According to the feature points calibrated on the spine outer-surface three-dimensional model, non-rigid registration of the spine outer-surface models is achieved with a closest point iterative algorithm and a thin plate spline function; the closest points between two registered three-dimensional models are then taken as corresponding points to establish a dense correspondence between the vertices of the spine outer-surface three-dimensional models. Finally, a principal component analysis method is used to calculate the average model, principal components and principal component coefficients of the spine outer-surface three-dimensional models, establishing a statistical shape model of the spine outer surface, which is used for virtual repair of the affected part.
As shown in fig. 2, step S1 specifically includes the following steps:
step 1.1: acquiring CT image data of a spine sample, and performing three-dimensional modeling on the inner and outer surfaces of the spine in all the samples by adopting a Marching Cubes algorithm based on segmentation; in the algorithm, whether the two end point color values of each edge in the voxel formed by the segmentation result are the same or not is directly compared, so that whether the edge is intersected with the isosurface or not is judged, the extraction of the contour line in the Marching cube algorithm is avoided, and the three-dimensional reconstruction speed is accelerated.
Step 1.1, the concrete steps are as follows:
step 1.1.1: aiming at CT image data of a spine sample, calculating a threshold value by adopting an Otsu method based on a genetic algorithm to realize image segmentation of the spine and the skin;
step 1.1.2: for the segmented CT image data, sequentially taking out corresponding 8 pixels from the adjacent two layers of CT images as voxels;
step 1.1.3: for each voxel, judging whether the color values of two end points of each edge in the voxel are the same or not, and determining whether an intersection point exists between the edge and an isosurface in the current voxel or not;
step 1.1.4: calculating the intersection points of the edges of the voxel and the isosurface by a linear interpolation method, and constructing a triangular plate by the intersection points on the three edges of the voxel obtained by calculation; detecting each triangular plate, triangulating the triangular plates when obtuse angles exist in the triangular plates, namely selecting the middle points of the sides corresponding to the obtuse angles as new intersection points, connecting the vertexes corresponding to the obtuse angles with the new intersection points, and regenerating two new triangular plates;
step 1.1.5: solving a normal vector at the vertex of a voxel edge by using a central difference method, and then solving a normal vector at the vertex of a triangular plate by using a linear interpolation method; thereby obtaining a first three-dimensional model of the inner and outer surfaces of the spine.
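The following minimal sketch (an illustration only, not part of the original disclosure) shows how a segmentation threshold and an iso-surface mesh of the kind described in steps 1.1.1-1.1.5 could be produced for one CT volume; numpy and scikit-image are assumed dependencies, the genetic-algorithm search for the Otsu threshold is replaced here by an exhaustive search over the histogram, and the obtuse-triangle subdivision is omitted.

```python
import numpy as np
from skimage.measure import marching_cubes  # assumed dependency, not named in the patent

def otsu_threshold(volume, bins=256):
    """Exhaustive Otsu threshold: maximize the between-class variance.
    The patent searches this criterion with a genetic algorithm instead."""
    hist, edges = np.histogram(volume.ravel(), bins=bins)
    hist = hist.astype(np.float64) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0
        m1 = (hist[i:] * centers[i:]).sum() / w1
        between_var = w0 * w1 * (m0 - m1) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, centers[i]
    return best_t

def reconstruct_surface(ct_volume, spacing=(1.0, 1.0, 1.0)):
    """Segment the volume at the Otsu threshold and extract a triangle mesh."""
    t = otsu_threshold(ct_volume)
    mask = (ct_volume >= t).astype(np.uint8)
    verts, faces, normals, _ = marching_cubes(mask, level=0.5, spacing=spacing)
    return verts, faces, normals

if __name__ == "__main__":
    vol = np.random.rand(32, 32, 32)          # stand-in for a stack of CT slices
    verts, faces, _ = reconstruct_surface(vol)
    print(verts.shape, faces.shape)
```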
Step 1.2: calculating the barycentric coordinates of the first three-dimensional model of the vertebrae, recorded as o; then establishing the covariance matrix formed by the three-dimensional model point cloud, calculating the three orthogonal principal components of the covariance matrix with an eigendecomposition algorithm, and establishing a first local coordinate system whose coordinate origin is o and whose coordinate axes correspond to the three principal components respectively, wherein the Z axis of the first local coordinate system is the direction of maximum variation of the three-dimensional model point cloud data, the X axis is the direction of minimum variation, and the first local coordinate system conforms to the right-hand rule;
step 1.3: transforming the first three-dimensional models of all the vertebral samples into the first local coordinate system established in the step 1.2 by using geometric transformation to form a second three-dimensional model;
step 1.4: performing equidistant equal-angle cylindrical sampling along the Z axis on a second three-dimensional model of the spine after geometric transformation, wherein the coordinate of the starting point is o',
Figure BDA0002111322670000091
is the sampling direction. Calculating the three-dimensional point cloud of the outer surface of the second three-dimensional model vertebra by adopting a ray and second three-dimensional model intersection algorithm;
step 1.4, the concrete steps are as follows:
step 1.4.1: the ray is expressed parametrically by the equation r(k) = o' + k·d, where o' is the starting point, d is the vector corresponding to the equal-angle sampling direction, and k is a parameter;
step 1.4.2: expressing the second three-dimensional model with an octree structure, and expressing any point in each triangular plate in the barycentric coordinate system as T(u, v) = (1 - u - v)·v0 + u·v1 + v·v2, with u ≥ 0, v ≥ 0 and u + v ≤ 1, where v0, v1 and v2 are the three vertices of the triangular plate and (u, v) are the coefficients corresponding to the vertices v1 and v2; the barycentric coordinate system here means that the coordinates of the three vertices are used to represent the coordinates of any point in the triangle;
step 1.4.3: calculating k, u and v by simultaneously solving the ray equation and the parametric equation of the triangular plate, and taking the intersection point with the largest k value as a vertex of the outer surface of the vertebral model; during the solution, the octree structure and the consistency between the ray direction and the normal of the triangular plate are used to reduce the number of triangular plates tested.
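A minimal sketch of the ray/triangular-plate intersection used in Step 1.4, solving the ray equation together with the barycentric parameter equation for k, u and v (Möller-Trumbore formulation); the octree acceleration and the normal-consistency test are omitted here.

```python
import numpy as np

def ray_triangle(o, d, v0, v1, v2, eps=1e-9):
    """Solve o + k*d = (1-u-v)*v0 + u*v1 + v*v2 for (k, u, v); return None when there is no hit."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                         # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = o - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(d, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    k = np.dot(e2, q) * inv_det
    return (k, u, v) if k > 0.0 else None

def outer_surface_point(o, d, triangles):
    """Among all hits along one sampling ray, keep the intersection with the largest k,
    which is taken as a vertex of the spine outer surface."""
    best = None
    for v0, v1, v2 in triangles:
        hit = ray_triangle(o, d, v0, v1, v2)
        if hit is not None and (best is None or hit[0] > best[0]):
            best = hit
    return None if best is None else o + best[0] * d

if __name__ == "__main__":
    tri = (np.array([0.0, -1.0, -1.0]), np.array([0.0, 1.0, -1.0]), np.array([0.0, 0.0, 1.0]))
    print(outer_surface_point(np.array([-2.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), [tri]))
```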
Step 1.5: calibrating feature points on the outer surface of the second three-dimensional model vertebrae, the feature points being defined at the curved parts of the vertebrae; specifically, feature points are defined at the curves of the second three-dimensional model surface of each vertebra in combination with physiological spinal features, and are then marked on the spine outer surface of the second three-dimensional model. The calibration method is as follows: a two-dimensional pixel on the three-dimensional model is picked up from the screen, the geometric coordinates corresponding to the picked-up pixel are calculated through an inverse projection transformation, the intersection points of the pixel with the near clipping plane and the far clipping plane are defined as A and B respectively, and the intersection of the ray from A to B with the spine in the second three-dimensional model is calculated with the ray/three-dimensional-model intersection algorithm of step 1.4; this intersection point is marked as a feature point. The feature points will be used for non-rigid registration between vertebral models;
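A minimal sketch of the feature-point calibration of Step 1.5: the picked screen pixel is back-projected onto the near and far clipping planes to obtain A and B, and the ray from A towards B is then intersected with the model (for example with the intersection routine sketched under Step 1.4). The 4 x 4 model-view-projection matrix and the viewport are assumptions of this sketch.

```python
import numpy as np

def unproject(px, py, depth, mvp, viewport):
    """Inverse projection transform of a screen pixel at a given clip depth (-1 = near, +1 = far)."""
    x0, y0, w, h = viewport
    ndc = np.array([2.0 * (px - x0) / w - 1.0,
                    2.0 * (py - y0) / h - 1.0,
                    depth,
                    1.0])
    world = np.linalg.inv(mvp) @ ndc
    return world[:3] / world[3]

def pick_feature_point(px, py, mvp, viewport, triangles, intersect):
    """A = pixel on the near clipping plane, B = pixel on the far clipping plane;
    the feature point is the intersection of the ray AB with the model."""
    A = unproject(px, py, -1.0, mvp, viewport)
    B = unproject(px, py, +1.0, mvp, viewport)
    d = B - A
    d = d / np.linalg.norm(d)
    return intersect(A, d, triangles)   # e.g. outer_surface_point() from the previous sketch
```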
step 1.6: non-rigid registration of the second three-dimensional models of all the spine samples; the closest points of two registered second three-dimensional models are taken as corresponding points to establish a dense corresponding point cloud of the spine outer surface of the second three-dimensional models. Specifically, one second three-dimensional model of the spine outer surface is selected as the template model, and the template model is non-rigidly registered to the second three-dimensional models of the other vertebrae with the classical closest point iterative algorithm and the thin plate spline function, according to the feature points calibrated in step 1.5; further, the closest points between two registered second three-dimensional models are taken as corresponding points, and the dense correspondence between the point clouds of the second three-dimensional models of the spine outer surface is established and recorded as {p_i, i = 1, 2, ..., l}, where p_i represents the geometric coordinates of each point in the vertebral outer-surface model and l represents the number of points; in this way the correspondence between the second three-dimensional model point clouds of each vertebral outer surface is established;
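A minimal sketch of the thin plate spline warp that the non-rigid registration of step 1.6 relies on, assuming corresponding control points (for example the calibrated feature points) are already available; the surrounding closest-point iteration is omitted.

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 3-D thin plate spline that maps the src control points onto dst (kernel U(r) = r)."""
    n = len(src)
    K = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)   # 3-D TPS kernel matrix
    P = np.hstack([np.ones((n, 1)), src])                           # affine part
    A = np.zeros((n + 4, n + 4))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([dst, np.zeros((4, 3))])
    return np.linalg.solve(A, b)                                    # warping + affine coefficients

def tps_apply(params, src_ctrl, pts):
    """Warp arbitrary points with the fitted spline."""
    w, a = params[:len(src_ctrl)], params[len(src_ctrl):]
    r = np.linalg.norm(pts[:, None, :] - src_ctrl[None, :, :], axis=2)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return r @ w + P @ a

if __name__ == "__main__":
    src = np.random.randn(12, 3)
    dst = src + 0.1 * np.random.randn(12, 3)
    params = tps_fit(src, dst)
    print(np.abs(tps_apply(params, src, src) - dst).max())          # ~0: exact at the control points
```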
step 1.7: applying a principal component analysis method to the dense corresponding point clouds to calculate the average model MeanBone = {p_0, p_1, ..., p_l}, p_i = (x_i, y_i, z_i), the principal components BoneVector = [svector_i, i = 1, 2, ..., f] and the principal component coefficients Bonecoff = {sb_i, i = 1, 2, ..., f} of the three-dimensional models of the vertebral outer surface. Since the dense correspondence of the point clouds has been established, i.e. each spine outer surface is composed of the same number of points, the average model is the average of all the spine outer-surface models; x_i, y_i, z_i are the three-dimensional coordinates of each point, l is the number of points, svector_i denotes a principal component of the statistical shape space, sb_i denotes the corresponding principal component coefficient, and f is the number of principal components. The statistical model represents the vertebral outer-surface three-dimensional model in shape space and provides the data basis for the virtual repair of the affected-part vertebrae in step S2.
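A minimal sketch of step 1.7, assuming the dense correspondence has already been established so that every sample is an l x 3 point cloud with the same vertex ordering; the principal component analysis is computed here with an SVD of the centered data matrix.

```python
import numpy as np

def build_ssm(samples, n_components=None):
    """samples: array of shape (s, l, 3), one spine outer-surface model per row.
    Returns the average model (MeanBone), principal components (BoneVector)
    and per-sample coefficients (Bonecoff)."""
    s, l, _ = samples.shape
    X = samples.reshape(s, 3 * l)              # flatten each shape into a row vector
    mean_bone = X.mean(axis=0)                 # average model
    Xc = X - mean_bone
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    f = n_components or min(s - 1, Vt.shape[0])
    bone_vector = Vt[:f].T                     # principal components, shape (3l, f)
    bone_coff = Xc @ bone_vector               # coefficients of each sample, shape (s, f)
    return mean_bone.reshape(l, 3), bone_vector, bone_coff

if __name__ == "__main__":
    samples = np.random.randn(10, 500, 3)      # 10 toy "spine outer surfaces"
    mean_bone, bone_vector, bone_coff = build_ssm(samples, n_components=5)
    print(mean_bone.shape, bone_vector.shape, bone_coff.shape)
```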
In step S2, a pyramidal rounded object is used as a marker (as shown in fig. 5) which is a key for the registration of the virtual and real coordinate systems and is stuck on the skin of the surface of the reference site; similar to the method in step S1, the processing of the CT image data of the reference region uses a Marching Cubes algorithm based on segmentation and equidistant equal-angle cylindrical surface sampling to realize three-dimensional modeling of the outer surface of the spine and the skin of the reference region. Wherein the three-dimensional skeleton model is used for virtual repair, and the three-dimensional skin model is used for registration of virtual and real models. Alternately calibrating characteristic points on the three-dimensional model of the external surface of the spine of the reference part and the average model of the external surface of the spine, realizing non-rigid registration of the three-dimensional model of the external surface of the spine of the reference part and the average model of the external surface of the spine by using a closest point iterative algorithm and a thin plate spline function, further taking the closest point between the two three-dimensional models after registration as a corresponding point, and establishing a point cloud corresponding relation between the spine model of the reference part and the vertex of the average model; according to the vertebra outer surface statistical shape model, the principal component coefficient corresponding to the vertebra of the reference position is calculated through optimizing the energy function, virtual repair data of the vertebra of the reference position are obtained, and the repair result can be displayed in a display device.
As shown in fig. 3, the specific steps of step S2 are as follows:
step 2.1: making a cone-tip-shaped round object as a marker, and pasting the marker on the skin surface of the reference part through adhesive tapes, wherein the marker is the basis of virtual-real registration;
step 2.2: acquiring a CT image of the reference part, and building the first three-dimensional models of the vertebrae, markers and skin of the reference part with the segmentation-based Marching Cubes algorithm of step 1.1; calculating the three-dimensional point clouds of the spine, the markers and the outer surface of the skin with the ray/three-dimensional-model intersection algorithm of step 1.4, recorded respectively as the vertebral outer-surface point cloud and the skin/marker point cloud, where m and n respectively represent their numbers of points; the coordinate system in which the first three-dimensional model is located is the global coordinate system; when displaying, this coordinate system is taken as the reference and the other coordinate systems are converted to it;
step 2.3: using the feature-point calibration algorithm of step 1.5, feature points are calibrated alternately on the second three-dimensional model QBone of the reference-part vertebral outer surface and on the average model MeanBone of the vertebral outer surface established in step 1.7; to ensure the accuracy of the registration result, preferably no fewer than three pairs of feature points are placed in each vertebral segment;
step 2.4: rigid registration of QBone to MeanBone is achieved by calculating a rotation and translation matrix through the closest point iterative algorithm, yielding a rotation transformation R1 and translation transformation t1; non-rigid registration of QBone to MeanBone is then achieved with a non-rigid closest point iterative algorithm, and the correspondence between the reference-part spine model QBone and the point cloud of the average model MeanBone, i.e. the corresponding point cloud, is established by searching for the closest points in the registration result. The non-rigid registration improves the registration accuracy of the two spine models and establishes a high-quality dense point-cloud correspondence for obtaining the virtual repair data of the reference-part spine; preferably, the closest-point search is accelerated by searching neighboring points on a K-d tree structure, and the number of backtracking steps is reduced through a priority queue;
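A minimal sketch of the rigid part of step 2.4: a closest point iterative algorithm in which the closest points are found through a K-d tree and each iteration solves for the rigid transformation in closed form by singular value decomposition; scipy is an assumed dependency, and the priority-queue optimization and the non-rigid stage are omitted.

```python
import numpy as np
from scipy.spatial import cKDTree   # assumed dependency; the patent only names a K-d tree structure

def best_rigid(A, B):
    """Least-squares rotation R and translation t mapping point set A onto B (SVD / Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(source, target, iters=30):
    """Closest point iterative algorithm: the source model (e.g. QBone) is rigidly
    registered to the target model (e.g. MeanBone)."""
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    moved = source.copy()
    for _ in range(iters):
        _, idx = tree.query(moved)             # closest target point for every source point
        R, t = best_rigid(moved, target[idx])
        moved = moved @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, moved

if __name__ == "__main__":
    tgt = np.random.randn(300, 3)
    ang = np.deg2rad(10.0)
    R0 = np.array([[np.cos(ang), -np.sin(ang), 0.0], [np.sin(ang), np.cos(ang), 0.0], [0.0, 0.0, 1.0]])
    src = tgt @ R0.T + np.array([0.5, -0.2, 0.1])
    R1, t1, _ = icp(src, tgt)
    print(np.round(R1 @ R0, 3))                # ideally close to the identity matrix
```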
step 2.5: according to the statistical shape model of the vertebral outer surface established in step S1, the coefficient QBonecoff of the reference-part vertebra three-dimensional model in the vertebral statistical shape model space is calculated by minimizing E(QBonecoff) = ||BoneVector·QBonecoff - (QBone - MeanBone)||^2 + λ||QBonecoff||^2, where BoneVector represents the principal components of the statistical shape model, MeanBone represents the average model of the statistical shape model, and λ represents a regularization coefficient;
step 2.6: the first spine repair data of the reference part, denoted QBoneRecover, is calculated with the coefficient QBonecoff by the formula QBoneRecover = MeanBone + BoneVector·QBonecoff, where QBoneRecover = {p_0, p_1, ..., p_l} and l represents the number of points of the average model in the statistical shape model; the repair data is constrained by prior knowledge of the three-dimensional shape of the spine (i.e. the spine statistical shape model), so its accuracy is higher;
step 2.7: the first vertebra repair data QBoneRecover of the reference part is transformed into the global coordinate system with the rotation transformation R1 and translation transformation t1, generating the second vertebra repair data newQBoneRecover by the formula newQBoneRecover = QBoneRecover·inv(R1) - t1, where inv(R1) represents the inverse matrix of R1.
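A minimal sketch of steps 2.5-2.7, assuming QBone has been brought into vertex correspondence with MeanBone by step 2.4; the coefficient fit below is the closed-form minimizer of the regularized energy of step 2.5, and the transformation into the global coordinate system follows the formula of step 2.7.

```python
import numpy as np

def fit_ssm_coefficients(qbone, mean_bone, bone_vector, lam=0.1):
    """Minimize ||BoneVector @ c - (QBone - MeanBone)||^2 + lam * ||c||^2 in closed form."""
    b = (qbone - mean_bone).reshape(-1)        # residual vector of length 3l
    A = bone_vector                            # shape (3l, f)
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)   # QBonecoff

def reconstruct(mean_bone, bone_vector, coeff):
    """QBoneRecover = MeanBone + BoneVector * QBonecoff."""
    return mean_bone + (bone_vector @ coeff).reshape(mean_bone.shape)

def to_global(qbone_recover, R1, t1):
    """newQBoneRecover = QBoneRecover * inv(R1) - t1, as in step 2.7."""
    return qbone_recover @ np.linalg.inv(R1) - t1

if __name__ == "__main__":
    l, f = 500, 5
    mean_bone = np.random.randn(l, 3)
    bone_vector = np.linalg.qr(np.random.randn(3 * l, f))[0]   # orthonormal toy principal components
    true_c = np.array([2.0, -1.0, 0.5, 0.0, 0.3])
    qbone = reconstruct(mean_bone, bone_vector, true_c)
    print(np.round(fit_ssm_coefficients(qbone, mean_bone, bone_vector, lam=1e-6), 2))
```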
In one embodiment, the method further comprises step S3: acquiring actual coordinate data of a reference part by using a positioning system, performing virtual-real registration with the global coordinate system, and displaying volume rendering data of the reference part, second spine repair data and reference part data obtained by the positioning system in the global coordinate system; the positioning system can adopt an NDI POLARIS high-precision optical positioning system, and the precision is high.
In step S3, the geometric coordinates of the markers on the reference part in the real environment are marked with the NDI POLARIS high-precision optical positioning system, and the geometric coordinates of the corresponding feature points are calibrated on the three-dimensional model of the reference part with the feature-point marking method of S15. Using a random sampling consensus algorithm and a singular value decomposition algorithm, the marker points with the highest correspondence are selected and the rotation and translation transformations are calculated, realizing virtual repair of the bone data of the reference part and three-dimensional registration of the virtual and real coordinate systems between the real-environment coordinates (actual coordinate system) and the coordinates of the three-dimensional model reconstructed from CT images (virtual coordinate system, i.e. the global coordinate system). The tissue structures of the reference part (skin, bones, peripheral blood vessels, muscles and the like), the virtually repaired bone model data and the spatial position of the NDI POLARIS surgical instrument are thereby fused for display, which can support the implementation of augmented reality surgery; for example, the virtual repair result of the reference part is displayed with different colors and transparencies, and the coordinate values picked up by the NDI POLARIS high-precision optical positioning system are mapped into the virtual coordinate system according to the correspondence between the virtual and real coordinates, thereby realizing the fused display of virtual and real data.
As shown in fig. 4, the specific steps of step S3 are as follows:
step 3.1: the point cloud coordinates of the markers placed on the reference part are measured with the NDI POLARIS high-precision optical positioning system and recorded as RefLandmark = {rlandmark_i, i = 1, 2, ..., d}, where d is the number of feature points measured on the marker surfaces and rlandmark_i = (x_i, y_i, z_i) are the three-dimensional coordinates of the i-th point;
Step 3.2: using the feature point calibration algorithm in step 1.5, d feature points are calibrated on the three-dimensional model of the marker of the reference part established in step S2, and are recorded as
Figure BDA0002111322670000132
Where d is the number of feature points, tlandmarkiThree-dimensional coordinates representing the ith feature point, denoted by xi,yi,zi(ii) a Preferably, the positions of the feature points coincide with the positions of the feature points in S31;
step 3.3: according to the coordinates RefLandmark and TarLandmark of the corresponding points measured in steps 3.1 and 3.2, a distance function for the geometric transformation from the NDI POLARIS high-precision optical positioning system coordinate system to the global coordinate system is defined as E(R, t) = Σ_i ||R·rlandmark_i + t - tlandmark_i||^2 (i = 1, ..., m), where R and t are the rotation and translation transformations respectively and m represents the number of feature points; R and t are solved by minimizing the distance function E;
in the solving process, a random sampling consensus algorithm and a singular value decomposition algorithm are adopted for the optimization; the RefLandmark and TarLandmark points selected when E reaches its minimum are taken as corresponding feature points to calculate the rotation transformation R and translation transformation t, thereby overcoming errors in the calculation of the rotation and translation matrices caused by inaccurate feature-point calibration;
step 3.4: the coordinates of the reference part in the real environment measured by the NDI POLARIS high-precision optical positioning system are recorded as NDI_Cor = (x_i, y_i, z_i); their coordinates in the global coordinate system are then expressed as NDI_Cor' = R·NDI_Cor_i + t, thereby matching objects in the real environment to the global coordinate environment;
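A minimal sketch of steps 3.1-3.4, assuming RefLandmark and TarLandmark are given as d x 3 arrays of corresponding marker points: a simple random-sampling-consensus loop keeps the rotation R and translation t whose inlier set minimizes the distance function E, and the result is then used to map coordinates measured by the positioning system into the global coordinate system.

```python
import numpy as np

def rigid_from_correspondences(ref, tar):
    """Closed-form minimizer of E(R, t) = sum ||R*ref_i + t - tar_i||^2 (singular value decomposition)."""
    cr, ct = ref.mean(axis=0), tar.mean(axis=0)
    U, _, Vt = np.linalg.svd((ref - cr).T @ (tar - ct))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, ct - R @ cr

def ransac_register(ref, tar, iters=200, sample=3, tol=2.0, seed=0):
    """Random sampling consensus over the marker correspondences."""
    rng = np.random.default_rng(seed)
    best = (None, None, np.inf)
    for _ in range(iters):
        idx = rng.choice(len(ref), size=sample, replace=False)
        R, t = rigid_from_correspondences(ref[idx], tar[idx])
        resid = np.linalg.norm(ref @ R.T + t - tar, axis=1)
        inliers = resid < tol
        if inliers.sum() >= sample:
            R, t = rigid_from_correspondences(ref[inliers], tar[inliers])
            E = np.sum(np.linalg.norm(ref[inliers] @ R.T + t - tar[inliers], axis=1) ** 2)
            if E < best[2]:
                best = (R, t, E)
    return best[0], best[1]

def ndi_to_global(ndi_points, R, t):
    """NDI_Cor' = R * NDI_Cor + t for every measured point."""
    return ndi_points @ R.T + t

if __name__ == "__main__":
    tar = np.random.randn(12, 3) * 50.0                          # toy marker coordinates on the model
    ang = np.deg2rad(20.0)
    R0 = np.array([[np.cos(ang), -np.sin(ang), 0.0], [np.sin(ang), np.cos(ang), 0.0], [0.0, 0.0, 1.0]])
    ref = (tar - np.array([10.0, 5.0, -3.0])) @ R0               # toy positioning-system measurements
    R, t = ransac_register(ref, tar)
    print(np.abs(ndi_to_global(ref, R, t) - tar).max())          # should be near zero
```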
step 3.5: in the augmented reality display, volume rendering of the surrounding structures of the reference part (skin, bone, peripheral blood vessels, muscles and the like) is realized with a volume rendering algorithm; the second repair data of the reference-part vertebrae from step S2 is displayed with different colors and transparencies; and the coordinate values NDI_Cor' picked up by the NDI POLARIS high-precision optical positioning system in step 3.4 are tracked, realizing the fused display of virtual and real data.
The reference site in the real environment may include a real diseased site and may also include intraoperative instruments. The invention aims to convert objects in a real environment to be displayed under the global coordinates created by the invention.
The invention also provides a three-dimensional registration system for augmented reality, which comprises a statistical shape model generation module for the outer surface of the spine, a repair module, a positioning module and a registration module.
The statistical shape model generation module for the outer surface of the spine is used for generating, from the CT image data of the spine samples, a statistical shape model of the spine outer surface comprising an average model of the spine outer surface, principal components and principal component coefficients;
the repair module is used for establishing a global coordinate system, generating a first spine three-dimensional model and a second spine three-dimensional model of the reference part according to the CT image data of the reference part, obtaining first spine repair data after fitting calculation with the statistical shape model of the spine outer surface, and forming second spine repair data after coordinate system conversion;
and the registration module is used for registering the data acquired by the positioning system with the global coordinate system and displaying the acquired data, the reference part data and the repair data in the global coordinate system through the display device.
The statistical shape model generation module for the outer surface of the spine comprises a segmentation unit, a first three-dimensional model generation unit, a first local coordinate system generation unit, a second three-dimensional model generation unit, a registration unit and a statistical shape model generation unit. Wherein:
the segmentation unit is used for segmenting the spine and the skin in the CT image data of the spine samples; the CT image data of the spine samples can be obtained from a hospital database. The segmentation method calculates a threshold with an Otsu method based on a genetic algorithm to realize the image segmentation of spine and skin.
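For illustration only, a minimal sketch of a plain Otsu threshold over a CT slice is given below; the genetic-algorithm search for the threshold described above is replaced here by an exhaustive scan over histogram bins, and the function name is an assumption.

```python
import numpy as np

def otsu_threshold(slice_hu, bins=256):
    """Exhaustive Otsu threshold (the described method instead searches it with a genetic algorithm)."""
    hist, edges = np.histogram(slice_hu.ravel(), bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = hist[:k].sum(), hist[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:k] * centers[:k]).sum() / w0
        mu1 = (hist[k:] * centers[k:]).sum() / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if between_var > best_var:
            best_var, best_t = between_var, centers[k]
    return best_t

# usage: bone_mask = ct_slice > otsu_threshold(ct_slice)
```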
A first three-dimensional model generation unit for generating the three-dimensional model of the spine in a spine sample. In this unit, for the segmented CT image data produced by the segmentation unit, the corresponding 8 pixels of two adjacent CT slices are taken in turn as a voxel; whether the color values of the two end points of each voxel edge are the same is checked, which determines whether that edge intersects the isosurface inside the current voxel; the intersection points between the voxel edges and the isosurface are computed by linear interpolation; the normal vector at each voxel vertex is computed by the central difference method, and the normal vector at each vertex of the triangular patches is then obtained by linear interpolation, yielding the first three-dimensional model of the inner and outer surfaces of the spine sample.
In this unit, whether the color values of the two end points of each edge of a voxel built from the segmentation result are the same is compared directly, which decides whether the edge intersects the isosurface; this avoids the contour-line extraction of the Marching Cubes algorithm and speeds up the three-dimensional modeling.
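As a minimal sketch of this edge test (the full Marching Cubes case table is omitted), the snippet below compares the segmentation labels at the two end points of a voxel edge and, when they differ, places the surface vertex on the edge by linear interpolation of the CT values; all names are illustrative.

```python
import numpy as np

def edge_surface_vertex(p0, p1, label0, label1, v0, v1, iso):
    """Return the isosurface point on edge (p0, p1), or None if the edge is not crossed.

    p0, p1   : 3D coordinates of the edge end points
    label0/1 : segmentation labels (e.g. 0 = background, 1 = bone) at the end points
    v0, v1   : CT intensity values at the end points
    iso      : isovalue used for the interpolation
    """
    if label0 == label1:          # same color value at both ends -> no intersection
        return None
    if np.isclose(v1, v0):        # degenerate edge, take the midpoint
        return (np.asarray(p0) + np.asarray(p1)) / 2.0
    a = (iso - v0) / (v1 - v0)    # linear interpolation parameter along the edge
    return (1.0 - a) * np.asarray(p0) + a * np.asarray(p1)
```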
A first local coordinate system generation unit for generating the first local coordinate system. In this unit, the center of gravity of the first three-dimensional model is calculated; a covariance matrix is then built from the point cloud of the three-dimensional model, the three orthogonal principal components of the covariance matrix are computed with an eigen-decomposition algorithm, and the first local coordinate system is established, with its origin at the center of gravity and its coordinate axes corresponding to the three principal components; the Z axis of the first local coordinate system is the direction in which the distribution of the three-dimensional model point cloud varies most, the X axis is the direction in which it varies least, and the first local coordinate system obeys the right-hand rule;
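A minimal sketch of this principal-component frame construction is given below, assuming the model is available as an N×3 point cloud; the right-hand-rule correction shown is one reasonable convention rather than necessarily the exact one used here.

```python
import numpy as np

def local_frame_from_points(points):
    """Build a PCA-based local coordinate system for an (N, 3) point cloud."""
    centroid = points.mean(axis=0)                       # center of gravity
    cov = np.cov((points - centroid).T)                  # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)               # eigenvalues in ascending order
    x_axis = eigvecs[:, 0]                               # direction of smallest variation
    z_axis = eigvecs[:, 2]                               # direction of largest variation
    y_axis = np.cross(z_axis, x_axis)                    # complete a right-handed frame
    R = np.stack([x_axis, y_axis, z_axis], axis=1)       # columns: X, Y, Z axes
    return centroid, R

# geometric correction: express the model in the local frame
# local_pts = (points - centroid) @ R
```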
the second three-dimensional model generating unit is used for transforming the first three-dimensional models of all the vertebra samples into the first local coordinate system to form a second three-dimensional model;
and the registration unit is used for establishing dense corresponding point clouds among all the second three-dimensional models of the vertebras. In the unit, the specific implementation method is as follows:
(1) Perform equidistant, equal-angle cylindrical sampling along the Z axis on the geometrically transformed second three-dimensional model of the spine, where o' is the starting point of each sampling ray and the sampling direction vectors are taken at equal angles around the axis. The three-dimensional point cloud of the outer surface of the second three-dimensional spine model is computed with the ray and second three-dimensional model intersection algorithm; the specific method is as described in step 1.4 above, and a sketch of this ray and triangle intersection is given after this list.
(2) Calibrating characteristic points on the external surface of the second three-dimensional model vertebra, wherein the characteristic points are defined at the bending part of the vertebra; the specific procedure is as described above in step 1.5.
(3) Non-rigid registration of the second three-dimensional models of all the vertebral samples; taking the closest point of the two registered second three-dimensional models as a corresponding point, and establishing dense corresponding point cloud of the external surface of the second three-dimensional model spine; the specific procedure is as described above in step 1.6.
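The following minimal sketch illustrates the ray and triangle intersection referred to in item (1) above: each triangle point is written in the barycentric form T(u, v) = (1 - u - v)·v0 + u·v1 + v·v2 and solved together with the parametric ray, and the hit with the largest k is kept as the outer-surface point; the octree culling and normal-direction test described elsewhere in this document are omitted, and all names are illustrative.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore solve of o' + k*dir = (1-u-v)*v0 + u*v1 + v*v2.

    Returns (k, u, v) if the ray hits the triangle, otherwise None.
    """
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                     # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    k = np.dot(e2, q) * inv_det
    return (k, u, v) if k > eps else None

def outer_surface_hit(origin, direction, triangles):
    """Keep the hit with the largest k, i.e. the outermost surface point along the ray."""
    best = None
    for v0, v1, v2 in triangles:
        hit = ray_triangle_intersect(origin, direction, v0, v1, v2)
        if hit is not None and (best is None or hit[0] > best[0]):
            best = hit
    return best
```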
And the statistical shape model generation unit is used for generating, from the dense corresponding point clouds, a statistical shape model of the spine outer surface comprising the average model, the principal components and the principal component coefficients. The specific steps carried out in this unit are as in step 1.7 above.
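For illustration, the sketch below shows how a statistical shape model of this kind can be assembled from dense corresponding point clouds by principal component analysis; the array layout and the number of retained modes are assumptions, not the exact procedure of step 1.7.

```python
import numpy as np

def build_statistical_shape_model(sample_clouds, n_modes=10):
    """sample_clouds: (S, N, 3) array of S spine samples with N corresponding points each."""
    S = sample_clouds.shape[0]
    X = sample_clouds.reshape(S, -1)                 # each row: flattened (3N,) shape vector
    mean_bone = X.mean(axis=0)                       # average model (MeanBone)
    centered = X - mean_bone
    # SVD of the centered data; rows of Vt are the principal directions in shape space
    _, singular_values, Vt = np.linalg.svd(centered, full_matrices=False)
    bone_vector = Vt[:n_modes].T                     # (3N, n_modes) principal components (BoneVector)
    variances = (singular_values[:n_modes] ** 2) / (S - 1)   # per-mode variance of the coefficients
    return mean_bone, bone_vector, variances
```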
The repair module comprises a reference part modeling unit, a global coordinate system generating unit and a repair data generating unit. Wherein:
and a reference part modeling unit for modeling the acquired CT image data of the reference part with the segmentation unit, the first three-dimensional model generation unit, the first local coordinate system generation unit and the second three-dimensional model generation unit of the statistical shape model generation module for the spine outer surface, to obtain the three-dimensional models of the vertebrae, the markers and the skin of the reference part. The marker is a conical round object attached to the skin surface of the reference part with adhesive tape; the CT image data of the reference part includes the markers.
A global coordinate system generation unit, configured to take the coordinate system in which the first three-dimensional model of the reference part is located as the global coordinate system; the subsequent display takes this coordinate system as its reference, and objects in other coordinate systems are converted into it.
A repair data generation unit for generating the repair data of the reference part spine. In this unit, (1) using the feature point calibration algorithm of step 1.5, feature points are alternately calibrated on the second three-dimensional model QBone of the outer surface of the reference part spine and on the average model MeanBone generated by the statistical shape model generation unit; to ensure the accuracy of the registration result, preferably no fewer than three pairs of feature points are selected on each vertebra; (2) a rotation and translation matrix is computed by the closest point iterative algorithm to realize the rigid registration of QBone to MeanBone, yielding the rotation transformation R1 and the translation transformation t1; the non-rigid registration of QBone to MeanBone is then realized with a non-rigid closest point iterative algorithm, and the correspondence between the point clouds of the reference part spine model QBone and the average model MeanBone, i.e. the corresponding point cloud, is established by searching the closest points in the registration result. This non-rigid registration method improves the registration accuracy of the two spine models and establishes a high-quality dense point cloud correspondence; preferably, the closest-point search is accelerated by searching adjacent points on a K-d tree structure, and the number of backtracking steps is reduced through a priority queue; (3) according to the statistical shape model of the spine outer surface, the coefficient of the reference part spine model in the statistical shape model space is computed and denoted QBonecoff, obtained by minimizing ||QBone - (MeanBone + BoneVector·QBonecoff)||² + λ||QBonecoff||², where BoneVector denotes the principal components of the statistical shape model, MeanBone denotes the average model of the statistical shape model, and λ denotes the regularization coefficient; (4) the first virtual repair data of the reference part spine are computed with the coefficient QBonecoff by the formula QBoneRecover = MeanBone + BoneVector·QBonecoff, where QBoneRecover = {p0, p1, ..., pl} and l denotes the number of points in the average model of the statistical shape model; because the repair data are constrained by the prior knowledge of the three-dimensional shape of the spine (the spine statistical shape model), their accuracy is higher; (5) the first spine repair data QBoneRecover of the reference part are transformed into the global coordinate system with the rotation transformation R1 and the translation transformation t1 to generate the second spine repair data newQBoneRecover, with the formula newQBoneRecover = QBoneRecover·inv(R1) - t1, where inv(R1) denotes the inverse matrix of R1.
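As a hedged illustration of step (3) above, the coefficient fit can be expressed as a regularized linear least-squares problem with the closed-form solution sketched below; the assumption that the principal components are stored as a matrix with one column per mode may differ from the actual data layout.

```python
import numpy as np

def fit_ssm_coefficients(qbone, mean_bone, bone_vector, lam=0.1):
    """Minimize ||qbone - (mean_bone + bone_vector @ coff)||^2 + lam * ||coff||^2.

    qbone, mean_bone : (3N,) flattened corresponding point clouds
    bone_vector      : (3N, K) principal components, one column per mode
    lam              : regularization coefficient (lambda)
    """
    residual = qbone - mean_bone
    A = bone_vector.T @ bone_vector + lam * np.eye(bone_vector.shape[1])
    b = bone_vector.T @ residual
    coff = np.linalg.solve(A, b)                      # QBonecoff
    recover = mean_bone + bone_vector @ coff          # QBoneRecover (first repair data)
    return coff, recover
```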
And the positioning module is used for acquiring data of objects in the actual environment, such as the reference part and the markers. The positioning module may be an NDI POLARIS high-precision optical positioning system. The data here may be image data, spatial data, etc., used for the subsequent registration and display.
The registration module is used for receiving the actual coordinate data of objects such as the reference part acquired by the positioning system, performing virtual and real registration with the global coordinate system, and displaying the spine model and spine repair data of the reference part in the global coordinate system together with the reference part data acquired by the positioning system.
The registration module comprises a marker calibration unit, a coordinate transformation unit and a display unit. Wherein:
a marker calibration unit for obtaining the point cloud coordinates of the markers placed on the reference part, transmitted by the positioning module, which are recorded as
RefLankmark = {rlandmark_1, rlandmark_2, ..., rlandmark_d}
where d is the number of feature points measured on the marker surface and rlandmark_i = (x_i, y_i, z_i) denotes the three-dimensional coordinates of the i-th point. Using the feature point calibration algorithm of step 1.5, this unit also calibrates d feature points on the three-dimensional model of the reference part markers generated in the reference part modeling unit, recorded as
TarLankmark = {tlandmark_1, tlandmark_2, ..., tlandmark_d}
where d is the number of feature points and tlandmark_i = (x_i, y_i, z_i) denotes the three-dimensional coordinates of the i-th feature point.
And the coordinate transformation unit is used for selecting the marker feature points with the best correspondence and computing the rotation and translation transformation with a random sample consensus algorithm and a singular value decomposition algorithm, so as to realize the three-dimensional registration of the virtually repaired spine data of the reference part and of the real environment coordinates (the actual coordinate system) to the global coordinate system. In this unit, according to the coordinates RefLankmark and TarLankmark, the distance function of the geometric transformation from the NDI POLARIS high-precision optical positioning system coordinate system to the global coordinate system is defined as
E(R, t) = Σ_{i=1..m} ||(R·rlandmark_i + t) - tlandmark_i||²
where R and t are the rotation and translation transformations respectively and m denotes the number of feature points; R and t are solved numerically by minimizing the distance function E. The coordinates of the reference part in the real environment measured by the positioning system are recorded as NDI_Cor = (x_i, y_i, z_i); the corresponding coordinates in the global coordinate system are NDI_Cor' = R·NDI_Cor + t, so that the object in the real environment is matched into the global coordinate environment.
In the solving process, a random sample consensus algorithm and a singular value decomposition algorithm are used for the optimization calculation: the RefLankmark and TarLankmark points selected when E reaches its minimum are taken as the corresponding feature points, and the rotation transformation R and the translation transformation t are computed from them, which suppresses the errors in the rotation and translation matrices caused by inaccurate feature point calibration;
and a display unit for displaying, in a fused manner, the volume rendering results of the tissue structures of the reference part (skin, bone, peripheral blood vessels, muscles, etc.), the second spine repair data and the spatial position of the NDI POLARIS surgical instrument. This can support the performance of an augmented reality operation, for example by displaying the second spine repair data of the reference part with different colors and transparencies; during the augmented reality display, realistic volume rendering of the structures surrounding the reference part (skin, bone, peripheral blood vessels, muscles, etc.) is performed with a volume rendering algorithm; the second repair data of the virtually repaired reference part spine are displayed with different colors and transparencies; and the coordinate values NDI_Cor' picked up by the NDI POLARIS high-precision optical positioning system are tracked.
Compared with the prior art, the invention has the following advantages and positive effects:
(1) the invention can be used for three-dimensional modeling of the outer surface of the spine and provides an outer-surface sampling method based on segmentation-driven Marching Cubes and cylindrical-surface sampling. This greatly reduces the data volume of the spine model and improves the registration accuracy between spine models and the accuracy of the spine statistical shape model. The method realizes the segmentation of different tissues with the Otsu method based on a genetic algorithm and then realizes the three-dimensional modeling of the spine with the Marching Cubes algorithm; because the color values of the two end points within a voxel are compared directly, the reconstruction speed of the algorithm is improved. The geometric correction of the spine three-dimensional model is realized by principal component analysis, and the point cloud of the spine outer surface is then computed by the equidistant, equal-angle cylindrical sampling method. Because every triangular patch of the spine three-dimensional model would otherwise have to be traversed in this computation, the octree structure and the normal-direction consistency test greatly reduce the number of triangle intersections and improve the computation speed of the outer-surface point cloud.
(2) The invention can be used in spine augmented reality surgery and realizes the three-dimensional registration of the virtual and real coordinate systems, which benefits outcome prediction in orthopedic augmented reality surgery. Its positive effect is that, according to the established statistical shape model of the complete spine outer surface, virtual repair of the bone of the affected part is realized and the accuracy of the virtual repair result is improved. The method establishes the point cloud correspondence between the affected part and the average model of the complete spine outer surface through non-rigid registration, then calculates the principal component coefficients of the affected spine in the statistical shape model of the spine outer surface, and realizes the shape fitting of the affected bone through the shape constraint of the statistical shape model. Compared with existing methods that realize virtual spine repair by non-rigid registration to an average model, thin-plate splines and the like, this method takes the prior knowledge of the geometric shape of the spine outer surface as a constraint, overcomes repair errors caused by the difficulty of choosing a reference model and by inaccurate registration results in non-rigid registration methods, and improves the accuracy of the virtual repair result of the diseased part. Overlaying the virtual repair result on the volume rendering result of the patient realizes effective guidance of the surgical planning scheme during the augmented reality operation.
(3) The invention can be used in spine augmented reality surgery. Its positive effect lies mainly in realizing, with the self-made markers and the NDI POLARIS high-precision optical positioning system, the virtual repair of the bone data of the diseased part and the accurate three-dimensional registration of the virtual and real coordinate systems, i.e. between the real coordinates of the patient (the actual coordinate system) and the coordinates of the three-dimensional model reconstructed from the CT images (the virtual coordinate system). Compared with the spherical calibration tool provided with NDI POLARIS, the self-made markers used by the invention are small, inexpensive and easy to identify during registration. In the registration of the virtual and real coordinate systems, a method combining a random sample consensus algorithm with singular value decomposition is adopted, which solves the problem of inaccurate rotation and translation calculation caused by inaccurately marked corresponding points; the marker points selected at the minimum of the distance function are used as the basis of the calculation, improving the accuracy of the rotation and translation computation and thereby the registration accuracy.
The above examples are only for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (9)

1. A three-dimensional registration method for augmented reality, the method comprising:
s1, acquiring CT image data of a spine sample, creating a three-dimensional model of the outer surface of the spine, and calculating to obtain a statistical shape model of the outer surface of the spine;
s2, placing a marker on the reference part, obtaining CT image data of the reference part, and creating a three-dimensional model of the external surface of the vertebra of the reference part; calculating and obtaining the repair data of the reference part vertebra according to the statistical shape model of the external surface of the vertebra;
wherein the step S1 includes:
s11, calculating a threshold value by adopting an Otsu method based on a genetic algorithm, realizing the segmentation of the vertebra and the skin, and creating first three-dimensional models of the inner surface and the outer surface of the vertebra sample by adopting a Marching Cubes algorithm based on the segmentation;
s12, calculating the gravity center of the first three-dimensional model, establishing a covariance matrix formed by point clouds of the first three-dimensional model, calculating three orthogonal principal components of the covariance matrix by using a characteristic decomposition algorithm, establishing a first local coordinate system, wherein the origin of coordinates of the first local coordinate system is the same as the origin of coordinates of the gravity center, the coordinate axes of the first local coordinate system respectively correspond to the three principal components, the Z axis in the first local coordinate system is the direction of the first three-dimensional model with the maximum change of point cloud data distribution, the X axis is the direction of the first three-dimensional model with the minimum change of point cloud data distribution, and the first local coordinate system accords with the right-hand rule;
s13, geometrically transforming the first three-dimensional model of each sample in the spine samples into the first local coordinate system by adopting a geometric transformation method to form a second three-dimensional model;
s14, performing equidistant equal-angle cylindrical sampling along the Z axis on the second three-dimensional model, and calculating three-dimensional point cloud on the outer surface of the second three-dimensional model vertebra by adopting a ray and second three-dimensional model intersection algorithm;
s15, calibrating characteristic points on the external surface of the second three-dimensional model vertebra, wherein the characteristic points are defined at the bending part of the vertebra;
s16, according to the characteristic points, carrying out non-rigid registration between the second three-dimensional models of different spine samples; taking the closest point of the two registered second three-dimensional models as a corresponding point, and establishing dense corresponding point cloud of the external surface of the second three-dimensional model spine;
and S17, calculating an average model of the spine outer surface three-dimensional model, principal component and principal component coefficients by adopting a principal component analysis method for the dense corresponding point cloud, thereby establishing a statistical shape model of the spine outer surface.
2. The method according to claim 1, wherein the step S15 of calibrating the feature points on the external surface of the second three-dimensional model vertebra comprises:
picking up two-dimensional pixels on the second three-dimensional model and calculating coordinates of the pixels through inverse projection transformation;
defining the intersection points of the pixel ray with the near clipping plane and the far clipping plane as A and B respectively, and calculating, with the ray and three-dimensional model intersection algorithm, the intersection point of the ray AB with the second three-dimensional model, this intersection point being the feature point; preferably, the second three-dimensional model is represented by an octree structure, which reduces the number of triangular patches involved in the ray and three-dimensional model intersection and increases the calculation speed.
3. The method according to claim 1, wherein the step S11 includes:
step 1.1.1: aiming at CT image data, calculating a threshold value by adopting an Otsu method based on a genetic algorithm, and segmenting image data of vertebra and skin;
step 1.1.2: for the segmented CT image data, sequentially extracting corresponding pixels from the CT image data of two adjacent layers as voxels;
step 1.1.3: judging whether the color values of two end points of an edge in each voxel are the same or not so as to determine whether an intersection point exists between the edge and an isosurface in the current voxel or not;
step 1.1.4: calculating the intersection points of the edges of the voxel and the isosurface by a linear interpolation method, and constructing a triangular plate by the intersection points on the three edges of the voxel obtained by calculation; detecting each triangular plate, and performing triangulation on the triangular plates when obtuse angles exist in the triangular plates;
step 1.1.5: solving a normal vector at the vertex of a voxel edge by using a central difference method, and then solving a normal vector at the vertex of the triangular plate by using a linear interpolation method; thereby obtaining a first three-dimensional model of the inner and outer surfaces of the spine.
4. The method according to claim 1, wherein the step S14 includes:
step 1.4.1: the ray is expressed in parametric form, with o' as its starting point, the sampling direction vectors taken at equal angles as its directions, and k as the ray parameter;
step 1.4.2: expressing the second three-dimensional model with an octree structure, and expressing any point inside each triangular patch in the barycentric coordinate system as T(u, v) = (1 - u - v)·v0 + u·v1 + v·v2, with u ≥ 0, v ≥ 0 and u + v ≤ 1, wherein v0, v1, v2 are the three vertices of the triangular patch and (u, v) are the barycentric coefficients corresponding to the vertices v1 and v2;
step 1.4.3: calculating k, u and v by simultaneously solving a ray equation and a parameter equation of a triangular plate, and taking an intersection point with the maximum k value as a vertex of the outer surface of the vertebral model; preferably, in the solving process, whether the ray direction is consistent with the direction of the normal of the triangular plate or not is judged through the octree structure so as to reduce the number of the triangular plates.
5. The method according to claim 1, wherein the step S2 is:
S21, placing a marker on the skin of the reference part to obtain CT image data of the reference part;
S22, creating a first three-dimensional model of the spine, the markers and the skin of the reference part according to the CT image data of the reference part, wherein the coordinate system of the first three-dimensional model of the spine, the markers and the skin of the reference part is a global coordinate system; calculating a first local coordinate system of the first three-dimensional model of the reference part spine by a principal component analysis method, and transforming the first three-dimensional model of the reference part spine into this first local coordinate system by a geometric transformation to form a second three-dimensional model of the reference part spine;
S23, calibrating feature points; preferably, the number of feature points in each vertebra of the second three-dimensional model of the reference part spine is not less than three;
S24, performing rigid registration of the second three-dimensional model of the reference part spine to the average model of the spine statistical shape model with a closest point iterative algorithm, and obtaining a rotation transformation R1 and a translation transformation t1 by calculation; performing non-rigid registration of the second three-dimensional model of the reference part spine to the average model of the spine statistical shape model with a non-rigid closest point iterative algorithm; searching the closest points in the registration results of the two models to obtain the corresponding point cloud between the second three-dimensional model of the reference part spine and the average model; preferably, the closest-point search is accelerated by searching adjacent points on a K-d tree structure, and the number of backtracking steps is reduced through a priority queue;
S25, according to the statistical shape model of the outer surface of the spine, calculating the coefficient QBonecoff of the reference part spine three-dimensional model in the spine statistical shape model space by minimizing ||QBone - (MeanBone + BoneVector·QBonecoff)||² + λ||QBonecoff||², wherein QBone denotes the second three-dimensional model of the reference part spine, BoneVector denotes the principal components of the statistical shape model, MeanBone denotes the average model of the statistical shape model, and λ denotes the regularization coefficient;
S26: calculating the first spine repair data of the reference part with the coefficient QBonecoff and recording them as QBoneRecover, wherein the calculation formula is: QBoneRecover = MeanBone + BoneVector·QBonecoff, where QBoneRecover = {p0, p1, ..., pl} and l denotes the number of points in the average model of the statistical shape model;
S27: transforming the first spine repair data QBoneRecover of the reference part into the global coordinate system with the rotation transformation R1 and the translation transformation t1 to generate the second spine repair data newQBoneRecover, wherein the calculation formula is: newQBoneRecover = QBoneRecover·inv(R1) - t1, where inv(R1) denotes the inverse matrix of R1.
6. The method of claim 5, further comprising:
step S3, acquiring the actual coordinate data of the reference part with a positioning system, performing virtual and real registration with the global coordinate system, and displaying, in the global coordinate system, the volume rendering data of the reference part, the second spine repair data and the reference part data obtained by the positioning system; wherein the step S3 includes:
S31, measuring the point cloud coordinates of the markers on the reference part with the positioning system, and recording them as RefLankmark = {rlandmark_1, rlandmark_2, ..., rlandmark_d}, where d is the number of feature points measured on the marker surface and rlandmark_i = (x_i, y_i, z_i) denotes the three-dimensional coordinates of the i-th point; preferably, the number of feature points is not less than 10;
S32, calibrating d feature points on the three-dimensional model of the markers of the reference part by the method of step S15, and recording their geometric coordinates as TarLankmark = {tlandmark_1, tlandmark_2, ..., tlandmark_d}, where tlandmark_i = (x_i, y_i, z_i) denotes the three-dimensional coordinates of the i-th feature point; preferably, the positions of the feature points coincide with the positions of the feature points in S31;
S33, defining, according to the coordinates RefLankmark and TarLankmark, the distance function of the geometric transformation from the positioning system coordinate system to the global coordinate system as E(R, t) = Σ_{i=1..m} ||(R·rlandmark_i + t) - tlandmark_i||², wherein R and t are the rotation and translation transformations respectively and m denotes the number of feature points; solving R and t by minimizing the distance function E; preferably, a random sample consensus algorithm and a singular value decomposition algorithm are used for the optimization calculation in the solving process, and the RefLankmark and TarLankmark points selected when E reaches its minimum are taken as the corresponding feature points to calculate the rotation transformation R and the translation transformation t;
S34, obtaining the data of the reference part in the actual coordinate system with the positioning system and recording them as NDI_Cor = (x_i, y_i, z_i); the coordinates in the global coordinate system are then NDI_Cor' = R·NDI_Cor + t;
S35, converting the second spine repairing data, the volume drawing data of the reference part and the data collected by the positioning system into the global coordinate system and displaying; wherein the volume rendering data of the reference site is formed by a volume rendering algorithm and displayed in the global coordinate system.
7. A three-dimensional registration system for augmented reality, the system comprising a statistical shape model generation module for the outer surface of the spine, a repair module, a positioning module and a registration module,
the statistical shape model generation module of the external surface of the vertebra is used for generating a statistical shape model of the external surface of the vertebra, which comprises an average model of the external surface of the vertebra, a principal component and a principal component coefficient, by utilizing CT image data of a vertebra sample;
the repairing module is used for establishing a global coordinate system, generating a first spine three-dimensional model and a second spine three-dimensional model of a reference part according to CT image data of the reference part, obtaining first spine repairing data after fitting calculation with the statistical shape model of the outer surface of the spine, and forming second spine repairing data after coordinate system conversion;
the registration module is used for registering the data acquired by the positioning system with the global coordinate system and displaying the acquired data, the reference part data and the repair data in the global coordinate system through a display device;
wherein the statistical shape model generation module of the vertebral outer surface comprises a segmentation unit, a first three-dimensional model generation unit, a first local coordinate system generation unit, a second three-dimensional model generation unit, a registration unit and a statistical shape model generation unit,
the segmentation unit is used for segmenting the spine and the skin in the CT image data of the spine sample;
the first three-dimensional model generating unit is used for generating a first three-dimensional model of the spine in the spine sample;
the first local coordinate system generation unit is used for generating a first local coordinate system;
the second three-dimensional model generating unit is used for transforming the first three-dimensional model of the spine sample into the first local coordinate system to form a second three-dimensional model;
the registration unit is used for establishing dense corresponding point clouds between the second three-dimensional models of the vertebras;
and the statistical shape model generating unit is used for generating a statistical shape model of the external surface of the vertebra, which comprises an average model, principal components and principal component coefficients, according to the dense corresponding point cloud.
8. The system according to claim 7, wherein the repair module comprises a reference site modeling unit, a global coordinate system generation unit, and a repair data generation unit, wherein:
the reference part modeling unit is used for establishing a first three-dimensional model of the vertebra of the reference part for the CT image data of the reference part and generating a second three-dimensional model of the vertebra;
the global coordinate system generating unit is used for taking a coordinate system where the first three-dimensional model of the reference part is located as a global coordinate system;
and the repair data generation unit is used for generating repair data of the spine in the reference part.
9. The system according to claim 7, wherein the registration module comprises a marker calibration unit, a coordinate transformation unit, and a display unit, wherein;
the marker calibration unit is used for obtaining point cloud coordinates of marker data placed on the reference part transmitted by the positioning module and characteristic point coordinates calibrated on a three-dimensional model of the marker of the reference part generated in the reference part modeling unit;
the coordinate transformation unit is used for calculating rotation and translation transformation according to the point cloud coordinates and the feature point coordinates, and performing three-dimensional registration on the repair data of the spine in the reference part and the data acquired by the positioning module aiming at the reference part under the real environment coordinates to a global coordinate system;
and the display unit is used for displaying the registered data.
CN201910572830.4A 2019-06-28 2019-06-28 Three-dimensional registration method and system for augmented reality Active CN110264504B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910572830.4A CN110264504B (en) 2019-06-28 2019-06-28 Three-dimensional registration method and system for augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910572830.4A CN110264504B (en) 2019-06-28 2019-06-28 Three-dimensional registration method and system for augmented reality

Publications (2)

Publication Number Publication Date
CN110264504A CN110264504A (en) 2019-09-20
CN110264504B true CN110264504B (en) 2021-03-30

Family

ID=67922683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910572830.4A Active CN110264504B (en) 2019-06-28 2019-06-28 Three-dimensional registration method and system for augmented reality

Country Status (1)

Country Link
CN (1) CN110264504B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112581513B (en) * 2019-09-29 2022-10-21 北京大学 Cone beam computed tomography image feature extraction and corresponding method
CN110992243B (en) * 2019-10-29 2023-12-22 平安科技(深圳)有限公司 Intervertebral disc cross-section image construction method, device, computer equipment and storage medium
CN113256814B (en) * 2020-02-10 2023-05-30 北京理工大学 Augmented reality virtual-real fusion method and device based on spatial registration
CN111353985B (en) * 2020-03-02 2022-05-03 电子科技大学 Airport self-service consignment luggage detection method based on depth camera
CN112037277B (en) * 2020-07-31 2022-11-18 东南大学 Three-dimensional visualization method based on spine three-dimensional ultrasonic volume data
CN111862171B (en) * 2020-08-04 2021-04-13 万申(北京)科技有限公司 CBCT and laser scanning point cloud data tooth registration method based on multi-view fusion
CN112754658B (en) * 2020-12-31 2023-03-14 华科精准(北京)医疗科技有限公司 Operation navigation system
CN113610826A (en) * 2021-08-13 2021-11-05 推想医疗科技股份有限公司 Puncture positioning method and device, electronic device and storage medium
CN114092447B (en) * 2021-11-23 2022-07-22 北京阿尔法三维科技有限公司 Method, device and equipment for measuring scoliosis based on human body three-dimensional image
CN116993794B (en) * 2023-08-02 2024-05-24 德智鸿(上海)机器人有限责任公司 Virtual-real registration method and device for augmented reality surgery assisted navigation

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101170961A (en) * 2005-03-11 2008-04-30 布拉科成像S.P.A.公司 Methods and devices for surgical navigation and visualization with microscope
CN102208117A (en) * 2011-05-04 2011-10-05 西安电子科技大学 Method for constructing vertebral three-dimensional geometry and finite element mixture model
US20140276001A1 (en) * 2013-03-15 2014-09-18 Queen's University At Kingston Device and Method for Image-Guided Surgery
CN104138296A (en) * 2013-05-08 2014-11-12 天津市天堰医教科技开发有限公司 Surgical navigation system
CN105096311A (en) * 2014-07-01 2015-11-25 中国科学院科学传播研究中心 Technology for restoring depth image and combining virtual and real scenes based on GPU (Graphic Processing Unit)
CN108510580A (en) * 2018-03-28 2018-09-07 哈尔滨理工大学 A kind of vertebra CT image three-dimensional visualization methods
CN109310476A (en) * 2016-03-12 2019-02-05 P·K·朗 Apparatus and method for operation

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107025627B (en) * 2017-04-12 2019-10-11 中南大学 The movement of bone and calibration and quantization method close to parameter in CT images
CN109685732B (en) * 2018-12-18 2023-02-17 重庆邮电大学 High-precision depth image restoration method based on boundary capture

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CARS 2019—Computer Assisted Radiology and Surgery Proceedings of the 33rd International Congress and Exhibition; Chairman: Ulrich Bick, MD et al.; Int J CARS (2019); 20190618; full text *
Research on image fusion methods for surgical navigation based on augmented reality; Liu Xiaohong; China Master's Theses Full-text Database, Medicine and Health Sciences; 20150815; main text, page 11 paragraph 2 to page 12 third-to-last paragraph, and page 33 paragraph 2 to page 41 third-to-last paragraph *

Also Published As

Publication number Publication date
CN110264504A (en) 2019-09-20

Similar Documents

Publication Publication Date Title
CN110264504B (en) Three-dimensional registration method and system for augmented reality
CN110946654B (en) Bone surgery navigation system based on multimode image fusion
JP7204663B2 (en) Systems, apparatus, and methods for improving surgical accuracy using inertial measurement devices
EP3254621B1 (en) 3d image special calibrator, surgical localizing system and method
US7831096B2 (en) Medical navigation system with tool and/or implant integration into fluoroscopic image projections and method of use
CN107106241B (en) System for navigating to surgical instruments
CN101474075B (en) Navigation system of minimal invasive surgery
US9320569B2 (en) Systems and methods for implant distance measurement
US20210045715A1 (en) Three-dimensional imaging and modeling of ultrasound image data
CN111260786A (en) Intelligent ultrasonic multi-mode navigation system and method
CN108784832A (en) A kind of minimally invasive spine surgical augmented reality air navigation aid
US20080154120A1 (en) Systems and methods for intraoperative measurements on navigated placements of implants
CN202751447U (en) Vertebral pedicle internal fixation surgical navigation system based on structured light scanning
CN113950301A (en) System for computer guided surgery
CN106068451A (en) Operation device and using method thereof
WO2011134083A1 (en) System and methods for intraoperative guidance feedback
CN112168346A (en) Method for real-time coincidence of three-dimensional medical image and patient and operation auxiliary system
Bächler et al. Restricted surface matching—numerical optimization and technical evaluation
CN110891488A (en) Sagittal rotation determination
CN115153835A (en) Acetabular prosthesis placement guide system and method based on feature point registration and augmented reality
Maharjan et al. A novel visualization system of using augmented reality in knee replacement surgery: Enhanced bidirectional maximum correntropy algorithm
Fleute Shape reconstruction for computer assisted surgery based on non-rigid registration of statistical models with intra-operative point data and X-ray images
US20230123621A1 (en) Registering Intra-Operative Images Transformed from Pre-Operative Images of Different Imaging-Modality for Computer Assisted Navigation During Surgery
CN116485850A (en) Real-time non-rigid registration method and system for surgical navigation image based on deep learning
US11950951B2 (en) Systems and methods for C-arm fluoroscope camera pose refinement with secondary movement compensation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Cong Yue-sheng

Inventor before: Cong Yue

GR01 Patent grant