CN109785374B - Automatic real-time unmarked image registration method for navigation of dental augmented reality operation - Google Patents

Automatic real-time unmarked image registration method for navigation of dental augmented reality operation Download PDF

Info

Publication number
CN109785374B
CN109785374B CN201910061265.5A CN201910061265A CN109785374B
Authority
CN
China
Prior art keywords
oral cavity
model
dimensional
registration
dimensional oral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910061265.5A
Other languages
Chinese (zh)
Other versions
CN109785374A (en)
Inventor
王君臣
纪红蕾
孙振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kemai Qiyuan Technology Co ltd
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201910061265.5A priority Critical patent/CN109785374B/en
Publication of CN109785374A publication Critical patent/CN109785374A/en
Application granted granted Critical
Publication of CN109785374B publication Critical patent/CN109785374B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

The invention discloses an automatic real-time unmarked image registration method for dental augmented reality surgical navigation, and relates to the field of medical image processing. The method mainly comprises the following steps: first, computed tomography (CT) data of the patient are acquired; second, the acquired CT data are preprocessed (for example by threshold segmentation), the surgical planning model is added, and three-dimensional reconstruction is performed to generate a CT model; third, the stereo camera is calibrated and rectified; fourth, three-dimensional oral scanner data of the patient are collected; fifth, a three-dimensional oral scan model is generated; sixth, the three-dimensional oral scan model is registered with the CT model; and seventh, the three-dimensional oral scan model is registered with the stereo camera images. The invention does not require any marker to be attached to the patient and can be completed automatically. In addition, a new three-dimensional stereo matching algorithm is provided, which improves registration accuracy. Meanwhile, the stereo camera can also be used as a tracking device for tracking the pose of surgical instruments, thereby forming a virtual reality surgical navigation instance.

Description

Automatic real-time unmarked image registration method for navigation of dental augmented reality operation
Technical Field
The invention relates to the field of medical image processing, in particular to an automatic real-time unmarked image registration method for navigation of a dental augmented reality operation.
Background
In dental surgery, sophisticated medical computer-aided design and manufacturing techniques ensure accurate surgical planning based on the patient's computed tomography (CT) data. However, accurate surgical guidance and anatomical visualization are lacking during the operation itself: the surgeon operates using vision and experience, which generally cannot ensure that the surgical result is consistent with the preoperative plan. By virtue of its real-time interaction with the surrounding environment, augmented reality technology can superimpose the preoperative design and three-dimensional virtual models of important anatomical structures onto the surgical field of view, producing a real-time see-through effect that the surgeon can observe with the naked eye and improving the accuracy and safety of the operation.
The heart of augmented reality is image registration. According to the information used, registration methods can be divided into those based on gray-scale information, on the transform domain, and on features. Augmented reality surgical navigation mostly uses feature-based registration, which can be further divided into image registration based on artificial markers and image registration based on natural features. To bring augmented reality technology into clinical use in dental surgical navigation, image registration must, in addition to meeting the accuracy requirement, have the following properties: it should not add excessive extra work to the current surgical workflow; it should not cause additional harm to the patient; and it should execute quickly and update in real time. This calls for image registration based on natural features. However, despite the great progress made in computer vision in recent years, registering augmented reality systems to three-dimensional scenes based on natural feature points remains a serious challenge.
There has been some research on dental augmented reality surgical navigation. Researchers at the Ninth People's Hospital affiliated with Shanghai Jiao Tong University School of Medicine achieved image registration and tracking by having the patient wear a marker complex on the teeth in experimental research on augmented-reality-navigated mandibular osteotomy. The patent "Mouthpiece-based medical image registration method" (201610669284) achieves registration and tracking with positioning grooves provided on a mouthpiece. The patent "Dental robot path planning system and method based on visual navigation" (201810569970) achieves registration and tracking by mounting optical markers on a hand-held probe and in the patient's oral cavity. All of these methods rely on worn markers, and this kind of image registration has the following disadvantages: first, wearing the mouthpiece adds considerable extra work; second, mismatch between the actual and designed positions of the marker points causes tracking error; and third, to keep the markers visible, a bracket must be extended from the mouthpiece to hold the tracking markers, and the bracket's low rigidity and long error-propagation path significantly increase the tracking error.
Disclosure of Invention
The purpose of the invention is as follows:
The invention overcomes two problems of existing dental augmented reality surgical navigation: the dependence of image registration on artificial markers, and the loss of registration accuracy caused by directly registering the CT model to the camera images when their boundary shapes are inconsistent. It provides an automatic real-time unmarked image registration method.
The technical scheme is as follows:
The invention provides an automatic real-time unmarked image registration method for dental augmented reality surgical navigation, which comprises the following steps:
Step one: CT data acquisition. Preoperatively, the patient receives a CT scan to obtain CT data for diagnosis and surgical planning.
Step two: CT model generation. The CT data from step one are processed: the patient's teeth, jaw bones and surrounding key anatomical tissues are extracted by threshold segmentation, virtual implants such as dental implants or drilling and cutting positions are designed, and three-dimensional reconstruction is performed to generate a three-dimensional CT model.
Step three: stereo camera calibration and rectification. The camera parameters of the two optical cameras are obtained so that the two camera image planes can be aligned in a common plane.
Step four: three-dimensional oral scanner data acquisition. Intraoperatively, the teeth are scanned with a three-dimensional oral scanner to obtain oral scanner data, and as much of the tooth area as possible is kept exposed to the camera.
Step five: three-dimensional oral scan model generation. Three-dimensional reconstruction is performed on the oral scanner data obtained in step four; boundary lines are drawn quickly on the resulting model with a curvature-based semi-automatic segmentation tool, and the exposed tooth portion not covered by the gums is extracted to obtain the three-dimensional oral scan model.
Step six: registration of the three-dimensional oral scan model with the CT model. Initial registration based on principal component analysis (PCA) is performed first, followed by fine registration based on the iterative closest point (ICP) algorithm, transforming the three-dimensional oral scan model into the CT space. The whole registration process is completed automatically and is executed only once.
Step seven: registration of the three-dimensional oral scan model (transformed to CT space) with the stereo camera images. The registration proceeds from coarse to fine: initial registration uses a pyramid hierarchical algorithm, and fine registration is then achieved by iterative solution with a quasi-Newton method. The geometric relationship between the CT space and the camera space is thereby analyzed dynamically.
Compared with the prior art, the invention has the beneficial effects that:
1. The present invention does not rely on artificial markers. Registration is achieved directly by matching the three-dimensional model with the stereo images, so it causes no invasion of or discomfort to the patient.
2. Registration requires little extra work: only the patient's teeth need to be visible to the camera, and registration can be completed almost in real time, so interference with the current surgical workflow is minimized.
3. The invention overcomes the loss of registration accuracy caused by directly registering the CT model to the camera images when their boundary shapes are inconsistent, and provides a new three-dimensional stereo matching algorithm that improves registration accuracy.
4. The stereo camera can also be used as a tracking device for tracking the pose of surgical instruments, thereby forming a virtual reality surgical navigation instance.
Drawings
FIG. 1 is a flow chart of the automatic real-time unmarked image registration method.
FIG. 2 shows the process of registering the three-dimensional oral scan model with the CT model.
FIG. 3 shows the initial registration of the three-dimensional oral scan model with the stereo camera images.
FIG. 4 shows the fine registration of the three-dimensional oral scan model with the stereo camera images.
Detailed Description
The technical solution of the present invention is further described below with reference to the accompanying drawings, which serve to explain and not to limit the present invention.
The invention provides an automatic real-time unmarked image registration method for dental augmented reality surgical navigation. The three-dimensional oral scan model, the CT model and the stereo camera images are registered in sequence, and the spatial relationship between the CT space and the stereo camera is finally analyzed dynamically according to the transformation chain. Referring to fig. 1, the implementation steps are described in detail as follows:
(I) CT data acquisition. Preoperatively, the patient receives a CT scan to obtain CT data for diagnosis and surgical planning.
(II) CT model generation. The CT data from step (I) are processed: the patient's teeth, jaw bones and surrounding key anatomical tissues are extracted by threshold segmentation, virtual implants such as dental implants or drilling and cutting positions are designed, and three-dimensional reconstruction is performed to generate a three-dimensional CT model.
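As a concrete illustration of this step (not part of the patent), the following sketch assumes the CT volume is available as a NumPy array of Hounsfield units and uses an illustrative threshold; it shows how threshold segmentation and surface reconstruction could be done with scikit-image:

    import numpy as np
    from skimage import measure

    HU_THRESHOLD = 1500  # illustrative Hounsfield threshold for teeth/bone; tuned per patient

    def reconstruct_ct_model(ct_volume, voxel_spacing, threshold=HU_THRESHOLD):
        """Threshold-segment a CT volume (z, y, x array of HU values) and
        reconstruct a triangle mesh with marching cubes."""
        mask = ct_volume >= threshold                     # threshold segmentation
        verts, faces, normals, _ = measure.marching_cubes(
            mask.astype(np.float32), level=0.5, spacing=voxel_spacing)
        return verts, faces, normals                      # vertex coordinates in mm

In practice the segmented key tissues and the designed virtual implants (implant positions, drilling or cutting paths) would be merged into the same scene before rendering.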
(III) Stereo camera calibration and rectification. The camera parameters of the two optical cameras are obtained so that the two camera image planes can be aligned in a common plane. The stereo camera is used to track the patient's teeth.
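A minimal sketch of this step using OpenCV, assuming a checkerboard calibration target; the names objpoints, imgpoints_l, imgpoints_r and image_size stand for calibration data that would be collected beforehand and are not specified by the patent:

    import cv2

    def calibrate_and_rectify(objpoints, imgpoints_l, imgpoints_r, image_size):
        """objpoints: checkerboard corner coordinates in the board frame (one array per view);
        imgpoints_l / imgpoints_r: matching pixel detections in the left / right images."""
        _, K1, d1, _, _ = cv2.calibrateCamera(objpoints, imgpoints_l, image_size, None, None)
        _, K2, d2, _, _ = cv2.calibrateCamera(objpoints, imgpoints_r, image_size, None, None)
        # Stereo calibration recovers the rotation R and translation T between the two cameras.
        _, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
            objpoints, imgpoints_l, imgpoints_r, K1, d1, K2, d2, image_size,
            flags=cv2.CALIB_FIX_INTRINSIC)
        # Rectification aligns both image planes in a common plane (row-aligned epipolar lines).
        R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, image_size, R, T)
        return K1, d1, K2, d2, R, T, R1, R2, P1, P2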
(IV) Three-dimensional oral scanner data acquisition. Intraoperatively, the teeth are scanned with a three-dimensional oral scanner to obtain oral scanner data, and as much of the tooth area as possible is kept exposed to the camera.
(V) Three-dimensional oral scan model generation. Three-dimensional reconstruction is performed on the oral scanner data obtained in step (IV); boundary lines are drawn quickly on the resulting model with a curvature-based semi-automatic segmentation tool, and the tooth portion not covered by the gums is extracted to obtain the three-dimensional oral scan model.
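The segmentation tool itself is interactive; the sketch below only illustrates the curvature cue it relies on, assuming the scan mesh is loaded with the trimesh library and using an arbitrary curvature threshold (both are assumptions, not the patent's implementation):

    import numpy as np
    import trimesh
    from trimesh.curvature import discrete_mean_curvature_measure

    def gum_line_candidates(scan_path, radius=0.5, curvature_threshold=1.0):
        """Return indices of vertices with high local mean curvature, which are candidate
        points on the tooth/gum boundary line that the semi-automatic tool lets the user
        refine before the exposed crowns are extracted."""
        mesh = trimesh.load(scan_path)
        curvature = discrete_mean_curvature_measure(mesh, mesh.vertices, radius)
        return np.nonzero(np.abs(curvature) > curvature_threshold)[0]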
(VI) Registration of the three-dimensional oral scan model with the CT model (see fig. 2). Because the width, length and height of the three-dimensional bounding box of the teeth differ markedly, initial registration based on principal component analysis (PCA) is performed first, followed by fine registration based on the iterative closest point (ICP) algorithm, transforming the three-dimensional oral scan model into the CT space. The whole registration process is completed automatically and is executed only once.
The PCA-based initial registration is solved with a quaternion and singular value decomposition (SVD) method. The PCA procedure is briefly described as follows:
The data are centered (the mean is subtracted) to obtain a zero-mean data matrix X.
SVD is performed on X to obtain X = UΣVᵀ. The covariance matrix C = XᵀX then has eigenvalues Λ = Σ², with the corresponding eigenvectors given by the columns of V. The eigenvalues are sorted in descending order, and the eigenvectors corresponding to the first three eigenvalues are the three principal directions.
The three principal directions of the CT model and of the three-dimensional oral scan model are computed respectively, and the respective centroids are added to obtain 4 corresponding feature point pairs. The quaternion (w, x, y, z) is calculated using the construction matrix of the covariance matrix of these point pairs, and the rotation matrix R₀ and translation vector s₀ are obtained:

    R₀ = | w²+x²−y²−z²   2(xy−wz)       2(xz+wy)     |
         | 2(xy+wz)      w²−x²+y²−z²    2(yz−wx)     |
         | 2(xz−wy)      2(yz+wx)       w²−x²−y²+z²  |

    s₀ = c_P − R₀ c_Q

where c_P and c_Q are the centroids of the CT model and of the three-dimensional oral scan model. This completes the initial registration with the initial registration transformation T₀ = (R₀, s₀), and the oral scan model is updated as Q ← T₀Q.
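A minimal sketch of this coarse alignment, assuming both models are available as N×3 NumPy point arrays; for brevity it aligns the principal axes directly (with a determinant check) rather than going through the quaternion construction described above, which is an implementation simplification and not the patent's exact formulation:

    import numpy as np

    def pca_frame(points):
        """Centroid and principal axes (as matrix columns) of a point cloud."""
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
        return centroid, vt.T

    def pca_coarse_registration(ct_points, scan_points):
        """Rigid transform (R0, s0) roughly mapping the oral-scan points onto the CT points.
        The sign ambiguity of the principal axes is only partially handled here."""
        c_p, V_p = pca_frame(ct_points)
        c_q, V_q = pca_frame(scan_points)
        R0 = V_p @ V_q.T
        if np.linalg.det(R0) < 0:            # keep a proper rotation (no reflection)
            V_q[:, 2] *= -1
            R0 = V_p @ V_q.T
        s0 = c_p - R0 @ c_q
        return R0, s0

    # Usage: scan_in_ct = scan_points @ R0.T + s0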
In the ICP-based fine registration, the rotation matrix is solved by singular value decomposition.
For the CT model P and the three-dimensional oral scan model Q, a k-d tree is used to find, for each point qᵢ of Q, the point pᵢ of P closest to it in Euclidean distance; these closest points form the correspondence set D.
The objective function is computed:

    f = (1/N) Σᵢ ‖pᵢ − qᵢ‖²

where N is the number of corresponding point pairs. If f is smaller than a given threshold or the iteration count reaches a given limit, the iteration stops; otherwise the following quantities are computed:

    q₀ = (1/N) Σᵢ qᵢ ,   p₀ = (1/N) Σᵢ pᵢ

    W = Σᵢ (pᵢ − p₀)(qᵢ − q₀)ᵀ

SVD is performed on W, W = UΣVᵀ, and the rotation and translation matrices are obtained as R = UVᵀ and s = p₀ − Rq₀. The registration transformation T = (R, s) is used to update Q ← TQ, the correspondence set D is updated with the k-d tree, and the objective function f is computed again.
The ICP-based fine registration is complete once the iteration terminates.
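A compact sketch of the ICP loop just described, assuming N×3 NumPy point arrays and using SciPy's k-d tree; the threshold and iteration limit are illustrative values:

    import numpy as np
    from scipy.spatial import cKDTree

    def icp_fine_registration(ct_points, scan_points, max_iter=50, tol=1e-4):
        """Refine the oral-scan points against the CT points (both already in CT space
        after the PCA-based initial registration). Returns the refined scan points."""
        tree = cKDTree(ct_points)
        q = scan_points.copy()
        for _ in range(max_iter):
            _, idx = tree.query(q)                        # closest CT point for each scan point
            p = ct_points[idx]                            # correspondence set D
            f = np.mean(np.sum((p - q) ** 2, axis=1))     # objective function
            if f < tol:
                break
            p0, q0 = p.mean(axis=0), q.mean(axis=0)       # centroids
            W = (p - p0).T @ (q - q0)                     # 3x3 cross-covariance matrix
            U, _, Vt = np.linalg.svd(W)
            R = U @ Vt
            if np.linalg.det(R) < 0:                      # guard against a reflection
                U[:, 2] *= -1
                R = U @ Vt
            s = p0 - R @ q0
            q = q @ R.T + s                               # update Q <- TQ
        return q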
(VII) Registration of the three-dimensional oral scan model (transformed into the CT space) with the stereo camera images. The registration proceeds from coarse to fine: initial registration uses a pyramid hierarchical algorithm (see fig. 3), and fine registration is then achieved by iterative solution with a quasi-Newton method (see fig. 4). The geometric relationship between the CT space and the camera space is thereby analyzed dynamically.
The search strategy of the initial registration is as follows: a virtual camera with the same intrinsic parameters as the real camera is set up, and thousands of viewpoints are placed regularly at different positions, each rendering a two-dimensional view of the three-dimensional oral scan model. The views are clustered by similarity, with each cluster corresponding to a group of viewpoints; after clustering, the views are down-sampled to the next higher pyramid level and the clustering is repeated. This yields template views at every pyramid level and builds the pyramid view database.
In the online search stage of the initial registration, an image pyramid is generated from the stereo camera image and the search starts at the topmost level: the similarity measure between the stereo camera image at that level and the template views in the database is computed, the cluster template with the highest similarity is selected, and matching continues down the pyramid along that cluster. The viewpoint corresponding to the bottom-level pyramid template view with the highest similarity measure is the best viewpoint, which determines the initial pose.
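The offline clustering and database construction are elaborate; the simplified sketch below only illustrates the online coarse search over a Gaussian image pyramid, and it uses OpenCV template matching at the coarsest level as a stand-in for the gradient-based similarity measure defined next (a full implementation would refine candidates level by level along the selected cluster):

    import cv2

    def build_pyramid(image, levels=4):
        """Gaussian image pyramid; index 0 is full resolution, the last entry is coarsest."""
        pyramid = [image]
        for _ in range(levels - 1):
            pyramid.append(cv2.pyrDown(pyramid[-1]))
        return pyramid

    def coarse_viewpoint_search(image, template_views, levels=4):
        """template_views: {viewpoint_id: rendered template image}. Returns the id of the
        template view that matches best at the coarsest pyramid level."""
        coarse_image = build_pyramid(image, levels)[-1]
        best_id, best_score = None, -1.0
        for view_id, template in template_views.items():
            coarse_template = build_pyramid(template, levels)[-1]
            score = cv2.matchTemplate(coarse_image, coarse_template,
                                      cv2.TM_CCOEFF_NORMED).max()
            if score > best_score:
                best_id, best_score = view_id, score
        return best_id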
The similarity measure function is:

    s = (1/N) Σᵢ (dᵢ · eᵢ) / (‖dᵢ‖ ‖eᵢ‖)

wherein N is the number of pixel points, eᵢ denotes the gradient of the stereo camera image at the point (xᵢ, yᵢ), and dᵢ denotes the normal vector of the two-dimensional projection contour of the three-dimensional oral scan model at the point (xᵢ, yᵢ).
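A sketch of this similarity measure, assuming the projected contour points and their normals are given and the image gradients are taken with Sobel filters (the exact normalization used in the patent's figure is not reproduced, so treat the details as assumptions):

    import cv2
    import numpy as np

    def similarity_measure(image_gray, contour_pts, contour_normals, eps=1e-8):
        """contour_pts: (N, 2) integer (x, y) pixel positions of the projected contour.
        contour_normals: (N, 2) normal vectors d_i of the contour at those points."""
        gx = cv2.Sobel(image_gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(image_gray, cv2.CV_32F, 0, 1, ksize=3)
        x, y = contour_pts[:, 0], contour_pts[:, 1]
        e = np.stack([gx[y, x], gy[y, x]], axis=1)        # image gradients e_i at the contour
        dot = np.sum(contour_normals * e, axis=1)         # d_i . e_i
        norms = np.linalg.norm(contour_normals, axis=1) * (np.linalg.norm(e, axis=1) + eps)
        return float(np.mean(dot / norms))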
The precise registration adopts an iterative solution idea.
First, the three-dimensional oral scan model is projected onto the left and right camera images to obtain two-dimensional projection contour point sets, denoted Γₗ and Γᵣ; the corresponding three-dimensional model point sets are Lₗ and Lᵣ, respectively.
Then, Laplacian-of-Gaussian (LoG) filtering is applied to the stereo images, and the zero crossings of the filtered result are detected to obtain the image edges.
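A sketch of the LoG edge extraction, assuming a grayscale image as a float array; testing for a sign change between horizontally or vertically adjacent pixels is one common way to detect the zero crossings:

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def log_edges(image_gray, sigma=2.0):
        """Boolean edge map from zero crossings of the LoG-filtered image."""
        response = gaussian_laplace(image_gray.astype(np.float32), sigma=sigma)
        edges = np.zeros_like(response, dtype=bool)
        # an edge lies where the LoG response changes sign towards the right or lower neighbour
        edges[:, :-1] |= np.signbit(response[:, :-1]) != np.signbit(response[:, 1:])
        edges[:-1, :] |= np.signbit(response[:-1, :]) != np.signbit(response[1:, :])
        return edges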
Finally, the following function is solved with the BFGS algorithm, a quasi-Newton method:

    f(R, s) = (1/Nₗ) Σ_{X ∈ Lₗ} dist( K(RX + s), yₗ(X) )² + (1/Nᵣ) Σ_{X ∈ Lᵣ} dist( K(RX + s − [b, 0, 0]ᵀ), yᵣ(X) )²

where dist(x, y) denotes the Euclidean distance between the non-homogeneous coordinates of x and y; yₗ(X) and yᵣ(X) are the detected edge points closest to the projections of X in the left and right images; Nₗ and Nᵣ are the numbers of points in Γₗ and Γᵣ; K is the intrinsic matrix of the camera; and b is the camera baseline length. Once the function is solved, the pose is updated and the next iteration is performed, until the change in pose is smaller than a threshold.
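A schematic sketch of this refinement using SciPy's BFGS optimizer; the pose is parameterized as an axis-angle rotation plus translation, and a distance transform of the LoG edge map stands in for the point-to-edge distance. The helper names and the nearest-pixel sampling are simplifications assumed here, not the patent's exact implementation:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.ndimage import distance_transform_edt
    from scipy.spatial.transform import Rotation

    def refine_pose(K, baseline, L_l, L_r, edges_l, edges_r, pose0):
        """pose0: 6-vector (axis-angle rotation, translation). edges_l / edges_r: boolean
        LoG edge maps. L_l / L_r: (N, 3) model points whose projections form the contours."""
        # Distance to the nearest edge pixel plays the role of dist(., .) in the cost.
        dt_l = distance_transform_edt(~edges_l)
        dt_r = distance_transform_edt(~edges_r)

        def project(points, R, t, baseline_shift=0.0):
            cam = points @ R.T + t
            cam[:, 0] -= baseline_shift          # the right camera is offset by the baseline
            uv = cam @ K.T
            return uv[:, :2] / uv[:, 2:3]

        def sample(dt, uv):
            # Nearest-pixel lookup; a full implementation would interpolate the distance
            # transform to get smoother numerical gradients for the optimizer.
            h, w = dt.shape
            u = np.clip(np.rint(uv[:, 0]).astype(int), 0, w - 1)
            v = np.clip(np.rint(uv[:, 1]).astype(int), 0, h - 1)
            return dt[v, u]

        def cost(pose):
            R = Rotation.from_rotvec(pose[:3]).as_matrix()
            t = pose[3:]
            d_l = sample(dt_l, project(L_l, R, t))
            d_r = sample(dt_r, project(L_r, R, t, baseline_shift=baseline))
            return np.mean(d_l ** 2) + np.mean(d_r ** 2)

        return minimize(cost, pose0, method="BFGS").x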
Finally, the geometric relationship between the CT space and the camera space is analyzed dynamically according to the transformation chain.
To evaluate the accuracy of the method, an in vitro experiment was carried out on a resin tooth model; the results show that the registration error is less than 0.5 mm. In real-time operation, completing one registration currently takes about 0.5 seconds.
It should be noted that the parameters and methods described here are not exclusive; all obvious modifications and similar methods with equivalent substitutions fall within the scope of the present invention, provided they do not depart from the principle of the automatic real-time unmarked image registration method for dental augmented reality surgical navigation.

Claims (2)

1. An automatic real-time unmarked image registration method for dental augmented reality surgical navigation, characterized in that the method comprises the following steps:
the method comprises the following steps: acquiring Computed Tomography (CT) data of a patient;
step two: carrying out threshold segmentation preprocessing on the acquired CT data, extracting the key tissues, adding them to the surgical planning model, and carrying out three-dimensional reconstruction to generate a CT model;
step three: calibrating and correcting a stereo camera;
step four: collecting data of a three-dimensional oral cavity scanner of a patient;
step five: preprocessing the collected three-dimensional oral cavity scanning data to generate a three-dimensional oral cavity scanning model;
step six: registering the three-dimensional oral cavity scanning model with the CT model by a principal component analysis and iterative closest point method, so that the three-dimensional oral cavity scanning model is transformed into the CT space;
step seven: registering the three-dimensional oral cavity scanning model converted into the CT space with a stereo camera image by adopting a pyramid layering algorithm and a quasi-Newton method, thereby dynamically analyzing the geometric relationship between the CT space and the camera space;
in step seven, the three-dimensional oral cavity scanning model is registered with the stereo camera image: initial registration is performed first with a pyramid hierarchical algorithm, and fine registration is then achieved by iterative solution with a quasi-Newton method, so that the geometric relationship between the CT space and the camera space is analyzed dynamically;
the initial registration is performed with the pyramid hierarchical algorithm, and its search strategy is as follows: a virtual camera with the same internal parameters as the real camera is set, thousands of viewpoints are regularly set at different positions, and thousands of corresponding two-dimensional views of the three-dimensional oral cavity scanning model are rendered; the views are clustered according to similarity, each class corresponding to a group of viewpoints; after clustering is completed, down-sampling to a higher pyramid level is performed and the clustering process is repeated, forming pyramid-level template views and establishing a pyramid view database;
in the online search stage of the initial registration, an image pyramid is generated from the stereo camera image, the search starts from the topmost pyramid level, the similarity measure between the stereo camera image and the template views is calculated, the cluster template with the highest similarity is selected, matching of the lower pyramid template views continues along the cluster, and finally the viewpoint corresponding to the bottommost pyramid template view with the highest similarity measure is the optimal viewpoint, namely the initial pose is determined;
the fine registration adopts an iterative solution approach: the three-dimensional oral cavity scanning model is projected to obtain two-dimensional projection contour points, the image edges are obtained by applying Laplacian-of-Gaussian (LoG) filtering to the stereo camera image, and finally the following function is solved with the BFGS algorithm, one of the quasi-Newton methods:
    f(R, s) = (1/Nₗ) Σ_{X ∈ Lₗ} dist( K(RX + s), yₗ(X) )² + (1/Nᵣ) Σ_{X ∈ Lᵣ} dist( K(RX + s − [b, 0, 0]ᵀ), yᵣ(X) )²

where Nₗ and Nᵣ are respectively the numbers of left and right projection contour points of the three-dimensional oral cavity scanning model, Lₗ and Lᵣ are the corresponding three-dimensional model point sets, yₗ(X) and yᵣ(X) are the detected edge points closest to the projections of X, dist(·,·) denotes the Euclidean distance between non-homogeneous coordinates, K is the internal reference matrix of the camera, and b is the camera baseline length; once the function is solved, the pose is updated and the next iteration is performed until the pose change is smaller than a threshold;
and finally, dynamically analyzing the geometric relation between the CT space and the camera space according to the transformation chain.
2. The automatic real-time unmarked image registration method for dental augmented reality surgical navigation according to claim 1, characterized in that: the three-dimensional oral cavity scanning model is registered with the CT model by first performing initial registration based on principal component analysis (PCA), then performing fine registration based on the iterative closest point (ICP) algorithm, and finally transforming the three-dimensional oral cavity scanning model into the CT space;
the PCA-based initial registration is solved with a quaternion and singular value decomposition (SVD) method: the three principal directions of the CT model and of the three-dimensional oral cavity scanning model are computed, the respective centroids are added to obtain 4 corresponding feature point pairs, the quaternion (w, x, y, z) is calculated using the construction matrix of the covariance matrix, and the rotation matrix R₀ and translation vector s₀ are obtained, completing the initial registration with the initial registration transformation T₀ = (R₀, s₀) and Q' = T₀Q, where Q is the three-dimensional oral cavity scanning model and Q' is the three-dimensional oral cavity scanning model registered with the CT model and transformed into the CT space;
the ICP-based fine registration solves the rotation matrix by singular value decomposition (SVD): a k-d tree is used to find, in the CT model, the point closest in Euclidean distance to each point of the three-dimensional oral cavity scanning model, forming a correspondence point set; the registration transformation T = (R, s) is computed iteratively by SVD and used to update the pose of the three-dimensional oral cavity scanning model, iterating until the objective function is smaller than a given threshold or a given number of iterations is reached.
CN201910061265.5A 2019-01-23 2019-01-23 Automatic real-time unmarked image registration method for navigation of dental augmented reality operation Active CN109785374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910061265.5A CN109785374B (en) 2019-01-23 2019-01-23 Automatic real-time unmarked image registration method for navigation of dental augmented reality operation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910061265.5A CN109785374B (en) 2019-01-23 2019-01-23 Automatic real-time unmarked image registration method for navigation of dental augmented reality operation

Publications (2)

Publication Number Publication Date
CN109785374A CN109785374A (en) 2019-05-21
CN109785374B true CN109785374B (en) 2020-12-04

Family

ID=66501873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910061265.5A Active CN109785374B (en) 2019-01-23 2019-01-23 Automatic real-time unmarked image registration method for navigation of dental augmented reality operation

Country Status (1)

Country Link
CN (1) CN109785374B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110123453B (en) * 2019-05-31 2021-07-23 东北大学 Operation navigation system based on unmarked augmented reality
CN110215281B (en) * 2019-06-11 2020-07-10 北京和华瑞博医疗科技有限公司 Femur or tibia registration method and device based on total knee replacement surgery
CN112168348A (en) * 2019-07-03 2021-01-05 钜旺生技股份有限公司 Positioning and navigation system for operation and operation method thereof
CN110459083B (en) 2019-08-22 2020-08-04 北京众绘虚拟现实技术研究院有限公司 Vision-touch fused augmented reality oral surgery skill training simulator
CN110751681B (en) * 2019-10-18 2022-07-08 西南科技大学 Augmented reality registration method, device, equipment and storage medium
CN111476832A (en) * 2020-03-23 2020-07-31 杭州柳叶刀机器人有限公司 Acetabular cup registration method
CN111476905B (en) * 2020-04-04 2023-11-21 哈尔滨理工大学 Robot-assisted tooth preparation simulation system based on augmented reality
WO2021242681A1 (en) * 2020-05-29 2021-12-02 Covidien Lp System and method for integrated control of 3d visualization through a surgical robotic system
CN111743628A (en) * 2020-07-18 2020-10-09 纽智医疗科技(苏州)有限公司 Automatic puncture mechanical arm path planning method based on computer vision
CN112330723B (en) * 2020-09-22 2023-08-01 广东工业大学 Physical-to-image/image-to-physical automatic registration method
CN112155732B (en) * 2020-09-29 2022-05-17 苏州微创畅行机器人有限公司 Readable storage medium, bone modeling and registering system and bone surgery system
CN114305682B (en) * 2020-09-29 2023-09-22 上海微创卜算子医疗科技有限公司 Neural intervention navigation real-time positioning system and method
CN112732072B (en) * 2020-12-21 2023-09-29 航天信息股份有限公司 Equipment intelligent maintenance system based on VR/AR
CN113052883B (en) * 2021-04-02 2024-02-02 北方工业大学 Fused reality operation navigation registration system and method in large-scale dynamic environment
CN113823000A (en) * 2021-09-26 2021-12-21 上海交通大学医学院附属第九人民医院 Enhanced display method, system, device and storage medium based on head
CN114145846B (en) * 2021-12-06 2024-01-09 北京理工大学 Operation navigation method and system based on augmented reality assistance
CN117372317A (en) * 2022-06-30 2024-01-09 武汉联影智融医疗科技有限公司 Registration method, registration apparatus, computer device, and readable storage medium
CN115645044A (en) * 2022-11-04 2023-01-31 福州大学 Oral implant image superposition method based on no-marker
CN115830287B (en) * 2023-02-20 2023-12-12 汉斯夫(杭州)医学科技有限公司 Tooth point cloud fusion method, device and medium based on laser mouth scanning and CBCT reconstruction
CN116563379B (en) * 2023-07-06 2023-09-29 湖南卓世创思科技有限公司 Marker positioning method, device and system based on model fusion
CN116758128B (en) * 2023-08-23 2023-11-17 深圳卡尔文科技有限公司 Method, system and storage medium for registration of oral implantation surgery

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104835112A (en) * 2015-05-07 2015-08-12 厦门大学 Liver multi-phase CT image fusion method
CN105279762A (en) * 2015-11-20 2016-01-27 北京航空航天大学 An oral cavity soft and hard tissue CT sequence and three-dimensional grid model registration method
CN106327587A (en) * 2016-11-16 2017-01-11 北京航空航天大学 Laparoscope video precision fusion method for enhancing real surgical navigation
CN106560163A (en) * 2015-09-30 2017-04-12 合肥美亚光电技术股份有限公司 Surgical navigation system and registration method of surgical navigation system
CN107536643A (en) * 2017-08-18 2018-01-05 北京航空航天大学 A kind of augmented reality operation guiding system of Healing in Anterior Cruciate Ligament Reconstruction
CN108765474A (en) * 2018-04-17 2018-11-06 天津工业大学 A kind of efficient method for registering for CT and optical scanner tooth model

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104835112A (en) * 2015-05-07 2015-08-12 厦门大学 Liver multi-phase CT image fusion method
CN106560163A (en) * 2015-09-30 2017-04-12 合肥美亚光电技术股份有限公司 Surgical navigation system and registration method of surgical navigation system
CN105279762A (en) * 2015-11-20 2016-01-27 北京航空航天大学 An oral cavity soft and hard tissue CT sequence and three-dimensional grid model registration method
CN106327587A (en) * 2016-11-16 2017-01-11 北京航空航天大学 Laparoscope video precision fusion method for enhancing real surgical navigation
CN107536643A (en) * 2017-08-18 2018-01-05 北京航空航天大学 A kind of augmented reality operation guiding system of Healing in Anterior Cruciate Ligament Reconstruction
CN108765474A (en) * 2018-04-17 2018-11-06 天津工业大学 A kind of efficient method for registering for CT and optical scanner tooth model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Coarse-to-fine dot array marker detection with accurate edge localization for stereo visual tracking"; Junchen Wang et al.; Biomedical Signal Processing and Control; 2014-10-16; pp. 49-59 *
"Video see-through augmented reality for oral and maxillofacial surgery"; Junchen Wang et al.; The International Journal of Medical Robotics and Computer Assisted Surgery; 2016-06-09; pp. 1-14 *
"Three-dimensional registration technology for surgical navigation based on the ICP algorithm" (基于ICP算法的手术导航三维配准技术); Junchen Wang et al.; Journal of Beijing University of Aeronautics and Astronautics; April 2009; Vol. 35, No. 4; pp. 435-437 *

Also Published As

Publication number Publication date
CN109785374A (en) 2019-05-21

Similar Documents

Publication Publication Date Title
CN109785374B (en) Automatic real-time unmarked image registration method for navigation of dental augmented reality operation
CN110946654B (en) Bone surgery navigation system based on multimode image fusion
CN109310488B (en) Method for estimating at least one of shape, position and orientation of a dental restoration
CN112200843B (en) Super-voxel-based CBCT and laser scanning point cloud data tooth registration method
Mahfouz et al. A robust method for registration of three-dimensional knee implant models to two-dimensional fluoroscopy images
Livyatan et al. Gradient-based 2-D/3-D rigid registration of fluoroscopic X-ray to CT
EP1486900A1 (en) Method and system for manufacturing a surgical guide
CN110264504B (en) Three-dimensional registration method and system for augmented reality
US20060127854A1 (en) Image based dentition record digitization
US11961238B2 (en) Tooth segmentation using tooth registration
CN102576465B (en) Method for digitizing dento-maxillofacial objects
WO2011134083A1 (en) System and methods for intraoperative guidance feedback
CN110946652B (en) Method and device for planning screw path of bone screw
CN109692050A (en) A kind of calibration, method for tracing and the device of dentistry plantation navigating surgery
CN111685899A (en) Dental orthodontic treatment monitoring method based on intraoral images and three-dimensional models
CN110236673B (en) Database-based preoperative design method and device for reconstruction of bilateral jaw defects
CN112509022A (en) Non-calibration object registration method for preoperative three-dimensional image and intraoperative perspective image
CN113633377B (en) Tibia optimization registration system and method for tibia high osteotomy
CA3200325A1 (en) Method for automatically detecting landmark in three-dimensional dental scan data, and computer-readable recording medium with program for executing same in computer recorded thereon
Chaoui et al. Recognition-based segmentation and registration method for image guided shoulder surgery
CN115578320A (en) Full-automatic space registration method and system for orthopedic surgery robot
CN114283179A (en) Real-time fracture far-near end space pose acquisition and registration system based on ultrasonic images
Qin et al. Registration in oral and maxillofacial surgery
Mei et al. Registration of the Cone Beam CT and Blue‐Ray Scanned Dental Model Based on the Improved ICP Algorithm
CN116725641B (en) Craniocerebral puncture template construction method based on grid patch processing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210126

Address after: 100089 floor 230, building 2, Tiandi Linfeng, No.1, yongtaizhuang North Road, Haidian District, Beijing

Patentee after: Beijing Kemai Qiyuan Technology Co.,Ltd.

Address before: 100191 No. 37, Haidian District, Beijing, Xueyuan Road

Patentee before: BEIHANG University

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220214

Address after: Room 3046, floor 3, building 1, No. 1, Desheng South Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing 100176

Patentee after: Beijing Kemai Xuanji Medical Technology Co.,Ltd.

Address before: 100089 floor 230, building 2, Tiandi Linfeng, No.1, yongtaizhuang North Road, Haidian District, Beijing

Patentee before: Beijing Kemai Qiyuan Technology Co.,Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220608

Address after: 100192 floor 230, building 2, Tiandi Linfeng, No.1, yongtaizhuang North Road, Haidian District, Beijing

Patentee after: Beijing Kemai Qiyuan Technology Co.,Ltd.

Address before: Room 3046, floor 3, building 1, No. 1, Desheng South Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing 100176

Patentee before: Beijing Kemai Xuanji Medical Technology Co.,Ltd.

TR01 Transfer of patent right