CN112472293B - Registration method of preoperative three-dimensional image and intraoperative perspective image - Google Patents


Info

Publication number
CN112472293B
CN112472293B (application CN202011471312.2A)
Authority
CN
China
Prior art keywords
perspective
image
dimensional image
point
similarity
Prior art date
Legal status: Active (the status is an assumption, not a legal conclusion)
Application number
CN202011471312.2A
Other languages
Chinese (zh)
Other versions
CN112472293A
Inventor
王炳强
孙世民
孙之建
Current Assignee
Shandong Weigao Medical Technology Co Ltd
Original Assignee
Shandong Weigao Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shandong Weigao Medical Technology Co Ltd filed Critical Shandong Weigao Medical Technology Co Ltd
Priority to CN202011471312.2A
Publication of CN112472293A
Application granted
Publication of CN112472293B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; manipulators or robots specially adapted for use in surgery
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/101: Computer-aided simulation of surgical operations
    • A61B 2034/105: Modelling of the patient, e.g. for ligaments or bones
    • A61B 2034/107: Visualisation of planned trajectories or target regions
    • A61B 2034/108: Computer-aided selection or customisation of medical implants or cutting guides
    • A61B 2034/2046: Tracking techniques
    • A61B 2034/2065: Tracking using image or pattern recognition

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Robotics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides a registration method of a preoperative three-dimensional image and an intraoperative perspective image, which comprises the following steps of S1, calculating a conversion relation from world coordinates to a perspective view in a magnetic field range; s2, calculating camera parameters corresponding to the perspective view through the conversion relation; s3, reconstructing the preoperative three-dimensional image into a 3D bone model; s4, re-projecting the 3D bone model by using the camera parameters and generating a two-dimensional image; s5, calculating the similarity between the two-dimensional image and the perspective view, and recording the similarity as A, wherein the current maximum similarity is equal to A; s6, adjusting camera parameters through random numbers, repeating the step S4 to generate a new two-dimensional image, and calculating the similarity between the two-dimensional image and a perspective view; s7, judging whether the similarity obtained in the S6 is greater than the current maximum similarity or not, and if so, enabling the current maximum similarity to be equal to the similarity obtained in the S6; otherwise, the current maximum similarity value is kept unchanged; s8, judging whether preset iteration times are finished or not, and finishing registration if the preset iteration times are finished; otherwise, returning to S6. The method has high registration precision.

Description

Registration method of preoperative three-dimensional image and intraoperative perspective image
Technical Field
The invention relates to the technical field of image registration, in particular to a registration method of a preoperative three-dimensional image and an intraoperative perspective image.
Background
In the orthopedic field, magnetic navigation systems are frequently used in minimally invasive surgery. The registration approach of selecting feature points is mostly suitable for open surgery; during a minimally invasive procedure, not enough feature points can be exposed for selection, so registration cannot be effectively completed.
Disclosure of Invention
In order to solve the problems in the prior art, the present application provides a registration method of a preoperative three-dimensional image and an intraoperative perspective image. The registration method does not require selection of feature points, the intraoperative perspective image is convenient to capture, the injury to the patient is small, the registration accuracy is high, and a doctor can be better assisted in performing an operation.
In order to achieve the above object, the present application provides a registration method of a three-dimensional preoperative image and an intraoperative fluoroscopic image, comprising the following steps:
step 1, calculating a conversion relation from world coordinates to a perspective view in a magnetic field range;
step 2, calculating camera parameters corresponding to the perspective view through a conversion relation from the world coordinates to the perspective view;
step 3, reconstructing the preoperative three-dimensional image into a 3D bone model through a computer;
step 4, re-projecting the 3D bone model by using the camera parameters corresponding to the perspective view to generate a two-dimensional image;
step 5, calculating the similarity between the two-dimensional image obtained in the step 4 and the perspective view, and marking the similarity as A, wherein the current maximum similarity is equal to A;
step 6, adjusting camera parameters through random numbers, repeating the step 4, generating a new two-dimensional image, and calculating the similarity between the two-dimensional image and a perspective view;
step 7, judging whether the similarity obtained in the step 6 is greater than the current maximum similarity, and if so, enabling the current maximum similarity to be equal to the similarity obtained in the step 6; if not, keeping the current maximum similarity value unchanged;
step 8, judging whether preset iteration times are finished or not, and if so, finishing registration; if not, return to step 6.
In some embodiments, in step 1, the process of calculating the conversion relation from world coordinates to the perspective view is as follows: the imaging position of a spatial point P on the image can be approximately represented by a pinhole model; that is, the projection position P' of an arbitrary spatial point P on the image is the intersection of the line connecting the optical center and the point P with the image plane. This relation is called perspective projection and can be represented by the following system of linear equations:
A1x + A2y + A3z + A4 − A5xu − A6yu − A7zu = u  (1.1);
A8x + A9y + A10z + A11 − A5xv − A6yv − A7zv = v  (1.2);
The above two equations contain 11 parameters A_i (i = 1, 2, …, 11) to be solved. Since the intraoperative perspective view contains the marker information, the spatial coordinates (x_i, y_i, z_i) (i = 1, 2, …, n) of n (n ≥ 6) marker points and the corresponding screen coordinates (u_i, v_i) (i = 1, 2, …, n) can be given, yielding 2n equations in the A_i (i = 1, 2, …, 11). Expressed in matrix form, the coefficient matrix E of size 2n×11 stacks, for each marker point i, the row pair (reconstructed from equations (1.1) and (1.2)):
[x_i  y_i  z_i  1  −x_iu_i  −y_iu_i  −z_iu_i  0  0  0  0]
[0  0  0  0  −x_iv_i  −y_iv_i  −z_iv_i  x_i  y_i  z_i  1]
A (11×1) = [A1 A2 A3 A4 A5 A6 A7 A8 A9 A10 A11]ᵀ;
C (2n×1) = [u1 u2 … un v1 v2 … vn]ᵀ;
so E·A = C;
A = (EᵀE)⁻¹EᵀC  (1.3);
thus obtaining A_i (i = 1, 2, …, 11);
In equation (1.3), after A is calculated, the conversion relation from world coordinates to the perspective view in the magnetic field range is determined.
In some embodiments, in step 2, the camera parameters corresponding to the perspective view are calculated as follows:
step 21, on the perspective image, taking two adjacent feature points of the calibration plate close to the center, and the image center point;
step 22, calculating, by using the matrix A, the ray vectors of the three points, namely the two adjacent feature points and the image center point;
step 23, taking the intersection of the ray passing through one of the two adjacent feature points and the ray passing through the other as the light source point, the light source point being the camera position;
step 24, taking the projection point of the light source onto the screen as the focal point, the plane of the calibration plate as the imaging plane, the distance from the light source to the imaging plane as the focal length, and the points on the imaging plane as the pixel points of the two-dimensional image.
The registration method of a preoperative three-dimensional image and an intraoperative perspective image described above does not require selection of feature points; the intraoperative perspective image is convenient to capture, the injury to the patient is small, the registration accuracy is high, and a doctor can be better assisted in performing an operation.
Drawings
Fig. 1 is a schematic diagram of capturing the frontal/lateral X-ray images in the embodiment, in which (a) shows the capture of the frontal X-ray image and (b) shows the capture of the lateral X-ray image.
Fig. 2 shows a schematic diagram of a calculation process of camera parameters corresponding to the perspective view in the embodiment.
Reference numerals: 1-light source, 2-patient, 3-calibrator, 4-imaging plane.
Detailed Description
The following further describes embodiments of the present application with reference to the drawings.
In computer-aided navigation orthopedic surgery, intraoperative registration is a very important step: it bears on the accuracy, and even the success, of the operation. In the registration method of a preoperative three-dimensional image and an intraoperative perspective image described in this application, a frontal/lateral X-ray image of the diseased part (i.e. the intraoperative perspective image) is acquired during the operation and registered with the preoperative CT data (i.e. the preoperative three-dimensional image), as shown in fig. 1. The X-ray images are convenient to capture and cause little harm to the patient, and the method improves registration accuracy and performance.
The registration method of the preoperative three-dimensional image and the intraoperative perspective image comprises the following steps:
step 1, calculating the conversion relation from world coordinates to a perspective view in a magnetic field range.
Specifically, the process of calculating the conversion relation from world coordinates to the perspective view is as follows: the imaging position of a spatial point P on the image can be approximately represented by a pinhole model; that is, the projection position P' of an arbitrary spatial point P on the image is the intersection of the line connecting the optical center O and the point P with the image plane. This relation is called perspective projection and can be represented by the following system of linear equations:
A1x + A2y + A3z + A4 − A5xu − A6yu − A7zu = u  (1.1);
A8x + A9y + A10z + A11 − A5xv − A6yv − A7zv = v  (1.2);
The above two equations contain 11 parameters A_i (i = 1, 2, …, 11) to be solved. Since the intraoperative perspective view contains the marker information, the spatial coordinates (x_i, y_i, z_i) (i = 1, 2, …, n) of n (n ≥ 6) marker points and the corresponding screen coordinates (u_i, v_i) (i = 1, 2, …, n) can be given, yielding 2n equations in the A_i (i = 1, 2, …, 11). Expressed in matrix form, the coefficient matrix E of size 2n×11 stacks, for each marker point i, the row pair (reconstructed from equations (1.1) and (1.2)):
[x_i  y_i  z_i  1  −x_iu_i  −y_iu_i  −z_iu_i  0  0  0  0]
[0  0  0  0  −x_iv_i  −y_iv_i  −z_iv_i  x_i  y_i  z_i  1]
A (11×1) = [A1 A2 A3 A4 A5 A6 A7 A8 A9 A10 A11]ᵀ;
C (2n×1) = [u1 u2 … un v1 v2 … vn]ᵀ;
so E·A = C;
A = (EᵀE)⁻¹EᵀC  (1.3);
thus obtaining A_i (i = 1, 2, …, 11).
In equation (1.3), after A is calculated, the conversion relation from world coordinates to the perspective view in the magnetic field range has in fact been determined.
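As an illustrative sketch only (the function names `solve_dlt` and `project` are not part of the patent), the assembly of the coefficient matrix and the least-squares solution of equation (1.3) can be written in NumPy as follows:

```python
import numpy as np

def solve_dlt(world_pts, screen_pts):
    """Least-squares solution of equation (1.3) for the 11 parameters
    A1..A11, given n >= 6 marker points.

    world_pts:  (n, 3) spatial coordinates (x_i, y_i, z_i)
    screen_pts: (n, 2) corresponding screen coordinates (u_i, v_i)
    """
    n = len(world_pts)
    E = np.zeros((2 * n, 11))
    C = np.zeros(2 * n)
    for i, ((x, y, z), (u, v)) in enumerate(zip(world_pts, screen_pts)):
        # Row from equation (1.1): A1x+A2y+A3z+A4-A5xu-A6yu-A7zu = u
        E[i] = [x, y, z, 1.0, -x * u, -y * u, -z * u, 0, 0, 0, 0]
        C[i] = u
        # Row from equation (1.2): A8x+A9y+A10z+A11-A5xv-A6yv-A7zv = v
        E[n + i] = [0, 0, 0, 0, -x * v, -y * v, -z * v, x, y, z, 1.0]
        C[n + i] = v
    # A = (E^T E)^(-1) E^T C, computed stably via least squares
    A, *_ = np.linalg.lstsq(E, C, rcond=None)
    return A

def project(A, point):
    """Map a world point to screen coordinates by rearranging
    equations (1.1) and (1.2) for u and v."""
    x, y, z = point
    den = 1.0 + A[4] * x + A[5] * y + A[6] * z
    u = (A[0] * x + A[1] * y + A[2] * z + A[3]) / den
    v = (A[7] * x + A[8] * y + A[9] * z + A[10]) / den
    return u, v
```

With exact marker data the recovered parameters reproduce the projection; with noisy markers the least-squares form averages the error over all 2n equations.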
Step 2, calculating the camera parameters corresponding to the perspective view through the conversion relation from world coordinates to the perspective view. In the present embodiment, the camera parameters include the camera position, the focal length, and the like.
The camera parameters corresponding to the perspective view are specifically calculated as follows, as shown in fig. 2:
Step 21, on the perspective image, take two adjacent feature points P1 and P2 of the calibration plate close to the center, and the image center point C.
Step 22, using the matrix A, calculate the ray vectors of the three points P1, P2 and C.
Step 23, the intersection L of the ray passing through feature point P1 and the ray passing through feature point P2 is the light source point, and L is the camera position.
Step 24, take the projection point of the light source onto the screen as the focal point, the plane of the calibration plate as the imaging plane, the distance from the light source to the imaging plane as the focal length, and the points on the imaging plane as the pixel points of the two-dimensional image.
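Step 23 amounts to intersecting two rays in space. A minimal sketch (the function name is hypothetical; each ray is assumed given as an origin plus a direction vector obtained from matrix A): since noisy data can leave the two rays slightly skew, the midpoint of their shortest connecting segment is a common practical substitute for the exact intersection:

```python
import numpy as np

def source_point(p1, d1, p2, d2):
    """Locate the light source L as the (near-)intersection of the
    ray through feature point P1 (origin p1, direction d1) and the
    ray through feature point P2 (origin p2, direction d2).
    Returns the midpoint of the shortest segment between the lines."""
    b = p2 - p1
    a11, a12, a22 = d1 @ d1, d1 @ d2, d2 @ d2
    den = a11 * a22 - a12 ** 2          # zero only for parallel rays
    # Parameters minimizing |p1 + t1*d1 - (p2 + t2*d2)|
    t1 = (a22 * (b @ d1) - a12 * (b @ d2)) / den
    t2 = (a12 * (b @ d1) - a11 * (b @ d2)) / den
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```

When the rays intersect exactly, the midpoint coincides with the intersection point L.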
Step 3, reconstructing the preoperative three-dimensional image into a 3D bone model through a computer.
Step 4, re-projecting the 3D bone model using the camera parameters corresponding to the perspective view to generate a two-dimensional image.
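The re-projection of step 4 can be illustrated in its simplest form by projecting model points from the light source onto the imaging plane (a simplified sketch that projects a point cloud rather than rendering a full digitally reconstructed radiograph; the function name is hypothetical):

```python
import numpy as np

def project_points(points, source, plane_point, plane_normal):
    """Project 3D model points from the light source onto the imaging
    plane: each projected point is where the ray source->point meets
    the plane (per step 24, the plane of the calibration plate)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    projected = []
    for p in points:
        d = p - source                            # ray direction
        t = ((plane_point - source) @ n) / (d @ n)
        projected.append(source + t * d)
    return np.array(projected)
```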
Step 5, calculating the similarity between the two-dimensional image obtained in step 4 and the perspective view, recording it as A; the current maximum similarity is set equal to A.
Step 6, adjusting the camera parameters through random numbers, repeating step 4 to generate a new two-dimensional image, and calculating the similarity between this two-dimensional image and the perspective view.
Step 7, judging whether the similarity obtained in step 6 is greater than the current maximum similarity; if so, setting the current maximum similarity equal to the similarity obtained in step 6; if not, keeping the current maximum similarity unchanged.
Step 8, judging whether the preset number of iterations (for example, 500) has been completed; if so, the registration is finished; if not, returning to step 6.
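Steps 5 to 8 describe a greedy random search over the camera parameters. A minimal sketch, with `render` and `similarity` as stand-ins for the re-projection of step 4 and the similarity measure of step 5 (all names are hypothetical, not from the patent):

```python
import numpy as np

def register(params0, render, similarity, fluoro, n_iter=500, step=1.0, seed=0):
    """Greedy random search: perturb the camera parameters with random
    numbers (step 6) and keep a perturbation whenever the similarity to
    the fluoroscopic image improves (step 7), for n_iter iterations
    (step 8)."""
    rng = np.random.default_rng(seed)
    best_params = np.asarray(params0, dtype=float)
    best_sim = similarity(render(best_params), fluoro)   # step 5: record A
    for _ in range(n_iter):
        trial = best_params + rng.normal(scale=step, size=best_params.shape)
        sim = similarity(render(trial), fluoro)
        if sim > best_sim:                               # step 7: keep if better
            best_sim, best_params = sim, trial
    return best_params, best_sim
```

Because a trial is accepted only when the similarity increases, the current maximum similarity never decreases over the iterations.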
The registration method of a preoperative three-dimensional image and an intraoperative perspective image described above does not require selection of feature points; the intraoperative perspective image is convenient to capture, the injury to the patient is small, the registration accuracy is high, and a doctor can be better assisted in performing an operation.

Claims (2)

1. A registration method of a preoperative three-dimensional image and an intraoperative perspective image, characterized by comprising the following steps:
step 1, calculating a conversion relation from world coordinates to a perspective view in a magnetic field range; the process of calculating the conversion relation is as follows: the imaging position of a spatial point P on the image can be approximately represented by a pinhole model; that is, the projection position P' of an arbitrary spatial point P on the image is the intersection of the line connecting the optical center and the point P with the image plane; this relation is called perspective projection and can be represented by the following system of linear equations:
A1x + A2y + A3z + A4 − A5xu − A6yu − A7zu = u  (1.1);
A8x + A9y + A10z + A11 − A5xv − A6yv − A7zv = v  (1.2);
the above equation set contains 11 parameters A_i (i = 1, 2, …, 11) to be solved; since the intraoperative perspective view contains the marker information, the spatial coordinates (x_i, y_i, z_i) (i = 1, 2, …, n) of n (n ≥ 6) marker points and the corresponding screen coordinates (u_i, v_i) (i = 1, 2, …, n) can be given, yielding 2n equations in the A_i (i = 1, 2, …, 11); expressed in matrix form, the coefficient matrix E of size 2n×11 stacks, for each marker point i, the row pair (reconstructed from equations (1.1) and (1.2)):
[x_i  y_i  z_i  1  −x_iu_i  −y_iu_i  −z_iu_i  0  0  0  0]
[0  0  0  0  −x_iv_i  −y_iv_i  −z_iv_i  x_i  y_i  z_i  1]
A (11×1) = [A1 A2 A3 A4 A5 A6 A7 A8 A9 A10 A11]ᵀ;
C (2n×1) = [u1 u2 … un v1 v2 … vn]ᵀ;
so E·A = C;
A = (EᵀE)⁻¹EᵀC  (1.3);
thus obtaining A_i (i = 1, 2, …, 11);
in equation (1.3), after A is calculated, the conversion relation from world coordinates to the perspective view in the magnetic field range is determined;
step 2, calculating camera parameters corresponding to the perspective view through a conversion relation from the world coordinate to the perspective view;
step 3, reconstructing the preoperative three-dimensional image into a 3D bone model through a computer;
step 4, re-projecting the 3D bone model by using camera parameters corresponding to the perspective view and generating a two-dimensional image;
step 5, calculating the similarity between the two-dimensional image obtained in the step 4 and the perspective view, and marking the similarity as A, wherein the current maximum similarity is equal to A;
step 6, adjusting camera parameters through random numbers, repeating the step 4, generating a new two-dimensional image, and calculating the similarity between the two-dimensional image and a perspective view;
step 7, judging whether the similarity obtained in the step 6 is greater than the current maximum similarity, and if so, enabling the current maximum similarity to be equal to the similarity obtained in the step 6; if not, the current maximum similarity value is kept unchanged;
step 8, judging whether preset iteration times are finished or not, and if so, finishing registration; if not, return to step 6.
2. The registration method of a preoperative three-dimensional image and an intraoperative perspective image according to claim 1, characterized in that: in step 2, the camera parameters corresponding to the perspective view are calculated as follows:
step 21, on the perspective image, taking two adjacent feature points of the calibration plate close to the center, and the image center point;
step 22, calculating, by using the matrix A, the ray vectors of the three points, namely the two adjacent feature points and the image center point;
step 23, taking the intersection of the ray passing through one of the two adjacent feature points and the ray passing through the other as the light source point, the light source point being the camera position;
step 24, taking the projection point of the light source onto the screen as the focal point, the plane of the calibration plate as the imaging plane, the distance from the light source to the imaging plane as the focal length, and the points on the imaging plane as the pixel points of the two-dimensional image.
CN202011471312.2A 2020-12-15 2020-12-15 Registration method of preoperative three-dimensional image and intraoperative perspective image Active CN112472293B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011471312.2A CN112472293B (en) 2020-12-15 2020-12-15 Registration method of preoperative three-dimensional image and intraoperative perspective image

Publications (2)

Publication Number Publication Date
CN112472293A CN112472293A (en) 2021-03-12
CN112472293B true CN112472293B (en) 2022-10-21

Family

ID=74917003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011471312.2A Active CN112472293B (en) 2020-12-15 2020-12-15 Registration method of preoperative three-dimensional image and intraoperative perspective image

Country Status (1)

Country Link
CN (1) CN112472293B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115880469B (en) * 2023-02-20 2023-05-09 江苏省人民医院(南京医科大学第一附属医院) Registration method of surface point cloud data and three-dimensional image

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103211655B (en) * 2013-04-11 2016-03-09 深圳先进技术研究院 An orthopedic surgery navigation system and navigation method
CN105976372A (en) * 2016-05-05 2016-09-28 北京天智航医疗科技股份有限公司 Non-calibration object registering method for pre-operation three-dimensional images and intra-operative perspective images
CN103914874B (en) * 2014-04-08 2017-02-01 中山大学 Compact SFM three-dimensional reconstruction method without feature extraction
CN110944594A (en) * 2017-06-19 2020-03-31 Mohammed R. Mahfouz Hip surgical navigation using fluoroscopy and tracking sensors

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
EP1892668B1 (en) * 2006-08-22 2012-10-03 BrainLAB AG Registration of imaging data
CN107468350B (en) * 2016-06-08 2020-12-08 北京天智航医疗科技股份有限公司 Special calibrator for three-dimensional image, operation positioning system and positioning method
CN109064397B (en) * 2018-07-04 2023-08-01 广州希脉创新科技有限公司 Image stitching method and system based on camera earphone


Also Published As

Publication number Publication date
CN112472293A (en) 2021-03-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant