CN214157490U - Operation auxiliary system applying three-dimensional medical image and patient real-time coincidence method - Google Patents


Info

Publication number
CN214157490U
Authority
CN
China
Prior art keywords: image, dimensional, patient, dimensional medical, real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202022478672.7U
Other languages
Chinese (zh)
Inventor
杨云鹏
谢锦华
孙野
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Lanruan Intelligent Medical Technology Co Ltd
Shenyang Lanruan Intelligent Medical Technology Co ltd
Original Assignee
Wuxi Lanruan Intelligent Medical Technology Co Ltd
Shenyang Lanruan Intelligent Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Lanruan Intelligent Medical Technology Co Ltd, Shenyang Lanruan Intelligent Medical Technology Co ltd
Priority to CN202022478672.7U
Application granted
Publication of CN214157490U
Legal status: Active (current)
Anticipated expiration

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The utility model relates to a surgical assistance system applying a method for real-time coincidence of a three-dimensional medical image with a patient, used to assist the planning and implementation of clinical surgery. The system comprises: a surgical navigation system; a depth image acquisition device; and a three-dimensional image display device. The depth image acquisition device carries a calibration piece that can be identified by the surgical navigation system. The depth image acquisition device and the surgical navigation system are calibrated to a common spatial coordinate system, so that the spatial coordinates of the images acquired by the depth image acquisition device are coordinates that the surgical navigation system can identify within its own spatial coordinate system. The surgical navigation system, the depth image acquisition device and the three-dimensional image display device are each connected to a data processing device. The utility model solves the problem that, in existing surgical procedures, the three-dimensional medical image does not match the patient's body accurately.

Description

Operation auxiliary system applying three-dimensional medical image and patient real-time coincidence method
Technical Field
The utility model relates to a method for real-time coincidence of a three-dimensional medical image with a patient and to a surgical assistance system, used to assist the planning and implementation of clinical surgery, in particular the implementation of open surgery.
Background
Three-dimensional medical images are increasingly used to assist clinical surgery. They are 3D model files generated by three-dimensional reconstruction from a patient's CT or nuclear magnetic (MRI) sequence images. In most hospital applications the three-dimensional medical image is shown on a two-dimensional display device; by referring to it, the doctor locates the corresponding tissue structure on the patient's body, formulates the surgical plan, and plans the surgical path. During the actual operation, the doctor must switch the line of sight between the two-dimensional display device and the patient and then rely on short-term memory of the three-dimensional medical image to find the corresponding tissue structure on the patient's body. This manual comparison is prone to memory deviation, and the angle and position of the three-dimensional medical image shown on the two-dimensional device deviate from the patient's actual body part in ways that are difficult to judge visually, so the tissue structure of the patient's body is located inaccurately and a relatively serious surgical risk exists. In addition, because the doctor frequently switches the line of sight in a highly tense operating environment, the doctor's attention is dispersed and fatigue increases, which reduces surgical efficiency, prolongs the operation, and causes great harm to the patient.
Disclosure of Invention
In order to solve the problems that the three-dimensional medical image does not match the patient's body accurately when the doctor compares the patient's three-dimensional medical image to locate the corresponding tissue structure, formulate the surgical plan or plan the surgical path, and that the doctor must frequently switch the line of sight when referring to the three-dimensional medical image during the operation, the utility model provides a method for real-time coincidence of a three-dimensional medical image with a patient and a surgical assistance system.
The specific content of the utility model is as follows:
A method for real-time coincidence of a three-dimensional medical image with a patient, the method comprising:
converting the patient's CT sequence images or nuclear magnetic sequence images into a whole point cloud, and performing three-dimensional model reconstruction on the CT sequence images or nuclear magnetic sequence images to form a three-dimensional medical image;
acquiring an image of the part of the patient's body corresponding to the CT sequence images or nuclear magnetic sequence images, this image being a local image of the patient's body, and converting the local image into a local point cloud;
registering the whole point cloud and the local point cloud with the Super-4PCS algorithm, and determining the optimal transformation matrix at which the spatial coordinates of the whole point cloud and the spatial coordinates of the local point cloud precisely coincide in the same spatial coordinate system;
determining, through the optimal transformation matrix, the spatial coordinates of the three-dimensional medical image at which it precisely coincides with the spatial coordinates of the local point cloud in the same spatial coordinate system;
and transmitting the three-dimensional medical image and its spatial coordinates to a three-dimensional image display device.
Further, the method for acquiring the local point cloud comprises the following steps:
finding, on the patient's body, a characteristic region corresponding to the CT sequence images or nuclear magnetic sequence images, acquiring continuous depth images of the characteristic region with a depth image acquisition device, and converting the continuous depth images of the characteristic region into a local point cloud.
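The patent does not prescribe how the continuous depth images are converted into a point cloud. As one assumed illustration, each depth pixel can be back-projected through the depth camera's pinhole intrinsics; the parameters fx, fy, cx, cy and the depth cut-off below are assumptions, not values given by the utility model.

```python
import numpy as np

def depth_to_point_cloud(depth_mm, fx, fy, cx, cy, max_depth_mm=2000.0):
    """Back-project a depth image (millimetres) into an (N, 3) point cloud
    in the depth camera's coordinate system, using pinhole intrinsics.

    Pixels with zero or implausibly large depth are discarded.
    """
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel column/row indices
    z = depth_mm.astype(np.float64)
    valid = (z > 0) & (z < max_depth_mm)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)
```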
Further, the same spatial coordinate system is the spatial coordinate system of the surgical navigation system.
Further, before the depth image acquisition device acquires the continuous depth images of the characteristic region, a spatial coordinate system calibration is performed between the depth image acquisition device and the surgical navigation system, so that the spatial coordinates of the point cloud of the patient's local body part lie in the same spatial coordinate system as the surgical navigation system.
Furthermore, a calibration piece that can be identified by the surgical navigation system is mounted on the depth image acquisition device, and the spatial coordinates of the position of the calibration piece can be identified in the spatial coordinate system of the surgical navigation system.
Furthermore, the depth image acquisition device and the surgical navigation system are calibrated with a checkerboard calibration method, in which a calibration piece that can be identified by the surgical navigation system is placed at a corner point of the checkerboard. The calibration process is as follows:
S1: collect the identifiable calibration pieces on the checkerboard with the surgical navigation system, and record their coordinates;
S2: identify the corner points of the checkerboard with the depth image acquisition device, and record their coordinates;
S3: change the position and placement angle of the calibration pieces on the checkerboard, and repeat steps S1 and S2, 20 times in total;
S4: using the coordinates recorded in steps S1, S2 and the completed S3, calculate the transformation matrix from the depth image acquisition device to the surgical navigation system, completing the calibration of the depth image acquisition device and the surgical navigation system.
Further, the optimal transformation matrix is determined as follows:
when the absolute value of the difference between the spatial coordinates of the three-dimensional medical image in the spatial coordinate system of the surgical navigation system and the spatial coordinates of the local point cloud reaches its minimum, i.e. when the local point cloud and the whole point cloud precisely coincide, the transformation matrix determined from the relationship between the spatial coordinates of the local point cloud and the spatial coordinates of the whole point cloud is taken as the optimal transformation matrix;
the transformation matrix is calculated as:
T_CT/MRI-Patient = P_Patient / P_CT/MRI
wherein:
P_CT/MRI is the spatial coordinates of the whole point cloud of the CT sequence images or nuclear magnetic sequence images in the spatial coordinate system of the CT or nuclear magnetic equipment;
P_Patient is the spatial coordinates of the local point cloud;
T_CT/MRI-Patient is the transformation matrix between the spatial coordinates of the local point cloud and the spatial coordinates of the whole point cloud of the CT sequence images or nuclear magnetic sequence images in the spatial coordinate system of the CT or nuclear magnetic equipment.
Further, the spatial coordinates at which the spatial coordinates of the whole point cloud of the CT sequence images or nuclear magnetic sequence images, i.e. the three-dimensional medical image, precisely coincide with the spatial coordinates of the local point cloud in the spatial coordinate system of the surgical navigation system are calculated as:
P'_Patient = T'_CT/MRI-Patient × P_CT/MRI
wherein:
T'_CT/MRI-Patient is the optimal transformation matrix at which the spatial coordinates of the whole point cloud and of the local point cloud precisely coincide in the spatial coordinate system of the surgical navigation system;
P'_Patient is the spatial coordinates at which the whole point cloud, i.e. the three-dimensional medical image, precisely coincides with the local point cloud in the spatial coordinate system of the surgical navigation system;
P_CT/MRI is the spatial coordinates of the whole point cloud of the CT sequence images or nuclear magnetic sequence images in the spatial coordinate system of the CT or nuclear magnetic equipment.
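As a hedged illustration of how these two formulas can be realised in code, the sketch below estimates a rigid transformation matrix from corresponding point samples with the SVD-based Kabsch method and applies it in homogeneous coordinates. The function names, and the use of the Kabsch estimator rather than the patent's own solver, are assumptions made for illustration.

```python
import numpy as np

def estimate_rigid_transform(P_src, P_dst):
    """Estimate a 4x4 rigid transform T so that T maps P_src onto P_dst.

    P_src, P_dst: (N, 3) arrays of corresponding points (e.g. whole-point-cloud
    samples and their matched local-point-cloud samples). SVD/Kabsch method,
    an illustrative stand-in for T_CT/MRI-Patient = P_Patient / P_CT/MRI.
    """
    c_src, c_dst = P_src.mean(axis=0), P_dst.mean(axis=0)
    H = (P_src - c_src).T @ (P_dst - c_dst)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # avoid a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def apply_transform(T, P):
    """P'_Patient = T'_CT/MRI-Patient x P_CT/MRI for an (N, 3) point array."""
    P_h = np.hstack([P, np.ones((len(P), 1))])   # homogeneous coordinates
    return (T @ P_h.T).T[:, :3]
```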
Further, the utility model also provides a surgical assistance system applying the method for real-time coincidence of a three-dimensional medical image with a patient, comprising:
the surgical navigation system;
the depth image acquisition device;
a three-dimensional image display device;
the depth image acquisition device carries a calibration piece that can be identified by the surgical navigation system;
the depth image acquisition device and the surgical navigation system are calibrated to a common spatial coordinate system, so that the spatial coordinates of the images acquired by the depth image acquisition device are coordinates that the surgical navigation system can identify within its own spatial coordinate system;
the surgical navigation system, the depth image acquisition device and the three-dimensional image display device are each connected to a data processing device.
Further, the data processing device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements:
extracting the whole point cloud from the patient's CT or nuclear magnetic sequence images;
obtaining the spatial coordinates P_CT/MRI of the patient's CT sequence images or nuclear magnetic sequence images;
acquiring the spatial position information of the depth image acquisition device as identified by the surgical navigation system;
acquiring the continuous images of the characteristic region of the patient's body captured by the depth image acquisition device;
extracting the point cloud of the patient's local body part;
obtaining the spatial coordinates P_Patient of the point cloud of the patient's local body part in the coordinate system of the surgical navigation system;
searching for the optimal transformation matrix at which the spatial coordinates of the whole point cloud and of the local point cloud precisely coincide in the spatial coordinate system of the surgical navigation system;
computing the spatial coordinates at which the whole point cloud and the local point cloud precisely coincide in the spatial coordinate system of the surgical navigation system;
and sending the spatial coordinates of the whole point cloud in the coordinate system of the surgical navigation system to the three-dimensional image display device.
Further, the three-dimensional image display device is an augmented reality display device that presents three-dimensional images based on AR or MR technology.
The augmented reality display device comprises a three-dimensional image presentation module;
a three-dimensional image information processing module;
and a head-mounted device;
the three-dimensional image presentation module can import and display a three-dimensional image;
the three-dimensional image information processing module processes the real-time spatial coordinates of the three-dimensional medical image so that the pose and position of the three-dimensional medical image presented in the three-dimensional image presentation module match its spatial coordinates in real time;
the real-time spatial coordinates of the three-dimensional medical image are obtained by the method for real-time coincidence of a three-dimensional medical image with a patient;
the head-mounted device is connected with the three-dimensional image presentation module and is worn on the head.
Advantageous effects
The utility model has the following beneficial effects:
1. The utility model adopts the Super-4PCS algorithm. While the algorithm runs, the spatial form of the continuously transformed local point cloud is overlapped and matched against the spatial form of the whole point cloud, and the algorithm dynamically searches for the minimum of the absolute value of the difference between the spatial coordinates of the three-dimensional medical image in the spatial coordinate system of the surgical navigation system and the spatial coordinates of the local point cloud, that is, the transformation matrix at which the degree of coincidence is highest. Through this optimal transformation matrix, the spatial coordinates of the whole point cloud, i.e. of the three-dimensional medical image, that accurately match the characteristic region of the patient's body are computed in the spatial coordinate system of the surgical navigation system. The Super-4PCS algorithm can reduce the matching error to within 1 mm while also improving matching speed, so that a single matching takes no more than 30 s; the utility model therefore has the two advantages of high coincidence-matching accuracy and high speed.
2. With the method for real-time coincidence of a three-dimensional medical image with a patient of the utility model, the patient's three-dimensional medical image and the corresponding part of the patient's body can be registered and made to coincide in the form of spatial coordinates, and the three-dimensional medical image together with its real-time spatial coordinates can be loaded into a wearable three-dimensional image display device. This improves matching accuracy and avoids the problem that a doctor using a two-dimensional display device judges the coincidence of the three-dimensional medical image with the patient's body visually and thus introduces visual deviation and inaccurate coincidence matching; it improves the success rate of the operation and reduces surgical risk.
3. The utility model uses a wearable augmented reality display device based on AR or MR technology, which can superimpose the virtual three-dimensional medical image on the patient in the real world and present the visual effect of the virtual three-dimensional medical image and the local characteristic region of the patient's body existing simultaneously in the same picture and space. This eliminates the problem of the doctor frequently switching the viewing angle between the virtual three-dimensional medical image and the real patient during the operation, improves surgical efficiency, and reduces the doctor's fatigue.
4. The utility model adopts a surgical navigation system commonly used in hospitals, which has high measurement accuracy and reliability, strong resistance to partial occlusion, and a small, light form factor, so a user can mount it directly on the patient examination table or integrate it into other hardware systems. Accurate and reliable surgical navigation, combined with a binocular depth camera carrying identifiable calibration pieces, can precisely determine the spatial position of the local characteristic region of the patient's body and compute its spatial coordinates, providing a relatively accurate target position for the one-to-one correspondence between the spatial coordinates of the three-dimensional medical image and the patient; this is the basis on which the utility model achieves accurate positional coincidence.
5. When the Super-4PCS algorithm of the utility model matches the whole point cloud of the patient's CT or nuclear magnetic sequence images with the local point cloud of the patient's body, the acquired local point cloud may cover the entire tissue, organ or bone corresponding to the patient's CT or nuclear magnetic sequence images, or only a partial region of that tissue, organ or bone, which effectively improves the flexibility of acquiring the local point cloud and the sensitivity of its coincidence matching with the whole point cloud.
Drawings
FIG. 1 is a flow chart of the method for real-time coincidence of a three-dimensional medical image with a patient according to the utility model;
FIG. 2 is a schematic diagram of the checkerboard arrangement of the utility model;
FIG. 3 is a calibration flow chart of the checkerboard calibration method according to embodiment 7 of the utility model;
FIG. 4 is a schematic structural diagram of the depth image acquisition device of the utility model;
FIG. 5 is a structural diagram of the surgical assistance system applying the method for real-time registration of a three-dimensional medical image with a patient;
FIG. 6 is a flow chart of the surgical assistance system applying the method for real-time registration of a three-dimensional medical image with a patient;
FIG. 7 is a schematic structural diagram of the augmented reality display device of the utility model;
FIG. 8 is a flow chart of the use of the surgical assistance system applying the method for real-time registration of a three-dimensional medical image with a patient.
Detailed Description
In order to make the above objects, features and advantages of the present invention more comprehensible, embodiments of the present invention are described in detail below with reference to the accompanying drawings.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may, however, be embodied in many forms other than those described herein, and those skilled in the art can make similar modifications without departing from the spirit and scope of the invention; the invention is therefore not limited to the specific embodiments disclosed below.
It will be understood that when an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Example 1:
the following describes the present embodiment 1 with reference to the flowchart of the method for real-time registration of three-dimensional medical images and patients in fig. 1:
a method for real-time registration of a three-dimensional medical image with a patient, the method comprising:
converting the CT sequence image or nuclear magnetic sequence image of the patient into integral point cloud, and performing three-dimensional model reconstruction on the CT sequence image or nuclear magnetic sequence image of the patient to form a three-dimensional medical image;
acquiring an image of a part corresponding to a CT sequence image or a nuclear magnetic sequence image on a patient body, wherein the image is a local image of the patient body, and converting the local image into a local point cloud;
registering the integral point cloud and the local point cloud by adopting a Super-4PCS algorithm, and determining an optimal conversion matrix for accurately coinciding the space coordinate of the integral point cloud and the space coordinate of the local point cloud in the same space coordinate system;
determining a space coordinate of the three-dimensional medical image when the three-dimensional medical image and the space coordinate of the local point cloud are precisely superposed in the same space coordinate system through the optimal conversion matrix;
and transmitting the three-dimensional medical image and the space coordinates thereof to a three-dimensional image display device.
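Super-4PCS itself is distributed in the OpenGR library rather than in common Python packages, so the sketch below is a stand-in rather than the patent's implementation: it refines a coarse global alignment (such as a Super-4PCS result) with Open3D's point-to-point ICP and reports the inlier RMSE, one way to verify the sub-millimetre matching error described in the beneficial effects. The distance threshold is an assumed value.

```python
import numpy as np
import open3d as o3d

def refine_registration(whole_pts, local_pts, T_coarse, max_dist_mm=1.0):
    """Refine a coarse alignment of the whole CT/MRI point cloud onto the
    local depth-camera point cloud with point-to-point ICP.

    whole_pts, local_pts: (N, 3) numpy arrays in millimetres.
    T_coarse: 4x4 initial transform, e.g. the Super-4PCS result.
    Returns the refined 4x4 transform and the RMSE of inlier correspondences.
    """
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(whole_pts))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(local_pts))
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_dist_mm, T_coarse,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation, result.inlier_rmse
```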
Example 2
According to the method for real-time registration of a three-dimensional medical image with a patient of embodiment 1, the method for extracting the whole point cloud of the utility model is described in detail.
The extraction method of the whole point cloud comprises:
selecting a proper threshold in the patient's CT sequence images or nuclear magnetic sequence images;
because there is an obvious difference between the pixel grey values of the target region and of the background region in the patient's CT or nuclear magnetic image, the utility model uses a threshold segmentation algorithm to segment the patient's CT or nuclear magnetic image. The proper threshold is selected as follows:
(1) select an estimated threshold T according to the pixel grey values of the target region and the background region in the patient's CT or nuclear magnetic image;
(2) segment the patient's CT or nuclear magnetic image with the threshold T, generating two sets of pixels G1 and G2: G1 consists of all pixels with grey value greater than T, and G2 consists of all pixels with grey value less than or equal to T;
(3) calculate the average grey values u1 and u2 of all pixels in G1 and G2;
(4) calculate the new threshold T' = (u1 + u2) / 2;
(5) repeat steps (2) to (4) until the difference between the new threshold T' and the previous threshold is smaller than a preset value; T' is then the proper threshold.
With the proper threshold T', the patient's CT or nuclear magnetic image is segmented into a background region and a target region, the target region being the contour of the patient's body tissue, organ or bone;
a point cloud is generated from the target region; this point cloud is the whole point cloud;
the spatial coordinates of the whole point cloud are P_CT/MRI, where P_CT/MRI are the spatial coordinate values of the CT sequence images or nuclear magnetic sequence images in the spatial coordinate system of the CT or nuclear magnetic equipment;
three-dimensional model reconstruction is performed on the patient's CT sequence images or nuclear magnetic sequence images to form a three-dimensional medical image; the coordinates of the three-dimensional medical image are the same as the spatial coordinates of the whole point cloud, both being P_CT/MRI.
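The iterative threshold selection of steps (1)-(5) can be sketched directly with numpy; the convergence tolerance eps and the use of the image mean as the initial estimate are assumptions, since the patent leaves both open.

```python
import numpy as np

def select_threshold(image, eps=0.5):
    """Iterative threshold selection following steps (1)-(5) of Example 2.

    image: numpy array of CT or nuclear magnetic grey values (2D or 3D).
    eps:   convergence tolerance on the threshold change (assumed value).
    """
    t = image.mean()                       # (1) initial estimated threshold
    while True:
        g1 = image[image > t]              # (2) pixels above the threshold
        g2 = image[image <= t]             #     pixels at or below it
        u1 = g1.mean() if g1.size else t   # (3) mean grey value of each set
        u2 = g2.mean() if g2.size else t
        t_new = 0.5 * (u1 + u2)            # (4) new threshold
        if abs(t_new - t) < eps:           # (5) stop when the change is small
            return t_new
        t = t_new

# Example: binary mask of the target region (tissue/organ/bone contour);
# "ct_volume.npy" is a hypothetical input file.
# volume = np.load("ct_volume.npy")
# mask = volume > select_threshold(volume)
```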
Example 3
According to the method for real-time registration of a three-dimensional medical image with a patient of embodiment 1, the method for acquiring the local point cloud comprises:
finding, on the patient's body, the characteristic region corresponding to the previously obtained CT sequence images or nuclear magnetic sequence images, acquiring continuous depth images of the characteristic region with a depth image acquisition device, and converting the continuous depth images of the characteristic region into a local point cloud using a threshold segmentation method.
Example 4
According to the method of embodiment 3, the characteristic region is a region of an organ, tissue or bone that can be captured by the depth image acquisition device. The characteristic region may be the whole region of the organ, tissue or bone corresponding to the CT sequence images or nuclear magnetic sequence images, or a partial region of it whose surface area is not less than one tenth of the surface area of the whole region. Taking a patient CT or nuclear magnetic sequence image of the seventh lumbar vertebra as an example, the characteristic region may be the entire region of the seventh lumbar vertebra captured by the depth image acquisition device, or a partial region of it that is not smaller than one tenth of the surface area of the entire region of that bone.
The characteristic region is a bone corresponding to the CT sequence images or nuclear magnetic sequence images, including the skull or the spine, the spine including the cervical, thoracic and lumbar vertebrae.
Example 5
According to the method for real-time coincidence of a three-dimensional medical image with a patient of embodiment 1, the same spatial coordinate system referred to by the utility model is the spatial coordinate system of the surgical navigation system;
the surgical navigation system is a surgical assistance device that accurately maps the patient's preoperative or intraoperative image data onto the anatomical structure of the patient on the operating table and lets the doctor clearly see the position of the surgical instrument relative to the patient's anatomy;
the surgical navigation system used in this embodiment is an optical positioning and tracking system, which has the characteristics of high flexibility and high accuracy.
Example 6
According to the method for real-time registration of a three-dimensional medical image with a patient of embodiment 3, before the depth image acquisition device acquires the continuous depth images of the characteristic region, a spatial coordinate system calibration is performed between the depth image acquisition device and the surgical navigation system, so that the spatial coordinates of the point cloud of the patient's local body part lie in the same spatial coordinate system as the surgical navigation system.
Example 7
This embodiment is explained with reference to FIG. 2 and FIG. 3:
According to the method for real-time registration of a three-dimensional medical image with a patient of embodiment 6, the depth image acquisition device and the surgical navigation system are calibrated with a checkerboard calibration method;
first, as shown in FIG. 2, a checkerboard 1 is designed in which the side length of each square is greater than 4 cm, and a calibration piece 2 that can be identified by the surgical navigation system is placed at a corner point of the checkerboard 1. The calibration process is as follows:
S1: collect the identifiable calibration pieces 2 on the checkerboard 1 with the surgical navigation system, and record the coordinates of the calibration pieces 2;
S2: identify the corner points of the checkerboard 1 with the depth image acquisition device, and record the coordinates of the corner points;
S3: change the position and placement angle of the calibration piece 2 on the checkerboard 1, and repeat steps S1 and S2, 20 times in total;
S4: using the coordinates recorded in steps S1, S2 and the completed S3, calculate the transformation matrix from the depth image acquisition device to the surgical navigation system, completing the calibration of the depth image acquisition device and the surgical navigation system;
in this embodiment, repeated experiments and records show that changing the position and placement angle of the calibration piece on the checkerboard 1 and repeating steps S1 and S2 for 20 times in total effectively eliminates calibration error and improves calibration accuracy.
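For illustration only, step S4 can be realised by stacking the 20 recorded coordinate pairs and solving for the rigid camera-to-navigation transform with the SVD-based estimator sketched after the Disclosure formulas; the helper names below (estimate_rigid_transform, apply_transform) refer to that earlier sketch and are assumptions, not the patent's implementation.

```python
import numpy as np

def calibrate_camera_to_navigation(cam_corner_sets, nav_marker_sets):
    """Step S4: transform from the depth camera to the surgical navigation system.

    cam_corner_sets: list of 20 (M, 3) arrays, checkerboard corners in the
                     depth-camera coordinate system (step S2).
    nav_marker_sets: list of 20 (M, 3) arrays, the matching calibration-piece
                     positions in the navigation coordinate system (step S1).
    Returns a 4x4 transform mapping camera coordinates to navigation coordinates
    and the RMS residual, so a poor calibration can be detected and repeated.
    """
    cam_pts = np.vstack(cam_corner_sets)
    nav_pts = np.vstack(nav_marker_sets)
    T = estimate_rigid_transform(cam_pts, nav_pts)   # SVD/Kabsch, sketched earlier
    residual = apply_transform(T, cam_pts) - nav_pts
    rms_error = np.sqrt((residual ** 2).sum(axis=1).mean())
    return T, rms_error
```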
Example 8
This embodiment is described with reference to FIG. 4:
According to the method for real-time coincidence of a three-dimensional medical image with a patient of embodiment 7, the depth image acquisition device 6 carries calibration pieces 2 that can be identified by the surgical navigation system, and the spatial coordinates of the position of each calibration piece 2 can be identified in the spatial coordinate system of the surgical navigation system. At least 4 coplanar calibration pieces 2 are mounted on the depth image acquisition device 6, the calibration pieces 2 are detachably connected to the depth image acquisition device 6 by fixing pieces 7, and the distance between any two adjacent calibration pieces 2 is greater than 2 cm. When the number of calibration pieces 2 is 4, establishing the spatial coordinate system is facilitated in terms of both identification speed and accuracy.
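How the navigation system turns the tracked calibration pieces into a coordinate frame for the camera rig is not spelled out in the patent; one conventional construction, given here purely as an assumed illustration, builds an orthonormal frame from the tracked sphere centres.

```python
import numpy as np

def frame_from_markers(marker_centres):
    """Build a 4x4 pose for the depth-camera rig from >= 3 tracked markers.

    marker_centres: (K, 3) array of calibration-piece centres (here 4 coplanar
    spheres) in the surgical navigation system's coordinate frame.
    Returns a 4x4 matrix whose rotation columns form an orthonormal basis.
    """
    origin = marker_centres.mean(axis=0)
    x_axis = marker_centres[1] - marker_centres[0]
    x_axis /= np.linalg.norm(x_axis)
    v = marker_centres[2] - marker_centres[0]
    z_axis = np.cross(x_axis, v)                 # normal of the marker plane
    z_axis /= np.linalg.norm(z_axis)
    y_axis = np.cross(z_axis, x_axis)            # completes a right-handed frame
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1] = x_axis, y_axis
    pose[:3, 2], pose[:3, 3] = z_axis, origin
    return pose
```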
Example 9
The method according to embodiment 7 or 8, wherein the depth image capturing device 6 is a binocular depth camera.
Example 10
The method for real-time registration of a three-dimensional medical image with a patient according to embodiment 7 or embodiment 8, wherein the calibration piece 2 is a sphere with a diameter greater than 1 cm.
Example 11
According to the method for real-time registration of a three-dimensional medical image with a patient of embodiment 1, the optimal transformation matrix is determined as follows:
the spatial form of the local point cloud is continuously transformed; when the spatial form of the local point cloud coincides with the spatial form of the whole point cloud, that is, when the absolute value of the difference between the spatial coordinates of the three-dimensional medical image in the spatial coordinate system of the surgical navigation system and the spatial coordinates of the local point cloud reaches its minimum and the local point cloud precisely coincides with the whole point cloud, the transformation matrix determined from the spatial coordinates of the local point cloud at that moment and the spatial coordinates of the whole point cloud is taken as the optimal transformation matrix;
the transformation matrix is calculated as:
T_CT/MRI-Patient = P_Patient / P_CT/MRI
wherein:
P_CT/MRI is the spatial coordinates of the whole point cloud of the CT sequence images or nuclear magnetic sequence images in the spatial coordinate system of the CT or nuclear magnetic equipment;
P_Patient is the spatial coordinates of the local point cloud in the spatial coordinate system of the surgical navigation system;
T_CT/MRI-Patient is the transformation matrix between the spatial coordinates of the local point cloud in the spatial coordinate system of the surgical navigation system and the spatial coordinates of the whole point cloud of the CT sequence images or nuclear magnetic sequence images in the spatial coordinate system of the CT or nuclear magnetic equipment.
Example 12
According to the method for real-time registration of a three-dimensional medical image with a patient of embodiment 1, the spatial coordinates at which the spatial coordinates of the whole point cloud of the CT sequence images or nuclear magnetic sequence images, i.e. the three-dimensional medical image, precisely coincide with the spatial coordinates of the local point cloud in the spatial coordinate system of the surgical navigation system are calculated as:
P'_Patient = T'_CT/MRI-Patient × P_CT/MRI
wherein:
T'_CT/MRI-Patient is the optimal transformation matrix at which the spatial coordinates of the whole point cloud and of the local point cloud precisely coincide in the spatial coordinate system of the surgical navigation system;
P'_Patient is the spatial coordinates at which the whole point cloud, i.e. the three-dimensional medical image, precisely coincides with the local point cloud in the spatial coordinate system of the surgical navigation system;
P_CT/MRI is the spatial coordinates of the whole point cloud of the CT sequence images or nuclear magnetic sequence images in the spatial coordinate system of the CT or nuclear magnetic equipment.
Example 13
The method for real-time registration of a three-dimensional medical image with a patient according to embodiment 11 or embodiment 12, wherein, when | T'_CT/MRI-Patient × P_CT/MRI − P_Patient | reaches its minimum, T'_CT/MRI-Patient is the optimal transformation matrix.
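A sketch of this selection rule, assuming the quantity | T'_CT/MRI-Patient × P_CT/MRI − P_Patient | is evaluated as the mean distance from each local-cloud point to its nearest transformed whole-cloud point; the distance measure and the scipy dependency are assumptions, and apply_transform is the helper sketched earlier.

```python
import numpy as np
from scipy.spatial import cKDTree

def coincidence_error(T, whole_pts, local_pts):
    """Mean distance from each local-cloud point to the nearest point of the
    whole cloud after applying T (the patent's |T' x P_CT/MRI - P_Patient|)."""
    transformed = apply_transform(T, whole_pts)      # helper sketched earlier
    dists, _ = cKDTree(transformed).query(local_pts)
    return dists.mean()

def pick_optimal(candidate_transforms, whole_pts, local_pts):
    """Among candidate transforms (e.g. Super-4PCS hypotheses), keep the one
    that minimises the coincidence error."""
    errors = [coincidence_error(T, whole_pts, local_pts)
              for T in candidate_transforms]
    return candidate_transforms[int(np.argmin(errors))]
```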
Example 14
The present embodiment is described below with reference to FIG. 5 and FIG. 6:
FIG. 5 is a structural diagram of the surgical assistance system applying the method for real-time registration of a three-dimensional medical image with a patient;
FIG. 6 is a flow chart of the surgical assistance system applying the method for real-time registration of a three-dimensional medical image with a patient;
A surgical assistance system applying the method for real-time coincidence of a three-dimensional medical image with a patient comprises the following components:
the surgical navigation system 3;
the depth image acquisition device 6;
a three-dimensional image display device 4;
the depth image acquisition device 6 is provided with a calibration piece 2 which can be identified by the surgical navigation system 3;
the depth image acquisition device 6 and the surgical navigation system 3 perform spatial coordinate system calibration, so that the spatial coordinate of the image acquired by the depth image acquisition device 6 is a spatial coordinate which can be identified by the surgical navigation system 3 in the spatial coordinate system;
the operation navigation system 3, the depth image acquisition device 6 and the three-dimensional image display device 4 are respectively connected with a data processing device 5;
the data processing device 5 includes:
an image processing unit;
a data processing unit;
the output end of the surgical navigation system 3 is connected with the input end of the data processing unit;
the output end of the depth image acquisition device 6 is connected with the input end of the image processing unit;
the output end of the image processing unit is connected with the data processing unit;
the output end of the data processing unit is connected with the three-dimensional image display device 4;
the data processing device 5 comprises a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing, when executing the computer program:
processing the integral point cloud extraction of the CT or nuclear magnetic sequence image of the patient;
obtaining spatial coordinates P of CT sequence image or nuclear magnetic sequence image of patientCT/MR
Acquiring spatial position information of a depth image acquisition device identified by a surgical navigation system;
acquiring continuous images of a characteristic region of a patient body acquired by a depth image acquisition device;
processing the point cloud extraction of the body part of the patient;
acquiring the space coordinate P of the point cloud of the local body of the patient under the coordinate system of the operation navigation systemPatient
Searching an optimal transformation matrix T when the space coordinates of the whole point cloud and the local point cloud are accurately superposed under the space coordinate system of the surgical navigation systemCT/MRI-Patient
Processing a space coordinate P 'when the space coordinates of the whole point cloud and the local point cloud are precisely coincident under a space coordinate system of the operation navigation system'Patient
And sending the space coordinates of the whole point cloud under the coordinate system of the operation navigation system to a three-dimensional image display device 4.
The data processing device 5 may be a computer.
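A high-level sketch of how the processor's program could chain these steps. It reuses the helpers sketched in the earlier examples (select_threshold, depth_to_point_cloud, apply_transform, refine_registration); super4pcs_register and the display interface are hypothetical placeholders, and the whole routine is an illustration rather than the patent's implementation.

```python
import numpy as np

def run_assistance_pipeline(ct_volume, depth_frame, intrinsics, T_cam_to_nav, display):
    """One illustrative pass of the data processing device's program."""
    # Whole point cloud of the CT/nuclear magnetic images (P_CT/MRI).
    # np.argwhere yields voxel indices; scaling to the CT equipment's physical
    # coordinate system (voxel spacing, origin) is omitted in this sketch.
    threshold = select_threshold(ct_volume)
    whole_pts = np.argwhere(ct_volume > threshold).astype(float)

    # Local point cloud of the patient's characteristic region (P_Patient),
    # mapped into the surgical navigation system's frame via the calibration.
    fx, fy, cx, cy = intrinsics
    local_cam = depth_to_point_cloud(depth_frame, fx, fy, cx, cy)
    local_nav = apply_transform(T_cam_to_nav, local_cam)

    # Optimal transformation matrix T'_CT/MRI-Patient: coarse global alignment
    # followed by refinement.
    T_coarse = super4pcs_register(whole_pts, local_nav)   # hypothetical wrapper
    T_opt, rmse = refine_registration(whole_pts, local_nav, T_coarse)

    # Spatial coordinates of the whole point cloud in the navigation frame
    # (P'_Patient), sent to the three-dimensional image display device.
    display.send(T_opt, whole_pts)                        # hypothetical interface
    return T_opt, rmse
```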
Example 15
The following describes this embodiment with reference to the schematic structural diagram of the augmented reality display device shown in FIG. 7:
in the surgical assistance system applying the method for real-time registration of a three-dimensional medical image with a patient according to embodiment 14, the three-dimensional image display device 4 is an augmented reality display device that presents three-dimensional images based on AR or MR technology;
the augmented reality display device includes:
a three-dimensional image rendering module 41;
a three-dimensional image information processing module 42;
a head-mounted device 43;
the head-mounted device 43 is connected to the three-dimensional image presentation module 41 and supports and fixes the three-dimensional image presentation module 41 to the user's head, so that the three-dimensional image presentation module 41 always keeps a proper distance from the user's head and viewing angle and the user can conveniently operate the interface content displayed in the three-dimensional image presentation module 41;
the head-mounted device 43 is detachably connected with the three-dimensional image presentation module 41 and is worn on the user's head, and the size of the head-mounted device 43 can be adjusted to the size of the user's head;
alternatively, the head-mounted device 43 is fixedly connected with the three-dimensional image presentation module 41 and is worn on the user's head, and the size of the head-mounted device 43 can be adjusted to the size of the user's head;
the three-dimensional image presentation module 41 can load and read three-dimensional image data remotely or locally and display the three-dimensional image on the three-dimensional image presentation module 41;
based on AR augmented reality or MR mixed reality technology, the three-dimensional image presentation module 41 superimposes the virtual three-dimensional image on objects in the real world and presents, perceivably to the doctor, the visual effect of the virtual three-dimensional image and the real-world object existing simultaneously in the same picture and space; the three-dimensional image presentation module 41 may be a mixed reality or augmented reality head-mounted display, such as mixed reality glasses or augmented reality glasses, a mixed reality helmet or an augmented reality helmet;
the three-dimensional image information processing module 42 processes the real-time spatial coordinates of the three-dimensional image so that the pose and position of the three-dimensional image presented in the three-dimensional image presentation module 41 match the spatial coordinates of the three-dimensional image in real time; the real-time spatial coordinates of the three-dimensional image are obtained by the method for real-time coincidence of a three-dimensional medical image with a patient;
the three-dimensional image described in this embodiment includes, but is not limited to, a three-dimensional medical image;
the three-dimensional medical image described in this embodiment is a three-dimensional medical image of a tissue, an organ, or a bone of a patient formed after three-dimensional reconstruction is performed according to a CT sequence image or a nuclear magnetic sequence image of the patient;
the three-dimensional medical image of the tissue and organ of the patient is loaded into the three-dimensional image presenting module 41, and the real-time space coordinates of the three-dimensional medical image of the patient are transmitted to the three-dimensional image information processing module 42, so that the three-dimensional medical image of the patient can be presented in front of the view angle of the doctor, the virtual three-dimensional medical image of the patient and the characteristic region of the patient in the real world are superposed together, and the three-dimensional medical image of the patient and the characteristic region of the patient in the real world are displayed according to the ratio of 1: 1, compared with a two-dimensional display device, the three-dimensional medical image of a patient is displayed more intuitively and accurately, and the problem that the sight line of a doctor is frequently switched in the operation is effectively solved.
Example 16
According to the surgical assistance system applying the method for real-time coincidence of a three-dimensional medical image with a patient of embodiment 14, the surgical navigation system 3 and the depth image acquisition device 6 are calibrated with a checkerboard calibration method;
the checkerboard calibration target consists of the checkerboard 1 and the calibration pieces 2;
the checkerboard 1 carries alternating black and white squares, and the side length of each square is greater than 4 cm;
a calibration piece 2 that can be identified by the surgical navigation system 3 is placed at a corner point of the checkerboard 1.
Example 17
The procedure of using the surgical assistance system applying the method for real-time registration of a three-dimensional medical image with a patient is described below with reference to FIG. 8:
in this embodiment, a binocular depth camera is used as the depth image acquisition device 6 and is calibrated with the surgical navigation system 3 before use;
the surgical navigation system 3 is fixed with a bracket at a certain height above the patient;
the binocular depth camera is fixed at a certain height above the patient;
the fixed position of the binocular depth camera is within the range in which the surgical navigation system 3 can measure the calibration pieces 2 fixed on the binocular depth camera;
the fixed position of the binocular depth camera allows the characteristic region on the patient's body to be captured clearly;
extracting integral point cloud from CT or nuclear magnetic sequence image of patient, wherein the space coordinate of the integral point cloud is PCT/MR
Carrying out three-dimensional model reconstruction on a CT or nuclear magnetic sequence image of a patient to form a three-dimensional medical image;
the surgical navigation system 3 acquires the spatial position information of the binocular depth camera and transmits the spatial position information to the data processing device 5;
binocular depth camera acquires body characteristics of patientContinuous images of the area are transmitted to a data processing device 5, point cloud extraction is carried out on the continuous images in the data processing device 5, the point cloud is local point cloud of the patient, and the spatial coordinate of the local point cloud is PPatient
The computer program stored in the data processing device 5 executes the following instructions:
extracting the whole point cloud from the patient's CT or nuclear magnetic sequence images;
obtaining the spatial coordinates P_CT/MRI of the patient's CT sequence images or nuclear magnetic sequence images;
acquiring the continuous images of the characteristic region of the patient's body captured by the depth image acquisition device 6;
extracting the point cloud of the patient's local body part;
obtaining the spatial coordinates P_Patient of the point cloud of the patient's local body part;
searching for the optimal transformation matrix T'_CT/MRI-Patient at which the spatial coordinates of the whole point cloud and of the local point cloud precisely coincide in the spatial coordinate system of the surgical navigation system 3;
computing the spatial coordinates at which the whole point cloud and the local point cloud precisely coincide in the spatial coordinate system of the surgical navigation system 3;
sending the spatial coordinates of the whole point cloud in the coordinate system of the surgical navigation system 3 to the three-dimensional image display device 4;
the three-dimensional image display device 4 used in this embodiment is a pair of mixed reality glasses; the three-dimensional image display device 4 loads the patient's three-dimensional medical image and the corresponding spatial coordinates of the whole point cloud, and the three-dimensional image information processing module 42 processes the spatial coordinates of the three-dimensional medical image so that the position at which the three-dimensional medical image is displayed on the three-dimensional image presentation module 41 is consistent with its spatial coordinates in the coordinate system of the surgical navigation system 3;
at this point, by wearing the mixed reality glasses, the doctor sees the patient's virtual three-dimensional medical image in the same space as the patient's real body, precisely coincident with the characteristic region on the patient's body, which assists the doctor in performing the operation.

Claims (7)

1. An operation auxiliary system applying a method for real-time coincidence of a three-dimensional medical image and a patient is characterized by comprising the following components:
a surgical navigation system;
a depth image acquisition device;
a three-dimensional image display device;
the depth image acquisition device is provided with a calibration piece which can be identified by the surgical navigation system;
the operation navigation system, the depth image acquisition device and the three-dimensional image display device are respectively connected with a data processing device.
2. A surgical assistant system applying a method for real-time coincidence between a three-dimensional medical image and a patient according to claim 1, wherein the data processing device comprises a memory and a processor.
3. The operation assisting system of claim 1, wherein the three-dimensional image display device is an augmented reality display device, which presents three-dimensional images based on AR or MR technology, and the augmented reality display device comprises a three-dimensional image presenting module;
a three-dimensional image information processing module;
a head-mounted device;
the three-dimensional image presentation module can import and display a three-dimensional image;
the three-dimensional image information processing module is used for processing the real-time space coordinate of the three-dimensional medical image so as to enable the state and position relation of the three-dimensional medical image presented in the three-dimensional image presentation module to be matched with the space coordinate of the three-dimensional medical image in real time;
the real-time space coordinate of the three-dimensional medical image is acquired by a method of real-time coincidence of the three-dimensional medical image and a patient;
the head-mounted device is connected with the three-dimensional image presentation module.
4. The operation auxiliary system applying a method for real-time coincidence of a three-dimensional medical image and a patient according to claim 1, wherein at least 4 coplanar calibration pieces are mounted on the depth image acquisition device.
5. The operation auxiliary system applying a method for real-time coincidence of a three-dimensional medical image and a patient according to claim 4, wherein the distance between the calibration pieces is greater than 2 cm.
6. The operation auxiliary system applying a method for real-time coincidence of a three-dimensional medical image and a patient according to any one of claims 1, 4 and 5, wherein the calibration piece and the depth image acquisition device are detachably connected by a fixing piece.
7. The operation auxiliary system applying a method for real-time coincidence of a three-dimensional medical image and a patient according to claim 6, wherein the calibration piece is a sphere with a diameter greater than 1 cm.
CN202022478672.7U 2020-11-02 2020-11-02 Operation auxiliary system applying three-dimensional medical image and patient real-time coincidence method Active CN214157490U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202022478672.7U CN214157490U (en) 2020-11-02 2020-11-02 Operation auxiliary system applying three-dimensional medical image and patient real-time coincidence method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202022478672.7U CN214157490U (en) 2020-11-02 2020-11-02 Operation auxiliary system applying three-dimensional medical image and patient real-time coincidence method

Publications (1)

Publication Number Publication Date
CN214157490U true CN214157490U (en) 2021-09-10

Family

ID=77598768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202022478672.7U Active CN214157490U (en) 2020-11-02 2020-11-02 Operation auxiliary system applying three-dimensional medical image and patient real-time coincidence method

Country Status (1)

Country Link
CN (1) CN214157490U (en)

Similar Documents

Publication Publication Date Title
CN112168346A (en) Method for real-time coincidence of three-dimensional medical image and patient and operation auxiliary system
EP2583244B1 (en) Method of determination of access areas from 3d patient images
US7715602B2 (en) Method and apparatus for reconstructing bone surfaces during surgery
CN102784003B (en) Pediculus arcus vertebrae internal fixation operation navigation system based on structured light scanning
CN202751447U (en) Vertebral pedicle internal fixation surgical navigation system based on structured light scanning
Grimson et al. Clinical experience with a high precision image-guided neurosurgery system
WO2012045626A1 (en) Image projection system for projecting image on the surface of an object
US20220202493A1 (en) Alignment of Medical Images in Augmented Reality Displays
WO2008035271A2 (en) Device for registering a 3d model
WO2023026229A1 (en) Registration and registration validation in image-guided surgery
CN115105207A (en) Operation holographic navigation method and system based on mixed reality
WO2016082017A1 (en) Method, system and apparatus for quantitative surgical image registration
WO2020145826A1 (en) Method and assembly for spatial mapping of a model, such as a holographic model, of a surgical tool and/or anatomical structure onto a spatial position of the surgical tool respectively anatomical structure, as well as a surgical tool
Edwards et al. Neurosurgical guidance using the stereo microscope
US11160610B2 (en) Systems and methods for soft tissue navigation
US11540887B2 (en) Technique for providing user guidance in surgical navigation
EP1465541B1 (en) Method and apparatus for reconstructing bone surfaces during surgery
Harders et al. Multimodal augmented reality in medicine
CN214157490U (en) Operation auxiliary system applying three-dimensional medical image and patient real-time coincidence method
Paloc et al. Computer-aided surgery based on auto-stereoscopic augmented reality
CN115105204A (en) Laparoscope augmented reality fusion display method
Wang et al. Towards video guidance for ultrasound, using a prior high-resolution 3D surface map of the external anatomy
CN111631814A (en) Intraoperative blood vessel three-dimensional positioning navigation system and method
CN115363751B (en) Intraoperative anatomical structure indication method
US20240104853A1 (en) Method and device for providing surgical guide using augmented reality

Legal Events

Date Code Title Description
GR01 Patent grant