CN112107366B - Mixed reality ultrasonic navigation system - Google Patents

Mixed reality ultrasonic navigation system

Info

Publication number: CN112107366B
Application number: CN202010715410.XA
Authority: CN (China)
Prior art keywords: point, tracker, mixed reality, tracking, image
Other languages: Chinese (zh)
Other versions: CN112107366A
Inventors: 张嘉伟, 陈亮, 韩曼曼, 赵泉洲
Assignee: Changzhou Jinser Medical Information Technology Co., Ltd.
Filed: 2020-07-23 by Changzhou Jinser Medical Information Technology Co., Ltd.
Published as CN112107366A on 2020-12-22; granted and published as CN112107366B on 2021-08-10
Legal status: Active

Classifications

    • A: Human necessities
    • A61: Medical or veterinary science; hygiene
    • A61B: Diagnosis; surgery; identification
    • A61B 34/00: Computer-aided surgery; manipulators or robots specially adapted for use in surgery
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/101: Computer-aided simulation of surgical operations
    • A61B 2034/102: Modelling of surgical devices, implants or prosthesis
    • A61B 2034/2046: Tracking techniques
    • A61B 2034/2063: Acoustic tracking systems, e.g. using ultrasound

Abstract

The invention relates to a mixed reality ultrasound navigation system that employs a mixed reality device, a tracking device, a hybrid tracker and ultrasound scanning, the hybrid tracker being mounted on the ultrasound probe. The mixed reality device receives a three-dimensional simulation model and projects it onto the actual spatial position of the patient. The system fuses the intraoperative ultrasound image with the preoperative CT image in real time over the patient's operative field in the real spatial scene, so that the doctor's visual field need not leave the operative field; ultrasound provides auxiliary positioning, realizing a radiation-free mixed reality ultrasound navigation method.

Description

Mixed reality ultrasonic navigation system
Technical Field
The invention relates to the technical field of medical instruments, in particular to a mixed reality ultrasonic navigation system.
Background
Conventional surgical navigation systems fall into two main types. The first performs intraoperative navigation relying only on preoperative CT and MRI images; such systems have difficulty tracking and reflecting elastic human tissues or organs in real time, because tissues and organs deform elastically on contact with surgical instruments, so the preoperative medical image data cannot be fully matched to the actual intraoperative situation. The second performs image-guided navigation by registering and fusing preoperative CT and MRI images with real-time intraoperative medical images.
In existing practice, the affected part is imaged before the operation with medical imaging equipment such as CT, nuclear magnetic resonance, B-mode ultrasound or X-ray, a 2D planar image is displayed on a screen, and the doctor imagines the spatial position and structure of the affected part from the 2D image and operates accordingly. In addition, conventional navigation systems require the doctor to keep a computer screen in view during the operation to observe the navigation result, and sometimes require continuous X-ray scanning that introduces a radiation dose, causing harm to both doctor and patient.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provide a mixed reality ultrasound navigation system that fuses the intraoperative ultrasound image with the preoperative CT image in real time over the patient's operative field in a real spatial scene, so that the doctor's visual field need not leave the operative field.
The technical scheme realizing the purpose of the invention is as follows: a mixed reality ultrasound navigation system has an ultrasound probe, a hybrid tracker, a tracking device and a mixed reality device worn on the head of the doctor; the hybrid tracker is mounted on the ultrasound probe; the mixed reality device receives the three-dimensional simulation model and projects it onto the actual spatial position of the patient. The specific steps are as follows:
(1) performing preoperative CT medical image scanning on a patient to obtain a preoperative CT image sequence;
(2) fixing the position of the tracking device, and obtaining through the tracking device the spatial orientation of the ultrasound imaging in the tracking-device coordinate system w, specifically: start the tracking device and track the orientation of the ultrasound probe through the hybrid tracker, obtaining the orientation transformation matrix $T^{w}_{p}$; meanwhile, the ultrasound probe scans the registration module, and from the marker points of the registration module detected by the probe, combined with the orientation of the hybrid tracker, the transformation matrix $T^{p}_{us}$ between the ultrasound probe fitted with the hybrid tracker and the expected imaging position is calculated, whereby the transformation matrix between the tracking device and the ultrasound imaging is obtained as

$$T^{w}_{us} = T^{w}_{p}\,T^{p}_{us};$$
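For illustration, the composition of homogeneous transforms in step (2) can be sketched in a few lines of Python; the frame names mirror the notation above, while the numeric poses and the helper name make_pose are placeholders, not values from the patent:

```python
import numpy as np

def make_pose(r, t):
    """Assemble a 4x4 homogeneous orientation matrix from a 3x3 rotation and a translation."""
    m = np.eye(4)
    m[:3, :3], m[:3, 3] = r, t
    return m

# Illustrative placeholder poses:
T_w_p = make_pose(np.eye(3), [100.0, 50.0, 0.0])    # hybrid tracker as seen by the tracking device
T_p_us = make_pose(np.eye(3), [0.0, -20.0, -5.0])   # expected imaging position in the tracker frame

T_w_us = T_w_p @ T_p_us                             # ultrasound imaging in frame w
print(T_w_us[:3, 3])                                # -> [100.  30.  -5.]
```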
(3) with tracking active, sweeping the ultrasound probe continuously over the patient from top to bottom to obtain an intraoperative ultrasound image sequence referenced to the origin and coordinate system w of the tracking device;
(4) selecting at least three feature points each from the intraoperative ultrasound image sequence and the preoperative CT image sequence, and calculating the registration transformation matrix $T^{us}_{ct}$ from the preoperative CT sequence to the intraoperative ultrasound sequence, thereby realizing the fusion of the preoperative CT image sequence and the intraoperative ultrasound image sequence;
(5) transmitting the spatial information obtained by the tracking device to the mixed reality device; based on the coordinate transformation matrix $T^{ms}_{w}$ from the tracking device to the mixed reality device and the transformation matrix $T^{w}_{us}$ between the tracking device and the ultrasound imaging, the mixed reality device calculates the intraoperative ultrasound image orientation matrix with the mixed reality device as the coordinate center,

$$T^{ms}_{us} = T^{ms}_{w}\,T^{w}_{us},$$

so that the wearer of the mixed reality device sees the fused image of intraoperative ultrasound and preoperative CT in real time in the real scene, with the image located at the actual position of the patient.
The registration module comprises a tracker and a series of marker points whose positions are fixed relative to the registration module's tracker; that is, the orientations of these marker points are derived from the tracker's orientation matrix $T^{w}_{c}$.
The step (2) in the above technical scheme is specifically:
A. first obtain the spatial positions $(a_p, b_p, c_p, d_p)$ of the marker points detectable by the ultrasound probe at the expected position of the ultrasound image;
B. acquire the positions of the corresponding detectable marker points in the original tracking position of the ultrasound image;
C. derive the spatial positions of the detectable marker points in the original tracking position of the ultrasound image;
D. calculate the transformation matrix $T^{p}_{us}$ between the ultrasound probe fitted with the hybrid tracker and the expected imaging position.
Step A of the technical scheme is specifically: denote the left point of the first row of the registration module's marker points as point a and the right point of the first row as point b; the point of the second row is c; the point of the third row is d. The four points a, b, c, d lie in one plane parallel to the $z_c y_c$ plane of the tracker on the registration module; the distance between the $z_c y_c$ plane and the plane of points a, b, c, d is h, the distance from the tracker to point a and from point a to point b is k in each case, and the spacing between the first and second rows and between the second and third rows is v. Since the orientation transformation matrix $T^{w}_{c}$ of the tracker on the registration module is known, the position of point a in the coordinate system p of the hybrid tracker follows from $T^{w}_{p}\,a_p = T^{w}_{c}\,a_c$ as

$$a_p = \left(T^{w}_{p}\right)^{-1} T^{w}_{c}\,a_c,$$

where $\left(T^{w}_{p}\right)^{-1}$ is the inverse matrix of $T^{w}_{p}$; the positions $b_p$, $c_p$ and $d_p$ are derived in the same way, the coordinates $a_c$, $b_c$, $c_c$, $d_c$ of the four points in the registration-module tracker frame being written down directly from the known offsets h, k and v.
Step B is specifically: identify in the image the imaging pixel positions of the four marker points a', b', c', d' in the original tracking position of the ultrasound image, and, combining the physical length and width represented by each pixel of the ultrasound image, obtain the distances from each imaging position to the upper and left boundaries of the image, namely Va', Ha'; Vb', Hb'; Vc', Hc'; Vd', Hd';
Step C is specifically: since the real-time tracking position of the ultrasound image is set at the hybrid tracker on the ultrasound probe, the image at the original tracking position lies in the $z_p y_p$ plane of the hybrid tracker with its upper-left corner at the tracker's own coordinate origin; in the coordinate system p, the spatial positions of the four marker points a', b', c', d', i.e. the distances from each point to the image edges, are: $a'_p = [0, -V_{a'}, -H_{a'}, 1]^T$; $b'_p = [0, -V_{b'}, -H_{b'}, 1]^T$; $c'_p = [0, -V_{c'}, -H_{c'}, 1]^T$; $d'_p = [0, -V_{d'}, -H_{d'}, 1]^T$.
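A minimal sketch of the pixel-to-space conversion used in steps B and C, assuming square pixels of known physical size; the function name and the sample pixel indices are illustrative, not from the patent:

```python
import numpy as np

def marker_position_p(row, col, pixel_h_mm, pixel_w_mm):
    """Homogeneous position of an imaged marker point in the hybrid-tracker frame p.

    The image at the original tracking position lies in the z_p y_p plane with its
    upper-left corner at the tracker origin, so the distances V (to the upper
    boundary) and H (to the left boundary) map to -y_p and -z_p respectively."""
    V = row * pixel_h_mm   # physical distance from the upper image boundary
    H = col * pixel_w_mm   # physical distance from the left image boundary
    return np.array([0.0, -V, -H, 1.0])

# e.g. marker a' detected at pixel (row=120, col=340) with 0.1 mm pixels (illustrative)
a_prime_p = marker_position_p(120, 340, 0.1, 0.1)   # -> [0., -12., -34., 1.]
```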
Step D is specifically: from the one-to-one correspondence between $(a'_p, b'_p, c'_p, d'_p)$ and $(a_p, b_p, c_p, d_p)$, calculate the transformation matrix $T^{p}_{us}$; the correspondence is

$$a_p = T^{p}_{us}\,a'_p,\quad b_p = T^{p}_{us}\,b'_p,\quad c_p = T^{p}_{us}\,c'_p,\quad d_p = T^{p}_{us}\,d'_p.$$

Since the four marker points $(a'_p, b'_p, c'_p, d'_p)$ are carried into $(a_p, b_p, c_p, d_p)$ by a rigid change, $T^{p}_{us}$ contains only translation and rotation operations and no scaling operations.
In step D of the above technical solution, the matrix is converted
Figure GDA00031167615900000316
The solving method is as follows,
e) firstly, (a'p,b′p,c′p,d′p) A 'as a whole by translational and rotational operations'pAnd apPoint superposition;
f) then traversing the x, y, z three-axis rotation respectively to obtain transformed (a'p,b′p,c′p,d′p) And (a)p,bp,cp,dp) The combined error of (2) is minimal;
g) then fine-tuning the translation position to further reduce the comprehensive error;
h) finally obtaining
Figure GDA0003116761590000041
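A minimal numpy sketch of this coarse-to-fine search, under the reading that steps e) to h) pin the first point pair, grid-search the three rotation angles, and then re-fit the translation; the 10 degree grid step and the function names are illustrative choices, not from the patent:

```python
import numpy as np
from itertools import product

def euler_matrix(ax, ay, az):
    """Rotation about x, then y, then z (angles in radians)."""
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return rz @ ry @ rx

def rigid_fit(src, dst, step_deg=10.0):
    """Search for a rigid transform T (rotation + translation, no scaling)
    with dst ~= T @ src: pin the first source point onto the first target
    point, traverse rotations about x, y, z for the smallest combined error,
    then fine-tune the translation. Returns a 4x4 homogeneous matrix."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    a_src, a_dst = src[0], dst[0]
    angles = np.deg2rad(np.arange(0.0, 360.0, step_deg))
    best_r, best_err = np.eye(3), np.inf
    for ax, ay, az in product(angles, repeat=3):
        r = euler_matrix(ax, ay, az)
        moved = (src - a_src) @ r.T + a_dst          # rotate about the pinned point
        err = np.linalg.norm(moved - dst, axis=1).sum()
        if err < best_err:
            best_err, best_r = err, r
    # fine-tune translation: optimal shift for the chosen rotation
    t = dst.mean(axis=0) - best_r @ src.mean(axis=0)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = best_r, t
    return T
```

The same routine applies unchanged to the registration of step (4) below, with the CT points as source and the ultrasound points as target; in practice the grid search could be replaced by the closed-form SVD (Kabsch) solution for the best rigid fit.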
The step (4) in the above technical scheme is specifically:
a. select three or more marker points in the preoperative CT image sequence and obtain their preoperative spatial positions $(a_{ct}, b_{ct}, c_{ct}, \ldots)$;
b. select in the intraoperative ultrasound image sequence the points corresponding anatomically to $(a_{ct}, b_{ct}, c_{ct}, \ldots)$ and obtain their spatial positions $(a_{us}, b_{us}, c_{us}, \ldots)$;
c. first translate and rotate $(a_{ct}, b_{ct}, c_{ct}, \ldots)$ as a whole so that $a_{ct}$ coincides with $a_{us}$;
d. then traverse rotations about the x, y and z axes so that the combined error between the transformed $(a_{ct}, b_{ct}, c_{ct}, \ldots)$ and $(a_{us}, b_{us}, c_{us}, \ldots)$ is minimal;
e. then fine-tune the translation to further reduce the combined error, obtaining $T^{us}_{ct}$, using the same search sketched above.
In the step (5) of the above technical solution, a coordinate transformation matrix of the mixed reality device and the tracking device is obtained
Figure GDA0003116761590000043
The method specifically comprises the following steps: position matrix obtained by identifying tracking hybrid tracker by hybrid reality device and tracking device in same time
Figure GDA0003116761590000044
And
Figure GDA0003116761590000045
the calculation results in that,
Figure GDA0003116761590000046
wherein
Figure GDA0003116761590000047
Is composed of
Figure GDA0003116761590000048
C represents a fixed value, indicating that it does not change with changes in the position of the object.
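A sketch of this calibration, assuming both devices report 4x4 orientation matrices for the same simultaneous sighting of the hybrid tracker; the variable names are illustrative:

```python
import numpy as np

def tracking_to_mixed_reality(T_ms_p, T_w_p):
    """T_ms_w = T_ms_p @ inv(T_w_p) from one simultaneous sighting of the
    hybrid tracker by both devices; the result is the constant C for as long
    as neither device moves."""
    return T_ms_p @ np.linalg.inv(T_w_p)

# Two sightings at different probe poses should give (numerically) the same C:
# C1 = tracking_to_mixed_reality(T_ms_p_t1, T_w_p_t1)
# C2 = tracking_to_mixed_reality(T_ms_p_t2, T_w_p_t2)
# assert np.allclose(C1, C2, atol=1e-6)
```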
The technical scheme tracks a medical instrument fitted with an instrument tracker during the operation, and calculates in the mixed reality device coordinate system ms the coordinate matrix $T^{ms}_{m}$ of the medical tracker on the medical instrument relative to the mixed reality device and the operative field, specifically: based on the orientation matrix $T^{w}_{m}$ of the medical tracker on the medical instrument detected in tracking-device coordinates, the coordinate transformation matrix of the medical tracker on the medical instrument relative to the mixed reality device and the operative field is

$$T^{ms}_{m} = T^{ms}_{w}\,T^{w}_{m}.$$
After the above technical scheme is adopted, the invention has the following positive effects:
(1) Through the mixed reality device, the tracking device, the tracker and ultrasound scanning, the invention fuses the intraoperative ultrasound image with the preoperative CT image in real time over the patient's operative field in the real spatial scene, so that the doctor's visual field need not leave the operative field; ultrasound provides auxiliary positioning, realizing a radiation-free mixed reality ultrasound navigation method.
(2) The invention integrates the fusion of intraoperative ultrasound and preoperative CT into a three-dimensional surgical navigation system. The operator can adjust the trajectory of a medical instrument according to a pre-planned three-dimensional surgical path to achieve accurate surgical treatment, reducing the surgical risk caused by errors in traditional experience-based positioning and improving surgical accuracy; compared with traditional surgery it is more efficient, more intuitive, safer and more functional.
Drawings
In order that the present disclosure may be more readily and clearly understood, reference is now made to the following detailed description of the present disclosure taken in conjunction with the accompanying drawings, in which
FIG. 1 is a schematic diagram of a use scenario of the present invention;
FIG. 2 is a schematic diagram of coordinate transformation among the hybrid tracker, the mixed reality device and the tracking device according to the present invention;
FIG. 3 is a schematic diagram of an original tracking position of an ultrasound image and a desired position of an ultrasound image according to the present invention;
FIG. 4 is a schematic view of a scanning trajectory of an ultrasonic probe according to the present invention;
FIG. 5 is a schematic diagram illustrating registration transformation between an intraoperative ultrasound image sequence and a preoperative CT image sequence in accordance with the present invention;
FIG. 6 is a schematic view of embodiment 2 of the present invention;
FIG. 7 is a schematic spatial relationship diagram according to embodiment 3 of the present invention;
FIG. 8 is a schematic view of example 3 of the present invention;
fig. 9 is a schematic diagram of registration conversion between an intraoperative ultrasound image sequence and a preoperative CT image sequence according to embodiment 3 of the present invention.
Detailed Description
(example 1)
Referring to figs. 1-9, the present invention has an ultrasound probe, a hybrid tracker, a tracking device, and a mixed reality device worn on the head of a physician; the hybrid tracker is mounted on the ultrasound probe; the mixed reality device receives the three-dimensional simulation model and projects it onto the actual spatial position of the patient. The specific steps are as follows:
(1) performing preoperative CT medical image scanning on a patient to obtain a preoperative CT image sequence;
(2) fixing the position of the tracking device, and obtaining through the tracking device the spatial orientation of the ultrasound imaging in the tracking-device coordinate system w, specifically: turn on the tracking device and use the hybrid tracker to track the orientation of the ultrasound probe, obtaining the orientation transformation matrix $T^{w}_{p}$; meanwhile, the ultrasound probe scans the registration module, and from the marker points of the registration module detected by the probe, combined with the orientation of the hybrid tracker, the transformation matrix $T^{p}_{us}$ between the ultrasound probe fitted with the hybrid tracker and the expected imaging position is calculated, whereby the transformation matrix between the tracking device and the ultrasound imaging is obtained as

$$T^{w}_{us} = T^{w}_{p}\,T^{p}_{us};$$
(3) with tracking active, sweeping the ultrasound probe continuously over the patient from top to bottom to obtain an intraoperative ultrasound image sequence referenced to the origin and coordinate system w of the tracking device, see fig. 4;
(4) selecting at least three feature points each from the intraoperative ultrasound image sequence and the preoperative CT image sequence, and calculating the registration transformation matrix $T^{us}_{ct}$ from the preoperative CT sequence to the intraoperative ultrasound sequence, thereby realizing the fusion of the preoperative CT image sequence and the intraoperative ultrasound image sequence, as shown in fig. 5; the us coordinate system coincides with the w coordinate system of the tracking device, so the registration transformation matrix $T^{us}_{ct}$ can also be written $T^{w}_{ct}$;
(5) referring to fig. 2, the spatial information obtained by the tracking device is transmitted to the mixed reality device; based on the coordinate transformation matrix $T^{ms}_{w}$ from the tracking device to the mixed reality device and the transformation matrix $T^{w}_{us}$ between the tracking device and the ultrasound imaging, the mixed reality device calculates the intraoperative ultrasound image orientation matrix with the mixed reality device as the coordinate center,

$$T^{ms}_{us} = T^{ms}_{w}\,T^{w}_{us},$$

so that the wearer of the mixed reality device sees the fused image of intraoperative ultrasound and preoperative CT in real time in the real scene, with the image located at the actual position of the patient.
The coordinate origin of the mixed reality device in the invention is fixed at device start-up; movement of the device does not change the position of this origin, and the mixed reality device has SLAM spatial localization capability, so once $T^{ms}_{w}$ has been determined, its value does not change thereafter.
The registration module comprises a tracker and a series of marker points, namely the four marker points a, b, c and d, whose positions are fixed relative to the registration module's tracker; that is, the orientations of these marker points are derived from the tracker's orientation matrix $T^{w}_{c}$.
The step (2) is specifically as follows:
A. first obtain the spatial positions $(a_p, b_p, c_p, d_p)$ of the marker points detectable by the ultrasound probe at the expected position of the ultrasound image, specifically: denote the left point of the first row of the registration module's marker points as point a and the right point of the first row as point b; the point of the second row is c; the point of the third row is d. The four points a, b, c, d lie in one plane parallel to the $z_c y_c$ plane of the tracker on the registration module; the distance between the $z_c y_c$ plane and the plane of points a, b, c, d is h, the distance from the tracker to point a and from point a to point b is k in each case, and the spacing between the first and second rows and between the second and third rows is v. Since the orientation transformation matrix $T^{w}_{c}$ of the tracker on the registration module is known, the position of point a in the coordinate system p of the hybrid tracker follows from $T^{w}_{p}\,a_p = T^{w}_{c}\,a_c$ as

$$a_p = \left(T^{w}_{p}\right)^{-1} T^{w}_{c}\,a_c,$$

where $\left(T^{w}_{p}\right)^{-1}$ is the inverse matrix of $T^{w}_{p}$; the positions $b_p$, $c_p$ and $d_p$ are derived in the same way, the coordinates $a_c$, $b_c$, $c_c$, $d_c$ of the four points in the registration-module tracker frame being written down directly from the known offsets h, k and v;
B. acquire the positions of the corresponding detectable marker points in the original tracking position of the ultrasound image, specifically: identify in the image the imaging pixel positions of the four marker points a', b', c', d' in the original tracking position of the ultrasound image, and, combining the physical length and width represented by each pixel of the ultrasound image, obtain the distances from each imaging position to the upper and left boundaries of the image, namely Va', Ha'; Vb', Hb'; Vc', Hc'; Vd', Hd';
C. derive the spatial positions of the detectable marker points in the original tracking position of the ultrasound image, specifically: since the real-time tracking position of the ultrasound image is set at the hybrid tracker on the ultrasound probe, the image at the original tracking position lies in the $z_p y_p$ plane of the hybrid tracker with its upper-left corner at the tracker's own coordinate origin; in the coordinate system p, the spatial positions of the four marker points a', b', c', d', i.e. the distances from each point to the image edges, are: $a'_p = [0, -V_{a'}, -H_{a'}, 1]^T$; $b'_p = [0, -V_{b'}, -H_{b'}, 1]^T$; $c'_p = [0, -V_{c'}, -H_{c'}, 1]^T$; $d'_p = [0, -V_{d'}, -H_{d'}, 1]^T$;
D. calculate the transformation matrix $T^{p}_{us}$ between the ultrasound probe fitted with the hybrid tracker and the expected imaging position, specifically: from the one-to-one correspondence between $(a'_p, b'_p, c'_p, d'_p)$ and $(a_p, b_p, c_p, d_p)$, calculate the transformation matrix $T^{p}_{us}$; the correspondence is

$$a_p = T^{p}_{us}\,a'_p,\quad b_p = T^{p}_{us}\,b'_p,\quad c_p = T^{p}_{us}\,c'_p,\quad d_p = T^{p}_{us}\,d'_p.$$

Since the four marker points $(a'_p, b'_p, c'_p, d'_p)$ are carried into $(a_p, b_p, c_p, d_p)$ by a rigid change, $T^{p}_{us}$ contains only translation and rotation operations and no scaling operations.
Wherein the conversion matrix
$T^{p}_{us}$ is solved by first translating and rotating $(a'_p, b'_p, c'_p, d'_p)$ as a whole so that $a'_p$ coincides with $a_p$; then traversing rotations about the x, y and z axes so that the combined error between the transformed $(a'_p, b'_p, c'_p, d'_p)$ and $(a_p, b_p, c_p, d_p)$ is minimal; then fine-tuning the translation to further reduce the combined error; and finally obtaining $T^{p}_{us}$.
The step (4) is specifically as follows:
a. select three or more marker points in the preoperative CT image sequence and obtain their preoperative spatial positions $(a_{ct}, b_{ct}, c_{ct}, \ldots)$;
b. select in the intraoperative ultrasound image sequence the points corresponding anatomically to $(a_{ct}, b_{ct}, c_{ct}, \ldots)$ and obtain their spatial positions $(a_{us}, b_{us}, c_{us}, \ldots)$;
c. first translate and rotate $(a_{ct}, b_{ct}, c_{ct}, \ldots)$ as a whole so that $a_{ct}$ coincides with $a_{us}$;
d. then traverse rotations about the x, y and z axes so that the combined error between the transformed $(a_{ct}, b_{ct}, c_{ct}, \ldots)$ and $(a_{us}, b_{us}, c_{us}, \ldots)$ is minimal;
e. then fine-tune the translation to further reduce the combined error, obtaining $T^{us}_{ct}$.
In the step (5), referring to fig. 2, a coordinate transformation matrix of the mixed reality device and the tracking device is obtained
Figure GDA0003116761590000088
The method specifically comprises the following steps: position matrix obtained by identifying tracking hybrid tracker by hybrid reality device and tracking device in same time
Figure GDA0003116761590000089
And
Figure GDA00031167615900000810
is calculated because
Figure GDA00031167615900000811
Thereby obtaining
Figure GDA00031167615900000812
Wherein
Figure GDA00031167615900000813
Is composed of
Figure GDA00031167615900000814
C represents a fixed value, indicating that it does not change with changes in the position of the object.
(example 2)
Referring to fig. 6, using the method of the mixed reality ultrasound navigation system of embodiment 1, a medical instrument fitted with an instrument tracker is tracked during the operation, and the coordinate matrix $T^{ms}_{m}$ of the medical tracker on the medical instrument relative to the mixed reality device and the operative field is calculated in the mixed reality device coordinate system ms, specifically: based on the orientation matrix $T^{w}_{m}$ of the medical tracker on the medical instrument detected in tracking-device coordinates, and since $T^{ms}_{w}$ is already known, the coordinate transformation matrix of the medical tracker on the medical instrument relative to the mixed reality device and the operative field is

$$T^{ms}_{m} = T^{ms}_{w}\,T^{w}_{m}.$$

If subsequent calculation for the medical instrument is needed, for example if the distance from the tip of the medical instrument to the center of its tracker is l, the orientation matrix of the tip is obtained by composing $T^{ms}_{m}$ with the fixed translation of length l from the tracker center to the tip, as sketched below.
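A sketch of the tip calculation; the patent fixes only the distance l from the tracker center to the tip, so the choice of local axis here is an assumption for illustration:

```python
import numpy as np

def tip_pose(T_ms_m, l, axis=0):
    """Pose of the instrument tip in the mixed reality frame ms.

    Assumes (an illustrative choice, the patent does not fix the axis) that the
    tip lies a distance l from the tracker centre along one local axis of the
    medical tracker; compose the tracked pose with that fixed local offset."""
    T_m_tip = np.eye(4)
    T_m_tip[axis, 3] = l     # constant translation in the tracker's own frame
    return T_ms_m @ T_m_tip
```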
(example 3)
For ease of understanding, the mixed reality device and the tracking device are placed in a simple space together with the tracked object.
An ultrasound probe with a hybrid tracker is placed in the position shown in fig. 7; the $z_w x_w$ plane, the $z_{ms} x_{ms}$ plane and the $z_p x_p$ plane completely overlap. At this time $T^{ms}_{w}$ (a 180 degree rotation about z plus a translation) and $T^{w}_{p}$ are both known, and therefore $T^{ms}_{p} = T^{ms}_{w}\,T^{w}_{p}$. The hybrid tracker is mounted exactly parallel to the imaging plane of the ultrasound probe, so $T^{p}_{us}$ contains only a translation operation, and during registration the $z_p x_p$ plane and the $z_c x_c$ plane completely overlap.
Fig. 8 is a schematic diagram, and the spatial relationship is based on fig. 7.
$T^{w}_{c}$ is known (it includes a 180 degree rotation about z), so the spatial position of point a in the coordinate system p is

$$a_p = \left(T^{w}_{p}\right)^{-1} T^{w}_{c}\,a_c.$$

Meanwhile, the spatial position of point a' in the coordinate system p is clearly $a'_p = [0, -V_{a'}, -H_{a'}, 1]^T$.

The placement here is ideal, so in practice $T^{p}_{us}$ can be calculated simply by translating point a to point a'; the minimal-error transformation is thereby already reached, and no further rotation is needed to solve for the optimal transformation. From $a_p = T^{p}_{us}\,a'_p$, and since $H_{a'} - k - L_3$ is almost equal to 0, the pure translation $T^{p}_{us}$ is obtained.
So far, we have the spatial orientation of ultrasonic imaging under the coordinate system w of the tracking device obtained by the tracking device, namely
Figure GDA0003116761590000108
For example, the current orientation of the ultrasound probe is
Figure GDA0003116761590000109
Then
Figure GDA0003116761590000111
Then the ultrasound probe is swept continuously over the patient from top to bottom to obtain an ultrasound image sequence; viewed abstractly, the ultrasound sequence images and the CT sequence images each lie within a cuboid carrying spatial position attributes, as shown in fig. 9.
It is assumed here that the patient's intraoperative position is with the head toward the tracking device, i.e. the foot-to-head direction coincides with $z_w$; the three feature points selected in the intraoperative ultrasound sequence lie in the $z_w x_w$ plane, and the patient's front-to-back direction coincides with $y_w$, i.e. the intraoperative ultrasound image sequence in fig. 9 shows the patient lying on his side.
The three preoperative feature points lie in the $z_{CT} x_{CT}$ plane at $y_{CT} = L$; call these three points $a_{CT}$, $b_{CT}$, $c_{CT}$, and the corresponding three intraoperative points $a_w$, $b_w$, $c_w$. So $a_{CT} = [x_{cta}, L, z_{cta}, 1]^T$, $b_{CT} = [x_{ctb}, L, z_{ctb}, 1]^T$, $c_{CT} = [x_{ctc}, L, z_{ctc}, 1]^T$; $a_w = [x_{wa}, 0, z_{wa}, 1]^T$, $b_w = [x_{wb}, 0, z_{wb}, 1]^T$, $c_w = [x_{wc}, 0, z_{wc}, 1]^T$. The position information of these points can be acquired.
Since the patient position directions of the preoperative CT and the intraoperative ultrasound are consistent, $T^{us}_{ct}$ can be calculated simply by translating $a_{CT}$ onto $a_w$; the minimal-error transformation is thereby already reached, and no further rotation is needed to solve for the optimal transformation. From $a_w = T^{us}_{ct}\,a_{CT}$ it follows that $T^{us}_{ct}$ is the pure translation by

$$[x_{wa} - x_{cta},\ -L,\ z_{wa} - z_{cta}]^T.$$
As a test, we simulate shifting the marked point by 5 in the $y_{CT}$ direction (indicating a deviation toward the patient's back), i.e. the new point position is $newA_{CT} = [x_{cta}, L+5, z_{cta}, 1]^T$; from the registration transformation matrix its intraoperative position is

$$T^{us}_{ct}\,newA_{CT} = [x_{wa},\ 5,\ z_{wa},\ 1]^T,$$

i.e. the point is also offset by 5 toward the patient's back during surgery, consistent with the preoperative direction, which is correct.
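The same check can be run numerically; the coordinate values below are sample numbers standing in for the symbols, chosen only to make the arithmetic visible:

```python
import numpy as np

# Worked check of the pure-translation registration:
x_cta, L, z_cta = 40.0, 25.0, 60.0        # preoperative point a_CT (illustrative values)
x_wa, z_wa = 10.0, 80.0                   # corresponding intraoperative point a_w

T_us_ct = np.eye(4)
T_us_ct[:3, 3] = [x_wa - x_cta, -L, z_wa - z_cta]   # translate a_CT onto a_w

new_a_ct = np.array([x_cta, L + 5.0, z_cta, 1.0])   # shift 5 toward the patient's back
print(T_us_ct @ new_a_ct)                            # -> [10.  5. 80.  1.] = [x_wa, 5, z_wa, 1]
```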
After the registration process is completed, we have already fused preoperative CT and intraoperative ultrasound based on the coordinate system w.
Finally, the spatial information obtained by the tracking device is transmitted to the mixed reality device, which uses $T^{ms}_{w}$, the coordinate transformation matrix from the tracking device to the mixed reality device, to calculate the actual position of the object as seen by the mixed reality device, so that the virtual object can be superimposed on the correct position in real space.
Here, taking the previous fig. 7 as an example, suppose the ultrasound probe is moved rightward by 3 and downward by 2; we then calculate where in space the real-time ultrasound image should be placed from the perspective of the mixed reality device.
At this time the tracking device reports the updated probe orientation $T^{w}_{p}$, and $T^{p}_{us}$ is known. Since $T^{w}_{us} = T^{w}_{p}\,T^{p}_{us}$, the new $T^{w}_{us}$ is obtained; and since $T^{ms}_{us} = T^{ms}_{w}\,T^{w}_{us}$ with $T^{ms}_{w}$ known, $T^{ms}_{us}$ is obtained. It can be seen that the intraoperative ultrasound image is correct as seen from the mixed reality device: it is offset by $n - h - q - 2$ on its x-axis and by $L_1 - L_2 - 3$ on its z-axis, without any rotation. A numeric sanity check of this chain is sketched below.
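The stand-in offsets below are illustrative, since the actual figure distances (q, h, n, L1, L2, L3) are read off fig. 7 and are not recoverable from the text; the check confirms that the two 180 degree z-rotations cancel, leaving the ultrasound image unrotated in the mixed reality frame:

```python
import numpy as np

def rz180(tx, ty, tz):
    """Homogeneous pose: 180 degree rotation about z plus a translation."""
    m = np.diag([-1.0, -1.0, 1.0, 1.0])
    m[:3, 3] = [tx, ty, tz]
    return m

T_ms_w = rz180(6.0, 0.0, 9.0)                # mixed reality device <- tracking device
T_w_p = rz180(4.0 + 3.0, 0.0, 5.0 - 2.0)     # probe tracker after moving right 3, down 2
T_p_us = np.eye(4)
T_p_us[:3, 3] = [0.0, -1.0, -0.5]            # pure translation (parallel mounting)

T_ms_us = T_ms_w @ T_w_p @ T_p_us
print(np.allclose(T_ms_us[:3, :3], np.eye(3)))   # True: the two z-rotations cancel
```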
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. A mixed reality ultrasound navigation system, characterized by: having an ultrasound probe, a hybrid tracker, a tracking device, and a mixed reality device worn on the head of a doctor; the hybrid tracker is mounted on the ultrasound probe; the mixed reality device receives the three-dimensional simulation model and projects it onto the actual spatial position of the patient; the specific steps are:

(1) performing preoperative CT medical image scanning on a patient to obtain a preoperative CT image sequence, and importing it into the mixed reality device;

(2) fixing the position of the tracking device, and obtaining through the tracking device the spatial orientation of the ultrasound imaging in the tracking-device coordinate system w, specifically: starting the tracking device and tracking the orientation of the ultrasound probe through the hybrid tracker to obtain the orientation transformation matrix $T^{w}_{p}$; meanwhile, the ultrasound probe scans the registration module, and from the marker points of the registration module detected by the probe, combined with the orientation of the hybrid tracker, the transformation matrix $T^{p}_{us}$ between the ultrasound probe fitted with the hybrid tracker and the expected imaging position is calculated, whereby the transformation matrix between the tracking device and the ultrasound imaging is obtained as $T^{w}_{us} = T^{w}_{p}\,T^{p}_{us}$;

(3) with tracking active, sweeping the ultrasound probe continuously over the patient from top to bottom to obtain an intraoperative ultrasound image sequence referenced to the origin and coordinate system w of the tracking device;

(4) selecting at least three feature points each from the intraoperative ultrasound image sequence and the preoperative CT image sequence, and calculating the registration transformation matrix $T^{us}_{ct}$ from the preoperative CT sequence to the intraoperative ultrasound sequence, thereby realizing the fusion of the preoperative CT image sequence and the intraoperative ultrasound image sequence;

(5) transmitting the spatial information obtained by the tracking device to the mixed reality device; based on the coordinate transformation matrix $T^{ms}_{w}$ from the tracking device to the mixed reality device and the transformation matrix $T^{w}_{us}$ between the tracking device and the ultrasound imaging, the mixed reality device calculates the intraoperative ultrasound image orientation matrix with the mixed reality device as the coordinate center, $T^{ms}_{us} = T^{ms}_{w}\,T^{w}_{us}$, so that the wearer of the mixed reality device sees the fused image of intraoperative ultrasound and preoperative CT in real time in the real scene, with the image located at the actual position of the patient;

the registration module comprises a tracker and a series of marker points whose positions are fixed relative to the registration module's tracker, i.e. the orientations of these marker points are derived from the tracker's orientation matrix $T^{w}_{c}$;

step (2) is specifically:

A. first obtaining the spatial positions $(a_p, b_p, c_p, d_p)$ of the marker points detectable by the ultrasound probe at the expected position of the ultrasound image;

B. acquiring the positions of the corresponding detectable marker points in the original tracking position of the ultrasound image;

C. deriving the spatial positions of the detectable marker points in the original tracking position of the ultrasound image;

D. calculating the transformation matrix $T^{p}_{us}$ between the ultrasound probe fitted with the hybrid tracker and the expected imaging position.
2. The mixed reality ultrasound navigation system according to claim 1, wherein step A is specifically: denote the left point of the first row of the registration module's marker points as point a and the right point of the first row as point b; the point of the second row is c; the point of the third row is d. The four points a, b, c, d lie in one plane parallel to the $z_c y_c$ plane of the tracker on the registration module; the distance between the $z_c y_c$ plane and the plane of points a, b, c, d is h, the distance from the tracker to point a and from point a to point b is k in each case, and the spacing between the first and second rows and between the second and third rows is v. Since the orientation transformation matrix $T^{w}_{c}$ of the tracker on the registration module is known, the position of point a in the coordinate system p of the hybrid tracker follows from $T^{w}_{p}\,a_p = T^{w}_{c}\,a_c$ as $a_p = (T^{w}_{p})^{-1} T^{w}_{c}\,a_c$, where $(T^{w}_{p})^{-1}$ is the inverse matrix of $T^{w}_{p}$; the positions $b_p$, $c_p$ and $d_p$ are derived in the same way from the known offsets h, k and v;

step B is specifically: identify in the image the imaging pixel positions of the four marker points a', b', c', d' in the original tracking position of the ultrasound image, and, combining the physical length and width represented by each pixel of the ultrasound image, obtain the distances from each imaging position to the upper and left boundaries of the image, namely Va', Ha'; Vb', Hb'; Vc', Hc'; Vd', Hd';

step C is specifically: since the real-time tracking position of the ultrasound image is set at the hybrid tracker on the ultrasound probe, the image at the original tracking position lies in the $z_p y_p$ plane of the hybrid tracker with its upper-left corner at the tracker's own coordinate origin; in the coordinate system p, the spatial positions of the four marker points a', b', c', d', i.e. the distances from each point to the image edges, are: $a'_p = [0, -V_{a'}, -H_{a'}, 1]^T$; $b'_p = [0, -V_{b'}, -H_{b'}, 1]^T$; $c'_p = [0, -V_{c'}, -H_{c'}, 1]^T$; $d'_p = [0, -V_{d'}, -H_{d'}, 1]^T$;

step D is specifically: from the one-to-one correspondence between $(a'_p, b'_p, c'_p, d'_p)$ and $(a_p, b_p, c_p, d_p)$, calculate the transformation matrix $T^{p}_{us}$; the correspondence is

$$a_p = T^{p}_{us}\,a'_p,\quad b_p = T^{p}_{us}\,b'_p,\quad c_p = T^{p}_{us}\,c'_p,\quad d_p = T^{p}_{us}\,d'_p;$$

since the four marker points $(a'_p, b'_p, c'_p, d'_p)$ are carried into $(a_p, b_p, c_p, d_p)$ by a rigid change, $T^{p}_{us}$ contains only translation and rotation operations and no scaling operations.
3. The mixed reality ultrasound navigation system of claim 2, wherein in step D the transformation matrix $T^{p}_{us}$ is solved as follows:
a) first translate and rotate $(a'_p, b'_p, c'_p, d'_p)$ as a whole so that $a'_p$ coincides with $a_p$;
b) then traverse rotations about the x, y and z axes so that the combined error between the transformed $(a'_p, b'_p, c'_p, d'_p)$ and $(a_p, b_p, c_p, d_p)$ is minimal;
c) then fine-tune the translation to further reduce the combined error;
d) finally obtain $T^{p}_{us}$.
4. The mixed reality ultrasound navigation system according to claim 1, wherein step (4) is specifically:
a. select three or more marker points in the preoperative CT image sequence and obtain their preoperative spatial positions $(a_{ct}, b_{ct}, c_{ct}, \ldots)$;
b. select in the intraoperative ultrasound image sequence the points corresponding anatomically to $(a_{ct}, b_{ct}, c_{ct}, \ldots)$ and obtain their spatial positions $(a_{us}, b_{us}, c_{us}, \ldots)$;
c. first translate and rotate $(a_{ct}, b_{ct}, c_{ct}, \ldots)$ as a whole so that $a_{ct}$ coincides with $a_{us}$;
d. then traverse rotations about the x, y and z axes so that the combined error between the transformed $(a_{ct}, b_{ct}, c_{ct}, \ldots)$ and $(a_{us}, b_{us}, c_{us}, \ldots)$ is minimal;
e. then fine-tune the translation to further reduce the combined error, obtaining $T^{us}_{ct}$.
5. The mixed reality ultrasound navigation system of claim 1, wherein in step (5) the coordinate transformation matrix $T^{ms}_{w}$ of the mixed reality device and the tracking device is obtained specifically as follows: the mixed reality device and the tracking device identify and track the hybrid tracker at the same time, obtaining the orientation matrices $T^{ms}_{p}$ and $T^{w}_{p}$, from which

$$T^{ms}_{w} = T^{ms}_{p}\left(T^{w}_{p}\right)^{-1} = C,$$

where $\left(T^{w}_{p}\right)^{-1}$ is the inverse matrix of $T^{w}_{p}$, and C represents a fixed value, indicating that $T^{ms}_{w}$ does not change as the position of the object changes.
6. The mixed reality ultrasound navigation system of any of claims 1 to 5, wherein: a medical instrument fitted with an instrument tracker is tracked during the operation, and the coordinate matrix $T^{ms}_{m}$ of the medical tracker on the medical instrument relative to the mixed reality device and the operative field is calculated in the mixed reality device coordinate system ms.
7. The mixed reality ultrasound navigation system of claim 6, specifically being: based on the orientation matrix $T^{w}_{m}$ of the medical tracker on the medical instrument detected in tracking-device coordinates, the coordinate transformation matrix of the medical tracker on the medical instrument relative to the mixed reality device and the operative field is $T^{ms}_{m} = T^{ms}_{w}\,T^{w}_{m}$.
CN202010715410.XA 2020-07-23 2020-07-23 Mixed reality ultrasonic navigation system Active CN112107366B (en)

Priority Applications (1)

Application Number: CN202010715410.XA; Priority Date: 2020-07-23; Filing Date: 2020-07-23; Title: CN112107366B (en) Mixed reality ultrasonic navigation system


Publications (2)

Publication Number, Publication Date:
CN112107366A (en), 2020-12-22
CN112107366B (en), 2021-08-10

Family

ID=73799450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010715410.XA Active CN112107366B (en) 2020-07-23 2020-07-23 Mixed reality ultrasonic navigation system

Country Status (1)

Country Link
CN (1) CN112107366B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113041083A (en) * 2021-04-22 2021-06-29 江苏瑞影医疗科技有限公司 Holographic projection operation console applied to QMR technology

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102319117A (en) * 2011-06-16 2012-01-18 上海交通大学医学院附属瑞金医院 Arterial intervention implant implanting system capable of fusing real-time ultrasonic information based on magnetic navigation
CN103211655A (en) * 2013-04-11 2013-07-24 深圳先进技术研究院 Navigation system and navigation method of orthopedic operation
CN106846496A (en) * 2017-01-19 2017-06-13 杭州古珀医疗科技有限公司 DICOM images based on mixed reality technology check system and operating method
CN107536643A (en) * 2017-08-18 2018-01-05 北京航空航天大学 A kind of augmented reality operation guiding system of Healing in Anterior Cruciate Ligament Reconstruction

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE602004016993D1 (en) * 2003-06-05 2008-11-20 Philips Intellectual Property ADAPTIVE IMAGE INTERPOLATION FOR VOLUME PRESENTATION
US20150067599A1 (en) * 2013-09-05 2015-03-05 General Electric Company Smart and early workflow for quick vessel network detection
CN107340871A (en) * 2017-07-25 2017-11-10 深识全球创新科技(北京)有限公司 The devices and methods therefor and purposes of integrated gesture identification and ultrasonic wave touch feedback
CN110427102A (en) * 2019-07-09 2019-11-08 河北经贸大学 A kind of mixed reality realization system
CN110537980A (en) * 2019-09-24 2019-12-06 上海理工大学 puncture surgery navigation method based on motion capture and mixed reality technology
CN111420391A (en) * 2020-03-04 2020-07-17 青岛小鸟看看科技有限公司 Head-mounted display system and space positioning method thereof


Also Published As

Publication number Publication date
CN112107366A (en) 2020-12-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant