CN112107366B - Mixed reality ultrasonic navigation system - Google Patents
- Publication number: CN112107366B (application CN202010715410.XA)
- Authority: CN (China)
- Prior art keywords: point, tracker, mixed reality, tracking, image
- Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/102—Modelling of surgical devices, implants or prosthesis
- A61B2034/2046—Tracking techniques
- A61B2034/2063—Acoustic tracking systems, e.g. using ultrasound
Abstract
The invention relates to a mixed reality ultrasound navigation system comprising a mixed reality device, a tracking device, a hybrid tracker and ultrasound scanning, with the hybrid tracker mounted on the ultrasound probe. The mixed reality device receives a three-dimensional simulation model and projects it onto the actual spatial position of the patient. Intraoperative ultrasound and preoperative CT images are fused in real time over the patient's operative field in the real spatial scene, so the doctor's view need not leave the operative field; ultrasound provides auxiliary positioning, yielding a radiation-free mixed reality ultrasound navigation method.
Description
Technical Field
The invention relates to the technical field of medical instruments, in particular to a mixed reality ultrasonic navigation system.
Background
There are two main types of conventional surgical navigation systems. The first relies solely on preoperative CT and MRI images for intraoperative navigation. Because human tissues and organs deform elastically on contact with surgical instruments, such systems have difficulty tracking and reflecting this deformation in real time, so preoperative medical image data cannot be fully matched to the actual intraoperative situation. The second type registers and fuses preoperative CT and MRI images with real-time intraoperative medical images and then performs image-guided navigation.
In existing practice, the affected part is imaged preoperatively with CT, magnetic resonance, B-mode ultrasound, X-ray or similar medical imaging equipment, and a 2D planar image is displayed on a screen; the doctor must mentally reconstruct the spatial position and structure of the affected part from the 2D image and operate accordingly. In addition, conventional navigation systems require the doctor to look away at a computer screen during the operation to observe the navigation result, and sometimes require continuous X-ray scanning, introducing radiation dose that harms both doctor and patient.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art by providing a mixed reality ultrasound navigation system that fuses intraoperative ultrasound and preoperative CT images in real time over the patient's operative field in the real spatial scene, so that the doctor's view need not leave the operative field.
The technical scheme realizing this purpose is as follows: a mixed reality ultrasound navigation system has an ultrasound probe, a hybrid tracker, a tracking device, and a mixed reality device worn on the doctor's head; the hybrid tracker is mounted on the ultrasound probe; the mixed reality device receives a three-dimensional simulation model and projects it onto the actual spatial position of the patient. The specific steps are as follows:
(1) performing preoperative CT medical image scanning on a patient to obtain a preoperative CT image sequence;
(2) fixing the position of the tracking device, and obtaining through the tracking device the spatial orientation of the ultrasound imaging under the tracking-device coordinate system w, specifically: the tracking device is started and tracks the orientation of the ultrasound probe via the hybrid tracker, yielding the orientation transformation matrix T(w←p); meanwhile the ultrasound probe scans the registration module, and from the registration-module marker points detected by the probe, combined with the orientation of the hybrid tracker, the transformation matrix T(p←us) between the probe carrying the hybrid tracker and the desired imaging position is calculated; the transformation matrix between the tracking device and the ultrasound imaging is then T(w←us) = T(w←p)·T(p←us);
(3) continuously scanning with the tracked ultrasound probe over the patient from top to bottom to obtain an intraoperative ultrasound image sequence referenced to the origin of the tracking device and the coordinate system w;
(4) selecting at least three feature points in each of the intraoperative ultrasound image sequence and the preoperative CT image sequence, and calculating the registration transformation matrix T(us←ct) from the preoperative CT sequence to the intraoperative ultrasound sequence, thereby fusing the preoperative CT image sequence with the intraoperative ultrasound image sequence;
(5) transmitting the spatial information obtained by the tracking device to the mixed reality device; from the coordinate transformation matrix T(ms←w) between the tracking device and the mixed reality device and the transformation matrix T(w←us) between the tracking device and the ultrasound imaging, the intraoperative ultrasound image orientation matrix with the mixed reality device as coordinate center is calculated as T(ms←us) = T(ms←w)·T(w←us). The wearer of the mixed reality device thus sees a fused image of intraoperative ultrasound and preoperative CT in real time in the real scene, positioned at the patient's actual location.
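The chain of coordinate transformations in steps (2) and (5) can be sketched with homogeneous 4×4 matrices. Below, T_w_p denotes the hybrid tracker's pose in the tracking-device frame w, T_p_us the probe-to-imaging offset, and T_ms_w the tracking-device pose in the mixed-reality frame ms; the numeric poses are made-up illustrative values, not the patent's:

```python
import numpy as np

def make_T(R, t):
    """Homogeneous 4x4 transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Made-up example poses: tracker pose in tracking-device frame w, calibrated
# probe-to-imaging offset, and tracking-device pose in mixed-reality frame ms.
T_w_p = make_T(np.eye(3), [10.0, 0.0, 5.0])
T_p_us = make_T(np.eye(3), [0.0, -2.0, 0.0])
T_ms_w = make_T(np.eye(3), [-1.0, 3.0, 0.0])

# Step (2): tracking device -> ultrasound imaging
T_w_us = T_w_p @ T_p_us
# Step (5): mixed reality device -> ultrasound imaging
T_ms_us = T_ms_w @ T_w_us

# A point expressed in the ultrasound-image frame, mapped into the
# mixed-reality-device frame for display:
x_us = np.array([0.0, -4.0, -3.0, 1.0])
x_ms = T_ms_us @ x_us
print(x_ms[:3])
```

Because the transforms are homogeneous, composing them is plain matrix multiplication, and the same chain extends to any additional tracked frame.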
The registration module comprises a tracker and a series of marker points whose positions are fixed relative to the registration-module tracker; the positions of these marker points can therefore be derived from the tracker's orientation matrix T(w←c).
The step (2) in the above technical scheme is specifically:
A. first obtaining the spatial positions of the marker points detectable by the ultrasound probe at the desired position of the ultrasound image, (a_p, b_p, c_p, d_p);
B. acquiring the positions of the corresponding detectable marker points in the original tracking position of the ultrasound image;
C. deducing the spatial positions of the detectable marker points in the original tracking position of the ultrasound image;
D. calculating the transformation matrix T(p←us) between the probe carrying the hybrid tracker and the desired imaging position.
Step A is specifically as follows: denote the left point of the first row of registration-module marker points as point a and the right point of the first row as point b; the point of the second row as point c; the point of the third row as point d. The four points a, b, c, d lie in one plane parallel to the z_c y_c plane of the tracker on the registration module; the distance between the z_c y_c plane and the plane of points a, b, c, d is h, the distance from the tracker to point a and from point a to point b are both k, and the distances between the first and second rows and between the second and third rows are both v. Since the orientation transformation matrix T(w←c) of the tracker on the registration module is known, the positions of the four points in the registration-module frame follow from h, k and v, and their positions under the hybrid-tracker coordinate system p are obtained according to x_p = T(w←p)⁻¹·T(w←c)·x_c, giving (a_p, b_p, c_p, d_p), where T(w←p)⁻¹ is the inverse matrix of T(w←p);
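The frame change that carries a marker from the registration-module frame into the probe-tracker frame can be sketched as follows. The axis convention chosen below for the registration-module frame (marker plane offset h along one axis, marker and row spacings k and v along the others) and all poses are assumptions for illustration, since the patent's coordinate figures are not reproduced here:

```python
import numpy as np

# Example distances in mm (made up): plane offset h, marker spacing k, row spacing v.
h, k, v = 20.0, 15.0, 10.0

# Marker positions in the registration-module tracker frame c, under an
# assumed axis convention (x toward the marker plane, y along a row, z down rows).
a_c = np.array([h, k, 0.0, 1.0])
b_c = np.array([h, 2 * k, 0.0, 1.0])
c_c = np.array([h, k, -v, 1.0])
d_c = np.array([h, k, -2 * v, 1.0])

def make_T(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Made-up tracked poses: registration-module tracker and probe tracker in frame w.
T_w_c = make_T(np.eye(3), [100.0, 0.0, 0.0])
T_w_p = make_T(np.eye(3), [50.0, 0.0, 0.0])

# x_p = inverse(T(w<-p)) . T(w<-c) . x_c
T_p_c = np.linalg.inv(T_w_p) @ T_w_c
a_p = T_p_c @ a_c
print(a_p[:3])
```

The same product T_p_c maps b_c, c_c and d_c, yielding the full set (a_p, b_p, c_p, d_p).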
Step B is specifically as follows: the imaging pixel positions of the four marker points a′, b′, c′, d′ in the original tracking position of the ultrasound image are identified in the image; combining these with the physical length and width corresponding to each ultrasound-image pixel yields their distances to the upper and left boundaries of the image, namely Va′ and Ha′; Vb′ and Hb′; Vc′ and Hc′; Vd′ and Hd′;
the step C is specifically as follows: since the real-time tracking position of the ultrasound image is set at the hybrid tracker on the ultrasound probe, the image imaged at the original tracking position is at z of the hybrid trackerPyPOn the plane, the upper left corner is at the self coordinate origin of the hybrid tracker, and under the coordinate system p, the positions of the four marked points a ', b', c ', d' in the space, namely the distances from each point to the image edge, namely the position of the point a 'is a'p=[0,-Va′,-Ha′,1]T(ii) a The position of the point b 'is b'p=[0,-Vb′,-Hb′,1]T(ii) a The position of the point c 'is c'p=[0,-Vc′,-Hc′,1]T(ii) a The position of the point d 'is d'p=[0,-Vd′,-Hd′,1]T;
Step D is specifically as follows: the transformation matrix T(p←us) is calculated from the one-to-one correspondence between (a′_p, b′_p, c′_p, d′_p) and (a_p, b_p, c_p, d_p), i.e. a_p = T(p←us)·a′_p, and likewise for b, c and d.
Since the four marker points (a′_p, b′_p, c′_p, d′_p) are transformed into (a_p, b_p, c_p, d_p) by a rigid change, T(p←us) contains only translation and rotation operations and no scaling.
e) first, (a′_p, b′_p, c′_p, d′_p) are translated and rotated as a whole so that a′_p coincides with a_p;
f) rotations about the x, y and z axes are then traversed in turn so that the combined error between the transformed (a′_p, b′_p, c′_p, d′_p) and (a_p, b_p, c_p, d_p) is minimal;
g) the translation is then fine-tuned to further reduce the combined error.
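Steps e)-g) can be sketched as a simple search-based rigid alignment. This is a minimal illustration of the described procedure (an axis-by-axis angle sweep), not the patent's exact solver; the angular step size and the test points are assumed values:

```python
import numpy as np

def rot(axis, deg):
    """3x3 rotation matrix about a single coordinate axis, angle in degrees."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    if axis == 'x':
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == 'y':
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def align_rigid(src, dst, step=1.0):
    """Steps e)-g): superpose the first point pair by translation, sweep
    rotations about x, y, z for minimum combined error, then fine-tune the
    translation. Returns (R, t) with dst ~ (R @ src.T).T + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    # e) translate both sets so their first points coincide (at the origin)
    s0, d0 = src - src[0], dst - dst[0]
    R = np.eye(3)
    # f) axis-by-axis sweep of rotation angles, keeping the best per axis
    for axis in 'xyz':
        angles = np.arange(0.0, 360.0, step)
        errs = [np.linalg.norm((rot(axis, a) @ R @ s0.T).T - d0) for a in angles]
        R = rot(axis, angles[int(np.argmin(errs))]) @ R
    # g) fine-tune the translation: align the centroids after rotation
    t = dst.mean(axis=0) - ((R @ src.T).T).mean(axis=0)
    return R, t

# Made-up test: four non-degenerate points, rotated 40 deg about z and shifted.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 2, 0], [0, 0, 3]], float)
dst = (rot('z', 40.0) @ src.T).T + np.array([5.0, -1.0, 2.0])
R, t = align_rigid(src, dst)
print(np.max(np.abs((R @ src.T).T + t - dst)))
```

The same routine serves the feature-point registration of step (4) below; in practice a closed-form solver (e.g. SVD-based) would replace the sweep.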
Step (4) above is specifically as follows:
a. selecting more than three marker points in the preoperative CT image sequence to obtain their preoperative spatial positions (a_ct, b_ct, c_ct, …);
b. selecting in the intraoperative ultrasound image sequence the points corresponding to the same anatomical structures as (a_ct, b_ct, c_ct, …), obtaining their spatial positions (a_us, b_us, c_us, …);
c. first translating and rotating (a_ct, b_ct, c_ct, …) as a whole so that a_ct coincides with a_us;
d. then traversing rotations about the x, y and z axes in turn so that the combined error between the transformed (a_ct, b_ct, c_ct, …) and (a_us, b_us, c_us, …) is minimized.
In step (5) above, the coordinate transformation matrix T(ms←w) between the mixed reality device and the tracking device is obtained as follows: the mixed reality device and the tracking device identify and track the hybrid tracker at the same moment, yielding the orientation matrices T(ms←p) and T(w←p); then T(ms←w) = T(ms←p)·T(w←p)⁻¹, where T(w←p)⁻¹ is the inverse matrix of T(w←p). T(ms←w) = C, a fixed value, meaning it does not change as the positions of tracked objects change.
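The relation above can be sketched directly: given one simultaneous observation of the hybrid tracker from both devices, the device-to-device transform follows by a matrix inverse (all poses are made-up values):

```python
import numpy as np

def make_T(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Made-up simultaneous observations of the same hybrid tracker pose:
T_w_p = make_T(np.eye(3), [10.0, 2.0, 0.0])   # as seen by the tracking device
T_ms_p = make_T(np.eye(3), [4.0, 2.0, 1.0])   # as seen by the mixed reality device

# T(ms<-w) = T(ms<-p) . inverse(T(w<-p)) -- constant while both devices stay put.
T_ms_w = T_ms_p @ np.linalg.inv(T_w_p)
print(T_ms_w[:3, 3])
```

As a sanity check, T_ms_w composed with T_w_p reproduces T_ms_p, which is exactly the defining identity.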
This technical scheme also tracks a medical instrument fitted with an instrument tracker during the operation, and calculates the coordinate matrix T(ms←m) of the medical tracker on the medical instrument, in the operative field, under the mixed-reality-device coordinate system ms.
Specifically: from the detected orientation matrix T(w←m) of the medical tracker on the medical instrument under the tracking-device coordinates, the coordinate transformation matrix between the mixed reality device and the medical instrument in the operative field is T(ms←m) = T(ms←w)·T(w←m).
After the technical scheme is adopted, the invention has the following positive effects:
(1) Through the mixed reality device, tracking device, tracker and ultrasound scanning, the invention fuses intraoperative ultrasound and preoperative CT images in real time over the patient's operative field in the real spatial scene, so the doctor's view need not leave the operative field; ultrasound provides auxiliary positioning, realizing a radiation-free mixed reality ultrasound navigation method.
(2) The invention integrates ultrasound/preoperative-CT fusion into a three-dimensional surgical navigation system. The operator can adjust the trajectory of a medical instrument along a pre-planned three-dimensional surgical path to achieve precise surgical treatment, reducing the operative risk caused by errors in traditional experience-based positioning and improving surgical accuracy; compared with traditional surgery it is more efficient, more intuitive, safer and more functional.
Drawings
In order that the present disclosure may be more readily and clearly understood, reference is now made to the following detailed description taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a use scenario of the present invention;
FIG. 2 is a schematic diagram of coordinate transformation among the hybrid tracker, the mixed reality device and the tracking device according to the present invention;
FIG. 3 is a schematic diagram of an original tracking position of an ultrasound image and a desired position of an ultrasound image according to the present invention;
FIG. 4 is a schematic view of a scanning trajectory of an ultrasonic probe according to the present invention;
FIG. 5 is a schematic diagram illustrating registration transformation between an intraoperative ultrasound image sequence and a preoperative CT image sequence in accordance with the present invention;
FIG. 6 is a schematic view of embodiment 2 of the present invention;
FIG. 7 is a schematic spatial relationship diagram according to embodiment 3 of the present invention;
FIG. 8 is a schematic view of example 3 of the present invention;
fig. 9 is a schematic diagram of registration conversion between an intraoperative ultrasound image sequence and a preoperative CT image sequence according to embodiment 3 of the present invention.
Detailed Description
(example 1)
Referring to FIGS. 1-9, the system has an ultrasound probe, a hybrid tracker, a tracking device, and a mixed reality device worn on the doctor's head; the hybrid tracker is mounted on the ultrasound probe; the mixed reality device receives a three-dimensional simulation model and projects it onto the actual spatial position of the patient. The specific steps are as follows:
(1) performing preoperative CT medical image scanning on a patient to obtain a preoperative CT image sequence;
(2) fixing the position of the tracking device, and obtaining through the tracking device the spatial orientation of the ultrasound imaging under the tracking-device coordinate system w, specifically: the tracking device is turned on and the hybrid tracker is used to track the orientation of the ultrasound probe, yielding the orientation transformation matrix T(w←p); meanwhile the ultrasound probe scans the registration module, and from the registration-module marker points detected by the probe, combined with the orientation of the hybrid tracker, the transformation matrix T(p←us) between the probe carrying the hybrid tracker and the desired imaging position is calculated; the transformation matrix between the tracking device and the ultrasound imaging is then T(w←us) = T(w←p)·T(p←us);
(3) continuously scanning with the tracked ultrasound probe over the patient from top to bottom to obtain an intraoperative ultrasound image sequence referenced to the origin of the tracking device and the coordinate system w, see FIG. 4;
(4) selecting at least three feature points in each of the intraoperative ultrasound image sequence and the preoperative CT image sequence, and calculating the registration transformation matrix T(us←ct) from the preoperative CT sequence to the intraoperative ultrasound sequence, thereby fusing the preoperative CT image sequence with the intraoperative ultrasound image sequence, as shown in FIG. 5; since the us coordinate system coincides with the tracking device's w coordinate system, the registration transformation matrix T(us←ct) can also be written T(w←ct);
(5) See FIG. 2: the spatial information obtained by the tracking device is transmitted to the mixed reality device; from the coordinate transformation matrix T(ms←w) between the tracking device and the mixed reality device and the transformation matrix T(w←us) between the tracking device and the ultrasound imaging, the intraoperative ultrasound image orientation matrix with the mixed reality device as coordinate center is calculated as T(ms←us) = T(ms←w)·T(w←us). The wearer of the mixed reality device thus sees a fused image of intraoperative ultrasound and preoperative CT in real time in the real scene, positioned at the patient's actual location.
The coordinate origin of the mixed reality device is fixed at device start-up; movement of the device does not change the position of this origin, and the mixed reality device has SLAM spatial localization capability, so once T(ms←w) is determined its value does not change thereafter.
The registration module comprises a tracker and a series of marker points, namely the four marker points a, b, c and d, whose positions are fixed relative to the registration-module tracker; the positions of these marker points can therefore be derived from the tracker's orientation matrix T(w←c).
The step (2) is specifically as follows:
A. first obtaining the spatial positions of the marker points detectable by the ultrasound probe at the desired position of the ultrasound image, specifically: denote the left point of the first row of registration-module marker points as point a and the right point of the first row as point b; the point of the second row as point c; the point of the third row as point d. The four points a, b, c, d lie in one plane parallel to the z_c y_c plane of the tracker on the registration module; the distance between the z_c y_c plane and the plane of points a, b, c, d is h, the distance from the tracker to point a and from point a to point b are both k, and the distances between the first and second rows and between the second and third rows are both v. Since the orientation transformation matrix T(w←c) of the tracker on the registration module is known, the positions of the four points in the registration-module frame follow from h, k and v, and their positions under the hybrid-tracker coordinate system p are obtained according to x_p = T(w←p)⁻¹·T(w←c)·x_c, giving (a_p, b_p, c_p, d_p), where T(w←p)⁻¹ is the inverse matrix of T(w←p);
B. acquiring the positions of the corresponding detectable marker points in the original tracking position of the ultrasound image, specifically: the imaging pixel positions of the four marker points a′, b′, c′, d′ in the original tracking position of the ultrasound image are identified in the image; combining these with the physical length and width corresponding to each ultrasound-image pixel yields their distances to the upper and left boundaries of the image, namely Va′ and Ha′; Vb′ and Hb′; Vc′ and Hc′; Vd′ and Hd′;
C. deducing the spatial positions of the detectable marker points in the original tracking position of the ultrasound image, specifically: since the real-time tracking position of the ultrasound image is set at the hybrid tracker on the ultrasound probe, the image at the original tracking position lies in the z_p y_p plane of the hybrid tracker, with its upper-left corner at the hybrid tracker's own coordinate origin. Under coordinate system p, the spatial positions of the four marker points a′, b′, c′, d′ are given by their distances to the image edges: the position of point a′ is a′_p = [0, −Va′, −Ha′, 1]^T; the position of point b′ is b′_p = [0, −Vb′, −Hb′, 1]^T; the position of point c′ is c′_p = [0, −Vc′, −Hc′, 1]^T; the position of point d′ is d′_p = [0, −Vd′, −Hd′, 1]^T;
D. calculating the transformation matrix T(p←us) between the probe carrying the hybrid tracker and the desired imaging position, specifically: T(p←us) is calculated from the one-to-one correspondence between (a′_p, b′_p, c′_p, d′_p) and (a_p, b_p, c_p, d_p), i.e. a_p = T(p←us)·a′_p, and likewise for b, c and d.
Since the four marker points (a′_p, b′_p, c′_p, d′_p) are transformed into (a_p, b_p, c_p, d_p) by a rigid change, T(p←us) contains only translation and rotation operations and no scaling.
The transformation matrix T(p←us) is solved as follows: first, (a′_p, b′_p, c′_p, d′_p) are translated and rotated as a whole so that a′_p coincides with a_p; rotations about the x, y and z axes are then traversed in turn so that the combined error between the transformed (a′_p, b′_p, c′_p, d′_p) and (a_p, b_p, c_p, d_p) is minimal; the translation is then fine-tuned to further reduce the combined error, finally yielding T(p←us).
The step (4) is specifically as follows:
a. selecting more than three marker points in the preoperative CT image sequence to obtain their preoperative spatial positions (a_ct, b_ct, c_ct, …);
b. selecting in the intraoperative ultrasound image sequence the points corresponding to the same anatomical structures as (a_ct, b_ct, c_ct, …), obtaining their spatial positions (a_us, b_us, c_us, …);
c. first translating and rotating (a_ct, b_ct, c_ct, …) as a whole so that a_ct coincides with a_us;
d. then traversing rotations about the x, y and z axes in turn so that the combined error between the transformed (a_ct, b_ct, c_ct, …) and (a_us, b_us, c_us, …) is minimized.
In step (5), referring to FIG. 2, the coordinate transformation matrix T(ms←w) between the mixed reality device and the tracking device is obtained as follows: the mixed reality device and the tracking device identify and track the hybrid tracker at the same moment, yielding the orientation matrices T(ms←p) and T(w←p); since T(ms←w)·T(w←p) = T(ms←p), it follows that T(ms←w) = T(ms←p)·T(w←p)⁻¹, where T(w←p)⁻¹ is the inverse matrix of T(w←p). T(ms←w) = C, a fixed value, meaning it does not change as the positions of tracked objects change.
(example 2)
Referring to FIG. 6, using the method of the mixed reality ultrasound navigation system of embodiment 1, a medical instrument fitted with an instrument tracker is tracked during the operation, and the coordinate matrix T(ms←m) of the medical tracker on the medical instrument, in the operative field, is calculated under the mixed-reality-device coordinate system ms. Specifically: the orientation matrix T(w←m) of the medical tracker on the medical instrument is detected under the tracking-device coordinates; since T(ms←w)·T(w←m) = T(ms←m), the coordinate transformation matrix T(ms←m) between the mixed reality device and the medical instrument in the operative field is obtained.
If subsequent calculation is needed for the medical instrument, e.g. when the distance from the tip of the medical instrument to the center of its tracker is l, the orientation matrix of the tip is obtained by offsetting T(ms←m) by l along the instrument axis.
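Under the assumption that the instrument axis coincides with the z-axis of the instrument-tracker frame m (an illustrative choice; the patent does not fix the axis here), the tip position can be sketched as:

```python
import numpy as np

def make_T(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Made-up instrument pose in the mixed-reality frame ms.
T_ms_m = make_T(np.eye(3), [30.0, -5.0, 12.0])
l = 150.0  # tip-to-tracker-center distance in mm (example value)

# Tip position: offset l along the assumed instrument axis, expressed in frame m.
tip_ms = T_ms_m @ np.array([0.0, 0.0, l, 1.0])
print(tip_ms[:3])
```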
(example 3)
For ease of understanding, the mixed reality device, the tracking device and the tracked object are placed in a simple spatial arrangement.
An ultrasound probe with a hybrid tracker is placed in the position shown in FIG. 7, where the z_w x_w plane, the z_ms x_ms plane and the z_p x_p plane completely overlap; the transformation matrices then take the simple forms shown in FIG. 7.
the hybrid tracker is mounted exactly parallel to the imaging plane of the ultrasound probe, soOnly comprises a translation operation, and z is registered during scanningpxpPlane and zcxcCompletely overlapping.
FIG. 8 is a schematic diagram whose spatial relationship is based on FIG. 7.
At this point the relevant orientation matrices can be read off directly from the geometry of FIG. 8.
Meanwhile, the spatial position of point a′ under the coordinate system p is evidently a′_p = [0, −Va′, −Ha′, 1]^T.
Because the placement is ideal, T(p←us) can be calculated simply by translating point a onto point a′; this already achieves the minimum-error transformation, and no further rotation is needed to find the optimum.
Since Ha′ − k − L3 is almost equal to 0, we thus have the spatial orientation of the ultrasound imaging under the tracking-device coordinate system w as obtained by the tracking device, namely T(w←us) = T(w←p)·T(p←us).
For example, given the current orientation T(w←p) of the ultrasound probe, T(w←us) follows from the product above.
Then the ultrasound probe scans continuously over the patient from top to bottom to obtain an ultrasound image sequence; abstractly, the ultrasound sequence and the CT sequence can each be viewed as a cuboid with spatial position attributes, as shown in FIG. 9.
It is assumed here that the patient lies intraoperatively with the head toward the tracking device, i.e. the foot-to-head direction coincides with z_w; the three feature points selected in the intraoperative ultrasound sequence lie in the z_w x_w plane, and the patient's front-to-back direction coincides with y_w. That is, the intraoperative ultrasound image sequence in FIG. 9 shows the patient lying on his side.
The three preoperative feature points lie in the z_CT x_CT plane at y_CT = L. Call the three preoperative points a_CT, b_CT, c_CT and the corresponding intraoperative points a_w, b_w, c_w. So a_CT = [x_cta, L, z_cta, 1]^T, b_CT = [x_ctb, L, z_ctb, 1]^T, c_CT = [x_ctc, L, z_ctc, 1]^T; a_w = [x_wa, 0, z_wa, 1]^T, b_w = [x_wb, 0, z_wb, 1]^T, c_w = [x_wc, 0, z_wc, 1]^T. The position information of these points can be acquired.
Since the patient orientation is consistent between preoperative CT and intraoperative ultrasound, the registration matrix T(us←ct) can be calculated simply by translating a_CT onto a_w; this already achieves the minimum-error transformation, and no further rotation is needed to find the optimum.
As a test, we simulate moving the intraoperative red point by 5 in the y_CT direction (a deviation toward the patient's back), i.e. the new point position is newA_CT = [x_cta, L+5, z_cta, 1]^T. From the registration transformation matrix we obtain its intraoperative position, which is likewise offset by 5 toward the patient's back, consistent with the preoperative direction and therefore correct.
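With made-up coordinates, this consistency check can be sketched as follows (a translation-only registration matrix is assumed, matching the ideal placement described above; the offset L and point coordinates are illustrative values):

```python
import numpy as np

# Assumed translation-only registration T(us<-ct): it cancels the CT stack's
# y offset L and leaves x and z unchanged (illustrative values).
L = 50.0
T_us_ct = np.eye(4)
T_us_ct[1, 3] = -L

a_ct = np.array([12.0, L, 7.0, 1.0])
new_a_ct = a_ct + np.array([0.0, 5.0, 0.0, 0.0])  # shift 5 toward the back

a_w = T_us_ct @ a_ct
new_a_w = T_us_ct @ new_a_ct
print(new_a_w - a_w)
```

The mapped point moves by exactly the same 5-unit offset, confirming that a rigid registration preserves relative displacements.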
After this registration process is completed, preoperative CT and intraoperative ultrasound are fused on the basis of the coordinate system w.
Finally, the spatial information obtained by the tracking device is transmitted to the mixed reality device, which uses the coordinate transformation matrix T(ms←w) between the tracking device and itself to calculate the actual position of each object as seen from the mixed reality device, so that the virtual object can be superimposed at the correct position in real space.
Taking the earlier FIG. 7 as an example, suppose the ultrasound probe is moved rightward by 3 and downward by 2. We then calculate where in space the real-time ultrasound image should be placed from the mixed reality device's perspective.
The resulting T(ms←us) shows that the intraoperative ultrasound image is placed correctly as seen from the mixed reality device: offset n − h − q − 2 along its x-axis and L1 − L2 − 3 along its z-axis, without any rotation.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (7)
1. A mixed reality ultrasound navigation system, characterized by: having an ultrasound probe, a hybrid tracker, a tracking device, and a mixed reality device worn on the doctor's head; the hybrid tracker is mounted on the ultrasound probe; the mixed reality device receives a three-dimensional simulation model and projects it onto the actual spatial position of the patient; the specific steps are as follows:
(1) performing preoperative CT medical image scanning on the patient to obtain a preoperative CT image sequence, and importing it into the mixed reality device;
(2) fixing the position of the tracking device, and obtaining through the tracking device the spatial orientation of the ultrasound imaging under the tracking-device coordinate system w, specifically: the tracking device is started and tracks the orientation of the ultrasound probe via the hybrid tracker, yielding the orientation transformation matrix T(w←p); meanwhile the ultrasound probe scans the registration module, and from the registration-module marker points detected by the probe, combined with the orientation of the hybrid tracker, the transformation matrix T(p←us) between the probe carrying the hybrid tracker and the desired imaging position is calculated; the transformation matrix between the tracking device and the ultrasound imaging is then T(w←us) = T(w←p)·T(p←us);
(3) continuously scanning the patient with the ultrasound probe in the tracking state, from top to bottom, to obtain an intraoperative ultrasound image sequence based on the origin and coordinate system w of the tracking device;
(4) selecting at least three characteristic points from the intraoperative ultrasound image sequence and the preoperative CT image sequence respectively, and calculating the registration conversion matrix T_ct^us from the preoperative CT sequence to the intraoperative ultrasound sequence, thereby realizing the fusion of the preoperative CT image sequence and the intraoperative ultrasound image sequence;
(5) transmitting the spatial information obtained by the tracking device to the mixed reality device; based on the coordinate transformation matrix T_w^ms from the tracking device to the mixed reality device and the transformation matrix T_i^w between the tracking device and the ultrasound imaging, the mixed reality device calculates the intraoperative ultrasound image orientation matrix T_i^ms with the mixed reality device as the coordinate center, wherein T_i^ms = T_w^ms · T_i^w, so that the wearer of the mixed reality device sees the fusion image of intraoperative ultrasound and preoperative CT in real time in the real scene, with the image located at the actual position of the patient;
the registration module comprises a tracker and a series of mark points; the positions of the mark points are fixed relative to the tracker of the registration module, so that the orientations of the mark points can be deduced from the position matrix T_c^w of that tracker;
the step (2) is specifically as follows:
A. firstly, obtaining the spatial positions of the detectable mark points of the ultrasound probe at the expected position of the ultrasound image;
B. acquiring the pixel positions of the corresponding detectable mark points in the original tracking position of the ultrasound image;
C. deducing the spatial positions of the detectable mark points in the original tracking position of the ultrasound image;
D. calculating the conversion matrix T_i^p from the one-to-one correspondence between the two sets of points.
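The transform chain of steps (2) and (5) can be sketched with homogeneous 4x4 matrices; this is an illustrative sketch only, and every pose value below is a made-up stand-in for a real tracker reading:

```python
import numpy as np

def rigid_transform(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses: the hybrid tracker p in the tracking-device frame w,
# the expected imaging position i relative to the tracker, and the tracking
# device in the mixed-reality frame ms.
T_p_w = rigid_transform(np.eye(3), [0.10, 0.00, 0.50])
T_i_p = rigid_transform(np.eye(3), [0.00, -0.02, 0.00])
T_w_ms = rigid_transform(np.eye(3), [-1.00, 0.20, 0.00])

T_i_w = T_p_w @ T_i_p    # step (2): tracking device -> ultrasound imaging
T_i_ms = T_w_ms @ T_i_w  # step (5): imaging pose centered on the MR device

# A point expressed in the image frame maps straight into MR-device coordinates:
point_image = np.array([0.0, -0.03, -0.04, 1.0])
point_ms = T_i_ms @ point_image
```

With identity rotations the chain reduces to summed translations, which makes the composition order T_w_ms · T_p_w · T_i_p easy to verify by hand.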
2. The mixed reality ultrasound navigation system according to claim 1, wherein the step A is specifically: recording the left point of the first row of the mark points of the registration module as point a and the right point of the first row as point b; the second row has point c; the third row has point d; the four points a, b, c, d lie in one plane parallel to the z_c y_c plane of the tracker on the registration module, the distance between the z_c y_c plane and the plane of the points a, b, c, d being h, the distance between the tracker and point a and the distance between point a and point b both being k, and the distance between the first and second rows and between the second and third rows both being v; since the orientation transformation matrix T_c^w of the tracker on the registration module is known, the positions a_c, b_c, c_c, d_c of the four points in the tracker coordinate system are determined by h, k and v, and their positions under the coordinate system p of the hybrid tracker are accordingly obtained as a_p = (T_p^w)^{-1} · T_c^w · a_c, b_p = (T_p^w)^{-1} · T_c^w · b_c, c_p = (T_p^w)^{-1} · T_c^w · c_c and d_p = (T_p^w)^{-1} · T_c^w · d_c, wherein (T_p^w)^{-1} is the inverse matrix of T_p^w;
the step B is specifically as follows: identifying, by image processing, the imaged pixel positions of the four mark points a', b', c', d' in the original tracking position of the ultrasound image, and, combining the physical length and width corresponding to each pixel of the ultrasound image, obtaining the distances from each imaged point to the upper and left boundaries of the image, namely Va', Ha'; Vb', Hb'; Vc', Hc'; Vd', Hd';
the step C is specifically as follows: since the real-time tracking position of the ultrasound image is set at the hybrid tracker on the ultrasound probe, the image imaged at the original tracking position is at z of the hybrid trackerPyPOn the plane, with its top left corner at the origin of the mixture tracker's own coordinates, under the coordinate system p, the positions of the four marker points a ', b ', c ', d ' in space, i.e. the distances of the points to the edges of the image, i.e. the position a ' of point a 'p=[0,-Va′,-Ha′,1]T(ii) a The position of the point b 'is b'p=[0,-Vb′,-Hb′,1]T(ii) a Position c 'of point c'p=[0,-Vc′,-Hc′,1]T(ii) a Position d 'of point d'p=[0,-Vd′,-Hd′,1]T;
the step D is specifically as follows: calculating the conversion matrix T_i^p from the one-to-one correspondence between (a'_p, b'_p, c'_p, d'_p) and (a_p, b_p, c_p, d_p), the correspondence being: a_p = T_i^p · a'_p; b_p = T_i^p · b'_p; c_p = T_i^p · c'_p; d_p = T_i^p · d'_p.
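Steps B and C amount to scaling pixel indices into physical distances and placing each marker on the tracker's z-y plane. A minimal sketch, in which the pixel pitch and the detected pixel coordinates are invented values:

```python
import numpy as np

def marker_position_in_p(row, col, mm_per_px_v, mm_per_px_h):
    """Step C convention: the image lies on the z-y plane of the hybrid
    tracker with its top-left corner at the tracker origin, so a marker at
    pixel (row, col) sits at [0, -V, -H, 1]^T, where V and H are its
    physical distances to the upper and left image borders."""
    V = row * mm_per_px_v  # distance to the upper border
    H = col * mm_per_px_h  # distance to the left border
    return np.array([0.0, -V, -H, 1.0])

# Hypothetical detection: marker a' found at pixel (120, 80), 0.1 mm per pixel
a_prime_p = marker_position_in_p(120, 80, 0.1, 0.1)  # [0, -12.0, -8.0, 1]
```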
3. The mixed reality ultrasound navigation system of claim 2, wherein in step D the conversion matrix T_i^p is solved as follows:
a) firstly, moving (a'_p, b'_p, c'_p, d'_p) as a whole by translation and rotation operations so that point a'_p coincides with point a_p;
b) then traversing rotations about the x, y and z axes respectively, so that the combined error between the transformed (a'_p, b'_p, c'_p, d'_p) and (a_p, b_p, c_p, d_p) is minimized;
c) then fine-tuning the translation position to further reduce the combined error.
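Steps a) and b) describe a coarse brute-force search over rotations after superposing the first point pair. A simplified sketch of those two steps, assuming a uniform angular grid (the 30-degree step is an arbitrary choice, and the fine-tuning of step c) is omitted):

```python
import numpy as np
from itertools import product

def rotation_xyz(rx, ry, rz):
    """Rotation matrix composed from rotations (radians) about x, then y, then z."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def align(src, dst, step_deg=30):
    """a) translate src so its first point coincides with dst's first point;
    b) traverse x/y/z rotations about that point, keeping the rotation whose
    summed point-to-point error is smallest."""
    src = src + (dst[0] - src[0])  # superpose the first points
    pivot = dst[0]
    angles = np.deg2rad(np.arange(0, 360, step_deg))
    best_err, best_R = np.inf, np.eye(3)
    for rx, ry, rz in product(angles, repeat=3):
        R = rotation_xyz(rx, ry, rz)
        moved = (src - pivot) @ R.T + pivot   # rotate about the superposed point
        err = np.linalg.norm(moved - dst, axis=1).sum()
        if err < best_err:
            best_err, best_R = err, R
    return best_R, best_err

# Synthetic check: dst is src rotated 30 degrees about z around its first point
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
dst = (src - src[0]) @ rotation_xyz(0, 0, np.deg2rad(30)).T + src[0]
R, err = align(src, dst)
```

In practice the exhaustive grid would be refined iteratively (coarse-to-fine), since a single fine grid over three axes grows cubically in the number of angle steps.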
4. The mixed reality ultrasound navigation system according to claim 1, wherein the step (4) is specifically:
a. selecting more than three mark points in the preoperative CT image sequence to obtain their preoperative spatial positions (a_ct, b_ct, c_ct, ...);
b. selecting, in the intraoperative ultrasound image sequence, the points corresponding to the same anatomical structures as (a_ct, b_ct, c_ct, ...), and obtaining their spatial positions (a_us, b_us, c_us, ...);
c. firstly, moving (a_ct, b_ct, c_ct, ...) as a whole by translation and rotation operations so that point a_ct coincides with point a_us;
d. then traversing rotations about the x, y and z axes respectively, so that the combined error between the transformed (a_ct, b_ct, c_ct, ...) and (a_us, b_us, c_us, ...) is minimized;
5. The mixed reality ultrasound navigation system of claim 1, wherein in step (5) the coordinate transformation matrix T_w^ms between the mixed reality device and the tracking device is obtained specifically as follows: the mixed reality device and the tracking device identify and track the hybrid tracker at the same time, obtaining the position matrices T_p^ms and T_p^w respectively; the calculation then yields T_w^ms = T_p^ms · (T_p^w)^{-1} = C, wherein (T_p^w)^{-1} is the inverse matrix of T_p^w, and C is a fixed value, indicating that it does not change with changes in the position of the tracked object.
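The claim's assertion that the product is a constant C can be checked numerically: if a fixed offset relates the two device frames, T_p^ms · (T_p^w)^{-1} recovers that same offset from any tracker pose. All transforms below are fabricated for illustration:

```python
import numpy as np

def rigid_z(deg, t):
    """4x4 transform: rotation about z by `deg` degrees plus translation t."""
    a = np.deg2rad(deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:3, 3] = t
    return T

C = rigid_z(40, [0.5, -0.2, 1.0])  # fixed (unknown) tracking-device -> MR-device offset

recovered = []
# Two different hybrid-tracker poses observed simultaneously by both devices:
for T_p_w in (rigid_z(10, [0.1, 0.2, 0.3]), rigid_z(-75, [1.0, 0.0, 0.4])):
    T_p_ms = C @ T_p_w                               # what the MR device would measure
    recovered.append(T_p_ms @ np.linalg.inv(T_p_w))  # claim 5 formula

# Both recovered matrices equal C, independent of the tracker's pose.
```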
6. The mixed reality ultrasound navigation system of any one of claims 1 to 5, wherein: a medical instrument equipped with an instrument tracker is tracked during the operation, and the coordinate matrix of the instrument tracker on the medical instrument, and hence of the operation area, under the mixed reality device coordinate system ms is calculated and obtained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010715410.XA CN112107366B (en) | 2020-07-23 | 2020-07-23 | Mixed reality ultrasonic navigation system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112107366A CN112107366A (en) | 2020-12-22 |
CN112107366B true CN112107366B (en) | 2021-08-10 |
Family
ID=73799450
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010715410.XA Active CN112107366B (en) | 2020-07-23 | 2020-07-23 | Mixed reality ultrasonic navigation system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112107366B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113041083A (en) * | 2021-04-22 | 2021-06-29 | 江苏瑞影医疗科技有限公司 | Holographic projection operation console applied to QMR technology |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102319117A (en) * | 2011-06-16 | 2012-01-18 | 上海交通大学医学院附属瑞金医院 | Arterial intervention implant implanting system capable of fusing real-time ultrasonic information based on magnetic navigation |
CN103211655A (en) * | 2013-04-11 | 2013-07-24 | 深圳先进技术研究院 | Navigation system and navigation method of orthopedic operation |
CN106846496A (en) * | 2017-01-19 | 2017-06-13 | 杭州古珀医疗科技有限公司 | DICOM images based on mixed reality technology check system and operating method |
CN107536643A (en) * | 2017-08-18 | 2018-01-05 | 北京航空航天大学 | A kind of augmented reality operation guiding system of Healing in Anterior Cruciate Ligament Reconstruction |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE602004016993D1 (en) * | 2003-06-05 | 2008-11-20 | Philips Intellectual Property | ADAPTIVE IMAGE INTERPOLATION FOR VOLUME PRESENTATION |
US20150067599A1 (en) * | 2013-09-05 | 2015-03-05 | General Electric Company | Smart and early workflow for quick vessel network detection |
CN107340871A (en) * | 2017-07-25 | 2017-11-10 | 深识全球创新科技(北京)有限公司 | The devices and methods therefor and purposes of integrated gesture identification and ultrasonic wave touch feedback |
CN110427102A (en) * | 2019-07-09 | 2019-11-08 | 河北经贸大学 | A kind of mixed reality realization system |
CN110537980A (en) * | 2019-09-24 | 2019-12-06 | 上海理工大学 | puncture surgery navigation method based on motion capture and mixed reality technology |
CN111420391A (en) * | 2020-03-04 | 2020-07-17 | 青岛小鸟看看科技有限公司 | Head-mounted display system and space positioning method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107468350B (en) | Special calibrator for three-dimensional image, operation positioning system and positioning method | |
JP4822634B2 (en) | A method for obtaining coordinate transformation for guidance of an object | |
CN103040525B (en) | A kind of multimode medical image operation piloting method and system | |
WO2021217713A1 (en) | Surgical navigation system, computer for performing surgical navigation method, and storage medium | |
US6996430B1 (en) | Method and system for displaying cross-sectional images of a body | |
JP2966089B2 (en) | Interactive device for local surgery inside heterogeneous tissue | |
US10912537B2 (en) | Image registration and guidance using concurrent X-plane imaging | |
CN110537961B (en) | Minimally invasive intervention guiding system and method for CT and ultrasonic image fusion | |
US8781186B2 (en) | System and method for abdominal surface matching using pseudo-features | |
US6546279B1 (en) | Computer controlled guidance of a biopsy needle | |
US11759272B2 (en) | System and method for registration between coordinate systems and navigation | |
US6782287B2 (en) | Method and apparatus for tracking a medical instrument based on image registration | |
Navab et al. | Merging visible and invisible: Two camera-augmented mobile C-arm (CAMC) applications | |
BR112020022649A2 (en) | live 3d holographic navigation guidance system to perform intervention procedure | |
Nakamoto et al. | Intraoperative magnetic tracker calibration using a magneto-optic hybrid tracker for 3-D ultrasound-based navigation in laparoscopic surgery | |
US20080123910A1 (en) | Method and system for providing accuracy evaluation of image guided surgery | |
WO2007135609A2 (en) | Coordinate system registration | |
Zeng et al. | A surgical robot with augmented reality visualization for stereoelectroencephalography electrode implantation | |
WO2008035271A2 (en) | Device for registering a 3d model | |
US20200222122A1 (en) | System and Method for Registration Between Coordinate Systems and Navigation | |
Mirota et al. | High-accuracy 3D image-based registration of endoscopic video to C-arm cone-beam CT for image-guided skull base surgery | |
CN112107366B (en) | Mixed reality ultrasonic navigation system | |
CN113229937A (en) | Method and system for realizing surgical navigation by using real-time structured light technology | |
Uddin et al. | Three-dimensional computer-aided endoscopic sinus surgery | |
Rousseau et al. | A frameless method for 3D MRI-and CT guided stereotaxic localisation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |