WO2020210967A1 - Optical tracking system and training system for medical instruments - Google Patents


Info

Publication number
WO2020210967A1
Authority: WO (WIPO/PCT)
Prior art keywords: medical, surgical, optical, medical appliance, presentation
Application number: PCT/CN2019/082803
Other languages: French (fr), Chinese (zh)
Inventors: 孙永年, 周一鸣, 朱敏慈, 沈庭立, 邱昌逸, 蔡博翔
Original Assignee: 孙永年
Application filed by 孙永年
Priority to PCT/CN2019/082803
Publication of WO2020210967A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/12: Arrangements for detecting or locating foreign bodies
    • A61B 90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B 1/00 - A61B 50/00, e.g. for luxation treatment or for protecting wound edges

Definitions

  • the invention relates to an optical tracking system and a training system, in particular to an optical tracking system and a training system for medical appliances.
  • the object of the present invention is to provide an optical tracking system and training system for medical devices, which can assist or train users to operate medical devices.
  • An optical tracking system for medical appliances which includes a plurality of optical markers, a plurality of optical sensors, and a computer device.
  • the optical markers are arranged on the medical appliance.
  • the optical sensors optically sense the optical markers to respectively generate a plurality of sensing signals.
  • the computer device is coupled to the optical sensors to receive the sensing signals, holds a three-dimensional model of the surgical situation, and adjusts the relative position between the medical appliance presentation and the surgical target presentation in the three-dimensional model of the surgical situation according to the sensing signals.
  • the computer device and the optical sensor perform a pre-operation procedure, and the pre-operation procedure includes: calibrating the coordinate system of the optical sensor; and adjusting the scaling ratio of the medical appliance and the surgical target object.
  • the computer device and the optical sensor perform a coordinate calibration program
  • the calibration program includes an initial calibration step, an optimization step, and a correction step.
  • the initial calibration step is to perform an initial calibration between the coordinate system of the optical sensors and the coordinate system of the three-dimensional model of the surgical situation to obtain initial conversion parameters.
  • the optimization step is to optimize the degrees of freedom of the initial conversion parameters to obtain the optimized conversion parameters.
  • the correction step is to correct the setting error caused by the optical marker in the optimized conversion parameter.
  • the initial calibration step is to use singular value decomposition (SVD), triangular coordinate registration, or linear least-square estimation.
  • the initial calibration step is to use singular value decomposition to find the transformation matrix between the feature points of the medical appliance and the optical sensor as the initial transformation parameter.
  • the transformation matrix includes, for example, a covariance matrix and a rotation matrix.
  • the optimization step is to obtain multiple Euler angles with multiple degrees of freedom from the rotation matrix, and use Gauss-Newton method to iteratively optimize the parameters of the multiple degrees of freedom to obtain optimized conversion parameters.
  • the computer device sets the positions of the medical appliance presentation and the surgical target presentation in the three-dimensional model of the surgical situation according to the optimized conversion parameters and the sensing signal.
  • the correcting step is to use the reverse conversion and sensing signals to correct the positions of the medical appliance presentation and the surgical target presentation in the three-dimensional model of the surgical situation.
  • the computer device outputs display data, and the display data is used to present the 3D images of the medical appliance presentation and the surgical target presentation.
  • the computer device generates the medical image according to the three-dimensional model of the surgical situation and the medical image model.
  • the surgical target object is an artificial limb
  • the medical image is an artificial medical image for the surgical target object.
  • the computer device deduces the position of the medical appliance inside and outside the surgical target object, and adjusts the relative position between the medical appliance presentation and the surgical target presentation in the three-dimensional model of the surgical situation accordingly.
  • a training system for operating medical appliances includes medical appliances and the aforementioned optical tracking system for medical appliances.
  • the medical appliance includes a medical probe and a surgical appliance
  • the medical appliance presentation includes a medical probe presentation and a surgical appliance presentation.
  • the computer device scores the detection objects found with the medical probe presentation and the operation of the surgical appliance presentation.
  • a method for calibrating an optical tracking system for medical appliances includes a sensing step, an initial calibration step, an optimization step, and a correction step.
  • the sensing step uses a plurality of optical sensors of the optical tracking system to optically sense a plurality of optical markers of the optical tracking system arranged on the medical appliance, so as to generate a plurality of sensing signals;
  • the initial calibration step performs, according to the sensing signals, the initial calibration between the coordinate system of the optical sensors and the coordinate system of the three-dimensional model of the surgical situation to obtain the initial conversion parameters;
  • the optimization step is to optimize the degrees of freedom of the initial conversion parameters to obtain the optimized conversion parameters;
  • the correction step is to correct, in the optimized conversion parameters, the setting error caused by the optical markers.
  • the calibration method further includes a pre-operation procedure, which includes calibrating the coordinate system of the optical sensors and adjusting the scaling ratio for the medical appliance and the surgical target object.
  • the initial calibration step is to use singular value decomposition (SVD), triangular coordinate registration, or linear least-square estimation.
  • the initial calibration step is to use singular value decomposition to find the transformation matrix between the feature points of the medical appliance presentation of the three-dimensional model of the surgical situation and those from the optical sensors, and the covariance matrix and the rotation matrix are used as the initial transformation parameters.
  • the optimization step is to obtain multiple Euler angles with multiple degrees of freedom from the rotation matrix, and use the Gauss-Newton method to iteratively optimize the parameters of the multiple degrees of freedom to obtain the optimized conversion parameters.
  • the positions of the medical appliance presentation and the surgical target presentation in the three-dimensional model of the surgical situation are set according to the optimized conversion parameters and the sensing signal.
  • the correction step is to use the reverse conversion and sensing signals to correct the positions of the medical appliance presentation and the surgical target presentation in the three-dimensional model of the surgical situation.
  • the optical tracking system of the present disclosure can assist or train users to operate medical appliances, and the training system of the present disclosure can provide a realistic surgical training environment for the trainees to effectively assist the trainees in completing surgical training.
  • FIG. 1A is a block diagram of the optical tracking system of the embodiment.
  • FIGS. 1B and 1C are schematic diagrams of the optical tracking system of the embodiment.
  • Fig. 1D is a schematic diagram of a three-dimensional model of the surgical situation of the embodiment.
  • Fig. 2 is a flow chart of the pre-operation procedure of the optical tracking system of the embodiment.
  • FIG. 3A is a flowchart of the coordinate correction program of the optical tracking system of the embodiment.
  • Fig. 3B is a schematic diagram of the coordinate system correction of the embodiment.
  • Fig. 3C is a schematic diagram of the degrees of freedom of the embodiment.
  • Fig. 4 is a block diagram of the training system for medical appliance operation according to the embodiment.
  • Fig. 5A is a schematic diagram of a three-dimensional model of the operation situation of the embodiment.
  • FIG. 5B is a schematic diagram of a three-dimensional model of an entity medical image according to an embodiment.
  • FIG. 5C is a schematic diagram of the three-dimensional model of the artificial medical image of the embodiment.
  • FIGS. 6A to 6D are schematic diagrams of the direction vector of the medical appliance of the embodiment.
  • FIGS. 7A to 7D are schematic diagrams of the training process of the training system of the embodiment.
  • Fig. 8A is a schematic diagram of the finger structure of the embodiment.
  • FIG. 8B is a schematic diagram of applying principal component analysis on bones from computed tomography images in this embodiment.
  • Fig. 8C is a schematic diagram of applying principal component analysis on the skin from a computed tomography image in an embodiment.
  • Fig. 8D is a schematic diagram of calculating the distance between the bone spindle and the medical appliance according to the embodiment.
  • Fig. 8E is a schematic diagram of the artificial medical image of the embodiment.
  • Fig. 9A is a block diagram for generating artificial medical images according to an embodiment.
  • Fig. 9B is a schematic diagram of the artificial medical image of the embodiment.
  • FIGS. 10A and 10B are schematic diagrams of the artificial hand model and the correction of the ultrasonic volume of the embodiment.
  • Fig. 10C is a schematic diagram of ultrasonic volume and collision detection of the embodiment.
  • Fig. 10D is a schematic diagram of an artificial ultrasound image of the embodiment.
  • FIG. 1A is a block diagram of the optical tracking system of the embodiment.
  • the optical tracking system 1 for medical appliances includes a plurality of optical markers 11, a plurality of optical sensors 12, and a computer device 13.
  • the optical markers 11 are arranged on one or more medical appliances, and here are a plurality of medical appliances 21-24
  • the optical marker 11 can also be set on the surgical target object 3.
  • the medical appliances 21 to 24 and the surgical target object 3 are placed on the platform 4, and the optical sensors 12 optically sense the optical markers 11 to generate multiple sensing signals.
  • the computer device 13 is coupled to the optical sensors 12 to receive the sensing signals, holds a three-dimensional model 14 of the surgical context, and adjusts the relative positions between the medical appliance presentations 141 to 144 and the surgical target presentation 145 in the three-dimensional model 14 of the surgical context according to the sensing signals.
  • the medical appliance presentation objects 141 to 144 and the surgery target presentation object 145 are shown in FIG. 1D, and represent the medical appliances 21 to 24 and the surgery target object 3 in the three-dimensional model 14 of the surgery situation.
  • the three-dimensional model 14 of the surgical situation can obtain the current positions of the medical appliances 21-24 and the surgical target object 3 and reflect the medical appliance presentation and the surgical target presentation accordingly.
  • FIG. 1B is a schematic diagram of the optical tracking system of the embodiment. Four optical sensors 121 to 124 are installed on the ceiling, facing the optical markers 11, the medical appliances 21 to 24, and the surgical target object 3 on the platform 4.
  • the medical appliance 21 is a medical probe, such as a probe for ultrasonic imaging detection or another device that can detect the inside of the surgical target object 3. Such devices are actually used clinically; the probe for ultrasonic imaging detection is, for example, an ultrasonic transducer.
  • the medical appliances 22 to 24 are surgical appliances, such as needles, scalpels, hooks, etc., which are actually used clinically. If used for surgical training, the medical probe and the surgical instruments can each be either a device actually used in clinical practice or a simulated device that imitates the clinical one.
  • Figure 1C is a schematic diagram of the optical tracking system of the embodiment.
  • the medical appliances 21 to 24 and the surgical target 3 on the platform 4 are used for surgical training, such as minimally invasive finger surgery, which can be used for trigger finger treatment surgery.
  • the material of the clamps of the platform 4 and the medical appliances 21-24 can be wood.
  • the medical appliance 21 is a realistic ultrasonic transducer (or probe), and the medical appliances 22 to 24 include a plurality of surgical instruments, such as a dilator, a needle, and a hook blade.
  • the surgical target 3 is a hand phantom.
  • Three or four optical markers 11 are installed on each medical appliance 21-24, and three or four optical markers 11 are also installed on the surgical target object 3.
  • the computer device 13 is connected to the optical sensor 12 to track the position of the optical marker 11 in real time.
  • there are 17 optical markers 11 in total: 4 are attached on or around the surgical target object 3, and 13 are on the medical appliances 21 to 24.
  • the optical sensor 12 continuously transmits real-time information to the computer device 13.
  • the computer device 13 also uses a movement-judgment function to reduce the computational burden: if the moving distance of an optical marker 11 is less than a threshold value, the position of that optical marker 11 is not updated.
  • the threshold value is, for example, 0.7 mm.
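The movement-judgment function described above can be sketched in a few lines; the function name and the list-based positions are illustrative, and only the 0.7 mm threshold comes from the text:

```python
import numpy as np

def update_marker(prev_pos, new_pos, threshold_mm=0.7):
    """Keep the previous marker position when the movement is below the
    threshold (a sketch of the movement-judgment function; names are
    illustrative, the 0.7 mm default follows the text)."""
    if np.linalg.norm(np.asarray(new_pos) - np.asarray(prev_pos)) < threshold_mm:
        return prev_pos   # below threshold: position is not updated
    return new_pos        # otherwise accept the new reading
```

Skipping sub-threshold updates avoids re-rendering the presentation objects for sensor jitter, which is what reduces the computational burden.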
  • the computer device 13 includes a processing core 131, a storage element 132, and a plurality of I/O interfaces 133, 134.
  • the processing core 131 is coupled to the storage element 132 and I/O interfaces 133, 134.
  • the I/O interface 133 can receive the sensing signals of the optical sensors 12, the I/O interface 134 communicates with the output device 5, and the computer device 13 can output the processing result to the output device 5 through the I/O interface 134.
  • the I/O interfaces 133 and 134 are, for example, peripheral transmission ports or communication ports.
  • the output device 5 is a device capable of outputting images, such as a display, a projector, a printer, and so on.
  • the storage element 132 stores program codes for execution by the processing core 131.
  • the storage element 132 includes a non-volatile memory and a volatile memory.
  • the non-volatile memory is, for example, a hard disk, a flash memory, a solid state disk, an optical disc, etc.
  • the volatile memory is, for example, dynamic random access memory, static random access memory, and so on.
  • the program code is stored in the non-volatile memory, and the processing core 131 can load the program code from the non-volatile memory to the volatile memory, and then execute the program code.
  • the storage component 132 stores the program code and data of the operation situation three-dimensional model 14 and the tracking module 15, and the processing core 131 can access the storage component 132 to execute and process the operation situation three-dimensional model 14 and the program code and data of the tracking module 15.
  • the processing core 131 is, for example, a processor, a controller, etc., and the processor includes one or more cores.
  • the processor may be a central processing unit or a graphics processor, and the processing core 131 may also be the core of a processor or a graphics processor.
  • the processing core 131 may also be a processing module, and the processing module includes multiple processors.
  • the operation of the optical tracking system includes the connection between the computer device 13 and the optical sensor 12, the pre-operation program, the coordinate correction program of the optical tracking system, the real-time rendering program, etc.
  • the tracking module 15 represents the relevant program codes and data of these operations.
  • the storage element 132 of the computer device 13 stores the tracking module 15
  • the processing core 131 executes the tracking module 15 to perform these operations.
  • the optimized conversion parameters can be found, and then the computer device 13 can set the positions of the medical appliance presentations 141 to 144 and the surgical target presentation 145 in the three-dimensional model 14 of the surgical situation according to the optimized conversion parameters and the sensing signals.
  • the computer device 13 can deduce the position of the medical appliance 21 inside and outside the surgical target object 3, and adjust the relative position between the medical appliance presenting objects 141 to 144 and the surgical target presenting object 145 in the three-dimensional model 14 of the operation situation accordingly.
  • the medical appliances 21-24 can be tracked in real time from the detection results of the optical sensor 12 and correspondingly presented in the three-dimensional model 14 of the surgical context.
  • the presentation of the three-dimensional model 14 in the surgical context is shown in FIG. 1D, for example.
  • the three-dimensional model 14 of the operation situation is a native model, which includes models established for the surgical target object 3 and also includes models established for the medical appliances 21-24.
  • the method of establishment can be that the developer directly uses computer graphics technology to construct it on the computer, such as using drawing software or special application development software.
  • the computer device 13 can output the display data 135 to the output device 5.
  • the display data 135 is used to present 3D images of the medical appliance presentation objects 141-144 and the surgical target presentation object 145.
  • the output device 5 can output the display data 135.
  • the output method is, for example, displaying or printing. The result of outputting in a display mode is shown in FIG. 1D, for example.
  • FIG. 2 is a flowchart of the pre-operation procedure of the optical tracking system of the embodiment.
  • the computer device 13 and the optical sensor 12 perform a pre-operation procedure.
  • the pre-operation procedure includes steps S01 and S02 for calibrating the optical sensor 12 and re-adjusting the scale of all medical appliances 21-24.
  • Step S01 is to calibrate the coordinate system of the optical sensor 12.
  • a plurality of calibration sticks have a plurality of optical markers, and the area enclosed by them is used to define the working area.
  • the optical sensors 12 sense the optical markers on the calibration sticks. When all optical markers are detected by each optical sensor 12, the area enclosed by the calibration sticks is the effective working area.
  • the calibration sticks are manually placed by the user, and the user can adjust their positions to modify the effective working area.
  • the sensitivity detected by the optical sensor 12 can be about 0.3 mm.
  • the coordinate system where the detection result of the optical sensor 12 is located is called the tracking coordinate system.
  • Step S02 is to adjust the scaling ratio of the medical appliances 21 to 24 and the surgical target object 3.
  • the medical appliances 21-24 are usually rigid bodies, and the coordinate correction adopts a rigid body correction method to avoid distortion. Therefore, the medical appliances 21-24 must be rescaled to the tracking coordinate system to obtain correct calibration results.
  • the calculation of the scaling ratio can be obtained by the following formula:
  scale = (1/N) Σᵢ ‖Mesh_i − Mesh_G‖ / ‖Track_i − Track_G‖
  • Track_G is the center of gravity in the tracking coordinate system, Track_i is the position of the i-th optical marker in the tracking coordinate system, Mesh_G is the center of gravity in the mesh point coordinate system, and Mesh_i is the position of the i-th optical marker in the mesh point coordinate system.
  • the tracking coordinate system is the coordinate system adopted by the detection results of the optical sensors 12, and the mesh point coordinate system is the coordinate system adopted by the three-dimensional model 14 of the surgical situation.
  • Step S02 first calculates the centers of gravity in the tracking coordinate system and the mesh point coordinate system, and then calculates the distance between each optical marker and the center of gravity in both coordinate systems. The individual ratios of the mesh point coordinate system to the tracking coordinate system are then summed and divided by the number of optical markers to obtain the overall ratio of the mesh point coordinate system to the tracking coordinate system.
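The step-S02 computation just described can be sketched as follows, assuming matched marker positions in both coordinate systems (function and variable names are illustrative):

```python
import numpy as np

def scale_ratio(mesh_pts, track_pts):
    """Average, over all optical markers, of the marker-to-centroid
    distance ratio between the mesh (model) coordinate system and the
    tracking coordinate system. A sketch of the step-S02 computation."""
    mesh_pts = np.asarray(mesh_pts, float)
    track_pts = np.asarray(track_pts, float)
    mesh_g = mesh_pts.mean(axis=0)      # center of gravity, mesh coordinates
    track_g = track_pts.mean(axis=0)    # center of gravity, tracking coordinates
    ratios = (np.linalg.norm(mesh_pts - mesh_g, axis=1)
              / np.linalg.norm(track_pts - track_g, axis=1))
    return ratios.mean()                # sum of individual ratios / N
```

For example, if every mesh point sits twice as far from its centroid as the corresponding tracked marker, the returned ratio is 2.0.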
  • FIG. 3A is a flowchart of the coordinate correction program of the optical tracking system of the embodiment.
  • the computer device and the optical sensor perform a coordinate calibration program, and the calibration program includes an initial calibration step S11, an optimization step S12, and a correction step S13.
  • the initial calibration step S11 performs an initial calibration between the coordinate system of the optical sensor 12 and the coordinate system of the three-dimensional model 14 of the surgical situation to obtain the initial conversion parameters.
  • the calibration between the coordinate systems is shown in FIG. 3B for example.
  • the optimization step S12 is to optimize the degrees of freedom of the initial conversion parameters to obtain the optimized conversion parameters. For example, the degrees of freedom are shown in FIG. 3C.
  • the correcting step S13 is to correct the setting error caused by the optical marker in the optimized conversion parameter.
  • the optical marker attached to the platform 4 can be used to correct the two coordinate systems.
  • the initial calibration step S11 is to find the transformation matrix between the feature points of the medical appliance and those from the optical sensors as the initial transformation parameter.
  • the initial calibration step can use singular value decomposition (SVD), triangular coordinate registration, or linear least-square estimation.
  • the transformation matrix includes, for example, a covariance matrix and a rotation matrix.
  • in step S11, singular value decomposition can be used to find the optimal transformation matrix between the feature points of the medical appliance presentations 141 to 144 and those from the optical sensors as the initial transformation parameter. The covariance matrix H can be obtained from these feature points and can be regarded as the objective function to be optimized:
  H = Σᵢ (P_i − P̄)(Q_i − Q̄)ᵀ
  where P_i and Q_i are matched feature points and P̄ and Q̄ are their centers of gravity.
  • decomposing H = UΣVᵀ by singular value decomposition, the rotation matrix M can be found by the following formula:
  M = V Uᵀ
  • after obtaining the rotation matrix M, the translation matrix T can be found by the following formula:
  T = Q̄ − M P̄
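The SVD-based initial calibration corresponds to standard rigid point-set registration; the sketch below is a minimal illustration under that assumption, not the patent's exact implementation (the determinant check for the reflection case is a common addition):

```python
import numpy as np

def rigid_register(src, dst):
    """SVD (Kabsch-style) rigid registration between matched point sets:
    finds rotation M and translation T with dst ≈ src @ M.T + T.
    A sketch of the initial calibration step."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)   # covariance matrix of feature points
    U, _, Vt = np.linalg.svd(H)
    M = Vt.T @ U.T                        # rotation matrix M = V Uᵀ
    if np.linalg.det(M) < 0:              # guard against a reflection solution
        Vt[-1] *= -1
        M = Vt.T @ U.T
    T = dst_c - M @ src_c                 # translation T = Q̄ − M P̄
    return M, T
```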
  • the optimization step S12 is to obtain multiple Euler angles with multiple degrees of freedom from the rotation matrix M, and use Gauss-Newton algorithm to iteratively optimize the parameters of multiple degrees of freedom to obtain optimized conversion parameters.
  • the multiple degrees of freedom are, for example, six degrees of freedom; other numbers of degrees of freedom, such as nine, are also possible with appropriate modification of the expressions. Since the conversion result obtained from the initial calibration step S11 may not be accurate enough, performing the optimization step S12 can improve the accuracy and obtain a more accurate conversion result.
  • the rotation matrix M can be obtained from the above formula.
  • multiple Euler angles can be obtained from the rotation matrix M by the following formulas:
  θ_x = atan2(M₃₂, M₃₃), θ_y = atan2(−M₃₁, √(M₃₂² + M₃₃²)), θ_z = atan2(M₂₁, M₁₁)
  • the rotation of the world coordinate system is assumed to be orthogonal. Since the parameters of the six degrees of freedom have been obtained, these parameters can be iteratively optimized by the Gauss-Newton method to obtain the optimized conversion parameters. The objective function to be minimized is
  E(p) = Σᵢ₌₁ⁿ ‖bᵢ(p)‖²
  • b represents the least-square error between the reference target points and the current points, n is the number of feature points, and p is a transformation parameter which has translation and rotation parameters; iterating with the Gauss-Newton method will adjust p to find the best value.
  • the update function is as follows:
  p_{k+1} = p_k − (JᵀJ)⁻¹ Jᵀ b
  where J is the Jacobian matrix from the objective function.
  • the stop condition is defined, for example, as terminating when the error ‖b‖² falls below a threshold or a maximum number of iterations is reached.
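A minimal sketch of the Gauss-Newton refinement over a 6-DoF pose p = (tx, ty, tz, rx, ry, rz); the Euler-angle convention, the numerical Jacobian, and all names are assumptions for illustration, not the patent's exact formulation:

```python
import numpy as np

def euler_to_rot(rx, ry, rz):
    """Rotation matrix from Euler angles (Z·Y·X convention assumed)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def gauss_newton_refine(src, dst, p0, iters=20, tol=1e-12):
    """Iteratively minimize the residual b between the transformed source
    points and the reference target points, with update p ← p − (JᵀJ)⁻¹Jᵀb
    (computed via least squares) and a stop condition on ‖b‖²."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    p = np.asarray(p0, float)

    def residual(p):
        R = euler_to_rot(p[3], p[4], p[5])
        return ((src @ R.T + p[:3]) - dst).ravel()   # b: stacked point errors

    for _ in range(iters):
        b = residual(p)
        if b @ b < tol:                    # stop condition on squared error
            break
        J = np.empty((b.size, 6))
        h = 1e-6
        for k in range(6):                 # numerical Jacobian of b w.r.t. p
            dp = np.zeros(6)
            dp[k] = h
            J[:, k] = (residual(p + dp) - b) / h
        delta = np.linalg.lstsq(J, b, rcond=None)[0]
        p = p - delta                      # Gauss-Newton update
    return p
```

In practice the analytic Jacobian of the rotation with respect to the Euler angles would be used instead of finite differences; the structure of the update is the same.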
  • the correcting step S13 is to correct the setting error caused by the optical marker in the optimized conversion parameter.
  • the correction step S13 includes a determination step S131 and an adjustment step S132.
  • a source feature point correction procedure can be used to overcome the error caused by manually selecting feature points. This is because there is an error between the feature points of the medical appliance presentations 141 to 144 and the surgical target presentation 145 of the surgical scene three-dimensional model 14 and the feature points of the medical appliances 21 to 24 and the surgical target object 3, since these feature points are selected by the user. The feature points of the medical appliances 21 to 24 and the surgical target object 3 may include the points where the optical markers 11 are set. Since the optimal transformation can be obtained from step S12, the target position transformed from the source point will approach the reference target point V_T after the n-th iteration.
  • the source point correction step first calculates the inverse transformation of the transformation matrix, and then obtains the new source point from the reference target point.
  • the calculation formula is as follows:
  V_S' = M⁻¹ (V_T − T)
  • in each iteration, a constraint step size c₁ can be set, and a constraint region box size c₂ can be set to a constant value, to limit the distance moved by the original source point.
  • V_T is the target point obtained after the transformation of the source point V_S.
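The source point correction with inverse transformation, constraint step size, and constraint region box can be sketched as follows (parameter values and names are illustrative assumptions):

```python
import numpy as np

def correct_source_points(V_S, V_T, M, T, step=0.1, box=1.0):
    """Move each source point toward the inverse-transformed reference
    target, limited by a constraint step size (c1) per iteration and a
    constraint region box (c2) around the original point. A sketch of
    the correction step; `step` and `box` values are illustrative."""
    V_S, V_T = np.asarray(V_S, float), np.asarray(V_T, float)
    # inverse transformation: V_S' = M⁻¹ (V_T − T), row-vector form
    new_src = (V_T - T) @ np.linalg.inv(M).T
    move = new_src - V_S
    dist = np.linalg.norm(move, axis=1, keepdims=True)
    # clamp each move to the constraint step size
    move = np.where(dist > step, move * (step / np.maximum(dist, 1e-12)), move)
    corrected = V_S + move
    # keep corrected points inside the constraint region box
    return np.clip(corrected, V_S - box, V_S + box)
```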
  • the coordinate position of the three-dimensional model 14 of the surgical situation can be accurately transformed to correspond to the optical marker 11 in the tracking coordinate system, and vice versa.
  • the medical appliances 21 to 24 and the surgical target object 3 can be tracked in real time based on the detection results of the optical sensors 12, and after the aforementioned processing their positions in the tracking coordinate system correspond accurately to the medical appliance presentations 141 to 144 and the surgical target presentation 145.
  • the medical appliance presentations 141 to 144 and the surgical target presentation 145 will follow the movements of the medical appliances 21 to 24 and the surgical target object 3 in the three-dimensional model 14 of the surgical situation in real time.
  • FIG. 4 is a block diagram of the training system for the operation of the medical appliance according to the embodiment.
  • the training system for medical appliance operation (hereinafter referred to as the training system) can realistically simulate the surgical training environment.
  • the training system includes an optical tracking system 1a, one or more medical appliances 21-24, and the surgical target object 3.
  • the optical tracking system 1a includes a plurality of optical markers 11, a plurality of optical sensors 12, and a computer device 13.
  • the optical markers 11 are arranged on medical appliances 21-24 and surgical target objects 3, and medical appliances 21-24 and surgical target objects 3 are placed On platform 4.
  • the medical appliance presents 141 to 144 and the surgical target presents 145 are correspondingly presented on the three-dimensional model 14a of the surgical context.
  • the medical tools 21-24 include medical probes and surgical tools.
  • the medical tools 21 are medical probes
  • the medical tools 22-24 are surgical tools.
  • the medical appliance presentations 141-144 include medical probe presentations and surgical appliance presentations.
  • the medical appliance presentation 141 is a medical probe presentation
  • the medical appliance presentations 142-144 are surgical appliance presentations.
  • the storage component 132 stores the program code and data of the operation situation three-dimensional model 14a and the tracking module 15, and the processing core 131 can access the storage component 132 to execute and process the operation situation three-dimensional model 14a and the program code and data of the tracking module 15.
  • the surgical target object 3 is an artificial limb, such as an artificial upper limb, a hand phantom, an artificial palm, artificial fingers, an artificial arm, an artificial upper arm, an artificial forearm, an artificial elbow, an artificial foot, artificial toes, an artificial ankle, an artificial calf, an artificial thigh, an artificial knee, an artificial torso, an artificial neck, an artificial head, an artificial shoulder, an artificial chest, an artificial abdomen, an artificial waist, an artificial hip, or other artificial parts.
  • the training system takes the minimally invasive surgery training of the fingers as an example.
  • the surgical target object 3 is a prosthetic hand, the surgery is, for example, trigger finger treatment surgery, the medical probe 21 is a realistic ultrasonic transducer (or probe), and the surgical instruments 22-24 are a needle, a dilator, and a hook blade. In other embodiments, other surgical target objects 3 may be used for other surgical training.
  • the storage element 132 also stores the program codes and data of the physical medical image 3D model 14b, the artificial medical image 3D model 14c, and the training module 16.
  • the processing core 131 can access the storage element 132 to execute and process the physical medical image 3D model 14b, the artificial medical image 3D model 14c, and the training module 16.
  • the training module 16 is responsible for the surgical training procedures described below and for the processing, integration, and calculation of related data.
  • FIG. 5A is a schematic diagram of the three-dimensional model of the surgical scene of the embodiment
  • FIG. 5B is a schematic diagram of the three-dimensional physical medical image model of the embodiment
  • FIG. 5C is a schematic diagram of the artificial medical image three-dimensional model.
  • the content of these three-dimensional models can be output or printed by the output device 5.
  • the physical medical image three-dimensional model 14b is a three-dimensional model built from medical images of the surgical target object 3, such as the model shown in FIG. 5B.
  • the medical image is, for example, a computed tomography image; the images of the surgical target object 3 actually produced by computed tomography are used to build the physical medical image three-dimensional model 14b.
  • the artificial medical image three-dimensional model 14c contains an artificial medical image model.
  • the artificial medical image model is a model established for the surgical target object 3, such as the three-dimensional model shown in FIG. 5C.
  • the artificial medical imaging model is a three-dimensional model of artificial ultrasound images. Since the surgical target object 3 is not a real living body, computed tomography can still capture images of its physical structure, but other medical imaging modalities such as ultrasound imaging cannot obtain effective or meaningful images directly from it. Therefore, the ultrasound image model of the surgical target object 3 must be generated artificially. Selecting an appropriate position or plane from the three-dimensional artificial ultrasound model generates a two-dimensional artificial ultrasound image.
  • the computer device 13 generates a medical image 136 according to the three-dimensional model 14a of the surgical situation and the medical image model.
  • the medical image model is, for example, the physical medical image three-dimensional model 14b or the artificial medical image three-dimensional model 14c.
  • the computer device 13 generates a medical image 136 based on the three-dimensional model 14a of the surgical situation and the three-dimensional model 14c of an artificial medical image.
  • the medical image 136 is a two-dimensional artificial ultrasound image.
  • the computer device 13 scores the detection object (such as a specific surgical site) found via the medical probe presentation 141 and the operation of the surgical instrument presentations 142-144.
  • 6A to 6D are schematic diagrams of the direction vector of the medical appliance of the embodiment.
  • the direction vectors of the medical appliance presentations 141-144 corresponding to the medical appliances 21-24 are rendered in real time.
  • the direction vector of the medical probe can be obtained by computing the center of gravity of its optical markers, projecting another reference point onto the xz plane, and calculating the vector from the center of gravity to the projection point.
  • the other medical appliance presentations 142-144 are simpler; their direction vectors can be calculated using the sharp points in the model.
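The probe direction-vector computation described above can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; the marker coordinates, the choice of reference point, and the function name are assumptions.

```python
import numpy as np

def probe_direction(marker_positions, tip_point):
    """Estimate the probe's direction vector (illustrative sketch).

    marker_positions: (N, 3) array of optical-marker coordinates.
    tip_point: a second reference point on the probe.
    """
    centroid = marker_positions.mean(axis=0)                  # center of gravity of the markers
    tip_on_xz = np.array([tip_point[0], 0.0, tip_point[2]])   # projection onto the xz plane (y = 0)
    direction = tip_on_xz - centroid                          # vector: center of gravity -> projection
    return direction / np.linalg.norm(direction)              # unit direction vector

markers = np.array([[0.0, 1.0, 0.0], [1.0, 1.0, 0.0], [0.5, 1.0, 1.0]])
tip = np.array([0.5, 0.2, 2.0])
d = probe_direction(markers, tip)
```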
  • the training system can draw only the model of the area where the surgical target presentation 145 is located instead of drawing all of the medical appliance presentations 141-144.
  • the transparency of the skin model can be adjusted to observe the internal anatomical structure of the surgical target presentation 145 and to view ultrasound or computed tomography image slices of different cross-sections, such as the horizontal (axial) plane, sagittal plane, or coronal plane, which can help the operator during the operation.
  • the bounding boxes of each model are constructed for collision detection.
  • the surgical training system can determine which medical appliances have contacted tendons, bones and/or skin, and can determine when to start scoring.
  • the optical markers 11 attached to the surgical target object 3 must be clearly visible to or detectable by the optical sensors 12. If an optical marker 11 is occluded, the accuracy of its detected position decreases; therefore, at least two optical sensors 12 are needed so that all optical markers can be seen at the same time.
  • the calibration procedure is as described above, for example, a three-stage calibration used to accurately align the two coordinate systems.
  • the calibration error, the iteration count, and the final positions of the optical markers can be displayed in the training system window, for example, via the output device 5.
  • this accuracy and reliability information can remind users to recalibrate the system when the error is too large.
  • the three-dimensional model is drawn at a frequency of 0.1 times per second, and the drawn result can be output to the output device 5 for display or printing.
  • the user can start the surgical training process.
  • in the training process, a medical probe is first used to find the site to be operated on; the site is then anesthetized. Next, a path is expanded from the outside to the surgical site, and after expansion the scalpel is advanced along this path to the surgical site.
  • FIGS. 7A to 7D are schematic diagrams of the training process of the training system of the embodiment.
  • the surgical training process includes four stages and the minimally invasive surgery training of the fingers is taken as an example for illustration.
  • the medical probe 21 is used to find the site to be operated on, so that the training system can confirm the site has been located.
  • the surgical site is, for example, the pulley area, which can be judged by locating the metacarpophalangeal joints and the anatomical structures of the bones and tendons of the finger; the focus at this stage is whether the first pulley area (A1 pulley) is found.
  • the training system will automatically enter the next stage of scoring.
  • the medical probe 21 is placed on the skin along the midline of the flexor tendon and kept in contact with the skin at the metacarpophalangeal (MCP) joints.
  • the surgical instrument 22 is used to open the path of the surgical area.
  • the surgical instrument 22 is, for example, a needle.
  • the needle is inserted to inject local anesthetic and expand the space.
  • the process of inserting the needle can be performed under the guidance of continuous ultrasound images.
  • this continuous ultrasound image is an artificial ultrasound image, such as the aforementioned medical image 136. Since regional anesthesia is difficult to simulate with a prosthetic hand, anesthesia is not specifically simulated.
  • the surgical instrument 23 is pushed in along the same path as the surgical instrument 22 in the second stage to create the trajectory required for the hook blade in the next stage.
  • the surgical instrument 23 is, for example, a dilator.
  • the training system will automatically enter the next stage of scoring.
  • the surgical instrument 24 is inserted along the trajectory created in the third stage, and the pulley is divided by the surgical instrument 24.
  • the surgical instrument 24 is, for example, a hook blade.
  • the focus of the third stage is similar to that of the fourth stage. During surgical training, the vessels and nerves near both sides of the flexor tendon are easily miscut. Therefore, the focus of the third and fourth stages is not only avoiding contact with tendons, nerves, and blood vessels, but also opening a track at least 2 mm beyond the first pulley area to leave space for the hook blade to cut the pulley area.
  • the operations of each training phase must be quantified.
  • the operation area is defined by the finger anatomy shown in FIG. 8A and can be divided into an upper boundary and a lower boundary. Because most of the tissue above the tendon is fat and does not cause pain, the upper boundary of the surgical area is defined by the skin of the palm and the lower boundary by the tendon.
  • the proximal depth boundary is 10 mm (the average length of the first pulley area) from the metacarpal head-neck joint.
  • the distal depth boundary is less important because it does not involve tendons, blood vessels, or nerves.
  • the left and right boundaries are defined by the width of the tendon, since nerves and blood vessels are located on both sides of the tendon.
  • the scoring method for each training stage is as follows.
  • in the first stage, the focus of the training is to find the target to be excised, namely the first pulley area (A1 pulley).
  • the scoring formula for the first stage is as follows:
  • first stage score = (target score × its weight) + (probe angle score × its weight)
  • the focus of training is to use the needle to open the path to the surgical area. Since the pulley area surrounds the tendon, the distance between the main axis of the bone and the needle should be small. Therefore, the second-stage scoring formula is as follows:
  • second stage score = (opening score × its weight) + (needle angle score × its weight) + (distance-from-bone-main-axis score × its weight)
  • the focus of training is to insert a dilator that enlarges the surgical area into the finger.
  • the trajectory of the dilator must be close to the main axis of the bone.
  • the dilator should be approximately parallel to the main axis of the bone, with an allowable angle deviation of ±30°. To leave space for the hook blade to cut the first pulley area, the dilator must reach at least 2 mm beyond the first pulley area.
  • third stage score = (beyond-the-pulley-area score × its weight) + (dilator angle score × its weight) + (distance-from-bone-main-axis score × its weight) + (not-leaving-the-surgical-area score × its weight)
  • the scoring conditions are similar to those of the third stage, except that the hook blade needs to be rotated 90°; this rule is added to the scoring at this stage.
  • the scoring formula is as follows:
  • fourth stage score = (beyond-the-pulley-area score × its weight) + (hook angle score × its weight) + (distance-from-bone-main-axis score × its weight) + (not-leaving-the-surgical-area score × its weight) + (rotating-the-hook score × its weight)
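The weighted-sum scoring used in all four stages can be illustrated with a small sketch. The criterion names and weight values below are placeholders for illustration, not values from the specification.

```python
def stage_score(component_scores, weights):
    """Weighted sum of per-criterion scores for one training stage (illustrative)."""
    assert set(component_scores) == set(weights)   # every criterion must have a weight
    return sum(component_scores[k] * weights[k] for k in component_scores)

# Hypothetical fourth-stage evaluation: five criteria, equal weights.
fourth_stage = stage_score(
    {"beyond_pulley": 1.0, "hook_angle": 0.8, "axis_distance": 0.9,
     "stayed_in_area": 1.0, "hook_rotated": 1.0},
    {"beyond_pulley": 0.2, "hook_angle": 0.2, "axis_distance": 0.2,
     "stayed_in_area": 0.2, "hook_rotated": 0.2})
```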
  • this calculation method is the same as that for the angle between the palm normal and the direction vector of the medical appliance.
  • the three axes of the bone can be found by applying principal component analysis (PCA) to the bone extracted from the computed tomography image.
  • the longest axis is taken as the main axis of the bone.
  • the shape of the bone in the computed tomography image is not uniform, which causes the axis found by principal component analysis and the palm normal not to be perpendicular to each other.
  • as shown in FIG. 8C, instead of applying principal component analysis to the bone, it can be applied to the skin over the bone to find the palm normal. The angle between the bone main axis and the medical appliance can then be calculated.
  • the distance between the bone main axis and the medical appliance also needs to be calculated.
  • the distance calculation is similar to computing the distance between the tip of the medical appliance and a plane.
  • the plane refers to the plane containing the bone main axis vector and the palm normal.
  • the schematic diagram of the distance calculation is shown in FIG. 8D. The normal of this plane can be obtained from the cross product of the palm normal vector D2 and the bone main axis vector D1. Since both vectors are obtained in the previous calculations, the distance between the main axis of the bone and the appliance can be calculated easily.
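The point-to-plane distance described above can be sketched as follows. D1 and D2 follow the naming of FIG. 8D; the remaining names and the sample coordinates are assumptions for illustration.

```python
import numpy as np

def distance_to_axis_plane(tip, axis_point, bone_axis, palm_normal):
    """Distance from the instrument tip to the plane spanned by the bone main
    axis (D1) and the palm normal (D2); the plane normal is their cross product."""
    n = np.cross(palm_normal, bone_axis)     # normal of the plane containing D1 and D2
    n = n / np.linalg.norm(n)
    return abs(np.dot(tip - axis_point, n))  # signed offset along n, absolute value

d = distance_to_axis_plane(
    tip=np.array([1.0, 2.0, 3.0]),
    axis_point=np.zeros(3),
    bone_axis=np.array([1.0, 0.0, 0.0]),     # D1
    palm_normal=np.array([0.0, 1.0, 0.0]))   # D2
```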
  • FIG. 8E is a schematic diagram of the artificial medical image of the embodiment, and the tendon section and the skin section in the artificial medical image are marked with dotted lines.
  • the tendon section and the skin section can be used to construct the model and its bounding boxes; the bounding boxes are used for collision detection, and the pulley area can be defined in the static model.
  • with collision detection, the surgical area can be determined, as well as whether the medical appliance crosses the pulley area.
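Bounding-box collision detection of this kind is commonly implemented as an axis-aligned bounding-box (AABB) overlap test; the sketch below is a generic illustration under that assumption, not the patent's implementation, and the box coordinates are made up.

```python
def aabb_overlap(box_a, box_b):
    """Axis-aligned bounding-box intersection test.
    Each box is ((min_x, min_y, min_z), (max_x, max_y, max_z))."""
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    # Boxes overlap iff their extents overlap on every axis.
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

tendon_box = ((0, 0, 0), (10, 2, 2))
hook_box = ((9, 1, 1), (12, 3, 3))
touching = aabb_overlap(tendon_box, hook_box)   # the boxes overlap near x = 9..10
```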
  • the average length of the first pulley area is about 1mm, and the first pulley area is located at the proximal end of the metacarpal head-neck (MCP) joint.
  • the average thickness of the pulley area is about 0.3mm and surrounds the tendons.
  • FIG. 9A is a flowchart of generating artificial medical images according to an embodiment. As shown in FIG. 9A, the generation flow includes steps S21 to S24.
  • Step S21 is to extract the first set of bone skin features from the cross-sectional image data of the artificial limb.
  • the artificial limb is the aforementioned surgical target object 3, which serves as the limb for minimally invasive surgery training, such as a prosthetic hand.
  • the cross-sectional image data contains multiple cross-sectional images, which are, for example, computed tomography images or physical cross-section images.
  • Step S22 is to extract the second set of bone skin features from the medical image data.
  • the medical image data is a three-dimensional ultrasound image, such as the three-dimensional ultrasound image of FIG. 9B, which is built from multiple planar ultrasound images.
  • the medical image data are medical images taken of a real organism, not of an artificial limb.
  • the first group of bone skin features and the second group of bone skin features include multiple bone feature points and multiple skin feature points.
  • Step S23 is to establish feature registration data (registration) based on the first set of bone and skin features and the second set of bone and skin features.
  • Step S23 includes: taking the first set of bone-skin features as a reference target, and finding a correlation function as the spatial alignment data, where the correlation function aligns the second set of bone-skin features with the reference target while tolerating perturbations in the first and second sets of bone-skin features.
  • the correlation function is found by formulating a maximum likelihood estimation problem and solving it with the expectation-maximization (EM) algorithm.
  • Step S24 is to deform the medical image data according to the feature alignment data to generate artificial medical image data suited to the artificial limb.
  • the artificial medical image data is, for example, a three-dimensional ultrasound image, which still retains the characteristics of the organism in the original ultrasound image.
  • Step S24 includes: generating a deformation function based on the medical image data and the feature alignment data; applying a grid to the medical image data to obtain multiple grid-point positions; deforming these positions according to the deformation function; and, based on the deformed positions, filling in corresponding pixels from the medical image data to generate a deformed image, which serves as the artificial medical image data.
  • the deformation function is generated using the moving least squares (MLS) method.
  • the deformed image is generated using an affine transform.
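A minimal sketch of the moving-least-squares idea for a single 2D point is given below, following the commonly used affine MLS formulation: distance-weighted control points determine a local affine map. Evaluating one point (rather than a whole grid) and the specific control points are simplifications for illustration, not the patent's procedure.

```python
import numpy as np

def mls_affine_deform(v, p, q, alpha=1.0, eps=1e-8):
    """Moving-least-squares affine deformation of a single point v, given
    control points p and their displaced positions q (illustrative sketch)."""
    d2 = np.sum((p - v) ** 2, axis=1) + eps
    w = 1.0 / d2 ** alpha                              # weights fall off with distance
    p_star = (w[:, None] * p).sum(0) / w.sum()         # weighted centroids
    q_star = (w[:, None] * q).sum(0) / w.sum()
    p_hat, q_hat = p - p_star, q - q_star
    A = (w[:, None, None] * p_hat[:, :, None] * p_hat[:, None, :]).sum(0)
    B = (w[:, None, None] * p_hat[:, :, None] * q_hat[:, None, :]).sum(0)
    M = np.linalg.solve(A, B)                          # best affine map in weighted LS sense
    return (v - p_star) @ M + q_star

p = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
q = p + np.array([2.0, 0.0])                           # controls shifted: a pure translation
moved = mls_affine_deform(np.array([0.3, 0.3]), p, q)  # follows the translation
```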
  • in steps S21 to S24, image features are extracted from the real ultrasound images and the computed tomography images of the artificial hand, the corresponding deformation point pairs are obtained by image registration, and an image close to a real ultrasound image is then generated for the artificial hand through deformation. The resulting ultrasound image retains the characteristics of the original live ultrasound image.
  • since the artificial medical image data is a three-dimensional ultrasound image, a planar ultrasound image of a specific position or section can be generated from the position or section mapped in the three-dimensional ultrasound image.
  • FIG. 10A and FIG. 10B are schematic diagrams of the correction of the artificial hand model and the ultrasound volume of the embodiment.
  • the physical medical image 3D model 14b and the artificial medical image 3D model 14c are related to each other. Since the model of the prosthetic hand is constructed from the computed tomography image volume, the positional relationship between the computed tomography image volume and the ultrasound volume can be used directly to establish the correlation between the artificial hand and the ultrasound volume.
  • FIG. 10C is a schematic diagram of ultrasonic volume and collision detection according to an embodiment
  • FIG. 10D is a schematic diagram of an artificial ultrasound image according to an embodiment.
  • the training system must be able to simulate a real ultrasonic transducer (or probe) and generate slice image fragments from the ultrasonic volume; regardless of the angle of the transducer (or probe), the simulated transducer (or probe) must depict the corresponding image segment.
  • the angle between the medical probe 21 and the ultrasonic volume is first detected, and collision detection of the segment surface is then performed based on the width of the medical probe 21 and the ultrasonic volume, which finds the corresponding image segment to be drawn.
  • the resulting image is shown in Figure 10D.
  • the artificial medical image data is a three-dimensional ultrasound image
  • the three-dimensional ultrasound image has a corresponding ultrasound volume
  • the content of the image segment to be depicted by the simulated transducer (or probe) can be generated according to the mapped position in the three-dimensional ultrasound image.
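Slicing an image fragment out of the ultrasound volume along the simulated probe plane can be sketched with a nearest-neighbor sampler. The plane parametrization (origin plus two in-plane directions), the function names, and the toy volume are assumptions for illustration.

```python
import numpy as np

def slice_volume(volume, origin, u_dir, v_dir, width, height, spacing=1.0):
    """Sample a 2D image fragment from a 3D volume along the plane spanned by
    u_dir and v_dir at `origin` (nearest-neighbor; out-of-volume samples -> 0)."""
    u_dir = u_dir / np.linalg.norm(u_dir)
    v_dir = v_dir / np.linalg.norm(v_dir)
    image = np.zeros((height, width))
    for r in range(height):
        for c in range(width):
            pos = origin + c * spacing * u_dir + r * spacing * v_dir
            idx = np.round(pos).astype(int)               # nearest voxel index
            if all(0 <= idx[k] < volume.shape[k] for k in range(3)):
                image[r, c] = volume[tuple(idx)]
    return image

vol = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)  # toy "ultrasound volume"
img = slice_volume(vol, origin=np.array([0.0, 0.0, 1.0]),
                   u_dir=np.array([1.0, 0.0, 0.0]),
                   v_dir=np.array([0.0, 1.0, 0.0]), width=4, height=4)
```

Tilting `u_dir`/`v_dir` produces oblique slices, which is how an arbitrarily angled transducer could be simulated.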

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Engineering & Computer Science (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Optics & Photonics (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Instructional Devices (AREA)

Abstract

An optical tracking system (1) for medical instruments (21-24) comprises multiple optical markers (11), multiple optical sensors (12) and a computer device (13). The optical markers (11) are provided at the medical instruments (21-24). The respective optical sensors (12) optically sense the optical markers (11) to obtain multiple sensed signals. The computer device (13) has a three-dimensional surgical scenario model (14), and is connected to the optical sensors (12) to receive the sensed signals. The computer device adjusts, according to the sensed signals, relative positions of medical instrument representations (141-144) and a surgical target representation (145) in the three-dimensional surgical scenario model (14).

Description

Optical tracking system and training system for medical appliances

Technical Field

The invention relates to an optical tracking system and a training system, and in particular to an optical tracking system and a training system for medical appliances.

Background Art
Operation training on medical devices takes time before a learner becomes proficient. In minimally invasive surgery, for example, the operator usually manipulates an ultrasound imaging probe in addition to the scalpel, and the tolerance for error is small, so rich experience is usually required for the procedure to go smoothly. Training before surgery is therefore especially important.

Therefore, how to provide an optical tracking system and training system for medical devices that can assist or train users in operating medical devices has become an important issue.
Summary of the Invention

In view of the above problems, the object of the present invention is to provide an optical tracking system and a training system for medical appliances that can assist or train users in operating medical appliances.
An optical tracking system for medical appliances includes a plurality of optical markers, a plurality of optical sensors, and a computer device. The optical markers are arranged on the medical appliances, and the optical sensors optically sense the optical markers to respectively generate a plurality of sensing signals. The computer device is coupled to the optical sensors to receive the sensing signals, has a three-dimensional model of the surgical situation, and adjusts, according to the sensing signals, the relative position between the medical appliance presentation and the surgical target presentation in the three-dimensional model of the surgical situation.
In one embodiment, there are at least two optical sensors, arranged above the medical appliances and facing the optical markers.

In one embodiment, the computer device and the optical sensors perform a pre-operation procedure, which includes calibrating the coordinate system of the optical sensors and adjusting the scaling ratio for the medical appliances and the surgical target object.

In one embodiment, the computer device and the optical sensors perform a coordinate calibration procedure, which includes an initial calibration step, an optimization step, and a correction step. The initial calibration step performs an initial calibration between the coordinate system of the optical sensors and the coordinate system of the three-dimensional model of the surgical situation to obtain initial conversion parameters. The optimization step optimizes the degrees of freedom of the initial conversion parameters to obtain optimized conversion parameters. The correction step corrects the errors in the optimized conversion parameters caused by the placement of the optical markers.
In one embodiment, the initial calibration step uses singular value decomposition (SVD), triangle coordinate registration, or linear least-squares estimation.

In one embodiment, the initial calibration step uses singular value decomposition to find the transformation matrix between the feature points of the medical appliance presentation and the optical sensors as the initial conversion parameters; the transformation involves a covariance matrix and a rotation matrix. The optimization step obtains multiple Euler angles of the degrees of freedom from the rotation matrix and iteratively optimizes the multi-degree-of-freedom parameters with the Gauss-Newton method to obtain the optimized conversion parameters.
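The SVD-based initial calibration can be illustrated with the standard Kabsch-style least-squares rigid alignment, which builds a covariance matrix between corresponding points and recovers the rotation matrix from its SVD. This generic sketch is not the patented three-stage procedure, and the sample points are made up.

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst via SVD
    of the covariance matrix (Kabsch-style)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)        # covariance matrix of centered points
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # rotation matrix
    t = dst_c - R @ src_c                      # translation
    return R, t

src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1.0]])
dst = src @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_transform_svd(src, dst)
```

In a full pipeline, the Euler angles extracted from `R` would then serve as the starting point for the Gauss-Newton refinement described above.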
In one embodiment, the computer device sets the positions of the medical appliance presentation and the surgical target presentation in the three-dimensional model of the surgical situation according to the optimized conversion parameters and the sensing signals.

In one embodiment, the correction step uses the inverse transformation and the sensing signals to correct the positions of the medical appliance presentation and the surgical target presentation in the three-dimensional model of the surgical situation.

In one embodiment, the computer device outputs display data used to present 3D images of the medical appliance presentation and the surgical target presentation.
In one embodiment, the computer device generates a medical image according to the three-dimensional model of the surgical situation and a medical image model.

In one embodiment, the surgical target object is an artificial limb, and the medical image is an artificial medical image of the surgical target object.

In one embodiment, the computer device deduces the position of the medical appliance inside and outside the surgical target object and adjusts the relative position between the medical appliance presentation and the surgical target presentation in the three-dimensional model of the surgical situation accordingly.
A training system for medical appliance operation includes medical appliances and the aforementioned optical tracking system for medical appliances.

In one embodiment, the medical appliances include a medical probe and surgical instruments, and the medical appliance presentations include a medical probe presentation and surgical instrument presentations.

In one embodiment, the computer device scores the detection object found via the medical probe presentation and the operation of the surgical instrument presentations.
A calibration method for an optical tracking system for medical appliances includes a sensing step, an initial calibration step, an optimization step, and a correction step. The sensing step uses the optical sensors of the optical tracking system to optically sense the optical markers arranged on the medical appliances to respectively generate sensing signals. The initial calibration step performs, according to the sensing signals, an initial calibration between the coordinate system of the optical sensors and the coordinate system of the three-dimensional model of the surgical situation to obtain initial conversion parameters. The optimization step optimizes the degrees of freedom of the initial conversion parameters to obtain optimized conversion parameters. The correction step corrects the errors in the optimized conversion parameters caused by the placement of the optical markers.

In one embodiment, the calibration method further includes a pre-operation procedure, which includes calibrating the coordinate system of the optical sensors and adjusting the scaling ratio for the medical appliances and the surgical target object.

In one embodiment, the initial calibration step uses singular value decomposition (SVD), triangle coordinate registration, or linear least-squares estimation.

In one embodiment, the initial calibration step uses singular value decomposition to find the transformation matrix between the feature points of the medical appliance presentation of the three-dimensional model of the surgical situation and the optical sensors as the initial conversion parameters; the transformation involves a covariance matrix and a rotation matrix. The optimization step obtains multiple Euler angles of the degrees of freedom from the rotation matrix and iteratively optimizes the multi-degree-of-freedom parameters with the Gauss-Newton method to obtain the optimized conversion parameters.

In one embodiment, the positions of the medical appliance presentation and the surgical target presentation in the three-dimensional model of the surgical situation are set according to the optimized conversion parameters and the sensing signals, and the correction step uses the inverse transformation and the sensing signals to correct these positions.
As described above, the optical tracking system of the present disclosure can assist or train users in operating medical appliances, and the training system of the present disclosure can provide trainees with a realistic surgical training environment to effectively assist them in completing surgical training.
附图说明Description of the drawings
FIG. 1A is a block diagram of the optical tracking system of an embodiment.
FIG. 1B and FIG. 1C are schematic diagrams of the optical tracking system of an embodiment.
FIG. 1D is a schematic diagram of the three-dimensional model of the surgical situation of an embodiment.
FIG. 2 is a flowchart of the pre-operation procedure of the optical tracking system of an embodiment.
FIG. 3A is a flowchart of the coordinate calibration procedure of the optical tracking system of an embodiment.
FIG. 3B is a schematic diagram of the coordinate system calibration of an embodiment.
FIG. 3C is a schematic diagram of the degrees of freedom of an embodiment.
FIG. 4 is a block diagram of the training system for medical appliance operation of an embodiment.
FIG. 5A is a schematic diagram of the three-dimensional model of the surgical situation of an embodiment.
FIG. 5B is a schematic diagram of the three-dimensional physical medical image model of an embodiment.
FIG. 5C is a schematic diagram of the three-dimensional artificial medical image model of an embodiment.
FIG. 6A to FIG. 6D are schematic diagrams of the direction vectors of the medical appliances of an embodiment.
FIG. 7A to FIG. 7D are schematic diagrams of the training process of the training system of an embodiment.
FIG. 8A is a schematic diagram of the finger structure of an embodiment.
FIG. 8B is a schematic diagram of applying principal component analysis to the bones from computed tomography images in an embodiment.
FIG. 8C is a schematic diagram of applying principal component analysis to the skin from computed tomography images in an embodiment.
FIG. 8D is a schematic diagram of calculating the distance between the principal axis of a bone and a medical appliance in an embodiment.
FIG. 8E is a schematic diagram of an artificial medical image of an embodiment.
FIG. 9A is a block diagram of generating an artificial medical image in an embodiment.
FIG. 9B is a schematic diagram of an artificial medical image of an embodiment.
FIG. 10A and FIG. 10B are schematic diagrams of the calibration between the hand phantom model and the ultrasound volume of an embodiment.
FIG. 10C is a schematic diagram of the ultrasound volume and collision detection of an embodiment.
FIG. 10D is a schematic diagram of an artificial ultrasound image of an embodiment.
Detailed Description
Hereinafter, the optical tracking system and the training system for medical appliance operation according to preferred embodiments of the present invention will be described with reference to the related drawings, in which the same elements are denoted by the same reference numerals.
As shown in FIG. 1A, FIG. 1A is a block diagram of the optical tracking system of an embodiment. The optical tracking system 1 for medical appliances includes a plurality of optical markers 11, a plurality of optical sensors 12, and a computer device 13. The optical markers 11 are arranged on one or more medical appliances; here, a plurality of medical appliances 21-24 are taken as an example. The optical markers 11 may also be arranged on a surgical target object 3. The medical appliances 21-24 and the surgical target object 3 are placed on a platform 4, and the optical sensors 12 optically sense the optical markers 11 to respectively generate a plurality of sensing signals. The computer device 13 is coupled to the optical sensors 12 to receive the sensing signals, holds a three-dimensional model 14 of the surgical situation, and adjusts the relative positions between medical appliance presentations 141-144 and a surgical target presentation 145 in the model 14 according to the sensing signals. As shown in FIG. 1D, the medical appliance presentations 141-144 and the surgical target presentation 145 represent the medical appliances 21-24 and the surgical target object 3 in the model 14. Through the optical tracking system 1, the three-dimensional model 14 of the surgical situation obtains the current positions of the medical appliances 21-24 and the surgical target object 3 and reflects them in the corresponding presentations.
There are at least two optical sensors 12, arranged above the medical appliances 21-24 and facing the optical markers 11 to track the medical appliances 21-24 in real time and obtain their positions. The optical sensors 12 may be camera-based linear detectors. For example, in FIG. 1B, which is a schematic diagram of the optical tracking system of an embodiment, four optical sensors 121-124 are mounted on the ceiling and face the optical markers 11, the medical appliances 21-24, and the surgical target object 3 on the platform 4.
For example, the medical appliance 21 is a medical probe, such as a probe for ultrasound imaging or another device capable of probing the interior of the surgical target object 3; these devices are used in real clinical practice, and the probe for ultrasound imaging is, for example, an ultrasonic transducer. The medical appliances 22-24 are surgical instruments such as needles, scalpels, and hooks, which are likewise used in real clinical practice. When used for surgical training, the medical probe may be a real clinical device or a realistic clinical replica, and the same holds for the surgical instruments. For example, in FIG. 1C, which is a schematic diagram of the optical tracking system of an embodiment, the medical appliances 21-24 and the surgical target object 3 on the platform 4 are used for surgical training, for example minimally invasive finger surgery, which is applicable to trigger finger treatment. The platform 4 and the fixtures of the medical appliances 21-24 may be made of wood. The medical appliance 21 is a realistic ultrasonic transducer (or probe), the medical appliances 22-24 include a plurality of surgical instruments, such as a dilator, a needle, and a hook blade, and the surgical target object 3 is a hand phantom. Three or four optical markers 11 are mounted on each of the medical appliances 21-24, and three or four optical markers 11 are also mounted on the surgical target object 3. For example, the computer device 13 is connected to the optical sensors 12 to track the positions of the optical markers 11 in real time. There are 17 optical markers 11 in total: 4 on or around the surgical target object 3 so as to move with it, and 13 on the medical appliances 21-24. The optical sensors 12 continuously transmit real-time information to the computer device 13. In addition, the computer device 13 uses a movement-judgment function to reduce the computational burden: if the moving distance of an optical marker 11 is smaller than a threshold value, the position of that optical marker 11 is not updated. The threshold value is, for example, 0.7 mm.
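The movement-judgment function described above can be sketched as follows. This is a minimal illustration (the function name and tuple-based positions are assumptions of this sketch, not part of the original system): a marker keeps its last stored position whenever a new reading moves less than the 0.7 mm threshold.

```python
import math

MOVE_THRESHOLD_MM = 0.7  # example threshold value from the description above

def update_marker_position(prev_pos, new_pos, threshold=MOVE_THRESHOLD_MM):
    """Keep prev_pos when the marker moved less than the threshold,
    so sub-millimeter jitter does not trigger a scene update."""
    if math.dist(prev_pos, new_pos) < threshold:
        return prev_pos
    return new_pos
```

For instance, a 0.2 mm jitter leaves the stored position unchanged, while a 1 mm move updates it.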
In FIG. 1A, the computer device 13 includes a processing core 131, a storage element 132, and a plurality of input/output interfaces 133, 134. The processing core 131 is coupled to the storage element 132 and the input/output interfaces 133, 134. The input/output interface 133 receives the detection signals generated by the optical sensors 12, the input/output interface 134 communicates with an output device 5, and the computer device 13 can output processing results to the output device 5 through the input/output interface 134. The input/output interfaces 133, 134 are, for example, peripheral transmission ports or communication ports. The output device 5 is a device capable of outputting images, such as a display, a projector, or a printer.
The storage element 132 stores program code for execution by the processing core 131. The storage element 132 includes non-volatile memory and volatile memory; the non-volatile memory is, for example, a hard disk, flash memory, a solid-state drive, or an optical disc, while the volatile memory is, for example, dynamic random access memory or static random access memory. For example, the program code is stored in the non-volatile memory, and the processing core 131 loads it from the non-volatile memory into the volatile memory and then executes it. The storage element 132 stores the program code and data of the three-dimensional model 14 of the surgical situation and of a tracking module 15, and the processing core 131 accesses the storage element 132 to execute and process them.
The processing core 131 is, for example, a processor or a controller, where a processor includes one or more cores. The processor may be a central processing unit or a graphics processor, and the processing core 131 may also be a core of a processor or of a graphics processor. Alternatively, the processing core 131 may be a processing module that includes multiple processors.
The operation of the optical tracking system includes the connection between the computer device 13 and the optical sensors 12, a pre-operation procedure, a coordinate calibration procedure of the optical tracking system, a real-time rendering procedure, and so on. The tracking module 15 represents the program code and data related to these operations; the storage element 132 of the computer device 13 stores the tracking module 15, and the processing core 131 executes the tracking module 15 to perform these operations.
After the computer device 13 performs the pre-operation procedure and the coordinate calibration of the optical tracking system, the optimized conversion parameters can be found. The computer device 13 can then set the positions of the medical appliance presentations 141-144 and the surgical target presentation 145 in the three-dimensional model 14 of the surgical situation according to the optimized conversion parameters and the sensing signals. The computer device 13 can infer the position of the medical appliance 21 inside and outside the surgical target object 3, and accordingly adjust the relative positions between the medical appliance presentations 141-144 and the surgical target presentation 145 in the model 14. In this way, the medical appliances 21-24 can be tracked in real time from the detection results of the optical sensors 12 and presented correspondingly in the three-dimensional model 14 of the surgical situation, as shown for example in FIG. 1D.
The three-dimensional model 14 of the surgical situation is a native model, which includes a model built for the surgical target object 3 as well as models built for the medical appliances 21-24. It may be constructed by a developer directly on a computer with computer graphics techniques, for example using drawing software or dedicated application development software.
The computer device 13 can output display data 135 to the output device 5. The display data 135 present 3D images of the medical appliance presentations 141-144 and the surgical target presentation 145, and the output device 5 outputs the display data 135, for example by displaying or printing. A displayed result is shown, for example, in FIG. 1D.
As shown in FIG. 2, FIG. 2 is a flowchart of the pre-operation procedure of the optical tracking system of an embodiment. The computer device 13 and the optical sensors 12 perform the pre-operation procedure, which includes step S01 and step S02, to calibrate the optical sensors 12 and rescale all the medical appliances 21-24.
Step S01 calibrates the coordinate system of the optical sensors 12. A plurality of calibration sticks carry a plurality of optical markers, and the area they enclose defines the working area. The optical sensors 12 sense the optical markers on the calibration sticks; when every optical sensor 12 detects all the optical markers, the area enclosed by the calibration sticks is the effective working area. The calibration sticks are placed manually, and the user can adjust their positions to modify the effective working area. The detection sensitivity of the optical sensors 12 can reach about 0.3 mm. Here, the coordinate system in which the detection results of the optical sensors 12 are expressed is called the tracking coordinate system.
Step S02 adjusts the scaling for the medical appliances 21-24 and the surgical target object 3. The medical appliances 21-24 are usually rigid bodies, and the coordinate calibration adopts rigid-body calibration to avoid distortion. Therefore, the medical appliances 21-24 must be rescaled to the tracking coordinate system to obtain a correct calibration result. The scaling ratio can be computed as follows:

Track_G = (1/n) Σᵢ Trackᵢ,  Mesh_G = (1/n) Σᵢ Meshᵢ

ScalingRatio = (1/n) Σᵢ ( ‖Meshᵢ − Mesh_G‖ / ‖Trackᵢ − Track_G‖ )

Track_G: the center of gravity in the tracking coordinate system
Trackᵢ: the position of an optical marker in the tracking coordinate system
Mesh_G: the center of gravity in the mesh coordinate system
Meshᵢ: the position of an optical marker in the mesh coordinate system
n: the number of optical markers

The tracking coordinate system is the coordinate system of the detection results of the optical sensors 12, and the mesh coordinate system is the coordinate system of the three-dimensional model 14 of the surgical situation. Step S02 first computes the centers of gravity in the tracking and mesh coordinate systems, and then computes, in each system, the distance from each optical marker to the center of gravity. For each marker this yields an individual ratio of the mesh coordinate system to the tracking coordinate system; summing all the individual ratios and dividing by the number of optical markers gives the overall ratio of the mesh coordinate system to the tracking coordinate system.
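The averaging described above can be sketched as follows. This is a hedged illustration: the direction of the ratio (mesh distances over tracking distances) follows the wording "ratio of the mesh coordinate system to the tracking coordinate system", and the NumPy-based helper is an assumption of this sketch, not the patented implementation.

```python
import numpy as np

def scaling_ratio(mesh_pts, track_pts):
    """Average, over all optical markers, of the per-marker ratio between
    the marker-to-centroid distance in the mesh coordinate system and the
    same distance in the tracking coordinate system."""
    mesh_pts = np.asarray(mesh_pts, dtype=float)
    track_pts = np.asarray(track_pts, dtype=float)
    mesh_g = mesh_pts.mean(axis=0)      # Mesh_G: centroid of the mesh points
    track_g = track_pts.mean(axis=0)    # Track_G: centroid of the tracking points
    d_mesh = np.linalg.norm(mesh_pts - mesh_g, axis=1)
    d_track = np.linalg.norm(track_pts - track_g, axis=1)
    return float(np.mean(d_mesh / d_track))
```

For a model whose mesh coordinates are uniformly twice the tracked coordinates, the function returns 2.0 regardless of any translation offset, since centroids cancel the offset.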
As shown in FIG. 3A, FIG. 3A is a flowchart of the coordinate calibration procedure of the optical tracking system of an embodiment. The computer device and the optical sensors perform the coordinate calibration procedure, which includes an initial calibration step S11, an optimization step S12, and a correction step S13. The initial calibration step S11 performs an initial calibration between the coordinate system of the optical sensors 12 and the coordinate system of the three-dimensional model 14 of the surgical situation to obtain initial conversion parameters; the calibration between coordinate systems is illustrated, for example, in FIG. 3B. The optimization step S12 optimizes the degrees of freedom of the initial conversion parameters to obtain optimized conversion parameters; the degrees of freedom are illustrated, for example, in FIG. 3C. The correction step S13 corrects the errors in the optimized conversion parameters caused by the placement of the optical markers.
Since there is also a transformation between the tracking coordinate system and the coordinate system of the three-dimensional model 14 of the surgical situation, the optical markers attached to the platform 4 can be used to calibrate these two coordinate systems.
The initial calibration step S11 finds the transformation matrix between the feature points of the medical appliance presentations and the optical sensors as the initial conversion parameters, using singular value decomposition (SVD), triangle coordinate registration, or linear least-squares estimation. The transformation matrix includes, for example, a covariance matrix and a rotation matrix.
For example, step S11 may use singular value decomposition to find the optimal transformation matrix between the feature points of the medical appliance presentations 141-144 and the optical sensors as the initial conversion parameters. The covariance matrix H is obtained from these feature points and can be regarded as the objective function to be optimized. With Aᵢ and Bᵢ denoting corresponding feature points in the two coordinate systems:

centroid_A = (1/n) Σᵢ Aᵢ,  centroid_B = (1/n) Σᵢ Bᵢ

H = Σᵢ (Aᵢ − centroid_A)(Bᵢ − centroid_B)ᵀ

The rotation matrix M can be found by:

[U, Σ, V] = SVD(H);  M = V·Uᵀ

After the rotation matrix M is obtained, the translation matrix T can be found by:

T = −M × centroid_A + centroid_B
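The computation above can be sketched as a standard Kabsch-style SVD registration, consistent with M = V·Uᵀ and T = −M × centroid_A + centroid_B; the reflection guard on the determinant is an added safeguard of this sketch, not stated in the original.

```python
import numpy as np

def rigid_transform_svd(A, B):
    """Find rotation M and translation T such that B_i ≈ M @ A_i + T,
    given corresponding feature points A (source) and B (target)."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    centroid_A = A.mean(axis=0)
    centroid_B = B.mean(axis=0)
    # covariance matrix H built from the centered feature points
    H = (A - centroid_A).T @ (B - centroid_B)
    U, S, Vt = np.linalg.svd(H)
    M = Vt.T @ U.T                      # M = V U^T
    if np.linalg.det(M) < 0:            # guard against an improper (reflected) rotation
        Vt[-1, :] *= -1
        M = Vt.T @ U.T
    T = -M @ centroid_A + centroid_B    # T = -M x centroid_A + centroid_B
    return M, T
```

Applied to noise-free corresponding points, the function recovers the exact rotation and translation that relate the two point sets.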
The optimization step S12 obtains a plurality of Euler angles for multiple degrees of freedom from the rotation matrix M and iteratively optimizes the multi-degree-of-freedom parameters with the Gauss-Newton algorithm to obtain the optimized conversion parameters. The multiple degrees of freedom are, for example, six; other numbers of degrees of freedom, such as nine, with the expressions modified correspondingly, are also feasible. Since the conversion result obtained from the initial calibration step S11 may not be accurate enough, performing the optimization step S12 improves the accuracy and yields a more precise conversion result.
Let γ denote the angle about the X axis, α the angle about the Y axis, and β the angle about the Z axis. The rotation about the axes of the world coordinate system can be expressed as:

M = [ m₁₁ m₁₂ m₁₃ ; m₂₁ m₂₂ m₂₃ ; m₃₁ m₃₂ m₃₃ ]

m₁₁ = cos α cos β
m₁₂ = sin γ sin α cos β − cos γ sin β
m₁₃ = cos γ sin α cos β + sin γ sin β
m₂₁ = cos α sin β
m₂₂ = sin γ sin α sin β + cos γ cos β
m₂₃ = cos γ sin α sin β − sin γ cos β
m₃₁ = −sin α
m₃₂ = sin γ cos α
m₃₃ = cos γ cos α

The rotation matrix M can be obtained from the above. In the general case, the Euler angles can be obtained by the following formulas, where tan⁻¹(y, x) denotes the two-argument arctangent:

γ = tan⁻¹(m₃₂, m₃₃)
α = tan⁻¹(−m₃₁, √(m₃₂² + m₃₃²))
β = tan⁻¹(sin γ · m₁₃ − cos γ · m₁₂, cos γ · m₂₂ − sin γ · m₂₃)
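The Euler-angle extraction above can be sketched directly from those formulas; the function name is illustrative, and tan⁻¹(y, x) is implemented with the two-argument atan2:

```python
import math

def euler_angles_from_rotation(m):
    """Recover (gamma, alpha, beta) -- rotations about X, Y, Z -- from a 3x3
    rotation matrix m (row-major nested lists) using the formulas above."""
    gamma = math.atan2(m[2][1], m[2][2])                        # from m32, m33
    alpha = math.atan2(-m[2][0], math.hypot(m[2][1], m[2][2]))  # from m31
    beta = math.atan2(
        math.sin(gamma) * m[0][2] - math.cos(gamma) * m[0][1],
        math.cos(gamma) * m[1][1] - math.sin(gamma) * m[1][2],
    )
    return gamma, alpha, beta
```

Building a matrix from known angles with the mᵢⱼ expressions above and feeding it to this function returns the same angles, which is a convenient self-check of the formulas.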
After the Euler angles are taken out, the rotation with respect to the world coordinate system is assumed to be orthogonal. Since the six-degree-of-freedom parameters have been obtained, these parameters can be iteratively optimized with the Gauss-Newton method to obtain the optimized conversion parameters. The objective function to be minimized is:

E(p̃) = Σᵢ₌₁ⁿ bᵢ²

where b denotes the least-square errors between the reference target points and the current points, n is the number of feature points, and p̃ = (t_x, t_y, t_z, γ, α, β) is the transformation parameter vector, which has translation and rotation parameters. By iterating with the Gauss-Newton method, the transformation parameters p̃ are adjusted to find the optimal values. The update function of the transformation parameters p̃ is:

p̃⁽ᵏ⁺¹⁾ = p̃⁽ᵏ⁾ + Δ

where Δ is obtained from the Jacobian matrix J of the objective function:

Δ = (JᵀJ)⁻¹Jᵀb,  J = ∂b/∂p̃

The stopping condition is defined as:

|E(p̃⁽ᵏ⁺¹⁾) − E(p̃⁽ᵏ⁾)| < ε
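The Gauss-Newton iteration can be sketched generically as below. This is a minimal illustration with a numeric Jacobian and an error-change stopping rule; the residual convention (current minus target, hence the minus sign in the update) and all names are assumptions of this sketch rather than the patented formulation.

```python
import numpy as np

def gauss_newton(residual, p0, max_iter=50, eps=1e-12, h=1e-7):
    """Minimize sum(residual(p)**2) by Gauss-Newton iteration.
    residual(p) must return a 1-D array of per-point errors."""
    p = np.asarray(p0, dtype=float)
    prev_err = float(np.sum(residual(p) ** 2))
    for _ in range(max_iter):
        r = residual(p)
        # numeric Jacobian: J[i, j] = d r_i / d p_j
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = h
            J[:, j] = (residual(p + dp) - r) / h
        delta = np.linalg.solve(J.T @ J, J.T @ r)
        p = p - delta
        err = float(np.sum(residual(p) ** 2))
        if abs(prev_err - err) < eps:  # stop when the error change is tiny
            break
        prev_err = err
    return p
```

On a linear least-squares problem such as fitting y = a·x + c, the iteration converges essentially in one step, which makes it easy to sanity-check the update rule.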
The correction step S13 corrects the errors in the optimized conversion parameters caused by the placement of the optical markers. The correction step S13 includes a judgment step S131 and an adjustment step S132.
In step S13, a source-feature-point correction procedure is used to overcome the error caused by manually selecting feature points, namely the error between the feature points of the medical appliance presentations 141-144 and the surgical target presentation 145 in the three-dimensional model 14 of the surgical situation and the feature points of the medical appliances 21-24 and the surgical target object 3; these feature points are selected by the user. The feature points of the medical appliances 21-24 and the surgical target object 3 may include the points where the optical markers 11 are placed. Since the optimal transformation is obtained from step S12, the target position transformed from the source points approaches the reference target points V_T after the nth iteration:

V_T ≈ T⁽ⁿ⁾ · V_S⁽ⁿ⁾

T⁽ⁿ⁾: the transformation matrix from the source points to the target points at the nth iteration
V_S⁽ⁿ⁾: the source points at the nth iteration
T⁽ⁿ⁾ · V_S⁽ⁿ⁾: the target points after the nth-iteration transformation

In the source-point correction step, the inverse of the transformation matrix is first computed, and the new source points are then obtained from the reference target points:

V_S⁽ⁿ⁺¹⁾ = (T⁽ⁿ⁾)⁻¹ · V_T

(T⁽ⁿ⁾)⁻¹: the inverse of the transformation matrix
V_S⁽ⁿ⁺¹⁾: the new source points after the inverse transformation at the nth iteration

Assuming the exact transformation between the two coordinate systems as above, after n iterations the new source points will be at the ideal positions of the original source points. However, the original source points and the ideal source points are somewhat shifted. To correct the original source points so as to minimize the manual selection error, each iteration may set a constraint step size c₁ and a constraint region box size c₂, which may be a constant, to limit the distance the original source points move. The correction is:

V_S ← V_S⁽ⁿ⁺¹⁾,  if ‖V_S⁽ⁿ⁺¹⁾ − V_S⁽ⁿ⁾‖ < c₁
V_S ← V_S⁽ⁿ⁾ + c₁ · (V_S⁽ⁿ⁺¹⁾ − V_S⁽ⁿ⁾) / ‖V_S⁽ⁿ⁺¹⁾ − V_S⁽ⁿ⁾‖,  otherwise

In each iteration, if the distance between the two points is smaller than c₁, the source point moves to the new point; otherwise the source point only moves toward the new point by the length c₁, with the overall displacement confined to the constraint region box of size c₂. The iteration stops when the following condition holds, where V_T is the target point transformed from the source point V_S:

‖T⁽ⁿ⁾ · V_S⁽ⁿ⁾ − V_T‖ < ε
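One constrained correction step as described above can be sketched as follows; applying the box constraint as a coordinate-wise clamp of half-size c₂ around the original source point is one possible reading of the description, and the names are assumptions of this sketch.

```python
import numpy as np

def correct_source_point(v_orig, v_s, v_new, c1, c2):
    """Move the source point v_s toward the new point v_new by at most the
    constraint step size c1, and keep the result inside a box of half-size
    c2 centered on the original source point v_orig."""
    v_orig = np.asarray(v_orig, dtype=float)
    v_s = np.asarray(v_s, dtype=float)
    v_new = np.asarray(v_new, dtype=float)
    d = v_new - v_s
    dist = np.linalg.norm(d)
    if dist < c1:
        stepped = v_new                  # close enough: jump to the new point
    else:
        stepped = v_s + c1 * d / dist    # otherwise move only by the length c1
    return np.clip(stepped, v_orig - c2, v_orig + c2)
```

A far-away candidate moves only one step length c₁ per call, while a nearby candidate is adopted directly, and the box constraint caps the total drift from the original selection.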
Through the calibration of the foregoing three steps, the coordinate positions of the three-dimensional model 14 of the surgical situation can be accurately transformed to correspond to the optical markers 11 in the tracking coordinate system, and vice versa. Thus, based on the detection results of the optical sensors 12, the medical appliances 21-24 and the surgical target object 3 can be tracked in real time, and after the foregoing processing their positions in the tracking coordinate system are accurately presented in the three-dimensional model 14 of the surgical situation by the corresponding medical appliance presentations 141-144 and surgical target presentation 145. As the medical appliances 21-24 and the surgical target object 3 actually move, the presentations 141-144 and 145 follow their movement in the model 14 in real time.
As shown in FIG. 4, FIG. 4 is a block diagram of the training system for medical appliance operation of an embodiment. The training system for medical appliance operation (hereinafter, the training system) can realistically simulate a surgical training environment. The training system includes an optical tracking system 1a, one or more medical appliances 21-24, and the surgical target object 3. The optical tracking system 1a includes a plurality of optical markers 11, a plurality of optical sensors 12, and the computer device 13. The optical markers 11 are arranged on the medical appliances 21-24 and the surgical target object 3, which are placed on the platform 4. For the medical appliances 21-24 and the surgical target object 3, the medical appliance presentations 141-144 and the surgical target presentation 145 are correspondingly presented in a three-dimensional model 14a of the surgical situation. The medical appliances 21-24 include a medical probe and surgical instruments; for example, the medical appliance 21 is a medical probe, and the medical appliances 22-24 are surgical instruments. The medical appliance presentations 141-144 likewise include a medical probe presentation and surgical instrument presentations; for example, the presentation 141 is the medical probe presentation, and the presentations 142-144 are the surgical instrument presentations. The storage element 132 stores the program code and data of the three-dimensional model 14a of the surgical situation and of the tracking module 15, and the processing core 131 accesses the storage element 132 to execute and process them. For the implementation and variations of elements with reference numerals corresponding to or identical with those in the preceding paragraphs and drawings, refer to the earlier description, which is not repeated here.
The surgical target object 3 is an artificial limb, for example an artificial upper limb, an artificial hand (hand phantom), or an artificial palm, finger, arm, upper arm, forearm, elbow, foot, toe, ankle, calf, thigh, knee, torso, neck, head, shoulder, chest, abdomen, waist, hip, or another artificial body part.
In this embodiment, the training system is described using minimally invasive finger surgery training as an example: the surgical target object 3 is a prosthetic hand, the surgery is, for example, a trigger finger treatment procedure, the medical probe 21 is a realistic mock ultrasound transducer (or probe), and the surgical instruments 22-24 are a needle, a dilator, and a hook blade. In other embodiments, surgical target objects 3 of other body parts may be used for other kinds of surgical training.
The storage element 132 also stores the program code and data of a physical medical image three-dimensional model 14b, an artificial medical image three-dimensional model 14c, and a training module 16; the processing core 131 can access the storage element 132 to execute and process them. The training module 16 conducts the surgical training procedure described below and handles the processing, integration, and computation of the related data.
The image models used for surgical training are established and imported into the system before the training procedure begins. Taking minimally invasive finger surgery training as an example, the image models cover the finger bones (metacarpal and proximal phalanx) and the flexor tendon. These image models are illustrated in FIGS. 5A to 5C: FIG. 5A is a schematic diagram of the surgical situation three-dimensional model of the embodiment, FIG. 5B is a schematic diagram of the physical medical image three-dimensional model of the embodiment, and FIG. 5C is a schematic diagram of the artificial medical image three-dimensional model of the embodiment. The content of these three-dimensional models can be output or printed through the output device 5.
The physical medical image three-dimensional model 14b is a three-dimensional model built from medical images of the surgical target object 3, such as the model shown in FIG. 5B. The medical images are, for example, computed tomography images: the surgical target object 3 is actually scanned by computed tomography, and the resulting images are used to build the physical medical image three-dimensional model 14b.
The artificial medical image three-dimensional model 14c contains an artificial medical image model built for the surgical target object 3, such as the model shown in FIG. 5C. For example, the artificial medical image model is a three-dimensional model of artificial ultrasound images. Because the surgical target object 3 is not a real living body, computed tomography can capture its physical structure, but other medical imaging modalities such as ultrasound cannot obtain effective or meaningful images directly from it. The ultrasound image model of the surgical target object 3 must therefore be generated artificially. Selecting an appropriate position or plane from the three-dimensional model of artificial ultrasound images yields a two-dimensional artificial ultrasound image.
The computer device 13 generates a medical image 136 according to the surgical situation three-dimensional model 14a and a medical image model, the medical image model being, for example, the physical medical image three-dimensional model 14b or the artificial medical image three-dimensional model 14c. For example, the computer device 13 generates the medical image 136 from the surgical situation three-dimensional model 14a and the artificial medical image three-dimensional model 14c, the medical image 136 being a two-dimensional artificial ultrasound image. The computer device 13 scores the detection object located with the medical probe presentation 141 (the detection object being, for example, a specific surgical site) and the operation of the surgical instrument presentations 142-144.
FIGS. 6A to 6D are schematic diagrams of the direction vectors of the medical appliances of the embodiment. The direction vectors of the medical appliance presentations 141-144 corresponding to the medical appliances 21-24 are rendered in real time. For the medical appliance presentation 141, the direction vector of the medical probe can be obtained by computing the centroid of its optical markers, projecting another point onto the x-z plane, and computing the vector from the centroid to the projected point. The other medical appliance presentations 142-144 are simpler: their direction vectors can be computed from the tip point of the model.
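The centroid-and-projection step described above can be sketched as follows. This is a minimal illustration, not the embodiment's implementation; the function names `centroid` and `probe_direction`, and the choice of `ref_point` as the second point being projected, are assumptions for the sketch.

```python
import math

def centroid(points):
    # Arithmetic mean of the optical-marker positions attached to the probe.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def probe_direction(markers, ref_point):
    # Project the reference point onto the x-z plane (y = 0), then take
    # the normalized vector from the marker centroid to that projection.
    c = centroid(markers)
    proj = (ref_point[0], 0.0, ref_point[2])
    v = [proj[i] - c[i] for i in range(3)]
    norm = math.sqrt(sum(x * x for x in v))
    return tuple(x / norm for x in v)
```

For an instrument presentation whose model has a tip point, the same normalization would instead be applied to the vector from the marker centroid to the tip.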
To reduce the system load and avoid latency, the amount of rendering can be reduced; for example, the training system may render only the model of the region where the surgical target presentation 145 is located rather than all of the medical appliance presentations 141-144.
In addition, in the training system, the transparency of the skin model can be adjusted to observe the internal anatomy of the surgical target presentation 145 and to view ultrasound or computed tomography image slices of different cross sections, such as the horizontal (axial) plane, the sagittal plane, or the coronal plane, thereby assisting the operator during the procedure. Bounding boxes are constructed for each model for collision detection, so the surgical training system can determine which medical appliances have contacted the tendon, bone, and/or skin, and can determine when to start scoring.
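A bounding-box collision test of the kind described above can be sketched as an axis-aligned overlap check. The `aabb_overlap` name and the `((min), (max))` box layout are illustrative assumptions, not the embodiment's actual data structures.

```python
def aabb_overlap(box_a, box_b):
    # Each box is ((xmin, ymin, zmin), (xmax, ymax, zmax)); two
    # axis-aligned boxes collide exactly when their extents
    # intersect on every one of the three axes.
    (lo_a, hi_a), (lo_b, hi_b) = box_a, box_b
    return all(lo_a[i] <= hi_b[i] and lo_b[i] <= hi_a[i] for i in range(3))
```

In the training system, a true result between an instrument's box and the tendon, bone, or skin box would mark the contact event and could trigger the start of scoring.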
Before the calibration procedure, the optical markers 11 attached to the surgical target object 3 must be clearly visible to, or detectable by, the optical sensors 12; if an optical marker 11 is occluded, the accuracy of detecting its position decreases. At least two optical sensors 12 must see all of the optical markers simultaneously. The calibration procedure is as described above, for example the three-stage calibration, which accurately aligns the two coordinate systems. The calibration error, the iteration count, and the final positions of the optical markers can be displayed in a window of the training system, for example through the output device 5. This accuracy and reliability information can remind the user that the system needs recalibration when the error is too large. After the coordinate systems are calibrated, the three-dimensional model is rendered at a rate of 0.1 times per second, and the rendering result can be output to the output device 5 for display or printing.
Once the training system is ready, the user can start the surgical training procedure. In the training procedure, the medical probe is first used to locate the surgical site; after the site is found, it is anesthetized. A path from the outside to the surgical site is then dilated, and after dilation, the surgical blade is advanced along this path to the surgical site.
FIGS. 7A to 7D are schematic diagrams of the training process of the training system of the embodiment. The surgical training procedure comprises four stages and is illustrated with minimally invasive finger surgery training as an example.
As shown in FIG. 7A, in the first stage, the medical probe 21 is used to locate the surgical site, confirming that the surgical site is within the training system. The surgical site is, for example, the pulley region, which can be identified from the position of the metacarpophalangeal joint and the anatomy of the finger bones and tendons; the key point of this stage is whether the first annular pulley (A1 pulley) is found. In addition, if the trainee holds the medical probe still for more than three seconds, the position is taken as decided and the training system automatically proceeds to the scoring of the next stage. During surgical training, the medical probe 21 is placed on the skin and kept in contact with it over the metacarpophalangeal (MCP) joint along the midline of the flexor tendon.
As shown in FIG. 7B, in the second stage, the surgical instrument 22, for example a needle, is used to open a path to the surgical area. The needle is inserted to inject local anesthetic and dilate the space; needle insertion can be performed under the guidance of continuous ultrasound images. These continuous ultrasound images are artificial ultrasound images, such as the aforementioned medical image 136. Because regional anesthesia is difficult to simulate with a prosthetic hand, anesthesia itself is not specifically simulated.
As shown in FIG. 7C, in the third stage, the surgical instrument 23, for example a dilator, is pushed in along the same path as the surgical instrument 22 in the second stage to create the trajectory required by the hook blade in the next stage. In addition, if the trainee holds the surgical instrument 23 still for more than three seconds, the position is taken as decided and the training system automatically proceeds to the scoring of the next stage.
As shown in FIG. 7D, in the fourth stage, the surgical instrument 24, for example a hook blade, is inserted along the trajectory created in the third stage and used to divide the pulley. The key points of the third and fourth stages are similar: during surgical training, the vessels and nerves near both sides of the flexor tendon can easily be cut by mistake. The emphasis of these stages is therefore not only on avoiding contact with the tendon, nerves, and vessels, but also on opening a trajectory that extends beyond the first annular pulley by at least 2 mm, leaving space for the hook blade to cut the pulley region.
To score the user's operation, the operation in each training stage must be quantified. First, the surgical area during the operation is defined by the finger anatomy shown in FIG. 8A, which can be divided by upper and lower boundaries. Because the tissue over the tendon is mostly fat and does not cause pain, the upper boundary of the surgical area can be defined by the palmar skin, while the lower boundary is defined by the tendon. The proximal depth boundary lies 10 mm (the average length of the first annular pulley) from the metacarpal head-neck junction. The distal depth boundary is unimportant because it is unrelated to injury of the tendon, vessels, or nerves. The left and right boundaries are defined by the width of the tendon, with the nerves and vessels lying on both sides of the tendon.
After the surgical area is defined, the scoring for each training stage is as follows. In the first stage (FIG. 7A), the focus of training is finding the target, for example the structure to be divided; for the finger example, this is the first annular pulley (A1 pulley). In a real procedure, to obtain good ultrasound image quality, the angle between the medical probe and the bone's principal axis should be close to perpendicular, with an allowable deviation of ±30°. The first-stage score is therefore computed as follows:
first-stage score = target-finding score × its weight + probe-angle score × its weight
In the second stage (FIG. 7B), the focus of training is using the needle to open a path to the surgical area. Because the pulley wraps around the tendon, the distance between the bone's principal axis and the needle should be small. The second-stage score is therefore computed as follows:
second-stage score = opening score × its weight + needle-angle score × its weight + distance-to-bone-axis score × its weight
In the third stage, the focus of training is inserting the dilator, which enlarges the surgical area, into the finger. During the operation, the trajectory of the dilator must stay close to the bone's principal axis. To avoid injuring the tendon, vessels, and nerves, the dilator must not leave the previously defined boundary of the surgical area. To dilate a good trajectory through the surgical area, the dilator should be approximately parallel to the bone's principal axis, with an allowable deviation of ±30°. To leave space for the hook blade to cut the first annular pulley, the dilator must pass beyond the first annular pulley by at least 2 mm. The third-stage score is computed as follows:
third-stage score = beyond-pulley score × its weight + dilator-angle score × its weight + distance-to-bone-axis score × its weight + staying-within-surgical-area score × its weight
In the fourth stage, the scoring conditions are similar to those of the third stage, except that the hook blade must be rotated 90°; this rule is added to the scoring of this stage. The score is computed as follows:
fourth-stage score = beyond-pulley score × its weight + hook-blade-angle score × its weight + distance-to-bone-axis score × its weight + staying-within-surgical-area score × its weight + hook-blade-rotation score × its weight
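The four stage formulas are all weighted sums of criterion scores, so one helper covers them. This is a minimal sketch: the assertion that the weights sum to 1, and the linear falloff of the angle criterion to zero at the edge of the ±30° tolerance, are assumptions for illustration; the embodiment does not specify the weights or the falloff shape.

```python
def weighted_score(scores, weights):
    # Each stage score is the weighted sum of its criterion scores.
    # Weights are assumed to sum to 1 so the result stays in [0, 1].
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(s * w for s, w in zip(scores, weights))

def angle_score(angle_deg, target_deg, tolerance_deg=30.0):
    # Full marks at the target angle (90 deg for the probe, 0 deg for the
    # dilator), falling linearly to zero at the +/-30 deg tolerance edge.
    deviation = abs(angle_deg - target_deg)
    return max(0.0, 1.0 - deviation / tolerance_deg)
```

For example, the first-stage score would be `weighted_score([found_score, angle_score(a, 90.0)], [w1, w2])` for some chosen weights `w1`, `w2`.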
To establish scoring criteria for the user's surgical operation, the angle between the bone's principal axis and the medical appliance must be defined; for example, it is computed in the same way as the angle between the palm normal and the direction vector of the medical appliance. First, the bone's principal axis is found. As shown in FIG. 8B, applying principal component analysis (PCA) to the bone in the computed tomography images yields the bone's three axes, of which the longest is taken as the principal axis. However, the bone's shape in the computed tomography images is uneven, so the axes found by PCA and the palm normal are not perpendicular to each other. Therefore, as shown in FIG. 8C, instead of applying PCA to the bone, PCA is applied to the skin over the bone to find the palm normal. The angle between the bone's principal axis and the medical appliance can then be computed accordingly.
After computing the angle between the bone's principal axis and the appliance, the distance between them must also be computed. This distance computation is similar to computing the distance between the tip of the medical appliance and a plane, where the plane contains the bone's principal-axis vector and the palm normal; the computation is illustrated in FIG. 8D. The normal of this plane can be obtained from the cross product of the palm-normal vector D2 and the bone principal-axis vector D1. Since both vectors are available from the previous computation, the distance between the bone's principal axis and the appliance is easily obtained.
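The cross-product construction above reduces to a standard point-to-plane distance. A minimal sketch, with `plane_point` (a point known to lie on the plane, e.g. a point on the bone axis) introduced as an assumption:

```python
import math

def cross(a, b):
    # Cross product of two 3-D vectors.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def tip_to_plane_distance(tip, plane_point, bone_axis_d1, palm_normal_d2):
    # The plane is spanned by the bone principal axis D1 and the palm
    # normal D2, so its normal is the cross product D2 x D1; the distance
    # is the projection of (tip - plane_point) onto that unit normal.
    n = cross(palm_normal_d2, bone_axis_d1)
    norm = math.sqrt(sum(x * x for x in n))
    n = [x / norm for x in n]
    d = [tip[i] - plane_point[i] for i in range(3)]
    return abs(sum(d[i] * n[i] for i in range(3)))
```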
As shown in FIG. 8E, FIG. 8E is a schematic diagram of the artificial medical image of the embodiment, in which the tendon segment and the skin segment are marked with dotted lines. The tendon and skin segments are used to construct the models and their bounding boxes; the bounding boxes are used for collision detection, and the pulley region can be defined in the static model. Using collision detection, the surgical area can be determined and it can be judged whether a medical appliance crosses the pulley region. The first annular pulley, located proximal to the metacarpal head-neck (MCP head-neck) junction, has an average length of about 10 mm and an average thickness of about 0.3 mm, and it wraps around the tendon.
FIG. 9A is a flowchart of generating an artificial medical image according to an embodiment. As shown in FIG. 9A, the generation procedure includes steps S21 to S24.
Step S21 extracts a first set of bone-skin features from cross-sectional image data of the artificial limb. The artificial limb is the aforementioned surgical target object 3, which serves as a limb for minimally invasive surgery training, for example a prosthetic hand. The cross-sectional image data contains multiple cross-sectional images, each being a computed tomography image or a physical cross-section image.
Step S22 extracts a second set of bone-skin features from medical image data. The medical image data is a volumetric ultrasound image, such as the one in FIG. 9B, built from multiple planar ultrasound images. The medical image data is medical imagery captured from a real living body, not from the artificial limb. The first and second sets of bone-skin features each contain multiple bone feature points and multiple skin feature points.
Step S23 establishes feature registration data from the first and second sets of bone-skin features. Step S23 includes: taking the first set of bone-skin features as the reference target; and finding a correlation function serving as the spatial registration data, where the correlation function aligns the second set of bone-skin features to the reference target without the perturbation caused by the differences between the two feature sets. The correlation function is found by formulating a maximum likelihood estimation problem and solving it with the expectation-maximization (EM) algorithm.
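The EM formulation mentioned here resembles Gaussian-mixture point-set registration, whose E-step computes soft correspondences between the two feature sets. The sketch below shows only that E-step, simplified (no outlier component, fixed variance); it is an illustration of the general technique, not the embodiment's algorithm.

```python
import math

def e_step_posteriors(src, tgt, sigma2):
    # E-step of a Gaussian-mixture point-set registration: posterior
    # probability that target feature point n was generated by source
    # feature point m, under isotropic Gaussians of variance sigma2.
    post = []
    for t in tgt:
        lik = [math.exp(-sum((t[k] - s[k]) ** 2 for k in range(len(t)))
                        / (2.0 * sigma2)) for s in src]
        z = sum(lik)
        post.append([l / z for l in lik])  # rows sum to 1
    return post
```

The M-step (not shown) would re-estimate the transformation and `sigma2` from these posteriors, and the two steps would alternate until convergence.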
Step S24 deforms the medical image data according to the feature registration data to generate artificial medical image data suited to the artificial limb. The artificial medical image data is, for example, a volumetric ultrasound image that retains the features of the living body in the original ultrasound image. Step S24 includes: generating a deformation function from the medical image data and the feature registration data; applying a grid to the medical image data to obtain multiple grid-point positions; deforming the grid-point positions according to the deformation function; and, based on the deformed grid-point positions, filling in the corresponding pixels from the medical image data to produce a deformed image, which serves as the artificial medical image data. The deformation function is generated by moving least squares (MLS), and the deformed image is produced by an affine transform.
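A common moving-least-squares deformation, of the kind step S24 names, fits a separate affine transform at every query point with weights that grow near the control points. The 2-D sketch below follows the standard MLS-with-affine-transforms formulation; it is an assumption-laden illustration (2-D instead of 3-D, inverse-square weights), not the embodiment's code.

```python
def mls_affine_deform(v, src, dst, alpha=1.0):
    # Moving-least-squares affine deformation (2-D): control points src
    # map to dst; query point v gets its own weighted affine fit.
    w = []
    for p, q in zip(src, dst):
        d2 = (v[0] - p[0]) ** 2 + (v[1] - p[1]) ** 2
        if d2 == 0.0:
            return q  # v sits exactly on a control point
        w.append(d2 ** -alpha)
    sw = sum(w)
    ps = [sum(wi * p[k] for wi, p in zip(w, src)) / sw for k in range(2)]
    qs = [sum(wi * q[k] for wi, q in zip(w, dst)) / sw for k in range(2)]
    ph = [(p[0] - ps[0], p[1] - ps[1]) for p in src]
    qh = [(q[0] - qs[0], q[1] - qs[1]) for q in dst]
    # Weighted moment matrix M = sum w_i ph_i^T ph_i (symmetric 2x2)
    m00 = sum(wi * a[0] * a[0] for wi, a in zip(w, ph))
    m01 = sum(wi * a[0] * a[1] for wi, a in zip(w, ph))
    m11 = sum(wi * a[1] * a[1] for wi, a in zip(w, ph))
    # B = sum w_i ph_i^T qh_i (2x2)
    b = [[sum(wi * a[r] * c[k] for wi, a, c in zip(w, ph, qh))
          for k in range(2)] for r in range(2)]
    det = m00 * m11 - m01 * m01
    inv = [[m11 / det, -m01 / det], [-m01 / det, m00 / det]]
    # Affine matrix A = M^-1 B, applied to the centred query point
    a_mat = [[inv[r][0] * b[0][k] + inv[r][1] * b[1][k] for k in range(2)]
             for r in range(2)]
    d = (v[0] - ps[0], v[1] - ps[1])
    return (d[0] * a_mat[0][0] + d[1] * a_mat[1][0] + qs[0],
            d[0] * a_mat[0][1] + d[1] * a_mat[1][1] + qs[1])
```

Applying this function to each grid point, then resampling pixels at the deformed positions, corresponds to the grid-deform-and-fill sequence described in step S24.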
Through steps S21 to S24, image features are extracted from the real-person ultrasound images and the computed tomography images of the prosthetic hand, image registration yields the corresponding point pairs for the deformation, and deformation then produces, based on the prosthetic hand, images close to real-person ultrasound that still retain the features of the original real-person ultrasound images. When the artificial medical image data is a volumetric ultrasound image, a planar ultrasound image of a particular position or section can be generated from the mapped position or section of the volumetric ultrasound image.
As shown in FIGS. 10A and 10B, FIGS. 10A and 10B are schematic diagrams of the calibration between the prosthetic-hand model and the ultrasound volume of the embodiment. The physical medical image three-dimensional model 14b and the artificial medical image three-dimensional model 14c are related to each other: because the prosthetic-hand model is constructed from the computed tomography volume, the positional relationship between the computed tomography volume and the ultrasound volume can be used directly to associate the prosthetic hand with the ultrasound volume.
As shown in FIGS. 10C and 10D, FIG. 10C is a schematic diagram of the ultrasound volume and the collision detection of the embodiment, and FIG. 10D is a schematic diagram of the artificial ultrasound image of the embodiment. The training system must simulate a real ultrasound transducer (or probe), producing section image slices from the ultrasound volume. Whatever the angle of the transducer (or probe), the simulated transducer (or probe) must render the corresponding image section. In implementation, the angle between the medical probe 21 and the ultrasound volume is detected first; collision detection of the slice plane is then performed based on the width of the medical probe 21 and the ultrasound volume, which is used to find the values of the image section being rendered, yielding an image such as FIG. 10D. For example, when the artificial medical image data is a volumetric ultrasound image, the volumetric ultrasound image has a corresponding ultrasound volume, and the content of the image section to be rendered by the simulated transducer (or probe) is generated from the mapped position in the volumetric ultrasound image.
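Extracting an oblique slice from the ultrasound volume, as described above, amounts to resampling the voxel grid along the plane defined by the probe's pose. A minimal sketch with nearest-neighbour sampling (a real system would use trilinear interpolation); the function name, the `volume[x][y][z]` layout, and the zero background value are assumptions.

```python
def sample_slice(volume, origin, u_dir, v_dir, n_u, n_v):
    # Nearest-neighbour resampling of an oblique plane through a voxel
    # grid volume[x][y][z]; output pixel (i, j) samples the 3-D point
    # origin + i*u_dir + j*v_dir, and positions outside the volume
    # read as 0 (background).
    nx, ny, nz = len(volume), len(volume[0]), len(volume[0][0])
    image = []
    for j in range(n_v):
        row = []
        for i in range(n_u):
            p = [origin[k] + i * u_dir[k] + j * v_dir[k] for k in range(3)]
            x, y, z = (int(round(c)) for c in p)
            inside = 0 <= x < nx and 0 <= y < ny and 0 <= z < nz
            row.append(volume[x][y][z] if inside else 0)
        image.append(row)
    return image
```

Here `origin` would come from the tracked probe position, `u_dir` from the probe width direction, and `v_dir` from the imaging depth direction, all obtained from the optical tracking result.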
The above description is merely illustrative and not restrictive. Any equivalent modifications or changes made without departing from the spirit and scope of the present invention shall be included in the appended claims.

Claims (20)

  1. An optical tracking system for a medical appliance, comprising:
    a plurality of optical markers arranged on the medical appliance;
    a plurality of optical sensors optically sensing the optical markers to respectively generate a plurality of sensing signals; and
    a computer device, coupled to the optical sensors to receive the sensing signals and having a surgical situation three-dimensional model, the computer device adjusting a relative position between a medical appliance presentation and a surgical target presentation in the surgical situation three-dimensional model according to the sensing signals.
  2. The system according to claim 1, wherein the optical sensors number at least two and are arranged above the medical appliance facing the optical markers.
  3. The system according to claim 1, wherein the computer device and the optical sensors perform a preliminary procedure comprising:
    calibrating the coordinate systems of the optical sensors; and
    adjusting the scaling for the medical appliance and a surgical target object.
  4. The system according to claim 1, wherein the computer device and the optical sensors perform a coordinate calibration procedure comprising:
    an initial calibration step, performing an initial calibration between the coordinate system of the optical sensors and the coordinate system of the surgical situation three-dimensional model to obtain initial transformation parameters;
    an optimization step, optimizing the degrees of freedom of the initial transformation parameters to obtain optimized transformation parameters; and
    a correction step, correcting errors in the optimized transformation parameters attributable to the placement of the optical markers.
  5. The system according to claim 4, wherein the initial calibration step uses singular value decomposition, triangular coordinate registration, or linear least-mean-square estimation.
  6. The system according to claim 4, wherein the initial calibration step uses singular value decomposition to find, as the initial transformation parameters, a transformation matrix between feature points of the medical appliance presentation and the optical sensors, the transformation matrix comprising a covariance matrix and a rotation matrix, and wherein the optimization step obtains a plurality of Euler angles of multiple degrees of freedom from the rotation matrix and iteratively optimizes the multi-degree-of-freedom parameters by the Gauss-Newton method to obtain the optimized transformation parameters.
  7. The system according to claim 4, wherein the computer device sets the positions of the medical appliance presentation and the surgical target presentation in the surgical situation three-dimensional model according to the optimized transformation parameters and the sensing signals.
  8. The system according to claim 4, wherein the correction step uses an inverse transformation and the sensing signals to correct the positions of the medical appliance presentation and the surgical target presentation in the surgical situation three-dimensional model.
  9. The system according to claim 1, wherein the computer device outputs display data for presenting 3D images of the medical appliance presentation and the surgical target presentation.
  10. The system according to claim 1, wherein the computer device generates a medical image according to the three-dimensional model of the surgical situation and a medical image model.
  11. The system according to claim 10, wherein the surgical target object is an artificial limb, and the medical image is an artificial medical image of the surgical target object.
  12. The system according to claim 1, wherein the computer device deduces the position of the medical appliance inside and outside the surgical target object, and accordingly adjusts the relative position between the medical appliance presentation and the surgical target presentation in the three-dimensional model of the surgical situation.
  13. A training system for medical appliance operation, comprising:
    a medical appliance; and
    the optical tracking system according to any one of claims 1 to 12, used with the medical appliance.
  14. The system according to claim 13, wherein the medical appliance includes a medical probe and a surgical instrument, and the medical appliance presentation includes a medical probe presentation and a surgical instrument presentation.
  15. The system according to claim 14, wherein the computer device produces a score based on the detected objects found with the medical probe presentation and on the operation of the surgical instrument presentation.
  16. A calibration method for an optical tracking system for a medical appliance, comprising:
    a sensing step of optically sensing, with a plurality of optical sensors of the optical tracking system, a plurality of optical markers of the optical tracking system arranged on the medical appliance, to respectively generate a plurality of sensing signals;
    an initial calibration step of performing, according to the sensing signals, an initial calibration between the coordinate system of the optical sensors and the coordinate system of a three-dimensional model of the surgical situation, to obtain initial conversion parameters;
    an optimization step of optimizing the degrees of freedom of the initial conversion parameters to obtain optimized conversion parameters; and
    a correction step of correcting errors in the optimized conversion parameters caused by placement errors of the optical markers.
  17. The method according to claim 16, further comprising a preparatory procedure, the preparatory procedure comprising:
    calibrating the coordinate system of the optical sensors; and
    adjusting a scaling ratio for the medical appliance and a surgical target object.
  18. The method according to claim 16, wherein the initial calibration step uses singular value decomposition, triangular coordinate alignment, or linear least-mean-square estimation.
  19. The method according to claim 16, wherein
    the initial calibration step uses singular value decomposition to find, as the initial conversion parameters, a transformation matrix between the feature points of the medical appliance presentation of the three-dimensional model of the surgical situation and the optical sensors, the transformation matrix including a covariance matrix and a rotation matrix, and
    the optimization step obtains a plurality of Euler angles of multiple degrees of freedom from the rotation matrix and iteratively optimizes the multi-degree-of-freedom parameters by the Gauss-Newton method to obtain the optimized conversion parameters.
  20. The method according to claim 16, wherein
    the positions of the medical appliance presentation and a surgical target presentation in the three-dimensional model of the surgical situation are set according to the optimized conversion parameters and the sensing signals, and
    the correction step uses an inverse transformation and the sensing signals to correct the positions of the medical appliance presentation and the surgical target presentation in the three-dimensional model of the surgical situation.
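Claims 5, 6, 18, and 19 describe an initial calibration that uses singular value decomposition to find the transformation between the optical-sensor coordinate system and the coordinate system of the three-dimensional model. A minimal sketch of that kind of SVD-based rigid registration is the Kabsch algorithm; the function name, point sets, and API here are illustrative, not taken from the patent:

```python
import numpy as np

def rigid_registration(model_pts, sensor_pts):
    """Estimate a rotation R and translation t such that
    model_pts ≈ sensor_pts @ R.T + t, using singular value decomposition."""
    # Center both point sets on their centroids.
    mu_m = model_pts.mean(axis=0)
    mu_s = sensor_pts.mean(axis=0)
    M = model_pts - mu_m
    S = sensor_pts - mu_s
    # Cross-covariance matrix between the centered point sets.
    H = S.T @ M
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = mu_m - R @ mu_s
    return R, t
```

With at least three non-collinear marker points measured in both frames, this recovers the pose in closed form; the patent's claims would then pass such a result on as the initial conversion parameters.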
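Claims 6 and 19 further describe extracting Euler angles from the rotation matrix and iteratively optimizing the multi-degree-of-freedom parameters with the Gauss-Newton method. A sketch of such a six-degree-of-freedom refinement, assuming a Z-Y-X Euler convention and a numerical Jacobian (both are assumptions; the patent does not specify them):

```python
import numpy as np

def euler_to_rot(a, b, c):
    # Z-Y-X Euler angles to a rotation matrix (an assumed convention).
    Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
    Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    Rx = np.array([[1, 0, 0], [0, np.cos(c), -np.sin(c)], [0, np.sin(c), np.cos(c)]])
    return Rz @ Ry @ Rx

def refine_pose(params0, sensor_pts, model_pts, iters=20):
    """Gauss-Newton refinement of a 6-DoF pose (3 Euler angles + translation)."""
    p = np.asarray(params0, dtype=float)

    def residuals(p):
        R = euler_to_rot(*p[:3])
        return ((R @ sensor_pts.T).T + p[3:] - model_pts).ravel()

    for _ in range(iters):
        r = residuals(p)
        # Forward-difference Jacobian of the residual vector w.r.t. the 6 parameters.
        J = np.empty((r.size, 6))
        eps = 1e-6
        for k in range(6):
            dp = np.zeros(6)
            dp[k] = eps
            J[:, k] = (residuals(p + dp) - r) / eps
        # Gauss-Newton update: solve the normal equations J^T J delta = -J^T r.
        delta = np.linalg.solve(J.T @ J, -J.T @ r)
        p = p + delta
        if np.linalg.norm(delta) < 1e-10:
            break
    return p
```

In practice the closed-form SVD estimate would seed `params0`, and the iteration would then reduce any residual misalignment before the inverse-transformation correction of claims 8 and 20.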
PCT/CN2019/082803 2019-04-16 2019-04-16 Optical tracking system and training system for medical instruments WO2020210967A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/082803 WO2020210967A1 (en) 2019-04-16 2019-04-16 Optical tracking system and training system for medical instruments


Publications (1)

Publication Number Publication Date
WO2020210967A1 true WO2020210967A1 (en) 2020-10-22

Family

ID=72837685

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/082803 WO2020210967A1 (en) 2019-04-16 2019-04-16 Optical tracking system and training system for medical instruments

Country Status (1)

Country Link
WO (1) WO2020210967A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090087050A1 (en) * 2007-08-16 2009-04-02 Michael Gandyra Device for determining the 3D coordinates of an object, in particular of a tooth
CN101467887A (en) * 2007-12-29 2009-07-01 复旦大学 X ray perspective view calibration method in operation navigation system
CN102860841A (en) * 2012-09-25 2013-01-09 陈颀潇 Aided navigation system and method of puncture operation under ultrasonic image
CN106859767A (en) * 2017-03-29 2017-06-20 上海霖晏网络科技有限公司 A kind of operation piloting method
CN106890025A (en) * 2017-03-03 2017-06-27 浙江大学 A kind of minimally invasive operation navigating system and air navigation aid
CN107970074A (en) * 2016-10-25 2018-05-01 韦伯斯特生物官能(以色列)有限公司 Head alignment is carried out using personalized fixture
CN109195527A (en) * 2016-03-13 2019-01-11 乌泽医疗有限公司 Device and method for being used together with bone-operating


Similar Documents

Publication Publication Date Title
TWI711428B (en) Optical tracking system and training system for medical equipment
US7715602B2 (en) Method and apparatus for reconstructing bone surfaces during surgery
US9101394B2 (en) Implant planning using captured joint motion information
US8257360B2 (en) Determining femoral cuts in knee surgery
JP5866346B2 (en) A method to determine joint bone deformity using motion patterns
US20210012492A1 (en) Systems and methods for obtaining 3-d images from x-ray information for deformed elongate bones
US20210007806A1 (en) A method for obtaining 3-d deformity correction for bones
JP2004512136A (en) Knee prosthesis positioning system
CA3146388A1 (en) Augmented reality assisted joint arthroplasty
US20140324061A1 (en) Acquiring contact position parameters and detecting contact of a joint
US11344180B2 (en) System, apparatus, and method for calibrating oblique-viewing rigid endoscope
CN107106239A (en) Surgery is planned and method
US20230100824A1 (en) Bone registration methods for robotic surgical procedures
US20180199996A1 (en) Configuring a surgical tool
KR20160133367A (en) Device and method for the computer-assisted simulation of surgical interventions
Pettersson et al. Simulation of patient specific cervical hip fracture surgery with a volume haptic interface
TWI707660B (en) Wearable image display device for surgery and surgery information real-time system
CN109350059B (en) Combined steering engine and landmark engine for elbow auto-alignment
US20210298848A1 (en) Robotically-assisted surgical device, surgical robot, robotically-assisted surgical method, and system
JP4319043B2 (en) Method and apparatus for reconstructing a bone surface during surgery
Liu et al. Augmented reality system training for minimally invasive spine surgery
WO2020210967A1 (en) Optical tracking system and training system for medical instruments
WO2020210972A1 (en) Wearable image display device for surgery and surgical information real-time presentation system
Wittmann et al. Official measurement protocol and accuracy results for an optical surgical navigation system (NPU)
JP7414611B2 (en) Robotic surgery support device, processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19925185

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19925185

Country of ref document: EP

Kind code of ref document: A1