WO2020210972A1 - Wearable image display device for surgery and system for real-time presentation of surgical information - Google Patents

Wearable image display device for surgery and system for real-time presentation of surgical information

Info

Publication number
WO2020210972A1
Authority
WO
WIPO (PCT)
Prior art keywords
medical
surgical
image
information
display
Prior art date
Application number
PCT/CN2019/082834
Other languages
English (en)
Chinese (zh)
Inventor
孙永年
周一鸣
邱昌逸
蔡博翔
郑宇翔
庄柏逸
郭振鹏
Original Assignee
孙永年
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 孙永年 filed Critical 孙永年
Priority to PCT/CN2019/082834 priority Critical patent/WO2020210972A1/fr
Publication of WO2020210972A1 publication Critical patent/WO2020210972A1/fr

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions

Definitions

  • the invention relates to a wearable image display device and a presentation system, in particular to a wearable image display device for surgery and a real-time presentation system of surgery information.
  • the purpose of the present invention is to provide a surgical wearable image display device and a real-time surgical information presentation system, which can assist or train users to operate medical instruments.
  • a wearable image display device for surgery includes a display, a wireless receiver, and a processing core.
  • the wireless receiver wirelessly receives medical images or medical device information in real time;
  • the processing core is coupled to the wireless receiver and the display to display the medical images or medical device information on the display.
  • the medical image is an artificial medical image of an artificial limb.
  • the surgical wearable image display device is smart glasses or a head-mounted display.
  • the medical appliance information includes location information and angle information.
  • the wireless receiver wirelessly receives the surgical target information in real time, and the processing core displays the medical image, medical appliance information, or surgical target information on the display.
  • the surgical target information includes position information and angle information.
  • the wireless receiver wirelessly receives the surgical guidance video in real time, and the processing core displays the medical image, medical appliance information or the surgical guidance video on the display.
  • a real-time presentation system for surgical information includes the aforementioned surgical wearable image display device and a server.
  • the server and the wireless receiver are connected wirelessly to wirelessly transmit medical images and medical device information in real time.
  • the server transmits medical images and medical device information through two network ports, respectively.
  • the system further includes an optical positioning device.
  • the optical positioning device detects the position of the medical appliance and generates a positioning signal.
  • the server generates medical appliance information according to the positioning signal.
  • the surgical wearable image display device and surgical information real-time presentation system of the present disclosure can assist or train users to operate medical instruments.
  • the training system of the present disclosure can provide trainees with a realistic surgical training environment, thereby effectively assisting trainees in completing surgical training.
  • the surgical performer can also perform a simulated operation on the prosthesis first, and use the surgical wearable image display device and the surgical information real-time display system to review the simulated surgery before the actual operation, so that the surgical performer can quickly grasp the key points of the surgery or the points that need attention.
  • surgical wearable image display devices and surgical information real-time display systems can also be applied to actual surgical procedures.
  • Medical images such as ultrasound images are transmitted to surgical wearable image display devices such as smart glasses. With this display method, the operator no longer needs to turn his or her head to look at a screen.
  • FIG. 1A is a block diagram of an embodiment of a real-time presentation system for surgical information.
  • FIG. 1B is a schematic diagram of the wearable image display device for surgery in FIG. 1A receiving medical images or medical device information.
  • FIG. 1C is a schematic diagram of the transmission between the server and the surgical wearable image display device in FIG. 1A.
  • Figure 1D is a schematic diagram of the server in Figure 1A transmitting through two network ports.
  • Fig. 2A is a block diagram of an optical tracking system according to an embodiment.
  • FIGS. 2B and 2C are schematic diagrams of an optical tracking system according to an embodiment.
  • Fig. 2D is a schematic diagram of a three-dimensional model of a surgical situation in an embodiment.
  • Fig. 3 is a functional block diagram of a surgical training system according to an embodiment.
  • Fig. 4 is a block diagram of a training system for medical appliance operation according to an embodiment.
  • Fig. 5A is a schematic diagram of a three-dimensional model of a surgical scenario according to an embodiment.
  • FIG. 5B is a schematic diagram of a three-dimensional model of an entity medical image according to an embodiment.
  • FIG. 5C is a schematic diagram of a three-dimensional model of an artificial medical image according to an embodiment.
  • FIGS. 6A to 6D are schematic diagrams of the direction vector of the medical appliance according to an embodiment.
  • FIGS. 7A to 7D are schematic diagrams of the training process of the training system in an embodiment.
  • Fig. 8A is a schematic diagram of a finger structure according to an embodiment.
  • Fig. 8B is a schematic diagram of applying principal component analysis on bones from computed tomography images in an embodiment.
  • FIG. 8C is a schematic diagram of applying principal component analysis on the skin from a computed tomography image in an embodiment.
  • Fig. 8D is a schematic diagram of calculating the distance between the bone spindle and the medical appliance in an embodiment.
  • Fig. 8E is a schematic diagram of an artificial medical image according to an embodiment.
  • FIG. 9A is a block diagram for generating artificial medical images according to an embodiment.
  • Fig. 9B is a schematic diagram of an artificial medical image according to an embodiment.
  • FIGS. 10A and 10B are schematic diagrams of the artificial hand model and the correction of the ultrasonic volume according to an embodiment.
  • Fig. 10C is a schematic diagram of ultrasonic volume and collision detection in an embodiment.
  • FIG. 10D is a schematic diagram of an artificial ultrasound image according to an embodiment.
  • FIGS. 11A and 11B are schematic diagrams of an operation training system according to an embodiment.
  • FIGS. 12A and 12B are schematic diagrams of images of the training system according to an embodiment.
  • FIG. 1A is a block diagram of a real-time presentation system for surgical information according to an embodiment.
  • the surgical information real-time presentation system includes a surgical wearable image display device 6 (hereinafter referred to as the display device 6) and a server 7.
  • the display device 6 includes a processing core 61, a wireless receiver 62, a display 63 and a storage element 64.
  • the wireless receiver 62 wirelessly receives medical images 721 or medical appliance information 722 in real time.
  • the processing core 61 is coupled to the storage element 64, and the processing core 61 is coupled to the wireless receiver 62 and the display 63 to display the medical image 721 or the medical appliance information 722 on the display 63.
  • the server 7 includes a processing core 71, an input/output interface 72, an input/output interface 74 and a storage element 73.
  • the processing core 71 is coupled to the I/O interface 72, the I/O interface 74, and the storage element 73.
  • the server 7 is wirelessly connected to the wireless receiver 62, and wirelessly transmits medical images 721 and medical appliance information 722 in real time.
  • the surgical information real-time presentation system can also include a display device 8, and the server 7 can also output information to the display device 8 for display through the I/O interface 74.
  • the processing cores 61 and 71 are, for example, processors, controllers, etc.
  • the processor includes one or multiple cores.
  • the processor may be a central processing unit or a graphics processor, and the processing cores 61 and 71 may also be the cores of a processor or a graphics processor.
  • the processing cores 61 and 71 may also be one processing module, and the processing module includes multiple processors.
  • the storage components 64 and 73 store program codes for execution by the processing cores 61 and 71.
  • the storage components 64 and 73 include non-volatile memory and volatile memory. Non-volatile memory is, for example, a hard disk, flash memory, a solid-state disk, an optical disc, and so on. Volatile memory is, for example, dynamic random access memory, static random access memory, and so on.
  • the program code is stored in a non-volatile memory, and the processing cores 61 and 71 can load the program code from the non-volatile memory to the volatile memory, and then execute the program code.
  • the wireless receiver 62 can wirelessly receive the surgical target information 723 in real time, and the processing core 61 can display the medical image 721, the medical appliance information 722, or the surgical target information 723 on the display 63.
  • the wireless receiver 62 can wirelessly receive the surgical guidance video 724 in real time, and the processing core 61 displays the medical image 721, medical appliance information 722 or the surgical guidance video 724 on the display 63.
  • Medical images, medical device information, surgical target information or surgical guidance video can guide or prompt the user to take the next action.
  • the wireless receiver 62 and the I/O interface 72 may be wireless transceivers, which conform to a wireless transmission protocol, such as a wireless network or Bluetooth.
  • the instant transmission method is, for example, wireless network transmission or Bluetooth transmission.
  • This embodiment adopts wireless network transmission; the wireless network complies with, for example, the Wi-Fi specifications, such as IEEE 802.11b, IEEE 802.11g, or IEEE 802.11n.
  • FIG. 1B is a schematic diagram of the surgical wearable image display device in FIG. 1A receiving medical images or medical device information.
  • Wearable image display devices for surgery are smart glasses or head-mounted displays.
  • Smart glasses are wearable computer glasses that can increase the information seen by the wearer.
  • smart glasses can also be regarded as wearable computer glasses whose optical characteristics can change during operation.
  • Smart glasses can superimpose information onto the field of view and support hands-free applications.
  • the superimposition of information onto the field of view can be achieved, for example, by an optical head-mounted display (OHMD), embedded wireless glasses with a transparent head-up display (HUD), or augmented reality (AR).
  • Hands-free applications can be achieved through a voice system, which uses natural language voice commands to communicate with smart glasses.
  • the ultrasound images are transmitted to the smart glasses and displayed so that users no longer need to turn their heads to look at the screen.
  • the medical image 721 is an artificial medical image of an artificial limb.
  • the artificial medical image is a medical image generated for the artificial limb.
  • the medical image is, for example, an ultrasonic image.
  • the medical appliance information 722 includes position information and angle information, such as the tool information (Tool Information) shown in FIG. 1B.
  • the position information includes the XYZ coordinate position, and the angle information includes the corresponding rotation angles.
  • the surgical target information 723 includes position information and angle information, such as the target information (Target Information) shown in FIG. 1B.
  • the position information includes the XYZ coordinate position, and the angle information includes the corresponding rotation angles.
  • the content of the surgical guidance video 724 may be as shown in FIGS. 7A to 7D, which present the medical appliances and operations used in each stage of the operation.
  • the display device 6 may have a sound input element such as a microphone, and may be used for the aforementioned hands-free application.
  • the user can speak voice commands to the display device 6 to control its operation, for example to start or stop all or part of the operations described below. This facilitates the procedure, since the user can control the display device 6 without putting down the instruments held in the hand.
  • the screen of the display device 6 may display an icon to indicate that it is currently in the voice operation mode.
  • FIG. 1C is a schematic diagram of the transmission between the server and the surgical wearable image display device in FIG. 1A.
  • the transmission between the server 7 and the display device 6 includes steps S01 to S08.
  • step S01 the server 7 first transmits the image size information to the display device 6.
  • step S02 the display device 6 receives the image size information and sends it back for confirmation.
  • step S03 the server 7 divides the image into multiple parts and transmits them to the display device 6 sequentially.
  • in step S04, the display device 6 receives the image part and sends back a confirmation. Steps S03 and S04 repeat until the display device 6 has received the entire image.
  • in step S05, after the entire image reaches the display device 6, the display device 6 starts processing the image. Since the BMP format is too large for real-time transmission, the server 7 compresses the image from the BMP format to the JPEG format to reduce the size of the image file.
  • in step S06, the display device combines the multiple parts of the image to obtain the entire JPEG image; in step S07, it decompresses and displays the JPEG image; and in step S08, the transmission of one image is completed. Steps S01 to S08 repeat until the server 7 stops transmitting.
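  • As an illustration, the S01 to S08 exchange can be sketched on the server side as follows. This is a minimal sketch in Python; the acknowledgment tokens, the chunk size, and the use of Pillow for the BMP-to-JPEG compression are assumptions, since the text does not specify the wire format.

```python
import io
import socket
import struct

from PIL import Image  # Pillow, assumed available for BMP -> JPEG compression

CHUNK = 4096  # assumed part size for step S03

def send_frame(conn: socket.socket, bmp_bytes: bytes) -> None:
    # Compress BMP to JPEG so the frame is small enough for real-time
    # transmission (the compression noted around step S05).
    img = Image.open(io.BytesIO(bmp_bytes))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=80)
    jpeg = buf.getvalue()

    # S01/S02: announce the image size and wait for the client's confirmation.
    conn.sendall(struct.pack("!I", len(jpeg)))
    assert conn.recv(4) == b"ACK0"  # hypothetical acknowledgment token

    # S03/S04: send the image in parts, each part acknowledged by the client.
    for off in range(0, len(jpeg), CHUNK):
        conn.sendall(jpeg[off:off + CHUNK])
        assert conn.recv(4) == b"ACK1"
    # S06 to S08 happen on the client: reassemble, decompress, display.
```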
  • FIG. 1D is a schematic diagram of the server in FIG. 1A transmitting through two network ports.
  • the server 7 transmits medical images 721 and medical device information 722 through two network sockets 751 and 752 respectively.
  • One network port 751 is responsible for transmitting the medical images 721, and the other network port 752 is responsible for transmitting the medical device information 722.
  • the display device 6 is a client, which is responsible for receiving the medical images 721 and the medical appliance information 722 transmitted from the network ports.
  • rather than relying on a complex API (Application Programming Interface), a customized socket server and client can avoid unneeded functionality and transmit all data directly as byte arrays.
  • the surgical target information 723 can be transmitted to the display device 6 through the network port 751 or an additional network port, and the surgical guidance video 724 can likewise be transmitted to the display device 6 through the network port 751 or an additional network port.
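  • A hypothetical sketch of this two-port layout follows: one socket streams the image frames and the other streams fixed-size tool-info records as a raw byte array. The port numbers and the six-float record layout (XYZ position plus three rotation angles) are assumptions for illustration only.

```python
import socket
import struct

IMAGE_PORT, INFO_PORT = 50001, 50002  # assumed port numbers

def serve_one_client() -> None:
    img_srv = socket.create_server(("0.0.0.0", IMAGE_PORT))
    info_srv = socket.create_server(("0.0.0.0", INFO_PORT))
    img_conn, _ = img_srv.accept()    # network port 751: medical images 721
    info_conn, _ = info_srv.accept()  # network port 752: appliance info 722

    # Medical appliance information: XYZ position plus three rotation
    # angles, packed as six big-endian floats -- no serialization
    # framework, just a byte array, as described above.
    record = struct.pack("!6f", 1.0, 2.0, 3.0, 0.0, 30.0, 90.0)
    info_conn.sendall(record)
```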
  • the surgical information real-time presentation system may further include an optical positioning device that detects the position of the medical appliance and generates a positioning signal, and the server generates the medical appliance information according to the positioning signal.
  • the optical positioning device is, for example, the optical marker and the optical sensor of the subsequent embodiment.
  • the surgical information real-time presentation system can be used in the optical tracking system and training system of the following embodiments.
  • the display device 8 can be the output device 5 of the following embodiments, and the server 7 can be the computer device 13 of the following embodiments.
  • the input/output interface 74 can be the input/output interface 134 of the following embodiments, and the I/O interface 72 can be the I/O interface 137 of the following embodiments.
  • the content output through the I/O interface 134 in the following embodiments can also, after the relevant format conversion, be sent through the I/O interface 137 to the display device 6 for display.
  • FIG. 2A is a block diagram of an optical tracking system according to an embodiment.
  • the optical tracking system 1 for medical appliances includes a plurality of optical markers 11, a plurality of optical sensors 12, and a computer device 13.
  • the optical markers 11 are arranged on one or more medical appliances; here, a plurality of medical appliances 21-24 are described as an example. The optical markers 11 can also be set on the surgical target object 3. The medical appliances 21-24 and the surgical target object 3 are placed on the platform 4, and the optical sensors 12 optically sense the optical markers 11 to generate multiple sensing signals respectively.
  • the computer device 13 is coupled to the optical sensors 12 to receive the sensing signals, holds a three-dimensional model 14 of the surgical situation, and adjusts, according to the sensing signals, the relative positions between the medical appliance presentation objects 141-144 and the surgical target presentation object 145 in the three-dimensional model 14 of the surgical situation.
  • the medical appliance presentation objects 141 to 144 and the surgical target presentation object 145 are shown in FIG. 2D; they represent the medical appliances 21 to 24 and the surgical target object 3 in the three-dimensional model 14 of the surgical situation.
  • from the three-dimensional model 14 of the surgical situation, the current positions of the medical appliances 21-24 and the surgical target object 3 can be obtained and reflected in the medical appliance presentation objects and the surgical target presentation object.
  • FIG. 2B is a schematic diagram of the optical tracking system of the embodiment.
  • Four optical sensors 121 to 124 are installed on the ceiling and face the optical markers 11, the medical appliances 21 to 24, and the surgical target object 3.
  • the medical tool 21 is a medical probe, such as a probe for ultrasonic imaging detection or another device that can detect the inside of the surgical target object 3. These devices are used in actual clinical practice; the probe for ultrasonic imaging detection is, for example, an ultrasonic transducer.
  • the medical appliances 22-24 are surgical appliances, such as needles, scalpels, and hooks, which are used in actual clinical practice. When used for surgical training, the medical probe can be a device actually used in clinical practice or a device simulating one, and the surgical instruments can likewise be actual clinical devices or devices simulating clinical ones.
  • Figure 2C is a schematic diagram of the optical tracking system of the embodiment.
  • the medical appliances 21-24 and the surgical target 3 on the platform 4 are used for surgical training, such as minimally invasive finger surgery for trigger finger treatment.
  • the clamps of the platform 4 and of the medical appliances 21-24 can be made of wood.
  • the medical appliance 21 is a realistic ultrasonic transducer (or probe), and the medical appliances 22-24 include a plurality of surgical instruments, such as a dilator, a needle, and a hook blade.
  • the surgical target 3 is a hand phantom.
  • Three or four optical markers 11 are installed on each medical appliance 21-24, and three or four optical markers 11 are also installed on the surgical target object 3.
  • the computer device 13 is connected to the optical sensor 12 to track the position of the optical marker 11 in real time.
  • there are 17 optical markers 11 in total: 4 are attached on or around the surgical target object 3, and 13 are on the medical appliances 21-24.
  • the optical sensor 12 continuously transmits real-time information to the computer device 13.
  • the computer device 13 also uses a movement-judgment function to reduce the computation burden: if the moving distance of an optical marker 11 is less than a threshold value, the position of that optical marker 11 is not updated. The threshold value is, for example, 0.7 mm.
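  • A minimal sketch of this movement-judgment rule, assuming marker coordinates in millimetres:

```python
import numpy as np

THRESHOLD_MM = 0.7  # threshold value given in the text

def update_positions(prev: np.ndarray, new: np.ndarray) -> np.ndarray:
    # prev, new: (N, 3) arrays of marker coordinates in millimetres.
    moved = np.linalg.norm(new - prev, axis=1) >= THRESHOLD_MM
    out = prev.copy()
    out[moved] = new[moved]  # only markers that moved far enough are updated
    return out
```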
  • the computer device 13 includes a processing core 131, a storage element 132, and a plurality of I/O interfaces 133, 134.
  • the processing core 131 is coupled to the storage element 132 and the I/O interfaces 133, 134.
  • the I/O interface 133 can receive the sensing signals generated by the optical sensors 12; the computer device 13 communicates with the output device 5 through the I/O interface 134 and can output the processing result to the output device 5 through the I/O interface 134.
  • the I/O interfaces 133 and 134 are, for example, peripheral transmission ports or communication ports.
  • the output device 5 is a device capable of outputting images, such as a display, a projector, a printer, and so on.
  • the storage element 132 stores program codes for execution by the processing core 131.
  • the storage element 132 includes a non-volatile memory and a volatile memory.
  • the non-volatile memory is, for example, a hard disk, a flash memory, a solid state disk, an optical disk, and so on.
  • the volatile memory is, for example, dynamic random access memory, static random access memory, and so on.
  • the program code is stored in the non-volatile memory, and the processing core 131 can load the program code from the non-volatile memory to the volatile memory, and then execute the program code.
  • the storage component 132 stores the program code and data of the operation situation three-dimensional model 14 and the tracking module 15, and the processing core 131 can access the storage component 132 to execute and process the operation situation three-dimensional model 14 and the program code and data of the tracking module 15.
  • the processing core 131 is, for example, a processor, a controller, etc., and the processor includes one or more cores.
  • the processor may be a central processing unit or a graphics processor, and the processing core 131 may also be the core of a processor or a graphics processor.
  • the processing core 131 may also be a processing module, and the processing module includes multiple processors.
  • the operation of the optical tracking system includes the connection between the computer device 13 and the optical sensor 12, pre-operation procedures, coordinate correction procedures of the optical tracking system, real-time rendering procedures, etc.
  • the tracking module 15 represents the program code and data associated with these operations.
  • the storage element 132 of the computer device 13 stores the tracking module 15, and the processing core 131 executes the tracking module 15 to perform these operations.
  • the computer device 13 performs the pre-work and the coordinate correction of the optical tracking system to find the optimized conversion parameters, and then the computer device 13 can set the medical appliance presentations 141-144 and the operation according to the optimized conversion parameters and sensing signals The position of the target presentation 145 in the three-dimensional model 14 of the surgical situation.
  • the computer device 13 can deduce the position of the medical appliance 21 inside and outside the surgical target object 3, and adjust the relative position between the medical appliance presenting objects 141 to 144 and the surgical target presenting object 145 in the three-dimensional model 14 of the operation situation accordingly.
  • the medical appliances 21-24 can be tracked in real time from the detection results of the optical sensor 12 and correspondingly presented in the three-dimensional model 14 of the surgical context.
  • the representation of the three-dimensional model 14 in the surgical context is shown in FIG. 2D.
  • the three-dimensional model 14 of the operation situation is a native model, which includes models established for the surgical target object 3 and also includes models established for the medical appliances 21-24.
  • the method of establishment can be that the developer directly uses computer graphics technology to construct it on the computer, such as using drawing software or special application development software.
  • the computer device 13 can output the display data 135 to the output device 5.
  • the display data 135 is used to present 3D images of the medical appliance presentation objects 141-144 and the surgical target presentation object 145.
  • the output device 5 can output the display data 135.
  • the output method is, for example, displaying or printing; the result of output in display mode is shown, for example, in FIG. 2D.
  • the coordinate position of the three-dimensional model 14 of the surgical situation can be accurately transformed to correspond to the optical marker 11 in the tracking coordinate system, and vice versa.
  • the medical appliances 21-24 and the surgical target object 3 can be tracked in real time based on the detection result of the optical sensor 12, and the positions of the medical appliances 21-24 and the surgical target object 3 in the tracking coordinate system can be obtained after the aforementioned processing.
  • the medical appliance presentation objects 141-144 accurately correspond to the surgical target presentation object 145, and they move immediately, following the tracked objects, in the three-dimensional model 14 of the surgical situation.
  • Fig. 3 is a functional block diagram of a surgical training system according to an embodiment.
  • the operation information real-time presentation system can be used in the operation training system, and the server 7 can perform the blocks shown in FIG. 3.
  • the functions can be implemented with multi-threaded execution. For example, there are four threads in FIG. 3: the main thread for calculation and drawing, a thread for updating marker information, a thread for transmitting images, and a thread for scoring.
  • the main thread of calculation and drawing includes block 902 to block 910.
  • in block 902, the program of the main thread starts to execute; in block 904, the UI event listener starts other threads in response to events or further executes other blocks of the main thread.
  • in block 906, the optical tracking system is calibrated; then, in block 908, the next image to be rendered is computed; and then, in block 910, the image is rendered with OpenGL.
  • the thread for updating the marker information includes block 912 to block 914.
  • the thread for updating the marker information, started from block 904, first connects the server 7 to the components of the optical tracking system, such as an optical sensor, in block 912, and then updates the marker information in block 914; between block 914 and block 906, the two threads share memory to exchange the updated marker information.
  • the thread for transmitting the image includes block 916 to block 920.
  • the thread for transmitting the image, started in block 904, starts the transmission server in block 916; in block 918 it takes the rendered image from block 908, composes the BMP image, and compresses it into JPEG; and in block 920 it transmits the image to the display device.
  • the scoring thread includes blocks 922 to 930.
  • the scoring thread started in block 904 begins at block 922; in block 924, it checks whether the training phase is completed or has been manually stopped. If so, block 930 stops the scoring thread; otherwise, the flow enters block 926.
  • in block 926, the marker information is obtained from block 906 and the current training-phase information is sent to the display device.
  • in block 928, the scoring conditions of the stage are checked, and the flow then returns to block 924.
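  • The thread split of FIG. 3 can be sketched as below. The helper names are illustrative, not taken from the text; the point is that the marker-updating thread and the consumers share state guarded by a lock.

```python
import threading
import time

marker_info: dict = {}       # shared marker state (cf. blocks 906/914)
lock = threading.Lock()
stop = threading.Event()

def poll_optical_sensors() -> dict:
    # Hypothetical stand-in for reading the optical sensors.
    return {}

def update_markers() -> None:        # blocks 912-914
    while not stop.is_set():
        data = poll_optical_sensors()
        with lock:
            marker_info.update(data)
        time.sleep(0.01)

def transmit_images() -> None:       # blocks 916-920: compose BMP, compress, send
    while not stop.is_set():
        time.sleep(0.01)

def score_training() -> None:        # blocks 922-930: check stage, score
    while not stop.is_set():
        time.sleep(0.01)

for f in (update_markers, transmit_images, score_training):
    threading.Thread(target=f, daemon=True).start()
```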
  • Fig. 4 is a block diagram of a training system for medical appliance operation according to an embodiment.
  • the training system for medical appliance operation (hereinafter referred to as the training system) can truly simulate the surgical training environment.
  • the training system includes an optical tracking system 1a, one or more medical appliances 21-24, and the surgical target object 3.
  • the optical tracking system 1a includes a plurality of optical markers 11, a plurality of optical sensors 12, and a computer device 13.
  • the optical markers 11 are arranged on the medical appliances 21-24 and the surgical target object 3, and the medical appliances 21-24 and the surgical target object 3 are placed on the platform 4.
  • the medical appliance presentation objects 141 to 144 and the surgical target presentation object 145 are correspondingly presented in the three-dimensional model 14a of the surgical situation.
  • the medical tools 21-24 include medical probes and surgical tools.
  • the medical tools 21 are medical probes
  • the medical tools 22-24 are surgical tools.
  • the medical appliance presentations 141-144 include medical probe presentations and surgical appliance presentations.
  • the medical appliance presentation 141 is a medical probe presentation
  • the medical appliance presentations 142-144 are surgical appliance presentations.
  • the storage component 132 stores the program code and data of the operation situation three-dimensional model 14a and the tracking module 15, and the processing core 131 can access the storage component 132 to execute and process the operation situation three-dimensional model 14a and the program code and data of the tracking module 15.
  • the surgical target object 3 is an artificial limb or body part, such as an artificial upper limb, a hand phantom, an artificial palm, artificial fingers, an artificial arm, an artificial upper arm, an artificial forearm, an artificial elbow, an artificial foot, artificial toes, an artificial ankle, an artificial calf, an artificial thigh, an artificial knee, an artificial torso, an artificial neck, an artificial head, an artificial shoulder, an artificial chest, an artificial abdomen, an artificial waist, an artificial hip, or another artificial part.
  • the training system takes the minimally invasive surgery training of the fingers as an example.
  • the surgery is a trigger finger treatment operation
  • the surgical target object 3 is a prosthetic hand
  • the medical probe 21 is a realistic ultrasonic transducer (or probe).
  • the surgical instruments 22-24 are a needle, a dilator, and a hook blade.
  • other surgical target objects 3 may be used for other surgical training.
  • the storage element 132 also stores the program codes and data of the physical medical image 3D model 14b, the artificial medical image 3D model 14c, and the training module 16.
  • the processing core 131 can access the storage element 132 to execute and process the physical medical image 3D model 14b, the artificial medical image 3D model 14c, and the program code and data of the training module 16.
  • the training module 16 is responsible for the following surgical training procedures and the processing, integration and calculation of related data.
  • FIG. 5A is a schematic diagram of a three-dimensional model of an operation scenario according to an embodiment.
  • FIG. 5B is a schematic diagram of a physical medical image three-dimensional model according to an embodiment.
  • FIG. 5C is a schematic diagram of an artificial medical image three-dimensional model according to an embodiment.
  • the content of these three-dimensional models can be output or printed by the output device 5.
  • the physical medical image three-dimensional model 14b is a three-dimensional model established from medical images; it is a model established for the surgical target object 3, such as the three-dimensional model shown in FIG. 5B.
  • the medical image is, for example, a computed tomography image; the images actually generated by performing computed tomography on the surgical target object 3 are used to build the physical medical image three-dimensional model 14b.
  • the artificial medical image three-dimensional model 14c contains an artificial medical image model.
  • the artificial medical image model is a model established for the surgical target object 3, such as the three-dimensional model shown in FIG. 5C.
  • the artificial medical image model is a three-dimensional model of artificial ultrasound images. Since the surgical target object 3 is not a real living body, computed tomography can still capture its physical structure, but medical imaging equipment such as ultrasound cannot obtain effective or meaningful images directly from the surgical target object 3. Therefore, the ultrasound image model of the surgical target object 3 must be generated artificially. Selecting an appropriate position or plane from the three-dimensional model of artificial ultrasound images generates a two-dimensional artificial ultrasound image.
  • the computer device 13 generates a medical image 136 according to the three-dimensional model 14a of the surgical situation and the medical image model.
  • the medical image model is, for example, the physical medical image three-dimensional model 14b or the artificial medical image three-dimensional model 14c.
  • the computer device 13 generates a medical image 136 based on the three-dimensional model 14a of the surgical situation and the three-dimensional model 14c of an artificial medical image.
  • the medical image 136 is a two-dimensional artificial ultrasound image.
  • the computer device 13 scores the detection target, such as a specific surgical site, found with the medical probe presentation object 141, and the operations of the surgical instrument presentation objects on the surgical target presentation object 145.
  • FIGS. 6A to 6D are schematic diagrams of the direction vector of the medical appliance according to an embodiment.
  • the direction vectors of the medical device presentation objects 141-144 corresponding to the medical devices 21-24 will be rendered instantly.
  • the direction vector of the medical probe is obtained by computing the center of gravity of the probe's optical markers, projecting another marker point onto the xz plane, and calculating the vector from the center of gravity to the projection point.
  • the other medical appliance presentation objects 142-144 are comparatively simple; their direction vectors can be calculated using the sharp points in their models.
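  • A sketch of the probe direction-vector computation described above, assuming y is the vertical axis so that the xz plane is y = 0:

```python
import numpy as np

def probe_direction(markers: np.ndarray, other_point: np.ndarray) -> np.ndarray:
    # markers: (N, 3) positions of the probe's optical markers.
    centroid = markers.mean(axis=0)      # center of gravity of the markers
    proj = other_point.copy()
    proj[1] = 0.0                        # projection onto the xz plane
    v = proj - centroid                  # vector from centroid to projection
    return v / np.linalg.norm(v)
```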
  • the training system can draw only the model of the area where the surgical target presentation object 145 is located, instead of drawing all the medical appliance presentation objects 141-144.
  • the transparency of the skin model can be adjusted to observe the internal anatomical structure of the surgical target presentation object 145, and to see ultrasound image slices or computed tomography image slices of different cross-sections, such as the transverse (axial) plane, the sagittal plane, or the coronal plane, which can help the operator during the operation.
  • the bounding boxes of each model are constructed to detect collisions.
  • the surgical training system can determine which medical appliances have contacted tendons, bones and/or skin, and can determine when to start scoring.
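  • A minimal axis-aligned bounding-box overlap test of the kind used for such collision checks (a common technique; the text does not give the exact test used):

```python
import numpy as np

def aabb_overlap(min_a, max_a, min_b, max_b) -> bool:
    # Each argument is a length-3 array of box corner coordinates;
    # the boxes overlap when they overlap on every axis.
    return bool(np.all(np.asarray(max_a) >= np.asarray(min_b)) and
                np.all(np.asarray(max_b) >= np.asarray(min_a)))
```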
  • the optical markers 11 attached to the surgical target object 3 must be clearly seen or detected by the optical sensors 12. If an optical marker 11 is covered, the accuracy of its detected position is reduced; moreover, at least two optical sensors 12 must see all the optical markers at the same time.
  • the calibration procedure is as described above, for example, three-stage calibration, which is used to accurately calibrate two coordinate systems.
  • the correction error, the iteration count, and the final position of the optical marker can be displayed in the window of the training system, for example, by the output device 5.
  • the accuracy and reliability information can be used to remind users that the system needs to be recalibrated when the error is too large.
  • the three-dimensional model is drawn at a frequency of 0.1 times per second, and the drawn result can be output to the output device 5 for display or printing.
  • the user can start the surgical training process.
  • in the training process, a medical probe is first used to find the site to be operated on; after it is found, the site is anesthetized. Then a path is expanded from the outside to the surgical site, and after the expansion, the scalpel is advanced along this path to the surgical site.
  • FIGS. 7A to 7D are schematic diagrams of the training process of the training system of an embodiment.
  • the surgical training process includes four stages and is illustrated by taking minimally invasive surgery training of fingers as an example.
  • in the first stage, the medical probe 21 is used to find the site to be operated on, so that the site can be confirmed in the training system.
  • the surgical site is, for example, the pulley area, which can be judged by looking for the position of the metacarpophalangeal joints and the anatomical structures of the bones and tendons of the fingers; the focus at this stage is whether the first pulley area (A1 pulley) is found.
  • the training system will automatically enter the next stage of scoring.
  • the medical probe 21 is placed on the skin and kept in contact with it at the metacarpophalangeal (MCP) joints along the midline of the flexor tendon.
  • the surgical instrument 22 is used to open the path of the surgical area.
  • the surgical instrument 22 is, for example, a needle.
  • the needle is inserted to inject local anesthetic and expand the space.
  • the process of inserting the needle can be performed under the guidance of continuous ultrasound images.
  • This continuous ultrasound image is an artificial ultrasound image, namely the aforementioned medical image 136. Because regional anesthesia is difficult to simulate with a prosthetic hand, anesthesia is not specifically simulated.
  • in the third stage, the surgical instrument 23 is pushed in along the same path as the surgical instrument 22 in the second stage, to create the trajectory required by the hook blade in the next stage.
  • the surgical instrument 23 is, for example, a dilator.
  • the training system will automatically enter the next stage of scoring.
  • the surgical instrument 24 is inserted along the trajectory created in the third stage, and the pulley is divided by the surgical instrument 24.
  • the surgical instrument 24 is, for example, a hook blade.
  • the focus of the third stage is similar to that of the fourth stage: the blood vessels and nerves near both sides of the flexor tendon are easily miscut. Therefore, the focus of the third and fourth stages is not only to avoid touching the tendons, nerves, and blood vessels, but also to open a track at least 2 mm larger than the first pulley area, so as to leave space for the hook blade to cut the pulley area.
  • the operations of each training phase must be quantified.
  • the operation area during the operation is defined by the finger anatomy shown in FIG. 8A and can be divided into an upper boundary and a lower boundary. Because most of the tissue above the tendon is fat and does not cause pain, the upper boundary of the surgical area is defined by the skin of the palm, and the lower boundary is defined by the tendon.
  • the proximal depth boundary is 10 mm (the average length of the first pulley area) from the metacarpal head-neck joint.
  • the distal depth boundary is not important, because it has nothing to do with tendons, blood vessels, and nerves.
  • the left and right boundaries are defined by the width of the tendon, and nerves and blood vessels are located on both sides of the tendon.
  • the scoring method for each training stage is as follows.
  • in the first stage, the focus of the training is to find the target to be excised, namely the first pulley area (A1 pulley).
  • the angle between the medical probe and the bone main axis should be close to perpendicular, with an allowable angle deviation of ±30°. Therefore, the scoring formula for the first stage is as follows:
  • First-stage score = (target score × weight) + (probe angle score × weight)
  • in the second stage, the focus of training is to use the needle to open the path to the surgical area. Since the pulley area surrounds the tendon, the distance between the main axis of the bone and the needle should be small. Therefore, the scoring formula for the second stage is as follows:
  • Second-stage score = (opening score × weight) + (needle angle score × weight) + (distance-from-bone-main-axis score × weight)
  • in the third stage, the focus of training is to insert the dilator, which enlarges the surgical area, into the finger.
  • the trajectory of the dilator must be close to the main axis of the bone.
  • the dilator should be approximately parallel to the main axis of the bone, with an allowable angle deviation of ±30°. To leave space for the hook blade to cut the first pulley area, the dilator must pass at least 2 mm above the first pulley area. The scoring formula for the third stage is as follows:
  • Third-stage score = (above-pulley-area score × weight) + (dilator angle score × weight) + (distance-from-bone-main-axis score × weight) + (not-leaving-surgical-area score × weight)
  • in the fourth stage, the scoring conditions are similar to those of the third stage, except that the hook blade needs to be rotated 90°; this rule is added to the scoring at this stage.
  • the scoring formula is as follows:
  • Fourth-stage score = (above-pulley-area score × weight) + (hook angle score × weight) + (distance-from-bone-main-axis score × weight) + (not-leaving-surgical-area score × weight) + (hook rotation score × weight)
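  • All four formulas share the same weighted-sum form, which can be sketched generically as follows; the individual scores and weights are placeholders, since the text does not disclose their values.

```python
def stage_score(scores: dict, weights: dict) -> float:
    # Weighted sum over the stage's scoring conditions.
    return sum(scores[k] * weights[k] for k in scores)

# Example for the fourth stage (illustrative values only):
fourth = stage_score(
    {"above_pulley": 1.0, "hook_angle": 0.8, "axis_distance": 0.9,
     "stayed_in_area": 1.0, "hook_rotated": 1.0},
    {"above_pulley": 0.2, "hook_angle": 0.2, "axis_distance": 0.2,
     "stayed_in_area": 0.2, "hook_rotated": 0.2},
)
```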
  • the angle between the medical appliance and the bone main axis is calculated in the same way as the angle between the palm normal and the direction vector of the medical appliance.
  • principal component analysis (PCA) is applied, and the longest axis found is taken as the main axis of the bone.
  • the shape of the bone in the computed tomography image is not uniform, which causes the axis found by principal component analysis and the palm normal to not be exactly perpendicular to each other.
  • instead, the skin above the bone can be used to find the palm normal with principal component analysis; the angle between the bone main axis and the medical appliance can then be calculated.
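  • A sketch of both steps, assuming the bone and skin are available as point clouds: the eigenvector of the covariance matrix with the largest eigenvalue is the PCA main axis, and the angle to the appliance direction follows from the dot product.

```python
import numpy as np

def pca_main_axis(points: np.ndarray) -> np.ndarray:
    # points: (N, 3) coordinates of the bone (or skin) segmentation.
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    return eigvecs[:, np.argmax(eigvals)]  # unit vector of the longest axis

def appliance_angle_deg(axis: np.ndarray, tool_dir: np.ndarray) -> float:
    cosang = np.dot(axis, tool_dir) / (
        np.linalg.norm(axis) * np.linalg.norm(tool_dir))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```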
  • the distance between the bone main axis and the medical appliance also needs to be calculated.
  • the distance calculation amounts to calculating the distance between the tip of the medical appliance and a plane.
  • the plane refers to the plane containing the bone main axis vector and the palm normal.
  • the schematic diagram of the distance calculation is shown in FIG. 8D. The normal of this plane can be obtained from the cross product of the palm normal vector D2 and the bone principal axis vector D1; since these two vectors are obtained in the previous calculations, the distance between the main axis of the bone and the appliance is easily calculated.
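  • In vector form, the plane spanned by D1 and D2 has normal n = D1 × D2, and the appliance-to-plane distance is the projection onto n, as in this sketch:

```python
import numpy as np

def tip_plane_distance(tip, axis_point, d1, d2) -> float:
    # d1: bone main-axis vector, d2: palm normal, axis_point: a point on
    # the bone main axis; all length-3 arrays.
    n = np.cross(d1, d2)                 # normal of the plane of Fig. 8D
    n = n / np.linalg.norm(n)
    return float(abs(np.dot(np.asarray(tip) - np.asarray(axis_point), n)))
```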
  • FIG. 8E is a schematic diagram of an artificial medical image according to an embodiment, and the tendon section and the skin section in the artificial medical image are marked with dotted lines.
  • the tendon section and the skin section can be used to construct the model and the bounding box, the bounding box is used for collision detection, and the pulley area can be defined in the static model.
  • with collision detection, it is possible to determine the surgical area and to determine whether the medical appliance crosses the pulley area.
  • the average length of the first pulley area is about 10 mm, and the first pulley area is located at the proximal end of the metacarpal head-neck (MCP) joint.
  • the average thickness of the pulley area is about 0.3mm and surrounds the tendons.
  • FIG. 9A is a flow chart of generating artificial medical images according to an embodiment. As shown in FIG. 9A, the generation flow includes steps S21 to S24.
  • Step S21 is to extract the first set of bone skin features from the cross-sectional image data of the artificial limb.
  • the artificial limb is the aforementioned surgical target object 3, which can be used as a limb for minimally invasive surgery training, such as a prosthetic hand.
  • the cross-sectional image data includes multiple cross-sectional images; each cross-sectional image is, for example, a computed tomography image or a physical cross-sectional image.
  • Step S22 is to extract the second set of bone skin features from the medical image data.
  • the medical image data is a three-dimensional ultrasound image, such as the three-dimensional ultrasound image of FIG. 9B, which is created by multiple planar ultrasound images.
  • Medical image data are medical images taken of real organisms, not artificial limbs.
  • the first group of bone skin features and the second group of bone skin features include multiple bone feature points and multiple skin feature points.
  • Step S23 is to establish feature registration data (registration) based on the first set of bone and skin features and the second set of bone and skin features.
  • Step S23 includes: taking the first set of bone-skin features as the reference target; and finding a correlation function to serve as the spatial registration data, where the correlation function aligns the second set of bone-skin features to the reference target without being disturbed by noise in the first and second sets of bone-skin features.
  • the correlation function is found by formulating a maximum likelihood estimation problem and solving it with the expectation-maximization (EM) algorithm.
  • Step S24 is to perform deformation processing on the medical image data according to the feature registration data, to generate artificial medical image data suitable for the artificial limb.
  • the artificial medical image data is, for example, a three-dimensional ultrasound image, which still retains the characteristics of the organism in the original ultrasound image.
  • Step S24 includes: generating a deformation function based on the medical image data and the feature registration data; applying a grid to the medical image data and obtaining multiple grid-point positions; deforming the grid-point positions according to the deformation function; and, based on the deformed grid-point positions, filling in the corresponding pixels of the medical image data to generate a deformed image, which serves as the artificial medical image data.
  • the deformation function is generated using the moving least squares (MLS) method.
  • the deformed image is generated using an affine transform.
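  • The affine variant of moving least squares can be sketched as below (a compact sketch only; the text names the method but not its exact formulation). For a grid point v, control points p deform toward q with distance-based weights.

```python
import numpy as np

def mls_affine(p: np.ndarray, q: np.ndarray, v: np.ndarray,
               alpha: float = 1.0) -> np.ndarray:
    # p, q: (N, d) source and target control points; v: (d,) grid point.
    w = 1.0 / (np.sum((p - v) ** 2, axis=1) ** alpha + 1e-12)
    p_star = np.average(p, axis=0, weights=w)   # weighted centroids
    q_star = np.average(q, axis=0, weights=w)
    ph, qh = p - p_star, q - q_star
    M = np.linalg.solve((ph * w[:, None]).T @ ph,   # weighted covariance
                        (ph * w[:, None]).T @ qh)   # weighted cross-covariance
    return (v - p_star) @ M + q_star                # deformed grid point
```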
  • through steps S21 to S24, the image features of real ultrasound images and of the computed tomography images of the artificial hand are captured, the corresponding deformation points are obtained by image registration, and an ultrasound image close to that of a real human is then generated for the artificial hand through the deformation. The resulting ultrasound image retains the characteristics of the original live ultrasound image.
  • since the artificial medical image data is a three-dimensional ultrasound image, a planar ultrasound image of a specific position or section can be generated from the corresponding position or section of the three-dimensional ultrasound image.
  • FIG. 10A and FIG. 10B are schematic diagrams of the correction of the artificial hand model and the ultrasonic volume according to an embodiment.
  • the physical medical image 3D model 14b and the artificial medical image 3D model 14c are related to each other. Since the model of the prosthetic hand is constructed from the computed tomography image volume, the positional relationship between the computed tomography image volume and the ultrasound volume can be used directly to establish the correlation between the artificial hand and the ultrasound volume.
  • FIG. 10C is a schematic diagram of ultrasonic volume and collision detection according to an embodiment
  • FIG. 10D is a schematic diagram of an artificial ultrasound image according to an embodiment.
  • the training system must be able to simulate a real ultrasonic transducer (or probe) and generate slice images from the ultrasonic volume. Regardless of the angle of the transducer (or probe), the simulated transducer (or probe) must depict the corresponding image slice.
  • the angle between the medical probe 21 and the ultrasonic volume is first detected. Then, collision detection between the slice plane, defined by the width of the medical probe 21, and the ultrasonic volume is used to find the corresponding image slice to draw.
  • the resulting image is shown in Figure 10D.
  • the artificial medical image data is a three-dimensional ultrasound image
  • the three-dimensional ultrasound image has a corresponding ultrasound volume
  • the content of the image slice to be depicted by the simulated transducer (or probe) can be generated according to the corresponding position in the three-dimensional ultrasound image.
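  • Sampling such an oblique slice from the volume can be sketched as follows: build a grid on the probe plane and sample the volume trilinearly, here with SciPy's map_coordinates. The probe-pose inputs are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates  # assumed available

def probe_slice(volume, origin, x_dir, y_dir, width, depth, spacing=1.0):
    # origin: probe-plane corner; x_dir, y_dir: unit vectors spanning the
    # plane; all expressed in the volume's array-index coordinates.
    u = np.arange(0.0, width, spacing)
    v = np.arange(0.0, depth, spacing)
    uu, vv = np.meshgrid(u, v)
    pts = origin + uu[..., None] * x_dir + vv[..., None] * y_dir  # (H, W, 3)
    coords = pts.reshape(-1, 3).T                                 # (3, H*W)
    return map_coordinates(volume, coords, order=1).reshape(uu.shape)
```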
  • FIG. 11A and FIG. 11B are schematic diagrams of an operation training system according to an embodiment.
  • Surgery trainees operate medical appliances, and the medical appliances can be correspondingly displayed on the display device in real time.
  • FIGS. 12A and 12B are schematic diagrams of images of the training system according to an embodiment.
  • Operation trainees operate medical appliances.
  • the current artificial ultrasound images can also be displayed in real time.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Robotics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a wearable image display device (6) for surgery and a system for real-time presentation of surgical information. The device comprises a display (63), a wireless receiver (62), and a processing core (61). The wireless receiver (62) wirelessly receives a medical image (721) or medical instrument information (722) in real time; the processing core (61) is coupled to the wireless receiver (62) and the display (63), so as to display the medical image (721) or the medical instrument information (722) on the display (63).
PCT/CN2019/082834 2019-04-16 2019-04-16 Wearable image display device for surgery and system for real-time presentation of surgical information WO2020210972A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/082834 WO2020210972A1 (fr) 2019-04-16 2019-04-16 Wearable image display device for surgery and system for real-time presentation of surgical information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/082834 WO2020210972A1 (fr) 2019-04-16 2019-04-16 Wearable image display device for surgery and system for real-time presentation of surgical information

Publications (1)

Publication Number Publication Date
WO2020210972A1 true WO2020210972A1 (fr) 2020-10-22

Family

ID=72836765

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/082834 WO2020210972A1 (fr) 2019-04-16 2019-04-16 Wearable image display device for surgery and system for real-time presentation of surgical information

Country Status (1)

Country Link
WO (1) WO2020210972A1 (fr)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203101728U (zh) * 2012-11-27 2013-07-31 天津市天堰医教科技开发有限公司 Head-mounted display for assisting medical surgery teaching
CN103845113A (zh) * 2012-11-29 2014-06-11 索尼公司 Wireless surgical loupe, and methods, devices and systems for its use
US20160191887A1 (en) * 2014-12-30 2016-06-30 Carlos Quiles Casas Image-guided surgery with surface reconstruction and augmented reality visualization
CN106156398A (zh) * 2015-05-12 2016-11-23 西门子保健有限责任公司 Apparatus and method for computer-assisted simulation of a surgical operation
TW201742603A (zh) * 2016-05-31 2017-12-16 長庚醫療財團法人林口長庚紀念醫院 Surgical operation assistance system
WO2018183001A1 (fr) * 2017-03-30 2018-10-04 Novarad Corporation Augmenting real-time views of a patient with three-dimensional data

Similar Documents

Publication Publication Date Title
US11483532B2 (en) Augmented reality guidance system for spinal surgery using inertial measurement units
TWI711428B (zh) Optical tracking system and training system for medical appliances
CA3072774A1 (fr) Medical virtual reality, mixed reality, or augmented reality surgical system
TWI707660B (zh) Wearable image display device for surgery and real-time surgical information presentation system
JP2023505956A (ja) Anatomical feature extraction and presentation using augmented reality
JP2021153773A (ja) Robotic surgery support device, surgery support robot, robotic surgery support method, and program
WO2020210972A1 (fr) Wearable image display device for surgery and system for real-time presentation of surgical information
WO2020210967A1 (fr) Optical tracking system and training system for medical instruments
JP7414611B2 (ja) Robotic surgery support device, processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19925523

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19925523

Country of ref document: EP

Kind code of ref document: A1