WO2020133097A1 - Mixed reality-based control system - Google Patents

Mixed reality-based control system

Info

Publication number
WO2020133097A1
WO2020133097A1 (PCT/CN2018/124470)
Authority
WO
WIPO (PCT)
Prior art keywords
mixed reality
dimensional
remote
processed
dimensional holographic
Prior art date
Application number
PCT/CN2018/124470
Other languages
English (en)
French (fr)
Inventor
唐佩福
张浩
鲁通
张巍
郝明
王锟
李建涛
Original Assignee
北京维卓致远医疗科技发展有限责任公司
中国人民解放军总医院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京维卓致远医疗科技发展有限责任公司, 中国人民解放军总医院
Priority to PCT/CN2018/124470
Publication of WO2020133097A1

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30Surgical robots

Definitions

  • the present invention relates to the field of medical technology, and in particular to a control system based on mixed reality.
  • In the prior art, during remote guidance or remote surgery, remotely transmitted two-dimensional images can be obtained, where the two-dimensional images are captured by a camera or similar equipment from different angles of the patient's site to be treated. The doctor can then view the condition of the site from the two-dimensional images and give surgical guidance or operate.
  • However, a single two-dimensional image can show only part of the patient's site to be treated.
  • The doctor must mentally combine multiple two-dimensional images to picture the actual condition of the site to be treated and the actual location of the disease.
  • Two-dimensional images therefore cannot accurately reflect the actual condition of the patient's site to be treated, which is inconvenient for surgical guidance or operation and easily leads to surgical errors.
  • The invention provides a mixed reality-based control system that can accurately reflect the actual condition of the patient's site to be treated, facilitates surgical guidance and operation by the doctor, and reduces surgical errors.
  • the invention provides a control system based on mixed reality, including:
  • a navigation device and a mixed reality device, wherein the navigation device is connected to the mixed reality device;
  • the navigation device is configured to obtain first pose information of the patient's site to be treated and send the first pose information to the mixed reality device;
  • the mixed reality device is configured to receive the first pose information, adjust the three-dimensional holographic model of the site to be treated according to the first pose information, and obtain and display the adjusted three-dimensional holographic model.
  • Further, the system includes a remote surgical robot;
  • the remote surgical robot is configured to obtain remote control instructions and perform surgical operations on the site to be treated according to the remote control instructions.
  • Further, the system includes a control device, wherein the control device is connected to the remote surgical robot;
  • the control device is configured to receive a user control instruction, generate the remote control instruction according to the user control instruction, and send the remote control instruction to the remote surgical robot.
  • Further, the control device is provided in the mixed reality device;
  • the control device is specifically configured to receive the user control instruction and adjust the adjusted three-dimensional holographic model according to it to generate the remote control instruction, wherein the remote control instruction includes a motion path indicating the adjustment process applied to the adjusted three-dimensional holographic model; and to send the remote control instruction to the remote surgical robot;
  • the remote surgical robot is specifically configured to control its mechanical arm to perform the surgical operation on the site to be treated according to the motion path in the remote control instruction, wherein the mechanical arm is mounted on the site to be treated.
  • Further, the user control instruction is at least one of the following: a voice instruction, a gesture instruction, or a touch instruction.
  • a three-dimensional projection device is provided on the remote surgical robot, and the three-dimensional projection device is connected to the control device;
  • the three-dimensional projection device is configured to receive the adjusted three-dimensional holographic model sent by the control device and display the adjusted three-dimensional holographic model.
  • Further, the navigation device includes a photosensitive ball, a photosensitive device, and a control device, wherein the photosensitive ball is disposed on the site to be treated and the photosensitive device is connected to the control device;
  • the photosensitive device is configured to identify second pose information of the photosensitive ball, which moves together with the site to be treated; to determine the first pose information of the site from the second pose information; and to send the first pose information to the mixed reality device through the control device.
  • Further, the mixed reality device is also configured to: obtain a three-dimensional model of the site to be treated, where the model is generated by three-dimensional reconstruction from computed tomography (CT) or magnetic resonance imaging (MRI) data of the site; and
  • generate an initial three-dimensional holographic model from that three-dimensional model.
  • Further, the mixed reality device is specifically configured to: convert the first pose information into the coordinate system of the three-dimensional holographic model of the site to be treated, and adjust the model according to the converted first pose information.
  • Further, the first pose information includes three-dimensional position information and/or an angle value.
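As a concrete illustration of this claim, the first pose information can be modeled as a small record type. This is a minimal sketch; the field names and the single-angle convention are illustrative assumptions, since the patent specifies only "three-dimensional position information and/or an angle value".

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PoseInfo:
    """First pose information reported by the navigation device.

    The patent states only that the pose comprises three-dimensional
    position information and/or an angle value; the field names here
    are illustrative assumptions.
    """
    x: float  # position in the navigation device's coordinate system
    y: float
    z: float
    angle: Optional[float] = None  # posture of the tracked part, if known

pose = PoseInfo(x=12.5, y=-3.0, z=40.2, angle=15.0)
print(pose.angle)  # -> 15.0
```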
  • the present invention provides a mixed reality-based control system.
  • The system includes a navigation device and a mixed reality device that are connected to each other. The navigation device obtains first pose information of the patient's site to be treated and sends it to the mixed reality device; after receiving it, the mixed reality device adjusts the pose of the currently displayed three-dimensional holographic model of the site accordingly, obtaining and displaying the model corresponding to the current first pose information.
  • Through the adjusted three-dimensional holographic model of the site to be treated, this solution shows the doctor the real-time actual condition of the site directly, so the doctor no longer needs to picture it mentally from multiple two-dimensional images.
  • This makes it convenient for doctors to give surgical guidance or perform operations based on the currently displayed three-dimensional holographic model, improves surgical accuracy, and largely avoids surgical errors.
  • FIG. 1 is a schematic structural diagram 1 of a mixed reality-based control system provided by Embodiment 1 of the present invention.
  • FIG. 2 is a schematic structural diagram 2 of a mixed reality-based control system according to Embodiment 1 of the present invention.
  • FIG. 3 is a schematic structural diagram of a mixed reality-based control system according to Embodiment 2 of the present invention.
  • FIG. 4 is a schematic structural diagram 1 of a mixed reality-based control system provided by Embodiment 3 of the present invention.
  • FIG. 5 is a second schematic structural diagram of a mixed reality-based control system according to Embodiment 3 of the present invention.
  • FIG. 1 is a schematic structural diagram 1 of a mixed reality-based control system according to Embodiment 1 of the present invention. As shown in FIG. 1, the system includes:
  • the navigation device 101 is configured to acquire first pose information of a patient's to-be-processed part, and send the first pose information to the mixed reality device 102;
  • the mixed reality device 102 is configured to receive the first pose information, adjust the three-dimensional holographic model of the part to be processed according to the first pose information, and obtain and display the adjusted three-dimensional holographic model.
  • The connection referred to above includes both a physical wired connection and a wireless connection for data transmission.
  • Before first receiving the first pose information from the navigation device 101, the mixed reality device 102 may display an initial three-dimensional holographic model of the site to be treated, which may be generated as follows:
  • the mixed reality device 102 is also configured to: obtain a three-dimensional model of the site, reconstructed from its CT data; and
  • generate the initial three-dimensional holographic model from that three-dimensional model.
  • After receiving the first pose information sent by the navigation device 101, the mixed reality device 102 can adjust the currently displayed three-dimensional holographic model of the site according to the newly received first pose information, showing the doctor the patient's actual condition in real time.
  • One positioning method of the navigation device 101 is magnetic positioning. Such a system generally comprises three magnetic-field generators and one magnetic-field detector: each generator coil defines one spatial direction, and the detector coil senses the low-frequency magnetic field emitted by the generators through air or soft tissue. From the relative positions of the generators and the received signals, the spatial position of the detector can be determined, positioning the target with an accuracy of up to 2 mm.
  • This positioning method is low in cost, convenient, and flexible, and there is no line-of-sight blocking problem between the detector and the generators.
  • Another positioning method of the navigation device 101 is ultrasonic positioning, whose principle is ultrasonic distance measurement.
  • Such a system generally consists of an ultrasonic transmitter, a receiver, a surgical instrument, and a computer.
  • The transmitter is mounted on a frame and the receiver on the surgical instrument.
  • The relative distance between transmitter and receiver is computed from a fixed speed of sound; each transmitter is then taken as the center of a sphere whose radius is that distance, and the intersection of the spheres is the spatial position of the receiver.
  • With an array receiver, high-definition images can be constructed by time-shifting, scaling, and intelligently summing the echo energy. Under strict laboratory conditions, the accuracy of ultrasonic positioning can reach 0.4 mm.
  • The disadvantage of ultrasonic positioning is its susceptibility to environmental noise; moreover, because the system assumes a constant speed of sound in air, air temperature, airflow, and non-uniformity all affect accuracy.
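The sphere-intersection step of the ultrasonic method can be sketched as classical trilateration. A minimal illustration, assuming three transmitters at known positions and distances already derived from time of flight at a fixed speed of sound (the function and variable names are illustrative):

```python
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Locate a receiver from its distances to three transmitters.

    Each distance r_i defines a sphere around transmitter p_i; the
    receiver lies on the intersection of the three spheres. Returns the
    two mirror-image candidate points; the application picks the one on
    the correct side of the transmitter plane.
    """
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    # Build an orthonormal frame anchored at p1.
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)
    i = ex.dot(p3 - p1)
    ey = p3 - p1 - i * ex
    ey = ey / np.linalg.norm(ey)
    ez = np.cross(ex, ey)
    d = np.linalg.norm(p2 - p1)
    j = ey.dot(p3 - p1)
    # Solve the sphere equations in that frame.
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2) / (2 * j) - (i / j) * x
    z = np.sqrt(max(r1**2 - x**2 - y**2, 0.0))
    base = p1 + x * ex + y * ey
    return base + z * ez, base - z * ez
```

With transmitters at (0, 0, 0), (1, 0, 0), and (0, 1, 0), a receiver at (0.2, 0.3, 0.5) is recovered from its three distances, up to the mirror ambiguity.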
  • a further positioning method of the navigation device 101 is optical positioning, where the optical positioning can be used to obtain the first pose information as follows:
  • FIG. 2 is a schematic structural diagram 2 of a mixed reality-based control system according to Embodiment 1 of the present invention.
  • the navigation device 101 includes:
  • a photosensitive ball 1011, a photosensitive device 1012, and a control device 1013, wherein the photosensitive ball 1011 is disposed on the site to be treated and the photosensitive device 1012 is connected to the control device 1013;
  • the photosensitive device 1012 is configured to identify second pose information of the photosensitive ball 1011, which moves with the site to be treated; to determine the first pose information of the site from the second pose information; and to send the first pose information to the mixed reality device 102 through the control device 1013.
  • With optical positioning, the obtained data are highly accurate and the setup is flexible and convenient; multiple targets can be tracked by placing different photosensitive balls on different sites to be treated.
  • The ball's fixture is rigidly connected to the site to be treated, so pose changes of the ball stay synchronized with those of the site; the first pose information of the site can therefore be determined from the second pose information of the ball.
  • Taking a leg fracture as an example: photosensitive balls 1011 are attached to the two broken ends of the fracture, and the photosensitive device 1012 is placed where it can recognize them. When a fractured end moves, its ball moves with it, so the photosensitive device 1012 recognizes the second pose information of the ball, determines the first pose information of the fractured end from it, and sends that information to the mixed reality device 102 through the control device 1013. The mixed reality device 102 then adjusts the pose of the fracture's three-dimensional holographic model currently shown to the doctor according to the newly received first pose information, showing the doctor the actual posture of the fractured end.
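The step from the ball's (second) pose to the part's (first) pose can be sketched as one rigid-transform composition, since the ball is rigidly fixed to the part. A minimal illustration with homogeneous 4×4 matrices; the calibration transform and all names are illustrative assumptions:

```python
import numpy as np

def translation(t):
    """Homogeneous 4x4 transform for a pure translation t = (x, y, z)."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

def part_pose_from_marker(T_marker, T_marker_to_part):
    """Derive the part's (first) pose from the ball's (second) pose.

    Because the photosensitive ball is rigidly fixed to the part, the
    marker-to-part transform is a constant calibrated once; afterwards
    every tracked marker pose yields the part pose by composition.
    """
    return T_marker @ T_marker_to_part

# Ball tracked at (1, 2, 3); part sits 0.5 units along z from the ball,
# so the derived part pose has translation (1, 2, 3.5).
T_part = part_pose_from_marker(translation((1, 2, 3)), translation((0, 0, 0.5)))
```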
  • After the mixed reality device 102 shows the doctor the actual posture of the fractured end, the doctor can, on the one hand, give surgical guidance based on it; on the other hand, the doctor can use the actual posture of the fractured end to control the remote robot to operate and reset the patient's fractured end through the remote surgical robot.
  • In addition, the mixed reality device 102 is specifically configured to convert the first pose information into the model's coordinate system before adjusting the model.
  • For example, the first pose information of the patient's site acquired by the navigation device 101 includes three-dimensional position information (x, y, z) and an angle value α, both referenced to the coordinate system of the navigation device 101. To show the doctor the actual condition of the site, the mixed reality device 102 converts (x, y, z) and α into the coordinate system of the site's three-dimensional holographic model, obtaining position information (x1, y1, z1) and angle value β, and then adjusts the model with (x1, y1, z1) and β to obtain the holographic model corresponding to that pose.
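The coordinate-system conversion described above can be sketched as follows. This assumes the navigation-to-model relationship is a known 4×4 homogeneous transform and that the single angle value is measured about a shared z-axis; both are illustrative assumptions, as the patent does not fix these details:

```python
import numpy as np

def to_model_frame(T_nav_to_model, position, angle_deg):
    """Convert a pose from the navigation device's coordinate system to
    the coordinate system of the three-dimensional holographic model.

    T_nav_to_model is the calibrated 4x4 homogeneous transform between
    the two frames. Returns ((x1, y1, z1), beta).
    """
    p = np.append(np.asarray(position, dtype=float), 1.0)
    x1, y1, z1, _ = T_nav_to_model @ p
    # For a rotation purely about the shared z-axis, angle offsets add.
    rz = np.degrees(np.arctan2(T_nav_to_model[1, 0], T_nav_to_model[0, 0]))
    return (x1, y1, z1), angle_deg + rz
```

For example, with a frame offset of a 90° rotation about z plus a shift of (1, 0, 0), the navigation pose (x, y, z) = (1, 0, 0), α = 10° maps to (x1, y1, z1) = (1, 1, 0), β = 100°.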
  • the first posture information includes: three-dimensional position information and/or angle values, where the three-dimensional position information characterizes the position of the part to be processed, and the angle value characterizes the posture of the part to be processed.
  • This embodiment provides a mixed reality-based control system.
  • The system includes a navigation device and a mixed reality device that are connected. The navigation device obtains first pose information of the patient's site to be treated and sends it to the mixed reality device; upon receiving it, the mixed reality device adjusts the pose of the currently displayed three-dimensional holographic model of the site to obtain and display the model corresponding to the current first pose information.
  • Through the adjusted three-dimensional holographic model of the site to be treated, this solution directly shows the doctor its real-time actual condition, so the doctor no longer needs to judge it manually from multiple two-dimensional images. It is convenient for doctors to give surgical guidance or perform operations based on the currently displayed model, which improves surgical accuracy and largely avoids surgical errors.
  • FIG. 3 is a schematic structural diagram of a mixed reality-based control system provided by Embodiment 2 of the present invention. Based on Embodiment 1, as shown in FIG. 3, the system further includes: a remote surgical robot 201 and a control device 202, wherein the control device 202 is connected to the remote surgical robot 201;
  • the control device 202 is configured to receive a user control instruction, generate the remote control instruction according to the user control instruction, and send the remote control instruction to the remote surgical robot 201;
  • the remote surgical robot 201 is used to obtain remote control instructions and perform surgical operations on the to-be-treated site according to the remote control instructions.
  • The user control instruction is at least one of the following: a voice instruction, a gesture instruction, or a touch instruction.
  • The control device 202 may be a handle/device that controls the remote surgical robot 201; alternatively, it may be a controller set inside the mixed reality device 102 to control the virtual three-dimensional holographic model.
  • Again taking a leg fracture as an example: after the mixed reality device 102 shows the doctor the actual posture of the fractured end, the doctor can operate the control device 202 according to what is seen, so as to control the remote surgical robot 201 through the control device 202 to surgically reset the fractured end.
  • For example, with the control device 202 as a control handle, the doctor performs a manual surgical reduction through the remote surgical robot 201; that is, the doctor uses the robot in place of the doctor's own hands on the fractured end, according to its observed actual condition.
  • Specifically, after receiving a touch instruction, the control handle converts it into corresponding remote control instructions such as grasp or move left, and controls the mechanical arm of the remote surgical robot 201 to grasp, move left, and so on, so as to reset the fractured end. During the operation, the navigation device 101 continuously collects the first pose information of the fractured end and sends it to the mixed reality device 102, which adjusts and displays the pose of the fracture's three-dimensional holographic model according to the actual situation. Based on the currently displayed model, the doctor then uses the control handle to command the next surgical step, until the reduction of the fractured end is complete.
  • Taking a lesion on one of the patient's bones as an example: the photosensitive ball 1011 is set on that bone; the photosensitive device 1012 recognizes the ball's second pose information, determines the bone's first pose information from it, and sends it to the mixed reality device 102 through the control device 1013 of the navigation device 101. The mixed reality device 102 adjusts the currently displayed three-dimensional holographic model of the bone accordingly and displays the adjusted model.
  • Seeing the adjusted model, the doctor can intuitively observe the actual condition of the lesion on the bone and operate the control device 202 to have the remote surgical robot 201 surgically resect the lesion. During the operation, the navigation device 101 likewise collects the bone's first pose information in real time and sends it to the mixed reality device 102, which adjusts and displays the bone's model according to the actual situation, intuitively showing the doctor the bone's current condition and facilitating control of the robot for the next resection step.
  • In this embodiment, when performing a surgical operation, the doctor can control the remote surgical robot to operate on the site to be treated based on the three-dimensional holographic model displayed on the mixed reality device, thereby providing the patient with good medical service, improving surgical accuracy, and largely avoiding surgical errors.
  • FIG. 4 is a schematic structural diagram 1 of a mixed reality-based control system according to Embodiment 3 of the present invention.
  • the control device 202 is provided in the mixed reality device 102;
  • the control device 202 is specifically configured to receive the user control instruction and adjust the adjusted three-dimensional holographic model according to it to generate the remote control instruction, wherein the remote control instruction includes a motion path indicating the adjustment process applied to the adjusted three-dimensional holographic model; and to send the remote control instruction to the remote surgical robot 201;
  • the remote surgical robot 201 is specifically configured to control its mechanical arm to perform the surgical operation on the site to be treated according to the motion path in the remote control instruction, wherein the mechanical arm is mounted on the site to be treated.
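The motion-path mechanism of this embodiment can be sketched as a record-and-replay structure: the controller records the surgeon's adjustments to the holographic model as waypoints, and the robot replays them with its arm. The waypoint format and method names are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Waypoint = Tuple[float, float, float]

@dataclass
class MotionPath:
    """Motion path embedded in the remote control instruction.

    Records each adjustment the surgeon makes to the virtual model;
    the remote robot replays the same waypoints with its mechanical arm.
    """
    waypoints: List[Waypoint] = field(default_factory=list)

    def record(self, pose: Waypoint) -> None:
        """Append one adjustment of the holographic model."""
        self.waypoints.append(pose)

    def replay(self, move_arm: Callable[[Waypoint], None]) -> None:
        """Drive the arm through the surgeon's recorded path."""
        for pose in self.waypoints:
            move_arm(pose)

path = MotionPath()
path.record((0.0, 0.0, 0.0))
path.record((0.0, 0.0, 0.01))
executed = []
path.replay(executed.append)
assert executed == path.waypoints
```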
  • control device 202 is a controller provided in the mixed reality device 102 to control the virtual three-dimensional holographic model.
  • a doctor may perform a virtual operation on the three-dimensional holographic model displayed by the mixed reality device 102. Therefore, the controller can determine the motion path according to the doctor's operation on the virtual three-dimensional holographic model.
  • Again taking a leg fracture as an example: after the mixed reality device 102 shows the doctor the three-dimensional holographic model of the fractured end, the doctor can perform a virtual operation on it, for instance moving the virtual bones in the model to complete a virtual reduction. The control device 202 provided in the mixed reality device 102 then generates, from the doctor's movement of the virtual bones, a remote control instruction containing the corresponding motion path, and sends it to the remote surgical robot 201.
  • Because the mechanical arm of the remote surgical robot 201 is fixed to the fractured end, upon receiving the remote control instruction the robot can drive the arm along the motion path to automatically complete the actual bone reduction. After the arm has executed the motion path, the navigation device 101 sends the bone's current first pose information to the mixed reality device 102.
  • The mixed reality device 102 then displays the three-dimensional holographic model of the fractured end based on that pose, so the doctor can check the result of the arm's reduction.
  • FIG. 5 is a schematic structural diagram 2 of a mixed reality-based control system according to Embodiment 3 of the present invention.
  • a three-dimensional projection device 301 is provided on the remote surgical robot 201, and the three-dimensional projection device 301 is connected to the control device 202;
  • the three-dimensional projection device 301 is configured to receive the adjusted three-dimensional holographic model sent by the control device 202 and display the adjusted three-dimensional holographic model.
  • the adjusted three-dimensional holographic model can also be displayed to the doctor on the patient's side through the remote surgical robot, so that the doctors of the two places can communicate and learn.
  • the mixed reality device 102 in this solution may use a wearable device, such as a helmet-type display device.
  • By wearing the device, the doctor observes the remote surgical scene in the field of view.
  • In this embodiment, before the operation, the mechanical arm of the remote surgical robot is set on the site to be treated, and the mixed reality device is provided with a control device able to determine the motion path. During the operation, the doctor can perform a virtual surgical operation on the three-dimensional holographic model presented in the field of view of the mixed reality device 102, that is, operate on the virtual site to be treated in the model.
  • In this way, the control device provided in the mixed reality device derives the motion path of the whole process from the doctor's virtual operation, so that the mechanical arm mounted on the site automatically carries out the operation along that path, improving the quality and efficiency of the completed operation.
  • Besides remote guidance and remote surgery, this solution can also be applied to local guidance and local surgery.

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Robotics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Manipulator (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A mixed reality-based control system, comprising a navigation device (101) and a mixed reality device (102), wherein the navigation device (101) is connected to the mixed reality device (102). The navigation device (101) is configured to obtain first pose information of a patient's site to be treated and send the first pose information to the mixed reality device (102); the mixed reality device (102) is configured to receive the first pose information, adjust the three-dimensional holographic model of the site to be treated according to it, and obtain and display the adjusted three-dimensional holographic model. The system can accurately reflect the actual condition of the patient's site to be treated, facilitates surgical guidance and operation by the doctor, and reduces surgical errors.

Description

Mixed reality-based control system

Technical field

The present invention relates to the field of medical technology, and in particular to a mixed reality-based control system.

Background

To provide patients with good medical service, doctors can currently give remote guidance or perform remote surgery.

In the prior art, during remote guidance or remote surgery, remotely transmitted two-dimensional images can be obtained, where the two-dimensional images are captured by a camera or similar equipment from different angles of the patient's site to be treated. The doctor can then view the condition of the site from the two-dimensional images and give surgical guidance or operate.

However, in the prior art a single two-dimensional image shows only part of the patient's site to be treated; the doctor must manually combine multiple two-dimensional images to picture in the mind the actual condition of the site and the actual location of the disease. Two-dimensional images therefore cannot accurately reflect the actual condition of the patient's site to be treated, which is inconvenient for surgical guidance or operation and easily leads to surgical errors.
Summary of the invention

The present invention provides a mixed reality-based control system that can accurately reflect the actual condition of a patient's site to be treated, facilitates surgical guidance or operation by the doctor, and reduces surgical errors.

The present invention provides a mixed reality-based control system, comprising:

a navigation device and a mixed reality device, wherein the navigation device is connected to the mixed reality device;

the navigation device is configured to obtain first pose information of the patient's site to be treated and send the first pose information to the mixed reality device;

the mixed reality device is configured to receive the first pose information, adjust the three-dimensional holographic model of the site to be treated according to the first pose information, and obtain and display the adjusted three-dimensional holographic model.

Further, the system also includes a remote surgical robot;

the remote surgical robot is configured to obtain remote control instructions and perform surgical operations on the site to be treated according to the remote control instructions.

Further, the system also includes a control device, wherein the control device is connected to the remote surgical robot;

the control device is configured to receive a user control instruction, generate the remote control instruction according to the user control instruction, and send the remote control instruction to the remote surgical robot.

Further, the control device is provided in the mixed reality device;

the control device is specifically configured to receive the user control instruction and adjust the adjusted three-dimensional holographic model according to it to generate the remote control instruction, wherein the remote control instruction includes a motion path indicating the adjustment process applied to the adjusted three-dimensional holographic model; and to send the remote control instruction to the remote surgical robot;

the remote surgical robot is specifically configured to control its mechanical arm to perform the surgical operation on the site to be treated according to the motion path in the remote control instruction, wherein the mechanical arm is mounted on the site to be treated.

Further, the user control instruction is at least one of the following: a voice instruction, a gesture instruction, or a touch instruction.

Further, a three-dimensional projection device is provided on the remote surgical robot and is connected to the control device;

the three-dimensional projection device is configured to receive the adjusted three-dimensional holographic model sent by the control device and display it.

Further, the navigation device includes a photosensitive ball, a photosensitive device, and a control device, wherein the photosensitive ball is disposed on the site to be treated and the photosensitive device is connected to the control device;

the photosensitive device is configured to identify second pose information of the photosensitive ball, which moves with the site to be treated; to determine the first pose information of the site from the second pose information; and to send the first pose information to the mixed reality device through the control device.

Further, the mixed reality device is also configured to:

obtain a three-dimensional model of the site to be treated, where the three-dimensional model is generated by three-dimensional reconstruction from computed tomography (CT) data or magnetic resonance imaging (MRI) data of the site; and

generate an initial three-dimensional holographic model from the three-dimensional model.

Further, the mixed reality device is specifically configured to:

convert the first pose information into the coordinate system of the three-dimensional holographic model of the site to be treated to obtain converted first pose information, and adjust the model according to the converted first pose information.

Further, the first pose information includes three-dimensional position information and/or an angle value.

The present invention provides a mixed reality-based control system comprising a connected navigation device and mixed reality device. The navigation device obtains first pose information of the patient's site to be treated and sends it to the mixed reality device, which, upon receiving it, adjusts the pose of the currently displayed three-dimensional holographic model of the site according to the first pose information, obtaining and displaying the model corresponding to the current first pose information. Through the adjusted model, this solution directly shows the doctor the real-time actual condition of the site, so the doctor no longer needs to picture it mentally from multiple two-dimensional images; doctors can conveniently give surgical guidance or perform operations based on the currently displayed model, which improves surgical accuracy and largely avoids surgical errors.
Brief description of the drawings

To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; those of ordinary skill in the art can also obtain other drawings from them.

FIG. 1 is a first schematic structural diagram of a mixed reality-based control system provided by Embodiment 1 of the present invention;

FIG. 2 is a second schematic structural diagram of a mixed reality-based control system provided by Embodiment 1 of the present invention;

FIG. 3 is a schematic structural diagram of a mixed reality-based control system provided by Embodiment 2 of the present invention;

FIG. 4 is a first schematic structural diagram of a mixed reality-based control system provided by Embodiment 3 of the present invention;

FIG. 5 is a second schematic structural diagram of a mixed reality-based control system provided by Embodiment 3 of the present invention.

Detailed description of the embodiments

To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used in the specification are only for describing specific embodiments and are not intended to limit the application. The term "and/or" used herein includes any and all combinations of one or more of the associated listed items. Some embodiments of the present application are described in detail below with reference to the drawings; where no conflict arises, the following embodiments and their features may be combined with one another. Note that "first" and "second" herein are used only for distinction and do not limit any order.
FIG. 1 is a first schematic structural diagram of a mixed reality-based control system provided by Embodiment 1 of the present invention. As shown in FIG. 1, the system includes:

a navigation device 101 and a mixed reality device 102, wherein the navigation device 101 is connected to the mixed reality device 102;

the navigation device 101 is configured to obtain first pose information of the patient's site to be treated and send the first pose information to the mixed reality device 102;

the mixed reality device 102 is configured to receive the first pose information, adjust the three-dimensional holographic model of the site to be treated according to it, and obtain and display the adjusted three-dimensional holographic model.

The connection referred to above includes both a physical wired connection and a wireless connection for data transmission.

In this embodiment, before first receiving the first pose information sent by the navigation device 101, the mixed reality device 102 may display an initial three-dimensional holographic model of the site to be treated, which may be generated as follows:

the mixed reality device 102 is also configured to: obtain a three-dimensional model of the site to be treated, where the three-dimensional model is generated by three-dimensional reconstruction from the site's CT data; and generate the initial three-dimensional holographic model from the three-dimensional model.

Then, after receiving the first pose information sent by the navigation device 101, the mixed reality device 102 can adjust the currently displayed three-dimensional holographic model of the site according to the newly received first pose information, showing the doctor the patient's actual condition in real time.

Specifically, one positioning method of the navigation device 101 is magnetic positioning. Such a system generally comprises three magnetic-field generators and one magnetic-field detector: each generator coil defines one spatial direction, and the detector coil senses the low-frequency magnetic field emitted by the generators through air or soft tissue. From the relative positions of the generators and the received signals, the spatial position of the detector can be determined, positioning the target with an accuracy of up to 2 mm. This method is low in cost, convenient, and flexible, and there is no line-of-sight blocking problem between detector and generators.

Another positioning method of the navigation device 101 is ultrasonic positioning, whose principle is ultrasonic distance measurement. Such a system generally consists of an ultrasonic transmitter, a receiver, a surgical instrument, and a computer. The transmitter is mounted on a frame and the receiver on the surgical instrument; the relative distance between transmitter and receiver is computed from a fixed speed of sound, and with each transmitter as the center of a sphere whose radius is that distance, the intersection of the spheres is the spatial position of the receiver. With an array receiver, high-definition images can be constructed by time-shifting, scaling, and intelligently summing the echo energy; under strict laboratory conditions, ultrasonic positioning accuracy can reach 0.4 mm. Its disadvantage is susceptibility to environmental noise, and because the system assumes a constant speed of sound in air, air temperature, airflow, and non-uniformity all affect accuracy.

A further positioning method of the navigation device 101 is optical positioning, which can obtain the first pose information as follows:
As shown in FIG. 2, which is a second schematic structural diagram of a mixed reality-based control system provided by Embodiment 1 of the present invention, the navigation device 101 includes:

a photosensitive ball 1011, a photosensitive device 1012, and a control device 1013, wherein the photosensitive ball 1011 is disposed on the site to be treated and the photosensitive device 1012 is connected to the control device 1013;

the photosensitive device 1012 is configured to identify second pose information of the photosensitive ball 1011, which moves with the site to be treated; to determine the first pose information of the site from the second pose information; and to send the first pose information to the mixed reality device 102 through the control device 1013.

With optical positioning, the obtained data are highly accurate and the setup is flexible and convenient; multiple targets can be tracked by placing different photosensitive balls on different sites to be treated.

The ball's fixture is rigidly connected to the site to be treated, so pose changes of the ball stay synchronized with those of the site, and the first pose information of the site can be determined from the second pose information of the ball.

Taking a leg fracture as an example: photosensitive balls 1011 are attached to the two broken ends of the fracture, and the photosensitive device 1012 is placed where it can recognize them. When a fractured end moves, its ball moves with it, so the photosensitive device 1012 recognizes the ball's second pose information, determines the fractured end's first pose information from it, and sends that information to the mixed reality device 102 through the control device 1013. The mixed reality device 102 then adjusts the pose of the fracture's three-dimensional holographic model currently shown to the doctor according to the newly received first pose information, showing the doctor the actual posture of the fractured end.

After the mixed reality device 102 shows the doctor the actual posture of the fractured end, the doctor can, on the one hand, give surgical guidance based on it; on the other hand, the doctor can use it to control the remote robot to operate and reset the patient's fractured end through the remote surgical robot.

In addition, the mixed reality device 102 is specifically configured to:

convert the first pose information into the coordinate system of the three-dimensional holographic model of the site to obtain converted first pose information, and adjust the model according to the converted first pose information.

For example, the first pose information of the patient's site acquired by the navigation device 101 includes three-dimensional position information (x, y, z) and an angle value α, both referenced to the coordinate system of the navigation device 101. To show the doctor the actual condition of the site, the mixed reality device 102 converts (x, y, z) and α into the coordinate system of the site's three-dimensional holographic model, obtaining position information (x1, y1, z1) and angle value β, and then adjusts the model with (x1, y1, z1) and β to obtain the holographic model corresponding to that pose.

The first pose information includes three-dimensional position information and/or an angle value, where the position information characterizes the position of the site to be treated and the angle value characterizes its posture.

This embodiment provides a mixed reality-based control system comprising a connected navigation device and mixed reality device. The navigation device obtains the first pose information of the patient's site to be treated and sends it to the mixed reality device, which, upon receiving it, adjusts the pose of the currently displayed three-dimensional holographic model of the site to obtain and display the model corresponding to the current first pose information. Through the adjusted model, this solution directly shows the doctor the real-time actual condition of the site, so the doctor no longer needs to judge it manually from multiple two-dimensional images; doctors can conveniently give surgical guidance or perform operations based on the currently displayed model, which improves surgical accuracy and largely avoids surgical errors.
图3为本发明实施例二提供的一种基于混合现实的控制系统的结构示意图,在实施例一的基础上,如图3所示,该系统还包括:
远程手术机器人201和控制设备202,其中,所述控制设备202与所述远程手术机器人201连接;
所述控制设备202,用于接收用户控制指令,根据所述用户控制指令生成所述远程控制指令,并将所述远程控制指令发送给所述远程手术机器人201;
所述远程手术机器人201,用于获取远程控制指令,并根据所述远程控制指令对所述待处理部位进行手术操作。
其中,用户控制指令为以下的至少一种:语音指令、手势指令、触碰指令;控制设备202可以是控制远程手术机器人201的操控柄/设备,另外,控制设备202还可以是设置在混合现实设备102内的以控制虚拟的三维全息模型的控制器。
同样以病人腿部骨折举例来说:混合现实设备102向医生展示骨折断端的实际位姿情况之后,医生可根据看到的骨折断端的实际情况,操控控制设 备201,以通过控制设备201控制远程手术机器人201对骨折断端进行手术复位。例如,以控制设备202为操控柄为例,医生通过远程手术机器人201实现手动的手术复位,也即,医生根据看到的骨折断端的实际情况,利用远程手术机器人201代替自己对骨折断端进行手动操作,具体的,比如,操控柄接收到触碰指令后,将触碰指令转换为相应的抓取、向左等远程控制指令,控制远端的远程手术机器人201的机械臂进行抓取或者是向左移动等,以对骨折断端进行手术复位,并且,在手术过程中,导航设备101将实时采集骨折断端的第一位姿信息,并发送给混合现实设备102,以通过混合现实设备102对骨折的三维全息模型的位姿信息按照实际情形进行调整并显示,然后医生基于当前显示的骨折的三维全息模型,再控制操控柄来控制远程手术机器人201进行下一步的手术操作,直至骨折断端复位完成。
Taking a lesion on one of the patient's bones as an example: an optical marker ball 1011 is attached to the bone; the optical sensing device 1012 detects the ball's second pose information, determines the bone's first pose information from it, and sends it to the mixed reality device 102 through the control apparatus 1013 of the navigation device 101. The mixed reality device 102 adjusts the currently displayed three-dimensional holographic model of the bone according to the first pose information, and obtains and displays the adjusted model. Seeing the adjusted model through the mixed reality device 102, the doctor can directly observe the actual state of the lesion on the bone, and thus operate the control device 202 to control the remote surgical robot 201 to excise the lesion. During the operation, the navigation device 101 likewise collects the bone's first pose information in real time and sends it to the mixed reality device 102, which adjusts and displays the bone's three-dimensional holographic model according to the actual situation, intuitively showing the doctor the bone's current actual state and facilitating control of the remote surgical robot 201 for the next step of the excision.
In this embodiment, during the operation the doctor can, based on the three-dimensional holographic model of the site to be treated displayed by the mixed reality device, operate the control device to control the remote surgical robot to operate on the site. This provides better medical service for the patient, improves surgical accuracy, and greatly reduces surgical errors.
Fig. 4 is a first schematic structural diagram of a control system based on mixed reality provided by Embodiment 3 of the present invention. Building on Embodiment 2, as shown in Fig. 4, the control device 202 is disposed in the mixed reality device 102;
the control device 202 is specifically configured to receive the user control command, and adjust the adjusted three-dimensional holographic model according to the user control command to generate the remote control command, wherein the remote control command includes a motion path, and the motion path indicates the adjustment process applied to the adjusted three-dimensional holographic model; and send the remote control command to the remote surgical robot 201;
the remote surgical robot 201 is specifically configured to control its robotic arm to perform the surgical operation on the site to be treated along the motion path in the remote control command, wherein the robotic arm is disposed on the site to be treated.
In this embodiment, the control device 202 is a controller disposed in the mixed reality device 102 for manipulating the virtual three-dimensional holographic model. Specifically, the doctor can perform a virtual surgical operation on the three-dimensional holographic model displayed by the mixed reality device 102, and the controller determines the motion path from the doctor's operation on the virtual model.
Again taking the patient's leg fracture as an example: after the mixed reality device 102 shows the doctor the three-dimensional holographic model of the fracture ends, the doctor can perform a virtual surgical operation on the displayed model. For example, if the doctor moves a virtual bone in the holographic fracture model and completes a virtual reduction, the control device 202 disposed in the mixed reality device 102 generates, from the doctor's movement of the virtual bone, a remote control command containing the motion path corresponding to that process, and sends it to the remote surgical robot 201. Since the robotic arm of the remote surgical robot 201 is fixed on the fracture ends, upon receiving the remote control command the robot can drive its arm along the motion path to perform the operation, automatically completing the actual bone reduction. After the arm has executed the motion path, the navigation device 101 sends the bone's current first pose information to the mixed reality device 102, which displays the three-dimensional holographic model of the fracture ends based on that information, so that the doctor can inspect the result of the arm's reduction.
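The record-and-replay mechanism of this embodiment, capturing the doctor's virtual manipulation as a motion path and having the arm retrace it, can be sketched like this. Class and method names are our own illustration under the assumption that a motion path is simply an ordered list of poses:

```python
class VirtualSurgeryRecorder:
    """Records the poses through which the doctor moves the virtual
    bone in the holographic model."""
    def __init__(self):
        self.path = []

    def record(self, pose):
        self.path.append(pose)

    def to_remote_command(self):
        # The remote control command carries the whole motion path.
        return {"motion_path": list(self.path)}

class RoboticArm:
    """Stand-in for the arm fixed on the fracture ends."""
    def __init__(self, pose):
        self.pose = pose
        self.visited = []

    def move_to(self, pose):
        self.pose = pose
        self.visited.append(pose)

def replay(arm, command):
    """Drive the arm through every waypoint of the motion path."""
    for pose in command["motion_path"]:
        arm.move_to(pose)

rec = VirtualSurgeryRecorder()
for pose in [(0, 0, 0), (0, 1, 0), (0, 2, 1)]:  # doctor's virtual moves
    rec.record(pose)

arm = RoboticArm(pose=(0, 0, 0))
replay(arm, rec.to_remote_command())
```

A real system would interpolate, bound velocities, and verify each waypoint against the tracked pose before moving on; the sketch keeps only the path-as-command idea.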
Further, as shown in Fig. 5, which is a second schematic structural diagram of a control system based on mixed reality provided by Embodiment 3 of the present invention, a three-dimensional projection device 301 is disposed on the remote surgical robot 201, and the three-dimensional projection device 301 is connected to the control device 202;
the three-dimensional projection device 301 is configured to receive the adjusted three-dimensional holographic model sent by the control device 202, and display the adjusted model.
During a remote surgical operation, the adjusted three-dimensional holographic model can thus also be shown, through the remote surgical robot, to the doctor on the patient's side, so that doctors at the two locations can communicate and learn from each other.
The mixed reality device 102 in this solution may be a wearable device, such as a head-mounted display, through which the doctor observes the remote surgical scene in his or her field of view.
In this embodiment, before the operation the robotic arm of the remote surgical robot is placed on the site to be treated, and a control device capable of determining a motion path is provided in the mixed reality device. During the operation, the doctor can then perform a virtual surgical operation on the three-dimensional holographic model that the mixed reality device 102 presents in the field of view, that is, operate on the virtual site to be treated within the model. The control device in the mixed reality device derives the motion path of the entire process from the doctor's virtual operation, so that the robotic arm placed on the site to be treated automatically performs the operation along this path, improving both the quality and the efficiency of the surgery. Besides remote guidance and remote surgery, this solution can also be applied to local guidance and local surgery.
Those skilled in the art will readily conceive of other embodiments of the present disclosure after considering the specification and practicing the invention disclosed herein. The present invention is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

  1. A control system based on mixed reality, characterized by comprising:
    a navigation device and a mixed reality device, wherein the navigation device is connected to the mixed reality device;
    the navigation device is configured to acquire first pose information of a site to be treated of a patient, and send the first pose information to the mixed reality device;
    the mixed reality device is configured to receive the first pose information, adjust a three-dimensional holographic model of the site to be treated according to the first pose information, and obtain and display the adjusted three-dimensional holographic model.
  2. The system according to claim 1, characterized in that the system further comprises: a remote surgical robot;
    the remote surgical robot is configured to acquire a remote control command, and perform a surgical operation on the site to be treated according to the remote control command.
  3. The system according to claim 2, characterized in that the system further comprises a control device, wherein the control device is connected to the remote surgical robot;
    the control device is configured to receive a user control command, generate the remote control command according to the user control command, and send the remote control command to the remote surgical robot.
  4. The system according to claim 3, characterized in that the control device is disposed in the mixed reality device;
    the control device is specifically configured to receive the user control command, adjust the adjusted three-dimensional holographic model according to the user control command to generate the remote control command, wherein the remote control command comprises a motion path, and the motion path indicates the adjustment process applied to the adjusted three-dimensional holographic model; and send the remote control command to the remote surgical robot;
    the remote surgical robot is specifically configured to control a robotic arm of the remote surgical robot to perform the surgical operation on the site to be treated along the motion path in the remote control command, wherein the robotic arm is disposed on the site to be treated.
  5. The system according to claim 4, characterized in that the user control command is at least one of the following: a voice command, a gesture command, or a touch command.
  6. The system according to claim 4, characterized in that a three-dimensional projection device is disposed on the remote surgical robot, and the three-dimensional projection device is connected to the control device;
    the three-dimensional projection device is configured to receive the adjusted three-dimensional holographic model sent by the control device, and display the adjusted three-dimensional holographic model.
  7. The system according to claim 1, characterized in that the navigation device comprises an optical marker ball, an optical sensing device and a control apparatus, wherein the optical marker ball is disposed on the site to be treated, and the optical sensing device is connected to the control apparatus;
    the optical sensing device is configured to detect second pose information of the optical marker ball, wherein the optical marker ball moves as the site to be treated moves; determine the first pose information of the site to be treated according to the second pose information; and send the first pose information to the mixed reality device through the control apparatus.
  8. The system according to any one of claims 1-7, characterized in that the mixed reality device is further configured to:
    acquire a three-dimensional model of the site to be treated, wherein the three-dimensional model is generated by three-dimensional reconstruction from computed tomography (CT) data or magnetic resonance imaging (MRI) data of the site to be treated;
    generate an initial three-dimensional holographic model according to the three-dimensional model.
  9. The system according to any one of claims 1-7, characterized in that the mixed reality device is specifically configured to:
    transform the first pose information into the coordinate system in which the three-dimensional holographic model of the site to be treated is located, to obtain coordinate-transformed first pose information, and adjust the three-dimensional holographic model of the site to be treated according to the coordinate-transformed first pose information.
  10. The system according to any one of claims 1-7, characterized in that the first pose information comprises: three-dimensional position information and/or an angle value.
PCT/CN2018/124470 2018-12-27 2018-12-27 Control system based on mixed reality WO2020133097A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/124470 WO2020133097A1 (zh) 2018-12-27 2018-12-27 Control system based on mixed reality


Publications (1)

Publication Number Publication Date
WO2020133097A1 (zh) 2020-07-02

Family

ID=71128456


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103230304A (zh) * 2013-05-17 2013-08-07 深圳先进技术研究院 手术导航系统及其方法
CN104739519A (zh) * 2015-04-17 2015-07-01 中国科学院重庆绿色智能技术研究院 一种基于增强现实的力反馈手术机器人控制系统
CN106308946A (zh) * 2016-08-17 2017-01-11 清华大学 一种应用于立体定向手术机器人的增强现实装置及其方法
US20180303558A1 (en) * 2016-08-17 2018-10-25 Monroe Milas Thomas Methods and systems for registration of virtual space with real space in an augmented reality system



Legal Events

Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18944323; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18944323; Country of ref document: EP; Kind code of ref document: A1)