WO2023040817A1 - Control method of doctor's console, doctor's console, robot system and medium - Google Patents

Control method of doctor's console, doctor's console, robot system and medium

Info

Publication number
WO2023040817A1
WO2023040817A1 (PCT/CN2022/118380, CN2022118380W)
Authority
WO
WIPO (PCT)
Prior art keywords
display device
operator
main
surgical robot
display
Prior art date
Application number
PCT/CN2022/118380
Other languages
English (en)
French (fr)
Inventor
马申宇
王家寅
王超
梁玄清
陈功
何超
Original Assignee
上海微创医疗机器人(集团)股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海微创医疗机器人(集团)股份有限公司
Publication of WO2023040817A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30 Surgical robots
    • A61B 34/37 Master-slave robots
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/08 Accessories or related features not otherwise provided for
    • A61B 2034/301 Surgical robots for introducing or steering flexible instruments inserted into the body, e.g. catheters or endoscopes

Definitions

  • the present application relates to a control method of a doctor's console, a doctor's console, a robot system and a medium.
  • robots are also widely used in medical operations, such as various minimally invasive surgical robots, endoscopic surgical robots, etc.
  • When operating with a robot, the operator observes the tissue characteristics inside the patient's body through the display device of the console, and remotely controls the robotic arm and surgical instruments on the robot to complete the operation.
  • The inventor has realized that, in the traditional approach, the operator can only observe the tissue features inside the patient's body through a single display device, and it is difficult to change the operator's observation mode during the operation.
  • In a surgeon's operating habits, adjusting the observation posture or the observation mode is intended to change the operation direction or operation mode of the surgical instrument by adjusting the operator's own posture. Therefore, current surgical robots do not yet provide the operator with a more convenient, hand-eye-consistent adjustment method.
  • a control method of a doctor's console, a doctor's console, a robot system, and a medium are provided.
  • A method for controlling a doctor's console is provided, the doctor's console including a plurality of display devices and a master control component, the master control component being used to control a surgical robot according to a master-slave mapping relationship.
  • The method includes: detecting the main display device selected by the operator among the plurality of display devices; adjusting, according to the display pose of the main display device, the operation mapping conversion relationship between the master control component and the surgical robot; and controlling, through the master control component and according to the operation mapping conversion relationship, the surgical robot to perform operations.
  • The doctor's console includes: a memory, a processor, a plurality of display devices and a main control component; the plurality of display devices are used to display the actions of the surgical robot; the memory stores computer-readable instructions, and when the processor executes the computer-readable instructions, the steps of the method in any of the above embodiments are implemented and the operation mapping conversion relationship between the main control component and the surgical robot is adjusted; and the main control component is used to control the surgical robot to perform operations according to the operation mapping conversion relationship.
  • A robot system is provided, the system including: a surgical robot and the doctor's console of any one of the above embodiments; the doctor's console is used to generate robot control instructions; and the surgical robot is used to perform operations based on the robot control instructions.
  • a computer-readable storage medium stores computer-readable instructions thereon, and when the computer-readable instructions are executed by a processor, the steps of the method described in any of the above-mentioned embodiments are implemented.
  • the doctor's console includes a plurality of display devices and a main control part, and the main control part is used to control the surgical robot according to the master-slave mapping relationship.
  • By detecting the main display device selected by the operator from the multiple display devices, the operation mapping conversion relationship between the main control part and the surgical robot is adjusted according to the display pose of the main display device, and the surgical robot is controlled through the main control part to perform operations according to the operation mapping conversion relationship.
  • Therefore, by detecting the main display device corresponding to the operator and adjusting the operation mapping conversion relationship between the main control part and the surgical robot according to the display pose of the main display device, the surgical robot can be controlled through the adjusted relationship, which improves both the convenience and the accuracy of controlling the surgical robot.
  • FIG. 1 is an application scenario diagram of a control method of a doctor's console according to one or more embodiments;
  • FIG. 2 is a schematic diagram of a doctor's console according to one or more embodiments;
  • FIG. 3 is a schematic diagram of a display device according to one or more embodiments;
  • FIG. 4 is a schematic diagram of a display device according to another one or more embodiments;
  • FIG. 5 is a schematic diagram of the correspondence between an operator and a display device according to one or more embodiments;
  • FIG. 6 is a schematic diagram of the correspondence between an operator and a display device according to another one or more embodiments;
  • FIG. 7 is a schematic flowchart of a control method of a doctor's console according to one or more embodiments;
  • FIG. 8 is a schematic diagram of a doctor's console according to another one or more embodiments;
  • FIG. 9 is a schematic diagram of a console according to one or more embodiments;
  • FIG. 10 is a schematic diagram of an endoscope imaging system according to one or more embodiments;
  • FIG. 11 is a schematic diagram of the operator pose transformation before and after mode switching according to one or more embodiments;
  • FIG. 12 is a schematic flowchart of a control method of a doctor's console according to another one or more embodiments;
  • FIG. 13 is a schematic diagram of a control method of a doctor's console according to yet another one or more embodiments;
  • FIG. 14 is a block diagram of a control device of a doctor's console according to one or more embodiments;
  • FIG. 15 is a block diagram of a computer device according to one or more embodiments.
  • the method for controlling the doctor's console provided in the present application can be applied in the application environment shown in FIG. 1 , that is, in a robot system.
  • the robot system can include a doctor's console 10 and a surgical robot 20 .
  • the doctor console 10 communicates with the surgical robot 20 through the network.
  • Referring to FIG. 2, the doctor console 10 in FIG. 1 may include a plurality of display devices 101, a main control component 102, a memory 103 and a processor 104; the memory 103 stores computer-readable instructions, and when the computer-readable instructions are executed, the steps of the method for controlling the doctor's console are implemented and the operation mapping conversion relationship between the main control component 102 and the surgical robot 20 is adjusted.
  • Specifically, when the processor 104 executes the computer-readable instructions, it detects the main display device selected by the operator among the multiple display devices 101 and adjusts the operation mapping conversion relationship between the main control component 102 and the surgical robot 20 according to the display pose of the main display device. Further, the operator can operate the main control component 102 so that the main control component 102 controls the surgical robot 20 according to the adjusted master-slave mapping relationship.
  • the memory 103 and the processor 104 may be implemented by a server, such as a single server or a server cluster, so as to execute the control method of the doctor's console. The specific control method will be described in detail later.
  • the plurality of display devices 101 may include at least one of AR glasses and a display screen.
  • When the multiple display devices 101 include AR glasses, the AR glasses provide pose data reflecting the display pose.
  • The pose data may include position coordinate data and attitude data, and the pose data provided by the AR glasses may refer to the pose data of the operator wearing the AR glasses, such as the head pose.
  • the AR glasses can collect pose data in real time and feed it back to the server.
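As a hedged illustration (not part of the original disclosure), the pose data fed back by the AR glasses can be thought of as a timestamped position plus orientation sample; the field names and the quaternion convention below are assumptions made for this sketch only.

```python
from dataclasses import dataclass

@dataclass
class PoseSample:
    """Hypothetical pose sample streamed from the AR glasses to the server."""
    timestamp: float                                 # acquisition time in seconds (assumed clock)
    position: tuple[float, float, float]             # head position in the console frame, assumed in metres
    orientation: tuple[float, float, float, float]   # (w, x, y, z) unit quaternion, assumed convention

def is_fresh(sample: PoseSample, now: float, max_age_s: float = 0.05) -> bool:
    """Treat a sample as usable only if it arrived within the last 50 ms (illustrative threshold)."""
    return (now - sample.timestamp) <= max_age_s
```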
  • each display screen is fixed at a preset position on the doctor's console, and provides images taken by the endoscope to the doctor/nurse in a preset orientation (also called posture).
  • the doctor console provides the pose data of each display screen according to its own coordinate system.
  • the first display device 1011 and the second display device 1012 may switch modes based on the control command, that is, the multiple display devices 101 are switched from the first display mode to the second display mode.
  • the first display device 1011 can be switched from the first position to the second position based on the mode switching command, and the second display device 1012 can be switched from the third position to the fourth position based on the mode switching command.
  • the first display device 1011 can be switched from the second position to the first position based on the mode switching command, and the second display device 1012 can be switched from the fourth position to the third position based on the mode switching command, which is not limited in this application.
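The mode-to-position relationship described above can be summarized, purely for illustration, as a small lookup table; the mode and position names are assumptions, not identifiers from the patent.

```python
from enum import Enum

class DisplayMode(Enum):
    FIRST = 1    # first display mode: device 1011 at the first position, device 1012 at the third
    SECOND = 2   # second display mode: device 1011 at the second position, device 1012 at the fourth

# Assumed mapping from display mode to (position of device 1011, position of device 1012).
MODE_TO_POSITIONS = {
    DisplayMode.FIRST: ("first_position", "third_position"),
    DisplayMode.SECOND: ("second_position", "fourth_position"),
}

def target_positions(mode: DisplayMode) -> tuple[str, str]:
    """Return the target positions of the two display devices for a requested display mode."""
    return MODE_TO_POSITIONS[mode]
```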
  • the switching process of multiple display devices will be described in detail below with reference to FIGS. 2 to 6 .
  • the first display device 1011 and the second display device 1012 are respectively connected to the base 105 , and the first display device 1011 and the second display device 1012 are used for optional observation by the operator.
  • The display devices can thus be compatible with two observation modes, a first observation mode and a second observation mode, so that the different operation needs and preferences of different operators (medical staff) can be accommodated; at the same time, the operator can switch the observation mode to avoid work fatigue caused by maintaining a single operating posture for a long time.
  • The first display device 1011 can rotate around the connection axis between the first display device 1011 and the base 105, that is, rotate between a first position and a second position.
  • When the first display device 1011 is at the first position, the coordinate position of the display device is at {Me1}; when it is at the second position, the coordinate system of the display device is at {Me1}'.
  • The second display device 1012 can rotate around the connection axis between the second display device 1012 and the base 105, that is, move between a third position and a fourth position. Specifically, when the first display device 1011 is at the first position {Me1}, the coordinate position of the second display device 1012 is at the third position {Me2}'; when the first display device 1011 is at the second position {Me1}', the coordinate position of the second display device 1012 is at the fourth position {Me2}.
  • When the operator uses the first observation mode during the operation, the operator generally looks straight ahead at the first display device 1011 (refer to FIG. 3), while when using the second observation mode, the operator generally looks down at the second display device 1012 (refer to FIG. 4).
  • the operator can select and switch between the two observation modes as needed, so that the operator can change the working posture and reduce the fatigue caused by maintaining a single posture for a long time.
  • When the operator is in the first observation mode, the first display device 1011 is at the first position and the coordinate position of the first display device 1011 is located at {Me1}, referring to FIG. 5; at this time the second display device 1012 is at the third position, and based on FIG. 4 it can be seen that the coordinate position of the second display device 1012 is located at {Me2}'.
  • When switching from the first observation mode to the second observation mode through mode switching, referring to FIG. 6, the first display device 1011 is at the second position and its coordinate position is located at {Me1}', while the second display device 1012 is at the fourth position; continuing to refer to FIG. 4, at this time the coordinate position of the second display device 1012 is located at {Me2}.
  • The first display device 1011 and the second display device 1012 are adjusted in coordination to complete the mode switching.
  • The control center can also control the multiple display devices 101 to switch from the second display mode to the first display mode based on the control instruction, that is, control the first display device 1011 to switch from the second position to the first position, and control the second display device 1012 to switch from the fourth position to the third position.
  • The doctor console 10 can also include an armrest 106 and a lifting adjustment mechanism 107; the armrest 106 is used to support the operator's arm and is connected to the base 105 through the lifting adjustment mechanism 107, so that the heights of the main control part 102 and the armrest 106 can be adjusted to suit different operators.
  • the main control part 102 may be a main operator, which may include a left main operator and a right main operator, and the operation of the surgical robot can be controlled by the main operator.
  • The doctor's console may also include an imaging device, which is used to capture images including the position of the operator's head and the positions of the left and right hands, and transmit the images to the processor in real time, so that the processor performs identification processing to acquire the operator's head position and hand positions and determines the main display device based on the operator's head position and hand positions.
  • the surgical robot 20 may include at least one robotic arm 201 , and each robotic arm 201 is loaded with surgical instruments.
  • surgical instruments can be mounted on the robotic arm 201 .
  • Surgical instruments may include endoscopes, scalpels, forceps, and suture needles, among others.
  • Referring to FIG. 1 and FIG. 2, there is a certain spatial mapping relationship between the main control component 102 and the surgical robot 20, and the main control component 102 is used to control the surgical robot 20 to perform target actions, such as cutting, suturing, biopsy sampling, etc.
  • the auxiliary device may include an image terminal, such as an image trolley 30 , and at least one of an instrument table 40 , a ventilator, and an anesthesia machine 50 .
  • The control method of the doctor's console is applied to the robot system.
  • the control method of the doctor console and the control process of the surgical robot will be described in detail below in combination with the robot system.
  • a method for controlling a doctor console is provided, and the method is described by taking the application of the method to the aforementioned server as an example, including the following steps:
  • Step S702 detecting the main display device selected by the operator among the plurality of display devices.
  • the doctor console includes multiple display devices, such as a first display device and a second display device.
  • The main display device refers to the device that the operator directly observes, such as AR glasses worn on the doctor's head or a display screen directly viewed by the operator; the display device 1011 shown in FIG. 5 and the display device 1012 shown in FIG. 6 both refer to the main display device.
  • The server can collect images or pose data of the operator through the image acquisition device installed on the doctor's operating table, or through the AR glasses worn on the doctor's head, to determine the main display device selected by the operator among the plurality of display devices.
  • Step S704 adjusting the operation mapping conversion relationship between the main control component and the surgical robot according to the display pose of the main display device.
  • the display pose refers to the position and attitude of the display device.
  • The operation mapping transformation relationship refers to the relationship through which the main control component controls the surgical robot, and it may include a coordinate position relationship and a posture relationship.
  • After the server determines the main display device, it can correspondingly acquire the display pose of the main display device and determine the display mode of the display device; when it is determined that the operator has changed the display mode, the operation mapping conversion relationship between the main control part and the surgical robot is adjusted based on the display pose and used for the subsequent control of the surgical robot.
  • Step S706 according to the operation mapping conversion relationship, the main control component controls the surgical robot to perform the operation.
  • The server can control the surgical robot to perform operations through the operation mapping conversion relationship based on the operator's operation on the main control component, such as controlling the surgical robot to perform cutting, suturing, biopsy sampling, etc.
  • In the above control method, the doctor's console includes a plurality of display devices and a main control part, and the main control part is used to control the surgical robot according to the master-slave mapping relationship. By detecting the main display device selected by the operator among the multiple display devices, the operation mapping conversion relationship between the main control part and the surgical robot is then adjusted according to the display pose of the main display device, and the surgical robot is controlled through the main control part to perform operations according to the operation mapping conversion relationship.
  • Therefore, by detecting the main display device corresponding to the operator and adjusting the operation mapping conversion relationship between the main control part and the surgical robot according to the display pose of the main display device, the surgical robot can be controlled through the adjusted relationship, which improves both the convenience and the accuracy of controlling the surgical robot.
  • Detecting the main display device selected by the operator among the plurality of display devices may include: acquiring a detection image, wherein the detection image includes the operator and at least one display device; identifying the first image positional relationship between the operator's head and each display device reflected in the detection image; and determining the main display device based on the first image positional relationship.
  • the doctor's console or the operating room may be equipped with an imaging device, and the imaging device may collect images of a specified location area, such as collecting images including at least one display device on the doctor's console.
  • the server may acquire the detection image collected by the imaging device, and the detection image may include an operator and/or at least one display device.
  • the server may identify the operator and the display device in the acquired detection image, so as to identify the first image positional relationship between the operator's head and each display device.
  • The server can recognize whether the operator's head is bowed or raised, and recognize the positional distance between the operator's head and each display device, that is, obtain the first image positional relationship between the operator's head and each display device.
  • The server may determine the main display device based on the identified first image positional relationship. For example, continuing to refer to FIG. 5 and FIG. 6, when the head is bowed and the distance to the display device is relatively short, it may be determined that the main display device is the second display device; when the head is raised and the distance between the head and the display device is relatively long, it can be determined that the main display device is the first display device.
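As a minimal sketch of that decision rule, assuming the head pitch angle and head-to-screen distances have already been estimated from the detection image (the thresholds below are placeholders, not values from the patent):

```python
def select_main_display(head_pitch_deg: float,
                        dist_to_first_m: float,
                        dist_to_second_m: float,
                        bowed_threshold_deg: float = -15.0) -> str:
    """Pick the main display device from the first image positional relationship.

    A negative pitch means the head is bowed; the nearer screen confirms the choice.
    """
    if head_pitch_deg < bowed_threshold_deg and dist_to_second_m < dist_to_first_m:
        return "second_display"   # head bowed and close to the lower screen
    return "first_display"        # head level and farther away: upper screen
```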
  • the above method may further include: identifying and detecting a second image positional relationship between the hand of the same operator and the main control component in the detection image.
  • the detection image collected by the imaging device may also include the hand of the operator.
  • Different operators' hands have different characteristics, and their positions relative to the main control part are not the same.
  • The distance between each operator's hand and the main control part is also different.
  • the server may perform recognition processing on the detected image to determine the characteristics of the operator's hand, and then determine the second image positional relationship between the same operator's hand and the main control component.
  • determining the main display device based on the first image position relationship may include: determining the main display device in combination with the first image position relationship and the second image position relationship.
  • the server determines the operator by combining the first image position relationship and the second image position relationship, and then determines the main display device.
  • the server recognizes the detection image to obtain the first image position relationship and the second image position relationship, which can be determined based on machine learning, that is, the server can pre-train the recognition model, and then based on the recognition model, the detection image Recognition is performed to determine the positional relationship of the first image and the positional relationship of the second image.
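For illustration only, assuming a pre-trained keypoint model already returns the operator's head and hand image points (the `keypoints` dictionary below is a hypothetical interface, not an API named in the patent), the two positional relationships could be reduced to image distances:

```python
import math

def image_distance(p: tuple, q: tuple) -> float:
    """Euclidean distance between two (u, v) image points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def positional_relationships(keypoints: dict, display_centers: dict, master_centers: dict):
    """Derive the first (head vs. displays) and second (hands vs. master controls)
    image positional relationships from detected keypoints.

    `keypoints` is assumed to contain 'head', 'left_hand' and 'right_hand' image points
    produced by some upstream recognition model.
    """
    first = {name: image_distance(keypoints["head"], c) for name, c in display_centers.items()}
    second = {name: min(image_distance(keypoints["left_hand"], c),
                        image_distance(keypoints["right_hand"], c))
              for name, c in master_centers.items()}
    return first, second
```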
  • detecting the main display device selected by the operator among the plurality of display devices may include: determining the main display device used by the operator by detecting an action for prompting a confirmation option of the main display device.
  • the confirmation option may be an option displayed on the main display device, may be a switching button prompting to switch modes or display device switching, or a selection button for a display mode, or the like.
  • The server may prompt the confirmation option through the main display device and determine the main display device based on the operator's selection action. If the operator selects the first display mode, the server determines the display device corresponding to the first display mode as the main display device.
  • The server adjusts the operation mapping conversion relationship between the main control component and the surgical robot according to the display pose of the main display device, which may include: adjusting the operation mapping conversion relationship according to the detected change in display pose between the two main display devices before and after the operator's replacement of the main display device.
  • The server adjusts the operation mapping conversion relationship according to the detected change in display pose between the two main display devices before and after the replacement, which may include: determining the coordinates of the operator's field of view before replacement according to the display pose of the main display device before replacement, and determining the coordinates of the operator's field of view after replacement according to the display pose of the main display device after replacement; determining the coordinate transformation relationship of the display device before and after replacement based on the coordinates of the operator's field of view before and after replacement; and adjusting the operation mapping transformation relationship according to the coordinate transformation relationship.
  • the coordinates of the operator's field of view refer to the coordinates of the operator's eyes.
  • When the operator is in the first observation mode, the main display device is the first display device 1011 and the coordinates of the operator's field of view, where the operator's eyes are located, are {H1}; when the operator is in the second observation mode, continuing to refer to FIG. 4, the main display device is the second display device 1012 and the coordinates of the operator's field of view are {H2}.
  • The server can determine the coordinates of the operator's field of view before and after the replacement according to the display poses of the main display device before and after the replacement, that is, determine the pre-replacement operator field-of-view coordinates {H1} according to the pre-replacement display pose, such as the first display coordinate {Me1}, and determine the post-replacement operator field-of-view coordinates {H2} according to the post-replacement display pose, such as the second display coordinate {Me2}.
  • The server can determine the coordinates {H1} of the operator's field of view before replacement and the coordinates {H2} of the operator's field of view after replacement according to the display pose and the ergonomic model.
  • the server may determine the coordinate conversion relationship of the display device before and after the replacement based on the determined coordinates ⁇ H1 ⁇ of the operator's field of view before the replacement and the coordinates ⁇ H2 ⁇ of the operator's field of view after the replacement.
  • the server may adjust the operation mapping conversion relationship according to the coordinate conversion relationship.
  • The server determines the coordinate conversion relationship of the display device before and after the replacement based on the coordinates of the operator's field of view before and after the replacement, which may include: establishing, from the operator's field-of-view coordinates, the field-of-view transformation relationships among the pre-replacement field-of-view coordinates, the post-replacement field-of-view coordinates and the reference coordinates; and determining, based on these field-of-view transformation relationships, the coordinate transformation relationship before and after the replacement of the display device.
  • The display device is connected to the base of the doctor's console, and the display pose of the display device can be the pose relative to the base coordinates of the doctor's console.
  • The base coordinate of the doctor's console is {Mb}.
  • The server can determine the relative position of the display device relative to the base coordinate {Mb} based on the base coordinate {Mb} and the kinematic model of the doctor's trolley, that is, determine a first relative position Pe1 of the pre-replacement display device relative to the base coordinate {Mb} and a second relative position Pe2 of the post-replacement display device relative to the base coordinate {Mb}.
  • The reference coordinate may refer to the base coordinate {Mb} of the doctor's console.
  • The server can determine the relative relationship between the coordinates of the operator's field of view and the base coordinate {Mb} based on the relative relationship between the display device and the base coordinate {Mb}; that is, the server can establish a transformation matrix TH1B between the operator's field-of-view coordinate {H1} and the base coordinate {Mb} before the replacement, and a transformation matrix TH2B between the operator's field-of-view coordinate {H2} and the base coordinate {Mb} after the replacement.
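The change-of-view transform can then be composed from the two base-relative transforms. The sketch below uses 4x4 homogeneous matrices and assumes TH1B maps points from {H1} into {Mb} and TH2B maps points from {H2} into {Mb}; under that assumed convention the transform from {H1} to {H2} is inv(TH2B) * TH1B. This is an illustration of the composition, not the patent's exact formulation.

```python
import numpy as np

def view_change_transform(T_H1B: np.ndarray, T_H2B: np.ndarray) -> np.ndarray:
    """Transform from the pre-switch view frame {H1} to the post-switch view frame {H2},
    assuming both inputs map view-frame points into the base frame {Mb}."""
    return np.linalg.inv(T_H2B) @ T_H1B   # TH2H1 under the assumed convention

# Illustrative usage with made-up poses (identity rotation, pure translation):
T_H1B = np.eye(4); T_H1B[:3, 3] = [0.0, 0.4, 1.5]   # assumed {H1} origin expressed in {Mb}
T_H2B = np.eye(4); T_H2B[:3, 3] = [0.0, 0.2, 1.2]   # assumed {H2} origin expressed in {Mb}
T_H2H1 = view_change_transform(T_H1B, T_H2B)
```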
  • The server adjusts the operation mapping transformation relationship according to the coordinate transformation relationship, which may include: acquiring the first relative position of the main control component relative to the main display device before replacement; acquiring the rotation matrix of the mechanical arm corresponding to the main control component, where the rotation matrix is the rotation matrix of the manipulator relative to the field-of-view lens, and the field-of-view lens maps the field-of-view image to the main display device through the mapping relationship; adjusting, based on the first relative position and the rotation matrix, the rotation position relationship between the main control component and the corresponding mechanical arm under different display modes; and using the rotation position relationship and the coordinate transformation relationship as the adjusted operation mapping transformation relationship.
  • The server may determine the position of the main control component before the replacement of the main display device based on the kinematics model of the doctor's trolley. Continuing to refer to FIG. 8, based on the kinematics model of the doctor's trolley, it is determined that the coordinate positions of the left master control hand and the right master control hand before replacing the main display device are {Ma1} and {Ma2}.
  • The main control part is connected to the base of the doctor's console, and the server can determine the relative relationship between the main control part and the base coordinate {Mb} of the doctor's console based on the kinematics model of the doctor's trolley.
  • The server can determine, based on the relative relationship between the pre-replacement main display device and the base coordinate {Mb} of the doctor's console and the relative relationship between the main control component and the base coordinate {Mb}, the first relative position of the main control component relative to the main display device before replacement.
  • the mechanical arm controlled by the main control component is installed on the console.
  • the robotic arm may include a surgical instrument robotic arm loaded with surgical instruments and a field of view lens robotic arm, such as an endoscope robotic arm.
  • the surgical instrument manipulator is controlled by the main control part based on the operation mapping transformation relationship, and operates.
  • the relative relationship between the surgical instrument robotic arm, the endoscope robotic arm and the operating table can be shown in FIG. 9 .
  • The mechanical arms are installed on the console, and the coordinates of the console can be expressed as {Sb}, which are located on the base of the console.
  • The coordinates of the field-of-view lens on the field-of-view robotic arm are {Se}, which can be expressed relative to the console coordinates {Sb}.
  • The end coordinates of the surgical instrument manipulators loaded with surgical instruments are {Sa1} and {Sa2} respectively; for example, they can be controlled by the left main operator and the right main operator respectively.
  • The server can calculate the rotation matrices Rsa1 and Rsa2 of the surgical instrument mechanical arms based on the coordinate {Se} of the field-of-view lens and the end coordinates {Sa1} and {Sa2} of the mechanical arms loaded with the surgical instruments, that is, determine the rotation matrices of the mechanical arms.
  • the server can calculate the rotation matrix of the manipulator through the kinematics model of the robot and the like.
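A minimal sketch of that step, assuming the forward kinematics of each arm yields its end rotation expressed in the console base frame {Sb}: the instrument-tip rotation relative to the endoscope frame {Se} is then the transpose of the scope rotation times the tip rotation. The frame conventions are assumptions consistent with the coordinate names used above, not a statement of the patent's exact equations.

```python
import numpy as np

def tip_rotation_in_scope_frame(R_Sb_Se: np.ndarray, R_Sb_Sa: np.ndarray) -> np.ndarray:
    """Rotation of a surgical-instrument tip frame {Sa} expressed in the endoscope frame {Se},
    given both rotations expressed in the console base frame {Sb} (assumed convention)."""
    return R_Sb_Se.T @ R_Sb_Sa

# e.g. Rsa1 = tip_rotation_in_scope_frame(R_Sb_Se, R_Sb_Sa1)
#      Rsa2 = tip_rotation_in_scope_frame(R_Sb_Se, R_Sb_Sa2)
```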
  • The server can establish the rotational position relationship between the main control component and the corresponding mechanical arm before and after the replacement of the main display device, that is, establish the relative relationship between the main control component and the corresponding mechanical arm under different display modes, thereby realizing the adjustment of the rotation position relationship between the corresponding main control part and the corresponding mechanical arm under different main display devices.
  • the server may use the rotation position relationship and the coordinate transformation relationship as the operation mapping transformation relationship of the main control component, that is, obtain the adjusted operation mapping transformation relationship.
  • a schematic diagram of an endoscope imaging system is also provided, as shown in FIG. 10 .
  • The endoscope imaging system projects the image of the operation object {P} to the second display device {Me2}, and projects the endoscope coordinates {Se} to the operator's visual field coordinates {H2}.
  • For example, the surgical instrument manipulator {Sa1} can be moved proportionally along the Z direction of {Se}; the movement of the main operators can be proportionally transformed into the movement of the surgical instrument manipulators {Sa1} and {Sa2} under the endoscope field-of-view coordinate {Se}.
  • The motion controller can optionally use visual servoing, joint force feedback, and force feedback from external sensors to convert the motions of the surgical instrument manipulators {Sa1} and {Sa2}, equivalently or proportionally, into motions of the main operators {Ma1} and {Ma2}.
  • The server can detect the terminal force/external moment applied to the main operator and, according to the dynamic model of the tool arm, calculate the expected motion of the main operator and the expected motion of the tip of the surgical instrument arm. From the endoscopic image, a computer vision approach can be taken to obtain the actual position of the tip of the surgical instrument's robotic arm.
  • specific methods may include but not limited to neural network algorithms, optical flow algorithms, and the like.
  • The server can calculate the deviation value between the two; through the deviation value and according to the dynamic model of the tool arm, the end force of the surgical instrument manipulator can be calculated, mapped to the movement space of the main hand, then transformed into the feedback torque of the main-hand joint space through decoupling, and used as an input to control the motion of the main operators {Ma1} and {Ma2}.
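One common way to realize the step just described, offered here only as a hedged sketch rather than the patent's exact formulation, is to turn the tip-position deviation into an estimated force through an assumed stiffness gain and map that force into master-hand joint torques through the master's Jacobian transpose:

```python
import numpy as np

def feedback_joint_torque(p_expected: np.ndarray,
                          p_actual: np.ndarray,
                          jacobian_master: np.ndarray,
                          stiffness_n_per_m: float = 200.0) -> np.ndarray:
    """Sketch of force feedback: tip deviation -> estimated force -> master joint torques.

    `stiffness_n_per_m` is an illustrative scalar gain; the text above instead uses the
    tool arm's dynamic model to obtain the end force.
    """
    deviation = p_expected - p_actual            # deviation between expected and observed tip position
    force = stiffness_n_per_m * deviation        # assumed linear mapping to a Cartesian force
    return jacobian_master.T @ force             # joint-space feedback torque for the master hand
```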
  • The server adjusts the operation mapping conversion relationship between the main control component and the surgical robot according to the display pose of the main display device, which may include: detecting that the operator operates the selected main display device so that the display pose of the main display device changes; and adjusting the operation mapping conversion relationship according to the change in the display pose.
  • The server can determine that the display pose of the main display device has changed by detecting the operator's operation, such as the operator rotating or moving the display device, or by detecting a change in the pose of the AR glasses worn by the operator.
  • the server can adjust the operation mapping conversion relationship between the main control component and the surgical robot according to the changes in the displayed pose, such as changes in position coordinates and attitudes.
  • The server uses the main control component to control the surgical robot to perform operations according to the operation mapping conversion relationship, which may include: adjusting the posture of the main control component according to the operation mapping conversion relationship, so that, while the posture of the surgical robot remains unchanged, the attitude of the operating end of the surgical robot displayed on the main display device corresponds to the adjusted attitude of the main control component.
  • the server may determine adjustment data for adjusting the posture of the main control component according to the adjusted operation mapping conversion relationship, so as to adjust the posture of the main control component based on the determined adjustment data.
  • the operation mapping transformation relationship may include a coordinate transformation relationship and a rotation position relationship.
  • The server may determine the second relative position of the main control component relative to the replaced main display device according to the coordinate transformation relationship, and determine the adjustment data of the main control component after the main display device is replaced based on the second relative position and the rotation position relationship.
  • The server may determine the second relative position of the main control component relative to the replaced main display device according to the determined coordinate transformation relationship, that is, the transformation matrix TH2H1; in other words, calculate the second relative positions {Ma1}' and {Ma2}' of the main control components {Ma1} and {Ma2} under {Me2} according to the transformation matrix TH2H1.
  • The server can calculate the rotation matrices Rma1 and Rma2 that match {Ma1}' and {Ma2}' to {Sa1} and {Sa2}, that is, obtain the adjustment data of the main control component after the main display device is replaced.
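As a hedged sketch of the hand-eye-consistency idea implied here: if the master hand's orientation seen from the operator's new view frame {H2} is required to equal the instrument tip's orientation seen from the endoscope frame {Se}, the target master orientation in the console base frame can be composed as R_Mb_H2 * R_Se_Sa. The frame conventions are assumptions for illustration.

```python
import numpy as np

def matched_master_rotation(R_Mb_H2: np.ndarray, R_Se_Sa: np.ndarray) -> np.ndarray:
    """Target master-hand rotation in the console base {Mb} after the view switch.

    Assumed hand-eye condition: the master rotation expressed in the new view frame {H2}
    equals the instrument-tip rotation expressed in the endoscope frame {Se}.
    """
    return R_Mb_H2 @ R_Se_Sa

# e.g. Rma1 = matched_master_rotation(R_Mb_H2, Rsa1)
#      Rma2 = matched_master_rotation(R_Mb_H2, Rsa2)
```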
  • FIG. 11 shows a schematic diagram of adjusting the main control component, taking the switching of the display devices from the first observation mode to the second observation mode as an example, that is, the main display device is replaced from the first display device to the second display device.
  • The server may acquire the display poses of the main display device before and after replacement, such as the first display coordinate {Me1} and the second display coordinate {Me2}.
  • The server determines, according to the first display coordinate {Me1} before the replacement of the main display device and the kinematics model of the doctor's trolley, the first relative position Pe1 of the first display coordinate {Me1} relative to the base coordinate {Mb}, and may then determine the coordinates of key parts of the operator before switching, such as the coordinate position Ps1 of the human shoulder coordinate {Ms} relative to the base coordinate {Mb}.
  • Similarly, the server determines, based on the second display coordinate {Me2} after the replacement of the main display device and the kinematics model of the doctor's trolley, the second relative position Pe2 of the second display coordinate {Me2} relative to the base coordinate {Mb}, and, based on the second relative position Pe2 and the ergonomics model, determines the coordinates of key parts of the operator after switching, such as the coordinate position Ps2 of the human shoulder coordinate {Ms}' relative to the base coordinate {Mb}.
  • the server can obtain the position difference of the human shoulder, namely Ps1-Ps2, based on the coordinates of key parts of the operator before and after the replacement of the main display device, such as the coordinate position Ps1 and the coordinate position Ps2 of the human shoulder.
  • The server can adjust the operation mapping conversion relationship of the main control part according to the operator's reference position before the replacement of the main display device, such as the base coordinate {Mb} of the doctor's console, and the coordinate position of the main control part, combined with the ergonomic model.
  • The server can convert the obtained human shoulder position difference Ps1-Ps2 to obtain the adjustment data of the main control component, that is, the adjustment distance, namely the positional distance from {Mh} to {Mh}'.
  • The server may control the main control part to move from {Mh} to {Mh}' to complete the adjustment of the main control part.
  • The server adjusts the main control component based on the adjustment data; specifically, it may determine the movement trajectory data of the main control component based on the adjustment data, and then control the main control component to adjust according to the movement trajectory data.
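A minimal sketch of the trajectory step, assuming the adjustment data reduces to a start position {Mh} and a target position {Mh}' for the main control component; the quintic time profile and the duration are illustrative choices, not the patent's planner.

```python
import numpy as np

def master_adjust_trajectory(p_start: np.ndarray, p_target: np.ndarray,
                             duration_s: float = 2.0, rate_hz: float = 100.0):
    """Yield a smooth point-to-point position trajectory from {Mh} to {Mh}' (illustrative)."""
    n = max(1, int(duration_s * rate_hz))
    for k in range(n + 1):
        s = k / n
        s = 10 * s**3 - 15 * s**4 + 6 * s**5      # quintic time scaling: zero velocity at both ends
        yield (1.0 - s) * p_start + s * p_target  # interpolated master position for this control tick
```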
  • When the server replaces the main display device, it can also make a safety judgment to determine whether to replace the main display device, which may include: obtaining the operator's pose, which is used to determine whether the operator is in a safe position; and, when the operator is in a safe position, controlling the multiple display devices to switch modes based on the display device mode switching instruction.
  • the operator pose refers to the operator's posture and coordinate position, which may specifically be the posture of a certain body part of the operator, such as the head or a certain part of the body.
  • the server may perform operator safety detection based on the operator's pose to determine whether the operator is in a safe position.
  • the server when the server determines that the operator is in a safe position, the server may control multiple display devices to switch modes based on the display device mode switching instruction.
  • the mode switching instruction refers to an instruction for controlling the display device to switch the observation mode, for example, it may be switching from the first observation mode to the second observation mode, or it may be switching from the second observation mode to the first observation mode, There is no limit to this.
  • the safety detection is described by taking a doctor's head as an example.
  • the server can detect the position of the doctor's head through sensors and other devices, and based on the detected head position of the doctor, determine whether the doctor's head is in a safe position, and then determine whether to control the display device to switch the mode. That is, it is determined whether to replace the main display device.
  • The server controlling the display device to switch the mode may be controlling the switching mechanism connected to the display device to switch the mode of the display device.
  • the server when the server determines that the doctor's head is not in a safe position, it can control the switching mechanism to stop, that is, the display device does not perform mode switching.
  • the server when the server determines that the doctor's head is in a safe position, the server may control the switching mechanism to perform a switching action, and control the display device to switch modes.
  • When the server controls the switching mechanism to switch, it can judge the current position of the display device and judge whether it has reached the target position.
  • the server can continue to obtain the position of the doctor's head and continue to make judgments, so as to ensure that the doctor's head is always in a safe position during the switching process.
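For illustration, the safety gating described above could be sketched as a loop that only lets the switching mechanism advance while the detected head position stays inside an assumed safe region; the callbacks, region bounds and polling interval are placeholders, not interfaces from the patent.

```python
import time

def head_in_safe_region(head_pos, safe_min=(-0.2, -0.2, 0.3), safe_max=(0.2, 0.2, 0.8)) -> bool:
    """Assumed axis-aligned safe region for the operator's head, in the console frame (metres)."""
    return all(lo <= p <= hi for p, lo, hi in zip(head_pos, safe_min, safe_max))

def switch_with_safety(get_head_pos, step_mechanism, at_target, poll_s: float = 0.02):
    """Advance the display switching mechanism only while the head remains in the safe region.

    `get_head_pos`, `step_mechanism` and `at_target` are hypothetical callables standing in
    for the sensor read-out, the mechanism command and the target-position check.
    """
    while not at_target():
        if head_in_safe_region(get_head_pos()):
            step_mechanism(stop=False)   # continue toward the target display position
        else:
            step_mechanism(stop=True)    # pause switching until the operator is back in a safe position
        time.sleep(poll_s)
```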
  • the server may adjust the display device to be in a floating state, that is, in a state that the doctor can manually drag.
  • the doctor can drag the display device to a pose that matches himself according to his own conditions.
  • the server may detect in real time whether the instruction to end the adjustment is received, that is, determine whether the manual dragging of the display device by the doctor has ended.
  • the server may lock the display device, that is, lock it so that it cannot be switched, and complete the switching of the display device.
  • The operator's target site data is detected and judged before and during the mode switching of the display device, so that the mode switching of the display device is always performed when the operator is in a safe position, which ensures the safety of display device switching and improves the accuracy of switching.
  • Fig. 13 shows a schematic diagram of a control method of the doctor's console in another embodiment, which will be described in detail below based on Fig. 13 .
  • The server may judge whether the display mode switching instruction is received; when it is determined that the display mode switching instruction is received, the kinematics model of the operating trolley is used to calculate the coordinates {Se} of the endoscope mounted on the console and the instrument coordinates {Sa1} and {Sa2} of the instruments.
  • The server can calculate the coordinates {Ma1} and {Ma2} of the main operators on the console and the coordinates {Me1} of the first display device through the kinematics model of the doctor's trolley.
  • The server may control the display device to perform mode switching, and calculate the coordinate {Me2} of the second display device through the kinematics model of the trolley after the mode switching is completed.
  • The server can perform calculations based on the endoscope coordinates {Se}, the instrument coordinates {Sa1} and {Sa2}, the main operator coordinates {Ma1} and {Ma2}, the first display device coordinates {Me1} and the second display device coordinates {Me2}, to obtain adjustment data for the main operators.
  • The server can detect the position of the operator's arm; if it is in a safe range, the server runs trajectory planning and controls the main hand to perform posture matching, that is, adjusts the main operator; if not, it reminds the operator to pay attention to the position of the arm and continues to detect.
  • The server may detect whether the trajectory planning is completed, and stops detection once the trajectory planning is completed.
  • Although the steps in the flowcharts of FIG. 7 and FIG. 13 are shown sequentially as indicated by the arrows, these steps are not necessarily executed sequentially in the order indicated by the arrows. Unless otherwise specified herein, there is no strict order restriction on the execution of these steps, and these steps can be executed in other orders. Moreover, at least some of the steps in FIG. 7 and FIG. 13 may include multiple sub-steps or multiple stages; these sub-steps or stages are not necessarily performed at the same time but may be performed at different times, and their execution order is not necessarily sequential: they may be executed in turn or alternately with at least a part of other steps, or of the sub-steps or stages of other steps.
  • A control device for a doctor's console is provided; the doctor's console includes a plurality of display devices and a master control component, and the master control component is used to control the surgical robot according to the master-slave mapping relationship.
  • The control device of the doctor's console may include: a detection module 141, an adjustment module 142, and a control module 143, wherein:
  • the detection module 141 is configured to detect the main display device selected by the operator among the plurality of display devices.
  • the adjustment module 142 is configured to adjust the operation mapping conversion relationship between the main control component and the surgical robot according to the display pose of the main display device.
  • the control module 143 is configured to control the surgical robot to perform operations through the main control component according to the operation mapping conversion relationship.
  • the detection module 141 may include:
  • the obtaining sub-module is used to obtain a detection image, wherein the detection image includes an operator and at least one display device.
  • the identification sub-module is used to identify the first image positional relationship between the operator's head and each display device in the detection image.
  • the determination sub-module is configured to determine the main display device based on the positional relationship of the first image.
  • the above-mentioned device may also include:
  • the identification module is used to identify the second image positional relationship between the hand of the same operator and the main control component in the detection image.
  • the determining submodule is configured to determine the main display device in combination with the first image positional relationship and the second image positional relationship.
  • the detection module 141 is configured to determine the main display device used by the operator by detecting an action for prompting the confirmation option of the main display device.
  • The adjustment module 142 is configured to adjust the operation mapping conversion relationship according to the detected change in display pose between the two main display devices before and after the operator's replacement of the main display device.
  • the adjustment module 142 may include:
  • The display pose change detection sub-module is used to detect the change in the display pose of the main display device caused by the operator operating the selected main display device.
  • the adjustment sub-module is used to adjust the operation mapping conversion relationship according to the change of the display pose.
  • The control module is used to adjust the posture of the main control component according to the operation mapping conversion relationship, so that the posture of the operating end of the surgical robot displayed on the main display device corresponds to the adjusted posture of the main control component.
  • the above-mentioned device may also include:
  • the operator pose acquisition module is used to acquire the operator pose, and the operator pose is used to determine whether the operator is in a safe position.
  • the switching control module is used for controlling multiple display devices to switch modes based on the display device mode switching instruction when the operator is in a safe position.
  • the switch control module is used to control the multiple display devices to switch from the first display mode to the second display mode based on the display device mode switching instruction.
  • Each module in the above-mentioned control device of the doctor's console can be fully or partially realized by software, hardware and a combination thereof.
  • the above-mentioned modules can be embedded in or independent of the processor in the computer device in the form of hardware, and can also be stored in the memory of the computer device in the form of software, so that the processor can invoke and execute the corresponding operations of the above-mentioned modules.
  • a computer device is provided, and the computer device may be a server, and its internal structure may be as shown in FIG. 15 .
  • the computer device includes a processor, memory, network interface and database connected by a system bus. Wherein, the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer readable instructions and a database.
  • the internal memory provides an environment for the execution of the operating system and computer readable instructions in the non-volatile storage medium.
  • the database of the computer device is used to store data such as display poses, operation mapping conversion relations, and adjustment data.
  • the network interface of the computer device is used to communicate with an external terminal via a network connection. When the computer readable instructions are executed by the processor, a method for controlling the doctor's console is realized.
  • FIG. 15 is only a block diagram of a partial structure related to the solution of this application and does not constitute a limitation on the computer equipment to which the solution of this application is applied; the specific computer equipment may include more or fewer components than shown in the figure, combine some components, or have a different arrangement of components.
  • A computer device comprising a memory and one or more processors is provided, wherein computer-readable instructions are stored in the memory, and when executed by the one or more processors, the computer-readable instructions cause the one or more processors to perform the following steps: detecting the main display device selected by the operator among the multiple display devices; adjusting the operation mapping conversion relationship between the main control component and the surgical robot according to the display pose of the main display device; and controlling the surgical robot through the main control component to perform operations according to the operation mapping conversion relationship.
  • One or more computer-readable storage media storing computer-readable instructions are provided; when executed by one or more processors, the computer-readable instructions cause the one or more processors to perform the following steps: detecting the main display device selected by the operator among the multiple display devices; adjusting the operation mapping conversion relationship between the main control component and the surgical robot according to the display pose of the main display device; and controlling the surgical robot through the main control component to perform operations according to the operation mapping conversion relationship.
  • Nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Robotics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Pathology (AREA)
  • Manipulator (AREA)

Abstract

A control method of a doctor's console, comprising: detecting the main display device selected by an operator among multiple display devices; adjusting, according to the display pose of the main display device, the operation mapping conversion relationship between a main control component and a surgical robot; and controlling, through the main control component and according to the operation mapping conversion relationship, the surgical robot to perform operations.

Description

Control method of doctor's console, doctor's console, robot system and medium
Cross-reference to related applications
This application claims priority to the Chinese patent application filed with the China Patent Office on September 16, 2021, with application number 2021110863534 and entitled "医生控制台的控制方法、医生控制台、机器人系统和介质" (Control method of doctor's console, doctor's console, robot system and medium), the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to a control method of a doctor's console, a doctor's console, a robot system and a medium.
Background
With the development of modern medical technology, robots are widely used in medical surgery, such as various minimally invasive surgical robots, endoscopic surgical robots, etc. When performing surgery with a robot, the operator observes the tissue characteristics inside the patient's body through the display device of the console, and remotely controls the robotic arm and surgical instruments on the robot to complete the surgical operation.
However, the inventors have realized that, in the traditional approach, the operator can only observe the tissue characteristics inside the patient's body through a single display device, and it is difficult to change the operator's observation mode during the operation. In a surgeon's operating habits, adjusting the observation posture or the observation mode is intended to change the operation direction or operation mode of the surgical instrument by adjusting the operator's own posture. Therefore, current surgical robots do not yet provide the operator with a more convenient, hand-eye-consistent adjustment method.
Summary
According to various embodiments disclosed in the present application, a control method of a doctor's console, a doctor's console, a robot system and a medium are provided.
A control method of a doctor's console, the doctor's console comprising multiple display devices and a main control component, the main control component being used to control a surgical robot according to a master-slave mapping relationship, the control method comprising: detecting the main display device selected by the operator among the multiple display devices; adjusting, according to the display pose of the main display device, the operation mapping conversion relationship between the main control component and the surgical robot; and controlling, through the main control component and according to the operation mapping conversion relationship, the surgical robot to perform operations.
A doctor's console, comprising: a memory, a processor, multiple display devices and a main control component; the multiple display devices are used to display the actions of the surgical robot; the memory stores computer-readable instructions, and when the processor executes the computer-readable instructions, the steps of the method of any one of the above embodiments are implemented and the operation mapping conversion relationship between the main control component and the surgical robot is adjusted; and the main control component is used to control the surgical robot to perform operations according to the operation mapping conversion relationship.
A robot system, the system comprising: a surgical robot and the doctor's console of any one of the above embodiments; the doctor's console is used to generate robot control instructions; and the surgical robot is used to perform operations based on the robot control instructions.
A computer-readable storage medium, on which computer-readable instructions are stored, wherein, when the computer-readable instructions are executed by a processor, the steps of the method described in any one of the above embodiments are implemented.
In the above control method of the doctor's console, doctor's console, robot system and medium, the doctor's console includes multiple display devices and a main control component, and the main control component is used to control the surgical robot according to the master-slave mapping relationship. By detecting the main display device selected by the operator among the multiple display devices, the operation mapping conversion relationship between the main control component and the surgical robot is then adjusted according to the display pose of the main display device, and the surgical robot is controlled through the main control component to perform operations according to the operation mapping conversion relationship. Therefore, by detecting the main display device corresponding to the operator and adjusting the operation mapping conversion relationship between the main control component and the surgical robot according to the display pose of the main display device, the surgical robot can be controlled through the adjusted operation mapping conversion relationship, which improves the convenience of controlling the surgical robot and also improves the control accuracy of the surgical robot.
The details of one or more embodiments of the present application are set forth in the following drawings and description. Other features and advantages of the present application will become apparent from the specification, the drawings and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the drawings required in the embodiments. Obviously, the drawings described below are only some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Figure 1 is an application scenario diagram of a control method for a doctor's console according to one or more embodiments;
Figure 2 is a schematic diagram of a doctor's console according to one or more embodiments;
Figure 3 is a schematic diagram of a display device according to one or more embodiments;
Figure 4 is a schematic diagram of a display device according to another one or more embodiments;
Figure 5 is a schematic diagram of the correspondence between an operator and a display device according to one or more embodiments;
Figure 6 is a schematic diagram of the correspondence between an operator and a display device according to another one or more embodiments;
Figure 7 is a schematic flowchart of a control method for a doctor's console according to one or more embodiments;
Figure 8 is a schematic diagram of a doctor's console according to another one or more embodiments;
Figure 9 is a schematic diagram of a console according to one or more embodiments;
Figure 10 is a schematic diagram of an endoscopic imaging system according to one or more embodiments;
Figure 11 is a schematic diagram of the operator's pose change before and after mode switching according to one or more embodiments;
Figure 12 is a schematic flowchart of a control method for a doctor's console according to another one or more embodiments;
Figure 13 is a schematic diagram of a control method for a doctor's console according to yet another one or more embodiments;
Figure 14 is a block diagram of a control apparatus for a doctor's console according to one or more embodiments;
Figure 15 is a block diagram of a computer device according to one or more embodiments.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, this application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
The control method for a doctor's console provided by this application can be applied in the application environment shown in Figure 1, that is, in a robot system. The robot system may include a doctor's console 10 and a surgical robot 20. The doctor's console 10 communicates with the surgical robot 20 through a network.
In this embodiment, referring to Figure 2, the doctor's console 10 in Figure 1 may include multiple display devices 101, a master control component 102, a memory 103, and a processor 104. The memory 103 stores computer-readable instructions, and the processor 104, when executing the computer-readable instructions, implements the steps of the control method for the doctor's console and adjusts the operation mapping conversion relationship between the master control component 102 and the surgical robot 20.
Specifically, when executing the computer-readable instructions, the processor 104 detects the main display device selected by the operator from among the multiple display devices 101 and adjusts the operation mapping conversion relationship between the master control component 102 and the surgical robot 20 according to the display pose of the main display device. Further, the operator can operate the master control component 102 so that the master control component 102 controls the surgical robot 20 according to the adjusted master-slave mapping relationship. The memory 103 and the processor 104 may be implemented by a server, for example a single server or a server cluster, to execute the control method for the doctor's console. The specific control method is described in detail below.
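By way of illustration only, the following minimal Python sketch shows how the detection and mapping-adjustment steps described above could fit together in one control flow. The class, the assumption that each display object exposes a 4x4 pose matrix, and the scaling factor are all placeholders introduced for this sketch and are not taken from the embodiment.

    import numpy as np

    class ConsoleController:
        """Minimal sketch of the console control flow (hypothetical API)."""

        def __init__(self, displays, scale=0.4):
            self.displays = displays      # display objects assumed to expose a 4x4 .pose
            self.T_map = np.eye(4)        # operation mapping conversion relationship
            self.scale = scale            # master-to-slave motion scaling (assumed value)

        def detect_main_display(self, head_pose):
            # Choose the display closest to the operator's head position.
            dists = [np.linalg.norm(d.pose[:3, 3] - head_pose[:3, 3]) for d in self.displays]
            return self.displays[int(np.argmin(dists))]

        def adjust_mapping(self, old_display, new_display):
            # Express the new mapping as the relative transform between display poses.
            self.T_map = np.linalg.inv(new_display.pose) @ old_display.pose

        def master_to_slave(self, v_master):
            # Map an incremental master translation (3-vector) into the slave frame.
            return self.scale * (self.T_map[:3, :3] @ v_master)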
In this embodiment, the multiple display devices 101 may include at least one of AR glasses and a display screen.
In some embodiments, when the multiple display devices 101 include AR glasses, the AR glasses provide pose data reflecting the display pose.
The pose data may include position coordinate data and attitude data; the pose data provided by the AR glasses may refer to the pose data of the operator wearing the AR glasses, such as the head pose.
In this embodiment, the AR glasses can collect pose data in real time and feed it back to the server.
In some embodiments, when the multiple display devices 101 include display screens, still referring to Figure 2, the multiple display devices 101 may include a first display device 1011 and a second display device 1012, both connected to the base 105 of the doctor's console 10. Each display screen is fixed at a preset position on the doctor's console and provides images captured by the endoscope to the doctor, nurse, etc., in a preset orientation (also called attitude). The doctor's console provides the pose data of each display screen according to its own coordinate system.
In this embodiment, the first display device 1011 and the second display device 1012 can switch modes based on a control instruction, that is, the multiple display devices 101 are switched from a first display mode to a second display mode.
In some embodiments, the first display device 1011 can switch from a first position to a second position based on a mode-switching instruction, and the second display device 1012 can switch from a third position to a fourth position based on the mode-switching instruction. Similarly, the first display device 1011 can switch from the second position back to the first position, and the second display device 1012 from the fourth position back to the third position, which is not limited in this application. The switching process of the multiple display devices is described in detail below with reference to Figures 2 to 6.
Still referring to Figure 2, the first display device 1011 and the second display device 1012 are each connected to the base 105 and are available for the operator to selectively use for observation. With this configuration, the display devices are compatible with two observation modes, a first observation mode and a second observation mode, which can accommodate the different operating needs and preferences of different operators (medical staff), and at the same time allows the operator to switch the way of observation to avoid the fatigue caused by maintaining a single operating posture for a long time.
In this embodiment, referring to Figure 3, the first display device 1011 can rotate about the axis connecting the first display device 1011 and the base 105, that is, rotate between the first position and the second position. When the first display device 1011 is in the first position, the coordinate frame of the display device is located at {Me1}; when it is in the second position, the coordinate frame of the display device is located at {Me1}'.
Further, referring to Figure 4, the second display device 1012 can rotate about the axis connecting the second display device 1012 and the base 105, that is, move between the third position and the fourth position. Specifically, when the first display device 1011 is in the first position {Me1}, the second display device 1012 is located at the third position {Me2}'; when the first display device 1011 is in the second position {Me1}', the second display device 1012 is located at the fourth position {Me2}.
In medical surgery, when the operator observes in the first observation mode, the operator generally looks straight ahead at the first display device 1011, referring to Figure 3, while in the second observation mode the operator generally lowers the head to observe the second display device 1012, referring to Figure 4. The operator can choose and switch between the two observation modes as needed, so that the operator can change the working posture and relieve the fatigue caused by maintaining a single posture for a long time.
In this embodiment, when the operator is in the first observation mode, the first display device 1011 is in the first position and its coordinate frame is at {Me1}, referring to Figure 5; at this time the second display device 1012 is in the third position and, as can be seen from Figure 4, its coordinate frame is at {Me2}'.
In this embodiment, when mode switching changes the first observation mode to the second observation mode, referring to Figure 6, the first display device 1011 rotates about its axis, tilts downward, and is in the second position, with its coordinate frame at {Me1}'; the second display device 1012 is in the fourth position and, still referring to Figure 4, its coordinate frame is at {Me2}. At this point the first display device 1011 and the second display device 1012 match each other and the mode switch is completed.
A person skilled in the art can understand that the above is only an example. In other embodiments, the control center can also, based on a control instruction, control the multiple display devices 101 to switch from the second display mode to the first display mode, that is, control the first display device 1011 to switch from the second position to the first position and control the second display device 1012 to switch from the fourth position to the third position.
In this embodiment, still referring to Figure 2, the doctor's console 10 may also include an armrest 106 and a lifting adjustment mechanism 107. The armrest 106 supports the operator's arms and is connected to the base 105 through the lifting adjustment mechanism 107 so that it can be raised and lowered, allowing the height of the master control component 102 and the armrest 106 to be adjusted to suit different operators.
In some embodiments, still referring to Figure 2, the master control component 102 may be a master manipulator, which may include a left master manipulator and a right master manipulator; the surgical robot can be controlled through the master manipulators.
In some embodiments, the doctor's console may also include an imaging device, which captures images covering the positions of the operator's head and left and right hands, transmits the images to the processor in real time, and enables the processor, after recognition processing, to obtain the operator's head position and hand positions and determine the main display device based on the operator's head position and hand positions.
In some embodiments, still referring to Figure 1, the surgical robot 20 may include at least one robotic arm 201, and a surgical instrument is mounted on each robotic arm 201.
In this embodiment, the surgical instrument can be mounted on the robotic arm 201. The surgical instrument may include an endoscope, a scalpel, surgical forceps, a suture needle, and the like.
In this embodiment, referring to Figures 1 and 2, there is a certain spatial mapping relationship between the master control component 102 and the surgical robot 20, and the master control component 102 is used to control the surgical robot 20 to perform target actions such as cutting, suturing, and biopsy sampling.
In some embodiments, still referring to Figure 1, the auxiliary devices may include at least one of an imaging end such as an imaging trolley 30, an instrument table 40, a ventilator, and an anesthesia machine 50.
As mentioned above, the control method for the doctor's console is applied in a robot system. The control method for the doctor's console and the control process of the surgical robot are described in detail below in combination with the robot system.
In some embodiments, as shown in Figure 7, a control method for a doctor's console is provided. Taking as an example the application of the method to the server described above, the method includes the following steps:
Step S702: detect the main display device selected by the operator from among the multiple display devices.
As mentioned above, the doctor's console includes multiple display devices, for example a first display device and a second display device. The main display device is the device through which the operator directly observes the display, such as AR glasses worn on the doctor's head or the display device the operator is directly viewing; both display device 1011 in Figure 5 and display device 1012 in Figure 6 refer to the main display device.
In this embodiment, the server can determine the main display device selected by the operator from among the multiple display devices by collecting images through an image-acquisition device installed at the doctor's operating station, or by collecting the operator's pose data through AR glasses worn on the doctor's head, and so on.
Step S704: adjust the operation mapping conversion relationship between the master control component and the surgical robot according to the display pose of the main display device.
The display pose refers to the position and attitude of the display device.
The operation mapping conversion relationship refers to the relationship by which the master control component controls the surgical robot, and it may include a relationship of coordinate positions and a relationship of attitudes.
In this embodiment, after determining the main display device, the server can obtain the display pose of the main display device, determine the display mode of the display device, and, when it determines that the operator has changed the display mode, adjust the operation mapping conversion relationship between the master control component and the surgical robot based on the display pose, for use in subsequent control of the surgical robot.
Step S706: control, through the master control component and according to the operation mapping conversion relationship, the surgical robot to perform operations.
In this embodiment, after completing the adjustment of the operation mapping conversion relationship, the server can, based on the operator's operation of the master control component, control the surgical robot through the operation mapping conversion relationship to perform operations, such as cutting, suturing, and biopsy sampling.
In the above embodiment, the doctor's console includes multiple display devices and a master control component, and the master control component is used to control the surgical robot according to a master-slave mapping relationship. The main display device selected by the operator from among the multiple display devices is detected; the operation mapping conversion relationship between the master control component and the surgical robot is then adjusted according to the display pose of the main display device; and the surgical robot is controlled through the master control component, according to the operation mapping conversion relationship, to perform operations. Thus, by detecting the main display device corresponding to the operator and adjusting and controlling the operation mapping conversion relationship between the master control component and the surgical robot according to the display pose of the main display device, the surgical robot can be controlled by adjusting this relationship, which improves both the convenience of regulating the surgical robot and the accuracy of its control.
In some embodiments, detecting the main display device selected by the operator from among the multiple display devices may include: obtaining a detection image, where the detection image contains the operator and at least one display device; recognizing, in the detection image, a first image position relationship between the operator's head and each display device; and determining the main display device based on the first image position relationship.
As mentioned above, the doctor's console or the operating room can be equipped with an imaging device that captures images of a specified area, for example images containing at least one display device on the doctor's console.
In this embodiment, the server can obtain the detection image captured by the imaging device, and the detection image may include the operator and/or at least one display device.
In this embodiment, the server can recognize the operator and the display devices in the obtained detection image so as to identify the first image position relationship between the operator's head and each display device.
Specifically, the server can recognize whether the operator's head is lowered or looking straight ahead, and recognize the positional distance between the operator's head and each display device, thereby obtaining the first image position relationship between the operator's head and each display device.
In this embodiment, the server can determine the main display device based on the recognized first image position relationship. For example, continuing to refer to Figures 5 and 6, when the head is lowered and the distance to a display device is relatively short, the main display device can be determined to be the second display device; when the head is looking straight ahead and the distance to the display device is relatively long, the main display device can be determined to be the first display device.
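As a toy illustration of this decision rule, the following Python sketch combines an estimated head pitch with the head-to-display distances; the feature names and the threshold value are assumptions made for the sketch, not values given in the embodiment.

    def pick_main_display(head_pitch_deg, dist_to_first, dist_to_second,
                          pitch_threshold=-20.0):
        """Toy rule for choosing the main display from image-derived features.

        head_pitch_deg: estimated head pitch (negative = looking down).
        dist_to_first / dist_to_second: distances from the head to the first
        and second display devices. The threshold is an assumed value.
        """
        looking_down = head_pitch_deg < pitch_threshold
        closer_to_second = dist_to_second < dist_to_first
        if looking_down and closer_to_second:
            return "second_display"
        return "first_display"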
In some embodiments, the above method may also include: recognizing, in the detection image, a second image position relationship between the same operator's hands and the master control component.
In this embodiment, the detection image captured by the imaging device may also include the operator's hands. In different surgical phases the features of the operator's hands differ, and their position relative to the master control component differs; for different operators the distance between the hands and the master control component also differs.
In this embodiment, the server can perform recognition processing on the detection image to determine the features of the operator's hands and then determine the second image position relationship between the same operator's hands and the master control component.
In this embodiment, determining the main display device based on the first image position relationship may include: determining the main display device by combining the first image position relationship and the second image position relationship.
In this embodiment, after determining the first image position relationship and the second image position relationship, the server can evaluate the operator by combining the two relationships and then determine the main display device.
In this embodiment, the server's recognition of the detection image to obtain the first and second image position relationships can be based on machine learning; that is, the server can train a recognition model in advance and then, based on the recognition model, recognize the detection image to determine the first and second image position relationships.
In some embodiments, detecting the main display device selected by the operator from among the multiple display devices may include: determining the main display device used by the operator by detecting an action on a confirmation option used to indicate the main display device.
The confirmation option may be an option presented on the main display device, such as a switch button prompting a mode switch or a display-device switch, or a button for selecting a display mode.
In this embodiment, the server can present the confirmation option through the main display device and determine the main display device based on the operator's selection action; for example, if the operator selects the first display mode, the server determines the main display device corresponding to the first display mode.
In some embodiments, the server adjusting the operation mapping conversion relationship between the master control component and the surgical robot according to the display pose of the main display device may include: adjusting the operation mapping conversion relationship according to the detected change in display pose between the two main display devices before and after the operator switches.
In this embodiment, the server adjusting the operation mapping conversion relationship according to the detected change in display pose between the two main display devices before and after the switch may include: determining the operator's field-of-view coordinates before the switch according to the display pose of the main display device before the switch, and determining the operator's field-of-view coordinates after the switch according to the display pose of the main display device after the switch; determining the coordinate conversion relationship of the display devices before and after the switch based on the operator's field-of-view coordinates before and after the switch; and adjusting the operation mapping conversion relationship according to the coordinate conversion relationship.
The operator's field-of-view coordinates refer to the coordinate frame at the operator's eyes.
In this embodiment, still referring to Figure 3, when the operator is in the first observation mode, the main display device is the first display device 1011 and the operator's field-of-view frame at the eyes is {H1}; when the operator is in the second observation mode, still referring to Figure 4, the main display device is the second display device 1012 and the operator's field-of-view frame at the eyes is {H2}.
In this embodiment, the server can determine the operator's field-of-view coordinates before and after the switch from the display poses of the main display device before and after the switch, that is, determine the pre-switch field-of-view coordinates {H1} from the pre-switch display pose, such as the first display frame {Me1}, and determine the post-switch field-of-view coordinates {H2} from the post-switch display pose, such as the second display frame {Me2}.
Specifically, the server can determine the pre-switch field-of-view coordinates {H1} and the post-switch field-of-view coordinates {H2} from the display poses in combination with an ergonomic model.
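A minimal Python sketch of such an ergonomic mapping, assuming the eye position can be approximated by a fixed offset expressed in the display frame (the offset values below are placeholders, not taken from the embodiment):

    import numpy as np

    def view_frame_from_display(T_base_display, eye_offset=(0.0, 0.55, 0.10)):
        """Estimate the operator's eye (field-of-view) frame {H} from a display frame {Me}.

        T_base_display: 4x4 pose of the display in the console base frame {Mb}.
        eye_offset: assumed ergonomic offset of the eyes in the display frame
        (metres); a stand-in for the ergonomic model mentioned in the text.
        """
        T_display_eye = np.eye(4)
        T_display_eye[:3, 3] = eye_offset
        return T_base_display @ T_display_eye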
Further, the server can determine the coordinate conversion relationship of the display devices before and after the switch based on the determined pre-switch field-of-view coordinates {H1} and post-switch field-of-view coordinates {H2}.
Further, the server can adjust the operation mapping conversion relationship according to the coordinate conversion relationship.
In some embodiments, the server determining the coordinate conversion relationship of the display devices before and after the switch based on the operator's field-of-view coordinates before and after the switch may include: establishing, based on the pre-switch and post-switch field-of-view coordinates, a view conversion relationship among the pre-switch field-of-view coordinates, the post-switch field-of-view coordinates, and a reference coordinate frame; and determining the coordinate conversion relationship of the display devices before and after the switch based on the view conversion relationship.
As mentioned above, the display devices are connected to the base of the doctor's console, and the display pose of a display device may be a pose relative to the base coordinate frame of the doctor's console. For example, referring to Figure 8, the base frame of the doctor's console is {Mb}; the server can determine the position of a display device relative to the base frame {Mb} based on {Mb} and the kinematic model of the doctor's trolley, that is, determine the first relative position Pe1 of the pre-switch display device relative to {Mb} and the second relative position Pe2 of the post-switch display device relative to {Mb}.
In this embodiment, the reference coordinate frame may be the base frame {Mb} of the doctor's console.
In this embodiment, after determining the operator's field-of-view coordinates based on the display poses, the server can determine the relative relationship between the field-of-view coordinates and the base frame {Mb} based on the relative relationship between the display devices and {Mb}; that is, the server can establish the transformation matrix TH1B between the pre-switch field-of-view frame {H1} and the base frame {Mb}, and the transformation matrix TH2B between the post-switch field-of-view frame {H2} and the base frame {Mb}.
Further, using the formula TH2B = TH2H1 * TH1B, the server can obtain the transformation matrix TH2H1 of the second display device frame {Me2} relative to the first display device frame {Me1}, that is, obtain the coordinate conversion relationship of the display devices before and after the switch.
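In matrix terms this amounts to solving for TH2H1 from the two view-to-base transforms. A sketch using homogeneous 4x4 transforms (the argument-naming convention is an assumption for illustration):

    import numpy as np

    def relative_view_transform(T_H1_B, T_H2_B):
        """Solve T_H2_H1 from T_H2_B = T_H2_H1 @ T_H1_B.

        T_H1_B, T_H2_B: 4x4 transforms expressing the base frame {Mb} in the
        pre-switch view frame {H1} and in the post-switch view frame {H2}.
        """
        return T_H2_B @ np.linalg.inv(T_H1_B)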
In some embodiments, the server adjusting the operation mapping conversion relationship according to the coordinate conversion relationship may include: obtaining a first relative position of the master control component relative to the pre-switch main display device; obtaining the rotation matrix of the robotic arm corresponding to the master control component, where the rotation matrix is the rotation matrix of the robotic arm relative to the view lens, and the view lens maps the view image onto the main display device through a mapping relationship; adjusting, based on the first relative position and the rotation matrix, the rotational position relationship between the master control component and the corresponding robotic arm in the different display modes; and taking the rotational position relationship and the coordinate conversion relationship as the adjusted operation mapping conversion relationship.
In this embodiment, the server can determine, based on the kinematic model of the doctor's trolley, the position of the master control component before the main display device is switched. For example, still referring to Figure 8, based on the kinematic model of the doctor's trolley, the coordinate frames of the left and right master manipulators before the switch of the main display device are determined to be {Ma1} and {Ma2}.
In this embodiment, the master control component is connected to the base of the doctor's console, so the server can determine the relative relationship between the master control component and the base frame {Mb} of the doctor's console based on the kinematic model of the doctor's trolley.
Further, the server can determine the first relative position of the master control component relative to the pre-switch main display device based on the relative relationship between the pre-switch main display device and the base frame {Mb}, together with the relative relationship between the master control component and the base frame {Mb}.
In this embodiment, as mentioned above, the robotic arms controlled by the master control component are installed on the operating table. The robotic arms may include instrument arms loaded with surgical instruments and a view-lens arm, such as an endoscope arm. The instrument arms are controlled by the master control component based on the operation mapping conversion relationship and carry out the operations. The relative relationship among the instrument arms, the endoscope arm, and the operating table can be as shown in Figure 9.
In this embodiment, the robotic arms are installed on the operating table, whose coordinate frame can be denoted {Sb} and is located at the base of the operating table; the frame of the view lens on the view-lens arm is {Se}, which can be a relative frame with respect to the operating-table frame {Sb}; the end frames of the instrument arms loaded with surgical instruments are {Sa1} and {Sa2}, which can be controlled by the left and right master manipulators, respectively.
In this embodiment, the server can calculate the rotation matrices Rsa1 and Rsa2 of the instrument arms based on the view-lens frame {Se} and the instrument-arm end frames {Sa1} and {Sa2}, that is, determine the rotation matrices of the robotic arms. For example, the server can calculate the rotation matrices of the robotic arms through a robot kinematic model or the like.
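For example, if forward kinematics of the patient-side cart yields the orientations of the lens frame {Se} and of an instrument-arm end {Sa} in the operating-table frame {Sb}, the rotation of the instrument relative to the lens could be computed as in the sketch below (the frame-naming convention is an assumption):

    import numpy as np

    def arm_rotation_in_lens_frame(R_Sb_Se, R_Sb_Sa):
        """Rotation of an instrument-arm end relative to the view lens (endoscope).

        R_Sb_Se: rotation of the lens frame {Se} expressed in the table frame {Sb}.
        R_Sb_Sa: rotation of the instrument end frame {Sa} expressed in {Sb}.
        Both would normally come from the forward kinematics of the cart.
        """
        return R_Sb_Se.T @ R_Sb_Sa   # R_Se_Sa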
In this embodiment, based on the obtained first relative position and the rotation matrices, the server can establish the rotational position relationship between the master control component and the corresponding robotic arm before and after the main display device is switched, that is, establish the relative relationship between the master control component and the corresponding robotic arm in the different display modes, thereby adjusting, for different main display devices, the rotational position relationship between the corresponding master control component and the corresponding robotic arm.
Further, the server can take the rotational position relationship and the coordinate conversion relationship as the operation mapping conversion relationship of the master control component, that is, obtain the adjusted operation mapping conversion relationship.
In this embodiment, a schematic diagram of an endoscopic imaging system is also provided, as shown in Figure 10. Specifically, the endoscopic imaging system projects the image of the surgical operation object {P} onto the second display device {Me2} and maps the endoscope frame {Se} onto the operator's field-of-view frame {H2}. For example, when the operator moves the left master manipulator {Ma1} along the Z direction of {H2}, the instrument arm {Sa1} can be made to move proportionally along the Z direction of {Se}.
In this embodiment, after the operation mapping conversion relationship between the instrument arms and the master control component is established through the endoscopic imaging system, the motion of the master manipulators {Ma1} and {Ma2} in the operator's field-of-view frame {H2} can be converted proportionally into the motion of the instrument arms {Sa1} and {Sa2} in the endoscope view frame {Se}. At the same time, the motion controller uses optional methods such as visual servoing, joint force feedback, or force feedback with additional sensors to convert the motion of the instrument arms {Sa1} and {Sa2}, equivalently or proportionally, into the motion of the master manipulators {Ma1} and {Ma2}.
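A simple sketch of this proportional master-to-slave mapping is shown below; the motion-scaling value and the rotation argument that aligns the two frames are assumptions for illustration.

    import numpy as np

    def map_master_increment(dp_master_H, R_Se_H, scale=0.4):
        """Convert an incremental master-hand translation expressed in the view
        frame {H2} into an instrument-tip translation command in the endoscope
        frame {Se}, scaled by an assumed motion-scaling factor.

        dp_master_H: 3-vector increment of the master manipulator in {H2}.
        R_Se_H: rotation that re-expresses view-frame vectors in {Se} (identity
        if the imaging system maps {Se} directly onto the operator's view,
        as described above).
        """
        return scale * (R_Se_H @ dp_master_H)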
In this embodiment, when the visual-servoing method is used, the server can detect the tip force/external torque received by the master manipulator, calculate, according to the dynamic model of the tool arm, the expected motion of the master manipulator under that tip force/external torque, and calculate the desired motion of the instrument-arm tip. From the endoscope image, the actual position of the instrument-arm tip can be obtained using computer-vision methods, which may include but are not limited to neural-network algorithms, optical-flow algorithms, and the like.
In this embodiment, the server can calculate the deviation between the two (the desired and the actual tip positions); from this deviation and the dynamic model of the tool arm, it can calculate the force at the tip of the instrument arm, map that force into the master-hand motion space, then convert it through decoupling into feedback torques in the master-hand joint space, which are used as inputs to control the motion of the master manipulators {Ma1} and {Ma2}.
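One possible shape of this feedback path, sketched with a virtual-stiffness stand-in for the tool-arm dynamic model and a Jacobian-transpose mapping into joint space (both are assumptions, not the embodiment's stated method):

    import numpy as np

    def master_feedback_torques(p_desired, p_actual, J_master, stiffness=300.0):
        """Rough sketch: position deviation -> estimated tip force -> master joint torques.

        p_desired, p_actual: desired and vision-measured instrument-tip positions (3-vectors).
        J_master: 3xN translational Jacobian of the master manipulator at its current pose.
        stiffness: assumed virtual stiffness (N/m) standing in for the tool-arm dynamics.
        """
        f_tip = stiffness * (np.asarray(p_desired) - np.asarray(p_actual))  # estimated force
        return J_master.T @ f_tip                                           # joint-space torques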
In some embodiments, the server adjusting the operation mapping conversion relationship between the master control component and the surgical robot according to the display pose of the main display device may include: detecting a change in the display pose of the selected main display device caused by the operator's manipulation of that device; and adjusting the operation mapping conversion relationship according to the change in display pose.
Specifically, the server can determine the main display device and its display-pose change by detecting the operator's actions, for example the operator rotating or moving a display device, or by detecting a change in the pose of the AR glasses worn on the operator's head.
Further, the server can adjust the operation mapping conversion relationship between the master control component and the surgical robot according to the change in display pose, such as a change in position coordinates or a change in attitude. The specific adjustment process is described above and is not repeated here.
In some embodiments, the server controlling the surgical robot through the master control component according to the operation mapping conversion relationship may include: adjusting the attitude of the master control component according to the operation mapping conversion relationship so that, with the pose of the surgical robot unchanged, the attitude of the operating end of the surgical robot shown on the main display device corresponds to the adjusted attitude of the master control component.
In this embodiment, the server can determine, according to the adjusted operation mapping conversion relationship, the adjustment data for adjusting the attitude of the master control component, and adjust the attitude of the master control component based on the determined adjustment data.
Specifically, as mentioned above, the operation mapping conversion relationship may include a coordinate conversion relationship and a rotational position relationship. The server can determine, according to the coordinate conversion relationship, the second relative position of the master control component relative to the post-switch main display device, and determine, based on the second relative position and the rotational position relationship, the adjustment data for the master control component after the main display device is switched.
In this embodiment, the server can determine, according to the determined coordinate conversion relationship, i.e. the transformation matrix TH2H1, the second relative positions of the master control component relative to the post-switch main display device; that is, calculate from TH2H1 the second relative positions {Ma1}' and {Ma2}' of the master control components {Ma1} and {Ma2} in {Me2}.
Further, based on the second relative positions {Ma1}' and {Ma2}', the server can calculate the rotation matrices Rma1 and Rma2 that match {Ma1}' and {Ma2}' to {Sa1} and {Sa2}, thereby obtaining the adjustment data for the master control component after the main display device is switched.
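The matching rotation itself reduces to aligning two orientations. A sketch, under the assumption that both orientations are available as rotation matrices:

    import numpy as np

    def matching_rotation(R_view_master, R_lens_instrument):
        """Rotation that re-orients a master handle so that its attitude in the
        post-switch view frame matches the instrument-tip attitude seen through
        the endoscope. Returns Rma such that Rma @ R_view_master == R_lens_instrument.
        """
        return R_lens_instrument @ R_view_master.T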
In some embodiments, referring to Figure 11, a schematic diagram of adjusting the master control component is shown, taking as an example the display devices switching from the first observation mode to the second observation mode, that is, the main display device being switched from the first display device to the second display device.
Specifically, the server can obtain the display poses of the main display device before and after the switch, such as the first display frame {Me1} and the second display frame {Me2}.
Further, from the first display frame {Me1} before the main display device is switched and the kinematic model of the doctor's trolley, the server determines the first relative position Pe1 of {Me1} relative to the base frame {Mb} before the switch.
Further, based on the first relative position Pe1 and an ergonomic model, the server can determine the coordinates of the operator's key body parts before the switch, such as the position Ps1 of the operator's shoulder frame {Ms} relative to the base frame {Mb}.
Similarly, from the second display frame {Me2} after the main display device is switched and the kinematic model of the doctor's trolley, the server determines the second relative position Pe2 of {Me2} relative to the base frame {Mb} after the switch, and, based on the second relative position Pe2 and the ergonomic model, determines the coordinates of the operator's key body parts after the switch, such as the position Ps2 of the shoulder frame {Ms}' relative to the base frame {Mb}.
Further, based on the coordinates of the operator's key body parts before and after the switch of the main display device, such as the shoulder positions Ps1 and Ps2, the server can obtain the positional difference of the shoulders, i.e. Ps1 - Ps2.
In this embodiment, the server can adjust the operation mapping conversion relationship of the master control component according to the operator's reference position before the main display device is switched, such as the base frame {Mb} of the doctor's console, and the coordinate position of the master control component, in combination with the ergonomic model.
Further, the server can then convert the obtained shoulder position difference Ps1 - Ps2 based on the adjusted operation mapping conversion relationship to obtain the adjustment data of the master control component, i.e. the adjustment distance, namely the positional distance from {Mh} to {Mh}'.
Further, the server can control the master control component to move from {Mh} to {Mh}' to complete the adjustment of the master control component.
In some embodiments, when the server adjusts the master control component based on the adjustment data, it can determine the movement trajectory data of the master control component based on the adjustment data and then control the adjustment of the master control component according to the movement trajectory data.
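As an illustrative choice of trajectory, a quintic time-scaling with zero boundary velocity and acceleration could generate such movement trajectory data; the embodiment does not specify the planner, so the sketch below is only an assumption.

    import numpy as np

    def quintic_trajectory(p_start, p_goal, duration, dt=0.01):
        """Smooth repositioning of the master handle from {Mh} to {Mh}'.

        Returns an array of intermediate positions sampled every dt seconds.
        """
        p_start, p_goal = np.asarray(p_start, float), np.asarray(p_goal, float)
        t = np.clip(np.arange(0.0, duration + dt, dt) / duration, 0.0, 1.0)
        s = 10 * t**3 - 15 * t**4 + 6 * t**5   # smooth 0 -> 1 profile
        return p_start + s[:, None] * (p_goal - p_start)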
In some embodiments, when switching the main display device, the server can also perform a safety judgment to decide whether to carry out the switch, which may include: obtaining the operator's pose, the operator's pose being used to judge whether the operator is in a safe position; and, when the operator is in a safe position, controlling the multiple display devices to switch modes based on a display-device mode-switching instruction.
The operator's pose refers to the operator's attitude and coordinate position; specifically, it may be the pose of a certain body part of the operator, such as the head or another part of the body.
In this embodiment, after obtaining the operator's pose, the server can perform an operator safety check based on that pose to judge whether the operator is in a safe position.
In this embodiment, when the server determines that the operator is in a safe position, the server can control the multiple display devices to switch modes based on the display-device mode-switching instruction.
The mode-switching instruction is an instruction used to control the display devices to switch observation modes; for example, it may switch from the first observation mode to the second observation mode, or from the second observation mode to the first, which is not limited here.
In this embodiment, referring to Figure 12, the safety check is described by taking the doctor's head as an example. After receiving the mode-switching instruction, the server can detect the position of the doctor's head through a sensor or other device and, based on the detected head position, judge whether the doctor's head is in a safe position, thereby deciding whether to control the display devices to switch modes, that is, whether to switch the main display device.
In this embodiment, the server controlling the display devices to switch modes may be controlling the switching mechanism connected to the display devices to perform the switch, thereby switching the mode of the display devices.
In this embodiment, still referring to Figure 12, when the server determines that the doctor's head is not in a safe position, it can control the switching mechanism to stop its motion, that is, not carry out the mode switch of the display devices.
In this embodiment, when the server determines that the doctor's head is in a safe position, the server can control the switching mechanism to perform the switching action and control the display devices to switch modes.
Further, while controlling the switching mechanism to switch, the server can evaluate the current position of the display devices and judge whether the target position has been reached.
In this embodiment, when the display devices have not reached the target position, the server can continue to obtain the position of the doctor's head and continue the judgment, so as to ensure that the doctor's head remains in a safe position throughout the switching process.
In this embodiment, when the server determines that the display devices are at the target position, the server can put the display devices into a floating state, that is, a state in which the doctor can manually drag them.
In this embodiment, when the display devices are in the floating state, the doctor can drag them to a pose that matches his or her own conditions.
In this embodiment, after determining that the display devices are in the floating state, the server can detect in real time whether an adjustment-finished instruction has been received, that is, judge whether the doctor has finished manually dragging the display devices.
In this embodiment, when the server determines that the adjustment-finished instruction has been received, the server can lock the display devices, that is, lock them so that they cannot be switched, completing the switch of the display devices.
In the above embodiment, by detecting and evaluating data on the operator's target body part both before and during the mode switch of the display devices, the mode switch is always carried out while the operator is in a safe position, which guarantees the safety of the display-device switch and improves the accuracy of the switching.
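The safety-gated switching flow of Figure 12 can be summarized as a small loop; the callback names below are placeholders for the sensors and actuators described in the text, not an API defined by the embodiment.

    import time

    def safe_mode_switch(head_is_safe, step_mechanism, at_target,
                         adjustment_finished, lock, poll=0.05):
        """Sketch of the safety-gated switching flow (callbacks assumed).

        head_is_safe(): True if the detected head position is in the safe zone.
        step_mechanism(): advances the switching mechanism by one increment.
        at_target(): True once the display has reached its target position.
        adjustment_finished(): True when the doctor ends manual dragging.
        lock(): locks the display devices after adjustment.
        """
        while not at_target():
            if not head_is_safe():
                time.sleep(poll)      # hold the mechanism until the head is safe again
                continue
            step_mechanism()
        # Floating state: wait for the doctor to finish manual fine-tuning.
        while not adjustment_finished():
            time.sleep(poll)
        lock()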
Figure 13 shows a schematic diagram of a control method for a doctor's console in another embodiment, which is described in detail below based on Figure 13.
In this embodiment, the server can judge whether a display-mode switching instruction has been received and, when it determines that such an instruction has been received, calculate, through the kinematic model of the surgical trolley, the endoscope frame {Se} installed on the operating table and the instrument frames {Sa1} and {Sa2}.
Further, the server can calculate, through the kinematic model of the doctor's trolley, the frames {Ma1} and {Ma2} of the master manipulators on the console and the first display device frame {Me1}.
Further, the server can control the display devices to switch modes and, after the mode switch is completed, calculate the second display device frame {Me2} through the trolley kinematic model.
Further, the server can perform calculations based on the endoscope frame {Se}, the instrument frames {Sa1} and {Sa2}, the master-manipulator frames {Ma1} and {Ma2}, the first display device frame {Me1}, and the second display device frame {Me2}, and obtain the adjustment data for the master manipulators.
Further, the server can detect the position of the operator's arms; if they are within the safe range, it runs the trajectory planning and controls the master hands to match the attitude, that is, performs the adjustment of the master manipulators; if not, it prompts the operator to mind the arm position and continues detecting.
Further, the server can detect whether the trajectory planning has finished, and stops detecting once the trajectory planning is complete.
It should be understood that although the steps in the flowcharts of Figures 7 and 13 are shown sequentially as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Figures 7 and 13 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is also not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In some embodiments, a control apparatus for a doctor's console is provided, where the doctor's console includes multiple display devices and a master control component, and the master control component is used to control a surgical robot according to a master-slave mapping relationship. For details, see the description above.
In this embodiment, as shown in Figure 14, the control apparatus for the doctor's console may include a detection module 141, an adjustment module 142, and a control module 143, where:
the detection module 141 is used to detect the main display device selected by the operator from among the multiple display devices;
the adjustment module 142 is used to adjust the operation mapping conversion relationship between the master control component and the surgical robot according to the display pose of the main display device; and
the control module 143 is used to control, through the master control component and according to the operation mapping conversion relationship, the surgical robot to perform operations.
In some embodiments, the detection module 141 may include:
an acquisition sub-module, used to obtain a detection image, where the detection image contains the operator and at least one display device;
a recognition sub-module, used to recognize, in the detection image, the first image position relationship between the operator's head and each display device; and
a determination sub-module, used to determine the main display device based on the first image position relationship.
In some embodiments, the above apparatus may also include:
a recognition module, used to recognize, in the detection image, the second image position relationship between the same operator's hands and the master control component.
In this embodiment, the determination sub-module is used to determine the main display device by combining the first image position relationship and the second image position relationship.
In some embodiments, the detection module 141 is used to determine the main display device used by the operator by detecting an action on a confirmation option used to indicate the main display device.
In some embodiments, the adjustment module 142 is used to adjust the operation mapping conversion relationship according to the detected change in display pose between the two main display devices before and after the operator switches.
In some embodiments, the adjustment module 142 may include:
a display-pose-change detection sub-module, used to detect a change in the display pose of the selected main display device caused by the operator's manipulation of that device; and
an adjustment sub-module, used to adjust the operation mapping conversion relationship according to the change in display pose.
In some embodiments, the control module is used to adjust the attitude of the master control component according to the operation mapping conversion relationship so that, with the pose of the surgical robot unchanged, the attitude of the operating end of the surgical robot shown on the main display device corresponds to the adjusted attitude of the master control component.
In some embodiments, the above apparatus may also include:
an operator-pose acquisition module, used to obtain the operator's pose, which is used to judge whether the operator is in a safe position; and
a switching control module, used to control, when the operator is in a safe position, the multiple display devices to switch modes based on a display-device mode-switching instruction.
In some embodiments, the switching control module is used to control, based on the display-device mode-switching instruction, the multiple display devices to switch from the first display mode to the second display mode.
For the specific limitations of the control apparatus for the doctor's console, reference may be made to the limitations of the control method for the doctor's console above, which are not repeated here. Each module in the above control apparatus may be implemented wholly or partly by software, hardware, or a combination thereof. The above modules may be embedded in or independent of the processor of a computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can call and execute the operations corresponding to each module.
In some embodiments, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in Figure 15. The computer device includes a processor, a memory, a network interface, and a database connected through a system bus. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer-readable instructions, and a database. The internal memory provides an environment for running the operating system and the computer-readable instructions in the non-volatile storage medium. The database of the computer device is used to store data such as display poses, operation mapping conversion relationships, and adjustment data. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer-readable instructions, when executed by the processor, implement a control method for a doctor's console.
A person skilled in the art can understand that the structure shown in Figure 15 is only a block diagram of part of the structure related to the solution of this application and does not constitute a limitation on the computer device to which the solution is applied; the specific computer device may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently.
A computer device includes a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the processors, cause the one or more processors to perform the following steps: detecting the main display device selected by the operator from among the multiple display devices; adjusting the operation mapping conversion relationship between the master control component and the surgical robot according to the display pose of the main display device; and controlling, through the master control component and according to the operation mapping conversion relationship, the surgical robot to perform operations.
One or more computer-readable storage media store computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps: detecting the main display device selected by the operator from among the multiple display devices; adjusting the operation mapping conversion relationship between the master control component and the surgical robot according to the display pose of the main display device; and controlling, through the master control component and according to the operation mapping conversion relationship, the surgical robot to perform operations.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by computer-readable instructions instructing the relevant hardware. The computer-readable instructions can be stored in a computer-readable storage medium, which may be non-volatile or volatile. When executed, the computer-readable instructions may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as the combinations of these technical features are not contradictory, they should be considered to be within the scope of this specification.
The above embodiments express only several implementations of this application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be pointed out that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of this application, all of which fall within the protection scope of this application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (16)

  1. A control method for a doctor's console, wherein the doctor's console includes multiple display devices and a master control component, the master control component being used to control a surgical robot according to a master-slave mapping relationship, the control method comprising:
    detecting the main display device selected by an operator from among the multiple display devices;
    adjusting the operation mapping conversion relationship between the master control component and the surgical robot according to the display pose of the main display device; and
    controlling, through the master control component and according to the operation mapping conversion relationship, the surgical robot to perform operations.
  2. The method according to claim 1, wherein detecting the main display device selected by the operator from among the multiple display devices comprises:
    obtaining a detection image, wherein the detection image contains the operator and at least one display device;
    recognizing, in the detection image, a first image position relationship between the operator's head and each display device; and
    determining the main display device based on the first image position relationship.
  3. The method according to claim 2, wherein the method further comprises:
    recognizing, in the detection image, a second image position relationship between the same operator's hands and the master control component; and
    determining the main display device based on the first image position relationship comprises:
    determining the main display device by combining the first image position relationship and the second image position relationship.
  4. The method according to claim 1, wherein detecting the main display device selected by the operator from among the multiple display devices comprises:
    determining the main display device used by the operator by detecting an action on a confirmation option used to indicate the main display device.
  5. The method according to claim 1, wherein adjusting the operation mapping conversion relationship between the master control component and the surgical robot according to the display pose of the main display device comprises:
    adjusting the operation mapping conversion relationship according to the detected change in display pose between the two main display devices before and after the operator switches.
  6. The method according to claim 1, wherein adjusting the operation mapping conversion relationship between the master control component and the surgical robot according to the display pose of the main display device comprises:
    detecting a change in the display pose of the selected main display device caused by the operator's manipulation of that main display device; and
    adjusting the operation mapping conversion relationship according to the change in display pose.
  7. The method according to claim 1, wherein controlling, through the master control component and according to the operation mapping conversion relationship, the surgical robot to perform operations comprises:
    adjusting the attitude of the master control component according to the operation mapping conversion relationship so that, with the pose of the surgical robot unchanged, the attitude of the operating end of the surgical robot displayed on the main display device corresponds to the adjusted attitude of the master control component.
  8. The method according to claim 1, wherein the method further comprises:
    obtaining the operator's pose, the operator's pose being used to judge whether the operator is in a safe position; and
    controlling, when the operator is in a safe position, the multiple display devices to switch modes based on a display-device mode-switching instruction.
  9. The method according to claim 8, wherein controlling the multiple display devices to switch modes based on the display-device mode-switching instruction comprises:
    controlling, based on the display-device mode-switching instruction, the multiple display devices to switch from a first display mode to a second display mode.
  10. A doctor's console, wherein the doctor's console comprises a memory, a processor, multiple display devices, and a master control component;
    the multiple display devices are used to display the motion of a surgical robot;
    the memory stores computer-readable instructions, and the processor, when executing the computer-readable instructions, implements the steps of the method according to any one of claims 1 to 9 to adjust the operation mapping conversion relationship between the master control component and the surgical robot; and
    the master control component is used to control, according to the operation mapping conversion relationship, the surgical robot to perform operations.
  11. The doctor's console according to claim 10, wherein the multiple display devices comprise at least one of AR glasses and a display screen.
  12. The doctor's console according to claim 11, wherein the AR glasses provide pose data reflecting the display pose.
  13. The doctor's console according to claim 10, wherein the multiple display devices comprise a first display device and a second display device;
    the first display device is able to switch from a first position to a second position based on a mode-switching instruction; and
    the second display device is able to switch from a third position to a fourth position based on the mode-switching instruction.
  14. A robot system, wherein the system comprises a surgical robot and the doctor's console according to any one of claims 10 to 13;
    the doctor's console is used to generate robot control instructions; and
    the surgical robot is used to perform operations based on the robot control instructions.
  15. The system according to claim 14, wherein the surgical robot comprises at least one robotic arm, and a surgical instrument is mounted on each robotic arm.
  16. A computer-readable storage medium having computer-readable instructions stored thereon, wherein the computer-readable instructions, when executed by a processor, implement the steps of the method according to any one of claims 1 to 9.
PCT/CN2022/118380 2021-09-16 2022-09-13 医生控制台的控制方法、医生控制台、机器人系统和介质 WO2023040817A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111086353.4 2021-09-16
CN202111086353.4A CN113729967B (zh) 2021-09-16 2021-09-16 医生控制台的控制方法、医生控制台、机器人系统和介质

Publications (1)

Publication Number Publication Date
WO2023040817A1 true WO2023040817A1 (zh) 2023-03-23

Family

ID=78739308

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/118380 WO2023040817A1 (zh) 2021-09-16 2022-09-13 医生控制台的控制方法、医生控制台、机器人系统和介质

Country Status (2)

Country Link
CN (1) CN113729967B (zh)
WO (1) WO2023040817A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113729967B (zh) * 2021-09-16 2023-09-19 上海微创医疗机器人(集团)股份有限公司 医生控制台的控制方法、医生控制台、机器人系统和介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111176425A (zh) * 2018-11-12 2020-05-19 宏碁股份有限公司 多屏幕操作方法与使用此方法的电子系统
CN111176524B (zh) * 2019-12-25 2021-05-28 歌尔股份有限公司 一种多屏显示系统及其鼠标切换控制方法
CN112463097B (zh) * 2020-12-11 2022-11-15 杭州拼便宜网络科技有限公司 信息显示方法及其系统
CN112817550B (zh) * 2021-02-07 2023-08-22 联想(北京)有限公司 一种数据处理方法及装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9538962B1 (en) * 2014-12-31 2017-01-10 Verily Life Sciences Llc Heads-up displays for augmented reality network in a medical environment
US20190053851A1 (en) * 2017-08-15 2019-02-21 Holo Surgical Inc. Surgical navigation system and method for providing an augmented reality image during operation
CN109806002A (zh) * 2019-01-14 2019-05-28 微创(上海)医疗机器人有限公司 一种用于手术机器人的成像系统及手术机器人
CN113271883A (zh) * 2019-02-06 2021-08-17 柯惠Lp公司 用于机器人手术系统的手眼协调系统
CN212574961U (zh) * 2020-08-31 2021-02-23 微创(上海)医疗机器人有限公司 控制台、医生控制台及手术机器人
CN113729967A (zh) * 2021-09-16 2021-12-03 上海微创医疗机器人(集团)股份有限公司 医生控制台的控制方法、医生控制台、机器人系统和介质

Also Published As

Publication number Publication date
CN113729967A (zh) 2021-12-03
CN113729967B (zh) 2023-09-19

Similar Documents

Publication Publication Date Title
JP7367140B2 (ja) 走査ベースの位置付けを伴う遠隔操作手術システム
US11779418B2 (en) System and apparatus for positioning an instrument in a body cavity for performing a surgical procedure
US11717309B2 (en) Medical manipulator and method of controlling the same
US8892224B2 (en) Method for graphically providing continuous change of state directions to a user of a medical robotic system
WO2018214840A1 (zh) 手术机器人系统及手术器械位置的显示方法
JP3540362B2 (ja) 手術用マニピュレータの制御システム及びその制御方法
JP5737796B2 (ja) 内視鏡操作システムおよび内視鏡操作プログラム
US9844416B2 (en) Medical manipulator and method of controlling the same
KR20140139840A (ko) 디스플레이 장치 및 그 제어방법
KR20140112207A (ko) 증강현실 영상 표시 시스템 및 이를 포함하는 수술 로봇 시스템
JP2021531910A (ja) ロボット操作手術器具の位置を追跡システムおよび方法
US20230372014A1 (en) Surgical robot and motion error detection method and detection device therefor
WO2022002155A1 (zh) 主从运动的控制方法、机器人系统、设备及存储介质
CN113876434A (zh) 主从运动的控制方法、机器人系统、设备及存储介质
WO2023083077A1 (zh) 保持rc点不变的方法、机械臂、设备、机器人和介质
US11992283B2 (en) Systems and methods for controlling tool with articulatable distal portion
WO2023083078A1 (zh) 机械臂、从操作设备和手术机器人
WO2023040817A1 (zh) 医生控制台的控制方法、医生控制台、机器人系统和介质
WO2022166929A1 (zh) 计算机可读存储介质、电子设备及手术机器人系统
JP2023507063A (ja) 手術中に画像取込装置を制御するための方法、装置、およびシステム
CN113876433A (zh) 机器人系统以及控制方法
US20230139425A1 (en) Systems and methods for optimizing configurations of a computer-assisted surgical system for reachability of target objects
CN115847385A (zh) 调整臂控制方法、装置、系统、计算机设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22869186

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE