CN117562674A - Surgical robot and method performed by the same - Google Patents


Info

Publication number
CN117562674A
Authority
CN
China
Prior art keywords
surgical
robotic arms
patient
collision detection
platform
Prior art date
Legal status
Pending
Application number
CN202410041901.9A
Other languages
Chinese (zh)
Inventor
金存山
旷静
史文勇
Current Assignee
Kochi Medical Technology Beijing Co ltd
Original Assignee
Kochi Medical Technology Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Kochi Medical Technology Beijing Co ltd filed Critical Kochi Medical Technology Beijing Co ltd
Priority to CN202410041901.9A priority Critical patent/CN117562674A/en
Publication of CN117562674A publication Critical patent/CN117562674A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30 Surgical robots
    • A61B34/35 Surgical robots for telesurgery
    • A61B34/70 Manipulators specially adapted for use in surgery
    • A61B34/74 Manipulators with manual electric input means
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/1605 Simulation of manipulator lay-out, design, modelling of manipulator
    • B25J9/1674 Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1676 Avoiding collision or forbidden zones
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J9/1682 Dual arm manipulator; Coordination of several manipulators

Abstract

The application provides surgical robots and methods performed thereby. According to one embodiment, a surgical robot includes a surgical console and a patient surgical platform having a plurality of robotic arms. The surgical console is configured to collect control input from an operator, generate control information for controlling at least a portion of the plurality of robotic arms of the patient surgical platform based on the control input, and transmit the control information to the patient surgical platform. The patient surgical platform is configured to collect status information of the plurality of robotic arms, determine collision detection results of the plurality of robotic arms using a virtual reality physics engine capable of simulating movement of the plurality of robotic arms based on the status information and control information received from the surgical console, and perform a collision prevention operation in response to the collision detection results indicating that two or more robotic arms will collide.

Description

Surgical robot and method performed by the same
Technical Field
The present disclosure relates to the field of medical devices, and more particularly to a surgical robot and a method performed thereby.
Background
Minimally invasive surgical robots generally include a physician console and a patient surgical platform. The patient surgical platform is equipped with a plurality of multi-axis surgical robotic arms, and surgical instruments may be mounted on the robotic arms. By receiving control instructions from the doctor console, the patient surgical platform can achieve multiple spatial movements, completing various surgical actions.
Disclosure of Invention
This section is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This section is not intended to identify essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
It is an object of the present disclosure to provide an improved surgical robot and a method performed thereby. In particular, one of the technical problems addressed by the present disclosure is that in existing minimally invasive surgical robots, especially when two doctor consoles simultaneously control one patient surgical platform, the robotic arms may collide and thereby injure the patient.
According to a first aspect of the present disclosure, a surgical robot is provided. The surgical robot includes a surgical console and a patient surgical platform having a plurality of robotic arms. The surgical console is configured to collect control input from an operator, generate control information for controlling at least a portion of the plurality of robotic arms of the patient surgical platform based on the control input, and transmit the control information to the patient surgical platform. The patient surgical platform is configured to collect status information of the plurality of robotic arms, determine a collision detection result of the plurality of robotic arms using a virtual reality physics engine capable of simulating movement of the plurality of robotic arms, based on the status information and the control information received from the surgical console, and perform a collision prevention operation in response to the collision detection result indicating that two or more robotic arms will collide.
According to the first aspect described above, because collision detection is performed using the virtual reality physics engine and a collision prevention operation is performed when a collision is about to occur, the occurrence of a collision can be accurately predicted and prevented, thereby avoiding potential accidents and improving the safety of the system.
In one embodiment of the present disclosure, the patient surgical platform includes a first controller and a second controller. The first controller is configured to collect status information of the plurality of robotic arms and send the status information and the control information to the second controller. The second controller is configured to determine the collision detection result using the virtual reality physics engine based on the status information and the control information, and transmit the collision detection result to the first controller. The first controller is configured to perform a collision prevention operation in response to the collision detection result indicating that two or more robotic arms will collide.
In one embodiment of the present disclosure, the collision prevention operation includes one or more of the following operations: stopping a surgical action to be performed by the plurality of robotic arms based on the status information and the control information; and transmitting the collision detection result to the surgical console.
In one embodiment of the present disclosure, the patient surgical platform is configured to cause the plurality of robotic arms to perform respective surgical actions based on the status information and the control information in response to the collision detection result indicating that no robotic arm will collide.
In one embodiment of the present disclosure, the patient surgical platform is configured to build a three-dimensional model of the plurality of robotic arms using the virtual reality physics engine, update the three-dimensional model based on the state information and the control information, determine whether two or more robotic arms will collide in the updated three-dimensional model, and take a result of the determination as the collision detection result.
In one embodiment of the present disclosure, a surgical instrument component is mounted on at least a portion of the plurality of robotic arms.
In one embodiment of the present disclosure, the surgical console is configured to notify the operator of the collision detection results received from the patient surgical platform.
In one embodiment of the present disclosure, the collision detection result is notified to the operator in one or more of the following forms: data; an image; sound; an optical signal; and force feedback.
In one embodiment of the present disclosure, the number of surgical consoles is one or more.
According to a second aspect of the present disclosure, a method performed by a surgical robot is provided. The method includes collecting, by a surgical console, control input from an operator. The method further includes generating, by the surgical console, control information for controlling at least a portion of a plurality of robotic arms of a patient surgical platform based on the control input. The method further includes sending, by the surgical console, the control information to the patient surgical platform. The method further includes acquiring, by the patient surgical platform, status information of the plurality of robotic arms. The method further includes determining, by the patient surgical platform, a collision detection result of the plurality of robotic arms using a virtual reality physics engine capable of simulating movement of the plurality of robotic arms, based on the status information and the control information received from the surgical console. The method further includes performing, by the patient surgical platform, a collision prevention operation in response to the collision detection result indicating that two or more robotic arms will collide.
According to the second aspect described above, because collision detection is performed using the virtual reality physics engine and a collision prevention operation is performed when a collision is about to occur, the occurrence of a collision can be accurately predicted and prevented, thereby avoiding potential accidents and improving the safety of the system.
According to a third aspect of the present disclosure, a computer-readable storage medium is provided. Program instructions are stored on the computer readable storage medium. The program instructions, when executed by at least one processor, cause the at least one processor to perform operations of the patient surgical platform according to the second aspect described above.
Drawings
In order to more clearly illustrate the technical solutions of the present disclosure, the drawings of the embodiments are briefly described below. Clearly, the structural schematic drawings in the following figures are not necessarily drawn to scale but present features in simplified form. Moreover, the following drawings illustrate only some embodiments of the present disclosure and are not intended to limit it.
Fig. 1 is a block diagram illustrating a surgical robot according to an embodiment of the present disclosure;
FIG. 2 is a block diagram illustrating one exemplary implementation of a patient surgical platform included in the surgical robot of FIG. 1;
FIG. 3 is a block diagram illustrating one exemplary implementation of the surgical robot of FIG. 1; and
fig. 4 is a flowchart illustrating a method performed by a surgical robot according to an embodiment of the present disclosure.
Detailed Description
For purposes of explanation, certain details are set forth in the following description in order to provide a thorough understanding of the disclosed embodiments. It is apparent, however, to one skilled in the art that the embodiments may be practiced without these specific details or with an equivalent arrangement.
As previously mentioned, minimally invasive surgical robots generally include a doctor console and a patient surgical platform. The patient surgical platform is equipped with a plurality of multi-axis surgical robotic arms, and surgical instruments may be mounted on the robotic arms. Minimally invasive surgical robots place high demands on the real-time performance and safety of motion control. The tasks of the doctor console and patient surgical platform system are complex, and especially when two doctor consoles simultaneously control one patient surgical platform, the robotic arms may collide and injure the patient. It is therefore desirable to quickly anticipate a motion collision during surgery and, when a collision is likely, to have the patient surgical platform perform a collision prevention measure to prevent a potential accident.
The present disclosure provides an improved surgical robot and methods performed thereby. Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 1 is a block diagram illustrating a surgical robot according to an embodiment of the present disclosure. As an illustrative example, the surgical robot may be a minimally invasive surgical robot. It should be noted that the principles of the present disclosure may also be applied to other surgical robots that have multiple robotic arms and thus the potential for robotic arm collisions. As shown in fig. 1, surgical robot 10 includes a surgical console 12 and a patient surgical platform 14 having a plurality of robotic arms 141. The number of surgical consoles 12 may be one or more (e.g., two). The surgical console 12 is configured to collect control input from an operator, generate control information for controlling at least a portion of the plurality of robotic arms 141 of the patient surgical platform 14 based on the control input, and transmit the control information to the patient surgical platform 14. For example, the operator may be a doctor, whose control input (e.g., the spatial position and movement of each finger) may be gathered through data gloves worn by the doctor or through other suitable input devices. The control information (e.g., the motion and motion speed of each robotic arm) may be calculated using various motion solving algorithms with the acquired control input as input parameters. Where there is a single surgical console 12, a robotic arm collision may occur when the acquired control input requires two or more robotic arms. Where there are multiple surgical consoles 12, each used by an operator, a robotic arm collision may occur when the control input collected at each surgical console 12 requires at least one robotic arm.
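The disclosure does not specify the motion solving algorithm, so the following is only a loose illustration: a tracked hand displacement from a hypothetical data glove is scaled down to an arm-tip displacement and clamped to a maximum step for safety. The function name `solve_motion`, the 0.25 scale factor, and the 5 mm step limit are all assumptions, not the patent's actual method.

```python
def solve_motion(hand_delta, scale=0.25, max_step=0.005):
    """Map an operator hand displacement (metres, as an (x, y, z) tuple)
    to a robotic-arm tip displacement.

    The displacement is scaled down for precision work, then clamped so
    its magnitude never exceeds max_step (an illustrative safety limit).
    """
    # Scale the raw input down for fine control.
    step = [scale * d for d in hand_delta]
    # Clamp the Euclidean magnitude of the resulting step.
    norm = sum(d * d for d in step) ** 0.5
    if norm > max_step:
        step = [d * max_step / norm for d in step]
    return tuple(step)
```

A real console controller would of course solve full inverse kinematics per joint rather than a tip displacement; this sketch only shows the input-to-command shape of the step.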
The patient surgical platform 14 is configured to collect status information of the plurality of robotic arms 141, determine a collision detection result of the plurality of robotic arms 141 using a virtual reality physics engine 144 capable of simulating movement of the plurality of robotic arms 141, based on the status information and the control information received from the surgical console 12, and perform a collision prevention operation in response to the collision detection result indicating that two or more robotic arms will collide. For example, the patient surgical platform 14 may use sensors built into each robotic arm to collect the arm's status information (e.g., pose information of the arm and status information of its joints) in real time. The patient surgical platform 14 may build a three-dimensional model of the plurality of robotic arms 141 using the virtual reality physics engine 144, update the three-dimensional model based on the status information and the control information, determine whether two or more robotic arms will collide in the updated three-dimensional model, and take the result of the determination as the collision detection result. If two or more robotic arms will collide, the collision detection result may contain identification information of those arms; if no collision will occur, the collision detection result may indicate this. It should be noted that, in determining the collision detection result, the virtual reality physics engine 144 may simulate the movement of the plurality of robotic arms 141 with surgical instrument components mounted on at least a portion of them. In this way, collisions involving both the robotic arms and the surgical instrument components mounted on them can be prevented.
As an illustrative example, virtual reality physics engine 144 may include a three-dimensional rendering engine (e.g., an Open Scene Graph (OSG) three-dimensional rendering engine) and a physics engine capable of collision detection (e.g., a Flexible Collision Library (FCL) physics engine). The three-dimensional rendering engine may load predetermined three-dimensional model files (e.g., FBX model files) of the plurality of robotic arms 141 and construct three-dimensional geometry matrices of the plurality of robotic arms 141 to build the three-dimensional model. The collision detection physics engine may determine a rotation matrix for each component of each robotic arm from the status information and the control information, and apply these rotation matrices to the arm's three-dimensional geometry matrix to update the three-dimensional model. Note that when the OSG engine and the FCL engine are used together, the OSG three-dimensional geometry matrices must be converted into FCL three-dimensional geometry matrices. The collision detection physics engine may then compute pairwise intersections of the updated three-dimensional geometry matrices of the plurality of robotic arms: if an intersection is not empty, the two arms are determined to collide; if it is empty, no collision will occur. It should be noted that the implementation of the virtual reality physics engine is not limited to this example; other implementations are possible as long as the engine can simulate the motion of multiple robotic arms.
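The pairwise intersection test described above can be illustrated with a deliberately simplified stand-in for the FCL check: instead of exact mesh-to-mesh queries, this sketch applies each arm's predicted rotation and translation to its model vertices and then intersects axis-aligned bounding boxes pairwise. The function names and the bounding-box simplification are assumptions for illustration, not the disclosure's actual implementation.

```python
import itertools

def transform_point(p, rotation, translation):
    """Apply a 3x3 rotation matrix (list of rows) and a translation vector
    to one 3D point, as the updated geometry matrix step would."""
    return tuple(
        sum(rotation[i][j] * p[j] for j in range(3)) + translation[i]
        for i in range(3)
    )

def aabb(points):
    """Axis-aligned bounding box of a point set: (min_corner, max_corner)."""
    mins = tuple(min(p[i] for p in points) for i in range(3))
    maxs = tuple(max(p[i] for p in points) for i in range(3))
    return mins, maxs

def boxes_intersect(a, b):
    """Two AABBs overlap iff their intervals overlap on every axis."""
    (min_a, max_a), (min_b, max_b) = a, b
    return all(min_a[i] <= max_b[i] and min_b[i] <= max_a[i] for i in range(3))

def detect_collisions(arm_geometries, arm_poses):
    """Return every pair of arm identifiers whose predicted geometry overlaps.

    arm_geometries: {arm_id: list of (x, y, z) model vertices}
    arm_poses:      {arm_id: (rotation, translation)} predicted from the
                    status information and control information.
    """
    boxes = {}
    for arm_id, pts in arm_geometries.items():
        rot, trans = arm_poses[arm_id]
        boxes[arm_id] = aabb([transform_point(p, rot, trans) for p in pts])
    return [
        (a, b) for a, b in itertools.combinations(sorted(boxes), 2)
        if boxes_intersect(boxes[a], boxes[b])
    ]
```

A non-empty returned list plays the role of a collision detection result naming the colliding arms; an exact engine such as FCL would refine these broad-phase candidates with narrow-phase mesh queries.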
As an option, the collision prevention operation may include stopping a surgical action to be performed by the plurality of robotic arms 141 based on the status information and the control information. As another option, the collision prevention operation may include transmitting the collision detection result to the surgical console 12. For this option, surgical console 12 may be configured to notify the operator of the collision detection result received from the patient surgical platform 14. For example, the collision detection result may be conveyed to the operator in one or more of the following forms: data; an image; sound; an optical signal; and force feedback. In response to such feedback, the operator may delay or slow the relevant operation. As yet another option, the collision prevention operation may include a combination of the above operations.
With the surgical robot 10 shown in fig. 1, because collision detection is performed using a virtual reality physics engine and a collision prevention operation is performed when a collision is about to occur, the occurrence of a collision can be accurately predicted and prevented, thereby avoiding potential accidents and improving the safety of the system.
Optionally, the patient surgical platform 14 may be configured to cause the plurality of robotic arms to perform corresponding surgical actions based on the status information and the control information in response to the collision detection result indicating that no robotic arm will collide. In this way, the surgical operation can proceed normally when no robotic arm collision will occur.
As one example, patient surgical platform 14 may be implemented with at least one processor and at least one memory storing program instructions. The program instructions, when executed by the at least one processor, cause the at least one processor to perform the operations of the patient surgical platform 14 described above. Examples of processors include, but are not limited to, general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), processors based on a multi-core processor architecture, microcontroller units (MCUs), and the like. The memory may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, fixed and removable memory, and so forth. As another example, patient surgical platform 14 may contain hardware circuitry, such as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA), to implement the corresponding functions.
Accordingly, at least one aspect of the present disclosure provides a computer-readable storage medium. Program instructions are stored on the computer-readable storage medium. The program instructions, when executed by at least one processor, cause the at least one processor to perform the operations of the patient surgical platform 14 described above. Examples of a computer-readable storage medium include, but are not limited to, hard disks, optical disks, removable storage media, solid state memory, random access memory (RAM), and the like.
Fig. 2 is a block diagram illustrating one exemplary implementation of a patient surgical platform included in the surgical robot of fig. 1. As shown in fig. 2, in this exemplary implementation, the patient surgical platform 14 includes a plurality of robotic arms 141, a first controller 142, a second controller 143, and a virtual reality physics engine 144. The first controller 142 is configured to collect status information of the plurality of robotic arms 141 and send the status information and the control information to the second controller 143. The second controller 143 is configured to determine the collision detection result using the virtual reality physics engine 144 based on the status information and the control information, and transmit the collision detection result to the first controller 142. The first controller 142 is configured to perform a collision prevention operation in response to the collision detection result indicating that two or more robotic arms will collide.
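The division of labor between the two controllers might be sketched as follows, with the physics-engine query abstracted into a callable. The class and method names are hypothetical, and real controllers would communicate over a bus or network rather than direct calls.

```python
class CollisionDetectionController:
    """Stand-in for the dedicated second controller: it only runs the
    collision query and reports the result back."""

    def __init__(self, detect_fn):
        # detect_fn plays the role of the physics-engine check; it takes
        # status and control information and returns colliding arm pairs.
        self.detect_fn = detect_fn

    def check(self, status_info, control_info):
        return self.detect_fn(status_info, control_info)


class PlatformController:
    """Stand-in for the first controller: it owns the arms, forwards data
    to the detector, and gates motion on the detection result."""

    def __init__(self, detector):
        self.detector = detector
        self.log = []  # records what happened each cycle, for feedback

    def step(self, status_info, control_info):
        colliding_pairs = self.detector.check(status_info, control_info)
        if colliding_pairs:
            # Collision prevention operation: stop the pending surgical
            # action; the log entry models the result sent to consoles.
            self.log.append(("stopped", colliding_pairs))
            return False
        # No collision predicted: the arms may execute the action.
        self.log.append(("executed", control_info))
        return True
```

The point of the split is visible even in this toy form: the platform controller's control path stays simple because the (potentially expensive) detection logic lives behind a single query boundary.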
For the implementation scheme of fig. 2, because an independent collision detection controller performs the collision detection, the complexity of a single-controller design is reduced, system risk is distributed, and the stability, real-time performance, and motion control safety of the multi-task system are improved.
Fig. 3 is a block diagram illustrating one exemplary implementation of the surgical robot of fig. 1. In this exemplary implementation, the surgical robot is a minimally invasive surgical robot that includes two surgical consoles, implemented as doctor console A and doctor console B. A doctor console is operated by a doctor to perform surgery on the patient and includes the corresponding mechanical structure, sensors, and hardware and software platforms to do so. The controller of a doctor console, as the console's core component, mainly handles the human-machine interaction interface, real-time data and image display, database management, communication control, and the like. The controller can obtain motion control information input by the operator, perform motion solving, transmit the related information to the patient surgical platform to generate the corresponding motion output, and receive and process feedback information from the patient surgical platform. For simplicity, only the controller 321-1 of doctor console A and the controller 321-2 of doctor console B are shown. The minimally invasive surgical robot also includes a patient surgical platform, which responds to motion control instructions from the doctor consoles promptly, accurately, and safely, and feeds the corresponding results back to the doctor consoles. The patient surgical platform contains a plurality of surgical robotic arms 341 (four in the example of fig. 3) and two separate controllers: the patient surgical platform controller 342 and the collision detection controller 343. The patient surgical platform controller 342 mainly handles the human-machine interaction interface, database management, communication control, real-time motion control, and other functions.
The collision detection controller 343 mainly performs collision detection of the robotic arms. Although not shown in the figures, the patient surgical platform may also include surgical instrument component units mounted on the surgical robotic arms.
Next, a flow for motion control and collision detection protection is described. In step 1, information such as operator A's operation control inputs (e.g., various surgical actions) and the current human-machine state is collected and processed by the doctor console A controller 321-1. Information such as operator B's operation control inputs (e.g., various surgical actions) and the current human-machine state is collected and processed by the doctor console B controller 321-2.
In step 2, the doctor console A controller 321-1 performs motion solving, converting operator A's operation instructions into the corresponding motion control instructions for the surgical robot. The doctor console B controller 321-2 performs motion solving, converting operator B's operation instructions into the corresponding motion control instructions for the surgical robot.
In step 3, the doctor console A controller 321-1 and the doctor console B controller 321-2 each communicate bidirectionally with the communication control unit of the patient surgical platform controller 342 via their own communication control units. The doctor console A controller 321-1 and the doctor console B controller 321-2 transmit operation control information (e.g., motion instructions after motion solving, motion speed, etc.), human-machine state information (e.g., whether protection locking is currently enabled), and the like, to the patient surgical platform controller 342. Later in the bidirectional communication of step 3, the patient surgical platform controller 342 may feed information such as its motion control output and collision detection results back to the doctor console A controller 321-1 and the doctor console B controller 321-2.
In step 4, the patient surgical platform controller 342 parses the motion solution information from the received information. In step 5, the real-time motion control processing unit of the patient surgical platform controller 342 parses the data of each joint of each robotic arm from the data of the real-time status detection and feedback section.
In step 6, the patient surgical platform controller 342 integrates the information obtained in steps 4 and 5, and then transmits the status information of the robotic arms (e.g., arm joint data and pose information) together with the operation control information to the dedicated collision detection controller 343 over local network communication. The joint data may include information such as the current movement speed and motor parameters, and the pose information may include the current spatial position, the relative positions between a joint and its adjacent joints, and the like.
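As a rough illustration of the kind of message step 6 might carry, the sketch below bundles per-joint data (movement speed, one illustrative motor parameter) and pose data into a single request for the collision detection controller. All field, class, and function names are assumptions; the disclosure does not define a message format.

```python
from dataclasses import asdict, dataclass


@dataclass
class JointState:
    angle_deg: float        # current joint angle
    speed_deg_s: float      # current movement speed
    motor_current_a: float  # illustrative motor parameter


@dataclass
class ArmState:
    arm_id: str
    joints: list[JointState]                      # ordered from base to tip
    tip_position_mm: tuple[float, float, float]   # current spatial position


def build_detection_request(arm_states, control_info):
    """Bundle status and operation control information into one message
    for the collision detection controller, as in step 6."""
    return {
        "arms": [asdict(s) for s in arm_states],
        "control": control_info,
    }
```

Because `asdict` recurses into nested dataclasses, the resulting dictionary is plain data, ready to serialize (e.g., as JSON) for the local network hop between the two controllers.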
In step 7, the collision detection controller 343 loads three-dimensional (3D) models of the plurality of surgical robotic arms of the surgical robot and, after importing the data received in step 6, performs the collision detection judgment using the virtual reality physics engine 344, thereby predicting the motion and updating the 3D models of the robotic arms. In step 8, collision detection controller 343 feeds the collision detection result (which may include information such as an alarm output) back to the real-time motion control processing unit of patient surgical platform controller 342 via local network communication.
In step 9, the real-time motion control processing unit also obtains the parsed motion solution information. In step 11, the real-time motion control processing unit detects the motion pose of the surgical robotic arms in real time. In step 10, the real-time motion control processing unit produces motion control output based on the input information obtained in steps 8, 9, and 11, and controls the surgical robotic arms to move. When the input information obtained in step 8 contains content such as alarm information, a corresponding motion protection action (or collision prevention operation) is generated as part of the content transferred later in step 12.
In step 12, the information obtained in steps 8 and 11, such as the collision detection result, the state of the robotic arms, and the motion process, is integrated and passed to the communication control unit, which transmits it in real time to the doctor console A controller 321-1 and the doctor console B controller 321-2, respectively.
In step 13, the doctor console A controller 321-1 and the doctor console B controller 321-2 each parse the motion control output result and the collision detection result from the patient surgical platform information received through their communication control units, and feed them back to the corresponding operators through output devices. The output modes for collision detection feedback may include, but are not limited to, data and image output, sound and light output, force feedback, and the like.
In a minimally invasive surgical robot, the overall system is complex, so the stability, real-time performance, and motion control safety of the multi-task system must be ensured. The collision detection task is therefore handled by a different controller than the other tasks. A 3D model of the robot is built in the collision detection controller that executes the collision detection task; the running pose and motion control data of the robotic arms are acquired in real time; collision detection is realized with a physics engine algorithm; and the results are fed back to the other controllers executing other tasks and to the operators, so that corresponding motion protection measures can be taken. Because the calculations for collision detection are performed using the virtual modeling technique of a 3D physics engine, the detection can be more accurate, so that the risk of a surgical action is known in advance. This robotic arm collision detection architecture decomposes the tasks of a complex system, coordinates multiple controllers, and uses an independent collision detection controller for collision detection, reducing the complexity of any single controller's control algorithm, distributing system risk, and improving the stability, real-time performance, and motion control safety of the multi-task system.
Fig. 4 is a flowchart illustrating a method performed by a surgical robot according to an embodiment of the present disclosure. At step 402, control inputs of an operator are collected by a surgical console. At step 404, control information for controlling at least a portion of a plurality of robotic arms of a patient surgical platform is generated by the surgical console based on the control input. At step 406, the control information is sent by the surgical console to the patient surgical platform. At step 408, status information of the plurality of robotic arms is acquired by the patient surgical platform. At step 410, collision detection results for the plurality of robotic arms are determined by the patient surgical platform, based on the status information and the control information received from the surgical console, using a virtual reality physics engine capable of simulating movement of the plurality of robotic arms. At step 412, a collision prevention operation is performed by the patient surgical platform in response to the collision detection result indicating that two or more robotic arms will collide. Details of the above steps are described above with respect to surgical robot 10 and are not repeated here. With the method shown in Fig. 4, since collision detection is performed using the virtual reality physics engine and a collision prevention operation is performed when a collision would otherwise occur, collisions can be accurately predicted and prevented, thereby avoiding potential accidents and improving the safety of the system.
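A minimal, hypothetical walk-through of the method of Fig. 4 is sketched below. The function and step labels map to steps 402-412; the "physics engine" is replaced by a callback, since it is the control flow, not the simulation, that the sketch demonstrates.

```python
# Hypothetical end-to-end sketch of the method of Fig. 4 (steps 402-412).
def run_method(control_input, arm_states, simulate_collision):
    log = []
    # 402/404: the console collects input and generates control information.
    control_info = {"targets": control_input}
    log.append("402/404: control info generated")
    # 406/408: the platform receives control info and acquires arm states.
    log.append("406/408: platform has %d arm states" % len(arm_states))
    # 410: the physics engine (here a callback) predicts whether arms collide.
    will_collide = simulate_collision(arm_states, control_info)
    log.append("410: collision predicted" if will_collide else "410: no collision")
    # 412: perform collision prevention only when a collision is predicted.
    if will_collide:
        log.append("412: motion stopped, result sent to console")
    else:
        log.append("412: surgical action executed")
    return log

log = run_method(control_input=[(0.1, 0.2)],
                 arm_states={"arm1": (0, 0), "arm2": (0.1, 0.2)},
                 simulate_collision=lambda states, info: True)
```

Swapping the callback for one that returns `False` exercises the other branch of step 412, in which the arms simply perform the commanded surgical action.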
References in the present disclosure to "one embodiment," "an embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. It should be noted that two blocks (or steps) shown in succession may in fact be executed substantially concurrently or the blocks (or steps) may sometimes be executed in the reverse order, depending upon the functionality involved.
It will be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure. In this disclosure, the term "and/or" includes any and all combinations of one or more of the associated listed terms. It will be further understood that the terms "comprises," "comprising," "has," "including," and/or "having," when used herein, specify the presence of stated features, elements, and/or components, but do not preclude the presence or addition of one or more other features, elements, components, and/or groups thereof. The term "coupled" as used herein encompasses direct and/or indirect coupling between two elements.
The disclosure includes any novel feature or combination of features disclosed herein either explicitly or in any of its generic forms. Various modifications and adaptations to the foregoing exemplary embodiments of this disclosure will become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings. However, any and all modifications and adaptations will still fall within the scope of the non-limiting and exemplary embodiments of this disclosure.

Claims (11)

1. A surgical robot, comprising:
a surgical console configured to collect control input of an operator, generate control information for controlling at least a portion of a plurality of robotic arms of a patient surgical platform based on the control input, and send the control information to the patient surgical platform;
the patient surgical platform having the plurality of robotic arms is configured to collect status information of the plurality of robotic arms, determine collision detection results of the plurality of robotic arms using a virtual reality physics engine capable of simulating movement of the plurality of robotic arms based on the status information and the control information received from the surgical console, and perform a collision prevention operation in response to the collision detection results indicating that two or more robotic arms will collide.
2. The surgical robot of claim 1, wherein the patient surgical platform comprises:
a first controller configured to collect state information of the plurality of robotic arms and transmit the state information and the control information to a second controller; and
the second controller configured to determine the collision detection result using the virtual reality physics engine based on the state information and the control information, and to transmit the collision detection result to the first controller;
wherein the first controller is configured to perform a collision prevention operation in response to the collision detection result indicating that two or more robotic arms will collide.
3. The surgical robot of claim 1 or 2, wherein the collision prevention operation comprises one or more of:
stopping a surgical action to be performed by the plurality of robotic arms based on the status information and the control information; and
sending the collision detection result to the surgical console.
4. The surgical robot of claim 1 or 2, wherein the patient surgical platform is configured to cause the plurality of robotic arms to perform respective surgical actions based on the status information and the control information in response to the collision detection result indicating that no robotic arms will collide.
5. The surgical robot of claim 1 or 2, wherein the patient surgical platform is configured to build a three-dimensional model of the plurality of robotic arms using the virtual reality physics engine, update the three-dimensional model based on the state information and the control information, determine whether two or more robotic arms will collide in the updated three-dimensional model, and take a result of the determination as the collision detection result.
6. The surgical robot of claim 1 or 2, wherein a surgical instrument component is mounted on at least a portion of the plurality of robotic arms.
7. The surgical robot of claim 3, wherein the surgical console is configured to notify the operator of the collision detection result received from the patient surgical platform.
8. The surgical robot of claim 7, wherein the collision detection result is notified to the operator in one or more of the following forms:
data; images; sound; optical signals; and force feedback.
9. The surgical robot of claim 1 or 2, wherein the number of surgical consoles is one or more.
10. A method performed by a surgical robot, comprising:
collecting control input of an operator by a surgical console;
generating, by the surgical console, control information for controlling at least a portion of a plurality of robotic arms of a patient surgical platform based on the control input;
transmitting, by the surgical console, the control information to the patient surgical platform;
acquiring status information of the plurality of robotic arms by the patient surgical platform;
determining, by the patient surgical platform, collision detection results for the plurality of robotic arms using a virtual reality physics engine capable of simulating movement of the plurality of robotic arms based on the status information and the control information received from the surgical console; and
a collision prevention operation is performed by the patient surgical platform in response to the collision detection result indicating that two or more robotic arms will collide.
11. A computer readable storage medium having stored thereon program instructions that, when executed by at least one processor, cause the at least one processor to perform the operations of the patient surgical platform of claim 10.
CN202410041901.9A 2024-01-11 2024-01-11 Surgical robot and method performed by the same Pending CN117562674A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410041901.9A CN117562674A (en) 2024-01-11 2024-01-11 Surgical robot and method performed by the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410041901.9A CN117562674A (en) 2024-01-11 2024-01-11 Surgical robot and method performed by the same

Publications (1)

Publication Number Publication Date
CN117562674A 2024-02-20

Family

ID=89884720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410041901.9A Pending CN117562674A (en) 2024-01-11 2024-01-11 Surgical robot and method performed by the same

Country Status (1)

Country Link
CN (1) CN117562674A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117798936A (en) * 2024-02-29 2024-04-02 卡奥斯工业智能研究院(青岛)有限公司 Control method and device for mechanical arm cluster, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102448680A (en) * 2009-03-31 2012-05-09 直观外科手术操作公司 Synthetic representation of a surgical robot
US20190069962A1 (en) * 2016-02-26 2019-03-07 Think Surgical, Inc. Method and system for guiding user positioning of a robot
CN113478492A (en) * 2021-09-07 2021-10-08 成都博恩思医学机器人有限公司 Method and system for avoiding collision of mechanical arms, robot and storage medium
CN113613852A (en) * 2019-03-20 2021-11-05 柯惠Lp公司 Robot surgery collision detection system
CN113648066A (en) * 2021-08-20 2021-11-16 苏州康多机器人有限公司 Collision detection method, electronic equipment and master-slave surgical robot
WO2022024130A2 (en) * 2020-07-31 2022-02-03 Mazor Robotics Ltd. Object detection and avoidance in a surgical setting



Similar Documents

Publication Publication Date Title
KR102014351B1 (en) Method and apparatus for constructing surgical information
EP0087198B1 (en) A method for preventing collision for two mutually movable bodies and an apparatus including an arrangement for preventing collision
EP2142133B1 (en) Methods, devices, and systems for automated movements involving medical robots
CN117562674A (en) Surgical robot and method performed by the same
JPS62232006A (en) Robot system
US11192249B2 (en) Simulation device for robot
Yepes et al. Implementation of an Android based teleoperation application for controlling a KUKA-KR6 robot by using sensor fusion
JP7384160B2 (en) Information processing device, information processing method and program
US11551810B2 (en) Method for acquiring and for altering a configuration of a number of objects in a procedure room and corresponding device
US10593223B2 (en) Action evaluation apparatus, action evaluation method, and computer-readable storage medium
Duchemin et al. Medically safe and sound [human-friendly robot dependability]
Guiochet et al. Safety analysis of a medical robot for tele-echography
CN115703227A (en) Robot control method, robot, and computer-readable storage medium
CN111797506B (en) Master-slave guide wire control method
Guiochet et al. Integration of UML in human factors analysis for safety of a medical robot for tele-echography
Stańczyk et al. Logical architecture of medical telediagnostic robotic system
CN110549375A (en) protective door anti-collision method and system for mechanical arm
CN113397708B (en) Particle puncture surgical robot navigation system
CN113119131B (en) Robot control method and device, computer readable storage medium and processor
US20210107158A1 (en) Apparatus and Method for Monitoring a Working Environment
CN114888809B (en) Robot control method and device, computer readable storage medium and robot
Li et al. Surgeon training in telerobotic surgery via a hardware-in-the-loop simulator
CN110069063B (en) Method for computer-aided user assistance during the commissioning of a movement planner
WO2023246907A1 (en) Method for controlling mechanical arm, medical system, and computer device
DE102020104359B4 (en) Workspace limitation for a robot manipulator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination