CN110464469B - Surgical robot, method and device for controlling distal end instrument, and storage medium - Google Patents


Info

Publication number
CN110464469B
CN110464469B (application CN201910854104.1A)
Authority
CN
China
Prior art keywords
pose information, target pose, instrument, target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910854104.1A
Other languages
Chinese (zh)
Other versions
CN110464469A (en)
Inventor
王建辰 (Wang Jianchen)
高元倩 (Gao Yuanqian)
Other inventors have requested that their names not be disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Edge Medical Co Ltd
Original Assignee
Shenzhen Edge Medical Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Edge Medical Co Ltd filed Critical Shenzhen Edge Medical Co Ltd
Priority to CN201910854104.1A priority Critical patent/CN110464469B/en
Publication of CN110464469A publication Critical patent/CN110464469A/en
Priority to PCT/CN2020/114113 priority patent/WO2021047520A1/en
Application granted granted Critical
Publication of CN110464469B publication Critical patent/CN110464469B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B17/00 Surgical instruments, devices or methods, e.g. tourniquets
    • A61B17/00234 Surgical instruments, devices or methods, e.g. tourniquets for minimally invasive surgery
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30 Surgical robots
    • A61B34/37 Master-slave robots
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/70 Manipulators specially adapted for use in surgery
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/70 Manipulators specially adapted for use in surgery
    • A61B34/74 Manipulators with manual electric input means
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B90/361 Image-producing devices, e.g. surgical cameras
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B17/00 Surgical instruments, devices or methods, e.g. tourniquets
    • A61B2017/00017 Electrical control of surgical instruments
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30 Surgical robots
    • A61B2034/302 Surgical robots specifically adapted for manipulations within body cavities, e.g. within abdominal or thoracic cavities

Abstract

The present invention relates to a surgical robot, a method and a device for controlling a distal end instrument, and a storage medium. The method comprises: calculating, from each set of pose information obtained by decomposition, first target pose information of the distal end of the mechanical arm, second target pose information of the image end instrument, third target pose information of each controlled operation end instrument, and fourth target pose information of each uncontrolled operation end instrument; and, when all of the target pose information is valid, controlling the motion of the mechanical arm according to the first target pose information, controlling the motion of the operation arm corresponding to the image end instrument according to the second target pose information so as to keep the image end instrument at its current pose, controlling the motion of the operation arm corresponding to each controlled operation end instrument according to the third target pose information, and controlling the motion of the operation arm corresponding to each uncontrolled operation end instrument according to the fourth target pose information. The invention helps to enlarge the operation space of the operation end instruments while keeping the surgical field of view unchanged.

Description

Surgical robot, method and device for controlling distal end instrument, and storage medium
Technical Field
The present invention relates to the field of medical instruments, and in particular, to a surgical robot, a control method and a control device for a distal end instrument, and a storage medium.
Background
Minimally invasive surgery is a surgical approach in which operations are performed inside a body cavity using modern medical instruments such as laparoscopes and thoracoscopes, together with related equipment. Compared with traditional open surgery, minimally invasive surgery has the advantages of a smaller wound, less pain, and faster recovery.
With advances in science and technology, minimally invasive surgical robot technology has gradually matured and become widely used. A minimally invasive surgical robot generally comprises a master operation console and a slave operation device. The master operation console includes a handle, through which the doctor sends control commands to the slave operation device. The slave operation device includes a mechanical arm and a plurality of operation arms arranged at the distal end of the mechanical arm, each operation arm carrying an end instrument; in the working state, the end instrument follows the movement of the handle so as to realize teleoperation.
End instruments include image end instruments, which provide the surgical field of view, and operation end instruments, which perform surgical operations. During surgery it is often desirable to give the operation end instruments a larger range of motion (i.e., a larger operation space, which can also be understood as dexterity) within a fixed field of view. Since this range is limited by the small range of motion of the operation arm itself, it can be enlarged by also moving the mechanical arm. However, changing the pose of the distal end of the mechanical arm easily causes an undesired change of the field of view, which can compromise surgical safety.
Disclosure of Invention
In view of the above, it is desirable to provide a surgical robot capable of extending the range of motion of an operation end instrument while maintaining a constant field of view, a control method and a control device for an end instrument, and a storage medium.
In one aspect, there is provided a control method of an end instrument in a surgical robot, the control method including: an acquisition step of acquiring initial target pose information of each controlled operation end instrument; a decomposition step of decomposing each piece of initial target pose information to obtain a respective pose information set, wherein each pose information set comprises first component target pose information of the distal end of the mechanical arm in a first coordinate system and second component target pose information of the controlled operation end instrument in a second coordinate system, the first coordinate system being the base coordinate system of the mechanical arm and the second coordinate system being the tool coordinate system of the mechanical arm; a first judgment step of judging the validity of each pose information set; a calculation step of calculating, on the condition that at least one pose information set is valid and that the image end instrument and each uncontrolled operation end instrument are kept at their current poses, first target pose information of the distal end of the mechanical arm in the first coordinate system, second target pose information of the image end instrument in the second coordinate system, third target pose information of each controlled operation end instrument in the second coordinate system, and fourth target pose information of each uncontrolled operation end instrument in the second coordinate system, respectively, by combining the pose information sets; a second judgment step of judging the validity of the first to fourth target pose information; and a control step of, when the first to fourth target pose information are all valid, controlling the movement of the mechanical arm according to the first target pose information so that the distal end of the mechanical arm reaches the corresponding target pose, controlling the movement of the operation arm corresponding to the image end instrument according to the second target pose information so that the image end instrument is kept at its current pose, controlling the movement of the operation arm corresponding to each controlled operation end instrument according to the third target pose information so that the controlled operation end instrument reaches the corresponding target pose, and controlling the movement of the operation arm corresponding to each uncontrolled operation end instrument according to the fourth target pose information so that each uncontrolled operation end instrument is kept at its current pose.
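The five steps above can be sketched as a single control cycle. All names below are illustrative placeholders (not from the patent), and the decomposition, validity, and command routines are passed in as stubs:

```python
def control_cycle(initial_targets, decompose, valid, compute_targets, command):
    """One cycle: acquire -> decompose -> judge -> calculate -> judge -> control."""
    # Decomposition step: one pose-information set per controlled instrument.
    sets = [decompose(t) for t in initial_targets]
    # First judgment step: keep only the valid pose-information sets.
    valid_sets = [s for s in sets if valid(s)]
    if not valid_sets:
        return False  # nothing safe to execute this cycle
    # Calculation step: first..fourth target pose information.
    first, second, third, fourth = compute_targets(valid_sets)
    # Second judgment step, then the control step.
    if all(map(valid, (first, second, third, fourth))):
        command(first, second, third, fourth)
        return True
    return False
```

The stubs would in practice be the kinematic transforms and joint-limit checks described in the following paragraphs.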
Wherein, when there is one controlled operation end instrument and the pose information set is valid, the calculating step includes: under the condition that the far end of the mechanical arm reaches the target pose corresponding to the first component target pose information, converting the current pose information of the image end instrument to obtain the target pose information of the image end instrument in a second coordinate system, and converting the current pose information of each uncontrolled operation end instrument to obtain the target pose information of the uncontrolled operation end instrument in the second coordinate system; assigning the first component target pose information to the first target pose information, assigning the target pose information of the end-of-image instrument in a second coordinate system to the second target pose information, assigning the second component target pose information to the third target pose information, and assigning the target pose information of each of the uncontrolled operation end instruments in the second coordinate system to fourth target pose information of the corresponding uncontrolled operation end instrument.
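The pose conversions in this paragraph amount to re-expressing a fixed world pose in the moved tool frame. A minimal sketch with homogeneous transforms (the notation is assumed, not the patent's): if the arm's distal end moves from pose T_distal_old to T_distal_new in the base frame, an instrument that must hold its world pose gets the tool-frame target inv(T_distal_new) @ T_distal_old @ T_tool_old.

```python
import numpy as np

def hold_world_pose(T_distal_old, T_distal_new, T_tool_old):
    """Return the tool-frame pose that keeps the instrument's world pose
    unchanged after the arm's distal end moves."""
    T_world = T_distal_old @ T_tool_old           # current world pose
    return np.linalg.inv(T_distal_new) @ T_world  # same world pose, new frame

# Example: the arm translates 0.1 along x; the instrument's tool-frame
# target shifts by -0.1 so that its world position is unchanged.
T_old = np.eye(4)
T_new = np.eye(4); T_new[0, 3] = 0.1
T_tool = np.eye(4); T_tool[2, 3] = 0.3
T_tool_new = hold_world_pose(T_old, T_new, T_tool)
assert np.allclose(T_new @ T_tool_new, T_old @ T_tool)
```

The same conversion applies to the image end instrument (second target pose information) and to each uncontrolled operation end instrument (fourth target pose information).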
Wherein, when the number of the controlled operation terminal instruments is more than two, and one pose information set is valid and the other pose information sets are invalid, the calculating step comprises: under the condition that the far end of the mechanical arm reaches a target pose corresponding to first component target pose information in the effective pose information set, converting the current pose information of the image end instrument to obtain the target pose information of the image end instrument in a second coordinate system, converting the current pose information of each uncontrolled operation end instrument to obtain the target pose information of the uncontrolled operation end instrument in the second coordinate system, and converting the current pose information of the controlled operation end instrument associated with each ineffective pose information to obtain second expected target pose information of the controlled operation end instrument in the second coordinate system; assigning first component target pose information in the set of effective pose information to the first target pose information, assigning target pose information of the end-of-image instrument in a second coordinate system to the second target pose information, assigning second component target pose information in the set of effective pose information to associated third target pose information of the controlled operation end instrument, assigning each second expected target pose to third target pose information of the corresponding controlled operation end instrument, and assigning target pose information of each uncontrolled operation end instrument in the second coordinate system to fourth target pose information of the corresponding uncontrolled operation end instrument.
Wherein, when there are a plurality of controlled operation terminal instruments, and more than two pose information sets are valid, and the rest pose information sets are invalid, the calculating step includes: respectively converting the current pose information of the image end instrument to obtain the target pose information of the image end instrument in a second coordinate system under the condition that the far end of the mechanical arm reaches the target pose corresponding to the first component target pose information in each effective pose information set; judging the effectiveness of the pose information of each target of the image terminal instrument in a second coordinate system; when only one target pose information of the image end instrument is effective, under the condition that the far end of the mechanical arm reaches a target pose corresponding to a first component target pose information in an effective pose information set associated with the effective target pose information of the image end instrument, converting the current pose information of each controlled operation end instrument associated with the ineffective target pose information set to obtain second expected target pose information of the controlled operation end instrument in a second coordinate system, converting the current pose information of each uncontrolled operation end instrument to obtain the target pose information of the uncontrolled operation end instrument in the second coordinate system, and converting the initial target pose information of each controlled operation end instrument associated with the rest effective target pose information set to obtain first expected target pose information of the controlled operation end instrument in the second coordinate system; judging the validity of each first expected target position and attitude information; assigning first component target pose information in the set of pose information associated with valid target pose 
information of the image end instrument to the first target pose information, assigning valid target pose information of the image end instrument to the second target pose information, assigning second component target pose information in the set of pose information associated with valid target pose information of the image end instrument to third target pose information of the associated controlled operation end instrument, assigning each first desired target pose information and each second desired target pose information to third target pose information of the corresponding controlled operation end instrument, and assigning target pose information of each uncontrolled operation end instrument in a second coordinate system to fourth target pose information of the corresponding uncontrolled operation end instrument, if each first desired target pose information is valid; if at least part of the first expected target pose information is invalid, under the condition that the far end of the mechanical arm reaches a target pose corresponding to first component target pose information in the valid pose information set and associated with valid target pose information of the image end instrument, converting current pose information of the controlled operation end instrument associated with the invalid first expected target pose information to obtain second expected target pose information of the controlled operation end instrument in a second coordinate system; assigning first component target pose information in the set of pose information associated with valid target pose information for the end-of-image instrument to the first target pose information, assigning valid target pose information for the end-of-image instrument to the second target pose information, assigning second component target pose information in the set of pose information associated with valid target pose information for the end-of-image instrument to the third target pose information for the 
associated controlled operation end instrument, assigning each of the valid first desired target pose information to the third target pose information of the corresponding controlled operation end instrument, assigning each of the second desired target pose information to the third target pose information of the corresponding controlled operation end instrument, and assigning the target pose information of each uncontrolled operation end instrument in the second coordinate system to the fourth target pose information of the corresponding uncontrolled operation end instrument; and when more than two pieces of target pose information of the image end instrument are valid, selecting one of them as valid, treating the rest as invalid, and proceeding as in the case where only one target pose information of the image end instrument is valid.
Wherein, when the number of the controlled operation terminal instruments is more than two and each posture information set is valid, the calculating step includes: respectively converting the current pose information of the image end instrument to obtain the target pose information of the image end instrument in a second coordinate system under the condition that the far end of the mechanical arm reaches the target pose corresponding to the first component target pose information in each effective pose information set; judging the effectiveness of the pose information of each target of the image terminal instrument in a second coordinate system; when only one target pose information of the image end instrument is effective, under the condition that the far end of the mechanical arm reaches a target pose corresponding to first component target pose information in an effective pose information set associated with the effective target pose information of the image end instrument, converting the current pose information of each uncontrolled operation end instrument to obtain the target pose information of the uncontrolled operation end instrument in a second coordinate system, and converting the initial target pose information of each controlled operation end instrument associated with the rest effective target pose information sets to obtain the first expected target pose information of the controlled operation end instrument in the second coordinate system; judging the validity of each first expected target position and attitude information; assigning first component target pose information in the set of pose information associated with valid target pose information of the image end instrument to the first target pose information, assigning valid target pose information of the image end instrument to the second target pose information, assigning second component target pose information in the set of pose information associated with valid target pose information of 
the image end instrument to third target pose information of the associated controlled operation end instrument, assigning each first desired target pose information to third target pose information of the corresponding controlled operation end instrument, and assigning target pose information of each uncontrolled operation end instrument in a second coordinate system to fourth target pose information of the corresponding uncontrolled operation end instrument if each first desired target pose information is valid; if at least part of the first expected target pose information is invalid, under the condition that the far end of the mechanical arm reaches a target pose corresponding to first component target pose information in the valid pose information set and associated with valid target pose information of the image end instrument, converting current pose information of the controlled operation end instrument associated with the invalid first expected target pose information to obtain second expected target pose information of the controlled operation end instrument in a second coordinate system; assigning first component target pose information of the set of pose information associated with valid target pose information of the image end instrument to the first target pose information, assigning valid target pose information of the image end instrument to the second target pose information, assigning second component target pose information of the set of pose information associated with valid target pose information of the image end instrument to the associated third target pose information of the controlled operation end instrument, assigning each valid first desired target pose information to the corresponding third target pose information of the controlled operation end instrument, assigning each second desired target pose information to the corresponding third target pose information of the controlled operation end instrument, assigning each target pose 
information of each uncontrolled operation end instrument in the second coordinate system to the fourth target pose information of the corresponding uncontrolled operation end instrument; and when more than two pieces of target pose information of the image end instrument are valid, selecting one of them as valid, treating the rest as invalid, and proceeding as in the case where only one target pose information of the image end instrument is valid.
Wherein the decomposing step comprises: acquiring an input operation command related to the task degree of freedom of the remote end of the mechanical arm; and decomposing each initial target pose information by combining the task freedom degrees to obtain a group of pose information sets including first component target pose information of the far end of the mechanical arm in a first coordinate system and second component target pose information of the controlled operation end instrument in a second coordinate system.
Wherein the operation command comprises a first operation command and a second operation command; the first operation command is associated with a case where a task degree of freedom of the distal end of the robot arm completely matches an effective degree of freedom of the robot arm; the second operation command is associated with a case where a task degree of freedom of the distal end of the robot arm completely matches a pose degree of freedom in the effective degrees of freedom of the robot arm.
Wherein the decomposing step comprises: acquiring current pose information of the far end of the mechanical arm in a first coordinate system; converting the initial target pose information to obtain second component target pose information under the condition that the far end of the mechanical arm is kept at the current pose corresponding to the current pose information of the mechanical arm; judging the validity of the pose information of the second component target; if the first component target pose information is valid, converting the initial target pose information to obtain the first component target pose information under the condition that the controlled operation end instrument reaches a target pose corresponding to the second component target pose information; and if the first component target pose information is invalid, adjusting the second component target pose information to be valid, updating the second component target pose information, and converting the initial target pose information to obtain the first component target pose information under the condition that the controlled operation terminal instrument reaches a target pose corresponding to the updated second component target pose information.
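The adjust-and-retry logic of this decomposition step can be sketched as follows. The reach-based validity test and the clamping rule are simplifying assumptions standing in for the joint-limit checks described elsewhere in the document:

```python
import numpy as np

def decompose(T_target_world, T_distal_current, reach=0.5):
    """Hypothetical decomposition: first try to realize the target with the
    operation arm alone (arm distal end held at its current pose); if the
    tool-frame component is out of reach, clamp it to a valid pose and move
    the arm's distal end to cover the remainder."""
    # Second component: target expressed in the tool frame, arm held still.
    T_second = np.linalg.inv(T_distal_current) @ T_target_world
    p = T_second[:3, 3]
    if np.linalg.norm(p) <= reach:          # second component already valid
        return T_distal_current, T_second
    # Adjust the second component to be valid (clamp position to the reach).
    T_second_adj = T_second.copy()
    T_second_adj[:3, 3] = p * (reach / np.linalg.norm(p))
    # First component: arm distal pose that still realizes the target.
    T_first = T_target_world @ np.linalg.inv(T_second_adj)
    return T_first, T_second_adj
```

With a target 1.0 away and a reach of 0.5, the operation arm contributes 0.5 of the motion and the mechanical arm the rest, while their composition still equals the target.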
Wherein the first judging step includes: judging the validity of the first component target pose information obtained by converting the initial target pose information; if the first component target pose information is valid, judging that the pose information set is valid; and if the first component target pose information is invalid, judging that the pose information set is invalid.
Wherein the step of judging the validity of target pose information comprises: resolving the target pose information into target motion state parameters of each joint assembly in the corresponding arm body, the arm body being the mechanical arm or an operation arm; comparing the target motion state parameter of each joint assembly in the arm body with the motion state threshold of that joint assembly; if the target motion state parameter of any joint assembly in the arm body exceeds the motion state threshold of the corresponding joint assembly, judging that the target pose information is invalid; and if the target motion state parameters of all the joint assemblies in the arm body do not exceed the motion state thresholds of the corresponding joint assemblies, judging that the target pose information is valid.
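A minimal sketch of this validity test, assuming the target pose has already been resolved (e.g. by inverse kinematics) into per-joint target motion state parameters, here reduced to joint positions only:

```python
def pose_valid(target_joint_positions, joint_limits):
    """Valid only if every joint's target stays within its threshold range;
    a single out-of-range joint makes the whole target pose invalid."""
    return all(lo <= q <= hi
               for q, (lo, hi) in zip(target_joint_positions, joint_limits))

# Illustrative limits for a three-joint arm body (values are assumptions).
limits = [(-1.5, 1.5), (-2.0, 2.0), (0.0, 0.3)]
assert pose_valid([0.2, -1.0, 0.1], limits)
assert not pose_valid([0.2, -2.5, 0.1], limits)  # joint 2 exceeds its threshold
```

In practice the motion state parameters could also include joint velocities or accelerations, checked against their own thresholds in the same way.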
Wherein the acquisition step comprises: acquiring motion information of the remote operation of the controlled operation end instrument input by a motion input device, wherein the motion information is pose information of the motion input device; and parsing the motion information into initial target pose information of the controlled operation end instrument.
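Parsing motion-input poses into initial target pose information is commonly done as an incremental master-slave mapping. The sketch below assumes a hypothetical translation scale factor and is not the patent's exact mapping:

```python
import numpy as np

def master_to_slave(T_master_prev, T_master_now, T_slave_now, scale=0.4):
    """Map the master handle's incremental motion onto the slave instrument:
    the translation increment is scaled down, the rotation passed through."""
    dT = np.linalg.inv(T_master_prev) @ T_master_now  # master increment
    dT[:3, 3] *= scale                                # scale translation only
    return T_slave_now @ dT                           # initial target pose

# Example: the handle moves 0.1 along x; with scale 0.4 the instrument's
# initial target pose moves 0.04 along x.
```

Typical implementations add clutching (ignoring master motion while disengaged) and dead-band filtering on top of this mapping.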
In yet another aspect, a computer-readable storage medium is provided, which stores a computer program configured to be executed by one or more processors to implement the steps of the control method according to any one of the above embodiments.
In still another aspect, there is provided a control apparatus of a surgical robot, including: a memory for storing a computer program; and a processor for loading and executing the computer program; wherein the computer program is configured to be loaded by the processor and to execute steps implementing the control method according to any of the embodiments described above.
In yet another aspect, a surgical robot is provided, comprising: a mechanical arm; one or more operation arms provided with end instruments and arranged at the distal end of the mechanical arm, wherein the end instruments comprise an image end instrument and one or more operation end instruments, each operation end instrument being configurable as a controlled operation end instrument; and a control device connected to the mechanical arm and to each operation arm, the control device being configured to perform the steps of the control method according to any of the above embodiments.
The invention has the following beneficial effects:
the operation space of the operation end instrument can be enlarged by means of the movement of the mechanical arm while keeping the field of view unchanged, making the robot convenient and safe to use.
Drawings
FIG. 1 is a schematic structural diagram of a surgical robot according to an embodiment of the present invention;
FIG. 2 is a partial schematic view of the surgical robot of FIG. 1;
FIG. 3 is a partial schematic view of the surgical robot of FIG. 1;
FIG. 4 is a flowchart of one embodiment of a method for controlling a distal end instrument in a surgical robot according to the present invention;
FIG. 5 is a flowchart of a method of controlling a distal end instrument in a surgical robot according to yet another embodiment of the present invention;
FIG. 6 is a simplified diagram of a surgical robot according to an embodiment of the present invention in a use state;
FIG. 7 is a flowchart of a method of controlling a distal end instrument in a surgical robot according to yet another embodiment of the present invention;
FIG. 8 is a simplified schematic view of a surgical robot in accordance with another embodiment of the present invention;
FIGS. 9-12 are flow charts of various embodiments of methods for controlling a distal instrument in a surgical robot in accordance with the present invention;
FIG. 13 is a schematic diagram of the mechanical arm of the surgical robot of FIG. 1;
FIG. 14 is an analysis diagram of a spatial movement angle in the control method of the surgical robot according to the present invention;
FIG. 15 is a flowchart of a method of controlling a distal end instrument in a surgical robot according to yet another embodiment of the present invention;
FIG. 16 is a flow chart of an embodiment of a method for controlling a distal end instrument in a surgical robot according to the present invention in a two-to-one mode of operation;
FIG. 17 is a schematic view of the operation of an embodiment of the method for controlling a distal end instrument in a surgical robot according to the present invention in a two-to-one operation mode;
FIGS. 18-19 are flow charts of another embodiment of a method for controlling a distal end instrument in a surgical robot according to the present invention in a two-to-one mode of operation;
FIG. 20 is a schematic view illustrating the operation of the surgical robot in a two-to-one operation mode according to another embodiment of the method for controlling the distal end instrument;
FIG. 21 is a flowchart of an embodiment of a method of controlling a distal instrument in a surgical robot in a one-to-one mode of operation according to the present invention;
FIG. 22 is a schematic view illustrating the operation of an embodiment of the method for controlling a distal end instrument in a one-to-one operation mode of the surgical robot according to the present invention;
FIGS. 23-25 are flow charts of various embodiments of methods for controlling a distal instrument in a surgical robot in accordance with the present invention;
FIG. 26 is a schematic structural view of another embodiment of the surgical robot of the present invention.
Detailed Description
To facilitate an understanding of the invention, the invention will now be described more fully with reference to the accompanying drawings. Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
It will be understood that when an element is referred to as being "disposed on" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. When an element is referred to as being "coupled" to another element, it can be directly coupled to the other element or intervening elements may also be present. The terms "vertical," "horizontal," "left," "right," and the like as used herein are for illustrative purposes only and do not represent the only embodiments. As used herein, the terms "distal" and "proximal" are used as terms of orientation that are conventional in the art of interventional medical devices, wherein "distal" refers to the end of the device that is distal from the operator during a procedure, and "proximal" refers to the end of the device that is proximal to the operator during a procedure.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. The term "each" as used herein includes one and more than one. The term "plurality", as used herein, includes two and more.
Fig. 1 to 3 are schematic structural diagrams and partial schematic diagrams of a surgical robot according to an embodiment of the present invention.
The surgical robot includes a master operation table 1 and a slave operation device 2. The master operation table 1 has a motion input device 11 and a display 12; a doctor operates the motion input device 11 to transmit control commands to the slave operation device 2, causing it to perform the corresponding operations, and observes the operation area through the display 12. The slave operation device 2 has an arm body comprising a robot arm 21 and an operation arm 31 detachably attached to the distal end of the robot arm 21. The robot arm 21 includes a base and a connecting assembly in series, the connecting assembly having a plurality of joint assemblies; in the configuration illustrated in FIG. 1 these are joint assemblies 210-214. The operation arm 31 comprises a connecting rod 32, a connecting assembly 33 and an end instrument 34 connected in sequence, wherein the connecting assembly 33 has a plurality of joint assemblies, and the operation arm 31 adjusts the pose of the end instrument 34 by adjusting its joint assemblies; the end instruments 34 include an image end instrument 34A and an operation end instrument 34B. More specifically, the operation arm 31 is mounted to the power mechanism 22 at the distal end of the robot arm 21 and is driven by a driving portion in the power mechanism 22. The robot arm 21 and/or the operation arm 31 may follow the motion input device 11, and the robot arm 21 may also be dragged by an external force.
For example, the motion input device 11 may be connected to the master operation table 1 by a wire, or by a rotating link. The motion input device 11 may be configured to be hand-held or wearable (typically worn at the distal end of the wrist, such as on the fingers or palm), with multiple effective degrees of freedom. The motion input device 11 is, for example, configured in the form of a handle as shown in fig. 3. In one case, the number of effective degrees of freedom of the motion input device 11 is configured to be lower than the number of task degrees of freedom of the distal end of the arm body; in another case, it is configured to be not lower than that number. The number of effective degrees of freedom of the motion input device 11 is at most 6. To flexibly control the motion of the arm body in Cartesian space, the motion input device 11 is exemplarily configured with 6 effective degrees of freedom, where an effective degree of freedom is one that can follow the motion of the hand. This gives the doctor a large operation space, allows more meaningful data to be generated by analyzing each effective degree of freedom, and thereby satisfies the control of the robot arm 21 in almost all configurations.
The motion input device 11 follows the hand motion of the doctor, and collects the motion information of the motion input device itself caused by the hand motion in real time. The motion information can be analyzed to obtain position information, attitude information, speed information, acceleration information and the like. The motion-input device 11 includes, but is not limited to, a magnetic navigation position sensor, an optical position sensor, or a link-type main operator, etc.
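The analysis of sampled motion information into speed and acceleration can be pictured as successive finite differencing of the position samples. A minimal illustration (the function name and the fixed sampling interval are assumptions for illustration, not part of the device):

```python
def finite_difference(samples, dt):
    """First-order finite differences of a uniformly sampled signal."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

# Position samples collected from the motion input device at interval dt
# yield velocity estimates; differencing again yields acceleration estimates.
positions = [0.0, 0.1, 0.3, 0.6]
velocities = finite_difference(positions, 0.1)      # approximately [1.0, 2.0, 3.0]
accelerations = finite_difference(velocities, 0.1)  # approximately [10.0, 10.0]
```

In practice the raw samples would also be filtered before differencing, since differentiation amplifies sensor noise.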
In one embodiment, a method of controlling a tip instrument in a surgical robot is provided. As shown in fig. 4, the control method includes:
Step S1, an acquisition step, namely acquiring initial target pose information of each controlled operation end instrument.
The operation end instruments 34B mounted on the power mechanism 22 include those configured as controlled operation end instruments (operation end instruments to be controlled by the motion input device) and unconfigured, uncontrolled operation end instruments (operation end instruments not to be controlled by the motion input device). One operator can control at most two controlled operation end instruments 34B at the same time; when there are more than two controlled operation end instruments 34B, two or more operators can control them cooperatively.
Step S2, a decomposition step, namely decomposing each initial target pose information to obtain a respective set of pose information.
Each set of pose information comprises first component target pose information of the distal end of the mechanical arm in a first coordinate system and second component target pose information of the controlled operation end instrument in a second coordinate system. The first coordinate system refers to a base coordinate system of the robot arm, and the second coordinate system refers to a tool coordinate system of the robot arm.
Step S3, a first judgment step, namely judging the validity of each set of pose information.
Specifically, the validity of two component pose information included in each set of pose information is judged, and when the two component pose information are both valid, the set of pose information is determined to be valid, otherwise, the set of pose information is determined to be invalid.
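The rule of step S3 — a set is valid only when both of its component poses are valid — can be expressed directly. In this sketch, validity is approximated by a simple bounding-box workspace check; the limit values and function names are placeholders, since the patent does not prescribe a particular validity test:

```python
def pose_in_limits(pose, lower, upper):
    """Placeholder validity test: every coordinate within its limits."""
    return all(lo <= p <= hi for p, lo, hi in zip(pose, lower, upper))

def pose_set_valid(first_component, second_component, arm_limits, tool_limits):
    """Step S3: the set is valid iff BOTH component poses are valid."""
    return (pose_in_limits(first_component, *arm_limits)
            and pose_in_limits(second_component, *tool_limits))
```

A real implementation would instead test inverse-kinematics feasibility and joint limits of the mechanical arm and operation arm.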
Step S4, a calculation step, namely, when at least one set of pose information is valid and under the condition that the image end instrument and each uncontrolled operation end instrument are kept at the current pose, calculating, in combination with the at least one valid set of pose information, first target pose information of the distal end of the mechanical arm in the first coordinate system, second target pose information of the image end instrument in the second coordinate system, third target pose information of each controlled operation end instrument in the second coordinate system, and fourth target pose information of each uncontrolled operation end instrument in the second coordinate system.
In this step, the uncontrolled operation end instruments 34B are expected to remain at the current pose, and each controlled operation end instrument 34B is preferably expected to reach the first desired pose; any controlled operation end instruments 34B that cannot reach the first desired pose are expected to reach the second desired pose instead. The first desired pose is the target pose corresponding to the initial target pose information (which is associated with the motion information input through the motion input device, and may be consistent or inconsistent with the current pose). The second desired pose is the current pose: when a controlled operation end instrument 34B cannot reach the first desired pose, ensuring that it can at least hold the second desired pose guarantees the safety of the operation.
Step S5, a second judgment step of judging the validity of the first target pose information, the second target pose information, each third target pose information, and each fourth target pose information.
Step S6, a control step, namely, when the first target pose information, the second target pose information, each third target pose information and each fourth target pose information are all valid: controlling the mechanical arm to move according to the first target pose information so that the distal end of the mechanical arm reaches the corresponding target pose; controlling the operation arm corresponding to the image end instrument to move according to the second target pose information so that the image end instrument is kept at the current pose; controlling the operation arm corresponding to each controlled operation end instrument to move according to the respective third target pose information so that the controlled operation end instrument reaches the corresponding target pose; and controlling the operation arm corresponding to each uncontrolled operation end instrument to move according to the respective fourth target pose information so that each uncontrolled operation end instrument is kept at the current pose.
Through the steps S1 to S6, when the acquired first target pose information, second target pose information, third target pose information, and fourth target pose information are all valid, the controlled operation end instrument 34B can reach the first desired pose or the second desired pose while keeping the image end instrument 34A and the uncontrolled operation end instrument 34B in the current poses to provide a stable field of view; in addition, the range of motion of controlled manipulation tip instrument 34B may be extended in some scenarios in conjunction with the movement of the robotic arm and corresponding manipulation arm to facilitate easier surgical deployment.
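Steps S1 to S6 amount to an acquire–decompose–validate–compute–validate–execute cycle. A structural sketch of one pass, with the concrete kinematics and validity checks abstracted into caller-supplied callbacks (all callback names are hypothetical):

```python
def control_cycle(acquire, decompose, set_is_valid,
                  compute_targets, targets_are_valid, execute):
    """One pass of steps S1-S6; returns True if motion was commanded."""
    initial_targets = acquire()                              # S1: acquisition
    pose_sets = [decompose(t) for t in initial_targets]      # S2: decomposition
    valid_sets = [s for s in pose_sets if set_is_valid(s)]   # S3: first judgment
    if not valid_sets:
        return False     # no controlled instrument is adjustable: end control
    targets = compute_targets(valid_sets)                    # S4: calculation
    if not targets_are_valid(targets):                       # S5: second judgment
        return False
    execute(targets)                                         # S6: control
    return True
```

Returning False corresponds to ending the control and re-entering the cycle at step S1.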
In one embodiment, specifically, if all the sets of pose information are determined to be invalid in step S3, none of the controlled operation end instruments 34B (whether one or more) has adjustability; the subsequent steps are therefore not performed, the control ends, and the process returns to step S1.
In one embodiment, referring to fig. 5 and 6, when the controlled operation end instrument is configured as one and the set of pose information is valid, the step S4, namely the calculating step, includes:
Step S411, under the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information: converting the current pose information of the image end instrument to obtain its target pose information in the second coordinate system, and converting the current pose information of each uncontrolled operation end instrument to obtain its target pose information in the second coordinate system.
The current pose information of each end instrument 34, including the image end instrument 34A and the controlled operation end instrument 34B, may be expressed in the first coordinate system, the second coordinate system, or another reference coordinate system; these representations are mutually convertible. As used here, "current pose information" refers to the current pose information in the second coordinate system, although current pose information in other coordinate systems is equally possible.
Step S412, assigning the first component target pose information to the first target pose information, assigning the converted target pose information of the image end instrument in the second coordinate system to the second target pose information, assigning the second component target pose information to the third target pose information, and assigning the target pose information of each uncontrolled operation end instrument in the second coordinate system to the fourth target pose information of the corresponding uncontrolled operation end instrument.
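The conversion underlying step S411 — keeping an instrument at its current base-frame pose while the arm's distal end moves — is a single change of reference frame. A sketch with homogeneous transforms (the function name is illustrative):

```python
import numpy as np

def hold_pose_target(T_current_base, T_arm_new_base):
    """Target pose in the second (tool) coordinate system that keeps the
    instrument at T_current_base once the arm distal end is at T_arm_new_base."""
    return np.linalg.inv(T_arm_new_base) @ T_current_base
```

Commanding this tool-frame target while the arm executes its own target leaves the instrument stationary in the base frame, which is exactly the "kept at the current pose" behaviour of steps S411 to S412.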
In the above step S5, i.e., the second judgment step, since the set of pose information has already been determined to be valid, the first target pose information and the third target pose information are valid. Therefore, only the second target pose information and the fourth target pose information actually need to be judged for validity.
If the first to fourth target pose information are not all valid, the control ends.
If the first to fourth target pose information are all valid, the process proceeds to step S6, i.e., the control step. As shown in fig. 6, the operation end instruments 34B include a controlled operation end instrument 34B1 and an uncontrolled operation end instrument 34B2. The mechanical arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at its distal end reaches the corresponding target pose; the operation arm 31A is controlled to move according to the second target pose information so that the image end instrument 34A is kept at the current pose; the operation arm 31B is controlled to move according to the third target pose information so that the controlled operation end instrument 34B1 reaches the corresponding target pose (the first desired pose); and the operation arm 31C is controlled to move according to the fourth target pose information so that the uncontrolled operation end instrument 34B2 is kept at the current pose.
In one embodiment, referring to fig. 7 and 8, when there are a plurality of controlled operation end instruments, and it is determined in step S3 that only one set of pose information is valid and the remaining sets are invalid, step S4, the calculation step, includes:
step S421, under the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information in the effective pose information set, converting the current pose information of the image end instrument to obtain the target pose information of the image end instrument in the second coordinate system, converting the current pose information of each uncontrolled operation end instrument to obtain the target pose information of each uncontrolled operation end instrument in the second coordinate system, and converting the current pose information of the controlled operation end instrument associated with each ineffective pose information to obtain the second expected target pose information of each uncontrolled operation end instrument in the second coordinate system.
Step S422, assigning the first component target pose information in the effective pose information set to first target pose information, assigning the target pose information of the image end instrument in a second coordinate system to second target pose information, assigning the second component target pose information in the effective pose information set to third target pose information of the associated controlled operation end instrument, assigning each second expected target pose to third target pose information of the corresponding controlled operation end instrument, and assigning the target pose information of each uncontrolled operation end instrument in the second coordinate system to fourth target pose information of the corresponding uncontrolled operation end instrument.
In the above step S5, i.e., the second judgment step, it is only necessary to actually judge whether or not the second target pose information of the image end instrument 34A and the third target pose information of the controlled manipulation end instrument 34B associated with each invalid set of pose information are valid.
If the first target pose information, the second target pose information, each third target pose information and each fourth target pose information are not all valid, the control ends.
If the first target pose information, the second target pose information, each third target pose information, and each fourth target pose information are all valid, the process proceeds to step S6, the control step. As shown in FIG. 8, the operation end instruments 34B include controlled operation end instruments 34B1 to 34B3 and an uncontrolled operation end instrument 34B4. Suppose the set of pose information associated with the controlled operation end instrument 34B1 is valid and the two sets of pose information associated with the controlled operation end instruments 34B2 to 34B3 are invalid:
The mechanical arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at its distal end reaches the corresponding target pose; the operation arm 31A is controlled to move according to the second target pose information so that the image end instrument 34A is kept at the current pose; the operation arm 31B is controlled to move according to the third target pose information of the controlled operation end instrument 34B1 so that it reaches the corresponding target pose (the first desired pose); the operation arms 31C to 31D are controlled to move according to the third target pose information of the controlled operation end instruments 34B2 to 34B3, respectively, so that they reach the corresponding target pose (the second desired pose, i.e., the current pose is maintained); and the operation arm 31E is controlled to move according to the fourth target pose information so that the uncontrolled operation end instrument 34B4 is kept at the current pose.
Although FIG. 8 shows the same robot arm 21 carrying three controlled operation end instruments 34B, steps S1 to S6 including steps S421 to S422 also apply, by the same principle, to the case where the same robot arm 21 carries two, or four or more, controlled operation end instruments 34B.
In one embodiment, referring to fig. 8 and 9, when there are a plurality of controlled operation end instruments, and more than two pose information sets are valid and the remaining pose information sets are invalid, the step S4 includes:
and step S431, respectively converting the current pose information of the instrument at the end of the image to obtain the target pose information of the instrument in the second coordinate system under the condition that the far end of the mechanical arm reaches the target pose corresponding to the first component target pose information in each effective pose information set.
Step S432, judging the validity of each target pose information of the image end instrument in the second coordinate system.
In this step, if only one of the target pose information of the image end instrument is valid, the process proceeds to step S433; if two or more are valid, the process proceeds to step S438.
Step S433, under the condition that the distal end of the mechanical arm reaches the target pose associated with the target pose information of the effective image terminal instrument and corresponding to the first component target pose information in the effective pose information set, the current pose information of each controlled operation terminal instrument associated with the ineffective target pose information set is converted to obtain the second expected target pose information of the controlled operation terminal instrument in the second coordinate system, the current pose information of each uncontrolled operation terminal instrument is converted to obtain the target pose information of the uncontrolled operation terminal instrument in the second coordinate system, and the initial target pose information of each controlled operation terminal instrument associated with the rest effective target pose information sets is converted to obtain the first expected target pose information of the controlled operation terminal instrument in the second coordinate system.
In step S434, the validity of each first expected target pose information is judged.
In this step, if each first expected target pose information is valid, the process proceeds to step S435; if at least part of the first expected target pose information is invalid, the process proceeds to step S436.
Step S435, assigning first component target pose information in the set of pose information associated with the target pose information of the active image end instrument to first target pose information, assigning the target pose information of the active image end instrument to second target pose information, assigning second component target pose information in the set of pose information associated with the target pose information of the active image end instrument to third target pose information of the associated controlled operation end instrument, assigning each first desired target pose information and each second desired target pose information to third target pose information of the corresponding controlled operation end instrument, and assigning the target pose information of each uncontrolled operation end instrument in the second coordinate system to fourth target pose information of the corresponding uncontrolled operation end instrument.
Correspondingly assigning the effective first expected target pose information to third target pose information of the associated controlled operation terminal instrument; and correspondingly assigning the second desired target pose information to third target pose information of the controlled operational tip instrument associated with the invalid set of target pose information.
And step S436, under the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information in the effective pose information set associated with the target pose information of the effective image end instrument, converting the current pose information of the controlled operation end instrument associated with each first expected target pose information which is invalid to obtain the second expected target pose information of the controlled operation end instrument in the second coordinate system.
Step S437, assigning first component target pose information in the pose information set associated with the target pose information of the effective image end instrument to first target pose information, assigning the target pose information of the effective image end instrument to second target pose information, assigning second component target pose information in the pose information set associated with the target pose information of the effective image end instrument to third target pose information of the associated controlled operation end instrument, assigning each effective first expected target pose information to third target pose information of the corresponding controlled operation end instrument, assigning each second expected target pose information obtained at different stages (i.e., conditions) to third target pose information of the corresponding controlled operation end instrument, and assigning the target pose information of each uncontrolled operation end instrument in the second coordinate system to fourth target pose information of the corresponding uncontrolled operation end instrument.
The valid first expected target pose information is correspondingly assigned to the third target pose information of the associated controlled operation end instruments. Each second expected target pose information arises in one of two cases: one associated with an invalid set of target pose information, and the other associated with invalid first expected target pose information (converting the first desired pose into the second desired pose); it therefore needs to be assigned to the controlled operation end instrument of the respective case. In step S438, one of the valid target pose information of the image end instrument is selected as valid and the others are treated as invalid, and the process then proceeds to step S433 as in the case where only one target pose information of the image end instrument is valid.
Alternatively, in step S438, a plurality of combinations may be configured, each taking a different one of the valid target pose information of the image end instrument as valid and the rest as invalid; each combination is calculated through steps S433 to S437, and the control step of step S6 is performed with a combination whose calculated first target pose information, second target pose information and each third target pose information are valid.
In some embodiments, if more than two sets of the first target pose information, second target pose information, third target pose information, and fourth target pose information calculated for different combinations are all valid, one set may be selected for the control step of step S6 based on certain metrics. These metrics include, but are not limited to, one or more of the range of motion, the velocity of motion, and the acceleration of motion of the arm (the mechanical arm or the operation arm); for example, the set used in step S6 may be the one with a smaller range, velocity, and/or acceleration of arm motion. The metrics also include, but are not limited to, the number of controlled operation end instruments that can reach the first desired pose rather than the second desired pose: the combination in which more instruments reach the first desired pose is preferentially selected for control. These metrics may also be combined with each other to select an optimal set for the control of step S6.
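The metric-based choice among several all-valid candidate solutions can be sketched as a lexicographic ranking; the dictionary keys and example values are illustrative placeholders, not prescribed by this embodiment:

```python
def select_solution(candidates):
    """Prefer candidates in which more controlled instruments reach the first
    desired pose; break ties by smaller total arm motion range."""
    return max(candidates,
               key=lambda c: (c["first_desired_count"], -c["motion_range"]))

candidates = [
    {"id": "A", "first_desired_count": 2, "motion_range": 0.40},
    {"id": "B", "first_desired_count": 3, "motion_range": 0.90},
    {"id": "C", "first_desired_count": 3, "motion_range": 0.55},
]
best = select_solution(candidates)  # "C": most first-desired poses, then smaller range
```

Other metrics named above (motion velocity, acceleration) would simply extend the key tuple.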
In some embodiments, priorities may be set for each controlled operation end instrument in advance or in real time (e.g., through voice instructions) during its operation, and in step S438 the target pose information of the valid image end instrument associated with the higher-priority controlled operation end instrument is selected as valid, in priority order, with the rest treated as invalid, so that the higher-priority control object can achieve the first desired pose as far as possible. The priority can be set according to the authority of an operator or according to a specific control object, and can be flexibly configured.
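The priority-based variant of step S438 reduces to ordering the valid image-pose candidates by the priority of their associated controlled instruments, then trying them in that order. A sketch (the dictionary shapes are assumptions for illustration):

```python
def order_by_priority(valid_candidates, priority):
    """Try candidates associated with higher-priority instruments first, so
    the higher-priority control object reaches the first desired pose
    whenever its solution turns out valid."""
    return sorted(valid_candidates,
                  key=lambda c: priority[c["instrument"]], reverse=True)

# Priorities could be set per operator authority or per control object.
priority = {"34B1": 1, "34B2": 3, "34B3": 2}
candidates = [{"instrument": "34B1"}, {"instrument": "34B3"}, {"instrument": "34B2"}]
ordered = order_by_priority(candidates, priority)
```

The first candidate in `ordered` whose calculated target pose information is valid would then be used for the control step of step S6.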
As shown in FIG. 8, the operation end instruments 34B include controlled operation end instruments 34B1 to 34B3 and an uncontrolled operation end instrument 34B4. Assuming that the set of pose information associated with the controlled operation end instrument 34B1 is invalid and both sets of pose information associated with the controlled operation end instruments 34B2 to 34B3 are valid, the target pose information of the image end instrument in the second coordinate system associated with the controlled operation end instruments 34B2 and 34B3, calculated according to the above step S431, is C2 and C3, respectively.
Case (1.1): if it is determined in step S432 that both C2 and C3 are invalid, the control is terminated.
Case (1.2): assume that it is judged in step S432 that C2 is valid and C3 is invalid.
Based on the first component pose information of the set of pose information associated with the controlled operation end instrument 34B2, the target pose information of the controlled operation end instrument 34B1 in the second coordinate system (the second desired target pose information), the target pose information of the controlled operation end instrument 34B3 in the second coordinate system (the first desired target pose information), and the target pose information of the uncontrolled operation end instrument 34B4 in the second coordinate system are calculated according to step S433.
Assume that the first desired target pose information of the controlled operation end instrument 34B3 in the second coordinate system is judged to be valid according to step S434:
After the assignment in step S435, the process proceeds to step S5, and if the first to fourth target pose information are all valid, to step S6. In step S6, the mechanical arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at its distal end reaches the corresponding target pose; the operation arm 31A is controlled to move according to the second target pose information so that the image end instrument 34A is kept at the current pose; the operation arm 31B is controlled to move according to the third target pose information of the controlled operation end instrument 34B1 so that it reaches the corresponding target pose (the second desired pose, i.e., the current pose is held); the operation arms 31C to 31D are controlled to move according to the third target pose information of the respective controlled operation end instruments 34B2 to 34B3 so that they reach the corresponding target pose (the first desired pose); and the operation arm 31E is controlled to move according to the fourth target pose information so that the uncontrolled operation end instrument 34B4 is kept at the current pose.
Assume that the first desired target pose information of the controlled operation end instrument 34B3 in the second coordinate system is judged to be invalid according to step S434:
based on the first component pose information of the set of pose information associated with the controlled manipulation tip instrument 34B2, the second desired target pose information of the controlled manipulation tip instrument 34B3 in the second coordinate system is calculated according to step S436, and after the value is assigned in step S437, the process proceeds to step S5, and when the first to fourth target pose information are all valid, the process proceeds to step S6. In step S6, the robot arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at the distal end of the robot arm reaches the corresponding target pose, controls the operation arm 31A to move in accordance with the second target pose information to hold the image end instrument 34A in the current pose, controls the movement of the manipulation arms 31B, 34D in accordance with the third target pose information of the respective controlled manipulation tip instruments 34B1, 34B3 to bring the controlled manipulation tip instruments 34B1, 34B3 to the corresponding target pose (second desired pose, i.e., to hold the current pose), and controls the operation arm 31C to move in accordance with the third target pose information of the controlled operation tip instrument 34B2 to bring the controlled operation tip instrument 34B2 to the corresponding target pose (first desired pose), and controls the movement of the manipulation arm 31E in accordance with the fourth object pose information to hold the uncontrolled manipulation tip instrument 34B4 in the current pose.
Case (1.3): assume that both C2 and C3 are valid as determined in step S432.
The selection of C2 as valid, C3 as invalid and/or C2 as invalid, C3 as valid goes to the above case (1.2).
Although fig. 8 shows the same robotic arm 21 carrying three controlled operation end instruments 34B, steps S1 to S6 including steps S431 to S438 apply, by the same principle, to the case where the same robotic arm 21 carries four or more controlled operation end instruments 34B.
In one embodiment, referring to fig. 8 and 10, when there are two or more controlled operation end instruments and every pose information set is valid, step S4 includes:
Step S441: for each valid pose information set, convert the current pose information of the image end instrument to obtain the target pose information of the image end instrument in the second coordinate system, on the assumption that the distal end of the robotic arm reaches the target pose corresponding to the first component target pose information of that set.
Step S442: judge the validity of each piece of target pose information of the image end instrument in the second coordinate system.
In this step, if only one piece of the target pose information of the image end instrument is valid, the process proceeds to step S443; if two or more pieces are valid, the process proceeds to step S448.
Step S443: on the assumption that the distal end of the robotic arm reaches the target pose corresponding to the first component target pose information of the valid pose information set associated with the valid target pose information of the image end instrument, convert the current pose information of each uncontrolled operation end instrument to obtain its target pose information in the second coordinate system, and convert the initial target pose information of each controlled operation end instrument associated with the remaining valid pose information sets to obtain its first desired target pose information in the second coordinate system.
Step S444: judge the validity of each piece of first desired target pose information.
In this step, if every piece of first desired target pose information is valid, the process proceeds to step S445; if at least part of it is invalid, the process proceeds to step S446.
Step S445: assign the first component target pose information of the pose information set associated with the valid target pose information of the image end instrument to the first target pose information; assign the valid target pose information of the image end instrument to the second target pose information; assign the second component target pose information of that pose information set to the third target pose information of the associated controlled operation end instrument; assign each piece of first desired target pose information to the third target pose information of the corresponding controlled operation end instrument; and assign the target pose information of each uncontrolled operation end instrument in the second coordinate system to the fourth target pose information of the corresponding uncontrolled operation end instrument.
Step S446: on the assumption that the distal end of the robotic arm reaches the target pose corresponding to the first component target pose information of the valid pose information set associated with the valid target pose information of the image end instrument, convert the current pose information of the controlled operation end instrument associated with each invalid piece of first desired target pose information to obtain its second desired target pose information in the second coordinate system.
Step S447: assign the first component target pose information of the pose information set associated with the valid target pose information of the image end instrument to the first target pose information; assign the valid target pose information of the image end instrument to the second target pose information; assign the second component target pose information of that pose information set to the third target pose information of the associated controlled operation end instrument; assign each valid piece of first desired target pose information to the third target pose information of the corresponding controlled operation end instrument; assign each piece of second desired target pose information to the third target pose information of the corresponding controlled operation end instrument; and assign the target pose information of each uncontrolled operation end instrument in the second coordinate system to the fourth target pose information of the corresponding uncontrolled operation end instrument.
The second desired target pose information here arises only where the corresponding first desired target pose information is invalid (the first desired pose degenerates to the second desired pose).
Step S448: select one piece of the target pose information of the image end instrument to remain valid and invalidate the others; with only one piece valid, the process proceeds to step S443.
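The arbitration across steps S442 to S448 (pick one valid image-end target, then drive each remaining controlled instrument to its first desired pose when possible, falling back to holding the current pose otherwise) can be sketched as follows. This is a minimal Python illustration; the function name, dictionary bookkeeping, and string labels are assumptions, not the patent's implementation:

```python
def arbitrate(image_targets, first_desired_valid):
    """image_targets: {instrument: image-end target pose info, or None if invalid}
    first_desired_valid: {instrument: whether its first desired pose is valid}"""
    valid = [k for k, v in image_targets.items() if v is not None]
    if not valid:
        return None                      # case (2.1): none valid, end the control
    chosen = valid[0]                    # S448: keep one valid, invalidate the rest
    plan = {chosen: "second_component"}  # its own set supplies the pose directly
    for k in image_targets:
        if k == chosen:
            continue
        # S443/S445: use the first desired pose when it is valid;
        # S446/S447: otherwise fall back to holding the current pose
        plan[k] = ("first_desired" if first_desired_valid.get(k, False)
                   else "second_desired")
    return chosen, plan
```

For example, with C1 valid and C2, C3 invalid, instrument 34B2's valid first desired pose is used while 34B3 holds its current pose.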
As shown in fig. 8, the operation end instruments 34B include controlled operation end instruments 34B1 to 34B3 and an uncontrolled operation end instrument 34B4. Assuming that the three pose information sets associated with the controlled operation end instruments 34B1 to 34B3 are all valid, the target pose information of the image end instrument in the second coordinate system calculated according to step S441 for the controlled operation end instruments 34B1, 34B2, and 34B3 is C1, C2, and C3, respectively.
Case (2.1): if it is determined in step S442 that none of C1-C3 is valid, the control is ended.
Case (2.2): assume that only C1 is valid and C2 and C3 are invalid as determined by step S442.
Based on the first component target pose information of the pose information set associated with the controlled operation end instrument 34B1, the first desired target pose information of the controlled operation end instruments 34B2 and 34B3 in the second coordinate system is calculated according to step S443, and the target pose information of the uncontrolled operation end instrument 34B4 in the second coordinate system (similarly a hold-current-pose target, like the second desired pose) is calculated according to step S443.
Assuming that it is determined according to step S444 that the first desired target pose information of the controlled operation end instruments 34B2 and 34B3 in the second coordinate system is valid:
after the assignment in step S445, the process proceeds to step S5; when the first to fourth target pose information are all valid, it proceeds to step S6. In step S6, the robotic arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at its distal end reaches the corresponding target pose; the operation arm 31A is controlled according to the second target pose information so that the image end instrument 34A holds its current pose; the operation arms 31B to 31D are controlled according to the third target pose information of the controlled operation end instruments 34B1 to 34B3 so that they reach the corresponding target poses (the first desired poses); and the operation arm 31E is controlled according to the fourth target pose information so that the uncontrolled operation end instrument 34B4 holds its current pose.
Assuming that it is determined according to step S444 that the first desired target pose information of the controlled operation end instruments 34B2 and 34B3 in the second coordinate system is at least partially invalid, for example that the first desired target pose information of the controlled operation end instrument 34B2 in the second coordinate system is valid while that of the controlled operation end instrument 34B3 is invalid:
based on the first component target pose information of the pose information set associated with the controlled operation end instrument 34B1, the second desired target pose information of the controlled operation end instrument 34B3 in the second coordinate system is calculated according to step S446. After the assignment in step S447, the process proceeds to step S5; when the first target pose information, the second target pose information, and each piece of third target pose information are all valid, it proceeds to step S6. In step S6, the robotic arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at its distal end reaches the corresponding target pose; the operation arm 31A is controlled according to the second target pose information so that the image end instrument 34A holds its current pose; the operation arms 31B to 31C are controlled according to the third target pose information of the controlled operation end instruments 34B1 to 34B2 so that they reach the corresponding target poses (the first desired poses); the operation arm 31D is controlled according to the third target pose information of the controlled operation end instrument 34B3 so that it reaches the corresponding target pose (the second desired pose, i.e., holds the current pose); and the operation arm 31E is controlled according to the fourth target pose information so that the uncontrolled operation end instrument 34B4 holds its current pose.
Assuming that it is determined according to step S444 that the first desired target pose information of the controlled operation end instruments 34B2 and 34B3 in the second coordinate system is invalid:
based on the first component target pose information of the pose information set associated with the controlled operation end instrument 34B1, the second desired target pose information of the controlled operation end instruments 34B2 and 34B3 in the second coordinate system is calculated according to step S446. After the assignment in step S447, the process proceeds to step S5; when the first target pose information, the second target pose information, and each piece of third target pose information are all valid, it proceeds to step S6. In step S6, the robotic arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at its distal end reaches the corresponding target pose; the operation arm 31A is controlled according to the second target pose information so that the image end instrument 34A holds its current pose; the operation arm 31B is controlled according to the third target pose information of the controlled operation end instrument 34B1 so that it reaches the corresponding target pose (the first desired pose); the operation arms 31C and 31D are controlled according to the third target pose information of the controlled operation end instruments 34B2 and 34B3 so that they reach the corresponding target poses (the second desired poses, i.e., hold the current poses); and the operation arm 31E is controlled according to the fourth target pose information so that the uncontrolled operation end instrument 34B4 holds its current pose.
Although fig. 8 shows the same robotic arm 21 carrying three controlled operation end instruments 34B, steps S1 to S6 including steps S441 to S448 apply, by the same principle, to the case where the same robotic arm 21 carries two or more controlled operation end instruments 34B.
In the above embodiments, the number of uncontrolled operation end instruments is not limited.
Through the above embodiments, by combining the movements of the robotic arm 21 and the operation arms 31, the controlled operation end instrument 34B can, as far as possible, reach the first desired pose to achieve the surgical purpose while the image end instrument 34A and the uncontrolled operation end instruments hold their current poses; and when the first desired pose cannot be reached, the controlled operation end instrument 34B can still be held at its current pose to reduce surgical risk.
In an embodiment, as shown in fig. 11, the step S1, namely the obtaining step, includes:
Step S11: acquire the motion information, input by the motion input device, for teleoperating the movement of the controlled object.
Here, the controlled object refers specifically to a controlled operation end instrument.
Step S12: parse the motion information into initial target pose information of the controlled object.
Typically, the motion information may be pose information of the motion-input device.
In one embodiment, as shown in fig. 12, the step S12 includes:
Step S121: parse and map the motion information into incremental pose information of the controlled object.
Here, "mapping" denotes a transformation relationship, which may include natural mapping relationships and unnatural mapping relationships.
The natural mapping relationship is a one-to-one correspondence between the controlled motion input device and the controlled object: horizontal movement increment information maps to horizontal movement increment information, vertical movement increment information to vertical movement increment information, forward-backward movement increment information to forward-backward movement increment information, yaw rotation increment information to yaw rotation increment information, pitch rotation increment information to pitch rotation increment information, and roll rotation increment information to roll rotation increment information.
The unnatural mapping relationship is any mapping relationship other than the natural one. In one example, the unnatural mapping includes, but is not limited to, a conversion mapping, which in turn includes, but is not limited to, the aforementioned one-to-one mapping of the horizontal movement increment information, vertical movement increment information, and rotation increment information in the fixed coordinate system to the yaw increment information, pitch increment information, and roll increment information of the controlled object. Configuring an unnatural mapping can make the controlled object easier to control in some cases, such as the two-to-one operation mode.
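As a toy illustration (not the patent's implementation) of the two mapping kinds, the following sketch treats each increment as a plain number; the function names and the 6-tuple layout are assumptions:

```python
# Natural mapping: each input increment drives the same output increment.
# An increment tuple is assumed to be (dx, dy, dz, d_yaw, d_pitch, d_roll).
def natural_map(inc):
    return tuple(inc)

# Conversion (unnatural) mapping, as described in the text: horizontal,
# vertical, and rotational movement of the input device drive the yaw,
# pitch, and roll of the controlled object, respectively.
def conversion_map(d_horizontal, d_vertical, d_rotation):
    return {"d_yaw": d_horizontal, "d_pitch": d_vertical, "d_roll": d_rotation}
```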
Step S122, position information of each joint component in the controlled object is acquired.
The corresponding position information can be obtained by position sensors, such as encoders, installed at each joint assembly in the controlled object. In the exemplary embodiment illustrated in fig. 1 and 13, the robotic arm 21 has 5 degrees of freedom, and a set of position information (d1, θ2, θ3, θ4, θ5) can be detected by means of the position sensors.
Step S123: calculate the current pose information of the controlled object in the first coordinate system from the position information of each joint assembly.
The calculation can generally be performed using positive kinematics. A kinematic model is established from the fixed point of the robotic arm 21 (namely point C; the origin of the tool coordinate system of the robotic arm 21 lies on the fixed point) to the base of the robotic arm 21, yielding the model conversion matrix $^{B}T_{C}$ between point C and the base. It is calculated as the ordered product of the per-joint homogeneous transforms:

$^{B}T_{C} = T_{1}(d_{1})\,T_{2}(\theta_{2})\,T_{3}(\theta_{3})\,T_{4}(\theta_{4})\,T_{5}(\theta_{5})$
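A minimal positive-kinematics sketch matching the five joint variables (d1, θ2 to θ5) might look as follows. The joint layout (one prismatic joint along z plus revolute joints about z) and all helper names are illustrative assumptions, not the robotic arm's actual geometry:

```python
import math

# 4x4 homogeneous matrices as nested lists; mat_mul composes two transforms.
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def trans_z(d):                      # prismatic joint: translate d along z
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, d], [0, 0, 0, 1]]

def rot_z(theta):                    # revolute joint: rotate theta about z
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def forward_kinematics(d1, th2, th3, th4, th5):
    # ordered product of the per-joint transforms, base -> point C
    t = trans_z(d1)
    for th in (th2, th3, th4, th5):
        t = mat_mul(t, rot_z(th))
    return t
```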
Step S124: calculate the initial target pose information of the controlled object in the first coordinate system by combining the incremental pose information with the current pose information.
From the model conversion matrix $^{B}T_{C}$ between point C and the base, the pose information of point C in the fixed coordinate system is acquired. Assuming the coordinate system of point C is rotated, without changing the position of point C, to the posture described by the model conversion matrix, the rotation angles $[\theta_{x0}, \theta_{y0}, \theta_{z0}]$ can be obtained, as shown in fig. 14: $\theta_{x0}$ is the roll angle, $\theta_{y0}$ the yaw angle, and $\theta_{z0}$ the pitch angle. In the robotic arm 21 shown in fig. 13, however, the roll degree of freedom is absent, so $\theta_{x0}$ is in fact not adjustable. The fixed coordinate system may, for example, be defined at the display, or at any location that is immovable at least during the operation.
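Recovering three rotation angles from the rotation part of such a matrix can be sketched as below; a Z-Y-X Euler decomposition is assumed purely for illustration, since the patent does not fix the rotation order:

```python
import math

def euler_zyx(r):
    """r: 3x3 rotation matrix (nested lists); returns (roll, pitch, yaw).
    Assumes the non-degenerate case abs(r[2][0]) < 1 (no gimbal lock)."""
    pitch = math.asin(-r[2][0])
    roll = math.atan2(r[2][1], r[2][2])
    yaw = math.atan2(r[1][0], r[0][0])
    return roll, pitch, yaw
```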
Further, in the control step of step S6, controlling the robotic arm, the image end instrument, and the operation end instruments as control objects may include the following steps:
Calculate the target position information of each corresponding joint assembly from the target pose information of the distal end of the control object, for example by inverse kinematics.
Control the joint assemblies in the control object, in linkage, to reach the corresponding target pose according to the target position information of each joint assembly.
In one embodiment, as shown in fig. 15, the step S12 includes:
Step S125: acquire a selection instruction, input for the controlled object, associated with an operation mode type.
The operation modes include a two-to-one operation mode and a one-to-one operation mode: the two-to-one mode controls one controlled object with two controlled motion input devices, while the one-to-one mode controls one controlled object with one controlled motion input device. When controlling the movement of a controlled object, either mode can be selected; for the one-to-one mode, which motion input device serves as the controlled motion input device is further selectable. For example, an operator using both hands may, depending on the configuration, control one controlled object in the two-to-one mode or two controlled objects in the one-to-one mode. The same holds for two or more operators when the surgical robot provides enough motion input devices.
Step S126: acquire the motion information input by the controlled motion input device in accordance with the operation mode type, and parse and map it into incremental pose information of the distal end of the controlled object in the first coordinate system.
In one embodiment, for the one-to-one operation mode, the pose information Pn of the corresponding single controlled motion input device 11 at time n is obtained, for example, by the formula Pn = K·Pn', where Pn' is the pose of the controlled motion input device 11 at time n and K is a scale coefficient; in general K > 0, and more preferably 1 ≥ K > 0, so that the pose is scaled and control is facilitated.
In one embodiment, for the two-to-one operation mode, the pose information Pn for the corresponding two controlled motion input devices 11 at time n is obtained, for example, by the formula Pn = K1·PnL + K2·PnR, where PnL and PnR are the poses of the two controlled motion input devices and K1 and K2 are the respective scale coefficients of the different motion input devices 11; in general K1 > 0 and K2 > 0, and more preferably 1 ≥ K1 > 0 and 1 ≥ K2 > 0.
The incremental pose information Δpn_n-1 of the controlled motion input device(s) 11 between two successive times, in either the one-to-one or the two-to-one operation mode, can be calculated as:
Δpn_n-1 = Pn − Pn-1
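These scaling and increment formulas can be sketched numerically as follows; poses are reduced to scalars for brevity (in practice they are 6-DOF quantities), and the function names are assumptions:

```python
def pose_one_to_one(p_dev, k=1.0):
    # one-to-one mode: Pn = K * Pn' with 1 >= K > 0 preferred
    assert 0 < k <= 1
    return k * p_dev

def pose_two_to_one(p_left, p_right, k1=0.5, k2=0.5):
    # two-to-one mode: Pn = K1 * PnL + K2 * PnR
    return k1 * p_left + k2 * p_right

def increment(p_n, p_n_minus_1):
    # delta p between two successive times: Pn - Pn-1
    return p_n - p_n_minus_1
```

With K1 = K2 = 0.5 the two-to-one result is the midpoint of the two device poses, matching the first-scale-coefficient example given later.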
in an embodiment, as shown in fig. 16 and fig. 17, when the selection instruction acquired in step S125 is associated with a two-to-one operation mode, step S126 includes:
Step S1261: obtain the first pose information of each of the two controlled motion input devices at the earlier time.
Step S1262: obtain the second pose information of each of the two controlled motion input devices at the later time.
Step S1263: calculate the incremental pose information of the two controlled motion input devices in the fixed coordinate system by combining the first scale coefficient with the first and second pose information of the two devices.
Step S1263 may specifically be implemented by the following steps:
Calculate, in the fixed coordinate system, the incremental pose information between the first and second pose information of one controlled motion input device, and likewise between the first and second pose information of the other controlled motion input device.
Combine the two pieces of incremental pose information with the first scale coefficient to obtain the incremental pose information of the two motion input devices in the fixed coordinate system.
In the two-to-one operation mode, the first scale coefficient is, for example, 0.5, i.e., K1 and K2 both take the value 0.5; the acquired incremental pose information then represents the incremental pose information of the midpoint of the line connecting the two controlled motion input devices. According to the actual situation, K1 and K2 may also be assigned other values, and they may be the same or different.
Step S1264: map the incremental pose information of the two controlled motion input devices in the fixed coordinate system to the incremental pose information of the distal end of the controlled object in the first coordinate system.
In an embodiment, as shown in fig. 18, when the selection instruction acquired in step S125 is associated with a two-to-one operation mode, step S126 may also include:
Step S1265: obtain the first position information of each of the two controlled motion input devices in the fixed coordinate system at the earlier time.
Step S1266: obtain the second position information of each of the two controlled motion input devices in the fixed coordinate system at the later time.
Step S1267: calculate the horizontal movement increment information, vertical movement increment information, and rotation increment information of the two controlled motion input devices in the fixed coordinate system by combining the second scale coefficient with the first and second position information of the two devices.
Step S1268: map the horizontal movement increment information, vertical movement increment information, and rotation increment information of the two controlled motion input devices in the fixed coordinate system to the yaw increment information, pitch increment information, and roll increment information, respectively, of the distal end of the controlled object in the first coordinate system.
Further, as shown in fig. 19 and 20, calculating the rotation increment information of the two controlled motion input devices in the fixed coordinate system from their first position information and second position information in step S1267 includes:
Step S12681: establish a first position vector between the two controlled motion input devices at the earlier time.
Step S12682: establish a second position vector between the two controlled motion input devices at the later time.
Step S12683: obtain the rotation increment information of the two controlled motion input devices in the fixed coordinate system by combining the third scale coefficient with the included angle between the first and second position vectors.
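Steps S12681 to S12683 can be sketched as below for planar (2-D) device positions; the vector construction and the signed-angle computation via atan2 are illustrative assumptions:

```python
import math

def rotation_increment(pos_l1, pos_r1, pos_l2, pos_r2, k3=1.0):
    """pos_l*/pos_r*: (x, y) positions of the left/right device at the
    earlier (1) and later (2) times; k3 is the third scale coefficient."""
    v1 = (pos_r1[0] - pos_l1[0], pos_r1[1] - pos_l1[1])  # first position vector
    v2 = (pos_r2[0] - pos_l2[0], pos_r2[1] - pos_l2[1])  # second position vector
    a1 = math.atan2(v1[1], v1[0])
    a2 = math.atan2(v2[1], v2[0])
    return k3 * (a2 - a1)            # signed included angle, scaled by k3
```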
In an embodiment, as shown in fig. 21 and fig. 22, when the selection instruction acquired in step S125 is associated with a one-to-one operation mode, step S126 may include:
Step S12611: obtain the first pose information of the controlled motion input device in the fixed coordinate system at the earlier time.
Step S12612: obtain the second pose information of the controlled motion input device in the fixed coordinate system at the later time.
Step S12613: calculate the incremental pose information of the controlled motion input device in the fixed coordinate system by combining the fourth scale coefficient with the first and second pose information.
Step S12614: map the incremental pose information of the controlled motion input device in the fixed coordinate system to the incremental pose information of the distal end of the controlled object in the first coordinate system.
It should be noted that, in some usage scenarios, when the robotic arm 21 moves, its distal end must move around a stationary point (a remote center of motion), that is, perform RCM-constrained motion. Specifically, this can be achieved by setting the task degrees of freedom at the distal end of the robotic arm so that they relate only to the attitude degrees of freedom. The task degrees of freedom of the distal end of an arm body can be understood as the degrees of freedom in which the distal end is allowed to move in Cartesian space, at most 6. The degrees of freedom the distal end actually possesses in Cartesian space are its effective degrees of freedom, which depend on its configuration (i.e., structural features) and can be understood as the degrees of freedom realizable by the distal end in Cartesian space.
The stationary point has a relatively fixed positional relationship with the distal end of the robotic arm. Depending on the particular control objective, the origin of the second coordinate system may be the fixed point in some embodiments, or a point on the distal end of the robotic arm in other embodiments.
In an embodiment, as shown in fig. 23, step S2, the decomposition step, may specifically include:
Step S211: acquire an input operation command associated with the task degrees of freedom of the distal end of the robotic arm.
Step S212: decompose each piece of initial target pose information in accordance with the task degrees of freedom to obtain a pose information set comprising first component target pose information of the distal end of the robotic arm in the first coordinate system and second component target pose information of the controlled operation end instrument in the second coordinate system.
The operation command may include a first operation command and a second operation command. The first operation command corresponds to the case where the task degrees of freedom of the distal end of the robotic arm 21 fully match the effective degrees of freedom of the robotic arm 21, so that the distal end may move freely within those effective degrees of freedom. The second operation command corresponds to the RCM-constrained motion described above, i.e., the case where the task degrees of freedom fully match the attitude degrees of freedom among the effective degrees of freedom of the robotic arm 21, ensuring that the distal end, i.e., the power mechanism 22, moves around the stationary point when the robotic arm 21 moves. Of course, other combinations of task degrees of freedom may be defined to facilitate control, and are not detailed here.
For example, when the second operation command is acquired in step S211, only the attitude-degree-of-freedom information is changed in the first component target pose information obtained by decomposition, while the position-degree-of-freedom information is kept unchanged. In this way the distal end of the robotic arm 21 moves around the stationary point, the desired pose is achieved mainly through the movement of the controlled operation end instrument 34B, and the safety of the operation can be ensured.
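The position-frozen decomposition described here can be sketched as follows; representing a pose as a (position, attitude) pair is an assumption for illustration:

```python
def decompose_rcm(current_arm_pose, desired_arm_pose):
    """Second-operation-command behaviour: the position part of the first
    component target pose is frozen (the distal end stays on the remote
    centre of motion); only the attitude part may be updated."""
    cur_pos, _cur_att = current_arm_pose
    _des_pos, des_att = desired_arm_pose
    return (cur_pos, des_att)
```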
In an embodiment, step S2, the decomposition step, may specifically include:
Step S221: acquire the current pose information of the distal end of the robotic arm in the first coordinate system.
Step S222: convert the initial target pose information to obtain the second component target pose information, on the assumption that the distal end of the robotic arm holds the current pose corresponding to its current pose information.
Step S223: judge the validity of the second component target pose information.
In this step, if the second component target pose information is valid, the process proceeds to step S224; otherwise, it proceeds to step S225.
Step S224: convert the initial target pose information to obtain the first component target pose information, on the assumption that the controlled operation end instrument reaches the target pose corresponding to the second component target pose information.
Step S225: adjust the second component target pose information to be valid and update it; then convert the initial target pose information to obtain the first component target pose information, on the assumption that the controlled operation end instrument reaches the target pose corresponding to the updated second component target pose information.
Through steps S221 to S225, when the pose of the controlled operation end instrument is adjusted, the corresponding operation arm is adjusted preferentially: if the movement of the operation arm suffices for the adjustment, only the operation arm needs to move; if it does not suffice, the adjustment can be completed in combination with the movement of the robotic arm.
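The operation-arm-first strategy of steps S221 to S225 can be sketched with poses reduced to scalars; the validity test (a reach limit) and the pose algebra are stand-in assumptions that only preserve the control flow:

```python
def decompose(initial_target, arm_current, instrument_reach):
    """Returns (first_component, second_component) with
    first_component + second_component == initial_target."""
    # S222: second component assuming the robotic arm holds its pose
    second = initial_target - arm_current
    if abs(second) <= instrument_reach:   # S223: second component valid?
        return arm_current, second        # S224: robotic arm need not move
    # S225: clamp the operation-arm part to a valid value, then let the
    # robotic arm make up the remainder
    second = instrument_reach if second > 0 else -instrument_reach
    first = initial_target - second
    return first, second
```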
Furthermore, in step S3, the first judgment step, since the second component target pose information is either valid in itself or has been adjusted to be valid, only the first component target pose information needs to be judged: when it is valid, the corresponding pose information set is judged valid; otherwise, the set is judged invalid.
Referring to fig. 6, the steps S221 to S222 may be implemented specifically by the following formula (1):

^{T1}T_{T2} = (^{B}T_{T1})^{-1} · ^{B}T_{T2}    (1)

where ^{B}T_{T2} is the initial target pose information of the controlled operation end instrument 34B in the first coordinate system, ^{B}T_{T1} is the current pose information of the distal end of the mechanical arm in the first coordinate system, and ^{T1}T_{T2} is the target pose information of the controlled operation end instrument 34B in the second coordinate system. T2 is the tool coordinate system of the controlled operation end instrument 34B, T1 is the tool coordinate system of the mechanical arm, and B is the base coordinate system of the mechanical arm. In the calculation, since ^{B}T_{T2} and ^{B}T_{T1} are known, ^{T1}T_{T2} can be calculated. If ^{T1}T_{T2} is judged to be invalid in step S223, it can be adjusted to be valid; then, since the adjusted ^{T1}T_{T2} and ^{B}T_{T2} are known, ^{B}T_{T1} can be calculated as the first component target pose information.

It will be understood by those skilled in the art that the foregoing embodiments which involve calculating (converting) the target pose information of the distal end of the corresponding arm in the corresponding coordinate system can likewise be implemented by using the above equation (1); only the meaning of terms such as ^{B}T_{T1} may vary depending on the circumstances, for example, ^{B}T_{T1} may instead denote the current pose information of the corresponding arm body in the first coordinate system, which is not described in detail herein.
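As a concrete illustration, formula (1) can be evaluated with 4×4 homogeneous transformation matrices. The numeric poses below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def make_pose(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

# ^B T_T1: current pose of the mechanical arm's distal end in base frame B (illustrative)
B_T_T1 = make_pose(np.eye(3), [0.1, 0.0, 0.5])
# ^B T_T2: initial target pose of the controlled operation end instrument in B (illustrative)
B_T_T2 = make_pose(np.eye(3), [0.1, 0.2, 0.3])

# Formula (1): ^T1 T_T2 = (^B T_T1)^-1 . ^B T_T2
T1_T_T2 = np.linalg.inv(B_T_T1) @ B_T_T2

# The instrument target is now expressed in the tool frame T1 of the mechanical arm
print(T1_T_T2[:3, 3])
```

Composing the two known transforms back (`B_T_T1 @ T1_T_T2`) recovers `B_T_T2`, which is exactly the consistency used in step S225 to solve for `B_T_T1` once `T1_T_T2` has been adjusted.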
In an embodiment, the step of determining the validity of the arbitrarily obtained target pose information includes:
and step S71, resolving the target pose information into target motion state parameters of each joint component in the corresponding arm body.
Step S72, the target motion state parameters of each joint component in the arm body are compared with the motion state threshold of each joint component in the arm body.
Step S73, if one or more of the target motion state parameters of the joint assemblies in the arm body exceed the motion state thresholds of the corresponding joint assemblies, the target pose information is judged to be invalid; and if none of the target motion state parameters of the joint assemblies in the arm body exceeds the motion state threshold of the corresponding joint assembly, the target pose information is judged to be valid.
The motion state parameters comprise a position parameter, a speed parameter and an acceleration parameter, and the motion state thresholds comprise a position parameter threshold, a speed parameter threshold and an acceleration parameter threshold. In the comparison, each motion state parameter is compared with the threshold of the same type.
Further, in the aforementioned step S225, specifically in the step of adjusting the second component target pose information to be valid, the motion state of each joint assembly in the arm body that exceeds its motion state threshold may be adjusted to fall within the corresponding motion state threshold so as to become valid. In one embodiment, the motion state of each joint assembly in the arm body that exceeds its motion state threshold can be adjusted to exactly the corresponding motion state threshold, so that the operation arm moves to its limit as far as possible before the mechanical arm cooperates to complete the adjustment.
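A minimal sketch of the validity check of steps S71 to S73 and the clamp-to-threshold embodiment of step S225. The joint limits and function names are hypothetical, and only position parameters are checked here (the real check also covers speed and acceleration parameters):

```python
import numpy as np

# Hypothetical per-joint position limits (radians) for a 4-joint arm body
JOINT_LIMITS = np.array([2.0, 1.5, 1.5, 3.0])

def is_valid(joint_targets, limits=JOINT_LIMITS):
    """Steps S71-S73: the target pose is invalid if any joint target
    exceeds the motion state threshold of the corresponding joint."""
    return bool(np.all(np.abs(np.asarray(joint_targets)) <= limits))

def adjust_to_valid(joint_targets, limits=JOINT_LIMITS):
    """Step S225 (one embodiment): clamp each violating joint to its
    threshold, so the operation arm moves to its limit before the
    mechanical arm cooperates in the adjustment."""
    return np.clip(joint_targets, -limits, limits)

targets = np.array([1.0, 1.8, -0.5, 3.5])
print(is_valid(targets))         # joints 2 and 4 exceed their thresholds
print(adjust_to_valid(targets))  # violating joints clamped to the thresholds
```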
The above described embodiments are suitable for controlling end instruments in a surgical robot of the type shown in fig. 1. The surgical robot of this type includes one robot arm 21 and one or more operation arms 31 having end instruments 34 installed at the distal end of the robot arm 21, and the robot arm 21 and the operation arms 31 each have several degrees of freedom.
The above embodiments are equally applicable to the control of end instruments in a surgical robot of the type shown in fig. 26. The surgical robot of this type includes a main arm 32', one or more adjusting arms 30' installed at the distal end of the main arm 32', and one or more operation arms 31' having end instruments installed at the distal ends of the adjusting arms 30'; the main arm 32', the adjusting arms 30' and the operation arms 31' each have several degrees of freedom. As shown in fig. 26, in this surgical robot, four adjusting arms 30' may be provided, each adjusting arm 30' carrying only one operation arm 31'. According to the actual use scenario, the three-segment arm structure of the surgical robot shown in fig. 26 can be mapped onto the two-segment arm structure of the surgical robot shown in fig. 1 to realize control. In an embodiment, where the concepts of the operation arms in the two types of surgical robots coincide, each adjusting arm 30' of the surgical robot shown in fig. 26 may, depending on the configuration, be regarded as the mechanical arm 21 of the surgical robot shown in fig. 1 for control; alternatively, depending on the configuration, the whole formed by the adjusting arms 30' and the main arm 32' of the surgical robot shown in fig. 26 may be controlled as the mechanical arm 21 of the surgical robot shown in fig. 1. In one embodiment, the main arm 32' of the surgical robot shown in fig. 26 may be regarded as the mechanical arm 21 of the surgical robot shown in fig. 1, and the whole formed by an adjusting arm 30' and its corresponding operation arm 31' may be regarded as the operation arm 31 of the surgical robot shown in fig. 1.
The control method is particularly suitable for a single-hole (single-port) surgical robot. It is also applicable to a multi-hole surgical robot, provided that the distal end of its mechanical arm is fitted with an operation arm whose end instrument is used for imaging and one or more operation arms whose end instruments are used for manipulation.
In one embodiment, the control method of the surgical robot is generally configured to be implemented in a processing system of the surgical robot, the processing system having one or more processors.
In one embodiment, a computer-readable storage medium is provided, in which a computer program is stored, the computer program being configured to be executed by one or more processors to implement the steps of the control method according to any one of the above-mentioned embodiments.
The surgical robot, the control method thereof and the computer readable storage medium of the invention have the following advantages:
by controlling the movement of the mechanical arm 21, the image end instrument is kept at its current pose while the controlled operation end instrument 34B reaches its target pose; therefore, on the premise of keeping the field of view unchanged, the movement of the mechanical arm 21 enlarges the operation space of the operation end instrument 34B, which is convenient and safe to use.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the inventive concept, all of which fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (14)

1. A method for controlling a distal end instrument in a surgical robot including a robot arm provided at a distal end thereof with two or more operation arms having distal end instruments including an image distal end instrument and one or more operation distal end instruments including a controlled operation distal end instrument and an uncontrolled operation distal end instrument, the method comprising:
an acquisition step of acquiring initial target pose information of each controlled operation terminal instrument;
decomposing the initial target pose information to obtain a group of pose information sets respectively, wherein each group of pose information set comprises first component target pose information of the far end of the mechanical arm in a first coordinate system and second component target pose information of the controlled operation end instrument in a second coordinate system, the first coordinate system refers to a base coordinate system of the mechanical arm, and the second coordinate system refers to a tool coordinate system of the mechanical arm;
a first judgment step of judging the validity of each set of pose information sets;
a calculation step of calculating, in a condition that at least one group of pose information sets is valid and the image end apparatus and each of the uncontrolled operation end apparatuses are kept in the current pose, first target pose information of the distal end of the mechanical arm in a first coordinate system, second target pose information of the image end apparatus in a second coordinate system, third target pose information of each of the controlled operation end apparatuses in the second coordinate system, and fourth target pose information of each of the uncontrolled operation end apparatuses in the second coordinate system, respectively, by combining the pose information sets;
a second judgment step of judging validity of the first to fourth target pose information;
and a control step of, when the first to fourth target pose information are all valid, controlling the movement of the robot arm according to the first target pose information so as to enable the distal end of the robot arm to reach a corresponding target pose, controlling the movement of the operation arm corresponding to the image end instrument according to the second target pose information so as to enable the image end instrument to be kept at a current pose, controlling the movement of the operation arm corresponding to the controlled operation end instrument according to each third target pose information so as to enable the controlled operation end instrument to reach a corresponding target pose, and controlling the movement of the operation arm corresponding to the uncontrolled operation end instrument according to each fourth target pose information so as to enable each uncontrolled operation end instrument to be kept at a current pose.
2. The control method according to claim 1, wherein when there is one controlled operation tip instrument and the set of pose information is valid, the calculating step includes:
under the condition that the far end of the mechanical arm reaches the target pose corresponding to the first component target pose information, converting the current pose information of the image end instrument to obtain the target pose information of the image end instrument in a second coordinate system, and converting the current pose information of each uncontrolled operation end instrument to obtain the target pose information of the uncontrolled operation end instrument in the second coordinate system;
assigning the first component target pose information to the first target pose information, assigning the target pose information of the end-of-image instrument in a second coordinate system to the second target pose information, assigning the second component target pose information to the third target pose information, and assigning the target pose information of each of the uncontrolled operation end instruments in the second coordinate system to fourth target pose information of the corresponding uncontrolled operation end instrument.
3. The control method according to claim 1, wherein when the controlled operation tip instruments are two or more, and one of the pose information sets is valid and the remaining pose information sets are invalid, the calculating step includes:
under the condition that the far end of the mechanical arm reaches a target pose corresponding to first component target pose information in the effective pose information set, converting the current pose information of the image end instrument to obtain the target pose information of the image end instrument in a second coordinate system, converting the current pose information of each uncontrolled operation end instrument to obtain the target pose information of the uncontrolled operation end instrument in the second coordinate system, and converting the current pose information of the controlled operation end instrument associated with each ineffective pose information to obtain second expected target pose information of the controlled operation end instrument in the second coordinate system;
assigning first component target pose information in the set of effective pose information to the first target pose information, assigning target pose information of the end-of-image instrument in a second coordinate system to the second target pose information, assigning second component target pose information in the set of effective pose information to associated third target pose information of the end-of-controlled-operation instrument, assigning each second expected target pose information to third target pose information corresponding to the end-of-controlled-operation instrument, and assigning target pose information of each end-of-uncontrolled-operation instrument in the second coordinate system to fourth target pose information corresponding to the end-of-uncontrolled-operation instrument.
4. The control method according to claim 1, wherein when the controlled operation tip instrument is plural, and two or more of the pose information sets are valid and the remaining pose information sets are invalid, the calculating step includes:
respectively converting the current pose information of the image end instrument to obtain the target pose information of the image end instrument in a second coordinate system under the condition that the far end of the mechanical arm reaches the target pose corresponding to the first component target pose information in each effective pose information set;
judging the effectiveness of the pose information of each target of the image terminal instrument in a second coordinate system;
when only one target pose information of the image end instrument is effective, under the condition that the far end of the mechanical arm reaches a target pose corresponding to a first component target pose information in an effective pose information set associated with the effective target pose information of the image end instrument, converting the current pose information of each controlled operation end instrument associated with the ineffective pose information set to obtain second expected target pose information of the controlled operation end instrument in a second coordinate system, converting the current pose information of each uncontrolled operation end instrument to obtain the target pose information of the uncontrolled operation end instrument in the second coordinate system, and converting the initial target pose information of each controlled operation end instrument associated with the rest effective pose information set to obtain first expected target pose information of the controlled operation end instrument in the second coordinate system;
judging the validity of each first expected target position and attitude information;
assigning first component target pose information in the set of pose information associated with valid target pose information of the image end instrument to the first target pose information, assigning valid target pose information of the image end instrument to the second target pose information, assigning second component target pose information in the set of pose information associated with valid target pose information of the image end instrument to third target pose information of the associated controlled operation end instrument, assigning each first desired target pose information and each second desired target pose information to third target pose information of the corresponding controlled operation end instrument, and assigning target pose information of each uncontrolled operation end instrument in a second coordinate system to fourth target pose information of the corresponding uncontrolled operation end instrument, if each first desired target pose information is valid;
if at least part of the first expected target pose information is invalid, under the condition that the far end of the mechanical arm reaches a target pose corresponding to first component target pose information in the valid pose information set and associated with valid target pose information of the image end instrument, converting current pose information of the controlled operation end instrument associated with the invalid first expected target pose information to obtain second expected target pose information of the controlled operation end instrument in a second coordinate system;
assigning first component target pose information in the set of pose information associated with valid target pose information for the end-of-image instrument to the first target pose information, assigning valid target pose information for the end-of-image instrument to the second target pose information, assigning second component target pose information in the set of pose information associated with valid target pose information for the end-of-image instrument to the third target pose information for the associated controlled operation end instrument, assigning each of the valid first desired target pose information to the corresponding third target pose information for the controlled operation end instrument, assigning each of the second desired target pose information to the corresponding third target pose information for the controlled operation end instrument, and assigning target pose information for each of the uncontrolled operation end instruments in a second coordinate system to the fourth target pose information for the corresponding uncontrolled operation end instrument;
when two or more pieces of target pose information of the image end instrument are valid, selecting one of the valid target pose information of the image end instrument to remain valid and treating the rest as invalid, and then proceeding to the step performed when only one piece of target pose information of the image end instrument is valid.
5. The control method according to claim 1, wherein when the number of the controlled operation tip instruments is two or more and each of the posture information sets is valid, the calculating step includes:
respectively converting the current pose information of the image end instrument to obtain the target pose information of the image end instrument in a second coordinate system under the condition that the far end of the mechanical arm reaches the target pose corresponding to the first component target pose information in each effective pose information set;
judging the effectiveness of the pose information of each target of the image terminal instrument in a second coordinate system;
when only one target pose information of the image end instrument is effective, under the condition that the far end of the mechanical arm reaches a target pose corresponding to first component target pose information in an effective pose information set associated with the effective target pose information of the image end instrument, converting the current pose information of each uncontrolled operation end instrument to obtain the target pose information of the uncontrolled operation end instrument in a second coordinate system, and converting the initial target pose information of each controlled operation end instrument associated with the rest effective pose information set to obtain the first expected target pose information of the controlled operation end instrument in the second coordinate system;
judging the validity of each first expected target position and attitude information;
assigning first component target pose information in the set of pose information associated with valid target pose information of the image end instrument to the first target pose information, assigning valid target pose information of the image end instrument to the second target pose information, assigning second component target pose information in the set of pose information associated with valid target pose information of the image end instrument to third target pose information of the associated controlled operation end instrument, assigning each first desired target pose information to third target pose information of the corresponding controlled operation end instrument, and assigning target pose information of each uncontrolled operation end instrument in a second coordinate system to fourth target pose information of the corresponding uncontrolled operation end instrument if each first desired target pose information is valid;
if at least part of the first expected target pose information is invalid, under the condition that the far end of the mechanical arm reaches a target pose corresponding to first component target pose information in the valid pose information set and associated with valid target pose information of the image end instrument, converting current pose information of the controlled operation end instrument associated with the invalid first expected target pose information to obtain second expected target pose information of the controlled operation end instrument in a second coordinate system;
assigning first component target pose information of the set of pose information associated with valid target pose information of the image end instrument to the first target pose information, assigning valid target pose information of the image end instrument to the second target pose information, assigning second component target pose information of the set of pose information associated with valid target pose information of the image end instrument to the associated third target pose information of the controlled operation end instrument, assigning each valid first desired target pose information to the corresponding third target pose information of the controlled operation end instrument, assigning each second desired target pose information to the corresponding third target pose information of the controlled operation end instrument, and assigning the target pose information of each uncontrolled operation end instrument in a second coordinate system to the fourth target pose information of the corresponding uncontrolled operation end instrument;
when two or more pieces of target pose information of the image end instrument are valid, selecting one of the valid target pose information of the image end instrument to remain valid and treating the rest as invalid, and then proceeding to the step performed when only one piece of target pose information of the image end instrument is valid.
6. The control method according to claim 1, wherein the decomposing step includes:
acquiring an input operation command related to the task degree of freedom of the remote end of the mechanical arm;
and decomposing each initial target pose information by combining the task freedom degrees to obtain a group of pose information sets including first component target pose information of the far end of the mechanical arm in a first coordinate system and second component target pose information of the controlled operation end instrument in a second coordinate system.
7. The control method according to claim 6, characterized in that:
the operation commands comprise a first operation command and a second operation command;
the first operation command is associated with a case where a task degree of freedom of the distal end of the robot arm completely matches an effective degree of freedom of the robot arm;
the second operation command is associated with a case where a task degree of freedom of the distal end of the robot arm completely matches a pose degree of freedom in the effective degrees of freedom of the robot arm.
8. The control method according to claim 1, wherein the decomposing step includes:
acquiring current pose information of the far end of the mechanical arm in a first coordinate system;
converting the initial target pose information to obtain second component target pose information under the condition that the far end of the mechanical arm is kept at the current pose corresponding to the current pose information of the mechanical arm;
judging the validity of the pose information of the second component target;
if the second component target pose information is valid, converting the initial target pose information to obtain the first component target pose information under the condition that the controlled operation end instrument reaches a target pose corresponding to the second component target pose information;
and if the second component target pose information is invalid, adjusting the second component target pose information to be valid, updating the second component target pose information, and converting the initial target pose information to obtain the first component target pose information under the condition that the controlled operation end instrument reaches a target pose corresponding to the updated second component target pose information.
9. The control method according to claim 8, wherein the first determination step includes:
judging the validity of the first component target pose information obtained by converting the initial target pose information;
if the first component target pose information is valid, judging that the pose information set is valid;
and if the first component target pose information is invalid, judging that the pose information set is invalid.
10. The control method according to any one of claims 1 to 9, wherein the step of determining the validity of the first to fourth target pose information includes:
analyzing the target pose information into target motion state parameters of each joint assembly in a corresponding arm body, wherein the arm body is the mechanical arm or the operating arm;
comparing the target motion state parameters of each joint component in the arm body with the motion state threshold of each joint component in the arm body;
if one or more of the target motion state parameters of the joint assemblies in the arm body exceed the motion state thresholds of the corresponding joint assemblies, judging that the target pose information is invalid; and if none of the target motion state parameters of the joint assemblies in the arm body exceeds the motion state threshold of the corresponding joint assembly, judging that the target pose information is valid.
11. The control method according to claim 1, wherein the acquiring step includes:
acquiring motion information of the remote operation of the controlled operation terminal instrument input by a motion input device, wherein the motion information is pose information of the motion input device;
and analyzing the motion information into initial target pose information of the controlled operation terminal instrument.
12. A computer-readable storage medium, characterized in that it stores a computer program configured to be loaded by a processor and to execute steps implementing a control method according to any one of claims 1 to 11.
13. A control device for a surgical robot, comprising:
a memory for storing a computer program;
and a processor for loading and executing the computer program;
wherein the computer program is configured to be loaded by the processor and to execute steps implementing a control method according to any of claims 1-11.
14. A surgical robot, comprising:
a mechanical arm;
one or more operation arms provided with end instruments and arranged at the distal end of the mechanical arm, wherein the end instruments comprise an image end instrument and one or more operation end instruments, and each of the operation end instruments is configured as a controllable operation end instrument;
and a control device respectively connected with the mechanical arm and the operating arm;
the control device is used for executing the steps of realizing the control method according to any one of claims 1-11.
CN201910854104.1A 2019-09-10 2019-09-10 Surgical robot, method and device for controlling distal end instrument, and storage medium Active CN110464469B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910854104.1A CN110464469B (en) 2019-09-10 2019-09-10 Surgical robot, method and device for controlling distal end instrument, and storage medium
PCT/CN2020/114113 WO2021047520A1 (en) 2019-09-10 2020-09-08 Surgical robot and control method and control device for distal instrument thereof

Publications (2)

Publication Number Publication Date
CN110464469A CN110464469A (en) 2019-11-19
CN110464469B true CN110464469B (en) 2020-12-01


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021047520A1 (en) * 2019-09-10 2021-03-18 深圳市精锋医疗科技有限公司 Surgical robot and control method and control device for distal instrument thereof
CN111805540A (en) * 2020-08-20 2020-10-23 北京迁移科技有限公司 Method, device and equipment for determining workpiece grabbing pose and storage medium
CN115227407B (en) * 2022-09-19 2023-03-03 杭州三坛医疗科技有限公司 Surgical robot control method, device, system, equipment and storage medium

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6839612B2 (en) * 2001-12-07 2005-01-04 Institute Surgical, Inc. Microwrist system for surgical procedures
US7386365B2 (en) * 2004-05-04 2008-06-10 Intuitive Surgical, Inc. Tool grip calibration for robotic surgery
US9526587B2 (en) * 2008-12-31 2016-12-27 Intuitive Surgical Operations, Inc. Fiducial marker design and detection for locating surgical instrument in images
EP2037794B1 (en) * 2006-06-13 2021-10-27 Intuitive Surgical Operations, Inc. Minimally invasive surgical system
US8935003B2 (en) * 2010-09-21 2015-01-13 Intuitive Surgical Operations Method and system for hand presence detection in a minimally invasive surgical system
JP5612971B2 (en) * 2010-09-07 2014-10-22 オリンパス株式会社 Master-slave manipulator
KR102307790B1 (en) * 2013-02-04 2021-09-30 칠드런스 내셔널 메디컬 센터 Hybrid control surgical robotic system
WO2014156250A1 (en) * 2013-03-29 2014-10-02 オリンパス株式会社 Master-slave system
CN107106249B (en) * 2015-01-16 2019-11-29 奥林巴斯株式会社 Operation input device and medical manipulator system
JP2017177297A (en) * 2016-03-31 2017-10-05 ソニー株式会社 Control device and control method
CN106553195B (en) * 2016-11-25 2018-11-27 中国科学技术大学 Object 6DOF localization method and system during industrial robot crawl
CN107468293A (en) * 2017-08-31 2017-12-15 中国科学院深圳先进技术研究院 Micro-wound operation robot and apply its surgical device
CN110051436B (en) * 2018-01-18 2020-04-17 上海舍成医疗器械有限公司 Automated cooperative work assembly and application thereof in surgical instrument
JP7064190B2 (en) * 2018-01-23 2022-05-10 国立大学法人東海国立大学機構 Surgical instrument control device and surgical instrument control method
CN109048890B (en) * 2018-07-13 2021-07-13 哈尔滨工业大学(深圳) Robot-based coordinated trajectory control method, system, device and storage medium
CN109223183A (en) * 2018-09-30 2019-01-18 深圳市精锋医疗科技有限公司 Starting method, readable access to memory and the operating robot of operating robot
CN110116407B (en) * 2019-04-26 2021-03-30 哈尔滨工业大学(深圳) Flexible robot position and posture measuring method and device

Also Published As

Publication number Publication date
CN110464469A (en) 2019-11-19

Similar Documents

Publication Publication Date Title
CN110464471B (en) Surgical robot and control method and control device for tail end instrument of surgical robot
CN110559083B (en) Surgical robot and control method and control device for tail end instrument of surgical robot
CN110464469B (en) Surgical robot, method and device for controlling distal end instrument, and storage medium
CN111315309B (en) System and method for controlling a robotic manipulator or associated tool
CN110559082B (en) Surgical robot and control method and control device for mechanical arm of surgical robot
CN110464470B (en) Surgical robot and control method and control device for arm body of surgical robot
CN110464473B (en) Surgical robot and control method and control device thereof
US11541551B2 (en) Robotic arm
CN113876434A (en) Master-slave motion control method, robot system, device, and storage medium
Stroppa et al. Human interface for teleoperated object manipulation with a soft growing robot
WO2021047520A1 (en) Surgical robot and control method and control device for distal instrument thereof
JP7079899B2 (en) Robot joint control
WO2020209165A1 (en) Surgical operation system and method for controlling surgical operation system
JP7079881B2 (en) Robot joint control
EP4313508A1 (en) Systems and methods for controlling a robotic manipulator or associated tool
CN117442347A (en) Remote control method for underdrive mechanism
WO2022101175A1 (en) Adaptive robot-assisted system and method for evaluating the position of the trocar in a robot-assisted laparoscopic surgery intervention
CN116763450A (en) Robot system including bending tool and master-slave motion control method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant