WO2022126996A1 - Surgical robot, control method therefor and control device therefor - Google Patents

Surgical robot, control method therefor and control device therefor

Info

Publication number
WO2022126996A1
WO2022126996A1 · PCT/CN2021/092563 · CN2021092563W
Authority
WO
WIPO (PCT)
Prior art keywords
area
visible area
new
control method
arm
Prior art date
Application number
PCT/CN2021/092563
Other languages
English (en)
Chinese (zh)
Inventor
高元倩
王建辰
Original Assignee
深圳市精锋医疗科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市精锋医疗科技有限公司
Publication of WO2022126996A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30: Surgical robots
    • A61B 34/37: Master-slave robots
    • A61B 34/70: Manipulators specially adapted for use in surgery
    • A61B 34/74: Manipulators with manual electric input means
    • A61B 90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36: Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/361: Image-producing devices, e.g. surgical cameras
    • A61B 90/37: Surgical systems with images on a monitor during operation
    • A61B 2034/302: Surgical robots specifically adapted for manipulations within body cavities, e.g. within abdominal or thoracic cavities

Definitions

  • the present application relates to the field of medical devices, and in particular, to a surgical robot, a control method and a control device thereof.
  • Minimally invasive surgery refers to a surgical method that uses modern medical instruments such as laparoscope and thoracoscope and related equipment to perform surgery inside the human cavity. Compared with traditional surgical methods, minimally invasive surgery has the advantages of less trauma, less pain, and faster recovery.
  • the surgical robot includes a master operation table and a slave operation device, and the slave operation device includes a plurality of operation arms, and the operation arms include a camera arm with an image end instrument and a surgical arm with an operation end instrument.
  • the main console includes a display and a handle. The doctor operates the handle to control the movement of the camera arm or the surgical arm under the field of view provided by the camera arm displayed on the monitor.
  • the present application provides a method for controlling a surgical robot. The surgical robot has operating arms including a camera arm with an image end instrument and a surgical arm with an operation end instrument, and the operating arm has a first feature part that can be configured as a following part. The control method includes the steps of: acquiring the following part configured based on the first feature part; and adjusting the image end instrument to obtain a new visible area of the image end instrument in the reference coordinate system so that at least part of the following part falls into the new visible area.
  • the following parts are the operating end instruments and/or joints of the surgical arm.
  • the step of adjusting the image end instrument to obtain a new visible area of the image end instrument in the reference coordinate system so that at least part of the following parts fall into the new visible area includes: acquiring the union area of all visible areas of the image end instrument in the reference coordinate system; judging whether the following parts can all fall into the union area; and, when the following parts can all fall into the union area, adjusting the image end instrument to obtain a new visible area of the image end instrument in the reference coordinate system so that all the following parts fall into the new visible area.
  • the step of adjusting the image end instrument to obtain a new visible area of the image end instrument in the reference coordinate system so that at least part of the following parts fall into the new visible area includes: acquiring the union area of all visible areas of the image end instrument in the reference coordinate system; judging whether the following parts can all fall into the union area; when the following parts cannot all fall into the union area, configuring, according to an operation instruction, at least part of the following parts that can fall into the union area as first following parts; and adjusting the image end instrument to obtain a new visible area of the image end instrument in the reference coordinate system so that the first following parts fall into the new visible area.
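The union-area check described in this step can be sketched as follows. This is a hypothetical illustration, not the application's implementation: each visible area is approximated as an axis-aligned box in the reference coordinate system, and the names `in_box` and `split_followers` are invented for the sketch.

```python
# Illustrative sketch of the union-area membership test. Each visible
# area is approximated as an axis-aligned box (lo, hi) in the reference
# coordinate system; the union is represented as a list of such boxes.

def in_box(point, box):
    """True if point lies inside the axis-aligned box (lo, hi)."""
    lo, hi = box
    return all(l <= c <= h for c, l, h in zip(point, lo, hi))

def split_followers(followers, union_boxes):
    """Split follower positions into those that can fall into the union
    of visible areas (candidates for first following parts) and those
    that cannot."""
    reachable, unreachable = [], []
    for p in followers:
        if any(in_box(p, b) for b in union_boxes):
            reachable.append(p)
        else:
            unreachable.append(p)
    return reachable, unreachable
```

When `unreachable` is non-empty, only a subset of the following parts can be configured as first following parts, which corresponds to the branch described above.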
  • the operation instruction includes a first operation instruction. The first operation instruction is associated with all or part of the following parts that can fall into the union area, and the step of configuring, according to the operation instruction, at least part of the following parts that can fall into the union area as first following parts includes: configuring, according to the first operation instruction, all or part of the following parts that can fall into the union area as the first following parts.
  • the surgical arms include a surgical arm of a first priority and a surgical arm of a second priority, and each surgical arm is configured with following parts.
  • the operation instruction includes a second operation instruction. The second operation instruction is associated with the following parts, all or part of which can fall into the union area, of the surgical arm of the first priority, and the step of configuring, according to the operation instruction, at least part of the following parts that can fall into the union area as first following parts includes: configuring, according to the second operation instruction, all or part of the following parts on the surgical arm of the first priority that can fall into the union area as the first following parts.
  • the surgical arm of the first priority refers to a surgical arm in a moving state, and the surgical arm of the second priority refers to a surgical arm in a stationary state.
  • alternatively, the surgical arm of the first priority refers to a surgical arm whose operation end instrument moves at a speed reaching a preset threshold, and the surgical arm of the second priority refers to a surgical arm whose operation end instrument moves at a speed below the preset threshold.
  • the step of judging whether the following part falls into the current visible area includes: acquiring an operation image of the current visible area collected by the camera arm; identifying whether the following part is located in the operation image; and judging accordingly whether the following part falls into the current visible area.
  • the step of judging whether the following part falls into the current visible area includes: acquiring a kinematic model of the operating arm and the joint variables of each joint in the operating arm; determining the position of the following part in the reference coordinate system by combining the kinematic model and the joint variables; converting the current visible area into a position range in the reference coordinate system; judging whether the position of the following part is within the position range; and, when it is, determining that the following part is within the current visible area.
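The kinematics-based test above can be illustrated with a minimal sketch. The planar two-link forward kinematics below is only a stand-in for the operating arm's real kinematic model, and the box-shaped position range is an assumed representation of the converted visible area; all names are illustrative.

```python
import numpy as np

def forward_kinematics(joints, links=(1.0, 1.0)):
    """Planar two-link FK: position of the following part in the
    reference frame, computed from the joint variables (a stand-in for
    the real kinematic model of the operating arm)."""
    q1, q2 = joints
    l1, l2 = links
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return np.array([x, y])

def in_visible_area(joints, lo, hi):
    """Convert joint variables to a position and test it against the
    visible area expressed as a position range [lo, hi]."""
    p = forward_kinematics(joints)
    return bool(np.all(p >= lo) and np.all(p <= hi))
```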
  • the image end instrument has adjustable camera parameters, and the step of adjusting the image end instrument to obtain a new visible area of the image end instrument in the reference coordinate system so that at least part of the following part falls into the new visible area includes: acquiring the current position of the following part in the reference coordinate system; and adjusting the camera parameters of the image end instrument according to the current position to generate the new visible area so that at least part of the following part falls into the new visible area.
  • the camera parameters include focal length and/or aperture.
  • the step of adjusting the image end instrument to obtain a new visible area of the image end instrument in the reference coordinate system so that at least part of the following part falls into the new visible area includes: acquiring the current position of the following part in the reference coordinate system; acquiring the task degrees of freedom configured from the effective degrees of freedom of the image end instrument; and adjusting the movement of the image end instrument according to the current position and the task degrees of freedom to obtain the new visible area of the image end instrument in the reference coordinate system so that at least part of the following part falls into the new visible area.
  • the effective degrees of freedom of the image end device include positional degrees of freedom and attitude degrees of freedom.
  • when the task degree of freedom is selected from the positional degrees of freedom as the degree of freedom along the depth-of-field direction, adjusting the movement of the image end instrument to obtain the new visible area of the image end instrument in the reference coordinate system so that the following part falls into the new visible area is specifically: adjusting the image end instrument to extend and retract along the degree of freedom in the depth-of-field direction to obtain a new visible area of the image end instrument in the reference coordinate system so that at least part of the following part falls into the new visible area.
  • when the task degrees of freedom are selected from the positional degrees of freedom as the planar degrees of freedom perpendicular to the depth-of-field direction, adjusting the movement of the image end instrument to obtain the new visible area of the image end instrument in the reference coordinate system so that the following part falls into the new visible area is specifically: adjusting the translation of the image end instrument on the planar degrees of freedom to obtain the new visible area of the image end instrument in the reference coordinate system so that the following part falls into the new visible area.
  • when the task degrees of freedom are selected from the attitude degrees of freedom, adjusting the movement of the image end instrument to obtain the new visible area of the image end instrument in the reference coordinate system so that the following part falls into the new visible area is specifically: keeping the position of the image end instrument unchanged while changing its attitude to generate the new visible area, so that at least part of the following part falls into the new visible area.
  • after the new visible area is obtained, the method includes: ending the adjustment of the image end instrument and maintaining the obtained new visible area.
  • the step of ending the adjustment of the image end device includes: detecting whether an interrupt instruction is acquired; when the interrupt instruction is acquired, ending the adjustment of the image end device.
  • the surgical arm has more than one second characteristic part that can be configured as a target part, and the method includes: acquiring the target part configured based on the second characteristic part currently located in the new visible area; and limiting the movement of the target part within the new visible area based on the new visible area.
  • the step of acquiring the target part configured based on the second characteristic part currently located in the new visible area includes: acquiring the second characteristic parts of the surgical arm that can be configured as target parts; judging whether each second characteristic part is currently located in the new visible area; letting each second characteristic part determined to be currently located in the new visible area be a first part; and acquiring the target part based on the first parts.
  • the step of judging whether the second characteristic part is currently located in the new visible area includes: acquiring an operation image of the new visible area collected by the camera arm; identifying whether the second characteristic part is located in the operation image; and, when the second characteristic part is located in the operation image, determining that the second characteristic part is located in the new visible area.
  • the step of judging whether the second characteristic part is currently located in the new visible area includes: acquiring a kinematic model of the operating arm and the joint variables of each joint in the operating arm; determining the position of the second characteristic part in the reference coordinate system by combining the kinematic model and the joint variables; converting the new visible area into a position range in the reference coordinate system; judging whether the position of the second characteristic part is within the position range; and, when the second characteristic part is within the position range, determining that the second characteristic part is located in the new visible area.
  • the step of limiting the movement of the target part within the new visible area based on the new visible area includes: judging whether the target part reaches the boundary of the new visible area; when the target part reaches the boundary of the new visible area, judging whether the movement direction of the target part at the next moment is toward the outside of the new visible area; and, when the movement direction of the target part at the next moment is toward the outside of the new visible area, prohibiting the target part from moving toward the outside of the new visible area.
  • prohibiting the target part from moving toward the outside of the new visible area includes prohibiting only the movement of the target part in the direction toward the outside of the new visible area, or prohibiting the movement of the operating arm altogether.
  • the step of judging whether the target part reaches the boundary of the new visible area includes: acquiring an operation image of the new visible area collected by the camera arm; identifying whether the target part reaches the edge of the operation image; and, when the target part reaches the edge of the operation image, determining that the target part reaches the boundary of the new visible area.
  • the step of judging whether the target part reaches the boundary of the new visible area includes: acquiring a kinematic model of the operating arm and the joint variables of each joint in the operating arm; determining the position of the target part in the reference coordinate system by combining the kinematic model and the joint variables; converting the new visible area into a position range in the reference coordinate system; judging whether the position of the target part reaches the boundary of the position range; and, when the target part reaches the boundary of the position range, determining that the target part reaches the boundary of the new visible area.
  • the step of judging whether the movement direction of the target part at the next moment is toward the outside of the new visible area includes: acquiring the current position of the target part when it reaches the boundary of the new visible area; acquiring the target position of the target part at the next moment; and judging, according to the target position and the current position, whether the movement direction of the target part at the next moment is toward the outside of the new visible area.
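The direction test can be sketched as follows, again assuming the visible area is represented as an axis-aligned position range [lo, hi]; the name `moving_outward` is invented for the sketch.

```python
# Sketch: the target part sits on the boundary of the visible area
# [lo, hi]; compare its next-moment target position with the current
# position to decide whether the motion points out of the area.

def moving_outward(current, target, lo, hi, eps=1e-9):
    for c, t, l, h in zip(current, target, lo, hi):
        if abs(c - l) < eps and t < c:  # on a lower face, moving further out
            return True
        if abs(c - h) < eps and t > c:  # on an upper face, moving further out
            return True
    return False
```

When this returns True the outward component of the commanded motion would be prohibited, as described above.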
  • the step of acquiring the current position of the target part when it reaches the boundary of the new visible area includes: acquiring the kinematic model of the operating arm and the joint variables of each joint in the operating arm at the current moment; and calculating the current position of the target part at the current moment using the kinematic model and the joint variables.
  • the surgical robot includes a motion input part, and the step of acquiring the target position of the target part at the next moment includes: acquiring the target pose information input by the motion input part; calculating the joint variables of each joint in the surgical arm according to the target pose information; acquiring a kinematic model of the operating arm; and determining the target position of the target part at the next moment by combining the kinematic model and the joint variables.
  • the step of limiting the movement of the target part within the new visible area based on the new visible area includes: acquiring a safe movement area located within the new visible area, letting the area within the safe movement area be a first area and the area outside the safe movement area but within the new visible area be a second area; and changing the movement speed of the target part according to changes in the position and movement direction of the target part within the first area and the second area.
  • the step of changing the movement speed of the target part according to changes in the position and movement direction of the target part within the first area and the second area includes: when the target part moves from the boundary of the first area toward the outer boundary of the second area, reducing the movement speed of the target part in the corresponding direction; and, when the target part moves from the outer boundary of the second area toward the boundary of the first area, increasing the movement speed of the target part in the corresponding direction.
  • the movement speed of the target part in the corresponding direction is positively correlated with the distance between the target part and the outer boundary of the second area.
  • the movement speed of the target part in the corresponding direction has a linear positive correlation with the distance between the target part and the outer boundary of the second region.
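The linear speed scaling in the second (buffer) area can be sketched as below: full commanded speed at the first-area boundary, zero at the outer boundary of the second area, linear in between. The function name and the clamping behaviour are illustrative assumptions.

```python
def scaled_speed(v_cmd, dist_to_outer, buffer_width):
    """Scale the commanded speed in the corresponding direction by the
    distance between the target part and the outer boundary of the
    second area (linear positive correlation)."""
    gain = min(max(dist_to_outer / buffer_width, 0.0), 1.0)
    return v_cmd * gain
```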
  • the surgical robot includes a mechanical motion input part for inputting control commands for controlling the motion of the surgical arm, and the step of limiting the movement of the target part within the new visible area based on the new visible area includes: acquiring the configured safe movement area within the new visible area, letting the area within the safe movement area be a first area and the area outside the safe movement area but within the new visible area be a second area; when the target part moves from the boundary of the first area toward the outer boundary of the second area, increasing the resistance of the motion input part when moving in the corresponding direction; and, when the target part moves from the outer boundary of the second area toward the boundary of the first area, reducing the resistance of the motion input part when moving in the corresponding direction.
  • the resistance when the motion input part moves in the corresponding direction is negatively correlated with the distance between the target part and the outer boundary of the second region.
  • the resistance when the motion input part moves in the corresponding direction is linearly negatively correlated with the distance between the target part and the outer boundary of the second region.
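The haptic counterpart can be sketched in the same way: resistance on the motion input part grows linearly as the remaining distance to the outer boundary shrinks (a linear negative correlation). The resistance limits `r_min` and `r_max` are illustrative device constants, not values from the application.

```python
def input_resistance(dist_to_outer, buffer_width, r_min=0.0, r_max=1.0):
    """Resistance of the motion input part in the corresponding
    direction: r_max at the outer boundary of the second area, r_min at
    the first-area boundary, linear in between."""
    t = min(max(dist_to_outer / buffer_width, 0.0), 1.0)
    return r_max - (r_max - r_min) * t
```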
  • the new visible area is a plane area or a three-dimensional space determined by the angle of view and the depth of field of the camera arm.
  • the camera parameters of the image end instrument are adjustable, and before the step of limiting the movement of the target part within the new visible area based on the new visible area, the method includes: acquiring a configured enlarged motion area at least part of which lies outside the new visible area; and adjusting the camera parameters based on the new visible area and the enlarged motion area so that the generated new visible area covers both the original visible area and the enlarged motion area. The camera parameters include the focal length and the aperture; the focal length is related to the angle of view, and the aperture is related to the depth of field.
  • the camera parameters of the image end instrument are adjustable, and the step of acquiring the configured target part of the surgical arm currently located in the new visible area includes: judging whether each target part is located in the new visible area; and, if a target part is not in the new visible area, adjusting the camera parameters based on the position of each target part to generate a new visible area covering the maximum motion area. The camera parameters include the focal length and/or the aperture; the focal length is related to the angle of view, and the aperture is related to the depth of field.
  • the step of adjusting the camera parameters to generate a new visible area covering each target part includes: acquiring a kinematic model of the operating arm and the joint variables of each joint in the operating arm; determining the position of each target part in the reference coordinate system by combining the kinematic model and the joint variables; constructing a maximum motion area according to the positions of the target parts; and adjusting the camera parameters based on the maximum motion area to generate the new visible area covering each target part.
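Constructing a coverage region from the target-part positions and deriving a camera adjustment from it can be sketched as follows. The pinhole-camera geometry (optical axis along +z of an assumed camera frame) and the function name are simplifying assumptions for the sketch.

```python
import numpy as np

def required_half_angle(targets, cam_pos):
    """Smallest view half-angle (rad) that covers every target-part
    position from cam_pos; the enclosing cone stands in for the camera
    coverage of the maximum motion area."""
    rel = np.asarray(targets, float) - np.asarray(cam_pos, float)
    radial = np.linalg.norm(rel[:, :2], axis=1)  # offset from the optical axis
    depth = rel[:, 2]                            # distance along the optical axis
    return float(np.max(np.arctan2(radial, depth)))
```

A focal length giving at least this half-angle (recall: the shorter the focal length, the wider the field of view) would cover every target part.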
  • the second characteristic part can be selected from the joints and/or end instruments of the operating arm.
  • the second characteristic part is a point on the joint and/or the end instrument, a region on the joint and/or the end instrument, and/or the entirety of the joint and/or the end instrument.
  • the present application provides a computer-readable storage medium storing a computer program configured to be loaded and executed by a processor to implement the steps of the control method according to any one of the above embodiments.
  • the present application provides a control device for a surgical robot, comprising: a memory for storing a computer program; and a processor for loading and executing the computer program, wherein the computer program is configured to be loaded by the processor and executed to implement the steps of the control method according to any one of the above embodiments.
  • the present application provides a surgical robot, comprising: operating arms including a camera arm and a surgical arm; and a controller coupled to the operating arms and configured to perform the steps of the control method according to any one of the above embodiments.
  • FIG. 1 is a schematic structural diagram of an embodiment of a surgical robot of the present application
  • FIG. 2 is a partial schematic diagram of an embodiment of the surgical robot shown in FIG. 1;
  • FIG. 3 is a flowchart of an embodiment of a control method of the surgical robot shown in FIG. 1;
  • FIG. 4 is a schematic structural diagram of an operating arm and a power part in the surgical robot shown in FIG. 1;
  • FIG. 5 is a partial schematic diagram of the surgical robot shown in FIG. 1 under a surgical state
  • FIG. 6 is a flowchart of an embodiment of a control method for a surgical robot
  • FIG. 7 is a schematic diagram of the state of an embodiment of a surgical arm in a surgical robot
  • FIGS. 11 to 13 are schematic diagrams of the configuration interface of the surgical robot in the states associated with the surgical arm shown in FIG. 7;
  • FIG. 19 is a schematic diagram of the movement of the target part of the operating arm in different regions;
  • FIGS. 20 to 22 are flowcharts of an embodiment of a control method for a surgical robot;
  • FIG. 23 is a schematic diagram of the visible area of the camera arm under the current camera parameters and the visible area after adjustment of the camera parameters;
  • FIG. 24 is a flowchart of an embodiment of a control method for a surgical robot;
  • FIG. 25 is a flowchart of another embodiment of a control method for a surgical robot;
  • FIG. 26 is a flowchart of another embodiment of a control method for a surgical robot;
  • FIG. 27 is a flowchart of another embodiment of a control method for a surgical robot;
  • FIG. 28 is a partial schematic view of the surgical robot shown in FIG. 1 in an operation state;
  • FIG. 29 is a partial schematic diagram of the surgical robot shown in FIG. 1 in a surgical state;
  • FIG. 30 is a flowchart of another embodiment of a control method for a surgical robot;
  • FIG. 31 is a partial schematic diagram of the surgical robot shown in FIG. 1 in an operation state;
  • FIG. 32 is a schematic structural diagram of another embodiment of the surgical robot of the present application;
  • FIG. 33 is a partial schematic view of the surgical robot shown in FIG. 32 in an operation state;
  • FIG. 34 is a schematic structural diagram of a control device of a surgical robot according to an embodiment of the application.
  • “distal end” and “proximal end” are used in this application as orientation terms common in the field of interventional medical devices, wherein “distal end” refers to the end away from the operator during the operation, and “proximal end” refers to the end close to the operator during the operation.
  • “first”, “second” and the like are used to refer both to an individual component and to a class of two or more components having common characteristics.
  • FIG. 1 to FIG. 2 are respectively a schematic structural diagram and a partial schematic diagram of an embodiment of the surgical robot of the present application.
  • the surgical robot includes a master console 2 and a slave operation device 3 controlled by the master console 2 .
  • the master console 2 has a motion input device 21 and a display 22. The doctor sends control commands to the slave operation device 3 by operating the motion input device 21, so that the slave operation device 3 performs the corresponding operations, and views the surgical field through the display 22.
  • the slave operating device 3 has an arm mechanism, and the arm mechanism has a mechanical arm 30 and an operating arm 31 detachably installed at the distal end of the mechanical arm 30 .
  • the robotic arm 30 includes a base and a connecting assembly that are connected in sequence, and the connecting assembly has a plurality of joint assemblies.
  • the operating arm 31 includes a connecting rod 32, a connecting assembly 33 and an end instrument 34 connected in sequence, wherein the connecting assembly 33 has a plurality of joint assemblies, and the pose of the end instrument 34 is adjusted by adjusting the joint assemblies of the operating arm 31. The end instruments 34 include an image end instrument 34A and an operation end instrument 34B.
  • the image end instrument 34A is used to capture an image within the field of view, and the display 22 is used to display the image.
  • the operation end instrument 34B is used to perform surgical operations such as cutting and suturing.
  • let the operating arm with the image end instrument 34A be the camera arm 31A, and the operating arm with the operation end instrument 34B be the surgical arm 31B.
  • the surgical robot shown in FIG. 1 is a single-hole surgical robot, and each operating arm 31 is inserted into the patient's body through the same trocar 4 installed at the distal end of the robotic arm 30 .
  • in a single-hole surgical robot, the doctor generally only controls the operating arms 31 to complete basic surgical operations.
  • the operating arm 31 of the single-hole surgical robot should have both positional degrees of freedom (i.e., positioning degrees of freedom) and attitude degrees of freedom (i.e., orientation degrees of freedom) so as to change its pose within a certain range. For example, the operating arm 31 has a horizontal movement degree of freedom x, a vertical movement degree of freedom y, a rotation degree of freedom α, a pitch degree of freedom β and a yaw degree of freedom γ; the operating arm 31 can also be driven by the distal joint of the mechanical arm 30, i.e. the power mechanism 301, to realize the feed degree of freedom z. In addition, in some embodiments, redundant degrees of freedom can be set for the operating arm 31 to enable more functions; for example, on the premise that the above six degrees of freedom are realized, one, two or even more additional degrees of freedom can be set.
  • the power mechanism 301 has a guide rail and a power part slidably arranged on the guide rail, and the operating arm 31 is detachably installed on the power part.
  • the power part provides power for the joints of the operating arm 31 to realize the remaining five degrees of freedom (i.e. [x, y, α, β, γ]).
  • the surgical robot also includes a controller.
  • the controller can be integrated into the master console 2 or into the slave operation device 3 .
  • the controller can also be independent of the master console 2 and the slave operation device 3; it can be deployed locally or in the cloud, for example.
  • the controller may be constituted by more than one processor.
  • the surgical robot further includes an input unit.
  • the input unit may be integrated into the main console 2 .
  • the input unit may also be integrated in the slave operating device 3 .
  • the input part can also be independent of the master console 2 and the slave operation device 3 .
  • the input unit may be, for example, a mouse, a keyboard, a voice input device, or a touch screen.
  • a touch screen is used as the input unit, and the touch screen may be disposed on the armrest of the main console 2 , for example.
  • the manipulator arm 31 also includes sensors that sense joint variables of the joint. These sensors include an angle sensor that senses the rotational motion of the joint assembly and a displacement sensor that senses the linear motion of the joint assembly, and an appropriate sensor can be configured according to the type of the joint.
  • the controller is coupled to these sensors, and to the input and display 22 .
  • a method for controlling a surgical robot is provided, the first application of which aims to limit the movement of the target part of the configured surgical arm within the visual area of the image end instrument of the camera arm.
  • the control method may be executed by the controller. As shown in Figure 3, the control method includes the following steps:
  • step S1 the visible area of the camera arm is acquired.
  • the viewable area of the camera arm 31A is determined by the image end instrument 34A of the camera arm 31A.
  • the step of determining the visible area includes:
  • the currently configured camera parameters of the camera arm are obtained in real time, and the visible area is obtained according to the camera parameters.
  • Camera parameters usually include field of view and depth of field.
  • the angle of view is related to the focal length, and the depth of field is related to the aperture: the smaller the focal length, the larger the field of view and the closer the visual distance; the larger the focal length, the smaller the field of view and the farther the visual distance.
  • obtaining the visible area according to the camera parameters is specifically obtaining the visible area based on the angle of view and the depth of field.
  • the visible area can be calculated using the trigonometric function formula combined with the angle of view and the depth of field.
  • the visible area can be obtained by real-time calculation, or can be obtained directly from a preset database such as a comparison table according to the angle of view and depth of field.
  • the visible area obtained may be a three-dimensional space, or a plane area within that three-dimensional space.
  • f(x, y, z) is used to represent the three-dimensional space.
  • f(x, y) represents the plane area corresponding to a given depth of field z within the three-dimensional space.
  • f(x, y, z) can be converted into f'(x, y, z) in the reference coordinate system.
  • f(x, y) can likewise be converted into f'(x, y) in the reference coordinate system, so as to obtain the position range of the visible area in the reference coordinate system.
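The region f(x, y, z) described above can be sketched as a viewing cone bounded by the angle of view and the depth-of-field range. This is an illustrative sketch only, not part of the patent disclosure; the square cross-section and the function names are assumptions:

```python
import math

def half_width(fov_deg, z):
    # Half-width of the plane cross-section f(x, y) of the viewing
    # cone at depth z, from the field-of-view angle via the
    # trigonometric relation mentioned in the text.
    return z * math.tan(math.radians(fov_deg) / 2.0)

def in_visible_area(point, fov_deg, z_near, z_far):
    # Membership test for the three-dimensional region f(x, y, z):
    # the depth must lie within the depth-of-field range
    # [z_near, z_far] and (x, y) inside the cross-section at that depth.
    x, y, z = point
    if not (z_near <= z <= z_far):
        return False
    h = half_width(fov_deg, z)
    return abs(x) <= h and abs(y) <= h
```

In practice the cross-section shape follows the camera's actual sensor geometry, and the region would be expressed in the reference coordinate system as f'(x, y, z).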
  • Step S2 acquiring the target site that the configured operating arm is currently located in the visible area.
  • the target part whose movement is to be restricted to the visible area is usually, in the initial state, already located in the visible area, that is, it should be located in the visible area at the current moment.
  • the target site may be default.
  • the target site defaults to the end instrument 34B of the surgical arm 31B, or to the joint of the surgical arm 31B connected to the end instrument 34B at the distal end, because the doctor usually pays more attention to whether the distal end of the surgical arm 31B is in the visible state.
  • the default target site can be defined in the system file for acquisition and then configured by the surgical robot system autonomously.
  • the target part can also be configured individually by the doctor.
  • the surgical arm 31B usually has more than one part that can be configured as the target part, and these parts can be predefined in a database; different types of operating arms may have different such configurable parts.
  • a storage unit 311 is installed on the abutment surface of the drive box 310 of the operating arm 31 abutting against the power part 302 of the power mechanism 301 .
  • the abutting surface is provided with a reading unit 303 matched with the storage unit 311 , and the reading unit 303 is coupled to the controller.
  • the reading unit 303 is coupled to, and communicates with, the storage unit 311; through this communication, the reading unit 303 reads the relevant information from the storage unit 311.
  • the storage unit 311 is, for example, a memory or an electronic tag.
  • the storage unit stores, for example, the type of the manipulator arm, the parts of the manipulator arm that can be configured as target parts, and the kinematic model of the manipulator arm.
  • camera parameters may be stored in the storage unit 311 of the camera arm 31A.
  • Step S3 based on the visible area, the target part is limited to move in the visible area.
  • the above steps S1 to S3 ensure that the target part of the operating arm moves under control within the visible area; in particular, when the target part includes an end instrument, this effectively prevents the end instrument from unintentionally moving outside the visible area and causing accidental injury to the patient.
  • the step S2 that is, the step of acquiring the target site of the configured operating arm currently located in the visible area includes:
  • Step S21 acquiring a site where the surgical arm can be configured as a target site.
  • the site can be read directly from a database.
  • Step S22 it is determined whether the part is currently located in the visible area.
  • step S23 the part determined to be currently located in the visible area is the first part, and the configured target part is acquired based on the first part.
  • step S23 that is, the target part originates from the first part located in the visible area.
  • FIG. 7 simply shows the positional relationship between an operating arm and the visible area.
  • the end instrument E, the joint J1, the joint J2, the joint J3 and the joint J4 are currently located within the visible area, while the joint J5 and the joint J6 are located outside the visible area; that is, the parts E and J1 to J4 can be taken as the first parts, from which target parts can be selectively configured.
  • the above-mentioned step of determining whether the part is currently located in the visible area (step S22) can be implemented in, for example, the following two ways.
  • in Embodiment 1, this includes:
  • step S221 an operation image of the visible area collected by the camera arm is acquired.
  • Step S222 identifying whether the part is located in the operation image.
  • step S222 image recognition may be performed in combination with a neural network such as a convolutional neural network.
  • step S222 whether the part is located in the operation image may be identified according to a preset strategy. Exemplarily, it can be determined whether the part is located in the operation image by identifying whether a specific point on the part is located in the operation image. Exemplarily, it can also be determined whether the part is located in the operation image by identifying whether a specific area on the part is located in the operation image. Exemplarily, it can also be determined whether the part is located in the operation image by identifying whether the overall outline of the part is located in the operation image.
  • the preset strategy can be set in advance, or selected according to an operation instruction input by the doctor during use.
  • Step S223 it is determined that the part is located in the visible area.
  • Step S224 it is determined that the part is not located in the visible area.
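The preset strategies of Embodiment 1 can be sketched as simple bounds checks, assuming the recognizer (e.g., a convolutional neural network) has already returned pixel coordinates for the part; the function names and the pixel convention are assumptions for illustration:

```python
def point_in_image(pixel, width, height):
    # Preset strategy "specific point": the part is considered located
    # in the operation image when the recognized pixel coordinates of
    # a chosen point on the part fall within the image bounds.
    u, v = pixel
    return 0 <= u < width and 0 <= v < height

def outline_in_image(outline, width, height):
    # Preset strategy "overall outline": every recognized outline
    # pixel must lie within the operation image.
    return all(point_in_image(p, width, height) for p in outline)
```

The "specific area" strategy would apply the same check to the pixels of a designated region of the part.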
  • in Embodiment 2, this includes:
  • Step S221' acquiring the kinematic model of the operating arm and the joint variables of each joint in the operating arm.
  • the joint variable refers to the joint angle of a rotary joint and/or the joint offset of a sliding (prismatic) joint.
  • the kinematic model of the operating arm can be read directly from the storage unit of the operating arm; alternatively, it can be derived from the link parameters of the operating arm.
  • Step S222' combining the kinematic model and joint variables to determine the position of the part in the reference coordinate system.
  • the reference coordinate system may be set arbitrarily; for example, it may be defined as the base coordinate system of the robot arm, or as the tool coordinate system of the robot arm. It is even feasible to define the reference coordinate system as some coordinate system outside the surgical robot.
  • the position of the determined part in the reference coordinate system may refer to the position of a point, an area or a whole (contour) on the part.
  • points on the area or on the whole outline can be sampled discretely, and the positions of these points can then be calculated.
  • Step S223' converting the visible area into a position range in the reference coordinate system.
  • Step S224' judging whether the position of the part is within the position range.
  • in step S224', when the position of the part is within the position range, go to step S225'; otherwise, go to step S226'.
  • step S224' it is also possible to judge whether the position of the part is within the position range according to the preset strategy.
  • Such a preset strategy can be preset, or selected according to an operation instruction input by a doctor during use.
  • Step S225' it is determined that the part is located in the visible area.
  • Step S226' it is determined that the part is not located in the visible area.
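Embodiment 2 (steps S221' to S226') can be sketched as follows. A planar two-link arm and a rectangular position range are simplifying assumptions for illustration; a real operating arm would use its full kinematic model and the frustum-shaped position range of the visible area:

```python
import math

def forward_positions(link_lengths, joint_angles):
    # Planar forward kinematics: position of each joint/part in the
    # reference (base) coordinate system, computed from the kinematic
    # model (link lengths) and the joint variables (angles).
    x = y = theta = 0.0
    positions = []
    for length, q in zip(link_lengths, joint_angles):
        theta += q
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        positions.append((x, y))
    return positions

def in_position_range(pos, x_range, y_range):
    # Step S224': judge whether the part position lies within the
    # position range of the visible area in the reference frame.
    return (x_range[0] <= pos[0] <= x_range[1]
            and y_range[0] <= pos[1] <= y_range[1])
```

With link lengths [1, 1] and both joints at zero, the joints sit at (1, 0) and (2, 0); a position range covering x in [0, 1.5] then classifies the first joint as visible and the tip as not.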
  • Embodiment 1 and Embodiment 2 can accurately and quickly determine whether the part is located in the visible area.
  • the above step S23 that is, the step of acquiring the configured target part based on the first part includes:
  • Step S231 generating, according to the first parts, a configuration interface containing controls associated with the first parts that can be configured as target parts.
  • the controls in the configuration interface can be, for example, text controls, option controls, such as drop-down list controls, button controls, and other forms.
  • the configuration interface may be as shown in FIG. 11, in which the controls are button controls; button controls for the parts E and J1 to J4 are generated, and FIG. 11 also shows that the parts E and J1 have been configured as target parts.
  • step S231 can also be implemented as follows: first, a model image of the operating arm is obtained; then, a configuration interface containing the model image is generated, and, at the positions on the model image corresponding to the first parts, controls associated with the first parts that can be configured as target parts are generated.
  • these controls are icon controls, and the icon may be, for example, a light spot, an aperture, etc., which is not particularly limited here.
  • referring to FIGS. 12 and 13: FIG. 12 shows only the first parts E and J1 to J4 of the surgical arm that can currently be configured as target parts, corresponding to the state shown in FIG. 7.
  • FIG. 13 likewise shows the first parts E and J1 to J4 that can currently be configured as target parts in the state shown in FIG. 7; unlike FIG. 12, FIG. 13 additionally shows the other parts J5 and J6 of the surgical arm, which cannot be configured as target parts.
  • the model image of the operating arm can usually be pre-stored in a database for direct recall during use, and the database can also be stored in a storage unit of the operating arm, for example.
  • the model image may be a projection image or a computer model image, and the model image at least needs to be able to reflect the first feature part that can be configured as the target part so as to be conveniently provided to the doctor for configuration.
  • the model image can be associated with the current motion state of the operating arm; it can also be associated with the initial motion state of the operating arm (e.g., at the zero position, such as straightened into a line), as shown in FIG. 12 and FIG. 13.
  • following the above step S21, that is, the step of acquiring the first parts of the operating arm that can be configured as target parts, the first parts expected to be configured as target parts can then be configured one by one.
  • when the configuration interface contains a model image and the model image has controls corresponding to the first parts, these controls can, for example, be clicked one by one to configure the associated first parts as target parts.
  • when the configuration interface contains a model image and the model image reflects the first parts that can be configured as target parts (the first parts here optionally include only the above-mentioned parts located in the visible area, or all parts of the surgical arm, because in some cases, for example when the visible area can cover all parts, all of them may be configured as target parts), a closed graphic drawn by the doctor through the input part and covering at least part of the model image can be obtained, and all first parts contained (i.e., enclosed) in the graphic are then used as target parts. Such a design can improve the efficiency of configuring target parts. In FIGS. 12 and 13, the large circle covering the parts E, J1 and J2 represents the closed graphic drawn by the doctor; the system analyzes the positional relationship between the graphic and the parts, and then configures the parts E, J1 and J2 as target parts.
  • Step S232 acquiring the target part configured through the controls of the configuration interface.
  • the first parts of the surgical arm that can be configured as target parts may include joints and/or end instruments; if these joints and/or end instruments are located in the visible area, they can all be regarded as first parts available for configuration.
  • the above-mentioned step S3, that is, the step of restricting the movement of the target part in the visible area based on the visible area includes:
  • Step S31 judging whether the target part reaches the boundary of the visible area.
  • step S31 when it is determined that the target part reaches the boundary of the visible area, the process proceeds to step S32; otherwise, the process proceeds to step S31.
  • Step S32 it is judged whether the movement direction of the target part at the next moment is toward the outside of the visible area.
  • step S32 when it is determined that the moving direction of the target part at the next moment is toward the outside of the visible area, go to step S33; otherwise, proceed to step S31.
  • Step S33 at least prohibit the target part from moving out of the visible area.
  • if in step S31 the target part has not reached the boundary of the visible area, and/or in step S32 the movement direction of the target part at the next moment is not toward the outside of the visible area, no special treatment is performed, that is, the operating arm is still allowed to move normally.
  • in step S33, there are, for example, the following two strategies for at least prohibiting the target part from moving toward the outside of the visible area.
  • one is to prohibit only the target part from moving out of the visible area; the other is to prohibit the movement of the entire surgical arm.
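The flow of steps S31 to S33 (strategy one, applied to the target part only) can be sketched as follows; the rectangular position range, the boundary tolerance and the function names are illustrative assumptions:

```python
def inside(pos, region):
    # region = ((xmin, xmax), (ymin, ymax)) is the position range of
    # the visible area in the reference frame.
    (xmin, xmax), (ymin, ymax) = region
    return xmin <= pos[0] <= xmax and ymin <= pos[1] <= ymax

def at_boundary(pos, region, tol=1e-3):
    # Step S31: the target part has reached the boundary when it is
    # inside the region but within `tol` of one of its faces.
    (xmin, xmax), (ymin, ymax) = region
    x, y = pos
    return inside(pos, region) and min(x - xmin, xmax - x,
                                       y - ymin, ymax - y) <= tol

def next_command(current, target, region):
    # Steps S32/S33: if the part is at the boundary and the commanded
    # position for the next moment lies outside the visible area, the
    # outward motion is prohibited by holding the current position;
    # otherwise the command passes through unchanged.
    if at_boundary(current, region) and not inside(target, region):
        return current
    return target
```

Strategy two would instead freeze the joint commands of the entire surgical arm under the same condition.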
  • the strategy to be executed can be preset by default, or can be customized and initialized according to the control instructions input by the doctor every time the surgical robot is powered on.
  • the target site may only be configured as a distal joint or end instrument.
  • step S31 that is, determining whether the target part reaches the boundary of the visible area, can be implemented in the following two ways, for example.
  • this step S31 may include:
  • step S311 an operation image of the visible area collected by the camera arm is acquired.
  • Step S312 identifying whether the target part reaches the edge of the operation image.
  • step S312 if the target part reaches the edge of the operation image, the process proceeds to step S313.
  • it can be determined whether the target part is located in the operation image by identifying whether a specific point on the target part is located in the operation image.
  • it can also be determined whether the target part is located in the operation image by identifying whether a specific area on the target part is located in the operation image.
  • it can also be determined whether the target part is located in the operation image by identifying whether the overall outline of the target part is located in the operation image.
  • Step S313 it is determined that the target part reaches the boundary of the visible area.
  • Step S314 it is determined that the target part does not reach the boundary of the visible area.
  • this step S31 may include:
  • Step S311' acquiring the kinematic model of the operating arm and the joint variables of each joint in the operating arm.
  • Step S312' combining the kinematic model and joint variables to determine the position of the target part in the reference coordinate system.
  • Step S313' convert the visible area into a position range in the reference coordinate system.
  • Step S314' judging whether the position of the target part reaches the boundary of the position range.
  • step S314' if it is judged that the target part reaches the boundary of the position range, go to step S315'; otherwise, go to step S316'.
  • Step S315' it is determined that the target part reaches the boundary of the visible area.
  • Step S316' it is determined that the target part does not reach the boundary of the visible area.
  • step S32 that is, judging whether the movement direction of the target part at the next moment is toward the outside of the visible area includes the following steps:
  • Step S321 acquiring the current position of the target part at the current moment when the boundary of the visible area is reached.
  • the current position can be acquired through the following steps.
  • This step includes: first, acquiring the kinematic model of the operating arm and the joint variables of each joint in the operating arm at the current moment; then, calculating the current position of the target part at the current moment according to the kinematic model and each joint variable.
  • Step S322 acquiring the target position of the target part at the next moment.
  • the surgical robot usually includes a motion input part, and the motion input part is used for inputting control instructions for controlling the motion of the operating arm, including, for example, the camera arm and the surgical arm.
  • the target position can be acquired through the following steps: acquiring the target pose information input by the motion input part; calculating the joint variables of each joint in the operating arm according to the target pose information; acquiring the kinematic model of the operating arm; and calculating, according to the kinematic model and the joint variables, the target position of the target part at the next moment.
  • Step S323 according to the target position and the current position, determine whether the movement direction of the target part at the next moment is toward the outside of the visible area.
  • when the target position lies outside the visible area relative to the current position, it is determined that the movement direction of the target part is toward the outside of the visible area.
  • otherwise, it is determined that the movement direction of the target part is not toward the outside of the visible area.
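A minimal sketch of the direction judgement in step S323, assuming a rectangular position range in the reference frame (the actual visible area is generally frustum-shaped); the tolerance and function name are assumptions:

```python
def moving_outward(current, target, region, tol=1e-3):
    # Step S323 (sketch): compare the commanded displacement with the
    # boundary faces that the current position touches; the motion is
    # "toward the outside of the visible area" when the displacement
    # has a positive component along the outward normal of a touched
    # face.
    (xmin, xmax), (ymin, ymax) = region
    cx, cy = current
    dx, dy = target[0] - cx, target[1] - cy
    return ((cx - xmin <= tol and dx < 0)
            or (xmax - cx <= tol and dx > 0)
            or (cy - ymin <= tol and dy < 0)
            or (ymax - cy <= tol and dy > 0))
```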
  • the above-mentioned step S3, that is, the step of restricting the movement of the target part in the visible area based on the visible area may further include:
  • step S34 the configured safe movement area located in the visible area is acquired.
  • the area within the safe motion area may be the first area, and the area outside the safe motion area and within the visible area may be the second area.
  • Step S35 changing the movement speed of the target part according to changes in the position and movement direction of the target part in the first area and the second area.
  • step S35 the step of changing the movement speed of the target part according to the change of the position and movement direction of the target part in the first area and the second area can be specifically implemented as follows:
  • the second area includes an inner boundary and an outer boundary, the inner boundary of the second area is the same as the boundary of the first area, and both refer to the boundary of the safe movement area, and the outer boundary of the second area refers to the boundary of the visible area. As shown in Figure 19, point A is located in the first area, point B is located in the second area, and point C is located outside the second area.
  • suppose the target part moves from point A through point B toward point C; this movement can nominally be divided into three stages: a first stage from point A to the boundary of the first area, a second stage from the boundary of the first area to the outer boundary of the second area, and a third stage from the outer boundary of the second area to point C.
  • because the target part is restricted from moving out of the visible area, the whole movement process actually includes only two stages, the first stage and the second stage.
  • conversely, the entire movement process from point C through point B to point A essentially includes only two stages, namely a first stage from the outer boundary of the second area to the boundary of the first area, and a second stage from the boundary of the first area to point A. If the movement speed of the first stage is v1 and that of the second stage is v2, then v1 < v2.
  • the movement speed of the target part in the corresponding direction is positively correlated with the distance between the target part and the outer boundary of the second area: when the distance between the target part and the outer boundary of the second area is small, the movement speed is small; when the distance is large, the movement speed is large.
  • when the target part reaches the boundary of the visible area and its movement direction is toward the outside of the visible area, its movement speed is substantially equal to 0; and when the target part reaches the boundary of the safe movement area with its movement direction pointing away from the boundary of the visible area, its movement speed recovers to substantially normal.
  • the movement speed of the target part in the corresponding direction has a linear positive correlation with the distance between the target part and the outer boundary of the second area.
  • alternatively, the movement speed of the target part in the corresponding direction has an exponential positive correlation with the distance between the target part and the outer boundary of the second area. Either design enables the doctor to noticeably feel that the target part is moving from the inner boundary of the second area toward its outer boundary.
  • the target site may move at a first constant speed in the first region and move at a second constant speed in the second region.
  • the first constant speed is greater than the second constant speed.
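The positive correlation between speed and distance to the outer boundary can be sketched as a scale factor applied to the commanded speed; the normalisation to [0, 1] and the function name are assumptions for illustration:

```python
import math

def speed_scale(dist_to_outer, band_width, mode="linear"):
    # Scale factor for the commanded speed while the target part moves
    # outward within the second area.  dist_to_outer: distance from
    # the target part to the outer boundary of the second area (the
    # visible-area boundary); band_width: width of the second area.
    # Returns ~0 at the visible-area boundary and 1 at the safe-area
    # boundary, increasing monotonically (positive correlation).
    t = max(0.0, min(1.0, dist_to_outer / band_width))
    if mode == "linear":
        return t
    # exponential positive correlation, normalised to [0, 1]
    return (math.exp(t) - 1.0) / (math.e - 1.0)
```

The constant-speed variant described above would instead return one fixed factor inside the first area and a smaller fixed factor inside the second area.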
  • the change of the movement speed of the target part in different regions and/or different movement directions is usually based on the change of the overall movement speed of the surgical arm.
  • the movement speed of the target site can be changed by changing the proportional value of the movement speed of the surgical arm.
  • the proportional value is related to the region where the target part is located and the direction of movement.
  • the change of the movement speed of the target part in different regions and/or different movement directions may not be changed based on the change of the overall movement speed of the surgical arm.
  • when the degrees of freedom of the surgical arm are sufficiently redundant compared with the degrees of freedom required by the intended task, different movement speeds for the target part in different areas and/or different movement directions can be obtained by calculation.
  • the motion input part is a mechanical motion input part, which has a plurality of joint components, sensors coupled to the controller for sensing the state of each joint component, and drive motors coupled to the controller for driving each joint component to move.
  • the above-mentioned step S3 that is, the step of limiting the movement of the target part in the visible area based on the visible area may also include:
  • Step S34' acquiring the configured safe movement area located in the visible area.
  • step S34' the visible area and the safe motion area are also divided into the first area and the second area described above.
  • Step S35' changing the resistance of the motion input part according to the changes of the position and motion direction of the target part in the first area and the second area.
  • step S35' mainly causes the drive motor in the associated direction to generate a reverse torque according to the resistance.
  • step S35' the step of changing the resistance of the motion input part according to the change of the position and motion direction of the target part in the first area and the second area can be specifically implemented as follows:
  • the resistance of the motion input part moving in the corresponding direction is negatively correlated with the distance between the target part and the outer boundary of the second region.
  • when the target part reaches the boundary of the visible area and its movement direction is toward the outside of the visible area, the resistance felt by the doctor operating the motion input part becomes extremely large, so that the input part can hardly be moved and the movement speed of the target part approaches 0; when the target part reaches the boundary of the safe movement area with its movement direction pointing away from the boundary of the visible area, its movement speed returns to normal.
  • the resistance of the motion input part moving in the corresponding direction is linearly negatively correlated with the distance between the target part and the outer boundary of the second region.
  • the resistance of the motion input part moving in the corresponding direction is exponentially negatively correlated with the distance between the target site and the outer boundary of the second region.
  • alternatively, when the target part is located in the first area, the resistance of the motion input part moving in the corresponding direction is a first constant resistance; when the target part is located in the second area, the resistance of the motion input part moving in the corresponding direction is a second constant resistance.
  • the second constant resistance is greater than the first constant resistance.
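The negatively correlated resistance of step S35' can be sketched as a reverse torque command for the drive motor in the associated direction; the normalisation, the maximum torque parameter k_max and the function name are assumptions for illustration:

```python
import math

def resistance_torque(dist_to_outer, band_width, k_max, mode="linear"):
    # Reverse torque commanded to the drive motor of the motion input
    # part in the associated direction.  Negatively correlated with
    # the distance between the target part and the outer boundary of
    # the second area: near-maximal resistance at the visible-area
    # boundary (so the input part can hardly be moved), near zero at
    # the safe-area boundary.
    t = max(0.0, min(1.0, dist_to_outer / band_width))
    if mode == "linear":
        return k_max * (1.0 - t)
    # exponential negative correlation, normalised to [0, k_max]
    return k_max * (math.exp(1.0 - t) - 1.0) / (math.e - 1.0)
```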
  • the image end instrument of the camera arm, that is, the camera, has adjustable camera parameters, such as an adjustable focal length and/or an adjustable aperture; the focal length and the aperture are intrinsic parameters of the camera.
  • the above step S3 that is, before the step of limiting the movement of the target part in the visible area based on the visible area, may also include:
  • Step S301 acquiring the configured enlarged motion area located outside the visible area.
  • the enlarged motion area is at least partially outside the viewable area.
  • the visible area is completely within the enlarged motion area; for another example, the visible area and the enlarged motion area are independent of each other, that is, there is no intersection; for another example, part of the visible area is within the enlarged motion area.
  • the visible area refers to the area visible before the parameters of the camera are readjusted, and the new visible area refers to the area visible after the parameters of the camera are readjusted.
  • Step S302 adjusting the parameters of the camera based on the visible area and the enlarged motion area to generate a new visible area to cover the visible area and the enlarged motion area.
  • the parameters of the camera include focal length and/or aperture, the focal length is related to the angle of view, and the aperture is related to the depth of field.
  • the above step S3 is specifically as follows: based on the new visible area, the target part is limited to move in the new visible area. Through the steps S301 and S302, the movement range of the target part in the operating arm can be expanded.
  • the safe motion area and the enlarged motion area can be configured based on the visible area at the same time, so that the doctor can operate the surgical arm to move in a larger and safe range of motion.
  • the above-mentioned safe movement area and/or expanded movement area may be set by default in the system.
  • the safe motion area is automatically obtained by the system by setting the zoom factor on the basis of the current visible area
  • the enlarged motion area is automatically obtained by the system by setting the magnification factor on the basis of the current visible area.
  • zoom factors and/or magnification factors can be pre-stored for recall in a database, which is usually stored in a storage unit of the camera arm.
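Deriving the safe movement area and the enlarged motion area from the visible area by a factor can be sketched as follows; the rectangular cross-section and the function name are assumptions for illustration:

```python
def scale_region(region, factor):
    # Derive a safe movement area (factor < 1, "zoom factor") or an
    # enlarged motion area (factor > 1, "magnification factor") by
    # scaling the visible area's cross-section about its centre.
    (xmin, xmax), (ymin, ymax) = region
    cx, cy = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
    hx = (xmax - xmin) / 2.0 * factor
    hy = (ymax - ymin) / 2.0 * factor
    return ((cx - hx, cx + hx), (cy - hy, cy + hy))
```

For example, a factor of 0.5 yields a safe area occupying the central half of the visible area, while a factor of 1.5 yields an enlarged area extending beyond it.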
  • the above-mentioned safe motion area and/or expanded motion area may also be customized by the doctor.
  • a basic boundary image of the plane range f(xi, yi) of the visible area corresponding to a given depth of field zi is generated on the display, and the boundary image corresponding to the safe movement area, drawn by the doctor within the basic boundary image through the input unit, is obtained.
  • These boundary images are usually closed images.
  • the corresponding safe movement area and/or enlarged movement area is obtained by calculation according to the relationship, such as the relative position, between the safe boundary image and/or the enlarged boundary image and the basic boundary image.
  • the boundary image drawn by the doctor is usually a regular image, such as a circular image, a rectangular image, and an elliptical image.
  • the drawn border image can be converted into the closest regular image through processing.
  • the above step S2 that is, the step of acquiring the target site where the configured operating arm is currently located in the visible area may include:
  • Step S24 acquiring a target site configured based on a site where the surgical arm can be configured as a target site.
  • Step S25 determining whether each target part is located in the visible area; if a currently configured target part is not in the visible area, the parameters of the camera are adjusted based on the positions of the target parts to generate a new visible area that covers each target part.
  • the adjusted parameters refer to intrinsic parameters of the camera such as the focal length and/or the aperture, but do not include extrinsic parameters of the camera such as its position and attitude. If the current parameters of the camera have reached their adjustable limit, or the visible area obtained at the adjustable limit still cannot cover all the target parts, the doctor can be reminded to adjust the operating arm before operating it, so that the target parts move into the visible area, or to adjust the camera arm so that the visible area can cover each target part.
  • a corresponding appropriate visible area can be generated according to the selection of the target site, so as to facilitate the doctor's subsequent surgical operations.
  • the dotted circle represents the visible area before adjustment, in which only the parts E and J1 to J4 can be seen. If the doctor needs to configure the parts E and J1 to J5 all as target parts, the camera parameters can be adaptively adjusted according to the positions of these target parts, so that parts that were not within the visible area before adjustment fall within the visible area after adjustment; steps such as step S1 to step S3 are then configured and executed.
  • step S25, i.e., adjusting the parameters of the camera based on the position of each target site to generate a new visible area covering each target site, can be implemented by the following steps:
  • Step S261: acquiring the kinematic model of the surgical arm and the joint variables of each joint in the surgical arm.
  • Step S262: determining the position of each target site in the reference coordinate system by combining the kinematic model and the joint variables.
  • Step S263: constructing the maximum motion area according to the positions of the target sites.
  • Step S264: adjusting the parameters of the camera based on the maximum motion area to generate a new visible area covering the maximum motion area.
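Steps S261 to S264 above can be sketched as follows. This is a minimal illustration only: it assumes a standard Denavit-Hartenberg joint parameterization (the patent does not specify one) and approximates the maximum motion area as an axis-aligned bounding box of the target site positions.

```python
import numpy as np

def forward_kinematics(dh_params, joint_vars):
    """Forward kinematics: returns the position of each joint/site in the
    reference frame, given DH rows (a, alpha, d, theta0) and joint variables."""
    T = np.eye(4)
    positions = []
    for (a, alpha, d, theta0), q in zip(dh_params, joint_vars):
        theta = theta0 + q  # revolute joint: variable adds to theta
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        A = np.array([
            [ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0],
        ])
        T = T @ A
        positions.append(T[:3, 3].copy())
    return positions

def maximum_motion_area(target_positions):
    """Step S263 approximation: axis-aligned bounding box of the target sites."""
    pts = np.asarray(target_positions, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)
```

The camera parameters (step S264) would then be chosen so that the visible area covers the returned bounding box.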
  • all target sites can be controlled, based on the visible area, to move only within the visible area;
  • alternatively, movement of the surgical arm can be controlled according to the proportion of target sites remaining within the visible area. For example, when the proportion reaches a threshold (e.g., 50%), the surgical arm can be moved freely without excessive restriction, while its movement can be inhibited when the proportion falls below the threshold.
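The proportional control described above can be sketched as follows. This is a hypothetical illustration: it assumes the below-threshold "inhibition" is a simple scale factor applied to the commanded motion, which the patent does not specify.

```python
def motion_scale(in_view, threshold=0.5):
    """Scale factor for commanded surgical-arm motion based on the proportion
    of target sites currently inside the visible area: free motion (1.0) at or
    above the threshold, progressively damped motion below it."""
    ratio = sum(in_view) / len(in_view)  # in_view: one bool per target site
    if ratio >= threshold:
        return 1.0                # move freely, no extra restriction
    return ratio / threshold      # inhibit motion as fewer sites stay visible
```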
  • another control method of a surgical robot is also provided, whose second application purpose is to adjust the image end instrument of the camera arm so that the follower parts of the configured surgical arm fall within the visible area of the image end instrument.
  • the control method can also be performed by the controller.
  • the control method of the second application purpose can be used independently of that of the first application purpose; alternatively, the two can be used in conjunction, for example interspersed with each other, or with one used before or after the other.
  • the control method includes:
  • Step S5: acquiring the follower parts based on the part configuration.
  • the surgical arm has one or more parts that can be configured as follower parts.
  • the sites that can be configured as follow sites can be identical, partially identical, or completely different from the aforementioned sites that can be configured as target sites.
  • a site that can be configured as a follower site is identical to the previously described site that can be configured as a target site.
  • the site is an operating end instrument and/or a joint of the surgical arm, so the follower part can likewise be configured as the operating end instrument and/or a joint, and one or more follower parts can be configured.
  • the follower part may be configured as the operating end instrument only.
  • the part for configuring as the follower part may be the first characteristic part of the surgical arm, and the part for configuring as the target part may be the second characteristic part of the surgical arm.
  • Step S6 acquiring the current visible area of the image end device in the reference coordinate system.
  • Step S7 judging whether the following part falls into the current visible area.
  • in step S7, if the follower part does not fall within the current visible area, go to step S8; otherwise, return to step S5.
  • Step S8 adjusting the image end device to obtain a new viewable area of the image end device in the reference coordinate system so that at least part of the follower part falls into the new viewable area.
  • a new visible area of the image end instrument in the reference coordinate system can be obtained by adjusting the image end instrument, so that a follower part that does not fall within the current visible area falls into the new visible area.
  • the above step S8, that is, the step of adjusting the image end device to obtain a new visible area of the image end device in the reference coordinate system so that the follower part falls into the new visible area includes:
  • Step S811 acquiring the union area of all visible areas of the image end device in the reference coordinate system.
  • the union area is the union of all visible areas obtainable by adjusting the intrinsic parameters (camera parameters) and extrinsic parameters (pose parameters) of the image end instrument, and it reflects the full field of view the image end instrument can possibly cover.
  • the union region can be obtained by real-time computation.
  • the union region may be pre-stored in a storage unit of the camera arm.
  • Step S812: judging whether the follower parts can fall into the union area.
  • in step S812, if all the follower parts can fall into the union area, go to step S813; if only some of the follower parts can fall into the union area, go to step S814; and if none of the follower parts can fall into the union area, end the process.
  • Step S813: adjusting the image end instrument to obtain a new visible area of the image end instrument in the reference coordinate system so that all the follower parts fall into the new visible area.
  • Step S814: configuring, according to the operation instruction, at least some of the follower parts that can fall into the union area as first follower parts, and then adjusting the image end instrument to obtain a new visible area of the image end instrument in the reference coordinate system so that the first follower parts fall into the new visible area.
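The three-way branch of steps S812 to S814 can be sketched as follows. The union-area membership test is left as an abstract predicate, since the patent computes the union area either in real time or from values pre-stored in the camera arm; the dict layout and return codes are illustrative assumptions.

```python
def select_followers(followers, in_union_area):
    """Steps S812-S814: classify follower parts against the union area.

    followers: list of dicts, each with a 'pos' entry (position in the
    reference frame).
    in_union_area: predicate pos -> bool (union-area membership test).
    Returns (action, selected): 'all' -> adjust the view to cover every
    follower (S813); 'partial' -> configure the selected ones as first
    follower parts (S814); 'none' -> end the process.
    """
    inside = [f for f in followers if in_union_area(f["pos"])]
    if len(inside) == len(followers):
        return "all", inside
    if inside:
        return "partial", inside
    return "none", []
```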
  • the process can also be ended without adjusting the image end device, which can be configured according to requirements.
  • the operation instruction includes a first operation instruction
  • the first operation instruction is associated with all or part of the following parts that can fall into the union region.
  • at least part of the following parts that can fall into the union region are configured as the first following parts, specifically:
  • the surgical arm includes a first priority surgical arm and a second priority surgical arm, each surgical arm is configured with a follower part, and generally, the first priority can be set higher than the second priority.
  • the operation instruction includes a second operation instruction, and the second operation instruction is associated with a follow-up part of the surgical arm of the first priority, and all or part of it can fall into the union region.
  • configuring at least part of the following parts that can fall into the union region into the first following parts according to the operation instruction includes:
  • the surgical arm of the first priority refers to the surgical arm in a moving state
  • the surgical arm of the second priority refers to the surgical arm in a stationary state
  • the controlled state of the surgical arm can be defined as a motion state
  • the uncontrolled state of the surgical arm can be defined as a stationary state.
  • the state in which any part of the operating arm is moving can be defined as a motion state
  • the state in which every part of the operating arm is at rest can be defined as a stationary state.
  • the motion state can also be defined to include the above two meanings
  • the static state can be defined to include the above two corresponding meanings.
  • the operating arm of the first priority may also refer to an operating arm whose operating end instrument moves at a speed reaching a preset threshold
  • while the operating arm of the second priority refers to an operating arm whose operating end instrument moves at a speed below the preset threshold.
  • the image end instrument can more easily follow the corresponding operating arm and its following part. Further, when used in conjunction with the control method for the purpose of the first application, it is helpful to reduce the restriction on the movement range of the corresponding operating arm.
  • step S7, i.e., the step of judging whether the follower part falls within the current visible area, can be implemented in either of two ways.
  • the step S7 includes: acquiring an operation image of the visible area collected by the camera arm; and determining whether the follower part falls within the current visible area by identifying whether the follower part is located in the operation image.
  • the basic principle of realizing this embodiment is roughly as shown in FIG. 8 corresponding to the control method of the first application.
  • the step S7 includes: acquiring the kinematic model of the operating arm and the joint variables of each joint in the operating arm; determining the position of the follower part in the reference coordinate system based on the kinematic model and the joint variables; converting the current visible area into a position range in the reference coordinate system; and judging whether the position of the follower part is within the position range. When the follower part is within the position range, it is judged that the follower part is located within the current visible area.
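The second implementation of step S7 can be sketched as follows. This is a hypothetical model: it assumes the visible area is a symmetric cone of view with near and far depth limits, and `T_cam_ref` denotes the transform from the reference frame into camera coordinates; both the frustum shape and the names are illustrative assumptions.

```python
import numpy as np

def in_visible_area(p_ref, T_cam_ref, fov_deg, near, far):
    """True if a point given in the reference frame lies inside a symmetric
    conical field of view (full angle fov_deg) with depth limits [near, far]."""
    p = T_cam_ref @ np.append(np.asarray(p_ref, dtype=float), 1.0)
    x, y, z = p[:3]
    if not (near <= z <= far):  # outside the depth-of-field range
        return False
    half = np.radians(fov_deg / 2.0)
    return bool(np.hypot(x, y) <= z * np.tan(half))
```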
  • the basic principle for realizing this embodiment is roughly as shown in FIG. 9 corresponding to the control method of the first application.
  • the new viewable area may be obtained by adjusting only the camera parameters of the image end instrument.
  • the above-mentioned step S8, i.e., the step of adjusting the image end device to obtain a new visible area of the image end device in the reference coordinate system so that the follower part falls into the new visible area includes:
  • Step S81: acquiring the current position of the follower part in the reference coordinate system.
  • the current position can also be calculated and obtained according to the kinematic model of the operating arm and the joint variables of each joint.
  • Step S82 according to the current position, adjust the camera parameters of the image end device to change autonomously to obtain a new visible area of the image end device in the reference coordinate system, so that the following part falls into the new visible area.
  • the camera parameters that can be adjusted independently include focal length and/or aperture.
  • the focal length can be decreased to increase the field of view, so as to obtain an enlarged new visible area so that the follower part falls into the new visible area.
  • the camera parameters can be changed gradually, or adjusted to the target values in a single step.
  • the new camera parameters can be searched or calculated according to the current position of the follower part, and then the controller adjusts the image end device directly from the current camera parameters to the new camera parameters.
  • while the pose of the image end instrument is maintained, reducing the focal length of the image end instrument changes the field-of-view angle from θ1 to θ2, where θ1 < θ2, thereby expanding the visible area so that G1 to G7 can all fall into the new visible area.
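The relationship between the focal-length reduction and the field-of-view angle θ can be illustrated by computing the smallest full view angle that covers a set of follower positions expressed in camera coordinates (Z along the optical axis). This is a sketch under that assumption, not the patent's own formula.

```python
import numpy as np

def required_fov(points_cam):
    """Smallest full field-of-view angle (degrees) whose cone, centred on the
    optical (Z) axis, contains every point given in camera coordinates."""
    pts = np.asarray(points_cam, dtype=float)
    half_angles = np.arctan2(np.hypot(pts[:, 0], pts[:, 1]), pts[:, 2])
    return float(np.degrees(2.0 * half_angles.max()))
```

With θ1 the current view angle, a follower outside the cone yields `required_fov` greater than θ1, and the focal length would be reduced until the view angle reaches θ2 = `required_fov`.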
  • the new visible area can also be obtained by adjusting only the position and/or attitude (i.e., the pose) of the image end instrument.
  • the above step S8, that is, the step of adjusting the image end device to obtain a new visible area of the image end device in the reference coordinate system so that the follower part falls into the new visible area includes:
  • Step S81' obtaining the current position of the follower part in the reference coordinate system.
  • Step S82' acquiring the task degrees of freedom configured according to the effective degrees of freedom of the image end device.
  • the effective degree of freedom refers to the allowed spatial degree of freedom of the image end device, and the task degree of freedom is included in the spatial degree of freedom.
  • Step S83' adjusting the autonomous motion of the image end device according to the current position and the degree of freedom of the task to obtain a new visible area of the image end device in the reference coordinate system so that the following part falls into the new visible area.
  • as shown in FIG. 31, while the camera parameters of the image end instrument are maintained, changing the pose of the image end instrument allows G1 to G7 to all fall within the new visible area.
  • priority adjustment objects can be set, such as those defined by system files or configured by doctors through the configuration interface.
  • the camera parameters of the end device for adjusting the image are set as the first priority adjustment object
  • the pose of the end device for adjusting the image is set as the second priority adjustment object.
  • the pose of the end device for adjusting the image is set as the first priority adjustment object
  • the camera parameter of the end device for adjusting the image is set as the second priority adjustment object.
  • the effective degree of freedom of the image end instrument is determined by the configuration of the arm that drives the motion of the image end instrument.
  • its effective degree of freedom can be determined only by the configuration of the camera arm itself;
  • its effective degrees of freedom can be determined by the overall configuration of the camera arm and the manipulator arm connected to the camera arm;
  • its effective degrees of freedom can also be determined in combination with the overall configuration of the corresponding robotic arm.
  • the image end instrument includes positional degrees of freedom and attitude degrees of freedom.
  • the task degrees of freedom may be selected from the positional degrees of freedom that are aligned with the direction of the depth of field.
  • step S83', i.e., adjusting the autonomous motion of the image end instrument to obtain a new visible area of the image end instrument in the reference coordinate system so that the follower part falls into the new visible area, is specifically: adjusting the image end instrument to extend or retract along the degree of freedom in the depth-of-field direction, so as to obtain the new visible area of the image end instrument in the reference coordinate system and make the follower part fall into the new visible area.
  • the task degrees of freedom are selected from the planar degrees of freedom, i.e., the positional degrees of freedom in directions perpendicular to the depth-of-field direction. Suppose the positional degrees of freedom include the degrees of freedom in the X-axis, Y-axis, and Z-axis directions of a Cartesian coordinate system.
  • the degree of freedom in the Z-axis direction is defined as the degree of freedom in the direction of the depth of field, and the degree of freedom in the X-axis direction and the degree of freedom in the Y-axis direction are the plane degrees of freedom in the direction perpendicular to the depth of field.
  • step S83', i.e., adjusting the autonomous motion of the image end instrument to generate a new visible area so that the follower part falls into it, is specifically: adjusting the image end instrument to translate along the planar degrees of freedom to obtain the new visible area of the image end instrument in the reference coordinate system, so that the follower part falls into the new visible area.
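Translation along the planar degrees of freedom can be sketched as re-centring the follower parts on the optical axis. This is a minimal illustration assuming the follower positions are given in camera coordinates with Z as the depth-of-field direction.

```python
import numpy as np

def plane_translation(follower_positions_cam):
    """X-Y translation (camera frame) that re-centres the followers' centroid
    on the optical (Z) axis, leaving depth unchanged."""
    centroid = np.mean(np.asarray(follower_positions_cam, dtype=float), axis=0)
    return np.array([-centroid[0], -centroid[1], 0.0])
```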
  • the task degrees of freedom are selected from the attitude degrees of freedom.
  • the above step S83', i.e., adjusting the autonomous motion of the image end instrument to generate a new visible area so that the follower part falls into it, is specifically: adjusting the image end instrument so that its position remains unchanged while its attitude changes, thereby generating a new visible area into which the follower part falls.
  • after the above step S8, i.e., the step of adjusting the image end instrument to obtain a new visible area of the image end instrument in the reference coordinate system so that the follower part falls into the new visible area, the method may further include:
  • Step S9 end the adjustment of the image end device and keep the obtained new visible area.
  • step S9 is beneficial to switch to the control method of the first application, and then control the operating arm in the fixed visual area.
  • the step S9 that is, the step of ending the adjustment of the image end device, includes:
  • the interrupt command can usually be input through the input unit.
  • the surgical robot of the above-described embodiments may also be a multi-port surgical robot.
  • the difference between the multi-port surgical robot and the single-port surgical robot lies mainly in the slave operating device.
  • Figure 32 illustrates the slave operating device of a multi-port surgical robot.
  • the robotic arm of the slave operating device in the multi-port surgical robot has a main arm 110, an adjustment arm 120, and a manipulator 130 connected in sequence. There are two or more adjustment arms 120 and manipulators 130, for example four of each.
  • the distal end of the main arm 110 has an orientation platform, the proximal end of the adjustment arm 120 is connected to the orientation platform, and the proximal end of the manipulator 130 is connected to the distal end of the adjustment arm 120 .
  • the manipulator 130 is used for detachably connecting the operating arm 150, and the manipulator 130 has a plurality of joint components.
  • different operating arms 150 are inserted into the patient through different trocars.
  • the operating arm 150 of the multi-port surgical robot generally has fewer degrees of freedom.
  • the operating arm 150 only has attitude degrees of freedom (i.e., degrees of freedom in orientation); a change of its attitude generally also affects its position, but because the effect is small it can usually be ignored.
  • positioning of the operating arm 150 is therefore often assisted by the manipulator 130.
  • the operation arm 150 also includes a camera arm 150a having an image end instrument and a surgical arm 150b having an operation end instrument.
  • the operating end instrument of a surgical arm 150b far from the camera arm 150a falls within the visible area of the image end instrument of the camera arm 150a.
  • the image end instrument of the camera arm 150a can be adjusted so that at least part of a surgical arm 150b close to the camera arm 150a, such as its operating end instrument, falls within the visible area after adjustment of the image end instrument. Further, after at least part of the surgical arm 150b close to the camera arm 150a is made to fall within the adjusted visible area, the control method of the first application purpose of the present application can also be used to limit the part falling within the adjusted visible area, such as the operating end instrument, to move within that visible area.
  • the above control methods can also be applied to other types of multi-port surgical robots.
  • other types of multi-port surgical robots include, for example, those with a plurality of robotic arms corresponding to the same number of operating arms, each operating arm being mounted on one of the robotic arms, with the robotic arms independent of one another.
  • a computer-readable storage medium stores a computer program, and the computer program is configured to be loaded by a processor and executed to implement the following steps: obtaining the visible area of the camera arm; obtaining the configuration The surgical arm is currently located at the target part within the visible area; the target part is defined to move within the visible area based on the visible area.
  • a control device of a surgical robot may include: a processor (processor) 501 , a communication interface (Communications Interface) 502 , a memory (memory) 503 , and a communication bus 504 .
  • the processor 501 , the communication interface 502 , and the memory 503 communicate with each other through the communication bus 504 .
  • the communication interface 502 is used to communicate with network elements of other devices such as various types of sensors or motors or solenoid valves or other clients or servers.
  • the processor 501 is configured to execute the program 505, and specifically may execute the relevant steps in the foregoing method embodiments.
  • the program 505 may include program code including computer operation instructions.
  • the processor 501 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), one or more integrated circuits configured to implement the embodiments of the present application, or a graphics processing unit (GPU).
  • One or more processors included in the control device may be the same type of processors, such as one or more CPUs, or one or more GPUs; or may be different types of processors, such as one or more CPUs and one or more GPUs.
  • the memory 503 is used to store the program 505 .
  • the memory 503 may include high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.
  • the program 505 can specifically be used to cause the processor 501 to perform the following operations: obtaining the visible area of the camera arm; obtaining the target sites of the configured surgical arm currently located within the visible area; and limiting the target sites to move within the visible area based on the visible area.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Robotics (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Gynecology & Obstetrics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Manipulator (AREA)

Abstract

The present invention relates to a surgical robot, a control method therefor, and a control device therefor. The surgical robot comprises a camera arm having an image end instrument and a surgical arm having an operating end instrument; and a controller configured to: acquire a follower part configured based on a first characteristic part of the surgical arm; acquire the current visible area of the image end instrument in a reference coordinate system; determine whether the follower part falls within the current visible area; and, when the follower part does not fall within the current visible area, adjust the image end instrument to obtain a new visible area of the image end instrument in the reference coordinate system, so that at least part of the follower part falls within the new visible area. The surgical robot of the present application can autonomously adjust the image end instrument so that a follower part of a surgical arm located outside the current field of view of the image end instrument falls within the visible area of the adjusted image end instrument, thereby ensuring safe and reliable movement of the surgical arm within the field of view.
PCT/CN2021/092563 2020-12-15 2021-05-10 Robot chirurgical, procédé de commande associé et dispositif de commande associé WO2022126996A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011476155.4A CN112587244A (zh) 2020-12-15 2020-12-15 手术机器人及其控制方法、控制装置
CN202011476155.4 2020-12-15

Publications (1)

Publication Number Publication Date
WO2022126996A1 true WO2022126996A1 (fr) 2022-06-23

Family

ID=75195702

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/092563 WO2022126996A1 (fr) 2020-12-15 2021-05-10 Robot chirurgical, procédé de commande associé et dispositif de commande associé

Country Status (2)

Country Link
CN (1) CN112587244A (fr)
WO (1) WO2022126996A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112587244A (zh) * 2020-12-15 2021-04-02 深圳市精锋医疗科技有限公司 手术机器人及其控制方法、控制装置
CN113413214B (zh) * 2021-05-24 2022-12-30 上海交通大学 一种基于混合现实的引导的手术机器人力反馈方法和装置
CN113768627B (zh) * 2021-09-14 2024-09-03 武汉联影智融医疗科技有限公司 视觉导航仪感受野获取方法、设备、手术机器人
CN116965937A (zh) * 2022-04-23 2023-10-31 深圳市精锋医疗科技股份有限公司 手术机器人系统及其控制装置

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040015053A1 (en) * 2000-05-22 2004-01-22 Johannes Bieger Fully-automatic, robot-assisted camera guidance susing positions sensors for laparoscopic interventions
CN104470456A (zh) * 2012-07-10 2015-03-25 现代重工业株式会社 手术机器人系统以及手术机器人控制方法
CN104837432A (zh) * 2012-12-11 2015-08-12 奥林巴斯株式会社 内窥镜装置和控制内窥镜装置的方法
CN105407828A (zh) * 2013-07-30 2016-03-16 高姆技术有限责任公司 确定机器人工作区域的方法和系统
US20160206389A1 (en) * 2005-12-27 2016-07-21 Intuitive Surgical Operations, Inc. Constraint based control in a minimally invasive surgical apparatus
CN105992568A (zh) * 2014-02-12 2016-10-05 皇家飞利浦有限公司 手术仪器可见性的机器人控制
CN108882971A (zh) * 2016-03-29 2018-11-23 索尼公司 医疗支撑臂控制设备、医疗支撑臂设备控制方法、及医疗系统
CN109288591A (zh) * 2018-12-07 2019-02-01 微创(上海)医疗机器人有限公司 手术机器人系统
CN109330685A (zh) * 2018-10-31 2019-02-15 南京航空航天大学 一种多孔腹腔手术机器人腹腔镜自动导航方法
US20190213770A1 (en) * 2009-03-31 2019-07-11 Intuitive Surgical Operations, Inc. Rendering tool information as graphic overlays on displayed images of tools
CN110650704A (zh) * 2017-05-25 2020-01-03 柯惠Lp公司 用于检测图像捕获装置的视野内的物体的系统和方法
CN112587244A (zh) * 2020-12-15 2021-04-02 深圳市精锋医疗科技有限公司 手术机器人及其控制方法、控制装置
CN112641513A (zh) * 2020-12-15 2021-04-13 深圳市精锋医疗科技有限公司 手术机器人及其控制方法、控制装置

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9789608B2 (en) * 2006-06-29 2017-10-17 Intuitive Surgical Operations, Inc. Synthetic representation of a surgical robot
US9089256B2 (en) * 2008-06-27 2015-07-28 Intuitive Surgical Operations, Inc. Medical robotic system providing an auxiliary view including range of motion limitations for articulatable instruments extending out of a distal end of an entry guide
CN107049492B (zh) * 2017-05-26 2020-02-21 微创(上海)医疗机器人有限公司 手术机器人系统及手术器械位置的显示方法
CN107440748B (zh) * 2017-07-21 2020-05-19 西安交通大学医学院第一附属医院 一种手术野智能化自动跟踪腔镜系统
CN108113750B (zh) * 2017-12-18 2020-08-18 中国科学院深圳先进技术研究院 柔性手术器械跟踪方法、装置、设备及存储介质
WO2020223569A1 (fr) * 2019-05-01 2020-11-05 Intuitive Surgical Operations, Inc. Système et procédé pour un mouvement intégré avec un dispositif d'imagerie
CN211094672U (zh) * 2019-08-05 2020-07-28 泰惠(北京)医疗科技有限公司 微创手术机器人控制系统
CN111870349A (zh) * 2020-07-24 2020-11-03 前元运立(北京)机器人智能科技有限公司 一种手术机器人的安全边界与力控制方法
CN112043397B (zh) * 2020-10-08 2021-09-24 深圳市精锋医疗科技有限公司 手术机器人及其运动错误检测方法、检测装置

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040015053A1 (en) * 2000-05-22 2004-01-22 Johannes Bieger Fully-automatic, robot-assisted camera guidance susing positions sensors for laparoscopic interventions
US20160206389A1 (en) * 2005-12-27 2016-07-21 Intuitive Surgical Operations, Inc. Constraint based control in a minimally invasive surgical apparatus
US20190213770A1 (en) * 2009-03-31 2019-07-11 Intuitive Surgical Operations, Inc. Rendering tool information as graphic overlays on displayed images of tools
CN104470456A (zh) * 2012-07-10 2015-03-25 现代重工业株式会社 手术机器人系统以及手术机器人控制方法
CN104837432A (zh) * 2012-12-11 2015-08-12 奥林巴斯株式会社 内窥镜装置和控制内窥镜装置的方法
CN105407828A (zh) * 2013-07-30 2016-03-16 高姆技术有限责任公司 确定机器人工作区域的方法和系统
CN105992568A (zh) * 2014-02-12 2016-10-05 皇家飞利浦有限公司 手术仪器可见性的机器人控制
CN108882971A (zh) * 2016-03-29 2018-11-23 索尼公司 医疗支撑臂控制设备、医疗支撑臂设备控制方法、及医疗系统
CN110650704A (zh) * 2017-05-25 2020-01-03 柯惠Lp公司 用于检测图像捕获装置的视野内的物体的系统和方法
CN109330685A (zh) * 2018-10-31 2019-02-15 南京航空航天大学 一种多孔腹腔手术机器人腹腔镜自动导航方法
CN109288591A (zh) * 2018-12-07 2019-02-01 微创(上海)医疗机器人有限公司 手术机器人系统
CN112587244A (zh) * 2020-12-15 2021-04-02 深圳市精锋医疗科技有限公司 手术机器人及其控制方法、控制装置
CN112641513A (zh) * 2020-12-15 2021-04-13 深圳市精锋医疗科技有限公司 手术机器人及其控制方法、控制装置

Also Published As

Publication number Publication date
CN112587244A (zh) 2021-04-02

Similar Documents

Publication Publication Date Title
WO2022126996A1 (fr) Robot chirurgical, procédé de commande associé et dispositif de commande associé
WO2022126995A1 (fr) Robot chirurgical, procédé de commande associé et dispositif de commande associé
US11596487B2 (en) Robotic surgical system control scheme for manipulating robotic end effectors
US20240180636A1 (en) Surgical robotic system and control of surgical robotic system
JP6164964B2 (ja) 医療用システムおよびその制御方法
WO2022126997A1 (fr) Robot chirurgical, procédé de commande et appareil de commande associés
US20230301730A1 (en) Device and method for controlled motion of a tool
US11986260B2 (en) Robotic surgical system and methods utilizing virtual boundaries with variable constraint parameters
US11583358B2 (en) Boundary scaling of surgical robots
JP2015506850A (ja) 関節器具に好適な姿勢を取らせるような命令をするように入力装置のオペレータを促す、入力装置での力フィードバックの適用
US20240065781A1 (en) Surgical robot, and graphical control device and graphic display method therefor
EP3579780B1 (fr) Système de repositionnement pour manipulateur pouvant être commandé à distance
WO2023023186A1 (fr) Techniques pour suivre des commandes d'un dispositif d'entrée à l'aide d'un mandataire contraint
AU2019369303B2 (en) Binding and non-binding articulation limits for robotic surgical systems
US20190090971A1 (en) Input device handle for robotic surgical systems capable of large rotations about a roll axis
WO2022127650A1 (fr) Robot chirurgical, ainsi que procédé de commande et appareil de commande associés
Karnam et al. Augmented reality for 6-DOF motion recording, preview, and execution to enable intuitive surgical robot control
US20230363841A1 (en) Surgical robot, and graphical control device and graphical display method thereof
Park et al. Endoscopic Camera Manipulation planning of a surgical robot using Rapidly-Exploring Random Tree algorithm
US20220331025A1 (en) Method and system for controlling instrument grip behavior
Nageotte et al. Computer-aided suturing in laparoscopic surgery
CN116849818A (zh) 手术机器人的控制方法、手术机器人及可读存储介质
CN116849817A (zh) 手术机器人的控制方法、手术机器人及可读存储介质
CN118680685A (zh) 手术机器人及其套管对接控制方法、系统及介质
CN118490360A (zh) 机器人运动控制方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21904935

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21904935

Country of ref document: EP

Kind code of ref document: A1