WO2024027647A1 - Robot control method, system and computer program product - Google Patents

Robot control method, system and computer program product

Info

Publication number
WO2024027647A1
WO2024027647A1 (PCT/CN2023/110233)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
binocular
coordinate system
coordinate
trajectory
Prior art date
Application number
PCT/CN2023/110233
Other languages
English (en)
French (fr)
Inventor
顾定一
蒋知义
熊麟霏
朱祥
何超
Original Assignee
深圳微美机器人有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳微美机器人有限公司 filed Critical 深圳微美机器人有限公司
Publication of WO2024027647A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B17/00Surgical instruments, devices or methods, e.g. tourniquets
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30Surgical robots
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/1607Calculation of inertia, jacobian matrixes and inverses
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/161Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B17/00Surgical instruments, devices or methods, e.g. tourniquets
    • A61B2017/00743Type of operation; Specification of treatment sites
    • A61B2017/00747Dermatology
    • A61B2017/00752Hair removal or transplantation
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/107Visualisation of planned trajectories or target regions

Definitions

  • the present application relates to the field of image processing technology, and in particular to a robot control method, system and computer program product.
  • During hair follicle transplantation, in order to guarantee the precision of the circular incision around the follicle, the doctor needs to adjust the posture of the instrument (such as a sapphire blade) before making the incision.
  • Existing approaches include: the doctor adjusts manually, relying purely on experience without auxiliary equipment; or the doctor adjusts the posture of the operating instrument with the help of auxiliary equipment such as a hair-transplant magnifier.
  • Traditional hair follicle extraction is completed by several assistant physicians supporting an experienced doctor. Before the follicles are extracted, the target hair area has to be screened; manual screening consumes a great deal of manpower and time. Moreover, because the result is affected by the location of the extracted follicles and by the subjective experience of the operator, the efficiency is often low and the extraction accuracy cannot be guaranteed.
  • In addition, current hair follicle transplant robots still mainly rely on the doctor manually adjusting the instrument posture, which is cumbersome to operate and inefficient.
  • This application provides a robot control method, which method includes:
  • acquiring binocular natural images obtained by photographing a target part from two different directions, obtaining binocular matching results of each key object in the binocular natural images, and determining a visual odometry from the binocular matching results of each key object;
  • determining a camera coordinate system from the visual odometry, and obtaining first coordinates of each key object in the camera coordinate system;
  • converting each first coordinate into a robot coordinate system to obtain second coordinates of each key object in the robot coordinate system; and
  • obtaining constraints from target operation requirements and performing nonlinear quadratic programming based on at least one second coordinate and the constraints to obtain a path trajectory, where the path trajectory is used to control the robot to execute, along the path trajectory, a preset operation corresponding to the target operation requirements.
  • the present application also provides a computer-readable storage medium on which a computer program is stored, which implements the steps of the above method when executed by a processor.
  • This application also provides a robot control system, which includes:
  • a console trolley, which is used to acquire binocular natural images obtained by photographing the target part from two different directions and to obtain binocular matching results of each key object in the binocular natural images; determine the visual odometry from the binocular matching results of the key objects; determine the camera coordinate system from the visual odometry and obtain the first coordinates of each key object in the camera coordinate system; convert each first coordinate into the robot coordinate system to obtain the second coordinates of each key object in the robot coordinate system; and obtain constraints from the target operation requirements and perform nonlinear quadratic programming based on at least one second coordinate and the constraints to obtain the path trajectory;
  • a robotic arm, which is mounted on the console trolley and is used to perform, along the path trajectory, preset operations corresponding to the target operation requirements; and
  • an end effector, which is mounted at the end of the robotic arm and is used to move with the robotic arm and perform, along the path trajectory, preset operations corresponding to the target operation requirements.
  • With the above robot control method, system and computer program product, binocular natural images obtained by photographing the target part from two different directions are acquired, binocular matching results of each key object in the binocular natural images are obtained, and the visual odometry is determined from the binocular matching results of each key object; the camera coordinate system is determined from the visual odometry and the first coordinates of each key object in the camera coordinate system are obtained; each first coordinate is converted into the robot coordinate system to obtain the second coordinates of each key object in the robot coordinate system; constraints are obtained from the target operation requirements, and nonlinear quadratic programming is performed based on at least one second coordinate and the constraints to obtain a path trajectory, which is used to control the robot to execute, along the path trajectory, the preset operations corresponding to the target operation requirements.
  • In this way, the key objects are detected by binocular vision and their second coordinates relative to the robot coordinate system are calculated, so the path trajectory can be determined from the second coordinates of multiple key objects and the robot can be controlled to execute the preset operations along it automatically. No manual adjustment of the robot posture is needed, which lowers the difficulty of operating the robot and improves its working efficiency.
  • Figure 1 is a schematic flow chart of a robot control method in one embodiment
  • Figure 2 is a schematic flow chart of the feature point method of visual odometry in one embodiment
  • Figure 3 is a schematic flow chart of the optical flow tracking method of visual odometry in one embodiment
  • Figure 4 is a schematic flow chart of binocular calibration in one embodiment
  • Figure 5 is a schematic diagram of the geometric relationships in binocular calibration in one embodiment
  • Figure 6 is a schematic structural diagram of hand-eye calibration in one embodiment
  • Figure 7 is a schematic flow chart of hand-eye calibration in one embodiment
  • Figure 8 is a schematic flow chart of coordinate system conversion in one embodiment
  • Figure 9 is a schematic flow chart of path trajectory planning in one embodiment
  • Figure 10 is a schematic flow chart of path trajectory planning in another embodiment
  • Figure 11 is a schematic flowchart of security detection in one embodiment
  • Figure 12 is a schematic diagram of the usage scenario of the hair follicle transplant robot in one embodiment
  • Figure 13 is a schematic flow chart of a hair follicle extraction robot control method in one embodiment
  • Figure 14 is a schematic flow chart of a hair follicle implantation robot control method in one embodiment
  • Figure 15 is a structural block diagram of an automatic hair follicle transplant robot control system in one embodiment
  • Figure 16 is a structural block diagram of an automatic hair follicle transplant robot control system in another embodiment
  • Figure 17 is a schematic structural diagram of the design of the state space controller unit in one embodiment
  • Figure 18 is a schematic structural diagram of the design of the state space controller unit in another embodiment
  • Figure 19 is a schematic structural diagram of a PBVS controller in one embodiment
  • Figure 20 is a schematic structural diagram of an IBVS controller in one embodiment
  • FIG. 21 is a schematic diagram of data processing of the IBVS controller in one embodiment
  • Figure 22 is a structural block diagram of a robot control device in one embodiment.
  • the robot control method provided by the embodiments of the present application can be applied to robots.
  • the robot at least includes a controller and an end effector.
  • a robot is a machine device that performs work automatically. It can accept human command, run pre-programmed programs, and act according to principles and programs formulated with artificial intelligence technology.
  • the task of a robot is to assist with or replace human work, for example in production, construction and medical applications.
  • a robot control method is provided.
  • the method is described by taking its application to a hair-transplant surgical robot as an example, and includes the following steps:
  • Step 102: Obtain binocular natural images obtained by photographing the target part from two different directions, obtain binocular matching results of each key object in the binocular natural images, and determine the visual odometry from the binocular matching results of each key object.
  • The target part is preferably the patient's head; in other application scenarios it may also be another body part of the patient, which is not limited in this application.
  • the binocular natural image may be obtained by using a binocular camera to capture the target part, or may be obtained by using two monocular cameras to capture the target part from two different directions, which is not limited in this embodiment.
  • a camera is used to obtain a binocular natural image of the target part according to a preset shooting cycle.
  • Step 104 Determine the camera coordinate system according to the visual odometry, and obtain the first coordinates of each of the key objects in the camera coordinate system.
  • key objects refer to objects with preset characteristics, and the number of key objects in the target image can be multiple.
  • the key object can be, for example, a specific object to be abnormally detected, operated, or outlined.
  • a hair follicle is equivalent to a key object.
  • In one embodiment, each feature point in the binocular natural images is identified, the key object corresponding to each feature point is determined through binocular matching, and the depth information corresponding to each key object is then calculated using the visual odometry, thereby determining the first coordinates of each key object in the camera coordinate system (a three-dimensional Cartesian coordinate system).
  • Step 106 Convert each first coordinate into the robot coordinate system to obtain the second coordinates of each key object in the robot coordinate system.
  • In one embodiment, a first coordinate transformation matrix (equivalent to a homogeneous transformation matrix) is determined based on the relative position of the camera that captures the binocular natural images and the robot, and each first coordinate in the camera coordinate system is converted into the robot coordinate system through the first coordinate transformation matrix, giving the second coordinates of each key object in the robot coordinate system.
  • Both the robot coordinate system and the camera coordinate system belong to the three-dimensional Cartesian coordinate system, but the reference coordinate systems are different.
  • Step 108: Obtain constraints based on the target operation requirements, perform nonlinear quadratic programming based on at least one second coordinate and the constraints, and obtain a path trajectory.
  • The path trajectory is used to control the robot to execute, along the path trajectory, the preset operations corresponding to the target operation requirements.
  • a path trajectory of the robot is planned based on the determined second coordinates of multiple key objects.
  • Control parameters such as speed and acceleration of the robot in the path trajectory need to be smooth and meet safety requirements.
  • Binocular natural images are acquired in real time according to the preset shooting cycle and processed to obtain the path trajectory. After the controller acquires new binocular natural images in the next shooting cycle, it processes them again to obtain a new path trajectory, updates the path trajectory obtained in the previous shooting cycle accordingly, and controls the end effector of the robot to execute the preset operations along the continuously updated trajectory.
  • In the above robot control method, binocular natural images obtained by photographing the target part from two different directions are acquired; each key object in the binocular natural images is identified and its first coordinates in the camera coordinate system are obtained; each first coordinate is converted into the robot coordinate system to obtain the second coordinates of each key object in the robot coordinate system; and a path trajectory is obtained from at least one second coordinate, the path trajectory being used to control the robot to execute preset operations along it.
  • In this way, key objects are detected by binocular vision and their second coordinates relative to the robot coordinate system are calculated, so the path trajectory can be determined from the second coordinates of multiple key objects and the robot can be controlled to execute the preset operations along it automatically.
  • Moreover, when the position of a key object changes, the first and second coordinates are updated by real-time computation, so the path trajectory is continuously kept up to date. No manual adjustment of the robot posture is needed, which lowers the difficulty of operating the robot and improves its working efficiency and path accuracy.
  • In one embodiment, feature extraction is first performed on each of the binocular natural images to extract the feature information of the key objects; the ORB (oriented FAST and rotated BRIEF) features of each key object are obtained by defining customized key points and descriptors, and an image pyramid is constructed for scale invariance, with the image information downsampled at different resolutions, from which the ORB feature points are extracted.
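  • The following is an illustrative sketch (not part of the original disclosure) of this kind of ORB extraction and left/right descriptor matching using OpenCV; the file names and parameter values are assumptions chosen only for the example.

```python
# Illustrative sketch only: ORB feature extraction on a binocular image pair with OpenCV.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# scaleFactor/nlevels control the image pyramid that gives the features scale invariance.
orb = cv2.ORB_create(nfeatures=2000, scaleFactor=1.2, nlevels=8)
kp_l, des_l = orb.detectAndCompute(left, None)
kp_r, des_r = orb.detectAndCompute(right, None)

# Brute-force Hamming matching of the binary descriptors between the two views.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)
```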
  • Epipolar rectification and global feature matching are then performed on the left-eye and right-eye images of the same frame, after feature extraction, based on the binocular calibration parameters such as the intrinsic and extrinsic parameters, the essential matrix and the fundamental matrix obtained from binocular calibration.
  • Finally, the visual odometry estimates the depth information of the left-eye and right-eye images of the same frame by the triangulation principle and computes the spatial pose information in the camera coordinate system.
  • The spatial pose information is expressed in the form of three-dimensional coordinates.
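  • As a minimal sketch of this triangulation step (assuming the 3x4 projection matrices of the two cameras are already available from binocular calibration; the names are illustrative, not from the patent):

```python
# Illustrative sketch: recover 3D points in the camera coordinate system by triangulating
# matched pixel coordinates. P_left / P_right are assumed 3x4 projection matrices and
# pts_l / pts_r are 2xN float32 arrays of matched pixel coordinates.
import cv2

def triangulate(P_left, P_right, pts_l, pts_r):
    pts_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)  # 4xN homogeneous
    pts_3d = (pts_h[:3] / pts_h[3]).T                             # Nx3 Euclidean points
    return pts_3d  # candidate "first coordinates" in the camera coordinate system
```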
  • In one possible implementation, binocular stereo vision is built from a pair of monocular 2D cameras, and its essence is the core problem of visual odometry, namely how to estimate the relative motion of the camera from the images.
  • the feature point method shown in Figure 2 can be used.
  • Image feature matching is achieved by designing methods to extract key points and descriptors.
  • Rotation and scale invariance of the features are introduced through the grayscale centroid method and downsampling, and the relative motion of the camera is then estimated.
  • When feature points are used, all information other than the feature points is ignored, and the features matching the key objects are extracted.
  • Alternatively, the optical flow tracking method shown in Figure 3 can be used: multi-layer sparse optical flow is computed and a photometric-error optimization problem is solved to estimate the relative motion of the camera. The key point computation is retained, but optical flow tracking replaces the descriptors for camera relative motion estimation and binocular matching. The advantage is that the time spent computing feature points and descriptors is saved, but non-convexity problems may arise.
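  • A minimal sketch of such sparse optical flow tracking with OpenCV's pyramidal Lucas-Kanade implementation follows; the window size and pyramid depth are assumed values, not taken from the patent.

```python
# Illustrative sketch: track previously detected key points between consecutive frames
# with multi-level sparse optical flow, instead of matching descriptors.
import cv2

def track(prev_img, next_img, prev_pts):
    # prev_pts: Nx1x2 float32 array of key point locations in prev_img
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_img, next_img, prev_pts, None, winSize=(21, 21), maxLevel=3)
    good = status.reshape(-1) == 1
    return prev_pts[good], next_pts[good]
```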
  • the prerequisite for establishing stereoscopic vision is to perform binocular calibration of the camera to obtain the internal and external parameters of the camera.
  • monocular internal parameter calibration and distortion correction are first performed, and internal parameter calibration and distortion correction are performed on the left and right cameras respectively to obtain the corresponding internal parameter matrix and distortion parameters.
  • the feature point method in the visual odometry mentioned above can be used.
  • matching is performed according to the epipolar constraints.
  • The epipolar geometry established by binocular calibration is shown in Figure 5.
  • O 1 and O 2 are the left and right camera centers.
  • If the matching succeeds, the two points are indeed projections of the same spatial point onto the two imaging planes, which forms an epipolar constraint.
  • For the fundamental and essential matrices, let the rotation matrix of the relative motion between the left and right cameras be R and the translation be t.
  • The corresponding fundamental matrix and essential matrix can then be obtained from the epipolar constraint, the relative pose of the left and right cameras can be further solved from them and recorded as the extrinsic parameter matrix, and finally the recorded intrinsic and extrinsic parameters are output to complete the binocular calibration.
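  • For reference, the standard two-view relations behind this step can be written as follows (generic textbook form, not copied from the patent):

```latex
% x_1, x_2: normalized coordinates of a matched point pair; p_1, p_2: pixel coordinates;
% R, t: relative pose of the right camera with respect to the left; K_1, K_2: intrinsics.
x_2^{\top} E \, x_1 = 0, \qquad p_2^{\top} F \, p_1 = 0, \qquad
E = t^{\wedge} R, \qquad F = K_2^{-\top} E \, K_1^{-1}
```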
  • In one embodiment, feature extraction is performed on the left-eye image to identify the left-eye feature points corresponding to each key object in the left-eye image, and on the right-eye image to identify the right-eye feature points corresponding to each key object in the right-eye image. Based on the binocular calibration parameters, binocular matching is performed on at least one left-eye feature point and at least one right-eye feature point to obtain at least one feature point pair, each pair containing one left-eye feature point and one right-eye feature point. The depth information corresponding to each feature point pair is determined from the visual odometry, the three-dimensional coordinates of each feature point pair in the camera coordinate system are obtained from the depth information, and these three-dimensional coordinates are used as the first coordinates. In this way, the position coordinates of the key objects can be detected automatically based on binocular vision.
  • converting each first coordinate into the robot coordinate system to obtain the second coordinates of each key object in the robot coordinate system includes: determining hand-eye calibration parameters based on the positional relationship between the binocular camera and the robot;
  • a binocular camera is a camera that captures binocular natural images;
  • the first coordinate transformation matrix is determined based on the hand-eye calibration parameters, and each first coordinate is calculated according to the first coordinate transformation matrix to obtain the second coordinate corresponding to each first coordinate.
  • the camera is installed on the end of the robot's robotic arm so that the camera moves together with the robotic arm.
  • Hand-eye calibration calculates the conversion relationship between the robot arm flange coordinate system and the camera coordinate system. Define each coordinate system: the robot arm base coordinate system, the robot arm flange coordinate system, the camera coordinate system and the calibration plate coordinate system. As shown in Figure 6, multiple groups of positions are selected to shoot the calibration board, and the poses of the robotic arm and camera are recorded.
  • The transformation from the robot arm base coordinate system to the calibration plate coordinate system can be decomposed into the transformation from the robot arm base coordinate system to the robot arm flange coordinate system, multiplied by the transformation from the flange coordinate system to the camera coordinate system, multiplied by the transformation from the camera coordinate system to the calibration plate coordinate system.
  • The matrix A records the transformation between adjacent poses of the manipulator flange, and the matrix B records the motion estimate between adjacent camera poses; this establishes the relationship AX = XB. Solving the resulting optimization problem with the Tsai-Lenz method yields the first coordinate transformation matrix X from the flange coordinate system to the camera coordinate system.
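  • A minimal sketch of solving this AX = XB problem with OpenCV's built-in Tsai-Lenz implementation is given below; the input lists of flange and calibration-board poses are assumed to have been recorded at the same robot stations, and the variable names are illustrative.

```python
# Illustrative sketch: hand-eye calibration via cv2.calibrateHandEye (Tsai-Lenz method).
import cv2

def solve_hand_eye(R_flange2base, t_flange2base, R_board2cam, t_board2cam):
    """Inputs are lists of rotations/translations recorded at several robot stations."""
    R_cam2flange, t_cam2flange = cv2.calibrateHandEye(
        R_gripper2base=R_flange2base, t_gripper2base=t_flange2base,
        R_target2cam=R_board2cam, t_target2cam=t_board2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)
    # The returned rotation/translation form the fixed flange-camera transform X.
    return R_cam2flange, t_cam2flange
```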
  • the flange coordinate system can be used as the robot coordinate system.
  • the second coordinate corresponding to the first coordinate can be calculated through the first coordinate transformation matrix X.
  • The camera identifies the key objects, and the conversion relationship between the camera coordinate system and the key objects can be expressed through a second coordinate transformation matrix.
  • the hand-eye calibration solution is used to obtain the first coordinate transformation matrix from the robot arm flange coordinate system to the camera coordinate system.
  • The Cartesian-space pose of the robot arm is represented by a third coordinate transformation matrix, from the robot arm base coordinate system to the robot arm flange coordinate system.
  • the inverse kinematics of the manipulator converts the Cartesian space into the joint space, obtains the values of each joint angle, and then controls the joints and motors.
  • In this way, the hand-eye calibration parameters are determined from the positional relationship between the binocular camera and the robot, where the binocular camera is the camera that captures the binocular natural images; the first coordinate transformation matrix is determined from the hand-eye calibration parameters; and each first coordinate is transformed by the first coordinate transformation matrix to obtain the corresponding second coordinate, i.e. the second coordinates of each key object in the robot coordinate system.
  • the second coordinate position of the key object in the robot coordinate system can be calculated based on the first coordinate position of the key object in the camera coordinate system, which facilitates subsequent control of the robot to perform preset operations.
  • In one embodiment, performing nonlinear quadratic programming based on at least one second coordinate and the constraints to obtain the path trajectory includes: determining a set of robot joint parameters from each second coordinate, where a set of robot joint parameters includes multiple sub-joint parameters used to control the motion of the individual joints of the robot; and controlling the motion of each joint of the robot according to at least one set of robot joint parameters, so that the robot performs the preset operations along the path trajectory.
  • In one embodiment, each second coordinate in the robot coordinate system is solved into the joint space via inverse kinematics, so that a set of robot joint parameters is obtained from each second coordinate. A set of robot joint parameters contains multiple sub-joint parameters, and the controller controls the motion of one robot joint based on each sub-joint parameter.
  • In this way, a set of robot joint parameters is determined from each second coordinate; the set includes multiple sub-joint parameters used to control the motion of the individual joints; and the motion of each joint is controlled according to at least one set of robot joint parameters, so that the robot performs the preset operations along the path trajectory.
  • the second coordinates can be solved into the joint space of the robot to obtain the sub-joint parameters of each joint of the robot, thereby controlling the movement of each joint according to the parameters of each sub-joint to ensure that the robot performs preset operations according to the path trajectory.
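  • As a deliberately simplified sketch of mapping a Cartesian coordinate into joint space, the closed-form inverse kinematics of a planar two-link arm is shown below; the manipulator in this application has more joints and would use a full inverse kinematics solver, so this is for illustration only.

```python
# Illustrative sketch: inverse kinematics of a planar 2-link arm with link lengths l1, l2,
# mapping a Cartesian target (x, y) to the two joint angles ("sub-joint parameters").
import math

def ik_2link(x, y, l1, l2):
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))      # clamp numerical noise; assumes target is reachable
    q2 = math.acos(c2)                # elbow-down solution
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2
```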
  • the method further includes: obtaining the target coordinates in the camera coordinate system based on the binocular natural image of the target part; determining the target pose deviation based on the target coordinates; and correcting the second coordinates based on the target pose deviation.
  • Obtaining the path trajectory based on at least one second coordinate includes: obtaining the corrected path trajectory based on at least one corrected second coordinate.
  • the target is a marker placed at the calibration position of the target part, which is used to determine the position of the target part or key object.
  • When the target poses at consecutive times differ, it means that the position of the target part has changed.
  • In one embodiment, the controller identifies each feature point in the binocular natural images, determines the target corresponding to the feature points through binocular matching, and then uses the visual odometry to calculate the depth information corresponding to the target, thereby determining the first target coordinate of the target in the camera coordinate system.
  • The target coordinates in the camera coordinate system are converted into the robot coordinate system through the first coordinate transformation matrix to obtain the second target coordinates of the target in the robot coordinate system. The two second target coordinates obtained in two consecutive shooting cycles are compared to obtain the target pose deviation, and the second coordinates are then corrected in real time according to this deviation, so that the distance between each second coordinate and the second target coordinate always remains the same.
  • Trajectory planning generally first requires determining the discrete path points in space (i.e. the second coordinates), which is the path planning step (determined by the visual odometry and the coordinate system conversion unit). Because the path points are relatively sparse and carry no time information, a smooth curve must be planned through these path points (dense trajectory points are generated according to the control cycle) and distributed over time, so that the position, velocity, acceleration, jerk (third derivative of position) and snap (fourth derivative of position) of every trajectory point are known.
  • the path trajectory is planned using Minimum-jerk trajectory planning, as shown in Figure 9.
  • the trajectory is expressed as a function of time (usually an n-order polynomial is used).
  • Perform k-order differentiation on the trajectory function to obtain the trajectory derivative general formula, such as velocity, acceleration, jerk, etc.
  • Complex trajectories require multi-segment polynomials (piecewise functions), such as m segments.
  • For the segmented trajectory there are 6*m unknown coefficients in total. The objective function is constructed, boundary conditions (derivative constraints and continuity constraints) are added, and the optimization problem is solved to obtain the 6*m unknown coefficients and determine the trajectory.
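  • For the single-segment case with zero boundary velocity and acceleration, the minimum-jerk solution has a well-known closed form; the sketch below samples it densely over one segment (the numeric values are illustrative, not from the patent).

```python
# Illustrative sketch: closed-form minimum-jerk (quintic) interpolation between two
# path points x0 and xT over duration T, with zero start/end velocity and acceleration.
def min_jerk(x0, xT, T, t):
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xT - x0) * s

# Example: 2-second segment sampled at 100 Hz (hypothetical control cycle).
samples = [min_jerk(0.0, 0.05, 2.0, k * 0.01) for k in range(201)]
```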
  • the path trajectory is planned using Minimum-snap trajectory planning, as shown in Figure 10.
  • the trajectory is expressed as a function of time (usually an n-order polynomial is used).
  • Perform k-order differentiation on the trajectory function to obtain the general equations of trajectory derivatives such as velocity, acceleration, jerk, snap, etc.
  • Complex trajectories require multi-segment polynomials (piecewise functions), such as m segments.
  • For the segmented trajectory there are 8*m unknown coefficients in total. The objective function is constructed, boundary conditions (derivative constraints and continuity constraints) are added, and the optimization problem is solved to obtain the 8*m unknown coefficients and determine the trajectory.
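  • In standard form, the two planners minimize the following functionals over a piecewise-polynomial trajectory (degree 5 per segment for minimum jerk, degree 7 for minimum snap), subject to the waypoint, derivative and continuity constraints mentioned above:

```latex
J_{\text{jerk}} = \int_{0}^{T} \left\lVert \dddot{p}(t) \right\rVert^{2} \, dt ,
\qquad
J_{\text{snap}} = \int_{0}^{T} \left\lVert p^{(4)}(t) \right\rVert^{2} \, dt
```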
  • In this way, the target coordinates in the camera coordinate system are obtained from the binocular natural images of the target part; the target pose deviation is determined from the target coordinates; the second coordinates are corrected according to the target pose deviation; and the corrected path trajectory is obtained from at least one corrected second coordinate. This ensures that the robot automatically updates the path trajectory as the position of the target part changes, so that the robot can complete the preset operations without being affected by changes in the position of the target part.
  • In one embodiment, the method further includes: detecting the operating parameters of the robot at a preset period; when the operating parameters meet a preset fault condition, obtaining the fault type corresponding to the operating parameters; and performing a shutdown operation of the corresponding type on the robot according to the fault type.
  • the controller can track and monitor the movement performance of the robot in real time at preset intervals. For example, it can detect it every 0.5 seconds.
  • the detected operating parameters can include:
  • Position detection including Cartesian space position over-limit detection, joint space position over-limit detection, Cartesian space pose deviation over-limit detection and joint space pose deviation over-limit detection.
  • Speed detection including Cartesian space speed over-limit detection, joint space speed over-limit detection, Cartesian space speed deviation over-limit detection and joint space speed deviation over-limit detection.
  • Acceleration detection including Cartesian space acceleration over-limit detection, joint space acceleration over-limit detection, Cartesian space acceleration deviation over-limit detection and joint space acceleration deviation over-limit detection.
  • External force detection including external force over-limit detection at the end of Cartesian space and external force over-limit detection in joint space.
  • Torque detection, including joint space torque over-limit detection and joint space torque deviation over-limit detection.
  • the above detection can return the corresponding fault code from the robot.
  • the controller determines the fault category and faulty joint location based on the fault code, and performs the corresponding shutdown operation.
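  • A minimal sketch of such periodic limit checking follows; the parameter names, limits and fault handling are hypothetical and only illustrate the pattern of comparing each monitored quantity against its preset range.

```python
# Illustrative sketch: compare each monitored operating parameter against its preset range
# and report the offending quantities so the controller can trigger the matching shutdown.
def check_limits(state, limits):
    faults = []
    for name, value in state.items():        # e.g. {"joint3_velocity": 1.7, ...}
        lo, hi = limits[name]
        if not (lo <= value <= hi):
            faults.append((name, value))     # non-empty list -> fault condition met
    return faults
```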
  • the operating parameters of the robot are detected according to the preset period; when the operating parameters meet the preset fault conditions, the fault type corresponding to the operating parameters is obtained; and the corresponding type of shutdown operation is performed on the robot according to the fault type. It can provide a complete safety detection solution to make the robot operation process more accurate and safer.
  • a robot control method is applied to a hair follicle transplant robot as an example.
  • the usage scenario of the hair follicle transplant robot is as shown in Figure 12, which may include: a robot control system and a seat.
  • the robot control system can perform automatic transplantation operations or perform transplantation operations under the supervision of doctors.
  • the robot control system includes a robotic arm, a control trolley, and an end effector as shown in the figure.
  • The console trolley is used to acquire the binocular natural images obtained by photographing the target part from two different directions, obtain the binocular matching results of each key object in the binocular natural images, and determine the visual odometry from these matching results; determine the camera coordinate system from the visual odometry and obtain the first coordinates of each key object in the camera coordinate system; convert each first coordinate into the robot coordinate system to obtain the second coordinates of each key object in the robot coordinate system; and obtain constraints from the target operation requirements and perform nonlinear quadratic programming based on at least one second coordinate and the constraints to obtain the path trajectory.
  • the robotic arm is installed on the console vehicle and is used to perform preset operations corresponding to the target operation requirements according to the path trajectory.
  • the end effector is installed at the end of the robotic arm and is used to move with the robotic arm and perform preset operations corresponding to the target operation requirements according to the path trajectory.
  • the robot control system further includes a camera module installed inside the end effector for moving with the end effector and acquiring binocular natural images. Both the camera module and the robot motion module are controlled by the host computer in the console car.
  • the hair follicle transplant robot can be used for hair follicle extraction or hair follicle transplantation, and the hair follicles are equivalent to key objects.
  • In one embodiment, the console trolley further includes a visual servo unit, which is used to obtain the target coordinates in the camera coordinate system from the binocular natural images of the target part, determine the target pose deviation from the target coordinates, and correct the second coordinates according to the target pose deviation.
  • nonlinear quadratic programming is performed based on at least one modified second coordinate and constraint conditions to obtain a modified path trajectory.
  • In one embodiment, the console trolley further includes a safety detection unit, which is used to detect the operating parameters of the robotic arm at a preset period; when the operating parameters meet a preset fault condition, the fault type corresponding to the operating parameters is obtained, and a shutdown operation of the corresponding category is performed on the robotic arm according to the fault type.
  • In one embodiment, a hair follicle extraction robot control method includes: collecting natural intraoperative images in real time, performing two-dimensional image feature extraction and hair follicle unit identification, and completing the generation of intraoperative three-dimensional images through binocular matching, epipolar rectification, triangulation and depth estimation; converting the coordinates from the Cartesian space of the image to the joint space of the robotic arm and automatically generating a real-time planned trajectory for the converted path points; and, while adaptively adjusting the needle-insertion posture parameters, having the end effector automatically perform circular incision and extraction of the hair follicles until the planned number of follicles has been extracted, completing the hair follicle extraction.
  • In one embodiment, a hair follicle implantation robot control method includes: importing the hair follicle implantation hole positions planned by the doctor before surgery, collecting natural intraoperative images in real time, performing two-dimensional image feature extraction and target identification (locating the hair follicle implantation holes by means of the target marker), and completing the generation of intraoperative three-dimensional images through binocular matching, epipolar rectification, triangulation and depth estimation.
  • the path point is then determined based on the position of the hair follicle implantation hole relative to the implantation target coordinate system.
  • the image is converted from the Cartesian space to the robot arm joint space, and a real-time planned trajectory is automatically generated.
  • The end effector can then automatically punch the holes and implant the hair follicles until the planned number of follicles has been implanted, ending the hair follicle implantation.
  • a robot control method is applied to an automatic hair follicle transplant robot control system as shown in Figure 15 as an example.
  • the system includes:
  • the vision module is used to capture binocular natural images and output three-dimensional information of key objects and targets to the motion control module.
  • the motion control module is used to automatically plan the robot's operating path trajectory based on three-dimensional information and conduct safety inspections during robot operation.
  • Auxiliary module used to configure relevant parameters involved in the vision module and motion control module, as well as configure the signal response of the system.
  • the vision module also includes a monocular image acquisition unit, a monocular feature extraction unit, a binocular matching unit, a visual odometry unit and a data storage unit.
  • the monocular image acquisition unit is used to acquire binocular natural images obtained by shooting the target part from two different directions; the binocular natural images include left eye images and right eye images.
  • the monocular feature extraction unit is used to extract features from the left eye image to identify the left eye feature points corresponding to each key object in the left eye image; and to extract features from the right eye image to identify the right eye features corresponding to each key object in the right eye image. point.
  • the binocular matching unit is used to perform binocular matching on at least one left eye feature point and at least one right eye feature point based on binocular determined parameters to obtain at least one feature point pair; the feature point pair includes one left eye feature point and one right eye feature point.
  • the visual odometry unit is used to determine the depth information corresponding to each feature point pair based on the visual odometry, and obtain the three-dimensional coordinates of each feature point pair in the camera coordinate system based on the depth information, using the three-dimensional coordinates as the first coordinates.
  • the data storage unit is used to store the binocular natural images collected by the monocular image acquisition unit.
  • the motion control module also includes a coordinate system conversion unit, a trajectory planning unit, an operation execution unit and a safety detection unit.
  • The coordinate system conversion unit is used to determine the hand-eye calibration parameters from the positional relationship between the binocular camera and the robot, where the binocular camera is the camera that captures the binocular natural images; determine the first coordinate transformation matrix from the hand-eye calibration parameters; and transform each first coordinate with the first coordinate transformation matrix to obtain the corresponding second coordinate, i.e. the second coordinate of each key object in the robot coordinate system.
  • a set of robot joint parameters is determined according to each second coordinate; a set of robot joint parameters includes multiple sub-joint parameters, and the sub-joint parameters are used to control the motion of each joint of the robot.
  • the trajectory planning unit is used to obtain the path trajectory according to at least one second coordinate, and control the motion of each joint of the robot according to at least one set of robot joint parameters, so that the robot can perform preset operations according to the path trajectory.
  • the operation execution unit is used to execute preset operations according to the path trajectory.
  • the safety detection unit is used to detect the operating parameters of the robot according to the preset period; when the operating parameters meet the preset fault conditions, obtain the fault type corresponding to the operating parameters; and perform corresponding types of shutdown operations on the robot according to the fault type.
  • the auxiliary module also includes a hand-eye calibration auxiliary unit, a state space controller unit, a visual servo controller unit and a dual-target calibration auxiliary unit.
  • the hand-eye calibration auxiliary unit is used to configure hand-eye calibration parameters.
  • the state space controller unit is used to ensure the motion control accuracy, stability and robustness of the robot.
  • the visual servo controller unit is used for vision and motion control to improve hand-eye coordination performance and safety.
  • the dual-target calibration auxiliary unit is used to configure dual-target calibration parameters.
  • the automatic hair follicle transplant robot control system can also include a human-computer interaction module, and the human-computer interaction module is configured with a display device and interactive software.
  • the human-computer interaction module is used to interact with the monocular feature extraction unit, and the user can independently and semi-autonomously design the density and area of feature point extraction.
  • the human-computer interaction module is also used to interact with the trajectory planning unit so that the user can autonomously and semi-autonomously design the path trajectory, thereby independently designing the implant hole position and the hairstyle.
  • The human-computer interaction module is also used to interact with the coordinate system conversion unit, so that the robot only collects and processes the visual images automatically while automatic coordinate conversion and path planning are stopped, and the user can manually pause or take over control of the operation process.
  • the human-computer interaction module is also used to interact with the data storage unit so that the user can view the data stored in the data storage unit.
  • the state space controller unit mainly consists of an integral controller, a controlled object and a full state feedback control law.
  • Full state feedback control is a method of designing an optimal regulator for multivariable coupled plants with a quadratic performance index by solving the associated Riccati matrix differential equation. By feeding back the system output and state quantities simultaneously, arbitrary pole placement is achieved to obtain the control law K, so that the system characteristics can be adjusted for optimal performance.
  • In this design the dynamic response and disturbance rejection of the system are shaped and system stability is further improved. Because full state feedback control is introduced, the system state is augmented with the error state quantity, and pole placement can shape the eigenvectors and eigenvalues of the system, so that the system characteristics can be designed and adjusted to achieve the optimal performance of the hair follicle transplant robot.
  • the introduction of the integral controller can also effectively eliminate the steady-state error and improve the system accuracy.
  • the state space controller unit mainly consists of an integral controller, a controlled object, a state observer, and a full state feedback control law.
  • the pole configuration method is used to obtain the control law K, thereby adjusting the system characteristics to achieve optimal performance.
  • In this embodiment a state observer and an integral controller are added. Because full state feedback control and a state observer are introduced, the system state is augmented with the estimated state quantities and the error state quantities, and the state observer compensates for the situation in which some state quantities cannot be measured completely.
  • Full state feedback can shape the eigenvectors and eigenvalues of the system through pole placement, so that the system characteristics can be designed and adjusted to achieve the optimal performance of the hair follicle transplant robot.
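  • In generic textbook notation (not copied from the patent), this structure of integral action, full state feedback and a state observer can be summarized as:

```latex
\dot{x} = A x + B u, \qquad y = C x, \qquad \dot{x}_i = r - y,
\qquad u = -K \hat{x} + K_i x_i,
\qquad \dot{\hat{x}} = A \hat{x} + B u + L\,(y - C \hat{x})
```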
  • the visual servo controller unit is implemented through a PBVS (position based visual-servoing) controller, so that the steady-state error between the actual pose and the desired pose obtained through feedback is quickly attenuated.
  • the actual pose information fed back is obtained from the target pose calculated by the visual odometry.
  • the steady-state error between the actual posture and the desired posture is calculated in real time, and the joint parameters are adjusted through each joint controller of the robot, so that the steady-state error between the actual posture and the desired posture obtained by feedback quickly attenuates to zero. It can solve the problem of patients shaking and jittering during the operation.
  • This application designs a visual servo controller to assist the motion control module to plan the optimal trajectory in real time.
  • In one embodiment, the visual servo controller unit is implemented by an IBVS (image based visual-servoing) controller, so that the steady-state error between the actual image features obtained by feedback and the expected image features quickly decays to zero, i.e. the system reaches its response with a short settling time and without overshoot.
  • the actual image feature information fed back is derived through visual odometry, which omits the step of motion estimation and uses image features directly.
  • The IBVS controller involves the derivation of the image Jacobian matrix, which relates the velocity vector of the pixel features in the image to the velocity vector of the camera in the world coordinate system.
  • the three-dimensional depth information of the object and the internal and external parameters of the camera are obtained by combining the binocular vision camera, visual odometry and binocular calibration. These parameters are also used to derive the image Jacobian matrix. This establishes a bridge between the optical flow velocity vector of the pixel coordinate system and the camera velocity vector. Through the image Jacobian matrix, the camera's motion state can be obtained based on the speed loop, and the motion instructions of the robotic arm can be solved.
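  • For reference, the classical IBVS relations take the following generic form, where e = s - s* is the image-feature error, Z the depth recovered by the binocular visual odometry, L_s the image Jacobian (interaction matrix) of a normalized image point (x, y), and v_c the commanded camera twist (standard formulation, not quoted from the patent):

```latex
\dot{s} = L_s\, v_c, \qquad v_c = -\lambda\, L_s^{+}\, e, \qquad
L_s =
\begin{bmatrix}
-\tfrac{1}{Z} & 0 & \tfrac{x}{Z} & x y & -(1+x^{2}) & y \\[2pt]
0 & -\tfrac{1}{Z} & \tfrac{y}{Z} & 1+y^{2} & -x y & -x
\end{bmatrix}
```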
  • embodiments of the present application also provide a robot control device for implementing the above-mentioned robot control method.
  • the solution to the problem provided by this device is similar to the solution recorded in the above method. Therefore, for the specific limitations in one or more robot control device embodiments provided below, please refer to the above limitations on the robot control method. I won’t go into details here.
  • a robot control device 220 including: a shooting module 221, a vision module 222, a conversion module 223 and a control module 224, wherein:
  • the shooting module 221 is used to acquire binocular natural images obtained by shooting the target part from two different directions.
  • the vision module 222 is used to identify each key object in the binocular natural image and obtain the first coordinates of each key object in the camera coordinate system.
  • the conversion module 223 is used to convert each first coordinate into the robot coordinate system to obtain the second coordinates of each key object in the robot coordinate system.
  • the control module 224 is used to obtain a path trajectory according to at least one second coordinate, and the path trajectory is used to control the robot to perform preset operations according to the path trajectory.
  • the binocular natural image includes a left-eye image and a right-eye image.
  • The vision module 222 is also used to: perform feature extraction on the left-eye image to identify the left-eye feature points corresponding to each key object in the left-eye image; perform feature extraction on the right-eye image to identify the right-eye feature points corresponding to each key object in the right-eye image; perform binocular matching on at least one left-eye feature point and at least one right-eye feature point based on the binocular calibration parameters to obtain at least one feature point pair, each pair containing one left-eye feature point and one right-eye feature point; and determine, based on the visual odometry, the depth information corresponding to each feature point pair, obtain the three-dimensional coordinates of each feature point pair in the camera coordinate system from the depth information, and use these three-dimensional coordinates as the first coordinates.
  • The conversion module 223 is also used to determine the hand-eye calibration parameters based on the positional relationship between the binocular camera and the robot, where the binocular camera is the camera that captures the binocular natural images; determine the first coordinate transformation matrix based on the hand-eye calibration parameters; and calculate each first coordinate according to the first coordinate transformation matrix to obtain the second coordinate corresponding to each first coordinate, as the second coordinate of each key object in the robot coordinate system.
  • In one embodiment, the control module 224 is further configured to determine a set of robot joint parameters according to each second coordinate, where a set of robot joint parameters includes a plurality of sub-joint parameters used to control the motion of each joint of the robot, and to control the motion of each joint of the robot according to at least one set of robot joint parameters, so that the robot performs preset operations according to the path trajectory.
  • the vision module 222 is also used to obtain the target coordinates in the camera coordinate system based on the binocular natural image of the target part.
  • the conversion module 223 is also used to determine the target pose deviation according to the target coordinates; and correct the second coordinates according to the target pose deviation.
  • the control module 224 is also configured to obtain the corrected path trajectory according to at least one corrected second coordinate.
  • In one embodiment, the control module 224 is also used to detect the operating parameters of the robot at a preset period; when the operating parameters meet a preset fault condition, to obtain the fault type corresponding to the operating parameters; and to perform a shutdown operation of the corresponding category on the robot according to the fault type.
  • Each module in the above-mentioned robot control device can be realized in whole or in part by software, hardware and combinations thereof.
  • Each of the above modules may be embedded in or independent of the processor of the computer device in the form of hardware, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device including a memory and a processor.
  • a computer program is stored in the memory.
  • the processor executes the computer program, it implements the steps in the above method embodiments.
  • a computer-readable storage medium on which a computer program is stored.
  • the computer program is executed by a processor, the steps in the above method embodiments are implemented.
  • a computer program product including a computer program that implements the steps in each of the above method embodiments when executed by a processor.
  • the computer program can be stored in a non-volatile computer-readable storage medium.
  • the computer program when executed, may include the processes of the above method embodiments.
  • Any reference to memory, database or other media used in the embodiments provided in this application may include at least one of non-volatile and volatile memory.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Robotics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Fuzzy Systems (AREA)
  • Manipulator (AREA)

Abstract

This application relates to a robot control method, system, computer device and storage medium. The method includes: acquiring binocular natural images obtained by photographing a target part from two different directions; identifying each key object in the binocular natural images and obtaining the first coordinates of each key object in a camera coordinate system; converting each first coordinate into a robot coordinate system to obtain the second coordinates of each key object in the robot coordinate system; and obtaining a path trajectory from at least one second coordinate, the path trajectory being used to control the robot to execute preset operations along the path trajectory. With this method no manual adjustment of the robot posture is needed, which lowers the difficulty of operating the robot and improves its working efficiency.

Description

Robot control method, system and computer program product
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on August 2, 2022, with application number 2022109256728 and entitled "Robot control method, system, computer device, storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image processing technology, and in particular to a robot control method, system and computer program product.
Background Art
During hair follicle transplantation, in order to guarantee the precision of the circular incision around the follicle, the doctor needs to adjust the posture of the instrument (such as a sapphire blade) before making the incision. Existing approaches include: the doctor adjusts manually, relying purely on experience without auxiliary equipment; or the doctor adjusts the posture of the operating instrument with the help of auxiliary equipment such as a hair-transplant magnifier. Traditional hair follicle extraction is completed by several assistant physicians supporting an experienced doctor. Before the follicles are extracted, the target hair area has to be screened; manual screening consumes a great deal of manpower and time. Moreover, because the result is affected by the location of the extracted follicles and by the subjective experience of the operator, the efficiency is often low and the extraction accuracy cannot be guaranteed.
In addition, current hair follicle transplant robots still mainly rely on the doctor manually adjusting the instrument posture, which is cumbersome to operate and inefficient.
Summary of the Invention
On this basis, it is necessary to address the above technical problems by providing a robot control method, apparatus, computer device, computer-readable storage medium and computer program product that improve working efficiency and are convenient to operate.
This application provides a robot control method, the method comprising:
acquiring binocular natural images obtained by photographing a target part from two different directions, obtaining binocular matching results of each key object in the binocular natural images, and determining a visual odometry from the binocular matching results of each key object;
determining a camera coordinate system from the visual odometry, and obtaining first coordinates of each key object in the camera coordinate system;
converting each first coordinate into a robot coordinate system to obtain second coordinates of each key object in the robot coordinate system; and
obtaining constraints from target operation requirements and performing nonlinear quadratic programming based on at least one second coordinate and the constraints to obtain a path trajectory, the path trajectory being used to control the robot to execute, along the path trajectory, a preset operation corresponding to the target operation requirements.
This application also provides a computer-readable storage medium on which a computer program is stored, the computer program implementing the steps of the above method when executed by a processor.
This application also provides a robot control system, the system comprising:
a console trolley, used to acquire binocular natural images obtained by photographing the target part from two different directions, obtain binocular matching results of each key object in the binocular natural images, and determine a visual odometry from the binocular matching results of the key objects; determine a camera coordinate system from the visual odometry and obtain first coordinates of each key object in the camera coordinate system; convert each first coordinate into the robot coordinate system to obtain second coordinates of each key object in the robot coordinate system; and obtain constraints from the target operation requirements and perform nonlinear quadratic programming based on at least one second coordinate and the constraints to obtain a path trajectory;
a robotic arm, mounted on the console trolley and used to execute, along the path trajectory, the preset operation corresponding to the target operation requirements; and
an end effector, mounted at the end of the robotic arm and used to move with the robotic arm and execute, along the path trajectory, the preset operation corresponding to the target operation requirements.
With the above robot control method, system and computer program product, binocular natural images obtained by photographing the target part from two different directions are acquired, binocular matching results of each key object in the binocular natural images are obtained, and the visual odometry is determined from these matching results; the camera coordinate system is determined from the visual odometry and the first coordinates of each key object in the camera coordinate system are obtained; each first coordinate is converted into the robot coordinate system to obtain the second coordinates of each key object in the robot coordinate system; constraints are obtained from the target operation requirements, and nonlinear quadratic programming is performed based on at least one second coordinate and the constraints to obtain a path trajectory, which is used to control the robot to execute, along the path trajectory, the preset operation corresponding to the target operation requirements. In this way, the key objects are detected by binocular vision and their second coordinates relative to the robot coordinate system are calculated, so the path trajectory can be determined from the second coordinates of multiple key objects and the robot can be controlled to execute the preset operation automatically along it. No manual adjustment of the robot posture is needed, which lowers the difficulty of operating the robot and improves its working efficiency.
附图说明
图1为一个实施例中机器人控制方法的流程示意图;
图2为一个实施例中视觉里程计的特征点法流程示意图;
图3为一个实施例中视觉里程计的光流追踪法流程示意图;
图4为一个实施例中双目标定的流程示意图;
图5为一个实施例中双目标定的几何关系示意图;
图6为一个实施例中手眼标定的结构示意图;
图7为一个实施例中手眼标定的流程示意图;
图8为一个实施例中坐标系转换的流程示意图;
图9为一个实施例中路径轨迹规划的流程示意图;
图10为另一个实施例中路径轨迹规划的流程示意图;
图11为一个实施例中安全检测的流程示意图;
图12为一个实施例中毛囊移植机器人的使用场景示意图;
图13为一个实施例中毛囊提取机器人控制方法的流程示意图;
图14为一个实施例中毛囊种植机器人控制方法的流程示意图;
图15为一个实施例中自动毛囊移植机器人控制系统的结构框图;
图16为另一个实施例中自动毛囊移植机器人控制系统的结构框图;
图17为一个实施例中状态空间控制器单元的设计结构示意图;
图18为另一个实施例中状态空间控制器单元的设计结构示意图;
图19为一个实施例中PBVS控制器的结构示意图;
图20为一个实施例中IBVS控制器的结构示意图;
图21为一个实施例中IBVS控制器的数据处理示意图;
图22为一个实施例中机器人控制装置的结构框图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请实施例提供的机器人控制方法,可以应用于机器人,机器人至少包括控制器和末端执行器。机器人是自动执行工作的机器装置。它既可以接受人类指挥,又可以运行预先编排的程序,也可以根据以人工智能技术制定的原则纲领行动。机器人的任务是协助或取代人类的工作,例如生产、建筑、医疗等工作。
在一个实施例中,如图1所示,提供了一种机器人控制方法,以该方法应用于植发手术机器人为例进行说明,包括以下步骤:
步骤102,获取从两个不同方位对目标部位进行拍摄所得到的双目自然图像,并获取所述双目自然图像中的各关键对象的双目匹配结果,根据各所述关键对象的双目匹配结果确定视觉里程计。
其中,目标部位优选为患者的头部,当然在本申请的其他实施例的应用场景下,也可以是患者的其他身体部位,本申请中对此不做限定。双目自然图像可以是采用双目相机对目标部位进行拍摄得到,也可以是采用两个单目相机从两个不同方位对目标部位进行拍摄得到,本实施例中亦对此不做限定。
在一实施例中,通过相机,按照预设拍摄周期获取目标部位的双目自然图像。
步骤104,根据视觉里程计确定相机坐标系,并获取各所述关键对象分别在相机坐标系中的第一坐标。
其中,关键对象是指具有预设特征的对象,目标图像中的关键对象的数量可以是多个。其中,关键对象比如可以是待进行异常检测、手术或者勾画的具体对象,例如,在毛囊移植操作中,一个毛囊就相当于一个关键对象。
在一实施例中,通过识别双目自然图像中的各特征点,并通过双目匹配确定出各特征点对应的各关键对象,然后采用视觉里程计计算出各关键对象对应的深度信息,从而确定各关键对象分别在相机坐标系(三维笛卡尔坐标系)中的第一坐标。
步骤106,分别将各第一坐标转换到机器人坐标系中,得到各关键对象分别在机器人坐标系中的第二坐标。
在一实施例中,根据拍摄双目自然图像的相机和机器人的相对位置,确定第一坐标转换矩阵(相当于齐次变换矩阵),并通过第一坐标转换矩阵将相机坐标系中的各第一坐标转换到机器人坐标系中,得到各关键对象分别在机器人坐标系中的第二坐标。机器人坐标系和相机坐标系均属于三维笛卡尔坐标系,只是参考坐标系不同。
步骤108,基于目标操作需求获取约束条件,根据至少一个第二坐标和所述约束条件进行非线性二次规划,得到路径轨迹,所述路径轨迹用于控制机器人按照所述路径轨迹执行与所述目标操作需求对应的预设操作。
在一实施例中,基于确定的多个关键对象的第二坐标,规划出机器人的路径轨迹,路径轨迹中机器人的速度、加速度等控制参数需要平滑且满足安全要求。按照预设拍摄周期实时获取双目自然图像,并处理双目自然图像得到路径轨迹,控制器在下一拍摄周期获取到新的双目自然图像之后,会再次处理新的双目自然图像得到新的路径轨迹,并根据新的路径轨迹对原先拍摄周期得到的路径轨迹进行更新,控制机器人的末端执行器按照实时更新的路径轨迹执行预设操作。
上述机器人控制方法中,获取从两个不同方位对目标部位进行拍摄所得到的双目自然图像;识别双目自然图像中的各关键对象,并获取各关键对象分别在相机坐标系中的第一坐标;分别将各第一坐标转换到机器人坐标系中,得到各关键对象分别在机器人坐标系中的第二坐标;根据至少一个第二坐标获取路径轨迹,路径轨迹用于控制机器人按照路径轨迹执行预设操作。这样,通过双目视觉技术检测关键对象,并计算出关键对象相对于机器人坐标系的第二坐标,就能根据多个关键对象的第二坐标确定路径轨迹,控制机器人按照路径轨迹自动执行预设操作。并且当关键对象的位置发生变化,第一坐标、第二坐标也会通过实时的计算进行更新,从而保证路径轨迹持续更新。无需手动调整机器人姿态,能够降低机器人的操作难度并提高机器人的工作效率和路径准确性。
在一个实施例中,双目自然图像包括左目图像和右目图像,识别双目自然图像中的各关键对象,并获取各关键对象分别在相机坐标系中的第一坐标,包括:对左目图像进行特征提取,以识别左目图像中各关键对象分别对应的左目特征点;对右目图像进行特征提取,以识别右目图像中各关键对象分别对应的右目特征点;基于双目标定参数,对至少一个左目特征点以及至少一个右目特征点进行双目匹配,得到至少一个特征点对;特征点对包括一个左目特征点和一个右目特征点;根据视觉里程计确定各特征点对对应的深度信息,并根据深度信息获取各特征点对在相机坐标系中的三维坐标,将三维坐标作为第一坐标。
在一实施例中,首先对双目自然图像分别进行特征提取,提取关键对象的特征信息,通过定义客制化的关键点和描述子来获得每个关键对象的ORB(oriented FAST and rotated BRIEF)特征,根据尺度不变性来构建图像金字塔对不同分辨率下的图像信息降采样,从而提取到ORB特征点。然后对完成了特征提取的同一帧下的左、右目图像,基于双目标定得到的内外参、本质矩阵、基础矩阵等双目标定参数进行极线矫正与全局特征匹配。最后采用视觉里程计,通过三角测量原理对完成了同一帧下的左、右目图像估计深度信息,以及计算相机坐标系中的空间位姿信息,空间位姿信息通过三维坐标的形式体现。
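作为上述"ORB特征提取、双目匹配与三角测量"流程的一个最简示意,下面给出基于OpenCV的示例代码(仅为示意性草稿:其中内参矩阵、基线长度与图像路径均为示例性假设,实际数值应由双目标定与图像采集流程给出):

```python
import cv2
import numpy as np

# Load one pair of left/right frames (paths are illustrative placeholders)
img_l = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img_r = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# 1. ORB feature extraction (oriented FAST keypoints + rotated BRIEF descriptors)
orb = cv2.ORB_create(nfeatures=2000)
kp_l, des_l = orb.detectAndCompute(img_l, None)
kp_r, des_r = orb.detectAndCompute(img_r, None)

# 2. Binocular matching with Hamming distance and cross check
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)

# 3. Illustrative rectified projection matrices P = K [R | t]; here an identity
#    rotation and a 0.1 m baseline stand in for the stereo calibration result.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# 4. Triangulation of the matched feature pairs
pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches]).T   # 2xN
pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches]).T   # 2xN
pts_4d = cv2.triangulatePoints(P1, P2, pts_l, pts_r)           # 4xN homogeneous
first_coords = (pts_4d[:3] / pts_4d[3]).T                      # Nx3, camera frame
```

上述first_coords即为各特征点对在相机坐标系中的三维坐标,对应上文所述的第一坐标。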
在一个可行的实施方式中,双目立体视觉是由一对单目2D相机构成,其实质是一个视觉里程计的核心问题,即如何根据图像估计相机相对运动。可以采用如图2所示的特征点法。通过设计提取关键点和描述子的方法实现图像特征匹配,同时又通过灰度质心法、降采样的方法引入了特征的旋转性和尺度不变性。进而估计出相机的相对运动。使用特征点时,忽略了除特征点以外的所有信息,提取符合关键对象的特征。
还可以采用如图3所示的光流追踪法,通过计算多层稀疏光流,求解光度误差的优化问题从而估计相机相对运动。在保留了计算关键点的同时使用了光流追踪替换描述子来达到相机相对运动估计与双目匹配的目的。优势是能够省去计算特征点、描述子的时间,但可能存在非凸性的问题。
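下面给出光流追踪法的一个最简示意(仅为示意性草稿,图像路径与各参数均为示例性假设):仍然检测关键点,但用金字塔LK稀疏光流将左目点追踪到右目,以替代描述子匹配:

```python
import cv2
import numpy as np

img_l = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)   # illustrative paths
img_r = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Keypoints are still detected (Shi-Tomasi corners here), but descriptors are
# replaced by pyramidal Lucas-Kanade optical flow tracking from left to right.
pts_l = cv2.goodFeaturesToTrack(img_l, maxCorners=1000, qualityLevel=0.01, minDistance=7)
pts_r, status, err = cv2.calcOpticalFlowPyrLK(
    img_l, img_r, pts_l, None,
    winSize=(21, 21), maxLevel=3)          # multi-level (pyramidal) sparse flow

# Keep only successfully tracked pairs; they play the role of the matched
# feature pairs used later for triangulation and motion estimation.
good_l = pts_l[status.ravel() == 1].reshape(-1, 2)
good_r = pts_r[status.ravel() == 1].reshape(-1, 2)
```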
在一实施例中,搭建立体视觉的前提是需要对相机进行双目标定,得到相机的内外参。如图4所示,首先进行单目内参标定及畸变矫正,对左、右相机分别进行内参标定与畸变矫正,从而得到对应的内参矩阵和畸变参数。接着进行标定板角点特征提取,可以采用上述视觉里程计中的特征点法。然后根据对极约束进行匹配,双目标定的对极几何关系如图5所示,O1和O2为左右相机中心,考虑识别对象在像素平面I1和I2对应的特征点为p1和p2,如果匹配成功,说明它们确实是同一空间点在两个成像平面上的投影,构成对极约束。进一步解算外参矩阵、基础矩阵和本质矩阵,设左、右相机之间的相对运动的旋转矩阵为R,平移矩阵为t。由对极约束可求出对应的基础矩阵和本质矩阵,从而进一步解得左、右相机的相对位姿关系,记录为外参矩阵。最后输出所记录的内外参,完成双目标定。
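下面给出双目标定流程的一个示意性草稿(仅作说明):为使示例可独立运行,标定板角点数据由假设的真值合成;实际使用时应由cv2.findChessboardCorners等角点检测从真实图像中获得:

```python
import cv2
import numpy as np

# Synthesize calibration data for illustration: a 9x6 chessboard (25 mm squares)
# observed by two cameras with an assumed intrinsic matrix and an 8 cm baseline.
board_w, board_h, square = 9, 6, 0.025
objp = np.zeros((board_w * board_h, 3), np.float32)
objp[:, :2] = np.mgrid[0:board_w, 0:board_h].T.reshape(-1, 2) * square

K_true = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R_lr = cv2.Rodrigues(np.array([[0.0], [0.02], [0.0]]))[0]     # small yaw between cameras
t_lr = np.array([[-0.08], [0.0], [0.0]])                      # 8 cm baseline
image_size = (640, 480)

obj_points, img_points_l, img_points_r = [], [], []
for i in range(12):                                           # 12 board poses
    rvec = np.array([[0.1 * i], [-0.05 * i], [0.02 * i]])
    tvec = np.array([[-0.1], [-0.05], [0.6 + 0.02 * i]])
    pl, _ = cv2.projectPoints(objp, rvec, tvec, K_true, None)
    R_b = cv2.Rodrigues(rvec)[0]
    pr, _ = cv2.projectPoints(objp, cv2.Rodrigues(R_lr @ R_b)[0], R_lr @ tvec + t_lr, K_true, None)
    obj_points.append(objp)
    img_points_l.append(pl.astype(np.float32))
    img_points_r.append(pr.astype(np.float32))

# 1. Monocular intrinsic calibration and distortion estimation for each camera
_, K_l, dist_l, _, _ = cv2.calibrateCamera(obj_points, img_points_l, image_size, None, None)
_, K_r, dist_r, _, _ = cv2.calibrateCamera(obj_points, img_points_r, image_size, None, None)

# 2. Stereo calibration: relative pose (R, t) of the right camera with respect
#    to the left one, plus the essential matrix E and fundamental matrix F
_, K_l, dist_l, K_r, dist_r, R, t, E, F = cv2.stereoCalibrate(
    obj_points, img_points_l, img_points_r,
    K_l, dist_l, K_r, dist_r, image_size, flags=cv2.CALIB_FIX_INTRINSIC)

# 3. Epipolar rectification used later for row-aligned binocular matching
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K_l, dist_l, K_r, dist_r, image_size, R, t)
```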
本实施例中,对左目图像进行特征提取,以识别左目图像中各关键对象分别对应的左目特征点;对右目图像进行特征提取,以识别右目图像中各关键对象分别对应的右目特征点;基于双目标定参数,对至少一个左目特征点以及至少一个右目特征点进行双目匹配,得到至少一个特征点对;特征点对包括一个左目特征点和一个右目特征点;根据视觉里程计确定各特征点对对应的深度信息,并根据深度信息获取各特征点对在相机坐标系中的三维坐标,将三维坐标作为第一坐标。能够基于双目视觉自动检测出关键对象的位置坐标。
在一个实施例中,分别将各第一坐标转换到机器人坐标系中,得到各关键对象分别在机器人坐标系中的第二坐标,包括:根据双目相机和机器人的位置关系确定手眼标定参数;双目相机是拍摄双目自然图像的相机;基于手眼标定参数确定第一坐标转换矩阵,根据第一坐标转换矩阵对各第一坐标进行计算,得到与各第一坐标分别对应的第二坐标,作为各关键对象分别在机器人坐标系中的第二坐标。
在一实施例中,将相机安装在机器人的机械臂末端上,使相机随着机械臂一起运动。手眼标定解算出机械臂法兰盘坐标系到相机坐标系的转换关系。定义各坐标系:机械臂基座坐标系、机械臂法兰盘坐标系、相机坐标系以及标定板坐标系。如图6所示,选取多组位置拍摄标定板,记录机械臂、相机位姿。对于多组拍摄,存在坐标关系:机械臂基座坐标系到标定板坐标系的转换,可以分解为机械臂基座坐标系到机械臂法兰盘坐标系的转换,乘以机械臂法兰盘坐标系到相机坐标系的转换,再乘以相机坐标系到标定板坐标系的转换。
如图7所示,记录20-30组数据;A矩阵记录了机械臂法兰盘相邻位姿的转换关系,B矩阵记录了相机相邻位姿的运动估计;建立AX=XB关系;通过手眼标定算法Tsai Lenz的方法求解最优化问题;能够计算得到法兰盘坐标系到相机坐标系的第一坐标转换矩阵X。法兰盘坐标系就能够作为机器人坐标系。通过第一坐标转换矩阵X就能计算出第一坐标对应的第二坐标。
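下面给出手眼标定AX=XB求解的一个示意性草稿(仅作说明):机械臂位姿与标定板观测由假设的真值合成,实际应分别来自机械臂控制器读数与标定板识别结果:

```python
import cv2
import numpy as np
from scipy.spatial.transform import Rotation as Rot

def make_T(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.ravel(t)
    return T

def to_Rt(T):
    return T[:3, :3], T[:3, 3:4]

rng = np.random.default_rng(0)

# Ground-truth hand-eye transform X (camera pose in the flange frame) and a
# fixed calibration-board pose in the robot base frame; both are illustrative.
T_flange_cam = make_T(Rot.from_euler("xyz", [5, -3, 10], degrees=True).as_matrix(), [0.03, 0.00, 0.08])
T_base_board = make_T(Rot.from_euler("xyz", [180, 0, 0], degrees=True).as_matrix(), [0.5, 0.0, 0.1])

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(25):                      # 20-30 robot poses, as described above
    T_base_flange = make_T(
        Rot.from_euler("xyz", rng.uniform(-30, 30, 3), degrees=True).as_matrix(),
        rng.uniform(-0.1, 0.1, 3) + np.array([0.4, 0.0, 0.4]))
    T_cam_board = np.linalg.inv(T_flange_cam) @ np.linalg.inv(T_base_flange) @ T_base_board
    R, t = to_Rt(T_base_flange); R_g2b.append(R); t_g2b.append(t)
    R, t = to_Rt(T_cam_board);   R_t2c.append(R); t_t2c.append(t)

# Solve AX = XB with the Tsai-Lenz method; the result is the first coordinate
# transformation matrix X from the flange frame to the camera frame.
R_cam2flange, t_cam2flange = cv2.calibrateHandEye(
    R_g2b, t_g2b, R_t2c, t_t2c, method=cv2.CALIB_HAND_EYE_TSAI)
X = make_T(R_cam2flange, t_cam2flange)
```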
进一步的,对于机器人机械臂的控制,虽然许多轨迹规划问题可以停留在笛卡尔空间,但最终还是需要落实到关节控制与电机驱动上。
如图6和图8所示,相机识别关键对象后,相机坐标系到关键对象坐标系的转换关系可以用第二坐标转换矩阵T_相机→对象表示;
手眼标定解算得到机械臂法兰盘坐标系到相机坐标系的第一坐标转换矩阵T_法兰盘→相机(即上文求解的矩阵X);
机械臂笛卡尔空间代表了机械臂基座坐标系到机械臂法兰盘坐标系的第三坐标转换矩阵T_基座→法兰盘。三者依次相乘,即可得到关键对象在机械臂基座坐标系下的位姿:T_基座→对象 = T_基座→法兰盘 · T_法兰盘→相机 · T_相机→对象。
机械臂逆运动学将笛卡尔空间转换为关节空间,得到各关节角数值,进而给到关节、电机控制。
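上述坐标链可以用齐次变换矩阵的连乘来示意(仅作说明,各矩阵数值为示例性假设):

```python
import numpy as np

def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    return (T @ np.append(p, 1.0))[:3]

# Illustrative matrices: in practice T_flange_cam comes from hand-eye calibration,
# and T_base_flange from the robot's forward kinematics at the current joint angles.
T_flange_cam = np.eye(4); T_flange_cam[:3, 3] = [0.03, 0.00, 0.08]    # first transform matrix X
T_base_flange = np.eye(4); T_base_flange[:3, 3] = [0.40, 0.00, 0.40]  # third transform matrix

p_cam = np.array([0.01, -0.02, 0.35])        # first coordinate of a key object (camera frame)
T_base_cam = T_base_flange @ T_flange_cam    # chain the transforms
p_base = transform_point(T_base_cam, p_cam)  # second coordinate in the robot (base) frame
# p_base would then be handed to inverse kinematics to obtain the joint angles.
```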
本实施例中,根据双目相机和机器人的位置关系确定手眼标定参数;双目相机是拍摄双目自然图像的相机;基于手眼标定参数确定第一坐标转换矩阵,根据第一坐标转换矩阵对各第一坐标进行计算,得到与各第一坐标分别对应的第二坐标,作为各关键对象分别在机器人坐标系中的第二坐标。能够根据关键对象在相机坐标系中的第一坐标位置计算得到关键对象在机器人坐标系中的第二坐标位置,便于后续控制机器人执行预设操作。
在一个实施例中,根据至少一个第二坐标和所述约束条件进行非线性二次规划,得到路径轨迹,包括: 根据各第二坐标分别确定一组机器人关节参数;一组机器人关节参数包括多个子关节参数,子关节参数用于控制机器人的各关节运动;根据至少一组机器人关节参数控制机器人的各关节运动,以实现机器人按照路径轨迹执行预设操作。
在一实施例中,根据逆运动学将机器人坐标系中的每个第二坐标解算到关节空间,根据每个第二坐标可以解算出一组机器人关节参数,一组机器人关节参数包括多个子关节参数,控制器根据每个子关节参数分别控制机器人的一个关节运动。
本实施例中,根据各第二坐标分别确定一组机器人关节参数;一组机器人关节参数包括多个子关节参数,子关节参数用于控制机器人的各关节运动;根据至少一组机器人关节参数控制机器人的各关节运动,以实现机器人按照路径轨迹执行预设操作。能够将第二坐标解算到机器人的关节空间中,得到机器人各关节的子关节参数,从而控制各关节按照各子关节参数运动,保证机器人按照路径轨迹执行预设操作。
在一个实施例中,方法还包括:根据目标部位的双目自然图像获取相机坐标系中的靶标坐标;根据靶标坐标确定靶标位姿偏差;根据靶标位姿偏差修正第二坐标。根据至少一个第二坐标获取路径轨迹,包括:根据至少一个修正后的第二坐标获取修正后的路径轨迹。
其中,靶标是放置在目标部位标定位置处的标志物,用于对目标部位或关键对象进行位置判定,当连续时刻下靶标位姿出现差异,说明目标部位的位置发生变化。
在一实施例中,相机控制器识别双目自然图像中的各特征点,并通过双目匹配确定出各特征点对应的靶标,然后采用视觉里程计计算出靶标对应的深度信息,从而确定靶标在相机坐标系中的第一靶标坐标。通过第一坐标转换矩阵将相机坐标系中的靶标坐标转换到机器人坐标系中,得到靶标在机器人坐标系中的第二靶标坐标。将前后连续两个拍摄周期得到的两个第二靶标坐标进行对比,得到靶标位姿偏差,然后根据靶标位姿偏差实时修正第二坐标,保证每个第二坐标与第二靶标坐标之间的距离参数始终保持不变。
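该修正过程可用如下最简示意表示(仅作说明,靶标位姿与路径点数值均为示例性假设):将相邻两个拍摄周期的靶标相对运动作用到各第二坐标上,使其相对靶标的位置保持不变:

```python
import numpy as np

def correct_points(T_target_prev, T_target_curr, points_prev):
    """Re-express key-object points after the target (marker) has moved.

    T_target_prev / T_target_curr: 4x4 marker poses in the robot frame at two
    consecutive shooting periods; points_prev: Nx3 second coordinates planned
    against the previous marker pose. The marker's relative motion is applied
    so that each point keeps a fixed offset from the marker.
    """
    delta = T_target_curr @ np.linalg.inv(T_target_prev)   # marker motion between periods
    homog = np.hstack([points_prev, np.ones((len(points_prev), 1))])
    return (delta @ homog.T).T[:, :3]

# Illustrative usage: the target shifted 5 mm along x between two periods
T_prev = np.eye(4)
T_curr = np.eye(4); T_curr[0, 3] = 0.005
corrected = correct_points(T_prev, T_curr, np.array([[0.50, 0.00, 0.10]]))
```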
进一步的,根据实时调整的第二坐标不断的修正路径轨迹,保证机器人真正能够在目标部位上按照规划的路径轨迹完成预设操作。轨迹规划设计一般需要先确定空间中的离散路径点(即第二坐标),也就是路径规划(由视觉里程计和坐标系转换单元确定),而由于路径点比较稀疏且不带时间信息,需要规划一条平滑的曲线(根据控制周期形成稠密的轨迹点)穿过这些路径点,且按时间分布,每个轨迹点的位置、速度、加速度、jerk(位置的三阶导数)、snap(位置的四阶导数)皆可知。
在一个可行的实施方式中,路径轨迹的规划采用Minimum-jerk轨迹规划,如图9所示。根据输入的路径点表示轨迹关于时间的函数(一般用n阶多项式)。对轨迹函数作k阶微分,得到轨迹导数通项式,如速度、加速度、jerk等。复杂的轨迹需要多段多项式组成(分段函数),如分为m段。基于minimum-jerk约束条件确定多项式阶次,此处为n=5阶,根据分段轨迹,共存在6*m个未知系数。构造目标函数。添加边界条件:导数约束与连续性约束。求解最优化问题,解得6*m个未知系数,确定轨迹。
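作为上述思路的一个最简示意,下面给出单段五次多项式在给定位置、速度、加速度边界条件下的求解(仅作说明;多段情形需按上文引入导数约束与连续性约束,求解6*m个系数的最优化问题):

```python
import numpy as np

def quintic_segment(p0, pT, T, v0=0.0, vT=0.0, a0=0.0, aT=0.0):
    """Coefficients of the 5th-order (minimum-jerk style) polynomial
    p(t) = c0 + c1 t + ... + c5 t^5 satisfying position / velocity /
    acceleration boundary conditions at t = 0 and t = T."""
    A = np.array([
        [1, 0, 0,    0,      0,       0],
        [0, 1, 0,    0,      0,       0],
        [0, 0, 2,    0,      0,       0],
        [1, T, T**2, T**3,   T**4,    T**5],
        [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4],
        [0, 0, 2,    6*T,    12*T**2, 20*T**3],
    ], dtype=float)
    b = np.array([p0, v0, a0, pT, vT, aT], dtype=float)
    return np.linalg.solve(A, b)

# One segment between two path points (a single Cartesian axis), 2 s duration
c = quintic_segment(p0=0.10, pT=0.25, T=2.0)
t = np.linspace(0.0, 2.0, 200)                           # dense trajectory samples
pos = sum(c[k] * t**k for k in range(6))                 # position profile
vel = sum(k * c[k] * t**(k - 1) for k in range(1, 6))    # velocity profile
```

下文的minimum-snap规划结构与此类似,仅将多项式升为七阶、未知系数增至8*m个。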
在另一个可行的实施方式中,路径轨迹的规划采用Minimum-snap轨迹规划,如图10所示。根据输入的路径点表示轨迹关于时间的函数(一般用n阶多项式)。对轨迹函数作k阶微分,得到轨迹导数通项式如速度、加速度、jerk、snap等。复杂的轨迹需要多段多项式组成(分段函数),如分为m段。基于minimum-snap约束条件确定多项式阶次,此处为n=7阶,根据分段轨迹,共存在8*m个未知系数。构造目标函数。添加边界条件:导数约束与连续性约束。求解最优化问题,解得8*m个未知系数,确定轨迹。
本实施例中,根据目标部位的双目自然图像获取相机坐标系中的靶标坐标;根据靶标坐标确定靶标位姿偏差;根据靶标位姿偏差修正第二坐标;根据至少一个修正后的第二坐标获取修正后的路径轨迹。能够保证机器人会随着目标部位的位置变化而自动更新路径轨迹,从而使机器人不受目标部位位置变化的影响完成预设操作。
在一个实施例中,方法还包括:按照预设周期检测机器人的运行参数;在运行参数满足预设故障条件的情况下,获取运行参数对应的故障类型;根据故障类型对机器人执行相应类别的停机操作。
在一实施例中,如图11所示,机器人作业过程中,控制器可以间隔预设周期对机器人的运动性能实时跟踪监测,例如,可以每间隔0.5秒检测一次,检测的运行参数可以包括:
(1)位置检测:包含笛卡尔空间位置超限检测、关节空间位置超限检测、笛卡尔空间位姿偏差超限检测和关节空间位姿偏差超限检测。
(2)速度检测:包含笛卡尔空间速度超限检测、关节空间速度超限检测、笛卡尔空间速度偏差超限检测和关节空间速度偏差超限检测。
(3)加速度检测:包含笛卡尔空间加速度超限检测、关节空间加速度超限检测、笛卡尔空间加速度偏差超限检测和关节空间加速度偏差超限检测。
(4)外力检测:包括笛卡尔空间末端外力超限检测和关节空间外力超限检测。
(5)扭矩检测:关节空间扭矩超限检测和关节空间扭矩偏差超限检测。
以上检测均能从机器人返回相应的故障码,控制器根据故障码确定出故障类别以及故障关节部位,进行相应类别的停机操作。
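下面给出周期性安全检测的一个示意性框架(仅作说明,其中阈值、故障码以及read_robot_state、stop_robot等接口均为示例性假设,并非本申请限定的实现):

```python
import time

# Illustrative limits and fault codes; real values would come from the robot's
# safety specification. Each entry maps a monitored parameter to its limit.
LIMITS = {
    "joint_velocity":  {"max": 1.5,  "code": 0x21, "action": "soft_stop"},
    "joint_torque":    {"max": 40.0, "code": 0x51, "action": "emergency_stop"},
    "cartesian_force": {"max": 30.0, "code": 0x41, "action": "emergency_stop"},
}

def check_once(readings):
    """readings: dict mapping parameter name to its measured value."""
    faults = []
    for name, spec in LIMITS.items():
        if abs(readings.get(name, 0.0)) > spec["max"]:
            faults.append((spec["code"], spec["action"]))
    return faults

def safety_loop(read_robot_state, stop_robot, period=0.5):
    """Poll the robot every `period` seconds and trigger the matching shutdown."""
    while True:
        faults = check_once(read_robot_state())
        for code, action in faults:
            stop_robot(action, fault_code=code)   # category-specific shutdown
            return
        time.sleep(period)
```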
本实施例中,通过按照预设周期检测机器人的运行参数;在运行参数满足预设故障条件的情况下,获取运行参数对应的故障类型;根据故障类型对机器人执行相应类别的停机操作。能够提供完善的安全检测方案,使得机器人作业过程更准确、更安全。
在一个实施例中,一种机器人控制方法,以该方法应用于毛囊移植机器人为例,毛囊移植机器人的使用场景如图12所示,可包含:机器人控制系统和座椅等。机器人控制系统可进行自动移植操作,也可以在医生的监控下进行移植操作。机器人控制系统包括如图所示的机械臂、控制台车、末端执行机构。控制台车用于获取从两个不同方位对目标部位进行拍摄所得到的双目自然图像,并获取所述双目自然图像中的各关键对象的双目匹配结果,根据各所述关键对象的双目匹配结果确定视觉里程计;根据视觉里程计确定相机坐标系,并获取各所述关键对象分别在相机坐标系中的第一坐标;分别将各第一坐标转换到机器人坐标系中,得到各关键对象分别在机器人坐标系中的第二坐标;基于目标操作需求获取约束条件,根据至少一个第二坐标和约束条件进行非线性二次规划,得到路径轨迹。机械臂安装于控制台车上,用于按照路径轨迹执行与目标操作需求对应的预设操作。末端执行机构安装于机械臂的末端,用于随机械臂运动,按照路径轨迹执行与目标操作需求对应的预设操作。
在一实施例中,该机器人控制系统还包括相机模块,其安装于末端执行机构内部,用于随末端执行机构运动,并获取双目自然图像。相机模块与机器人运动模块均通过控制台车内的上位机控制。毛囊移植机器人可用于进行毛囊提取或毛囊种植,毛囊相当于关键对象。
在一实施例中,控制台车还包括视觉伺服单元,其用于根据目标部位的双目自然图像获取相机坐标系中的靶标坐标,根据靶标坐标确定靶标位姿偏差,根据靶标位姿偏差修正第二坐标,根据至少一个修正后的第二坐标和约束条件进行非线性二次规划,得到修正后的路径轨迹。
在一实施例中,控制台车还包括安全检测单元,其用于按照预设周期检测机械臂的运行参数,在运行参数满足预设故障条件的情况下,获取运行参数对应的故障类型,根据故障类型对机械臂执行相应类别的停机操作。
在一个可行的实施方式中,如图13所示,一种毛囊提取机器人控制方法,包括:实时采集术中自然图像,并进行二维图像特征提取与毛囊单元识别,通过双目匹配、极线矫正、三角测量、深度估计完成术中三维图像的生成。对图像坐标系进行转换,从图像笛卡尔空间转换至机械臂关节空间,对转换后的路点自动生成实时规划的轨迹,同时通过自适应调整进针姿态参数,末端执行器可以自动进行毛囊环切提取,直至完成计划数量的毛囊提取,结束毛囊提取。
在另一个可行的实施方式中,如图14所示,一种毛囊种植机器人控制方法,包括:导入医生术前规划完成的毛囊种植孔位,实时采集术中自然图像,并进行二维图像特征提取与靶标识别(通过靶标查找毛囊种植孔位的位置),通过双目匹配、极线矫正、三角测量、深度估计完成术中三维图像的生成。再根据毛囊种植孔位相对于种植靶标坐标系的位置确定路径点。进而从图像笛卡尔空间转换至机械臂关节空间,自动生成实时规划的轨迹,同时通过自适应调整进针姿态参数,末端执行器可以自动进行打孔与毛囊种植,直至完成计划数量的毛囊种植,结束毛囊种植。
在一个实施例中,一种机器人控制方法,以该方法应用于如图15所示的自动毛囊移植机器人控制系统为例,系统包括:
视觉模块,用于拍摄双目自然图像并输出关键对象、靶标的三维信息给运动控制模块。
运动控制模块,用于根据三维信息自动规划机器人的操作路径轨迹,并在机器人作业时进行安全检测。
辅助模块,用于配置视觉模块和运动控制模块中涉及的相关参数,以及配置系统的信号响应。
具体的,视觉模块还包括单目图像采集单元、单目特征提取单元、双目匹配单元、视觉里程计单元和数据存储单元。
单目图像采集单元用于获取从两个不同方位对目标部位进行拍摄所得到的双目自然图像;双目自然图像包括左目图像和右目图像。
单目特征提取单元用于对左目图像进行特征提取,以识别左目图像中各关键对象分别对应的左目特征点;以及对右目图像进行特征提取,以识别右目图像中各关键对象分别对应的右目特征点。
双目匹配单元用于基于双目标定参数,对至少一个左目特征点以及至少一个右目特征点进行双目匹配,得到至少一个特征点对;特征点对包括一个左目特征点和一个右目特征点。
视觉里程计单元用于根据视觉里程计确定各特征点对对应的深度信息,并根据深度信息获取各特征点对在相机坐标系中的三维坐标,将三维坐标作为第一坐标。
数据存储单元用于存储单目图像采集单元采集的双目自然图像。
具体的,运动控制模块还包括坐标系转换单元、轨迹规划单元、操作执行单元和安全检测单元。
坐标系转换单元用于根据双目相机和机器人的位置关系确定手眼标定参数;双目相机是拍摄双目自然图像的相机;基于手眼标定参数确定第一坐标转换矩阵,根据第一坐标转换矩阵对各第一坐标进行计算,得到与各第一坐标分别对应的第二坐标,作为各关键对象分别在机器人坐标系中的第二坐标。根据各第二坐标分别确定一组机器人关节参数;一组机器人关节参数包括多个子关节参数,子关节参数用于控制机器人的各关节运动。
轨迹规划单元用于根据至少一个第二坐标获取路径轨迹,并根据至少一组机器人关节参数控制机器人的各关节运动,以实现机器人按照路径轨迹执行预设操作。
操作执行单元用于按照路径轨迹执行预设操作。
安全检测单元用于按照预设周期检测机器人的运行参数;在运行参数满足预设故障条件的情况下,获取运行参数对应的故障类型;根据故障类型对机器人执行相应类别的停机操作。
具体的,辅助模块还包括手眼标定辅助单元、状态空间控制器单元、视觉伺服控制器单元和双目标定辅助单元。
手眼标定辅助单元用于配置手眼标定参数。
状态空间控制器单元用于保证机器人的运动控制精度、稳定性与鲁棒性。
视觉伺服控制器单元用于视觉与运动控制,提高手眼协调的性能与安全。根据目标部位的双目自然图像获取相机坐标系中的靶标坐标;根据靶标坐标确定靶标位姿偏差;根据靶标位姿偏差修正第二坐标;再根据至少一个修正后的第二坐标获取修正后的路径轨迹。
双目标定辅助单元用于配置双目标定参数。
在一个可行的实施方式中,如图16所示,自动毛囊移植机器人控制系统还可以包括人机交互模块,人机交互模块配置有显示设备与交互软件。
人机交互模块用于通过与单目特征提取单元进行交互,用户可自主、半自主地设计特征点取法密度与区域。
人机交互模块还用于通过与轨迹规划单元交互,用户可自主、半自主地设计路径轨迹,从而自主设计种植孔位与所构成的发型。
人机交互模块还用于通过与坐标系转换单元交互,由用户人为暂停或控制操作过程;此时机器人仅自动采集并处理视觉图像,停止自动进行坐标转换和路径规划。
人机交互模块还用于通过与数据存储单元交互,用户可以查看数据存储单元存储的数据。
在一个可行的实施方式中,如图17所示,状态空间控制器单元主要由积分控制器、被控对象与全状态反馈控制律组成。全状态反馈控制是指对于具有二次型性能函数的多维耦合的调节对象,通过求解有关的Riccati矩阵微分方程来设计最优调节结构的方法。通过同时反馈系统输出与状态量的方法,实现极点任意配置来获取控制律K,从而调整系统特性,使之达到最优性能。具体表现为改变系统的动态响应、抗扰动能力,进一步提升系统稳定性。由于引入了全状态反馈控制,故系统状态(state)扩张出了误差状态量,并通过极点配置来影响系统的特征向量(eigenvector)与特征值(eigenvalue),从而可设计地调整系统特性,使之达成毛囊移植机器人的最优性能。积分控制器的引入也可以很好地消除稳态误差,提高系统精度。
在另一个可行的实施方式中,如图18所示,状态空间控制器单元主要由积分控制器、被控对象、状态观测器、全状态反馈控制律组成。采用极点配置的方法来获取控制律K,从而调整系统特性,使之达到最优性能。在此之上,为了增加系统的鲁棒性与进一步减小系统稳态误差,增加了状态观测器和积分控制器。由于引入了全状态反馈控制与状态观测器,故系统状态(state)扩张出了估计状态量与误差状态量。状态观测器的加入很好地弥补了一些状态量无法被完全检测到时的问题,全状态反馈又能通过极点配置,来影响系统的特征向量(eigenvector)与特征值(eigenvalue),从而可设计地调整系统特性,使之达成毛囊移植机器人的最优性能。
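下面以一个二阶单轴模型为例,示意全状态反馈增益K(由二次型性能指标对应的Riccati方程求解)与状态观测器增益L(由极点配置求解)的计算(仅作说明,模型与权重矩阵均为示例性假设):

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.signal import place_poles

# Illustrative double-integrator axis model, state x = [position, velocity]
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Full-state feedback gain K from the quadratic cost (algebraic Riccati equation)
Q = np.diag([100.0, 1.0])
R = np.array([[0.01]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

# Luenberger observer gain L via pole placement, for states that cannot be
# measured directly
L = place_poles(A.T, C.T, [-20.0, -25.0]).gain_matrix.T

def controller_step(x_hat, y_meas, x_ref, dt=0.001):
    """One observer-based full-state-feedback step (Euler discretization)."""
    u = -K @ (x_hat - x_ref)                                  # full-state feedback law
    x_hat_dot = A @ x_hat + B @ u + L @ (y_meas - C @ x_hat)  # observer update
    return u, x_hat + dt * x_hat_dot

u, x_hat_next = controller_step(np.zeros(2), y_meas=0.05, x_ref=np.array([0.1, 0.0]))
```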
在一个可行的实施方式中,如图19所示,视觉伺服控制器单元通过PBVS(position based visual-servoing)控制器来实现,使反馈得到的实际位姿与期望位姿的稳态误差快速衰减为零,使系统在无超调量的前提下以很小的调整时间达到系统响应。其中反馈的实际位姿信息是通过视觉里程计计算出的靶标位姿得到的。实时计算实际位姿与期望位姿的稳态误差,通过机器人的各关节控制器调整关节参数,从而使反馈得到的实际位姿与期望位姿的稳态误差快速衰减为零。能够解决患者在手术过程中晃动、抖动的问题,本申请通过设计视觉伺服控制器辅助运动控制模块,实时规划最优化轨迹。
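下面给出PBVS比例控制律的一个最简示意(仅作说明,位姿数值与增益均为示例性假设):根据实际位姿与期望位姿之差输出速度指令,使位姿误差指数衰减:

```python
import numpy as np
from scipy.spatial.transform import Rotation as Rot

def pbvs_velocity(T_curr, T_des, lam=1.0):
    """Proportional PBVS law: drive the end-effector pose T_curr (4x4, e.g. from
    the visual-odometry marker estimate) toward the desired pose T_des. Returns
    a 6-vector [vx, vy, vz, wx, wy, wz] expressed in the base frame."""
    T_err = np.linalg.inv(T_curr) @ T_des                   # desired pose in current frame
    v = lam * T_curr[:3, :3] @ T_err[:3, 3]                 # translational error, base frame
    w = lam * T_curr[:3, :3] @ Rot.from_matrix(T_err[:3, :3]).as_rotvec()
    return np.hstack([v, w])

# Illustrative usage: 1 cm offset along z and 5 degrees about x remain
T_curr = np.eye(4)
T_des = np.eye(4)
T_des[:3, :3] = Rot.from_euler("x", 5, degrees=True).as_matrix()
T_des[2, 3] = 0.01
twist = pbvs_velocity(T_curr, T_des, lam=0.5)
```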
在另一个可行的实施方式中,如图20所示,视觉伺服控制器单元通过IBVS(image based visual-servoing)控制器来实现,使反馈得到的实际图像特征与期望图像特征的稳态误差快速衰减为零,使系统在无超调量的前提下以很小的调整时间达到系统响应。其中反馈的实际图像特征信息是通过视觉里程计推导得到,省略了运动估计这一步骤,直接使用了图像特征,但相对地,IBVS控制器涉及到了图像雅可比矩阵的推导,将像素在世界坐标系下的速度矢量转换到了相机在世界坐标系下的速度矢量。如图21所示,结合双目视觉相机、视觉里程计与双目标定,获得对象的三维深度信息与相机内外参,这些参数也被用来推导图像雅可比矩阵。从而建立起了像素坐标系光流速度矢量与相机速度矢量之间的桥梁。通过图像雅可比矩阵可以基于速度环获得相机的运动状态,解算出机械臂的运动指令。
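下面给出IBVS中图像雅可比(交互矩阵)及相机速度求解的一个最简示意(仅作说明,特征点归一化坐标与深度均为示例性假设,深度可由上文双目三角测量得到):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian (interaction matrix) of one normalized image point (x, y)
    with depth Z: relates the point's image velocity to the camera's 6D velocity."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,       -(1.0 + x * x), y],
        [0.0,      -1.0 / Z, y / Z, 1.0 + y * y, -x * y,        -x],
    ])

def ibvs_velocity(feats, feats_des, depths, lam=0.5):
    """Stack the interaction matrices of all feature points and compute the
    camera velocity that drives the image-feature error to zero."""
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(feats, depths)])
    e = (np.asarray(feats) - np.asarray(feats_des)).reshape(-1)
    return -lam * np.linalg.pinv(L) @ e

# Illustrative usage with three features (normalized coordinates) and depths
feats     = [(0.10, 0.05), (-0.08, 0.02), (0.02, -0.12)]
feats_des = [(0.00, 0.00), (-0.10, 0.00), (0.00, -0.10)]
depths    = [0.35, 0.36, 0.34]
v_cam = ibvs_velocity(feats, feats_des, depths)
```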
应该理解的是,虽然如上所述的各实施例所涉及的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,如上所述的各实施例所涉及的流程图中的至少一部分步骤可以包括多个步骤或者多个阶段,这些步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤中的步骤或者阶段的至少一部分轮流或者交替地执行。
基于同样的发明构思,本申请实施例还提供了一种用于实现上述所涉及的机器人控制方法的机器人控制装置。该装置所提供的解决问题的实现方案与上述方法中所记载的实现方案相似,故下面所提供的一个或多个机器人控制装置实施例中的具体限定可以参见上文中对于机器人控制方法的限定,在此不再赘述。
在一个实施例中,如图22所示,提供了一种机器人控制装置220,包括:拍摄模块221、视觉模块222、转换模块223和控制模块224,其中:
拍摄模块221,用于获取从两个不同方位对目标部位进行拍摄所得到的双目自然图像。
视觉模块222,用于识别双目自然图像中的各关键对象,并获取各关键对象分别在相机坐标系中的第一坐标。
转换模块223,用于分别将各第一坐标转换到机器人坐标系中,得到各关键对象分别在机器人坐标系中的第二坐标。
控制模块224,用于根据至少一个第二坐标获取路径轨迹,路径轨迹用于控制机器人按照路径轨迹执行预设操作。
在一个实施例中,双目自然图像包括左目图像和右目图像,视觉模块222还用于对左目图像进行特征提取,以识别左目图像中各关键对象分别对应的左目特征点;对右目图像进行特征提取,以识别右目图像中各关键对象分别对应的右目特征点;基于双目标定参数,对至少一个左目特征点以及至少一个右目特征点进行双目匹配,得到至少一个特征点对;特征点对包括一个左目特征点和一个右目特征点;根据视觉里程计确定各特征点对对应的深度信息,并根据深度信息获取各特征点对在相机坐标系中的三维坐标,将三维坐标作为第一坐标。
在一个实施例中,转换模块223还用于根据双目相机和机器人的位置关系确定手眼标定参数;双目相机是拍摄双目自然图像的相机;基于手眼标定参数确定第一坐标转换矩阵,根据第一坐标转换矩阵对各第一坐标进行计算,得到与各第一坐标分别对应的第二坐标,作为各关键对象分别在机器人坐标系中的第二坐标。
在一个实施例中,控制模块224还用于根据各第二坐标分别确定一组机器人关节参数;一组机器人关节参数包括多个子关节参数,子关节参数用于控制机器人的各关节运动;根据至少一组机器人关节参数控制机器人的各关节运动,以实现机器人按照路径轨迹执行预设操作。
在一个实施例中,视觉模块222还用于根据目标部位的双目自然图像获取相机坐标系中的靶标坐标。
转换模块223还用于根据靶标坐标确定靶标位姿偏差;根据靶标位姿偏差修正第二坐标。
控制模块224还用于根据至少一个修正后的第二坐标获取修正后的路径轨迹。
在一个实施例中,控制模块224还用于按照预设周期检测机器人的运行参数;在运行参数满足预设故障条件的情况下,获取运行参数对应的故障类型;根据故障类型对机器人执行相应类别的停机操作。
上述机器人控制装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,还提供了一种计算机设备,包括存储器和处理器,存储器中存储有计算机程序,该处理器执行计算机程序时实现上述各方法实施例中的步骤。
在一个实施例中,提供了一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现上述各方法实施例中的步骤。
在一个实施例中,提供了一种计算机程序产品,包括计算机程序,该计算机程序被处理器执行时实现上述各方法实施例中的步骤。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一非易失性计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请的保护范围应以所附权利要求为准。

Claims (14)

  1. 一种机器人控制方法,包括:
    获取从两个不同方位对目标部位进行拍摄所得到的双目自然图像,并获取所述双目自然图像中的各关键对象的双目匹配结果,根据各所述关键对象的双目匹配结果确定视觉里程计;
    根据视觉里程计确定相机坐标系,并获取各所述关键对象分别在相机坐标系中的第一坐标;
    分别将各第一坐标转换到机器人坐标系中,得到各所述关键对象分别在机器人坐标系中的第二坐标;
    基于目标操作需求获取约束条件,根据至少一个第二坐标和所述约束条件进行非线性二次规划,得到路径轨迹,所述路径轨迹用于控制机器人按照所述路径轨迹执行与所述目标操作需求对应的预设操作。
  2. 根据权利要求1所述的方法,其中,所述获取所述双目自然图像中的各关键对象的双目匹配结果,包括:
    对所述左目图像进行特征提取,以识别所述左目图像中各关键对象分别对应的左目特征点;
    对所述右目图像进行特征提取,以识别所述右目图像中各关键对象分别对应的右目特征点;
    基于双目标定参数,对至少一个左目特征点以及至少一个右目特征点进行双目匹配,得到至少一个特征点对,将至少一个特征点对作为所述双目匹配结果;所述特征点对包括一个左目特征点和一个右目特征点。
  3. 根据权利要求1所述的方法,其中,所述根据视觉里程计确定相机坐标系,包括:
    根据所述视觉里程计获取双目相机空间位姿信息,并根据所述双目相机空间位姿信息确定所述相机坐标系。
  4. 根据权利要求1所述的方法,其中,所述获取各所述关键对象分别在相机坐标系中的第一坐标,包括:
    在所述相机坐标系中,通过三角测量计算各关键对象对应的深度信息;
    根据所述深度信息获取各关键对象在所述相机坐标系中的三维坐标,将所述三维坐标作为所述第一坐标。
  5. 根据权利要求1所述的方法,其中,所述分别将各第一坐标转换到机器人坐标系中,得到各所述关键对象分别在机器人坐标系中的第二坐标,包括:
    根据双目相机和机器人的位置关系确定手眼标定参数;所述双目相机是拍摄所述双目自然图像的相机;
    基于所述手眼标定参数确定第一坐标转换矩阵,根据所述第一坐标转换矩阵对各第一坐标进行计算,得到与各第一坐标分别对应的第二坐标,作为各所述关键对象分别在机器人坐标系中的第二坐标。
  6. 根据权利要求1所述的方法,其中,所述基于目标操作需求获取约束条件,根据至少一个第二坐标和所述约束条件进行非线性二次规划,得到路径轨迹,包括:
    根据至少一个第二坐标建立轨迹函数,以及获取轨迹分段数;
    对所述轨迹函数关于时间求导,得到轨迹导数通项式;
    根据所述轨迹分段数和所述约束条件,获取所述轨迹导数通项式对应的轨迹多项式;
    基于所述目标操作需求构建所述轨迹多项式的目标函数和边界条件;基于所述目标函数、所述边界条件和所述约束条件,求解所述轨迹多项式,得到所述路径轨迹。
  7. 根据权利要求1所述的方法,还包括:
    根据所述路径轨迹控制机器人执行预设操作,所述根据所述路径轨迹控制机器人执行预设操作,包括:
    根据所述路径轨迹中的各第二坐标分别确定一组机器人关节参数;所述一组机器人关节参数包括多个子关节参数,所述子关节参数用于控制所述机器人的各关节运动;
    根据至少一组机器人关节参数控制所述机器人的各关节运动,以实现所述机器人按照所述路径轨迹执行预设操作。
  8. 根据权利要求1所述的方法,还包括:
    根据目标部位的双目自然图像获取所述相机坐标系中的靶标坐标;
    根据所述靶标坐标确定靶标位姿偏差;
    根据所述靶标位姿偏差修正所述第二坐标;
    所述根据至少一个第二坐标和所述约束条件进行非线性二次规划,得到路径轨迹,包括:
    根据至少一个修正后的第二坐标和所述约束条件进行非线性二次规划,得到修正后的路径轨迹。
  9. 根据权利要求1至8中任一项所述的方法,还包括:
    按照预设周期检测所述机器人的运行参数;
    在所述运行参数满足预设故障条件的情况下,获取所述运行参数对应的故障类型;
    根据所述故障类型对所述机器人执行相应类别的停机操作。
  10. 一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现权利要求1至9中任一项所述的方法的步骤。
  11. 一种机器人控制系统,包括:
    控制台车,用于获取从两个不同方位对目标部位进行拍摄所得到的双目自然图像,并获取所述双目自然图像中的各关键对象的双目匹配结果,根据各所述关键对象的双目匹配结果确定视觉里程计;根据视觉里程计确定相机坐标系,并获取各所述关键对象分别在相机坐标系中的第一坐标;分别将各第一坐标转换到机器人坐标系中,得到各所述关键对象分别在机器人坐标系中的第二坐标;基于目标操作需求获取约束条件,根据至少一个第二坐标和所述约束条件进行非线性二次规划,得到路径轨迹;
    机械臂,其安装于所述控制台车上,用于按照所述路径轨迹执行与所述目标操作需求对应的预设操作;以及
    末端执行机构,其安装于所述机械臂的末端,用于随所述机械臂运动,按照所述路径轨迹执行与所述目标操作需求对应的预设操作。
  12. 根据权利要求11所述的系统,还包括:
    相机模块,安装于所述末端执行机构内部,用于随所述末端执行机构运动,并获取所述双目自然图像。
  13. 根据权利要求11所述的系统,其中,所述控制台车还包括:
    视觉伺服单元,用于根据目标部位的双目自然图像获取所述相机坐标系中的靶标坐标,根据所述靶标坐标确定靶标位姿偏差,根据所述靶标位姿偏差修正所述第二坐标,根据至少一个修正后的第二坐标和所述约束条件进行非线性二次规划,得到修正后的路径轨迹。
  14. 根据权利要求11所述的系统,其中,所述控制台车还包括:
    安全检测单元,用于按照预设周期检测所述机械臂的运行参数,在所述运行参数满足预设故障条件的情况下,获取所述运行参数对应的故障类型,根据所述故障类型对所述机械臂执行相应类别的停机操作。
PCT/CN2023/110233 2022-08-02 2023-07-31 机器人控制方法、系统和计算机程序产品 WO2024027647A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210925672.8 2022-08-02
CN202210925672.8A CN115179294A (zh) 2022-08-02 2022-08-02 机器人控制方法、系统、计算机设备、存储介质

Publications (1)

Publication Number Publication Date
WO2024027647A1 true WO2024027647A1 (zh) 2024-02-08

Family

ID=83521216

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/110233 WO2024027647A1 (zh) 2022-08-02 2023-07-31 机器人控制方法、系统和计算机程序产品

Country Status (2)

Country Link
CN (1) CN115179294A (zh)
WO (1) WO2024027647A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115179294A (zh) * 2022-08-02 2022-10-14 深圳微美机器人有限公司 机器人控制方法、系统、计算机设备、存储介质
CN115741732A (zh) * 2022-11-15 2023-03-07 福州大学 一种按摩机器人的交互式路径规划及运动控制方法
CN115507857B (zh) * 2022-11-23 2023-03-14 常州唯实智能物联创新中心有限公司 高效机器人运动路径规划方法及系统
CN115880291B (zh) * 2023-02-22 2023-06-06 江西省智能产业技术创新研究院 汽车总成防错识别方法、系统、计算机及可读存储介质
CN117283555A (zh) * 2023-10-29 2023-12-26 北京小雨智造科技有限公司 一种用于机器人自主标定工具中心点的方法及装置

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103702607A (zh) * 2011-07-08 2014-04-02 修复型机器人公司 相机系统的坐标系统的校准和变换
CN104281148A (zh) * 2013-07-07 2015-01-14 哈尔滨点石仿真科技有限公司 基于双目立体视觉的移动机器人自主导航方法
CN109940626A (zh) * 2019-01-23 2019-06-28 浙江大学城市学院 一种基于机器人视觉的画眉机器人系统及其控制方法
CN113070876A (zh) * 2021-03-19 2021-07-06 深圳群宾精密工业有限公司 一种基于3d视觉的机械手点胶路径引导纠偏方法
CN113284111A (zh) * 2021-05-26 2021-08-20 汕头大学 一种基于双目立体视觉的毛囊区域定位方法及系统
US20220032461A1 (en) * 2020-07-31 2022-02-03 GrayMatter Robotics Inc. Method to incorporate complex physical constraints in path-constrained trajectory planning for serial-link manipulator
CN114280153A (zh) * 2022-01-12 2022-04-05 江苏金晟元控制技术有限公司 一种复杂曲面工件智能检测机器人及检测方法和应用
CN114670177A (zh) * 2022-05-09 2022-06-28 浙江工业大学 一种两转一移并联机器人姿态规划方法
CN114714356A (zh) * 2022-04-14 2022-07-08 武汉理工大学重庆研究院 基于双目视觉的工业机器人手眼标定误差精确检测方法
CN115179294A (zh) * 2022-08-02 2022-10-14 深圳微美机器人有限公司 机器人控制方法、系统、计算机设备、存储介质


Also Published As

Publication number Publication date
CN115179294A (zh) 2022-10-14

Similar Documents

Publication Publication Date Title
WO2024027647A1 (zh) 机器人控制方法、系统和计算机程序产品
CN105082161B (zh) 双目立体摄像机机器人视觉伺服控制装置及其使用方法
Stavnitzky et al. Multiple camera model-based 3-D visual servo
JP2013516264A (ja) リアルタイム速度最適化を使用した校正不要のビジュアルサーボ
Hao et al. Vision-based surgical tool pose estimation for the da vinci® robotic surgical system
CN116766194A (zh) 基于双目视觉的盘类工件定位与抓取系统和方法
Dehghani et al. Colibridoc: An eye-in-hand autonomous trocar docking system
US20220392084A1 (en) Scene perception systems and methods
JP2014053018A (ja) 情報処理装置、情報処理装置の制御方法及びプログラム
Moustris et al. Shared control for motion compensation in robotic beating heart surgery
JP2015135333A (ja) 情報処理装置、情報処理装置の制御方法、およびプログラム
US9672621B2 (en) Methods and systems for hair transplantation using time constrained image processing
CN109542094B (zh) 无期望图像的移动机器人视觉镇定控制
US11559888B2 (en) Annotation device
JP2019077026A (ja) 制御装置、ロボットシステム、制御装置の動作方法及びプログラム
WO2023051706A1 (zh) 抓取的控制方法、装置、服务器、设备、程序及介质
US10832422B2 (en) Alignment system for liver surgery
Huang et al. An autonomous throat swab sampling robot for nucleic acid test
Wang et al. Image-based pose estimation and tracking of surgical instruments in minimally invasive surgery
Jeddi et al. Eye In-hand Stereo Image Based Visual Servoing for Robotic Assembly and Set-Point Calibration used on 4 DOF SCARA robot
Gu et al. A Binocular Vision-Guided Puncture Needle Automatic Positioning Method
US20220005199A1 (en) Image segmentation with kinematic data in a robotic surgical system
CN117474906B (zh) 基于脊柱x光图像匹配的术中x光机复位方法
Staub et al. Micro camera augmented endoscopic instruments: Towards superhuman performance in remote surgical cutting
CN112368739B (zh) 用于肝脏手术的对准系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23849354

Country of ref document: EP

Kind code of ref document: A1