CN113290560A - Robot motion control method, device, electronic equipment and storage medium


Info

Publication number
CN113290560A
CN113290560A (application CN202110586084.1A)
Authority
CN
China
Prior art keywords
robot
joint
motion transformation
time point
transformation sequence
Prior art date
Legal status
Pending
Application number
CN202110586084.1A
Other languages
Chinese (zh)
Inventor
冷晓琨
常琳
吴雨璁
白学林
柯真东
王松
何治成
黄贤贤
Current Assignee
Leju Shenzhen Robotics Co Ltd
Original Assignee
Leju Shenzhen Robotics Co Ltd
Priority date
Filing date
Publication date
Application filed by Leju Shenzhen Robotics Co Ltd
Priority to CN202110586084.1A
Publication of CN113290560A
Legal status: Pending


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The application provides a robot motion control method and device, an electronic device and a storage medium, relating to the technical field of robots. The method includes: generating a motion transformation sequence of an end point of the robot based on a configuration operation of a user on a graphical interface, wherein the motion transformation sequence includes: at least one element, each element corresponding to a time point and including a position parameter and a posture parameter of the end point at the corresponding time point; performing inverse kinematics solution processing by using the motion transformation sequence of the end point to obtain a motion transformation sequence of each joint of the robot, wherein the motion transformation sequence of each joint includes: at least one element, each element corresponding to a time point and including an angle parameter of each joint at the corresponding time point; and controlling the robot to move according to the motion transformation sequence of each joint of the robot. Because the method obtains the motion transformation sequence of the end point through graphical configuration, the efficiency of obtaining the motion transformation sequence of each joint is improved.

Description

Robot motion control method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of robotics, and in particular, to a method and an apparatus for controlling robot motion, an electronic device, and a storage medium.
Background
A humanoid robot is a robot designed and manufactured to imitate the shape and behavior of a human, and generally has human-like limbs and/or a head. It integrates multiple disciplines such as mechanics, electronics, computing, materials, sensing and control technology, and can, to a certain extent, replace manual work.
In the prior art, motion debugging of a humanoid robot is mainly realized by adjusting the joint trajectories of the robot. Specifically, each joint is given a motion transformation sequence that changes over time, each motion in the sequence is represented by a joint angle, and the robot is controlled to step through the joint angles of the sequence so as to perform the corresponding motion.
However, this approach requires generating a separate motion transformation sequence for each joint, usually by writing code by hand, which makes generating the sequences labor-intensive and robot motion debugging inefficient.
Disclosure of Invention
An object of the present application is to provide a method and an apparatus for controlling robot motion, an electronic device, and a storage medium, so as to solve the problem of low robot motion debugging efficiency in the prior art.
In order to achieve the above purpose, the technical solutions adopted in the embodiments of the present application are as follows:
in a first aspect, an embodiment of the present application provides a robot motion control method, including:
generating a motion transformation sequence of an end point of the robot based on a configuration operation of a user on a graphical interface, wherein the motion transformation sequence includes: at least one element, each element corresponding to a time point and including a position parameter and a posture parameter of the end point at the corresponding time point;
performing inverse kinematics solution processing by using the motion transformation sequence of the end point to obtain a motion transformation sequence of each joint of the robot, wherein the motion transformation sequence of each joint includes: at least one element, each element corresponding to a time point and including an angle parameter of each joint at the corresponding time point;
and controlling the robot to move according to the motion transformation sequence of each joint of the robot.
Optionally, the generating a motion transformation sequence of the end point of the robot based on the configuration operation of the user on the graphical interface includes:
acquiring a position parameter and a posture parameter of the end point in response to a pose debugging instruction for the virtual robot input in the graphical interface, wherein the end point includes: a sole and/or a palm;
and generating the motion transformation sequence of the end point of the robot according to the position parameter and the posture parameter of the end point.
Optionally, the acquiring the position parameter and the posture parameter of the end point in response to the pose debugging instruction for the virtual robot input in the graphical interface includes:
sequentially acquiring the position parameter and the posture parameter of the end point at each time point in response to pose debugging instructions for the virtual robot input sequentially at a preset time interval, wherein the pose debugging instructions are used to debug the position and posture of the end point of the virtual robot.
Optionally, the generating the motion transformation sequence of the end point of the robot according to the position parameter and the posture parameter of the end point includes:
generating the motion transformation sequence of the end point of the robot according to the position parameters and posture parameters of the end point at the respective time points.
Optionally, the performing inverse kinematics solution processing by using the motion transformation sequence of the end point to obtain the motion transformation sequence of each joint of the robot includes:
performing an inverse kinematics solution operation on the motion transformation sequence of the end point to obtain angle parameters of each joint of the robot at the same time points as the end point;
and obtaining the motion transformation sequence of each joint of the robot according to the angle parameters of each joint at the same time points as the end point.
Optionally, the performing the inverse kinematics solution operation on the motion transformation sequence of the end point to obtain the angle parameters of each joint of the robot at the same time points as the end point includes:
performing the inverse kinematics solution operation on the position parameter and the posture parameter of the end point at each time point in the motion transformation sequence of the end point to obtain at least one angle parameter of each joint of the robot at each preset time point;
and determining a target angle parameter of each joint at each preset time point from the at least one angle parameter of each joint at that preset time point.
Optionally, the obtaining the motion transformation sequence of each joint of the robot according to the angle parameters of each joint at the same time points as the end point includes:
obtaining the motion transformation sequence of each joint of the robot according to the target angle parameter of each joint at each preset time point.
Optionally, the controlling the robot to move according to the motion transformation sequence of each joint of the robot includes:
adjusting the joint angles of the robot according to the target angle parameter of each joint of the robot at each preset time point, so as to control the robot to move.
In a second aspect, an embodiment of the present application further provides a robot motion control apparatus, including: a generating module, an acquiring module and a control module;
the generating module is configured to generate a motion transformation sequence of an end point of the robot based on a configuration operation of a user on a graphical interface, wherein the motion transformation sequence includes: at least one element, each element corresponding to a time point and including a position parameter and a posture parameter of the end point at the corresponding time point;
the acquiring module is configured to perform inverse kinematics solution processing by using the motion transformation sequence of the end point to obtain a motion transformation sequence of each joint of the robot, wherein the motion transformation sequence of each joint includes: at least one element, each element corresponding to a time point and including an angle parameter of each joint at the corresponding time point;
and the control module is used for controlling the robot to move according to the motion transformation sequence of each joint of the robot.
Optionally, the generating module is specifically configured to acquire a position parameter and a posture parameter of the end point in response to a pose debugging instruction for the virtual robot input in the graphical interface, wherein the end point includes: a sole and/or a palm; and to generate the motion transformation sequence of the end point of the robot according to the position parameter and the posture parameter of the end point.
Optionally, the generating module is specifically configured to sequentially acquire the position parameter and the posture parameter of the end point at each time point in response to pose debugging instructions for the virtual robot input sequentially at a preset time interval, wherein the pose debugging instructions are used to debug the position and posture of the end point of the virtual robot.
Optionally, the generating module is specifically configured to generate the motion transformation sequence of the end point of the robot according to the position parameters and posture parameters of the end point at the respective time points.
Optionally, the acquiring module is specifically configured to perform an inverse kinematics solution operation on the motion transformation sequence of the end point to obtain angle parameters of each joint of the robot at the same time points as the end point, and to obtain the motion transformation sequence of each joint of the robot according to those angle parameters.
Optionally, the acquiring module is specifically configured to perform the inverse kinematics solution operation on the position parameter and the posture parameter of the end point at each time point in the motion transformation sequence of the end point to obtain at least one angle parameter of each joint of the robot at each preset time point, and to determine a target angle parameter of each joint at each preset time point from the at least one angle parameter.
Optionally, the acquiring module is specifically configured to obtain the motion transformation sequence of each joint of the robot according to the target angle parameter of each joint at each preset time point.
Optionally, the control module is specifically configured to adjust the joint angles of the robot according to the target angle parameter of each joint of the robot at each preset time point, so as to control the robot to move.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating via the bus when the electronic device is running, the processor executing the machine-readable instructions to perform the steps of the robot motion control method as provided in the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, performs the steps of the robot motion control method provided in the first aspect.
The beneficial effects of the present application are:
The present application provides a robot motion control method and apparatus, an electronic device and a storage medium, wherein the method includes: generating a motion transformation sequence of an end point of the robot based on a configuration operation of a user on a graphical interface, wherein the motion transformation sequence includes: at least one element, each element corresponding to a time point and including a position parameter and a posture parameter of the end point at the corresponding time point; performing inverse kinematics solution processing by using the motion transformation sequence of the end point to obtain a motion transformation sequence of each joint of the robot, wherein the motion transformation sequence of each joint includes: at least one element, each element corresponding to a time point and including an angle parameter of each joint at the corresponding time point; and controlling the robot to move according to the motion transformation sequence of each joint of the robot. In this solution, the motion transformation sequence of each joint can be obtained by an inverse solution operation from the generated motion transformation sequence of the end point. Because the motion transformation sequence of the end point is obtained through graphical configuration, the workload of generating it is reduced, and obtaining the joint sequences from it effectively improves the efficiency of obtaining the motion transformation sequence of each joint, and thus the debugging efficiency of the robot.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be regarded as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a robot motion control method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of another robot motion control method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a graphical interface provided in an embodiment of the present application;
fig. 4 is a schematic flowchart of another robot motion control method provided in the embodiment of the present application;
fig. 5 is a schematic flowchart of another robot motion control method according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a robot motion control device according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the purposes, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. It should be understood that the drawings in the present application serve illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application; additionally, the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flowcharts may be performed out of order, and that steps without logical dependency may be performed in reverse order or simultaneously. Moreover, under the guidance of this application, one skilled in the art may add one or more other operations to a flowchart or remove one or more operations from it.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
Fig. 1 is a schematic flowchart of a robot motion control method according to an embodiment of the present application. The execution subject of the method may be an electronic device independent of the robot, such as a computer or a server, or may be a processor installed in the robot. As shown in fig. 1, the method may include:
s101, generating a motion transformation sequence of the robot terminal point based on the configuration operation of the user on the graphical interface, wherein the motion transformation sequence comprises: at least one element, each element corresponding to a point in time, each element comprising a position parameter and a pose parameter of the end point at the corresponding point in time.
Optionally, in the method, the motion transformation sequence of the end point of the robot may be generated based on the user's operations on the graphical interface. The user can conveniently input any operation in the graphical interface, and different operations generate different operation data, so the motion transformation sequence of the end point of the robot can be generated efficiently.
Optionally, the motion transformation sequence of the end point is a sequence composed of a series of motions that change over time. The sequence includes at least one element (in practical applications, usually more than one element makes up the sequence), each element corresponds to a time point, and each element records the position parameter and the posture parameter of the end point at the corresponding time point. Both the position parameter and the posture parameter are determined with respect to a certain reference coordinate frame.
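As an illustration of this data layout, the following minimal Python sketch represents one element and a sequence of elements; the names and types are assumptions chosen for exposition, not part of the method itself:

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class EndPointElement:
    """One element of the end-point motion transformation sequence."""
    time_point: float                     # the time point this element corresponds to, in seconds
    position: Tuple[float, float, float]  # position parameter (x, y, z) in the reference frame
    posture: Tuple[float, float, float]   # posture parameter, e.g. roll, pitch, yaw in radians

# The motion transformation sequence is a time-ordered list of such elements.
end_point_sequence: List[EndPointElement] = [
    EndPointElement(0.0, (0.10, 0.00, 0.05), (0.0, 0.0, 0.0)),
    EndPointElement(0.5, (0.12, 0.01, 0.07), (0.0, 0.1, 0.0)),
]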
S102, performing inverse kinematics solution processing by using the motion transformation sequence of the end point to obtain the motion transformation sequence of each joint of the robot, wherein the motion transformation sequence of each joint includes: at least one element, each element corresponding to a time point and including an angle parameter of each joint at the corresponding time point.
Generally, robot kinematics includes forward kinematics and inverse kinematics. In forward kinematics, the parameters of each joint of the robot are given, and the position parameter and posture parameter of the end point of the robot are calculated; in inverse kinematics, the position parameter and posture parameter of the end point are known, and the corresponding joint variables of the robot are calculated.
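As a concrete illustration of the two directions (a textbook planar example, not the specific solver of this application), a two-link arm admits closed-form expressions for both; note that the inverse direction already returns more than one solution:

import math

L1, L2 = 0.30, 0.25  # assumed link lengths in metres

def forward_kinematics(theta1, theta2):
    """Forward kinematics: joint angles -> end-point position."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(x, y):
    """Inverse kinematics: end-point position -> joint-angle solutions.
    A reachable target generally yields two solutions (elbow-up and
    elbow-down), illustrating the non-uniqueness discussed later."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(c2) > 1.0:
        return []  # target lies outside the workspace
    solutions = []
    for s2 in (math.sqrt(1.0 - c2 * c2), -math.sqrt(1.0 - c2 * c2)):
        theta2 = math.atan2(s2, c2)
        theta1 = math.atan2(y, x) - math.atan2(L2 * s2, L1 + L2 * c2)
        solutions.append((theta1, theta2))
    return solutions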
In this embodiment, the motion transformation sequence of each joint of the robot can be obtained by performing an inverse kinematics operation on the generated motion transformation sequence of the end point. Similarly to the sequence of the end point, the motion transformation sequence of each joint also includes at least one element, each element corresponds to a time point, and each element records the angle parameter of each joint at the corresponding time point.
S103, controlling the robot to move according to the motion transformation sequence of each joint of the robot.
Optionally, based on the obtained motion transformation sequence of each joint of the robot, the robot may be controlled to move according to these sequences, thereby performing motion debugging on the robot.
In summary, this embodiment provides a robot motion control method, including: generating a motion transformation sequence of an end point of the robot based on a configuration operation of a user on a graphical interface, wherein the motion transformation sequence includes: at least one element, each element corresponding to a time point and including a position parameter and a posture parameter of the end point at the corresponding time point; performing inverse kinematics solution processing by using the motion transformation sequence of the end point to obtain a motion transformation sequence of each joint of the robot, wherein the motion transformation sequence of each joint includes: at least one element, each element corresponding to a time point and including an angle parameter of each joint at the corresponding time point; and controlling the robot to move according to the motion transformation sequence of each joint of the robot. In this solution, the motion transformation sequence of each joint can be obtained by an inverse solution operation from the generated motion transformation sequence of the end point. Because the motion transformation sequence of the end point is obtained through graphical configuration, the workload of generating it is reduced, and obtaining the joint sequences from it effectively improves the efficiency of obtaining the motion transformation sequence of each joint, and thus the debugging efficiency of the robot.
Fig. 2 is a schematic flowchart of another robot motion control method according to an embodiment of the present application. Fig. 3 is a schematic diagram of a graphical interface according to an embodiment of the present application. Optionally, in step S101, the generating a motion transformation sequence of the end point of the robot based on the configuration operation of the user on the graphical interface may include:
s201, responding to a pose debugging instruction of the virtual robot input on the graphical interface, and acquiring position parameters and pose parameters of end points, wherein the end points comprise: the sole and/or the palm of the hand.
It should be noted that the robot applied in this embodiment may be a humanoid robot, wherein the end points may include the sole and the palm of the hand of the robot, or only include the sole or the palm of the hand.
As shown in fig. 3, the graphical interface may include an operation interface and a configuration interface, where the operation interface displays a virtual robot model, and a user may input a pose adjustment instruction for the virtual robot by mouse control or touch, so that the virtual robot may display a desired pose.
In one implementation, a pose adjustment instruction for the sole and/or the palm of the virtual robot can be input; for example, the mouse is moved directly to the sole of the virtual robot, and the posture and position of the sole are adjusted to change the virtual robot from its current state to an adjusted state.
Optionally, in response to the pose adjustment instruction for the virtual robot, the parameters of each part of the virtual robot under the adjusted current pose may be displayed in a corresponding parameter frame in the configuration interface, where the parameters include: the position parameters and posture parameters of the sole and the palm, so that the position parameters and posture parameters of the end point can be acquired.
S202, generating the motion transformation sequence of the end point of the robot according to the position parameter and the posture parameter of the end point.
Optionally, the pose debugging instructions input at different time points for the end point of the virtual robot can be responded to respectively, so as to acquire the position parameters and posture parameters of the end point at those time points; the motion transformation sequence of the end point of the robot can then be generated from the position parameters and posture parameters of the end point at the respective time points.
Optionally, in step S201, the acquiring the position parameter and the posture parameter of the end point in response to the pose debugging instruction for the virtual robot input in the graphical interface may include: sequentially acquiring the position parameter and the posture parameter of the end point at each time point in response to pose debugging instructions for the virtual robot input sequentially at a preset time interval, wherein the pose debugging instructions are used to debug the position and posture of the end point of the virtual robot.
In this embodiment, debugging instructions for the virtual robot may be input sequentially at the preset time interval to change the position and posture of the end point of the virtual robot step by step, so as to obtain the position parameters and posture parameters of the end point at different time points. Each time a debugging instruction for the virtual robot is input, the position parameter and posture parameter of the end point after the current debugging can be read from the configuration interface, i.e., the position parameter and posture parameter of the end point at the time point corresponding to that debugging instruction.
It should be noted that the time point corresponding to the current debugging instruction is not necessarily the current real time and may be a time set by the user, as long as the time point of each debugging instruction follows, and is later than, the time point of the previous one, for example: the currently input debugging instruction corresponds to 6:10, the next input debugging instruction corresponds to 6:11, and so on.
Based on this method, the position parameters and posture parameters of the end point, changing continuously over time, can be obtained.
Optionally, in step S202, the generating the motion transformation sequence of the end point of the robot according to the position parameter and the posture parameter of the end point may include: generating the motion transformation sequence of the end point of the robot according to the position parameters and posture parameters of the end point at the respective time points.
Optionally, the motion transformation sequence of the end point of the robot may be generated from the position parameters and posture parameters acquired as the end point changes over time.
Assuming that the acquired position parameters and posture parameters of the end point are, in order: 6:10 - parameter a, parameter b; 6:11 - parameter a1, parameter b1; 6:12 - parameter a2, parameter b2; 6:13 - parameter a3, parameter b3; then the generated motion transformation sequence of the end point may be [(parameter a, parameter b), (parameter a1, parameter b1), (parameter a2, parameter b2), (parameter a3, parameter b3), ...].
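In the hypothetical representation sketched earlier, collecting these timestamped parameters into the sequence is a direct mapping; the values below are the placeholder names from the example above, not real data:

# Each pose debugging instruction yields (time point, position parameter, posture parameter).
debug_records = [
    ("6:10", "a",  "b"),
    ("6:11", "a1", "b1"),
    ("6:12", "a2", "b2"),
    ("6:13", "a3", "b3"),
]

# The end-point motion transformation sequence is the time-ordered list of
# (position, posture) pairs read off the configuration interface.
end_point_sequence = [(pos, post) for _, pos, post in debug_records]
# -> [("a", "b"), ("a1", "b1"), ("a2", "b2"), ("a3", "b3")]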
Fig. 4 is a schematic flowchart of another robot motion control method provided in an embodiment of the present application. Optionally, in step S102, the performing inverse kinematics solution processing by using the motion transformation sequence of the end point to obtain the motion transformation sequence of each joint of the robot may include:
S401, performing an inverse kinematics solution operation on the motion transformation sequence of the end point to obtain angle parameters of each joint of the robot at the same time points as the end point.
Optionally, the generated motion transformation sequence of the end point may be used as an input parameter of an inverse kinematics calculation formula to obtain the angle parameters of each joint of the robot at the same time points as the end point.
For example: if the motion transformation sequence of the end point contains the position parameters and posture parameters of the end point at 6:10, 6:11, 6:12 and 6:13, the corresponding angle parameters of each joint at 6:10, 6:11, 6:12 and 6:13 can be obtained through the inverse kinematics solution operation.
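A minimal sketch of this element-wise inverse solution, reusing the element layout assumed earlier and a routine ik_solve(position, posture) that is hypothetical here (it stands in for whatever inverse solver is used) and returns the joint angles for one end-point pose:

def joints_from_end_point_sequence(end_point_sequence, ik_solve):
    """Map each element of the end-point sequence through the inverse
    solver, keeping the time point so that joint angles and end-point
    poses stay aligned on the same time axis."""
    joint_sequence = []
    for element in end_point_sequence:
        angles = ik_solve(element.position, element.posture)  # one set of joint angles
        joint_sequence.append((element.time_point, angles))
    return joint_sequence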
It should be noted that the joints of the robot described in the present application may include the joints that are decisive for the robot's movement, such as those of the elbows, ankles and arms.
S402, obtaining the motion transformation sequence of each joint of the robot according to the angle parameters of each joint at the same time points as the end point.
Optionally, the motion transformation sequence of each joint of the robot may be generated based on the obtained angle parameters of each joint at each corresponding time point.
Assuming that the joints here include joint 1, joint 2 and joint 3, and the angle parameters of each joint of the robot at the corresponding time points are obtained as: 6:10 - joint 1, joint 2, joint 3 - a1, b1, c1; 6:11 - joint 1, joint 2, joint 3 - a2, b2, c2; 6:12 - joint 1, joint 2, joint 3 - a3, b3, c3; 6:13 - joint 1, joint 2, joint 3 - a4, b4, c4;
then, for example, the obtained motion transformation sequence of the joints may be [(a1, b1, c1), (a2, b2, c2), (a3, b3, c3), (a4, b4, c4)].
Fig. 5 is a schematic flowchart of another robot motion control method according to an embodiment of the present application. Optionally, in step S401, the performing an inverse kinematics solution operation on the motion transformation sequence of the end point to obtain the angle parameters of each joint of the robot at the same time points as the end point may include:
S501, performing the inverse kinematics solution operation on the position parameter and posture parameter of the end point at each time point in the motion transformation sequence of the end point, to obtain at least one angle parameter of each joint of the robot at each preset time point.
In general, the solution obtained by an inverse kinematics operation is not unique and may include multiple solutions; the number of inverse solutions may depend on the number of joints of the robot, the configuration of the robot (link parameters), and the range of motion of the joints.
That is, performing the inverse kinematics solution operation on the position parameter and posture parameter of the end point at each time point in the motion transformation sequence yields at least one angle parameter of each joint at each preset time point; to control the motion of the robot, a target angle parameter then needs to be determined from the at least one angle parameter.
S502, determining target angle parameters of each joint at each preset time point from at least one angle parameter of each joint at each preset time point.
Optionally, there are many different criteria for selecting an appropriate solution; one reasonable approach is to select the "closest" solution, i.e., the one requiring the least joint movement.
There are several ways of defining "closest". For example, for a typical 6-degree-of-freedom articulated robot, the first three joints are larger and the last three joints are smaller, so different joints are given different weights when defining distance in the joint space; for example, the first three joints are given large weights and the last three small weights. When selecting a solution, moving the smaller joints rather than the larger ones can then be prioritized.
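A sketch of this weighted "closest solution" rule, assuming each candidate is a list of joint angles ordered from the first (largest) joint to the last; the weight values are illustrative, not prescribed by the application:

def pick_closest_solution(candidates, current_angles, weights):
    """Choose the inverse-kinematics solution that requires the least
    weighted joint motion from the current configuration."""
    def weighted_distance(solution):
        return sum(w * abs(target - current)
                   for w, target, current in zip(weights, solution, current_angles))
    return min(candidates, key=weighted_distance)

# For a typical 6-degree-of-freedom arm: heavier weights on the first
# three (larger) joints make the selector prefer moving the smaller joints.
weights = [5.0, 5.0, 5.0, 1.0, 1.0, 1.0]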
The specific calculation process of the inverse kinematics solution operation and the way of selecting the target solution used in this embodiment can be understood with reference to existing methods; this application merely applies them and does not elaborate further.
Optionally, in step S402, the obtaining the motion transformation sequence of each joint of the robot according to the angle parameters of each joint at the same time points as the end point may include: obtaining the motion transformation sequence of each joint of the robot according to the target angle parameter of each joint at each preset time point.
Optionally, a motion transformation sequence of each joint of the robot may be generated according to the determined target angle parameter of each joint at each preset time point, where one element in the motion transformation sequence may include the target angle parameter of each joint at the time point corresponding to the element.
Optionally, in step S103, the controlling the robot to move according to the motion transformation sequence of each joint of the robot may include: adjusting the joint angles of the robot according to the target angle parameter of each joint of the robot at each preset time point, so as to control the robot to move.
Optionally, the robot may be controlled to move according to the time-varying angle parameters of each joint in its motion transformation sequence, so as to debug the robot; that is, the joint angles of the robot are changed continuously over time, forming the motion trajectory of the robot.
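A minimal playback sketch, assuming a command interface send_joint_command(angles), which is hypothetical here (a real controller would also interpolate between the preset time points):

import time

def play_joint_sequence(joint_sequence, send_joint_command):
    """Step through the joint motion transformation sequence, commanding
    the target angles of all joints at each preset time point."""
    start = time.monotonic()
    for time_point, target_angles in joint_sequence:
        # Wait until this element's preset time point is reached.
        delay = time_point - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        send_joint_command(target_angles)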
In summary, this embodiment provides a robot motion control method, including: generating a motion transformation sequence of an end point of the robot based on a configuration operation of a user on a graphical interface, wherein the motion transformation sequence includes: at least one element, each element corresponding to a time point and including a position parameter and a posture parameter of the end point at the corresponding time point; performing inverse kinematics solution processing by using the motion transformation sequence of the end point to obtain a motion transformation sequence of each joint of the robot, wherein the motion transformation sequence of each joint includes: at least one element, each element corresponding to a time point and including an angle parameter of each joint at the corresponding time point; and controlling the robot to move according to the motion transformation sequence of each joint of the robot. In this solution, the motion transformation sequence of each joint can be obtained by an inverse solution operation from the generated motion transformation sequence of the end point. Because the motion transformation sequence of the end point is obtained through graphical configuration, the workload of generating it is reduced, and obtaining the joint sequences from it effectively improves the efficiency of obtaining the motion transformation sequence of each joint, and thus the debugging efficiency of the robot.
The following describes a device, an electronic device, a storage medium, and the like for executing the robot motion control method provided in the present application, and specific implementation processes and technical effects thereof are referred to above, and are not described again below.
Fig. 6 is a schematic diagram of a robot motion control device according to an embodiment of the present application; the functions implemented by the device correspond to the steps of the foregoing method. The device may be understood as the above-mentioned server or a processor of the server, or as a component, independent of the server or processor, that implements the functions of the present application under the control of the server. As shown in fig. 6, the device may include: a generating module 610, an obtaining module 620 and a control module 630;
a generating module 610, configured to generate a motion transformation sequence of an end point of the robot based on a configuration operation of a user on a graphical interface, wherein the motion transformation sequence includes: at least one element, each element corresponding to a time point and including a position parameter and a posture parameter of the end point at the corresponding time point;
an obtaining module 620, configured to perform inverse kinematics solution processing by using the motion transformation sequence of the end point to obtain a motion transformation sequence of each joint of the robot, wherein the motion transformation sequence of each joint includes: at least one element, each element corresponding to a time point and including an angle parameter of each joint at the corresponding time point;
and the control module 630 is configured to control the robot to move according to the motion transformation sequence of each joint of the robot.
Optionally, the generating module 610 is specifically configured to acquire a position parameter and a posture parameter of the end point in response to a pose debugging instruction for the virtual robot input in the graphical interface, wherein the end point includes: a sole and/or a palm; and to generate the motion transformation sequence of the end point of the robot according to the position parameter and the posture parameter of the end point.
Optionally, the generating module 610 is specifically configured to sequentially acquire the position parameter and the posture parameter of the end point at each time point in response to pose debugging instructions for the virtual robot input sequentially at a preset time interval, wherein the pose debugging instructions are used to debug the position and posture of the end point of the virtual robot.
Optionally, the generating module 610 is specifically configured to generate the motion transformation sequence of the end point of the robot according to the position parameters and posture parameters of the end point at the respective time points.
Optionally, the obtaining module 620 is specifically configured to perform an inverse kinematics solution operation on the motion transformation sequence of the end point to obtain angle parameters of each joint of the robot at the same time points as the end point, and to obtain the motion transformation sequence of each joint of the robot according to those angle parameters.
Optionally, the obtaining module 620 is specifically configured to perform the inverse kinematics solution operation on the position parameter and the posture parameter of the end point at each time point in the motion transformation sequence of the end point to obtain at least one angle parameter of each joint of the robot at each preset time point, and to determine a target angle parameter of each joint at each preset time point from the at least one angle parameter.
Optionally, the obtaining module 620 is specifically configured to obtain the motion transformation sequence of each joint of the robot according to the target angle parameter of each joint at each preset time point.
Optionally, the control module 630 is specifically configured to adjust the joint angles of the robot according to the target angle parameter of each joint of the robot at each preset time point, so as to control the robot to move.
The above-mentioned apparatus is used for executing the method provided by the foregoing embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
These modules may be one or more integrated circuits configured to implement the above methods, for example: one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), among others. For another example, when one of the above modules is implemented in the form of a processing element scheduling program code, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
The modules may be connected or in communication with each other via a wired or wireless connection. The wired connection may include a metal cable, an optical cable, a hybrid cable, etc., or any combination thereof. The wireless connection may comprise a connection over a LAN, WAN, bluetooth, ZigBee, NFC, or the like, or any combination thereof. Two or more modules may be combined into a single module, and any one module may be divided into two or more units. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to corresponding processes in the method embodiments, and are not described in detail in this application.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application, where the electronic device may include: a processor 801 and a memory 802.
The memory 802 is used for storing programs, and the processor 801 calls the programs stored in the memory 802 to execute the above-mentioned method embodiments. The specific implementation and technical effects are similar, and are not described herein again.
The memory 802 stores program code that, when executed by the processor 801, causes the processor 801 to perform the steps of the robot motion control method according to the various exemplary embodiments of the present application described above in this specification.
The processor 801 may be a general-purpose processor, such as a Central Processing Unit (CPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or execute the methods, steps and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in connection with the embodiments of the present application may be implemented directly by a hardware processor, or by a combination of hardware and software modules in a processor.
The memory 802, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules. The memory may include at least one type of storage medium, for example a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, and so on. The memory may be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 802 in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
Optionally, the present application also provides a program product, such as a computer readable storage medium, comprising a program which, when being executed by a processor, is adapted to carry out the above-mentioned method embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to perform some steps of the methods according to the embodiments of the present application. And the aforementioned storage medium includes: a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (11)

1. A robot motion control method, comprising:
generating a motion transformation sequence of an end point of the robot based on a configuration operation of a user on a graphical interface, wherein the motion transformation sequence comprises: at least one element, each element corresponding to a time point and comprising a position parameter and a posture parameter of the end point at the corresponding time point;
performing inverse kinematics solution processing by using the motion transformation sequence of the end point to obtain a motion transformation sequence of each joint of the robot, wherein the motion transformation sequence of each joint comprises: at least one element, each element corresponding to a time point and comprising an angle parameter of each joint at the corresponding time point;
and controlling the robot to move according to the motion transformation sequence of each joint of the robot.
2. The method according to claim 1, wherein the generating a motion transformation sequence of an end point of the robot based on a configuration operation of a user on a graphical interface comprises:
acquiring a position parameter and a posture parameter of the end point in response to a pose debugging instruction for the virtual robot input in the graphical interface, wherein the end point comprises: a sole and/or a palm;
and generating the motion transformation sequence of the end point of the robot according to the position parameter and the posture parameter of the end point.
3. The method according to claim 2, wherein the acquiring a position parameter and a posture parameter of the end point in response to a pose debugging instruction for the virtual robot input in the graphical interface comprises:
sequentially acquiring the position parameter and the posture parameter of the end point at each time point in response to pose debugging instructions for the virtual robot input sequentially at a preset time interval, wherein the pose debugging instructions are used to debug the position and posture of the end point of the virtual robot.
4. The method according to claim 3, wherein the generating the motion transformation sequence of the end point of the robot according to the position parameter and the posture parameter of the end point comprises:
generating the motion transformation sequence of the end point of the robot according to the position parameters and posture parameters of the end point at the respective time points.
5. The method according to any one of claims 1 to 4, wherein the performing inverse kinematics solution processing by using the motion transformation sequence of the end point to obtain a motion transformation sequence of each joint of the robot comprises:
performing an inverse kinematics solution operation on the motion transformation sequence of the end point to obtain angle parameters of each joint of the robot at the same time points as the end point;
and obtaining the motion transformation sequence of each joint of the robot according to the angle parameters of each joint at the same time points as the end point.
6. The method according to claim 5, wherein the performing an inverse kinematics solution operation on the motion transformation sequence of the end point to obtain angle parameters of each joint of the robot at the same time points as the end point comprises:
performing the inverse kinematics solution operation on the position parameter and the posture parameter of the end point at each time point in the motion transformation sequence of the end point to obtain at least one angle parameter of each joint of the robot at each preset time point;
and determining a target angle parameter of each joint at each preset time point from the at least one angle parameter of each joint at that preset time point.
7. The method according to claim 6, wherein the obtaining the motion transformation sequence of each joint of the robot according to the angle parameters of each joint at the same time points as the end point comprises:
obtaining the motion transformation sequence of each joint of the robot according to the target angle parameter of each joint at each preset time point.
8. The method according to claim 7, wherein the controlling the robot to move according to the motion transformation sequence of each joint of the robot comprises:
adjusting the joint angles of the robot according to the target angle parameter of each joint of the robot at each preset time point, so as to control the robot to move.
9. A robot motion control apparatus, comprising: a generating module, an acquiring module and a control module;
the generating module is configured to generate a motion transformation sequence of an end point of the robot based on a configuration operation of a user on a graphical interface, wherein the motion transformation sequence comprises: at least one element, each element corresponding to a time point and comprising a position parameter and a posture parameter of the end point at the corresponding time point;
the acquiring module is configured to perform inverse kinematics solution processing by using the motion transformation sequence of the end point to obtain a motion transformation sequence of each joint of the robot, wherein the motion transformation sequence of each joint comprises: at least one element, each element corresponding to a time point and comprising an angle parameter of each joint at the corresponding time point;
and the control module is used for controlling the robot to move according to the motion transformation sequence of each joint of the robot.
10. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing program instructions executable by the processor, the processor and the storage medium communicating via the bus when the electronic device is running, the processor executing the program instructions to perform the steps of the robot motion control method according to any one of claims 1 to 8.
11. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the robot motion control method according to any one of claims 1 to 8.
CN202110586084.1A 2021-05-27 2021-05-27 Robot motion control method, device, electronic equipment and storage medium Pending CN113290560A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110586084.1A CN113290560A (en) 2021-05-27 2021-05-27 Robot motion control method, device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113290560A 2021-08-24

Family

ID=77325658

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110586084.1A Pending CN113290560A (en) 2021-05-27 2021-05-27 Robot motion control method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113290560A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106997175A (en) * 2016-10-21 2017-08-01 遨博(北京)智能科技有限公司 A kind of robot simulation control method and device
KR20180063515A (en) * 2016-12-02 2018-06-12 두산로보틱스 주식회사 Teaching Device of a Robot and Teaching Method thereof
US20180297202A1 (en) * 2017-04-14 2018-10-18 Seiko Epson Corporation Simulation Apparatus And Robot Control Apparatus
CN110434856A (en) * 2019-08-23 2019-11-12 珠海格力电器股份有限公司 A kind of welding control method, device, storage medium and welding robot
CN110815189A (en) * 2019-11-20 2020-02-21 福州大学 Robot rapid teaching system and method based on mixed reality
CN111390908A (en) * 2020-03-26 2020-07-10 哈尔滨工业大学 Webpage-based mechanical arm virtual dragging method
CN112659124A (en) * 2020-12-14 2021-04-16 南昌大学 Virtual simulation and control system based on Android system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210824