CN117245640A - Robot control method, device, equipment and storage medium - Google Patents

Robot control method, device, equipment and storage medium

Info

Publication number
CN117245640A
Authority
CN
China
Prior art keywords
robot
matrix
equation
predicted
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211246781.3A
Other languages
Chinese (zh)
Inventor
金龙
延晶坤
郑宇�
刘梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Lanzhou University
Original Assignee
Tencent Technology Shenzhen Co Ltd
Lanzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd, Lanzhou University filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202211246781.3A priority Critical patent/CN117245640A/en
Publication of CN117245640A publication Critical patent/CN117245640A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Numerical Control (AREA)

Abstract

The application discloses a robot control method, device, equipment and storage medium, relating to the technical field of artificial intelligence. The method comprises the following steps: acquiring an estimated Jacobian matrix of the robot based on the historical measured pose of the robot's end effector and the historical measured joint angle of the robot; constructing a quadratic programming equation of the robot based on the measured joint angle, the measured pose, the predicted pose sequence, the expected pose sequence, the predicted joint angular velocity sequence, the predicted joint angle sequence and the estimated Jacobian matrix corresponding to the target time; solving the quadratic programming equation to obtain the predicted joint angular velocity of the robot at the target time; and controlling the end effector to follow the expected pose at the target time according to the predicted joint angular velocity at the target time. The scheme can directly handle the joint constraints of the robot and accurately perform pose following tasks for robots whose models are unknown, thereby improving the accuracy and universality of robot control.

Description

Robot control method, device, equipment and storage medium
Technical Field
The embodiments of the present application relate to the technical field of artificial intelligence, and in particular to a robot control method, device, equipment and storage medium.
Background
With the development of artificial intelligence technology, robots have become an important tool that can free people from dangerous, complex and high-precision tasks. Among these applications, trajectory and pose (i.e., position and orientation) following control is a core problem in robot motion control.
Taking a redundant robot as an example, the related art imposes joint constraints of different levels on the redundant robot by introducing additional parameters and reserving a certain margin, so as to realize trajectory and pose following control. However, the related art may reduce the feasible range of the joint angles, resulting in inaccurate control of the robot.
Disclosure of Invention
The embodiments of the present application provide a robot control method, device, equipment and storage medium, which can improve the accuracy and universality of robot control. The technical scheme is as follows:
according to an aspect of the embodiments of the present application, there is provided a control method of a robot, the method including:
acquiring an actually measured joint angle of a robot at a target time, an actually measured pose of an end effector of the robot at the target time, a predicted pose sequence and an expected pose sequence of the end effector in a predicted period, a predicted joint angle sequence of the robot in the predicted period and a predicted joint angular velocity sequence of the robot in a control period; wherein the prediction period includes a plurality of sampling moments after the target moment, and the control period includes a plurality of sampling moments sequentially selected with the target moment as a starting point;
Acquiring an estimated Jacobian matrix of the robot based on the historical actual measured pose and the historical actual measured joint angle at the previous sampling time of the target moment;
constructing a quadratic programming equation of the robot based on the measured joint angle, the measured pose, the predicted pose sequence, the expected pose sequence, the predicted joint angle sequence, the predicted joint angular velocity sequence and the estimated jacobian matrix;
solving the quadratic programming equation to obtain the predicted joint angular velocity of the robot at the target moment;
and controlling the end effector to move along with the expected pose at the target time according to the predicted joint angular velocity at the target time.
According to an aspect of the embodiments of the present application, there is provided a control device of a robot, the device including:
the data acquisition module is used for acquiring an actually measured joint angle of a robot at a target time, an actually measured pose of an end effector of the robot at the target time, a predicted pose sequence and an expected pose sequence of the end effector in a prediction period, a predicted joint angle sequence of the robot in the prediction period and a predicted joint angular velocity sequence of the robot in a control period; wherein the prediction period includes a plurality of sampling times after the target time, and the control period includes a plurality of sampling times sequentially selected with the target time as a starting point;
The matrix estimation module is used for acquiring an estimated Jacobian matrix of the robot based on the historical actual measured pose and the historical actual measured joint angle at the previous sampling time of the target time;
the equation construction module is used for constructing a quadratic programming equation of the robot based on the actually measured joint angle, the actually measured pose, the predicted pose sequence, the expected pose sequence, the predicted joint angular velocity sequence, the predicted joint angle sequence and the estimated Jacobian matrix;
the equation solving module is used for solving the quadratic programming equation to obtain the predicted joint angular speed of the robot at the target time;
and the pose following module is used for controlling the end effector to follow the expected pose movement at the target moment according to the predicted joint angular velocity at the target moment.
According to an aspect of the embodiments of the present application, there is provided a computer device including a processor and a memory, in which a computer program is stored, the computer program being loaded and executed by the processor to implement the control method of the robot described above.
According to an aspect of the embodiments of the present application, there is provided a computer readable storage medium having stored therein a computer program loaded and executed by a processor to implement the control method of a robot as described above.
According to an aspect of embodiments of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the control method of the robot.
The technical scheme provided by the embodiment of the application can comprise the following beneficial effects:
A quadratic programming equation of the robot is constructed based on the measured joint angle, the measured pose, the predicted pose sequence, the expected pose sequence, the predicted joint angle sequence, the predicted joint angular velocity sequence and the estimated Jacobian matrix, and the quadratic programming equation is then solved to obtain the predicted joint angular velocity at the target time. The joint constraints of the robot can thus be handled directly, avoiding any reduction of the feasible region of the joint angles and improving the control accuracy of the robot.
In addition, the Jacobian matrix of a model-unknown robot can be effectively estimated based on the historical measured pose and the historical measured joint angle, enabling trajectory and pose following control of model-unknown robots. This avoids the inaccurate control that arises in the related art when a robot is controlled based on an inaccurate model (e.g., when the actual model deviates from the nominal model due to factors such as low manufacturing precision and component wear), further improving the control accuracy of the robot. Meanwhile, the technical scheme provided by the embodiments of the present application can be applied to any scenario in which the robot model is unknown, improving the universality of robot control.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an implementation environment for an embodiment provided herein;
FIG. 2 is a flow chart of a robot control method provided in one embodiment of the present application;
FIG. 3 is a flow chart of converting a predictive control equation into a quadratic programming equation according to one embodiment of the present application;
FIG. 4 is a flow chart of a method for obtaining a predicted joint angular velocity with a neurodynamics solver provided in one embodiment of the present application;
FIG. 5 is a schematic diagram of a control method for a 7-degree-of-freedom robotic arm according to one embodiment of the present application;
FIG. 6 is a schematic diagram of a 7-degree-of-freedom robotic arm performing a pose following task provided by an embodiment of the present application;
FIG. 7 is a schematic illustration of the position following error variation of the end effector of a 7-degree-of-freedom robotic arm provided in one embodiment of the present application;
FIG. 8 is a schematic diagram of the desired orientation curve and the actual orientation curve corresponding to a 7-degree-of-freedom robotic arm provided by one embodiment of the present application;
FIG. 9 is a schematic diagram of the rotation-vector following error variation corresponding to a 7-degree-of-freedom robotic arm according to one embodiment of the present application;
FIG. 10 is a schematic diagram of the Jacobian matrix estimation error variation corresponding to a 7-degree-of-freedom robotic arm according to one embodiment of the present application;
FIG. 11 is a schematic diagram of a 7-degree-of-freedom robotic arm performing a trajectory following task according to another embodiment of the present application;
FIG. 12 is a block diagram of a control device for a robot provided in one embodiment of the present application;
fig. 13 is a block diagram of a control device of a robot according to another embodiment of the present application;
fig. 14 is a block diagram of a computer device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence is the study of the design principles and implementation methods of various intelligent machines, so that machines have the functions of perception, reasoning and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Artificial intelligence infrastructure technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, machine learning/deep learning, and other directions.
With the research and advancement of artificial intelligence technology, it is being studied and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, unmanned aerial vehicles, robots, smart medical treatment and smart customer service. It is believed that with the development of technology, artificial intelligence will be applied in more fields and play an increasingly important role.
The technical scheme of the present application mainly relates to robot technology within artificial intelligence, in particular to intelligent robot control. A robot is a mechanical-electronic device that combines mechanical transmission with modern microelectronics and can imitate certain human skills; it has developed on the basis of electronic, mechanical and information technology. A robot does not necessarily have to look like a human: as long as it can autonomously complete tasks and commands given by humans, it belongs to the robot family. Robots are automated machines possessing some intelligent capabilities similar to those of humans or other living beings, such as perception, planning, action and coordination, and offer a high degree of flexibility. With the development of computer technology and artificial intelligence technology, the functions and technical level of robots have improved greatly; mobile robots and robotic vision and tactile technologies are typical representatives.
In the method provided by the embodiments of the present application, the execution subject of each step may be a computer device, which refers to an electronic device with data computing, processing and storage capabilities. The computer device may be a terminal such as a PC (Personal Computer), tablet, smart phone, desktop computer, smart robot, etc.; or it may be a server. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services.
The technical scheme provided by the embodiments of the present application is suitable for any robot control scenario, in particular for the control of a model-unknown robot, such as manufacturing, laboratory, household and medical scenarios. The robot control method can improve the accuracy and universality of robot control.
In one example, as shown in fig. 1, a robot control system may include a robot 10 and a computer device 20.
Robot 10 may refer to an industrial robot, a non-industrial robot, a homemade robot, or the like. Illustratively, robot 10 may be a multi-degree-of-freedom robot (e.g., a quadruped robot, a biped robot, etc.), a multi-degree-of-freedom robotic arm (e.g., a 6-degree-of-freedom robotic arm, a 7-degree-of-freedom robotic arm, an 8-degree-of-freedom robotic arm, etc.), and so on. Relative to the task space dimension, robot 10 may be redundant: its number of degrees of freedom is greater than or equal to the task space dimension, so that, both from a kinematic and from a dynamic perspective, there are infinitely many possible solutions for accomplishing a task. For example, using a 7-degree-of-freedom robot for a task requiring 6 degrees of freedom is redundant. Optionally, the embodiments of the present application take a redundant robot as an example to describe the robot control method.
The computer device 20 may be a terminal such as a PC, tablet, smart phone, desktop computer, wearable device, smart robot, etc.; or it may be a server. The computer device 20 may be used to control the robot 10. For example, the computer device 20 is an upper computer of the robot 10: it transmits a control signal to a lower computer (e.g., a lower-level controller) on the robot 10, and the lower computer controls the movement of the robot 10 based on the control signal (e.g., joint angle, joint angular velocity, joint moment, etc.). Optionally, the computer device 20 may also be used to store measured data from the robot 10 (e.g., measured joint angles, measured joint angular velocities, measured joint angular accelerations, etc.) and to provide data to the robot 10 (e.g., desired trajectories, desired poses, etc. of the joints or end effector).
Communication between the robot 10 and the computer device 20 may be through a network, such as a wireless network, a wired network, or the like.
Illustratively, taking the robot 10 as an example of a redundant robot whose model is unknown (i.e., whose forward kinematics equation is ambiguous, inaccurate, or unavailable), the computer device 20 stores the desired trajectory required by the end effector of the robot 10 to perform a task, and the computer device 20 determines a predicted joint angular velocity of the robot 10 based on the desired poses (each including a desired position vector and a desired attitude vector, e.g., a desired rotation vector) of the desired trajectory at future times, so as to perform pose following control of the robot 10. This may proceed as follows: the computer device 20 acquires the desired pose of the end effector of the robot 10 at the current time (i.e., the target time hereinafter) according to the desired trajectory, acquires the measured joint angle at the current time from the robot 10, and constructs, using an MPC (Model Predictive Control) method, a predicted pose sequence and an expected pose sequence of the end effector in a prediction period, a predicted joint angle sequence of the robot 10 in the prediction period, and a predicted joint angular velocity sequence of the robot 10 in a control period.
The computer device 20 constructs a predictive control equation based on the measured joint angle, the measured pose, the predicted pose sequence, the expected pose sequence, the predicted joint angular velocity sequence and the predicted joint angle sequence, estimates a Jacobian matrix from the historical measured joint angle and the historical measured pose, and converts the predictive control equation into a quadratic programming equation. The computer device 20 solves the quadratic programming equation using a neurodynamics method to obtain the predicted joint angular velocity of the robot 10 at the current time, and then transmits it to the robot 10; the robot 10 follows the expected pose at the current time according to the predicted joint angular velocity at the current time.
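The patent does not spell out the neurodynamics solver in closed form at this point. As a minimal illustrative sketch of the general idea (not the patent's exact scheme), a box-constrained quadratic program can be solved by simulating a projection neural network whose equilibrium is the QP solution; all function names and parameters below are illustrative:

```python
import numpy as np

def project_box(x, lb, ub):
    # Euclidean projection onto the box constraint set {x : lb <= x <= ub}
    return np.clip(x, lb, ub)

def neurodynamic_qp(Q, c, lb, ub, lam=10.0, dt=1e-3, steps=20000):
    """Solve min 0.5*x'Qx + c'x subject to lb <= x <= ub by Euler-simulating
    the projection neural dynamics dx/dt = lam * (P(x - (Qx + c)) - x)."""
    x = np.zeros_like(c, dtype=float)
    for _ in range(steps):
        x = x + dt * lam * (project_box(x - (Q @ x + c), lb, ub) - x)
    return x
```

In a robot context, x would play the role of the stacked predicted joint angular velocities and the box would encode the joint velocity limits; here the solver is exercised only on a toy QP.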
In other examples, the robot control system may include only the robot 10, with the robot 10 itself having data computing, processing, and storage capabilities. After the robot 10 obtains the desired pose, the measured pose and the measured joint angular velocity at the current time, it adopts the same method as described above to obtain the predicted joint angular velocity at the current time, and follows the desired pose at the current time according to that predicted joint angular velocity, which is not limited in the embodiments of the present application.
Referring to fig. 2, a flowchart of a control method of a robot according to an embodiment of the present application is shown. The subject of execution of the steps of the method may be the robot 10 or the computer device 20 in the implementation environment of the solution shown in fig. 1. The method may comprise the following steps (201-205):
step 201, acquiring an actually measured joint angle of a robot at a target time, an actually measured pose of an end effector of the robot at the target time, a predicted pose sequence and an expected pose sequence of the end effector in a predicted period, a predicted joint angle sequence of the robot in the predicted period, and a predicted joint angular velocity sequence of the robot in a control period; the prediction period includes a plurality of sampling moments after the target moment, and the control period includes a plurality of sampling moments sequentially selected with the target moment as a starting point.
In the embodiments of the present application, the above robot may be a redundant robot, such as a 6-degree-of-freedom redundant robotic arm, a 7-degree-of-freedom redundant robotic arm, or the like. The robot's end effector may be used to perform tasks such as mounting, handling, sanding, painting, drawing, inspection, and the like. For different tasks, different expected pose sequences can be configured for the robot, and controlling the robot to follow these expected pose sequences realizes the different tasks. The expected pose sequence may include a plurality of expected poses ordered in time; an expected pose is a pose the robot needs to reach, and a pose indicates both position and orientation. For example, the pose of the end effector indicates the position and orientation of the end effector. Illustratively, a pose may be represented with the axis-angle method: the orientation component of the pose is represented by a rotation vector, and the position component by a simple three-dimensional coordinate (position vector), which is not limited in the embodiments of the present application.
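To make the axis-angle pose representation concrete, the following sketch (illustrative, not from the patent) stacks a 3-D position vector with the rotation vector extracted from a rotation matrix via the standard matrix-logarithm formula, yielding the 6-D pose used throughout this description:

```python
import numpy as np

def rotation_vector(R):
    """Axis-angle (rotation-vector) representation of a rotation matrix R,
    i.e. theta * axis, assuming the rotation angle is below pi."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    # Axis from the skew-symmetric part of R
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * axis

def pose_vector(p, R):
    # 6-D pose: 3-D position stacked with the 3-D rotation vector
    return np.concatenate([np.asarray(p, dtype=float), rotation_vector(R)])
```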
The measured joint angle is the joint angle actually measured by the robot through its detection component, and the measured pose is the pose actually measured by the robot through its detection component. For example, the measured joint angles of a 7-degree-of-freedom redundant robot at the target time include the measured joint angle corresponding to each of the 7 joint axes, and the measured pose of the 7-degree-of-freedom redundant robot at the target time is the measured pose of the end effector.
The prediction period refers to a future period, which can be set and adjusted according to actual usage requirements. For example, the prediction period may start from the sampling time immediately after the target time. A sampling time is a time point determined by the sampling interval, which can also be set and adjusted according to actual usage requirements. The sampling times included in the control period may be some or all of the sampling times in the prediction period, and the sampling interval of the control period is the same as that of the prediction period. Optionally, the target time is itself a sampling time; it may refer to the current time or current sampling time, or to any time at which control is required, which is not limited in the embodiments of the present application.
The predicted pose sequence comprises the predicted pose of the end effector at each sampling time in the prediction period; a predicted pose is a possible pose predicted for the task and is predicted data (an unknown). The expected pose sequence comprises the expected pose of the end effector at each sampling time in the prediction period; the expected poses are control signals and are known quantities. The predicted joint angle sequence comprises the predicted joint angle of the robot at each sampling time in the prediction period, including the predicted joint angle corresponding to each joint axis; a predicted joint angle is a possible joint angle predicted for the task and is predicted data (an unknown). The predicted joint angular velocity sequence comprises the predicted joint angular velocity of the robot at each sampling time in the control period, including the predicted joint angular velocity corresponding to each joint axis; a predicted joint angular velocity is a possible joint angular velocity predicted for the task and is predicted data (an unknown).
For example, taking a prediction period of 10 s, a control period of 5 s, and a sampling interval of 0.01 s as an example, the predicted pose sequence includes a predicted pose every 0.01 s within the next 10 s, the expected pose sequence includes an expected pose every 0.01 s within the next 10 s, the predicted joint angle sequence includes a predicted joint angle every 0.01 s within the next 10 s, and the predicted joint angular velocity sequence includes a predicted joint angular velocity every 0.01 s at the target time and within the next 5 s.
In one example, the embodiments of the present application control the robot using an MPC method. The MPC method predicts the behavior of a system (such as a robot) over a certain future period through a model (such as a mathematical equation) so as to perform optimal control; the prediction period in the embodiments of the present application is that future period. For example, the indices of the target time and the sampling times may be denoted k, k+1, ..., k+N-1, k+N, where k is the index of the target time; k+1, ..., k+N are the indices corresponding to the prediction period; and k, k+1, ..., k+N_u-1 are the indices corresponding to the control period. N is the number of sampling times in the prediction period, N is a positive integer, and N_u is the number of sampling times (including the target time) in the control period. Denoting the sampling interval by δ, the sampling time with index k+N corresponds to the time (k+N)δ.
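The index bookkeeping above can be sketched in a few lines; this is an illustrative helper (not from the patent), assuming the prediction period starts at index k+1 and the control period starts at the target time k:

```python
def horizon_indices(k, N, N_u):
    """Indices of the prediction period (k+1 ... k+N) and the control
    period (k ... k+N_u-1) for target-time index k."""
    prediction = [k + i for i in range(1, N + 1)]
    control = [k + i for i in range(N_u)]
    return prediction, control

def sampling_time(index, delta):
    # Absolute time of the sampling instant with the given index
    return index * delta
```

With δ = 0.01 s, a 10 s prediction period and a 5 s control period correspond to N = 1000 and N_u = 500, matching the numeric example above.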
Step 202, obtaining an estimated Jacobian matrix of the robot based on the historical measured pose and the historical measured joint angle at the sampling time previous to the target time.
The estimated Jacobian matrix is an estimate of the Jacobian matrix, which describes the partial-derivative relation between the end-effector pose of the robot and the joint variables of the robot. The historical measured pose refers to the pose actually measured for the end effector at the sampling time previous to the target time, and the historical measured joint angle refers to the joint angle actually measured for the robot at that sampling time.
In one example, the estimated Jacobian matrix can be acquired as follows:
1. Acquire the historical measured velocity and historical measured acceleration corresponding to the historical measured pose, and the historical measured joint angular velocity and historical measured joint angular acceleration corresponding to the historical measured joint angle.
Optionally, the historical measured velocity may be obtained by differentiating the historical measured pose, and the historical measured acceleration by differentiating the historical measured velocity. The historical measured velocity and acceleration of the end effector can also be obtained directly through actual measurement by the detection component.
Similarly, the historical measured joint angular velocity can be obtained by differentiating the historical measured joint angle, and the historical measured joint angular acceleration by differentiating the historical measured joint angular velocity. The historical measured joint angular velocity and angular acceleration of each joint axis can also be obtained directly through actual measurement by the detection component.
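The differentiation step above can be approximated numerically at the sampling times; the following is an illustrative backward-difference sketch (the patent does not prescribe a particular differentiation scheme):

```python
import numpy as np

def finite_difference(samples, delta):
    """First-order backward differences of a sampled signal (rows are
    consecutive samples, delta is the sampling interval)."""
    samples = np.asarray(samples, dtype=float)
    return (samples[1:] - samples[:-1]) / delta

# Velocities from sampled joint angles, then accelerations from velocities:
# q_dot  = finite_difference(q_samples, delta)
# q_ddot = finite_difference(q_dot, delta)
```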
2. Acquire an estimated joint angular velocity based on the historical measured joint angular velocity and bounded random noise, where the random noise is independently and identically distributed.
The estimated joint angular velocity is an estimate of the joint angular velocity at the sampling time previous to the target time. For example, the estimated joint angular velocity corresponding to the historical measured joint angular velocity may be expressed as:

θ̇_e(k−1) = θ̇(k−1) + ε(k−1),

where θ̇(k−1) is the historical measured joint angular velocity at the sampling time previous to the target time, θ̇_e(k−1) is the estimated joint angular velocity, and ε(k−1) is bounded random noise satisfying an independent and identical distribution with mean 0 and variance σ²I_{n×n}, where I is the identity matrix and n is the number of joints of the robot.
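A minimal sketch of this noise injection follows; it assumes Gaussian noise clipped to the stated bound, which is one way (not necessarily the patent's) to realize bounded i.i.d. zero-mean noise, and all parameter names are illustrative:

```python
import numpy as np

def estimated_joint_velocity(q_dot_meas, sigma, noise_bound, rng=None):
    """Historical measured joint angular velocity plus bounded, i.i.d.,
    zero-mean noise with variance sigma^2 before clipping."""
    rng = np.random.default_rng(0) if rng is None else rng
    eps = rng.normal(0.0, sigma, size=np.shape(q_dot_meas))
    eps = np.clip(eps, -noise_bound, noise_bound)  # enforce the noise bound
    return np.asarray(q_dot_meas, dtype=float) + eps
```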
3. A discrete Jacobian matrix estimator is constructed based on the historical measured acceleration, the historical measured joint angular acceleration, the historical measured velocity, and the estimated joint angular velocity.
The discrete jacobian matrix estimator is configured to obtain an estimated jacobian matrix. Illustratively, the discrete jacobian matrix estimator can be expressed as follows:
wherein,estimated jacobian matrix at the target time point>An estimated jacobian matrix representing the target moment at the last sampling moment +.>Representing the historical measured acceleration at the last sampling instant at the target instant, +>Indicating the historical measured speed at the last sampling time of the target moment, < >>For the historical measured joint angular acceleration at the last sampling time of the target moment, +.>For the historical measured joint angular velocity at the last sampling instant of the target instant,is->M is the dimension corresponding to the pose (e.g., 6 in the axial angle method), and the estimated jacobian matrix at the initial time is zero.
4. And solving by using a discrete Jacobian matrix estimator to obtain an estimated Jacobian matrix of the robot at the target time.
Optionally, based on the estimated jacobian matrix at the last sampling time of the target time, and the historical actual measured pose and the historical actual measured joint angle, solving by using a discrete jacobian matrix estimator, so as to obtain the estimated jacobian matrix of the robot at the target time.
Optionally, in the case that the target time needs to be replaced, the discrete jacobian matrix estimator may be utilized to reacquire an estimated jacobian matrix corresponding to the replaced target time, so as to perform the desired pose following at the replaced target time.
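As a rough numerical sketch of sub-steps 1 to 4 above, the following assumes an additive-noise model for the estimated joint angular velocity and an update law obtained by differentiating the kinematics $\dot r = J\dot\theta$ (so that $\ddot r = \dot J\dot\theta + J\ddot\theta$) and replacing $\dot J$ with a finite difference over the sampling interval $g$; the patent's exact estimator may differ:

```python
import numpy as np

def estimated_joint_velocity(theta_dot_meas, sigma, rng):
    """Sub-step 2 (assumed additive form): perturb the measured joint angular
    velocity with bounded i.i.d. zero-mean noise of variance sigma**2."""
    noise = np.clip(rng.normal(0.0, sigma, theta_dot_meas.shape),
                    -3.0 * sigma, 3.0 * sigma)   # clip to keep the noise bounded
    return theta_dot_meas + noise

def jacobian_estimator_step(J_prev, r_ddot, theta_ddot, theta_dot_est, g):
    """Sub-steps 3-4 (assumed form): solve
        r_ddot = ((J_new - J_prev) / g) @ theta_dot_est + J_prev @ theta_ddot
    for J_new via a rank-one right pseudoinverse of theta_dot_est."""
    residual = r_ddot - J_prev @ theta_ddot
    denom = float(theta_dot_est @ theta_dot_est)
    if denom < 1e-12:                  # near-zero joint velocity: keep estimate
        return J_prev.copy()
    return J_prev + g * np.outer(residual, theta_dot_est) / denom
```

With the zero initial estimate stated above, repeatedly applying `jacobian_estimator_step` over the measured history yields the estimated Jacobian matrix at the target instant.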
According to the embodiment of the application, the estimated Jacobian matrix of the robot is obtained based on the estimated joint angular velocity, so that the Jacobian matrix of a robot whose model is unknown can be effectively approximated, pose following control of such a robot is realized, the application range of robot control is expanded, and the control universality of the robot is improved.
And 203, constructing a quadratic programming equation of the robot based on the measured joint angle, the measured pose, the predicted pose sequence, the expected pose sequence, the predicted joint angle sequence, the predicted joint angular velocity sequence and the estimated jacobian matrix.
The quadratic programming equation may be used to obtain the predicted joint angular velocity at the target time. Illustratively, under the constraint of a constraint equation corresponding to the quadratic programming equation, minimizing the quadratic programming equation to obtain the predicted joint angular velocity at the target moment.
In the embodiment of the application, a predictive control equation of the robot can be constructed by the MPC method; the predictive control equation is then converted into a quadratic programming equation of the robot; a neural dynamics solver is then constructed from the quadratic programming equation by a neural dynamics method; and the predicted joint angular velocity at the target moment is thereby solved. Solving the quadratic programming equation by the neural dynamics method improves the accuracy of the predicted joint angular velocity and reduces the complexity of the algorithm structure, thereby improving the control accuracy and the control efficiency of the robot.
In one example, referring to fig. 3, step 203 may further comprise the following sub-steps:
203a, constructing a predictive control equation of the robot based on the measured pose, the predicted pose sequence, the expected pose sequence and the predicted joint angular velocity sequence; the predictive control equation is used for predicting a pose following error of the end effector and a predicted joint angular velocity of the robot, and the pose following error is used for representing the difference between a desired pose and the predicted pose.
In the embodiment of the present application, the above-mentioned predictive control equation is used to obtain the predicted joint angular velocity of the robot at the target time. Illustratively, minimizing the predictive control equation under the constraint of the constraint equation corresponding to the predictive control equation yields the predicted joint angular velocity at the target time. The predicted joint angular velocity at the target time can also be obtained by solving based on a predictive control equation by adopting a neural dynamics method.
The predictive control equation may be constructed according to an MPC method, and the predictive control equation is a predictive control model of the robot at the target time, and the specific process may be as follows:
1. a first matrix is constructed based on the increment between each predicted pose and the measured pose in the predicted pose sequence.
Alternatively, the first matrix may be expressed as
$Y = [y(k+1|k);\, y(k+2|k);\, \ldots;\, y(k+N|k)]$,
wherein $y(k+n|k) = r(k+n) - r(k)$ represents the increment between the predicted pose at the sampling instant of index $k+n$ (hereinafter simply referred to as the $(k+n)$-th sampling instant) and the measured pose at the target instant; $r(t) = (p(t); \psi(t))$, $t \in \{k+1, \ldots, k+N\}$, represents the predicted pose of the end effector of the robot at sampling instant $t$, $p(t)$ is the predicted position vector, and $\psi(t)$ is the predicted rotation vector.
2. A second matrix is constructed based on the increment between each desired pose and the measured pose in the desired pose sequence.
Alternatively, the second matrix may be expressed as
$Y_d = [y_d(k+1);\, y_d(k+2);\, \ldots;\, y_d(k+N)]$,
wherein $y_d(k+N) = r_d(k+N) - r(k)$; $r_d(k+N)$ is the expected pose at the $(k+N)$-th sampling instant, $r(k)$ is the actually measured pose at the target instant $k$, $r_d(k+N) = (p_d(k+N); \psi_d(k+N))$, $p_d(k+N)$ is the expected position vector at the $(k+N)$-th sampling instant, and $\psi_d(k+N)$ is the desired rotation vector at the $(k+N)$-th sampling instant.
3. Computing the difference between the first matrix and the second matrix to obtain a first difference value.
Optionally, the first difference value is used to characterize the pose tracking error of the end effector, and may be expressed as $E = Y - Y_d$, wherein $Y$ denotes the first matrix and $Y_d$ denotes the second matrix.
4. and constructing a third matrix by taking each predicted joint angular velocity in the predicted joint angular velocity sequence as an element.
Optionally, the third matrix is used to characterize the predicted joint angular velocities of the robot and may be expressed as
$U = [u(k|k);\, u(k+1|k);\, \ldots;\, u(k+N_u-1|k)]$,
wherein $u(k+N_u-1|k)$ is the predicted joint angular velocity at the $(k+N_u-1)$-th sampling instant.
5. And constructing a predictive control equation of the robot based on the first difference value and the third matrix.
Alternatively, the predictive control equation may be expressed as
$\min_{U} \; \|Y - Y_d\|_{Q}^{2} + \|U\|_{R}^{2}$,
wherein $Y$ and $Y_d$ denote the first and second matrices above, $U$ denotes the third matrix, $Q$ and $R$ are positive-definite symmetric weight matrices, and $\|\cdot\|_{Q}$, $\|\cdot\|_{R}$ denote the correspondingly weighted Euclidean norms.
Optionally, when the target time needs to be replaced, the predictive control equation needs to be reconstructed according to the replaced target time, so that the robot performs the pose following task at each successive target time until the task is finished.
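As a toy numerical sketch of sub-steps 1 to 5 above (the stacked matrix forms and the weighted-norm cost are assumptions consistent with the description, and `Q`, `R` are hypothetical diagonal weights):

```python
import numpy as np

def predictive_control_cost(pred_poses, desired_poses, r_meas, U, Q, R):
    """Assumed cost ||Y - Yd||_Q^2 + ||U||_R^2:
    Y  stacks y(k+n|k) = r(k+n) - r(k)    (first matrix),
    Yd stacks y_d(k+n) = r_d(k+n) - r(k)  (second matrix),
    e = Y - Yd is the first difference (pose tracking error),
    U  stacks the predicted joint angular velocities (third matrix)."""
    Y = np.concatenate([r - r_meas for r in pred_poses])
    Yd = np.concatenate([rd - r_meas for rd in desired_poses])
    e = Y - Yd
    return float(e @ Q @ e + U @ R @ U)
```

Minimizing this quantity over `U`, subject to the constraint equation of step 203b, is what yields the predicted joint angular velocity sequence.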
203b, constructing a constraint equation corresponding to a predictive control equation based on the actually measured joint angle and the predicted joint angular velocity sequence, and a joint angle lower limit set, a joint angle upper limit set, a joint angular velocity lower limit set and a joint angular velocity upper limit set of the robot; the constraint equation corresponding to the predictive control equation is used for constraining the joint angle and the joint angular speed of the robot.
In this embodiment of the present application, the joint angle lower limit set includes the joint angle lower limit corresponding to each joint of the robot, the joint angle upper limit set includes the joint angle upper limit corresponding to each joint, the joint angular velocity lower limit set includes the joint angular velocity lower limit corresponding to each joint, and the joint angular velocity upper limit set includes the joint angular velocity upper limit corresponding to each joint.
In one example, the construction process of the constraint equation corresponding to the predictive control equation may be as follows:
1. and constructing a first constraint matrix of the robot based on the difference value between each joint angle lower limit in the joint angle lower limit set and the actually measured joint angle.
The first constraint matrix is used to constrain the lower limit of the joint angle, which can be expressed as follows:
wherein $\theta^{-}$ is the joint angle lower limit set of the robot and $\theta(k)$ represents the actually measured joint angles corresponding to the joints at the target instant; the first constraint matrix stacks the differences $\theta^{-} - \theta(k)$.
2. And constructing a second constraint matrix of the robot based on the difference value between each joint angle upper limit in the joint angle upper limit set and the actually measured joint angle.
The second constraint matrix is used to constrain the upper limit of the joint angle, which can be expressed as follows:
wherein $\theta^{+}$ is the joint angle upper limit set of the robot; the second constraint matrix stacks the differences $\theta^{+} - \theta(k)$.
3. And constructing a third constraint matrix by taking each joint angular velocity lower limit in the joint angular velocity lower limit set as an element.
The third constraint matrix is used to constrain the lower limit of the angular velocity of the joint, which can be expressed as follows:
wherein $\dot{\theta}^{-}$ is the joint angular velocity lower limit set of the robot.
4. And constructing a fourth constraint matrix by taking each joint angular velocity upper limit in the joint angular velocity upper limit set as an element.
The fourth constraint matrix is used to constrain the upper limit of the joint angular velocity, which can be expressed as follows:
wherein $\dot{\theta}^{+}$ is the joint angular velocity upper limit set of the robot.
5. A fourth matrix of the robot is constructed based on the increment between each predicted joint angle and the measured joint angle in the sequence of predicted joint angles.
Alternatively, the fourth matrix may be expressed as
$X = [x(k+1|k);\, x(k+2|k);\, \ldots;\, x(k+N|k)]$,
wherein $x(k+n|k) = \theta(k+n) - \theta(k)$ is the increment between the predicted joint angle at the $(k+n)$-th sampling instant and the measured joint angle at the target instant $k$.
6. And constraining the fourth matrix by using the first constraint matrix and the second constraint matrix, and constraining the third matrix of the robot by using the third constraint matrix and the fourth constraint matrix to construct a constraint equation corresponding to the predictive control equation.
Alternatively, the constraint equation corresponding to the predictive control equation may be expressed as
$\theta^{-} - \theta(k) \le x(k+n|k) \le \theta^{+} - \theta(k)$ and $\dot{\theta}^{-} \le u(k+n|k) \le \dot{\theta}^{+}$,
i.e., the first and second constraint matrices bound the fourth matrix element-wise, and the third and fourth constraint matrices bound the third matrix element-wise.
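A minimal sketch of sub-steps 1 to 6, assuming the constraint matrices simply tile the per-joint bounds over the prediction horizon `N` (for the joint-angle increments) and the control horizon `Nu` (for the stacked velocities); the tiling is an assumption:

```python
import numpy as np

def constraint_matrices(theta_meas, th_lo, th_hi, w_lo, w_hi, N, Nu):
    """Returns (x_lo, x_hi, u_lo, u_hi): the first/second constraint matrices
    bound the joint-angle increments x(k+n|k) = theta(k+n) - theta(k), and the
    third/fourth bound the predicted joint angular velocities."""
    x_lo = np.tile(th_lo - theta_meas, N)   # first constraint matrix
    x_hi = np.tile(th_hi - theta_meas, N)   # second constraint matrix
    u_lo = np.tile(w_lo, Nu)                # third constraint matrix
    u_hi = np.tile(w_hi, Nu)                # fourth constraint matrix
    return x_lo, x_hi, u_lo, u_hi

def satisfies_constraints(X, U, bounds):
    """Constraint equation: x_lo <= X <= x_hi and u_lo <= U <= u_hi."""
    x_lo, x_hi, u_lo, u_hi = bounds
    return bool(np.all((x_lo <= X) & (X <= x_hi))
                and np.all((u_lo <= U) & (U <= u_hi)))
```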
and 203c, converting to obtain a quadratic programming equation of the robot according to the predictive control equation, the constraint equation and the estimated Jacobian matrix.
In one example, the above quadratic programming equation may be obtained as follows:
1. and constructing a fifth matrix of the robot according to the estimated Jacobian matrix and the sampling interval duration corresponding to the sampling time, wherein the product between the fifth matrix and a third matrix of the robot is equal to the first matrix of the robot.
The fifth matrix is used for representing the mapping relation between the third matrix and the first matrix, denoted $Y = B\,U$, wherein $Y$ is the first matrix and $U$ is the third matrix; the fifth matrix $B$ is a block lower-triangular matrix whose blocks are the estimated Jacobian matrix scaled by the sampling interval duration.
2. And constructing a first conversion matrix corresponding to the quadratic programming equation based on the fifth matrix.
The first conversion matrix may be expressed as
$D = 2\,\big(B^{\mathrm T} Q B + R\big)$,
wherein $B^{\mathrm T}$ is the transpose of $B$, and $Q$, $R$ are the weight matrices of the predictive control equation.
3. And constructing a second conversion matrix corresponding to the quadratic programming equation based on the fifth matrix and the second matrix of the robot.
The second conversion matrix may be expressed as $\rho = -2\,B^{\mathrm T} Q\, Y_d$, wherein $Y_d$ is the second matrix of the robot.
4. and according to the first conversion matrix, the second conversion matrix and the third matrix of the robot, converting to obtain a quadratic programming equation of the robot.
Alternatively, the quadratic programming equation may be expressed as
$\min_{U} \; \tfrac{1}{2}\, U^{\mathrm T} D\, U + \rho^{\mathrm T} U$,
wherein $U$ is the third matrix of the robot, $U^{\mathrm T}$ is the transpose of $U$, and $\rho^{\mathrm T}$ is the transpose of $\rho$.
5. And constructing a sixth matrix of the robot according to the sampling interval duration and the identity matrix, wherein the product between the sixth matrix and the third matrix of the robot is equal to the fourth matrix of the robot.
The sixth matrix is used for representing the mapping relation between the third matrix and the fourth matrix, denoted $X = A\,U$, wherein $X$ is the fourth matrix; the sixth matrix $A$ is a block lower-triangular matrix whose blocks are identity matrices scaled by the sampling interval duration.
6. And converting to obtain a constraint equation corresponding to the quadratic programming equation according to the sixth matrix, the constraint equation corresponding to the predictive control equation, and the third matrix.
The constraint equation corresponding to the quadratic programming equation can be expressed as
$\zeta^{-} \le G\,U \le \zeta^{+}$,
wherein $G$ is a third conversion matrix corresponding to the quadratic programming equation, constructed by stacking the sixth matrix $A$ and an identity matrix, and $\zeta^{-}$, $\zeta^{+}$ form a fourth conversion matrix corresponding to the quadratic programming equation, constructed by stacking the first and third constraint matrices and the second and fourth constraint matrices, respectively.
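Under assumptions the text does not state explicitly (the fifth matrix accumulates blocks of the estimated Jacobian scaled by the sampling interval `g` so that `Y = B @ U`, the sixth matrix accumulates `g`-scaled identity blocks so that `X = A @ U`, and the cost expands into the half-quadratic form with `D = 2(B.T @ Q @ B + R)` and `rho = -2 * B.T @ Q @ Yd`), the conversion in sub-steps 1 to 6 might be sketched as:

```python
import numpy as np

def fifth_matrix(J_hat, g, N, Nu):
    """Assumed fifth matrix B with Y = B @ U: block lower-triangular,
    block (i, j) = g * J_hat for j <= min(i, Nu - 1)."""
    m, n = J_hat.shape
    B = np.zeros((N * m, Nu * n))
    for i in range(N):
        for j in range(min(i + 1, Nu)):
            B[i * m:(i + 1) * m, j * n:(j + 1) * n] = g * J_hat
    return B

def sixth_matrix(n, g, N, Nu):
    """Assumed sixth matrix A with X = A @ U: same pattern with g * I blocks."""
    A = np.zeros((N * n, Nu * n))
    for i in range(N):
        for j in range(min(i + 1, Nu)):
            A[i * n:(i + 1) * n, j * n:(j + 1) * n] = g * np.eye(n)
    return A

def conversion_matrices(B, Yd, Q, R):
    """Assumed first/second conversion matrices, obtained by expanding
    ||B@U - Yd||_Q^2 + ||U||_R^2 into (1/2) U.T@D@U + rho.T@U form."""
    D = 2.0 * (B.T @ Q @ B + R)
    rho = -2.0 * B.T @ Q @ Yd
    return D, rho
```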
And 204, solving a quadratic programming equation to obtain the predicted joint angular velocity of the robot at the target time.
In the embodiment of the present application, the quadratic programming equation may be solved by using a neuro-dynamics method to obtain the predicted joint angular velocity of the robot at the target time, referring to fig. 4, step 204 may further include the following sub-steps:
and 204a, converting the quadratic programming equation by adopting a Lagrangian multiplier method and a nonlinear complementary problem function to obtain a nonlinear equation of the robot, wherein the theoretical value of the nonlinear equation is 0.
The Lagrange multiplier method is a method for finding the extrema of a multivariate function whose variables are subject to one or more constraints, and can be used to solve optimization problems with equality constraints. The nonlinear complementarity problem function is used to convert the quadratic programming equation into an equivalent nonlinear equation.
In one example, the nonlinear equation acquisition process may be as follows:
1. and constructing a first target matrix corresponding to the nonlinear equation by taking the third matrix of the robot and the Lagrange function multiplier as elements.
The Lagrange function multiplier refers to the multiplier corresponding to the Lagrange multiplier method; the first target matrix may be expressed as $\omega = (U; \lambda)$, wherein $\lambda$ is the Lagrange function multiplier and $U$ is the third matrix.
2. Constructing a second target matrix corresponding to the nonlinear equation based on the first conversion matrix and the third conversion matrix corresponding to the quadratic programming equation; the third conversion matrix is constructed by taking a sixth matrix and an identity matrix of the robot as elements.
The second target matrix $F$ is assembled from the first conversion matrix $D$ and the third conversion matrix $G$, together with a scalar parameter $c \in (0, 1)$.
3. Constructing a third target matrix corresponding to the nonlinear equation based on the second conversion matrix and the fourth conversion matrix corresponding to the quadratic programming equation; the fourth conversion matrix is constructed by taking a first constraint matrix, a second constraint matrix, a third constraint matrix and a fourth constraint matrix corresponding to the robot as elements.
The third target matrix $\phi(\omega)$ is constructed from the second conversion matrix $\rho$ and the fourth conversion matrix $\zeta$, wherein $\circ$ denotes the Hadamard product.
4. And according to the product between the first target matrix and the second target matrix and the third target matrix, converting to obtain a nonlinear equation of the robot.
The nonlinear equation can be expressed as $F\omega + \phi(\omega) = 0$.
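The patent does not spell out which nonlinear complementarity problem (NCP) function it uses; a standard choice, shown here purely for illustration, is the Fischer-Burmeister function, which is zero exactly when its two arguments are complementary:

```python
import numpy as np

def fischer_burmeister(a, b):
    """NCP function: phi(a, b) = a + b - sqrt(a^2 + b^2).
    phi(a, b) = 0  <=>  a >= 0, b >= 0 and a * b = 0, so inequality and
    complementarity conditions from the KKT system can be folded into a
    nonlinear equation of the form F @ omega + phi(omega) = 0."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return a + b - np.sqrt(a * a + b * b)
```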
Step 204b, constructing a neural dynamics solver of the robot according to the nonlinear equation, wherein the neural dynamics solver is used for solving the predicted joint angular velocity of the robot.
A neural dynamics method may be employed to construct the robot's neural dynamics solver from the nonlinear equation, which may include the following:
1. constructing a first target matrix of the nonlinear equation under the ith iteration and a first target matrix of the nonlinear equation under the (i+1) th iteration, wherein i is an integer.
The first target matrix at the $i$-th iteration may be denoted as $\omega_i$, and the first target matrix at the $(i+1)$-th iteration as $\omega_{i+1}$.
2. And constructing a first solving expression based on a third conversion matrix, a fourth conversion matrix and Lagrange function multipliers corresponding to the quadratic programming equation and a second target matrix corresponding to the nonlinear equation.
The first solution expression may be expressed as $g\,\big(F + H(\omega_i)\big)^{-1}$, wherein $g$ is the step size, $F$ is the second target matrix, $(\cdot)^{-1}$ denotes matrix inversion, and $H(\omega_i)$ is constructed from the third conversion matrix $G$, the fourth conversion matrix $\zeta$, the Lagrange function multiplier $\lambda$, and the third matrix $U$ of the robot, with $\oslash$ denoting Hadamard division.
3. A second solution expression is constructed based on the expression of the nonlinear equation at the ith iteration and the overall expression of the nonlinear equation from the 0 th iteration to the ith iteration.
The second solution expression may be expressed as
$\alpha\,\big(F\omega_i + \phi(\omega_i)\big) + \beta \sum_{j=0}^{i}\big(F\omega_j + \phi(\omega_j)\big)$,
wherein $\alpha$ and $\beta$ are design parameters, $F\omega_i + \phi(\omega_i)$ is the expression of the nonlinear equation at the $i$-th iteration, the summation is the overall expression of the nonlinear equation from iteration 0 to iteration $i$, and $j$ is an integer between 0 and $i$.
4. And constructing a neural dynamics solver of the robot according to the first target matrix under the ith iteration, the first target matrix under the (i+1) th iteration, the first solving expression and the second solving expression.
The neural dynamics solver can be expressed as
$\omega_{i+1} = \omega_i - g\,\big(F + H(\omega_i)\big)^{-1}\Big(\alpha\big(F\omega_i + \phi(\omega_i)\big) + \beta \sum_{j=0}^{i}\big(F\omega_j + \phi(\omega_j)\big)\Big)$.
and 204c, carrying out iterative solution by using a neuro-dynamics solver to obtain the predicted joint angular velocity of the robot at the target time.
Taking a nonlinear equation value equal to 0 as the target, iterative solution is carried out by using the neural dynamics solver to obtain the output first target matrix; the first $n \cdot N_u$ elements of the output first target matrix are determined as the output third matrix of the robot, wherein $n$ is the number of joints of the robot and $N_u$ is the number of sampling instants in the control period; and the first $n$ elements in the output third matrix are determined as the predicted joint angular velocity of the robot at the target instant.
Illustratively, taking $F\omega + \phi(\omega) = 0$ as the target, the neural dynamics solver is iterated to obtain the output $\omega$; the first $n \cdot N_u$ elements of the output $\omega$ are taken as the output $U$; and finally the first $n$ elements $u(k|k)$ of the output $U$ are determined as the predicted joint angular velocity of the robot at the target instant.
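A stripped-down sketch of the iteration in steps 204b and 204c; the patent's solver additionally uses a state-dependent preconditioner (the first solution expression) and accumulated past residuals (the second solution expression), both simplified away here, so this is only an assumed illustration for a case where the residual is known in closed form:

```python
import numpy as np

def neural_dynamics_solve(F, phi, omega0, g=0.1, iters=1000):
    """Iterate omega_{i+1} = omega_i - g * (F @ omega_i + phi(omega_i)),
    driving the residual of F @ omega + phi(omega) = 0 toward zero."""
    omega = np.asarray(omega0, dtype=float)
    for _ in range(iters):
        omega = omega - g * (F @ omega + phi(omega))
    return omega
```

The first `n` entries of the converged `omega` then correspond to `u(k|k)`, the predicted joint angular velocity applied at the target instant.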
Step 205, controlling the end effector to move along with the expected pose at the target time according to the predicted joint angular velocity at the target time.
In one example, the computer device, after acquiring the predicted joint angular velocity at the target time, transmits the predicted joint angular velocity at the target time to the robot, which controls each joint axis motion according to the predicted joint angular velocity at the target time to achieve the desired pose movement of the end effector following the target time. In some possible embodiments, the predicted joint angular velocity at the target time is obtained by the robot itself, and after the robot obtains the predicted joint angular velocity at the target time, the robot controls the movement of each joint axis according to the predicted joint angular velocity at the target time so as to realize the movement of the end effector along the expected pose at the target time.
Optionally, for the pose following task, after the pose following task at the target time is completed, the next sampling time after the target time is determined as the replaced target time, and the measured joint angle, the measured pose, the predicted pose sequence, the expected pose sequence, the predicted joint angle sequence, the predicted joint angular velocity sequence and the estimated Jacobian matrix at the replaced target time are acquired, so as to construct a predictive control equation at the replaced target time; the predictive control equation is converted into a quadratic programming equation, and finally the predicted joint angular velocity at the replaced target time is solved by the neural dynamics method to control the end effector to follow the expected pose at the replaced target time, until the pose following task is completed.
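The receding-horizon flow of step 205 and the paragraph above can be summarised in a short skeleton; `DummyRobot` and `solve_qp` are placeholders for the real hardware interface and the QP/neural-dynamics solver described earlier:

```python
import numpy as np

class DummyRobot:
    """Stand-in for the robot interface: n joints, a measurement hook, and a
    velocity command hook (a real robot exposes these via its controller)."""
    n = 2
    def __init__(self):
        self.commands = []
    def measure(self):
        return np.zeros(self.n), np.zeros(self.n)   # (joint angles, pose)
    def apply_velocity(self, u):
        self.commands.append(np.asarray(u))

def pose_following_loop(robot, solve_qp, num_steps):
    """At each sampling instant: measure, solve for the stacked predicted
    velocities U over the control period, apply only u(k|k) (the first n
    entries), then slide the horizon to the next sampling instant."""
    for k in range(num_steps):
        theta, pose = robot.measure()
        U = solve_qp(theta, pose, k)
        robot.apply_velocity(U[:robot.n])
```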
In summary, according to the technical scheme provided by the embodiment of the application, the quadratic programming equation of the robot is constructed based on the actually measured joint angle, the actually measured pose, the predicted pose sequence, the expected pose sequence, the predicted joint angular velocity sequence, the predicted joint angle sequence and the estimated jacobian matrix, and then the quadratic programming equation is solved to obtain the predicted joint angular velocity at the target moment, so that the joint constraint problem of the robot can be directly processed, the reduction of the feasible region of the joint angle is avoided, and the control accuracy of the robot is improved.
In addition, the Jacobian matrix of the model-unknown robot can be effectively approximated based on the historical actual measured pose and the historical actual measured joint angle, so that the trajectory and the pose of the model-unknown robot can be controlled in a following manner, the problem of inaccurate control caused by controlling the robot based on an inaccurate model (such as deviation between the actual model and the original model caused by factors such as low production precision and component wear of the robot) in the related technology is avoided, and the control accuracy of the robot is further improved. Meanwhile, the technical scheme provided by the embodiment of the application can be applied to any scene in which the robot model is unknown, so that the control universality of the robot is improved.
In addition, a predictive control equation of the robot is constructed by the MPC method, the predictive control equation is converted into a quadratic programming equation, the quadratic programming equation is converted into a nonlinear equation by the Lagrange multiplier method, a neural dynamics solver is constructed from the nonlinear equation by the neural dynamics method, and the neural dynamics solver is used for solving, so that an accurate predicted joint angular velocity is obtained, further improving the control accuracy of the robot.
In an exemplary embodiment, taking a 7-degree-of-freedom mechanical arm as an example, the technical solution provided in the embodiment of the present application is described with reference to fig. 5, which may include the following.
As shown in fig. 5, the robotic arm 500 includes a robotic arm 501 and an end effector 502. The robotic arm 501 includes 7 joint axes, such as 3 shoulder joint axes, 1 elbow joint axis, and 3 wrist joint axes. The end effector 502 may be moved by adjusting the 7 joint axes of the robotic arm 501. The end effector 502 may be used to perform pose following tasks such as mounting, handling, sanding, painting, drawing, inspection, and the like. Optionally, a lower controller is disposed on the mechanical arm 500, and the lower controller is correspondingly provided with an external port, and the external port is used for being connected with the computer device. The computer device may be, for example, a NUC (Next Unit of Computing, next computing unit) mini-computer.
The computer device obtains the measured pose of the end effector 502 at the current time (i.e., the target time), the measured joint angles of the 7 joint axes at the current time, the predicted pose sequence and the expected pose sequence of the end effector 502 in the predicted period corresponding to the current time, the predicted joint angle sequence of the 7 joint axes in the predicted period, and the predicted joint angular velocity sequence of the 7 joint axes in the control period corresponding to the current time, respectively.
The computer equipment constructs a discrete Jacobian matrix estimator according to the historical actual measurement pose and the historical actual measurement joint angle at the previous sampling time of the current moment, and solves the discrete Jacobian matrix estimator to obtain an estimated Jacobian matrix at the current moment.
The computer equipment adopts an MPC algorithm, and constructs a prediction control equation corresponding to the current moment and a constraint equation corresponding to the prediction control equation according to the actually measured pose, the predicted pose sequence, the expected pose sequence, the predicted joint angular velocity sequence and the actually measured joint angle corresponding to the current moment.
And the computer equipment converts the quadratic programming equation corresponding to the current moment and the constraint condition corresponding to the quadratic programming equation according to the prediction control equation corresponding to the current moment and the constraint equation corresponding to the prediction control equation.
The computer equipment adopts Lagrange multiplier method and nonlinear complementary problem function, and converts the Lagrange multiplier method and nonlinear complementary problem function into a nonlinear equation corresponding to the current moment according to a quadratic programming equation corresponding to the current moment and constraint conditions corresponding to the quadratic programming equation.
The computer equipment adopts a neural dynamics method, constructs a neural dynamics solver corresponding to the current moment according to the nonlinear equation corresponding to the current moment, and carries out iterative solution by using the neural dynamics solver to obtain the predicted joint angular velocity at the current moment.
The computer device transmits the predicted joint angular velocity at the current time to the lower controller of the mechanical arm 500, and the lower controller controls the 7 joint axes to move according to the predicted joint angular velocity at the current time, so that the end effector follows the expected pose at the current time. This process repeats until the pose following task is completed.
Referring to fig. 6, a schematic diagram of a 7-degree-of-freedom mechanical arm performing a pose following task according to an embodiment of the present application is shown. As shown in fig. 6, from the expected trajectory curve 601 and the actually measured trajectory curve 602 corresponding to the end effector, it can be seen that the technical scheme provided by the embodiment of the application accurately and effectively controls the 7-degree-of-freedom mechanical arm to complete the pose following task, reflecting the control accuracy and effectiveness of the technical scheme provided by the embodiment of the application. The abscissa and the ordinate form a two-dimensional plan view; unit: meters.
Referring to fig. 7, a schematic diagram of the position following error variation of the end effector of the 7-degree-of-freedom robotic arm provided by one embodiment of the present application is shown. Wherein the abscissa is time (unit: second) and the ordinate is displacement (unit: meter). As shown in fig. 7, according to the position following error curve 701 of the end effector in the X direction, the position following error curve 702 in the Y direction, and the position following error curve 703 in the Z direction, it can be seen that, in performing the pose following task, the position following error of the end effector is kept within an acceptable range, which embodies the control accuracy of the technical solution provided in the embodiments of the present application.
Referring to fig. 8, a schematic diagram of a desired gesture profile and an actual gesture profile corresponding to a 7-degree-of-freedom mechanical arm according to an embodiment of the present application is shown. Wherein the abscissa is time (unit: second) and the ordinate is rotation vector. As shown in fig. 8, the X-direction actual rotation vector curve 801 coincides with the X-direction desired rotation vector curve 804, the Y-direction actual rotation vector curve 802 coincides with the Y-direction desired rotation vector curve 805, and the Z-direction actual rotation vector curve 803 coincides with the Z-direction desired rotation vector curve 806.
Referring to fig. 9, a schematic diagram of a rotation vector following error change corresponding to a 7-degree-of-freedom mechanical arm according to an embodiment of the present application is shown. Wherein the abscissa is time (unit: second) and the ordinate is rotation vector following error. As shown in fig. 9, according to the rotation vector following error curve 901 of the mechanical arm in the X direction, the rotation vector following error curve 902 of the mechanical arm in the Y direction, and the rotation vector following error curve 903 of the mechanical arm in the Z direction, it can be seen that in executing the pose following task, the rotation vector following error of the mechanical arm is kept within an acceptable range, which embodies the control accuracy of the technical scheme provided in the embodiments of the present application.
Referring to fig. 10, a schematic diagram of a jacobian matrix estimation error variation corresponding to a 7-degree-of-freedom mechanical arm according to an embodiment of the present application is shown. As shown in fig. 10, according to the jacobian matrix estimation error curve 1001 corresponding to the mechanical arm, it can be seen that, in executing the pose following task, the jacobian matrix estimation error corresponding to the mechanical arm is kept within an acceptable range, which embodies the accuracy of estimating the jacobian matrix, and further embodies the accuracy of controlling the technical scheme provided by the embodiment of the present application.
Referring to fig. 11, a schematic diagram of a 7-degree-of-freedom mechanical arm performing a track following task according to another embodiment of the present application is shown. As shown in fig. 11, the 7-degree-of-freedom mechanical arm 1101 successfully completes the pose following task, which embodies the effectiveness of the technical scheme provided in the embodiment of the present application. X, Y and Z-axis are in meters for characterizing the three-dimensional space in which the 7-degree-of-freedom robotic arm 1101 is located.
Based on fig. 6 to 11, it can be seen that by adopting the technical scheme provided by the embodiment of the application, the control accuracy, the effectiveness and the universality of the robot can be effectively improved.
In summary, according to the technical scheme provided by the embodiment of the application, the quadratic programming equation of the robot is constructed based on the actually measured joint angle, the actually measured pose, the predicted pose sequence, the expected pose sequence, the predicted joint angular velocity sequence, the predicted joint angle sequence and the estimated jacobian matrix, and then the quadratic programming equation is solved to obtain the predicted joint angular velocity at the target moment, so that the joint constraint problem of the robot can be directly processed, the reduction of the feasible region of the joint angle is avoided, and the control accuracy of the robot is improved.
In addition, the jacobian matrix of a model-unknown robot can be effectively estimated based on the historical actually measured pose and the historical actually measured joint angle, so that the trajectory and the pose of the model-unknown robot can be controlled in a following manner. This avoids the problem in the related technology of inaccurate control caused by controlling the robot based on an inaccurate model (for example, deviation between the actual model and the original model caused by factors such as low production precision or component wear of the robot), and further improves the control accuracy of the robot. Meanwhile, the technical scheme provided by the embodiment of the application can be applied to any scene in which the robot model is unknown, which improves the control universality of the robot.
The following are device embodiments of the present application, which may be used to perform method embodiments of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments of the present application.
Referring to fig. 12, a block diagram of a control device of a robot according to an embodiment of the present application is shown. The device has the function of realizing the control method of the robot, and the function can be realized by hardware or by hardware executing corresponding software. The apparatus may be, or may be provided in, the computer device described above. As shown in fig. 12, the apparatus 1200 includes: a data acquisition module 1201, a matrix estimation module 1202, an equation construction module 1203, an equation solving module 1204, and a pose following module 1205.
The data acquisition module 1201 is configured to acquire an actually measured joint angle of a robot at a target time, an actually measured pose of an end effector of the robot at the target time, a predicted pose sequence and an expected pose sequence of the end effector in a predicted period, a predicted joint angle sequence of the robot in the predicted period, and a predicted joint angular velocity sequence of the robot in a control period; the prediction period includes a plurality of sampling moments after the target moment, and the control period includes a plurality of sampling moments sequentially selected from the target moment as a starting point.
A matrix estimation module 1202, configured to obtain an estimated jacobian matrix of the robot based on a historical measured pose and a historical measured joint angle at a time point previous to the target time point.
The equation construction module 1203 is configured to construct a quadratic programming equation of the robot based on the measured joint angle, the measured pose, the predicted pose sequence, the expected pose sequence, the predicted joint angle sequence, the predicted joint angular velocity sequence, and the estimated jacobian matrix.
And an equation solving module 1204, configured to solve the quadratic programming equation to obtain a predicted joint angular velocity of the robot at the target time.
The pose following module 1205 is configured to control the end effector to follow the expected pose at the target time according to the predicted joint angular velocity at the target time.
In some embodiments, as shown in fig. 13, the equation construction module 1203 includes: a prediction equation construction sub-module 1203a, a constraint equation construction sub-module 1203b, and a prediction equation conversion sub-module 1203c.
A prediction equation construction submodule 1203a, configured to construct a prediction control equation of the robot based on the measured pose, the predicted pose sequence, the expected pose sequence, and the predicted joint angular velocity sequence; the prediction control equation is used for predicting a pose following error of the end effector and a predicted joint angular speed of the robot, and the pose following error is used for representing a difference between the expected pose and the predicted pose.
A constraint equation construction submodule 1203b, configured to construct a constraint equation corresponding to the predictive control equation based on the actually measured joint angle and the predicted joint angular velocity sequence, and a joint angle lower limit set, a joint angle upper limit set, a joint angular velocity lower limit set, and a joint angular velocity upper limit set of the robot; the constraint equation corresponding to the predictive control equation is used for constraining the joint angle and the joint angular speed of the robot.
And a prediction equation conversion submodule 1203c, configured to convert to obtain a quadratic programming equation of the robot according to the prediction control equation, the constraint equation and the estimated jacobian matrix.
In some embodiments, the predictive equation construction submodule 1203a is configured to:
constructing a first matrix based on the increment between each predicted pose and the measured pose in the predicted pose sequence;
constructing a second matrix based on the increment between each expected pose in the expected pose sequence and the actually measured pose;
performing difference solving on the first matrix and the second matrix to obtain a first difference value;
constructing a third matrix by taking each predicted joint angular velocity in the predicted joint angular velocity sequence as an element;
and constructing a predictive control equation of the robot based on the first difference value and the third matrix.
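As an illustration only, the stacking of the first, second and third matrices described above can be sketched numerically. All names, shapes and the stacking order below are assumptions for the sketch, not the patent's exact formulation:

```python
import numpy as np

# Hypothetical sketch of the terms used by the predictive control equation.
def build_prediction_terms(measured_pose, predicted_poses, desired_poses,
                           predicted_joint_velocities):
    # First matrix: increment between each predicted pose and the measured pose.
    first = np.concatenate([p - measured_pose for p in predicted_poses])
    # Second matrix: increment between each expected pose and the measured pose.
    second = np.concatenate([d - measured_pose for d in desired_poses])
    # First difference: stacked pose-following errors over the prediction period.
    first_difference = first - second
    # Third matrix: predicted joint angular velocities stacked as one vector.
    third = np.concatenate(predicted_joint_velocities)
    return first_difference, third
```

The predictive control equation would then penalize `first_difference` (the pose following error) together with `third` (the control effort) over the horizon.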
In some embodiments, the constraint equation construction submodule 1203b is configured to:
constructing a first constraint matrix of the robot based on the difference between each joint angle lower limit in the joint angle lower limit set and the actually measured joint angle;
constructing a second constraint matrix of the robot based on the difference between each joint angle upper limit in the joint angle upper limit set and the measured joint angle;
Constructing a third constraint matrix by taking each joint angular velocity lower limit in the joint angular velocity lower limit set as an element;
constructing a fourth constraint matrix by taking each joint angular velocity upper limit in the joint angular velocity upper limit set as an element;
constructing a fourth matrix of the robot based on the increment between each predicted joint angle and the measured joint angle in the sequence of predicted joint angles;
and constraining the fourth matrix by using the first constraint matrix and the second constraint matrix, and constraining the third matrix of the robot by using the third constraint matrix and the fourth constraint matrix to construct a constraint equation corresponding to the predictive control equation.
In some embodiments, the predictive equation conversion submodule 1203c is configured to:
constructing a fifth matrix of the robot according to the estimated Jacobian matrix and the sampling interval duration corresponding to the sampling time, wherein the product of the fifth matrix and a third matrix of the robot is equal to the first matrix of the robot;
constructing a first conversion matrix corresponding to the quadratic programming equation based on the fifth matrix;
constructing a second conversion matrix corresponding to the quadratic programming equation based on the fifth matrix and a second matrix of the robot;
According to the first conversion matrix, the second conversion matrix and the third matrix of the robot, a quadratic programming equation of the robot is obtained through conversion;
constructing a sixth matrix of the robot according to the sampling interval duration and the identity matrix, wherein the product of the sixth matrix and the third matrix of the robot is equal to a fourth matrix of the robot;
and converting according to the sixth matrix, the constraint equation corresponding to the predictive control equation and the third matrix to obtain the constraint equation corresponding to the quadratic programming equation.
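One way the fifth and sixth matrices might be assembled, assuming a forward-Euler discretization in which each predicted pose increment is the cumulative sum of J·q̇·τ over the horizon. The Kronecker-product construction and the QP terms below are assumptions for the sketch, not the patent's exact derivation:

```python
import numpy as np

def build_qp_terms(J_hat, tau, N_u, second):
    """J_hat: (m, n) estimated jacobian; tau: sampling interval duration;
    N_u: number of sampling instants in the control period; second: stacked
    expected-pose increments (length m * N_u)."""
    m, n = J_hat.shape
    # Fifth matrix: block lower-triangular stack of tau * J_hat, chosen so
    # that fifth @ third equals the first matrix (predicted pose increments).
    fifth = np.kron(np.tril(np.ones((N_u, N_u))), tau * J_hat)
    # First conversion matrix: the quadratic (Hessian) term of the QP.
    H = fifth.T @ fifth
    # Second conversion matrix: the linear term, built from fifth and second.
    g = -fifth.T @ second
    # Sixth matrix: the same structure built from the identity, so that
    # sixth @ third equals the fourth matrix (joint-angle increments).
    sixth = np.kron(np.tril(np.ones((N_u, N_u))), tau * np.eye(n))
    return fifth, H, g, sixth
```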
In some embodiments, as shown in fig. 13, the equation solving module 1204 includes: nonlinear equation acquisition submodule 1204a, solver building submodule 1204b, and equation solving submodule 1204c.
And the nonlinear equation acquisition submodule 1204a is used for converting the quadratic programming equation by adopting a Lagrangian multiplier method and a nonlinear complementary problem function to obtain a nonlinear equation of the robot, wherein the theoretical value of the nonlinear equation is 0.
A solver construction submodule 1204b is configured to construct a neural dynamics solver of the robot according to the nonlinear equation, where the neural dynamics solver is configured to solve the predicted joint angular velocity of the robot.
And an equation solving sub-module 1204c, configured to perform iterative solution by using the neural dynamics solver, so as to obtain the predicted joint angular velocity of the robot at the target time.
In some embodiments, the nonlinear equation acquisition submodule 1204a is configured to:
constructing a first target matrix corresponding to the nonlinear equation by taking a third matrix of the robot and Lagrangian function multipliers as elements;
constructing a second target matrix corresponding to the nonlinear equation based on a first conversion matrix and a third conversion matrix corresponding to the quadratic programming equation; the third conversion matrix is constructed by taking a sixth matrix and an identity matrix of the robot as elements;
constructing a third target matrix corresponding to the nonlinear equation based on a second conversion matrix and a fourth conversion matrix corresponding to the quadratic programming equation; the fourth conversion matrix is constructed by taking a first constraint matrix, a second constraint matrix, a third constraint matrix and a fourth constraint matrix corresponding to the robot as elements;
and according to the product between the first target matrix and the second target matrix and the third target matrix, converting to obtain a nonlinear equation of the robot.
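The excerpt does not name the specific nonlinear complementarity (NCP) function used; a common choice in the literature is the Fischer–Burmeister function, shown here purely as an assumed illustration of how an inequality constraint and its Lagrange multiplier can be folded into an equation whose theoretical value is 0 at the optimum:

```python
import numpy as np

def fischer_burmeister(a, b):
    # phi(a, b) = 0 exactly when a >= 0, b >= 0 and a * b = 0, which encodes
    # the complementarity condition between a constraint slack and its
    # Lagrange multiplier as a single smooth-ish nonlinear equation.
    return a + b - np.sqrt(a * a + b * b)
```

Applied element-wise to the multiplier/slack pairs, and stacked with the stationarity rows of the Lagrangian, this yields a nonlinear equation of the robot whose root is the QP solution.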
In some embodiments, the solver builds sub-module 1204b to:
constructing a first target matrix of the nonlinear equation under the ith iteration and a first target matrix of the nonlinear equation under the (i+1) th iteration, wherein i is an integer;
constructing a first solving expression based on a third conversion matrix, a fourth conversion matrix and a Lagrange function multiplier corresponding to the quadratic programming equation and a second target matrix corresponding to the nonlinear equation;
constructing a second solving expression based on the expression of the nonlinear equation under the ith iteration and the integral expression of the nonlinear equation under the iteration from the 0 th iteration to the ith iteration;
and constructing a neural dynamics solver of the robot according to the first target matrix under the ith iteration, the first target matrix under the (i+1) th iteration, the first solving expression and the second solving expression.
In some embodiments, the equation solving sub-module 1204c is configured to:
iteratively solving by using the neural dynamics solver with the value of the nonlinear equation equal to 0 as a target to obtain the output first target matrix;
determining the first nN_u elements in the output first target matrix as the output third matrix of the robot; where n is the number of joints of the robot and N_u is the number of sampling instants in the control period;
and determining the first n elements in the output third matrix of the robot as the predicted joint angular velocity of the robot at the target moment.
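The element extraction described above can be sketched as follows; the variable names are assumptions:

```python
import numpy as np

def extract_predicted_velocity(first_target, n, N_u):
    # The first n * N_u elements of the solver output form the third matrix
    # (joint angular velocities stacked over the control period) ...
    third = first_target[: n * N_u]
    # ... and its first n elements are the predicted joint angular velocity
    # of the robot at the target moment.
    return third[:n]
```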
In some embodiments, the matrix estimation module 1202 is configured to:
acquiring a history measured speed and a history measured acceleration corresponding to the history measured pose, and acquiring a history measured joint angular speed and a history measured joint angular acceleration corresponding to the history measured joint angle;
acquiring an estimated joint angular velocity based on the historical measured joint angular velocity and bounded random noise; wherein the random noise is independently and identically distributed;
constructing a discrete jacobian matrix estimator based on the historical measured acceleration, the historical measured joint angular acceleration, the historical measured speed, and the estimated joint angular speed;
and solving by using the discrete Jacobian matrix estimator to obtain an estimated Jacobian matrix of the robot at the target time.
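The discrete jacobian matrix estimator itself is not spelled out in this excerpt. As an assumed stand-in for the idea, a Broyden-style rank-one update driven by the measured task-space and joint velocities, with bounded i.i.d. excitation noise as described above, behaves similarly:

```python
import numpy as np

def update_jacobian(J_prev, x_dot, q_dot, rng, noise_bound=1e-3):
    # Estimated joint angular velocity: the measurement plus bounded i.i.d.
    # noise, which keeps the update excited even when q_dot barely changes.
    q_est = q_dot + rng.uniform(-noise_bound, noise_bound, size=q_dot.shape)
    # Rank-one (Broyden-style) correction driving J_hat @ q_est toward x_dot.
    residual = x_dot - J_prev @ q_est
    return J_prev + np.outer(residual, q_est) / (q_est @ q_est + 1e-12)
```

Iterating this update over a stream of historical measurements makes the estimate converge toward the true jacobian without any kinematic model of the robot, which matches the model-unknown setting of the scheme.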
In summary, according to the technical scheme provided by the embodiment of the application, the quadratic programming equation of the robot is constructed based on the actually measured joint angle, the actually measured pose, the predicted pose sequence, the expected pose sequence, the predicted joint angular velocity sequence, the predicted joint angle sequence and the estimated jacobian matrix, and the quadratic programming equation is then solved to obtain the predicted joint angular velocity at the target moment. The joint constraint problem of the robot can thus be handled directly, the reduction of the feasible region of the joint angle is avoided, and the control accuracy of the robot is improved.
In addition, the jacobian matrix of a model-unknown robot can be effectively estimated based on the historical actually measured pose and the historical actually measured joint angle, so that the trajectory and the pose of the model-unknown robot can be controlled in a following manner. This avoids the problem in the related technology of inaccurate control caused by controlling the robot based on an inaccurate model (for example, deviation between the actual model and the original model caused by factors such as low production precision or component wear of the robot), and further improves the control accuracy of the robot. Meanwhile, the technical scheme provided by the embodiment of the application can be applied to any scene in which the robot model is unknown, which improves the control universality of the robot.
It should be noted that, when the apparatus provided in the foregoing embodiment implements its functions, the division into the foregoing functional modules is merely used as an example; in practical application, the foregoing functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided in the foregoing belong to the same concept; for their specific implementation processes, refer to the method embodiments, which are not repeated herein.
Referring to fig. 14, a block diagram of a computer device according to an embodiment of the present application is shown. The computer device may be used to implement the control method of the robot provided in the above-described embodiment. Specifically, the following may be included.
The computer apparatus 1400 includes a central processing unit (such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an FPGA (Field Programmable Gate Array), etc.) 1401, a system memory 1404 including a RAM (Random-Access Memory) 1402 and a ROM (Read-Only Memory) 1403, and a system bus 1405 connecting the system memory 1404 and the central processing unit 1401. The computer device 1400 also includes a basic input/output system (I/O system) 1406 that facilitates the transfer of information between the various devices within the server, and a mass storage device 1407 for storing an operating system 1413, application programs 1414, and other program modules 1415.
The basic input/output system 1406 includes a display 1408 for displaying information and an input device 1409, such as a mouse, keyboard, etc., for the user to input information. Wherein the display 1408 and the input device 1409 are connected to the central processing unit 1401 via an input output controller 1410 connected to the system bus 1405. The basic input/output system 1406 may also include an input/output controller 1410 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input output controller 1410 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1407 is connected to the central processing unit 1401 through a mass storage controller (not shown) connected to the system bus 1405. The mass storage device 1407 and its associated computer-readable media provide non-volatile storage for the computer device 1400. That is, the mass storage device 1407 may include a computer readable medium (not shown) such as a hard disk or CD-ROM (Compact Disc Read-Only Memory) drive.
Without loss of generality, the computer readable medium may include computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash Memory or other solid state Memory technology, CD-ROM, DVD (Digital Video Disc, high density digital video disc) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will recognize that the computer storage medium is not limited to the ones described above. The system memory 1404 and mass storage device 1407 described above may be collectively referred to as memory.
The computer device 1400 may also operate, in accordance with embodiments of the present application, by connecting through a network such as the Internet to remote computers on the network. That is, the computer device 1400 may connect to the network 1412 through a network interface unit 1411 connected to the system bus 1405, or the network interface unit 1411 may be used to connect to other types of networks or remote computer systems (not shown).
The memory further stores a computer program configured to be executed by one or more processors to implement the above control method of the robot.
In some embodiments, a computer readable storage medium is also provided, in which a computer program is stored which, when being executed by a processor, implements the above-mentioned control method of the robot.
Alternatively, the computer-readable storage medium may include: ROM (Read-Only Memory), RAM (Random-Access Memory), SSD (Solid State Drives, solid State disk), optical disk, or the like. The random access memory may include ReRAM (Resistance Random Access Memory, resistive random access memory) and DRAM (Dynamic Random Access Memory ), among others.
In some embodiments, a computer program product or computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device executes the control method of the robot.
It should be noted that the information (including but not limited to device information, personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.) and signals involved in the present application are all authorized by the subjects or fully authorized by all parties, and the collection, use and processing of the related data must comply with the relevant laws, regulations and standards of the relevant countries and regions. For example, the robot architecture, the control method of the robot, and the like referred to in the present application are all acquired with sufficient authorization.
It should be understood that references herein to "a plurality" are to two or more. "And/or" describes an association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B both exist, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects. In addition, the step numbers described herein merely illustrate one possible execution sequence of the steps; in some other embodiments, the steps may be executed out of numerical order, for example two differently numbered steps may be executed simultaneously, or in an order opposite to that shown, which is not limited by the embodiments of the present application.
The foregoing description of the exemplary embodiments of the present application is not intended to limit the application to the particular embodiments disclosed; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the application.

Claims (14)

1. A method of controlling a robot, the method comprising:
acquiring an actually measured joint angle of a robot at a target time, an actually measured pose of an end effector of the robot at the target time, a predicted pose sequence and an expected pose sequence of the end effector in a predicted period, a predicted joint angle sequence of the robot in the predicted period and a predicted joint angular velocity sequence of the robot in a control period; wherein the prediction period includes a plurality of sampling moments after the target moment, and the control period includes a plurality of sampling moments sequentially selected with the target moment as a starting point;
acquiring an estimated Jacobian matrix of the robot based on the historical actual measured pose and the historical actual measured joint angle at the previous sampling time of the target moment;
constructing a quadratic programming equation of the robot based on the measured joint angle, the measured pose, the predicted pose sequence, the expected pose sequence, the predicted joint angle sequence, the predicted joint angular velocity sequence and the estimated jacobian matrix;
Solving the quadratic programming equation to obtain the predicted joint angular velocity of the robot at the target moment;
and controlling the end effector to move along with the expected pose at the target time according to the predicted joint angular velocity at the target time.
2. The method of claim 1, wherein the constructing a quadratic programming equation for the robot based on the measured joint angle, the measured pose, the predicted pose sequence, the desired pose sequence, the predicted joint angle sequence, the predicted joint angular velocity sequence, and the estimated jacobian matrix comprises:
constructing a predictive control equation of the robot based on the measured pose, the predicted pose sequence, the expected pose sequence and the predicted joint angular velocity sequence; the prediction control equation is used for predicting a pose following error of the end effector and a predicted joint angular speed of the robot, and the pose following error is used for representing the difference between the expected pose and the predicted pose;
constructing a constraint equation corresponding to the predictive control equation based on the actually measured joint angle and the predicted joint angular velocity sequence, and a joint angle lower limit set, a joint angle upper limit set, a joint angular velocity lower limit set and a joint angular velocity upper limit set of the robot; the constraint equation corresponding to the predictive control equation is used for constraining the joint angle and the joint angular speed of the robot;
And converting to obtain a quadratic programming equation of the robot according to the predictive control equation, the constraint equation and the estimated Jacobian matrix.
3. The method of claim 2, wherein the constructing a predictive control equation for the robot based on the measured pose, the predicted pose sequence, the desired pose sequence, and the predicted joint angular velocity sequence comprises:
constructing a first matrix based on the increment between each predicted pose and the measured pose in the predicted pose sequence;
constructing a second matrix based on the increment between each expected pose in the expected pose sequence and the actually measured pose;
performing difference solving on the first matrix and the second matrix to obtain a first difference value;
constructing a third matrix by taking each predicted joint angular velocity in the predicted joint angular velocity sequence as an element;
and constructing a predictive control equation of the robot based on the first difference value and the third matrix.
4. The method of claim 2, wherein constructing constraint equations corresponding to the predictive control equations based on the measured joint angles and the predicted joint angular velocity sequence, and the lower set of joint angles, the upper set of joint angles, the lower set of joint angular velocities, and the upper set of joint angular velocities of the robot, comprises:
Constructing a first constraint matrix of the robot based on the difference between each joint angle lower limit in the joint angle lower limit set and the actually measured joint angle;
constructing a second constraint matrix of the robot based on the difference between each joint angle upper limit in the joint angle upper limit set and the measured joint angle;
constructing a third constraint matrix by taking each joint angular velocity lower limit in the joint angular velocity lower limit set as an element;
constructing a fourth constraint matrix by taking each joint angular velocity upper limit in the joint angular velocity upper limit set as an element;
constructing a fourth matrix of the robot based on the increment between each predicted joint angle and the measured joint angle in the sequence of predicted joint angles;
and constraining the fourth matrix by using the first constraint matrix and the second constraint matrix, and constraining the third matrix of the robot by using the third constraint matrix and the fourth constraint matrix to construct a constraint equation corresponding to the predictive control equation.
5. The method of claim 2, wherein said converting to a quadratic programming equation for the robot based on the predictive control equation, the constraint equation, and the estimated jacobian matrix comprises:
Constructing a fifth matrix of the robot according to the estimated Jacobian matrix and the sampling interval duration corresponding to the sampling time, wherein the product of the fifth matrix and a third matrix of the robot is equal to the first matrix of the robot;
constructing a first conversion matrix corresponding to the quadratic programming equation based on the fifth matrix;
constructing a second conversion matrix corresponding to the quadratic programming equation based on the fifth matrix and a second matrix of the robot;
according to the first conversion matrix, the second conversion matrix and the third matrix of the robot, a quadratic programming equation of the robot is obtained through conversion;
constructing a sixth matrix of the robot according to the sampling interval duration and the identity matrix, wherein the product of the sixth matrix and the third matrix of the robot is equal to a fourth matrix of the robot;
and converting according to the six matrixes, the constraint equation corresponding to the predictive control equation and the third matrix to obtain the constraint equation corresponding to the quadratic programming equation.
6. The method of claim 1, wherein solving the quadratic programming equation results in a predicted joint angular velocity of the robot at the target time, comprising:
Converting the quadratic programming equation by adopting a Lagrangian multiplier method and a nonlinear complementary problem function to obtain a nonlinear equation of the robot, wherein the theoretical value of the nonlinear equation is 0;
constructing a neural dynamics solver of the robot according to the nonlinear equation, wherein the neural dynamics solver is used for solving the predicted joint angular velocity of the robot;
and carrying out iterative solution by using the neural dynamics solver to obtain the predicted joint angular velocity of the robot at the target time.
7. The method of claim 6, wherein converting the quadratic programming equation using a lagrangian multiplier and a nonlinear complementary problem function to obtain a nonlinear equation for the robot comprises:
constructing a first target matrix corresponding to the nonlinear equation by taking a third matrix of the robot and Lagrangian function multipliers as elements;
constructing a second target matrix corresponding to the nonlinear equation based on a first conversion matrix and a third conversion matrix corresponding to the quadratic programming equation; the third conversion matrix is constructed by taking a sixth matrix and an identity matrix of the robot as elements;
Constructing a third target matrix corresponding to the nonlinear equation based on a second conversion matrix and a fourth conversion matrix corresponding to the quadratic programming equation; the fourth conversion matrix is constructed by taking a first constraint matrix, a second constraint matrix, a third constraint matrix and a fourth constraint matrix corresponding to the robot as elements;
and according to the product between the first target matrix and the second target matrix and the third target matrix, converting to obtain a nonlinear equation of the robot.
8. The method of claim 6, wherein constructing the robot's neuro-dynamics solver from the nonlinear equation comprises:
constructing a first target matrix of the nonlinear equation under the ith iteration and a first target matrix of the nonlinear equation under the (i+1) th iteration, wherein i is an integer;
constructing a first solving expression based on a third conversion matrix, a fourth conversion matrix and a Lagrange function multiplier corresponding to the quadratic programming equation and a second target matrix corresponding to the nonlinear equation;
constructing a second solving expression based on the expression of the nonlinear equation under the ith iteration and the integral expression of the nonlinear equation under the iteration from the 0 th iteration to the ith iteration;
And constructing a neural dynamics solver of the robot according to the first target matrix under the ith iteration, the first target matrix under the (i+1) th iteration, the first solving expression and the second solving expression.
9. The method of claim 8, wherein iteratively solving with the neuro-dynamic solver results in a predicted joint angular velocity of the robot at the target time, comprising:
iteratively solving by using the neural dynamics solver with the value of the nonlinear equation equal to 0 as a target to obtain the output first target matrix;
determining the first nN_u elements in the output first target matrix as the output third matrix of the robot; where n is the number of joints of the robot and N_u is the number of sampling instants in the control period;
and determining the first n elements in the output third matrix of the robot as the predicted joint angular velocity of the robot at the target moment.
10. The method of claim 1, wherein the obtaining the estimated jacobian matrix for the robot based on the historical measured pose and the historical measured joint angle at the previous sample time of the target time comprises:
Acquiring a history measured speed and a history measured acceleration corresponding to the history measured pose, and acquiring a history measured joint angular speed and a history measured joint angular acceleration corresponding to the history measured joint angle;
acquiring an estimated joint angular velocity based on the historical measured joint angular velocity and bounded random noise; wherein the random noise satisfies independent same distribution;
constructing a discrete jacobian matrix estimator based on the historical measured acceleration, the historical measured joint angular acceleration, the historical measured speed, and the estimated joint angular speed;
and solving by using the discrete Jacobian matrix estimator to obtain an estimated Jacobian matrix of the robot at the target time.
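As a hedged illustration of discrete Jacobian estimation from measured joint and end-effector velocities, the classical Broyden rank-one update is shown below. This is a well-known estimator offered only as a stand-in for the patent's discrete Jacobian matrix estimator, which additionally uses the measured accelerations and the noise-perturbed estimated joint angular velocity described in claim 10:

```python
import numpy as np

def broyden_jacobian_update(J, qdot, xdot):
    """One discrete estimation step: correct J so that it maps the
    measured joint angular velocity qdot to the measured end-effector
    velocity xdot along the direction just observed (Broyden update).
    Illustrative sketch, not the patent's estimator."""
    residual = xdot - J @ qdot
    return J + np.outer(residual, qdot) / (qdot @ qdot + 1e-12)

# Recover a constant 2x2 Jacobian from a stream of velocity pairs.
rng = np.random.default_rng(0)
J_true = np.array([[1.0, 2.0], [0.5, -1.0]])
J_est = np.zeros((2, 2))
for _ in range(200):
    qdot = rng.standard_normal(2)
    xdot = J_true @ qdot
    J_est = broyden_jacobian_update(J_est, qdot, xdot)
print(np.allclose(J_est, J_true, atol=1e-6))
```

Each update removes the estimation error along the latest motion direction, so with sufficiently rich joint motion the estimate converges to the true Jacobian.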
11. A control device for a robot, the device comprising:
the data acquisition module is used for acquiring a measured joint angle of the robot at a target time, a measured pose of an end effector of the robot at the target time, a predicted pose sequence and an expected pose sequence of the end effector in a prediction period, a predicted joint angle sequence of the robot in the prediction period, and a predicted joint angular velocity sequence of the robot in a control period; wherein the prediction period includes a plurality of sampling times after the target time, and the control period includes a plurality of sampling times sequentially selected with the target time as a starting point;
the matrix estimation module is used for acquiring an estimated Jacobian matrix of the robot based on the historical measured pose and the historical measured joint angle at the previous sampling time of the target time;
the equation construction module is used for constructing a quadratic programming equation of the robot based on the measured joint angle, the measured pose, the predicted pose sequence, the expected pose sequence, the predicted joint angle sequence, the predicted joint angular velocity sequence and the estimated Jacobian matrix;
the equation solving module is used for solving the quadratic programming equation to obtain the predicted joint angular velocity of the robot at the target time;
and the pose following module is used for controlling, according to the predicted joint angular velocity at the target time, the end effector to follow the expected pose at the target time.
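The five modules of claim 11 map naturally onto one sampling instant of a receding-horizon control loop. The skeleton below is a minimal sketch of that flow; every callable name (`estimate_jacobian`, `build_qp`, `solve_qp`, `send_command`) is a hypothetical placeholder, not a name from the patent:

```python
import numpy as np

def control_step(q_meas, x_meas, x_desired_seq, history,
                 estimate_jacobian, build_qp, solve_qp, send_command):
    """One sampling instant: (1) data acquisition via the arguments,
    (2) Jacobian estimation from the previous instant's measurements,
    (3) quadratic-programming equation construction, (4) solving for
    the predicted joint angular velocity, (5) pose following."""
    J_est = estimate_jacobian(history)                    # matrix estimation module
    qp = build_qp(q_meas, x_meas, x_desired_seq, J_est)   # equation construction module
    qdot = solve_qp(qp)                                   # equation solving module
    send_command(qdot)                                    # pose following module
    return qdot

# Trivial stub run: a proportional "QP" that just tracks the pose error.
qdot = control_step(
    q_meas=np.zeros(2),
    x_meas=np.array([0.0, 0.0]),
    x_desired_seq=[np.array([1.0, 1.0])],
    history={},
    estimate_jacobian=lambda h: np.eye(2),
    build_qp=lambda q, x, xd, J: np.linalg.pinv(J) @ (xd[0] - x),
    solve_qp=lambda qp: qp,
    send_command=lambda qdot: None,
)
print(qdot)   # -> [1. 1.]
```

In a real deployment each placeholder would be replaced by the corresponding module of the claimed device, and `control_step` would run once per sampling time.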
12. A computer device, characterized in that the computer device comprises a processor and a memory, the memory storing a computer program, and the computer program being loaded and executed by the processor to implement the robot control method according to any one of claims 1 to 10.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, and the computer program is loaded and executed by a processor to implement the robot control method according to any one of claims 1 to 10.
14. A computer program product, characterized in that the computer program product comprises computer instructions stored in a computer-readable storage medium; a processor reads the computer instructions from the computer-readable storage medium and executes the computer instructions to implement the robot control method according to any one of claims 1 to 10.
CN202211246781.3A 2022-10-12 2022-10-12 Robot control method, device, equipment and storage medium Pending CN117245640A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211246781.3A CN117245640A (en) 2022-10-12 2022-10-12 Robot control method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117245640A true CN117245640A (en) 2023-12-19

Family

ID=89130069


Similar Documents

Publication Publication Date Title
Du et al. Markerless human–manipulator interface using leap motion with interval Kalman filter and improved particle filter
Cheah et al. Adaptive vision and force tracking control for robots with constraint uncertainty
CN112549024B (en) Robot sensorless collision detection method based on time series analysis and application
Xu et al. Dynamic identification of the KUKA LBR iiwa robot with retrieval of physical parameters using global optimization
Shetab-Bushehri et al. As-rigid-as-possible shape servoing
Meeussen et al. Contact-state segmentation using particle filters for programming by human demonstration in compliant-motion tasks
CN111975771A (en) Mechanical arm motion planning method based on deviation redefinition neural network
Wang et al. Design of robotic visual servo control based on neural network and genetic algorithm
Lepora et al. Pose-based tactile servoing: Controlled soft touch using deep learning
Landgraf et al. A hybrid neural network approach for increasing the absolute accuracy of industrial robots
CN110967017B (en) Cooperative positioning method for rigid body cooperative transportation of double mobile robots
Sun Kinematics model identification and motion control of robot based on fast learning neural network
Wiedemann et al. Probabilistic modeling of gas diffusion with partial differential equations for multi-robot exploration and gas source localization
CN113910218B (en) Robot calibration method and device based on kinematic and deep neural network fusion
Yan Error recognition of robot kinematics parameters based on genetic algorithms
Shim et al. Denavit-Hartenberg notation-based kinematic constraint equations for forward kinematics of the 3–6 Stewart platform
Jia et al. Improved dynamic parameter identification method relying on proprioception for manipulators
Tian et al. A general approach for robot pose error compensation based on an equivalent joint motion error model
Kumar et al. Sensor-based estimation and control of forces and moments in multiple cooperative robots
Wang et al. A numerically stable algorithm for analytic inverse kinematics of 7-degrees-of-freedom spherical-rotational-spherical manipulators with joint limit avoidance
CN117245640A (en) Robot control method, device, equipment and storage medium
Vithani et al. Estimation of object kinematics from point data
CN117162080A (en) Control method, device, equipment, storage medium and program product for mechanical arm
Cheng et al. “Adult” robot enabled learning process in high precision assembly automation
Pankert et al. Learning Contact-Based State Estimation for Assembly Tasks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination