CN115480483A - Method, device, equipment and medium for identifying kinetic parameters of robot - Google Patents

Method, device, equipment and medium for identifying kinetic parameters of robot

Info

Publication number
CN115480483A
CN115480483A
Authority
CN
China
Prior art keywords
model
parameter
robot
identification
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110605017.XA
Other languages
Chinese (zh)
Inventor
徐佳锋
郑宇
王帅
来杰
陈科
姜鑫洋
王海涛
张竞帆
张东胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110605017.XA priority Critical patent/CN115480483A/en
Publication of CN115480483A publication Critical patent/CN115480483A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02: Adaptive control systems as above, electric
    • G05B13/04: Adaptive control systems as above, electric, involving the use of models or simulators
    • G05B13/042: Adaptive control systems as above, in which a parameter or coefficient is automatically adjusted to optimise the performance

Abstract

The application discloses a method, an apparatus, a device, and a medium for identifying kinetic parameters of a robot, and relates to the field of robot control. The method comprises: determining geometric information of the robot according to a kinematic model of the robot; constructing a dynamic model of the robot according to the geometric information; and performing parameter identification on target parameters in the dynamic model by a nonlinear optimization method to obtain identification values of the target parameters, wherein the target parameters comprise at least one of a mass parameter, a centroid position parameter, and a rotational inertia parameter of the robot.

Description

Method, device, equipment and medium for identifying kinetic parameters of robot
Technical Field
The present disclosure relates to the field of robot control, and more particularly, to a method, an apparatus, a device, and a medium for identifying kinetic parameters of a robot.
Background
Parameter identification is a technique that combines a theoretical model with experimental data for prediction. Kinetic parameter identification of a robot is the process of processing the robot's dynamic model to obtain numerical values of the kinetic parameters of the robot's links.
In the related art, a motion trajectory of the robot satisfying multiple constraints is usually designed to realize linear optimization of the dynamic model, and parameter identification is then performed on the linearly optimized model to obtain the corresponding kinetic parameters.
Linear optimization of the dynamic model requires the experimenter to have specialized knowledge of mathematics and robotics, so the difficulty of parameter identification is high.
Disclosure of Invention
The embodiments of the present application provide a method, an apparatus, a device, and a medium for identifying kinetic parameters of a robot; linear optimization of the dynamic model is not needed, and parameter identification of the dynamic model can be realized by a nonlinear optimization method. The technical scheme is as follows:
according to an aspect of the present application, there is provided a method for identifying kinetic parameters of a robot, the method including:
determining geometric information of the robot according to a kinematic model of the robot;
according to the geometric information, a dynamic model of the robot is constructed;
and performing parameter identification on target parameters in the dynamic model by adopting a nonlinear optimization method to obtain identification values of the target parameters, wherein the target parameters comprise at least one of mass parameters, centroid position parameters and rotational inertia parameters of the robot.
According to an aspect of the present application, there is provided a kinetic parameter identification device, the device comprising:
the determining module is used for determining the geometric information of the robot according to the kinematic model of the robot;
the building module is used for building a dynamic model of the robot according to the geometric information;
and the identification module is used for carrying out parameter identification on the target parameters in the dynamic model by adopting a nonlinear optimization method so as to obtain identification values of the target parameters, wherein the target parameters comprise at least one of mass parameters, centroid position parameters and rotational inertia parameters of the robot.
According to an aspect of the present application, there is provided a computer device comprising a processor and a memory, the memory having stored therein at least one program code, the program code being loaded by the processor and performing the kinetic parameter identification method as described above.
According to an aspect of the present application, there is provided a computer-readable storage medium having at least one program code stored therein, the program code being loaded and executed by a processor to implement the kinetic parameter identification method as described above.
The beneficial effects that technical scheme that this application embodiment brought include at least:
the target parameters in the dynamic model are subjected to parameter identification through a nonlinear optimization method, linear optimization of the dynamic model is not needed, information required by parameter identification can be provided through the motion track of the robot under the normal working condition, the modeling difficulty and the data acquisition difficulty of the dynamic model of the robot are reduced, and therefore the difficulty of parameter identification is reduced.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of an application scenario provided by an exemplary embodiment of the present application;
FIG. 2 is a flowchart of a method for identifying kinetic parameters of a robot according to an exemplary embodiment of the present disclosure;
FIG. 3 is a flowchart of a method for identifying kinetic parameters of a robot according to an exemplary embodiment of the present application;
FIG. 4 is a flowchart of a method for identifying kinetic parameters of a robot according to an exemplary embodiment of the present application;
FIG. 5 is a block diagram of a robot provided in an exemplary embodiment of the present application;
FIG. 6 is a diagrammatic view of a coordinate system of a robot provided in accordance with an exemplary embodiment of the present application;
FIG. 7 is a block diagram of a robot provided in an exemplary embodiment of the present application;
FIG. 8 is a block diagram of a robot provided in an exemplary embodiment of the present application;
FIG. 9 is a flowchart of a method for identifying kinetic parameters of a robot according to an exemplary embodiment of the present application;
FIG. 10 is a block diagram of a robot provided in an exemplary embodiment of the present application;
FIG. 11 is a flowchart of a method for identifying kinetic parameters of a robot according to an exemplary embodiment of the present application;
fig. 12 is a block diagram of a kinetic parameter identification apparatus of a robot according to an exemplary embodiment of the present application;
FIG. 13 is a block diagram of a computer device provided in an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, the following detailed description of the embodiments of the present application will be made with reference to the accompanying drawings.
Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision making.
Artificial intelligence is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Its infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big-data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly comprises computer vision, speech processing, natural language processing, and machine learning/deep learning.
In the case that the artificial intelligence technology is applied to control of a robot, fig. 1 shows an application scenario diagram of kinetic parameter identification of the robot provided by the embodiment of the present application.
The robot 100 has a communication connection with the computer device 200; the computer device 200 sends the desired mapping model to the robot 100, and the robot 100 controls its movement according to the desired mapping model.
Specifically, the robot 100 may be a robot of any configuration, including but not limited to at least one of a serial robot, a parallel robot, a legged robot, a wheeled robot, and a tracked robot.
Hereinafter, the robot 100 is exemplified as a wheel-legged robot.
A wheel-legged robot is a robot structure in which the movement of the robot body is controlled through a wheel structure. Because the robot's contact with the ground consists only of the wheels' contact points, balance control becomes a problem when the arrangement of the wheel structure itself is unstable.
In the embodiments of the present application, the wheel-legged robot is exemplified as a wheeled biped robot: it includes two wheels for movement, each wheel is connected to a leg structure, and the leg structures connect to the robot body, so that the motion of the robot body is driven and controlled through the two wheels. It should be understood that the wheel-legged robot in the present application is not limited to the above-described structure; a wheel-legged robot is to be understood as any robot comprising a wheel structure.
As schematically shown in fig. 1, the robot 100 includes a base portion 110 and a wheel leg portion 120.
As shown in fig. 1, the robot 100 includes four leg structures 122 in total, and every two of the four leg structures 122 are connected to one wheel 121.
Illustratively, there is a leg structure a, a leg structure B, a leg structure C, and a leg structure D, and then the leg structure a, the leg structure B are connected with the left wheel, and the leg structure C, the leg structure D are connected with the right wheel.
The leg structures A and B together with the left wheel, and the leg structures C and D together with the right wheel, form the two-leg planar parallel structure of the wheel-legged robot. The parallel legs have five rotational joints and two translational degrees of freedom, in the transverse and vertical directions respectively. Compared with a serial mechanism, the parallel mechanism has a compact structure, high rigidity, and strong load-bearing capacity, so the robot can jump higher and overcome obstacles flexibly.
Schematically, the method for identifying the kinetic parameters of the robot can be applied to the robot with any configuration.
In combination with the above, fig. 2 is a flowchart of a method for identifying kinetic parameters of a robot according to an embodiment of the present application, which may be implemented in the application scenario shown in fig. 1. As an example, the method for identifying kinetic parameters of a robot is executed by a computer device 200, as schematically shown in fig. 2, and the method includes the following steps:
step 102: according to the kinematic model of the robot, geometric information of the robot is determined.
The kinematic model describes, from a geometric perspective, how the position, velocity, and acceleration of the robot change over time; it does not involve information such as the physical properties of the robot body or the forces applied to the robot.
Based on this, geometrical information of the robot can be determined from the kinematic model.
Illustratively, the geometric information includes, but is not limited to, at least one of the following: joint angle and link form and position information of the robot. Wherein the link form and position information refers to geometric information related to the link of the robot, and includes at least one of shape and position.
Optionally, the geometric information includes joint angles, link lengths, and relative pose descriptions between the joints. The pose refers to the position and the posture of the robot. Robots are generally composed of a series of components and kinematic pairs, and are capable of realizing various complex motions and predetermined operations in three-dimensional space, and the description of the relative poses between joints is intended to describe the motions between two joints.
Illustratively, a kinematic model may be constructed based on a transformation matrix between adjacent links of the robot.
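Illustratively, the construction of a kinematic model from transformation matrices between adjacent links can be sketched as follows for a planar two-link chain (a minimal illustration only, not the disclosed embodiment; the link lengths, joint angles, and use of NumPy are assumptions):

```python
import numpy as np

def link_transform(theta, length):
    """Homogeneous transform from one link frame to the next:
    rotate by the joint angle theta, then translate by the link length."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, length * c],
                     [s,  c, length * s],
                     [0.0, 0.0, 1.0]])

def forward_kinematics(thetas, lengths):
    """Chain the per-link transforms between adjacent links to obtain
    the pose of the last link in the base frame."""
    T = np.eye(3)
    for theta, length in zip(thetas, lengths):
        T = T @ link_transform(theta, length)
    return T

# Both joints at zero: the chain is stretched along the x-axis,
# so the end point lies at the sum of the link lengths.
T = forward_kinematics([0.0, 0.0], [0.3, 0.2])
end_pos = T[:2, 2]  # end-effector position (x, y)
```

From such a model, the joint angles and the link form and position information mentioned above can be read off directly.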
Step 104: and constructing a dynamic model of the robot according to the geometric information.
The dynamic model of the robot is used to describe the relationship between the forces acting on the robot and the resulting motion. The dynamic model has various expression forms and can be constructed by various methods, such as the Lagrange equation or the Newton-Euler equation.
In order to describe the relationship between the force of the robot and the motion of the object, information such as the position and the joint angle of the robot needs to be acquired.
According to the foregoing, based on the kinematic model, geometric information of the robot may be determined. Wherein the geometric information includes but is not limited to at least one of the following information: joint angle and link form and position information of the robot. Wherein the link form and position information refers to geometric information related to the link of the robot, and includes at least one of shape and position.
Based on this, a kinetic model of the robot can be constructed.
Alternatively, the dynamic model of the robot may be constructed by: constructing a static model of the robot according to the geometric information of the robot; and constructing an inertia and friction model of the robot according to the static model of the robot.
Schematically, the static model is used to describe the forces on the robot in a static equilibrium state. The equilibrium state is determined by taking the earth as the reference frame and refers to a state in which an object is static or in uniform linear motion relative to an inertial reference frame, i.e., a state of zero acceleration. The inertia and friction model is used to describe the forces on the robot under the influence of joint friction and can be constructed from part of the parameter information in the static model.
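Illustratively, a statics model of this kind can be sketched for a planar two-link arm as follows (an assumed example, not the disclosed embodiment: the joint torques that hold the arm in static equilibrium are expressed through the mass parameters and centroid position parameters):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def gravity_torques(q, masses, com_dists, link_lengths):
    """Joint torques balancing gravity for a planar 2-link arm at rest.
    q: joint angles; masses: m_1, m_2; com_dists: centroid offsets r_c1,
    r_c2 along each link; link_lengths: geometric info (only l_1 is used)."""
    q1, q2 = q
    m1, m2 = masses
    r1, r2 = com_dists
    l1 = link_lengths[0]
    # Joint 2 supports only link 2's weight acting at its centroid.
    tau2 = m2 * G * r2 * np.cos(q1 + q2)
    # Joint 1 supports link 1's weight plus all of link 2's weight.
    tau1 = m1 * G * r1 * np.cos(q1) + m2 * G * (l1 * np.cos(q1) + r2 * np.cos(q1 + q2))
    return np.array([tau1, tau2])

# With the arm hanging straight down, gravity produces no joint torque.
tau = gravity_torques([-np.pi / 2, 0.0], [1.5, 1.0], [0.12, 0.10], [0.30, 0.25])
```

Note how the static torques depend only on the mass and centroid position parameters, which is what makes them identifiable from static or slow-motion data.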
Step 106: and performing parameter identification on the target parameters in the dynamic model by adopting a nonlinear optimization method to obtain identification values of the target parameters.
The parameter identification means that the unknown parameters in the theoretical model are identified according to the theoretical model and experimental data to obtain the determined values of the unknown parameters, so that the numerical results obtained through the theoretical model can achieve a better fitting effect.
In other words, the parameter identification of the dynamic model can obtain the identification value of the model parameter, and the identification value can make the numerical result obtained according to the dynamic model closer to the true value.
Illustratively, the target parameter may also be referred to as a kinetic parameter, the target parameter is a part or all of a model parameter in the kinetic model, and the target parameter includes at least one of a mass parameter, a centroid position parameter, and a moment of inertia parameter of the robot.
Here, the mass parameter may be denoted by $m_i$, the centroid position parameter by $r_{ci}$, and the rotational inertia parameter by $I_{ci}$, where $i$ indicates the $i$-th link and $c$ indicates the centroid coordinate system.
Specifically, the centroid position parameter refers to the coordinate values of the centroid in three-dimensional space and can be expressed as
$$r_{ci} = \begin{bmatrix} x_{ci} & y_{ci} & z_{ci} \end{bmatrix}^{T}$$
whose components are the coordinate values of the centroid on the x-axis, y-axis, and z-axis; that is, the centroid position parameter comprises three parameters. The moment of inertia is a measure of the inertia of a rigid body rotating around an axis (the tendency of a rotating object to keep its uniform circular motion or remain stationary) and can be expressed as
$$I_{ci} = \begin{bmatrix} I_{xx} & I_{xy} & I_{xz} \\ I_{xy} & I_{yy} & I_{yz} \\ I_{xz} & I_{yz} & I_{zz} \end{bmatrix}$$
which contains six independent variables; that is, the rotational inertia parameter comprises six parameters.
Equivalently, the target parameter refers to at least one of one mass parameter, three centroid position parameters, and six rotational inertia parameters.
In step 106, performing parameter identification on the target parameters in the dynamic model by a nonlinear optimization method means performing nonlinear optimization processing on the dynamic model. Here, the nonlinear optimization processing may employ a nonlinear optimization algorithm.
Optionally, after the identification value of the model parameter is obtained, the accuracy of the identification value of the model parameter may be determined, and the identification value of the target parameter in the dynamic model is determined when the model parameter meets the accuracy requirement.
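Illustratively, the nonlinear identification of step 106 can be sketched on a deliberately simplified one-link gravity model (an assumed example: the patent does not prescribe a specific solver, and SciPy's `least_squares` stands in here for "a nonlinear optimization algorithm"; the synthetic data and true parameter values are inventions for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

G = 9.81
rng = np.random.default_rng(0)

# Synthetic "measured" torques of one link under gravity:
# tau = m * G * r * cos(q), with small measurement noise added.
m_true, r_true = 2.0, 0.15
q = rng.uniform(-np.pi, np.pi, size=200)
tau_meas = m_true * G * r_true * np.cos(q) + rng.normal(0.0, 1e-3, size=q.size)

def residuals(params):
    """Difference between model-predicted and measured torques."""
    m, r = params
    return m * G * r * np.cos(q) - tau_meas

sol = least_squares(residuals, x0=[1.0, 0.1])
# In this toy model only the product m*r enters the torque,
# so only that product is identifiable from the data.
m_r_identified = sol.x[0] * sol.x[1]
```

In practice the residual would be built from the full dynamic model and all collected data sets, but the structure of the optimization is the same.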
In summary, in the method for identifying kinetic parameters of a robot provided by the embodiment of the present application, the target parameters in the dynamic model are identified by a nonlinear optimization method, so the dynamic model does not need to be linearly optimized; the motion trajectory of the robot under normal working conditions can provide the information required for parameter identification, which reduces the modeling difficulty and the data-acquisition difficulty of the robot's dynamic model and thereby reduces the difficulty of parameter identification.
In combination with the above, fig. 3 shows a flowchart of a kinetic parameter identification method provided by the embodiment of the present application, which may be implemented in the application scenario shown in fig. 1, and is implemented as an example by the computer device 200 in the kinetic parameter identification method of the robot, where the method includes the following steps:
step 202: according to the kinematic model of the robot, geometric information of the robot is determined.
Illustratively, a kinematic model may be constructed based on a transformation matrix between adjacent links of the robot.
According to the foregoing, the geometric information includes, but is not limited to, at least one of the following: joint angle and link form and position information of the robot. Wherein the link form and position information refers to geometric information related to the link of the robot, and includes at least one of shape and position.
Illustratively, step 202 is the same as step 102, and may be referred to for further description.
Step 204: and constructing a dynamic model of the robot according to the geometric information.
The dynamic model of the robot is used to describe the relationship between the forces acting on the robot and the resulting motion. The dynamic model has various expression forms and can be constructed by various methods, such as the Lagrange equation or the Newton-Euler equation.
Alternatively, the kinetic model of the robot may be constructed as follows: according to the geometric information of the robot, a static model of the robot is constructed; and constructing an inertia and friction model of the robot according to the static model of the robot.
In order to describe the relationship between the force of the robot and the motion of the object, information such as the position and the joint angle of the robot needs to be acquired. According to the foregoing, based on the kinematic model, geometric information of the robot can be determined.
Illustratively, step 204 is the same as step 104, and may be referred to for further description.
Step 206: data information for parameter identification is collected.
The parameter identification means that according to a theoretical model and experimental data, an unknown parameter in the theoretical model is identified to obtain a determined value of the unknown parameter, so that a numerical result obtained through the theoretical model can achieve a better fitting effect.
Equivalently, the parameter identification of the dynamic model can obtain the identification value of the model parameter, and the identification value can enable the numerical result obtained according to the dynamic model to be closer to the true value.
According to the foregoing, the dynamic model includes a plurality of unknown model parameters; it also involves other quantities whose values are known or measurable, and these need to be collected before parameter identification so that the dynamic model contains only the unknown model parameters.
Based on this, the data information includes identification data and verification data.
The identification data is used for carrying out parameter identification on the dynamic model, and the verification data is used for verifying the identification value of the model parameter.
Specifically, the identification data and the verification data can be collected according to the motion trajectory of the robot. The motion trajectory of the robot is also called an excitation trajectory. Optionally, the excitation trajectory includes, but is not limited to, one of the following: a sinusoidal excitation trajectory and a Fourier-series excitation trajectory.
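Illustratively, a Fourier-series excitation trajectory for one joint can be generated as follows (a common finite Fourier-series form; the coefficients, base frequency, and duration here are arbitrary assumed values, not taken from the disclosure):

```python
import numpy as np

def fourier_trajectory(t, a, b, w_f, q0=0.0):
    """Finite Fourier-series excitation trajectory for one joint:
    q(t) = q0 + sum_k [ (a_k/(k*w_f))*sin(k*w_f*t) - (b_k/(k*w_f))*cos(k*w_f*t) ].
    Returns joint angle, velocity, and acceleration (analytic derivatives)."""
    q = np.full_like(t, q0, dtype=float)
    qd = np.zeros_like(t, dtype=float)
    qdd = np.zeros_like(t, dtype=float)
    for k, (ak, bk) in enumerate(zip(a, b), start=1):
        wk = k * w_f
        q += (ak / wk) * np.sin(wk * t) - (bk / wk) * np.cos(wk * t)
        qd += ak * np.cos(wk * t) + bk * np.sin(wk * t)
        qdd += -ak * wk * np.sin(wk * t) + bk * wk * np.cos(wk * t)
    return q, qd, qdd

# A 10 s periodic trajectory with two harmonics (base frequency 0.1 Hz).
t = np.linspace(0.0, 10.0, 1001)
q, qd, qdd = fourier_trajectory(t, a=[0.5, 0.2], b=[0.3, 0.1], w_f=2.0 * np.pi * 0.1)
```

Because the velocity and acceleration are obtained analytically, they avoid the noise amplification of numerically differentiating measured joint angles.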
Specifically, during execution of the excitation trajectory by the robot, multiple sets of data comprising the joint angle $q$, the joint angular velocity $\dot{q}$, the joint angular acceleration $\ddot{q}$, and the motor current value $i$ may be collected; the identification data and the verification data are each one or more of these sets.
Step 208: and carrying out nonlinear optimization processing on the model parameters in the dynamic model according to the data information to obtain the identification values of the model parameters.
The nonlinear optimization processing refers to identification processing of model parameters in a dynamic model by using a nonlinear optimization method.
Optionally, the nonlinear optimization processing employs a nonlinear optimization algorithm. Illustratively, the nonlinear optimization algorithm includes at least one of the following: the maximum likelihood estimation method, an iterative algorithm, the variable metric method, the least squares method, the simplex search method, the complex search method, and the random search method.
Illustratively, the model parameters include target parameters. That is, the dynamic model includes a plurality of unknown model parameters, and the plurality of unknown model parameters may form a parameter set, and the target parameter is a part of the parameter set.
According to the foregoing, the target parameter may also be referred to as a dynamic parameter, the target parameter is a part or all of model parameters in a dynamic model, and the target parameter includes at least one of a mass parameter, a centroid position parameter, and a moment of inertia parameter of the robot.
Illustratively, step 208 may be implemented as follows:
substituting the joint motion parameters and the current values into the dynamic model to obtain an updated dynamic model;
and processing the updated dynamic model by adopting a nonlinear optimization algorithm to obtain an identification value of the model parameter.
Wherein the nonlinear optimization algorithm comprises at least one of the following algorithms: maximum likelihood estimation method, iterative algorithm, variable scale method, least square method, simplex searching method, complex searching method, random searching method.
Take as an example that the data information for parameter identification comprises the joint angle $q$, the joint angular velocity $\dot{q}$, the joint angular acceleration $\ddot{q}$, and the current value $i$. That is, $q$, $\dot{q}$, $\ddot{q}$, and $i$ are substituted into the dynamic model to obtain an updated dynamic model that contains only the unknown model parameters. Subsequently, a nonlinear optimization algorithm is applied to process the updated dynamic model.
Equivalently, an arg(f) function is used to find the identification values of the model parameters in the dynamic model, where the model parameters include a single parameter or a set of parameters.
Here, arg(f) denotes a function that returns the parameter or parameter set of the objective function f. Specifically, argmin(f) returns the parameter or parameter set at which the objective function f attains its minimum value.
Step 210: and determining the identification value of the target parameter under the condition that the identification value of the model parameter meets the preset precision.
After parameter identification is performed on the dynamic model, the identification values of the model parameters can be obtained. To improve the accuracy of the model parameters, the accuracy of these identification values needs to be checked.
Illustratively, the predetermined accuracy is used to determine the accuracy of the identification value of the model parameter.
According to the foregoing, the target parameter may also be referred to as a dynamic parameter, the target parameter is a part or all of model parameters in a dynamic model, and the target parameter includes at least one of a mass parameter, a centroid position parameter, and a moment of inertia parameter of the robot. That is, the target parameters are a part of the model parameters or all of the model parameters.
According to step 208, the identification values of the model parameters of the dynamic model can be obtained. On this basis, the accuracy of the model parameters is judged, and when the identification values of the model parameters meet the accuracy requirement, the values corresponding to the target parameters are determined as the identification values of the target parameters.
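Illustratively, such an accuracy check can be sketched as follows (an assumed example on the same simplified one-link gravity model: the identified values are accepted only if the root-mean-square prediction error on held-out verification data is below a preset threshold, whose value here is arbitrary):

```python
import numpy as np

G = 9.81

def predicted_torque(q, m, r):
    """Torque predicted by the simplified one-link gravity model."""
    return m * G * r * np.cos(q)

def meets_accuracy(q_verify, tau_verify, m_id, r_id, rms_threshold=0.05):
    """Accept identified values only if the RMS error between predicted
    and measured torques on the verification data is below the threshold."""
    err = predicted_torque(q_verify, m_id, r_id) - tau_verify
    return bool(np.sqrt(np.mean(err ** 2)) < rms_threshold)

# Verification data generated from "true" parameters m = 2.0, r = 0.15.
q_v = np.linspace(-np.pi, np.pi, 100)
tau_v = predicted_torque(q_v, 2.0, 0.15)
accepted = meets_accuracy(q_v, tau_v, 2.0, 0.15)   # exact values pass
rejected = meets_accuracy(q_v, tau_v, 2.0, 0.30)   # wrong centroid fails
```

Keeping the verification data disjoint from the identification data guards against overfitting the model parameters to one trajectory.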
In summary, in the method for identifying kinetic parameters of a robot provided by the embodiment of the present application, the identification values of the model parameters are obtained by performing nonlinear optimization processing on the model parameters in the dynamic model, so the dynamic model does not need to be linearly optimized; the motion trajectory of the robot under normal working conditions can provide the information required for parameter identification, which reduces the modeling difficulty and the data-acquisition difficulty of the robot's dynamic model and thereby reduces the difficulty of parameter identification. Furthermore, the identification values corresponding to the target parameters are determined only when the identification values of the model parameters meet the preset accuracy, which reduces the modeling difficulty of the robot's dynamic model and the difficulty of adjusting the parameter weights, further reducing the difficulty of parameter identification.
According to the foregoing, the robot dynamics model has a plurality of construction modes, and the embodiment of the present application provides an alternative construction mode: according to the geometric information of the robot, a static model of the robot is constructed; and constructing an inertia and friction model of the robot according to the static model of the robot.
Schematically, fig. 4 shows a flowchart of a method for identifying kinetic parameters of a robot according to another embodiment of the present application, where the method may be implemented in the application scenario shown in fig. 1, and the method for identifying kinetic parameters of a robot is executed as an example by the computer device 200, and the method includes the following steps:
step 302: according to the kinematic model of the robot, geometric information of the robot is determined.
Illustratively, a kinematic model may be constructed based on a transformation matrix between adjacent links of the robot.
According to the foregoing, the geometric information includes, but is not limited to, at least one of the following: joint angles and link form and position information of the robot. Wherein the link form and position information refers to geometric information related to the link of the robot, and includes at least one of shape and position.
Illustratively, step 302 is the same as step 102, and may be referred to for further description.
Step 304: and constructing a static model of the robot according to the geometric information.
Schematically, the static model is used for describing the stress condition of the robot in a static equilibrium state.
The equilibrium state is defined with the earth as the reference frame and refers to a state in which an object is static or in uniform linear motion relative to an inertial reference frame, that is, a state of zero acceleration.
Illustratively, step 304 may be implemented as follows:
determining a mass parameter and a centroid position parameter according to the geometric information;
generating a centroid parameter term of the robot according to the mass parameter and the centroid position parameter, wherein the centroid parameter term is used for describing the balance state of each link in the robot;
and constructing the statics model according to the centroid parameter term and the centroid dynamics equation.
The mass parameter can be denoted m_i and the centroid position parameter r_ci. Taking as an example that the geometric information includes the joint angles and the link form and position information, the mass parameters and centroid position parameters can be related through the matrix p_G, where p_G is the position of the center of mass of the robot in the world coordinate system; the world coordinate system is the absolute coordinate system of the robot, and other coordinate systems can be converted to and from it.
As schematically shown in fig. 5, the robot is a wheel-leg robot, and the wheel-leg robot includes a base portion 110 and a wheel-leg portion 120.
The base portion 110 is connected with the wheel leg portion 120; the wheel leg portion 120 includes 2 wheels 121 and leg structures 122 connecting the wheels 121 to the base portion 110. Referring to fig. 1, the robot 100 includes 4 leg structures 122 in total (not all shown in fig. 5), with 2 of the 4 leg structures 122 connected to each wheel 121. Illustratively, given leg structures A, B, C and D, leg structures A and B are connected with the left wheel, and leg structures C and D are connected with the right wheel.
Optionally, the wheel-legged robot further includes a tail portion 130 connected to the base portion 110 for providing balance support for the wheel-legged robot, assisting the wheel leg portion 120 in walking, or other purposes.
Schematically, fig. 6 is a schematic diagram of a coordinate system of a robot constructed according to fig. 5.
Wherein O_mocap denotes the world coordinate system of a motion capture system (Mocap) used to measure the position and attitude of the robot, O_i denotes the joint coordinate system of the ith link of the robot, and M_i denotes a capture point fixed on the robot.
For example, M0 indicates a capture point fixed to the body portion, and M3 indicates capture points fixed to the end links. Referring to fig. 5 and 6, there are three M3 points, fixed respectively to the ends of the left and right wheels 121 and of the tail portion 130.
The matrix p_G is then the position of the robot's center of mass in the coordinate system of the motion capture system.
Applying the cross-product (hat) operation to the matrix p_G yields its skew-symmetric matrix

\[ [\,p_G\,]_\times, \qquad [\,p_G\,]_\times\, u = p_G \times u \ \text{for any vector } u. \]

That is, the centroid parameter term of the robot is generated according to the mass parameters and the centroid position parameters.
Alternatively, the centroid parameter term may be represented by the following sub-formula:

\[ p_G = \frac{\sum_i m_i\, p_{ci}}{\sum_i m_i} \]

wherein m_i is the mass parameter, r_ci is the centroid position parameter, p_ci is the position of the center of mass of the ith link in the world coordinate system (obtained from r_ci through the kinematic model), and the matrix p_G is the position of the center of mass of the robot in the world coordinate system; the centroid parameter term is the skew-symmetric matrix obtained by the cross-product operation on p_G.
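The centroid parameter term above can be illustrated in a few lines of numerical code. This is a minimal sketch rather than part of the claimed method: it assumes the world-frame centroid positions p_ci of the links have already been computed from the kinematic model, and the function names are illustrative.

```python
import numpy as np

def robot_com(masses, link_coms):
    """Whole-robot centre of mass: p_G = (sum_i m_i * p_ci) / (sum_i m_i)."""
    m = np.asarray(masses, dtype=float)
    p = np.asarray(link_coms, dtype=float)  # shape (n_links, 3), world frame
    return (m[:, None] * p).sum(axis=0) / m.sum()

def skew(v):
    """Cross-product (hat) operator: skew(v) @ u == np.cross(v, u)."""
    x, y, z = v
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])
```

Here `skew(r_i - p_G)` is exactly the skew-symmetric block used in the moment rows of the centroidal dynamics equation below.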
In the case of robot motion, there is a center-of-mass (centroidal) dynamics equation for the robot, which can be expressed as the sub-equation:

\[ \begin{bmatrix} m(\ddot p_G - g) \\ \dot L \end{bmatrix} = \sum_{i=1}^{N_C} \begin{bmatrix} I_{3\times3} \\ [\,r_i - p_G\,]_\times \end{bmatrix} f_i \]

In this formula, the first three rows are Newton's law and the last three rows are Euler's equation; together they describe the relationship between the motion of the robot and the external forces.
Wherein m is the total mass of the robot, g is the gravitational acceleration vector, \(\ddot p_G\) is the second derivative (acceleration) of the matrix p_G, L is the angular momentum of the robot body about the center of mass, I_{3×3} is the 3-dimensional identity matrix, r_i is the position of the ith contact point between the robot and the ground in the world coordinate system (the absolute coordinate system), and f_i is the force at the ith contact point.
According to the above, in the equilibrium state the object is static or in uniform linear motion with respect to the inertial reference frame; that is, the statics model is constructed in the state where the acceleration is zero. In other words, when each link of the robot is in a balanced state, \(\ddot p_G\) and \(\dot L\) are typically 0.
Based on this, when the robot remains in a stationary state, there is

\[ \ddot p_G = 0, \qquad \dot L = 0. \]
According to the centroid parameter term and the centroid dynamics equation, the expression

\[ p_G = \frac{\sum_i m_i\, p_{ci}}{\sum_i m_i} \]

is substituted into the centroidal dynamics equation to construct the statics model. The statics model may be represented by the following sub-formula:

\[ \begin{bmatrix} -mg \\ 0 \end{bmatrix} = \sum_{i=1}^{N_C} \begin{bmatrix} I_{3\times3} \\ [\,r_i - p_G\,]_\times \end{bmatrix} f_i \]

wherein m is the total mass of the robot, g is the gravitational acceleration vector, L is the angular momentum of the robot body about the center of mass, N_C is the number of contact points, I_{3×3} is the 3-dimensional identity matrix, r_i is the position of the ith contact point between the robot and the ground in the world coordinate system (the absolute coordinate system), f_i is the force at the ith contact point, and p_G is the centroid parameter term.
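To make the statics model concrete, the following sketch evaluates the 6-dimensional residual of the balance equation; at a true static equilibrium the residual is zero. The function name and the sign convention g = (0, 0, -9.81) in a z-up world frame are assumptions for illustration.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity vector, assumed z-up world frame

def static_residual(m, p_G, contacts, forces):
    """Stack of the Newton rows m*g + sum(f_i) and the Euler rows
    sum((r_i - p_G) x f_i); both vanish in static equilibrium."""
    newton = m * G + np.sum(forces, axis=0)
    euler = np.sum([np.cross(r - p_G, f) for r, f in zip(contacts, forces)],
                   axis=0)
    return np.concatenate([newton, euler])
```

For example, a single contact directly below the center of mass carrying the full weight gives a zero residual.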
Illustratively, the construction of the statics model requires that constraint conditions be satisfied. These constraints vary with the configuration of the robot and are set forth as follows:
1. General constraints.
Illustratively, the static model satisfies at least one of a geometric constraint and a first physical constraint.
The geometric constraint is used to constrain the centroid position of each link of the robot; the first physical constraint is used to constrain the masses of the individual links and the sum of the link masses.
Illustratively, the geometric constraint may be represented by the following sub-equation: r_min < r_ci < r_max. That is, the center of mass of each link of the robot is constrained to lie within a bounding box matching that link's dimensions.
Illustratively, the first physical constraint may be represented by the following sub-equation:

\[ m_i \ge 0, \qquad \sum_i m_i = m \]

where m_i and m_j are the masses of the ith and jth links, and m is the total mass of the robot. The first physical constraint is also referred to as the mass non-negativity constraint.
2. Constraint conditions corresponding to the wheel-legged robot.
Referring to fig. 5 and 7, taking the robot as a wheel-legged robot that includes links 01 and 02 on the left and right sides, where the left and right links are of the same type, the statics model additionally satisfies a first symmetric constraint.
Wherein the first symmetric constraint is used for constraining at least one of the mass and the centroid position of the same connecting rod of the wheel-legged robot.
Illustratively, the first symmetric constraint may be represented by the following sub-formula:

\[ m_{i,L} = m_{i,R}, \qquad r_{ci,L} = r_{ci,R} \]

wherein m is the mass of the link, r is the centroid position of the link (expressed in the link's own coordinate system, so that mirrored links share the same value), and the subscripts L and R indicate the left and right sides of the wheel-legged robot.
Step 306: and constructing an inertia and friction model of the robot according to the static model.
Schematically, the inertia and friction model is used for describing the stress condition of the robot under the influence of joint friction force, and can be constructed according to part of parameter information in the static model.
Based on this, step 306 may be implemented as follows:
and constructing an inertia and friction model according to the mass parameters and the centroid position parameters in the static model and a kinetic equation of the robot.
Illustratively, the dynamics equation can be represented by the following sub-equation:

\[ H(q)\ddot q + C(q,\dot q)\dot q + G(q) = \tau + \sum_{i=1}^{N_C} J_i^T f_i \]

wherein q is the joint angle, \(\dot q\) is the joint angular velocity, \(\ddot q\) is the joint angular acceleration, H(q) is the inertia matrix of the robot, C(q, \(\dot q\)) is the centrifugal force matrix of the robot, G(q) is the gravity matrix of the robot, τ is the driving torque required by the robot to maintain the current state, J_i is the Jacobian matrix of the ith contact point, and f_i is the force at the ith contact point.
Illustratively, the inertia matrix, the centrifugal force matrix and the gravity matrix may be set according to actual needs, and are not limited herein.
According to the above, the mass parameter and the centroid position parameter can be obtained by a statics model.
In combination with the mass parameters, the centroid position parameters and the dynamics equation, the inertia and friction model may be represented as the sub-equation:

\[ k_1 i = \tau_f(\dot q) + \mathrm{func}_{ID}(q, \dot q, \ddot q, I_c, r_i, f_i) \]

wherein func_ID is shorthand for the inverse dynamics equation (see the discussion below) and can be obtained from the dynamics equation, I_c is the moment-of-inertia parameter, τ_f is the joint friction torque (which contains the sign function of \(\dot q\)), \(\dot q\) is the joint angular velocity, i is the motor current, k_1 is the torque coefficient of the mapping model, r_i is the position of the ith contact point between the robot and the ground in the world coordinate system, and f_i is the force at the ith contact point.
Based on this, constructing the inertia and friction model from the mass parameters and centroid position parameters in the statics model and the dynamics equation of the robot can be realized as follows:
determining an inverse dynamics equation of the robot according to a dynamics equation, wherein the inverse dynamics equation is used for describing the rotational inertia of each connecting rod in the robot under a center of mass system;
determining a joint friction model of the robot and a mapping model corresponding to the motor current of the robot;
and generating an inertia and friction model according to the mass parameter, the centroid position parameter, the inverse dynamics equation, the joint friction model and the mapping model.
Alternatively, depending on the specific expression of the dynamics equation, the inverse dynamics equation may be represented by the following equation:

\[ \tau = \mathrm{func}_{ID}(q, \dot q, \ddot q, m_i, r_{ci}, I_c) - \sum_{i=1}^{N_C} J_i^T f_i \]

wherein q is the joint angle, \(\dot q\) is the joint angular velocity, \(\ddot q\) is the joint angular acceleration, m_i is the mass parameter, r_ci is the centroid position parameter, I_c is the moment-of-inertia parameter, τ is the driving torque required by the robot to maintain the current state, J_i is the Jacobian matrix of the ith contact point, and f_i is the force at the ith contact point.
In particular, the mass parameter m_i and the centroid position parameter r_ci can be obtained through the statics model. Therefore, in the inverse dynamics equation, the mass parameter and the centroid position parameter may be expressed either as constant values or in the form of unknown parameters; in the equation shown above, they are expressed as constant values.
Schematically, a joint friction model is used for describing joint friction force of the robot joint.
The joint friction of each joint differs according to the configuration of the robot. Specifically, joint friction arises from factors such as the transmission mode of the robot, machining and assembly errors of the joint parts, and the degree of wear. All of these factors affect the joint motion data. Illustratively, the joint motion data includes at least one of: the joint angle, the joint angular velocity, and the joint angular acceleration of the robot joint.
Therefore, it can be seen from the cause of the joint friction force that the joint friction force is correlated with the joint angle, the joint angular velocity, and the joint angular acceleration of the robot joint. That is, joint friction is affected by the joint motion data.
Alternatively, the joint friction model may be represented by the following sub-formula:

\[ \tau_f = f_c\,\mathrm{sign}(\dot q) + f_v\,\dot q \]

wherein τ_f is the joint friction torque, f_c and f_v are the Coulomb and viscous friction coefficients, sign is the sign function, and \(\dot q\) is the joint angular velocity.
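A Coulomb-plus-viscous friction term of this shape can be sketched directly; the coefficient names `f_c` and `f_v` are illustrative, and (per the note below) this is just one possible basis for the friction model.

```python
import numpy as np

def friction_torque(qdot, f_c=0.4, f_v=0.08):
    """Coulomb + viscous joint friction: tau_f = f_c*sign(qdot) + f_v*qdot."""
    qdot = np.asarray(qdot, dtype=float)
    return f_c * np.sign(qdot) + f_v * qdot
```

The sign term makes the friction torque always oppose the direction of joint motion, while the viscous term grows linearly with speed.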
Illustratively, the joint friction model may be described by using a polynomial base, a trigonometric function base, an exponential function base, and the like of any order, and the above formula is only an example, and the application does not limit this.
Illustratively, a mapping model corresponding to the motor current of the robot is used for describing the mapping relation between the output torque of the robot joint and the motor current, and the mapping relation can be nonlinear or linear.
Alternatively, the mapping model may be represented by the following sub-formula: τ_m = k_1 i, wherein τ_m is the output torque, k_1 is the torque coefficient, and i is the motor current.
Illustratively, similar to the joint friction model, the mapping model may also be described by bases of any order; the above formula is only an example, and the present application does not limit this.
In the control process of the robot, according to the energy conservation principle, part of the output torque of the robot joint is used for generating friction, and part of the output torque is used for generating motion.
That is, the robot joint needs to satisfy the following constraint condition:

\[ \tau_m = \tau_f + \tau + \sum_{i=1}^{N_C} J_i^T f_i \]

wherein τ_m is the output torque, τ_f is the joint friction torque, J_i is the Jacobian matrix of the ith contact point of the robot, and f_i is the force at the ith contact point. The output torque corresponds to the sum of the joint friction, the driving torque, and the contribution of the contact forces.
In the case where the mass parameter and the centroid position parameter are expressed as fixed values, τ_m = τ_f + func_ID(I_c) is obtained from the above constraint condition and the inverse dynamics equation.
Combining the joint friction model, the mapping model, and τ_m = τ_f + func_ID(I_c) yields the inertia and friction model, which can be represented by the following sub-equation:

\[ k_1 i = \tau_f(\dot q) + \mathrm{func}_{ID}(q, \dot q, \ddot q, I_c, r_i, f_i) \]

wherein func_ID is the inverse dynamics equation, I_c is the moment-of-inertia parameter, τ_f is the joint friction torque (which contains the sign function of \(\dot q\)), \(\dot q\) is the joint angular velocity, i is the motor current, k_1 is the torque coefficient, r_i is the position of the ith contact point between the robot and the ground in the world coordinate system, and f_i is the force at the ith contact point.
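The resulting model can be read as a per-sample residual that vanishes when all parameters are correct. In the sketch below, the friction term is expanded with the assumed Coulomb-plus-viscous basis, and `tau_id` stands for the value of func_ID at the sample; all names are illustrative.

```python
import numpy as np

def torque_residual(k1, current, qdot, f_c, f_v, tau_id):
    """Residual of k1*i - tau_f(qdot) - func_ID(...); ~0 at the true parameters."""
    tau_f = f_c * np.sign(qdot) + f_v * qdot
    return (k1 * np.asarray(current, dtype=float)
            - tau_f - np.asarray(tau_id, dtype=float))
```

Stacking this residual over many samples gives the objective that the nonlinear optimization below minimizes.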
Similar to the statics model, the inertia and friction model must also satisfy constraint conditions. These vary with the configuration of the robot and are set forth as follows:
1. General constraints.
Illustratively, the inertia and friction model satisfies at least one of a first boundary constraint and a second physical constraint.
Wherein the first boundary constraint is used for constraining the boundary of the moment of inertia of each connecting rod of the robot; and the second physical constraint is used for constraining the rotating shaft corresponding to the moment of inertia of each connecting rod of the robot.
Illustratively, the first boundary constraint may be represented by the following sub-formula:

\[ I_{min} \le I_c^i \le I_{max}, \qquad \theta_{min} \le (k, a, b, c, d) \le \theta_{max} \]

wherein I_c^i is the moment of inertia of the ith link, and k, a, b, c and d are model parameters of the inertia and friction model of the ith link. Equivalently, the values of the rotational inertia of the robot and of the parameters in the inertia and friction model are bounded.
Illustratively, the second physical constraint may be represented by the following sub-formula:

\[ I_{xx}^i + I_{yy}^i \ge I_{zz}^i, \qquad I_{yy}^i + I_{zz}^i \ge I_{xx}^i, \qquad I_{zz}^i + I_{xx}^i \ge I_{yy}^i \]

wherein I_xx^i, I_yy^i and I_zz^i are the moments of inertia of the ith link about the corresponding rotation axes.
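This physical-consistency requirement (positivity plus the triangle inequalities on the principal moments, which any real rigid body must satisfy) is straightforward to check numerically; the function name is illustrative.

```python
def inertia_consistent(Ixx, Iyy, Izz):
    """True iff the principal moments of inertia satisfy positivity and
    the triangle inequalities required of a physical rigid body."""
    return (min(Ixx, Iyy, Izz) > 0.0
            and Ixx + Iyy >= Izz
            and Iyy + Izz >= Ixx
            and Izz + Ixx >= Iyy)
```

A uniform sphere (equal moments) passes; a candidate whose moment about one axis exceeds the sum of the other two is unphysical and would be rejected by the constraint.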
2. Constraint conditions corresponding to the wheel-legged robot.
Referring to fig. 5 and 7, taking the robot as a wheel-legged robot that includes links 01 and 02 on the left and right sides, where the left and right links are of the same type, the inertia and friction model additionally satisfies a second symmetric constraint.
And the second symmetric constraint is used for constraining the moment of inertia parameter values of the same connecting rod of the wheel-legged robot.
Illustratively, the second symmetric constraint may be represented by the following sub-formula:

\[ I_c^{i,L} = I_c^{i,R} \]

wherein I_c is the moment-of-inertia parameter of the link, and the subscripts L and R indicate the left and right sides of the wheel-legged robot.
Step 308: performing parameter identification on the target parameters in the statics model and the inertia and friction model by a nonlinear optimization method to obtain identification values of the target parameters.
According to the above, the parameter identification of the static model and the inertia and friction model can obtain the identification value of the model parameter, and the identification value can make the numerical result obtained according to the static model and the inertia and friction model closer to the true value.
Optionally, after the identification value of the model parameter is obtained, the accuracy of the identification value of the model parameter may be determined, and the identification value of the target parameter is determined when the model parameter meets the accuracy requirement.
Illustratively, the target parameters may also be referred to as dynamics parameters; they are part or all of the model parameters in the statics model and the inertia and friction model, and include at least one of a mass parameter, a centroid position parameter, and a moment-of-inertia parameter of the robot. The mass parameter can be denoted m_i, the centroid position parameter r_ci, and the moment-of-inertia parameter I_c.
Specifically, the target parameter in the static model includes at least one of a mass parameter and a centroid position parameter, and the target parameter in the inertia and friction model includes a rotational inertia parameter.
Performing parameter identification on the target parameters in the statics model and the inertia and friction model by a nonlinear optimization method means performing nonlinear optimization processing on the statics model and on the inertia and friction model respectively. The nonlinear optimization processing may employ any nonlinear optimization algorithm.
Illustratively, step 308 may be implemented as follows:
performing parameter identification on the mass parameter and the centroid position parameter in the statics model to obtain identification values of the mass parameter and the centroid position parameter, and performing parameter identification on the moment-of-inertia parameter in the inertia and friction model to obtain an identification value of the moment-of-inertia parameter;
or,
performing parameter identification on the mass parameter and the centroid position parameter in the statics model to obtain identification values of the mass parameter and the centroid position parameter, and, when these identification values meet the preset accuracy, performing parameter identification on the moment-of-inertia parameter in the inertia and friction model to obtain an identification value of the moment-of-inertia parameter.
Equivalently, after a static model and an inertia and friction model are respectively constructed, parameter identification can be respectively carried out on the models; or after the static model is constructed, the static model is subjected to parameter identification, and the inertia and friction model is subjected to parameter identification under the condition that the obtained identification values of the mass parameter and the centroid position parameter meet the preset precision.
The specific procedures for parameter identification are described as follows:
1. Parameter identification of the statics model.
Alternatively, the statics model may be represented as:

\[ \begin{bmatrix} -mg \\ 0 \end{bmatrix} = \sum_{i=1}^{N_C} \begin{bmatrix} I_{3\times3} \\ [\,r_i - p_G\,]_\times \end{bmatrix} f_i \]

wherein m is the total mass of the robot, g is the gravitational acceleration vector, L is the angular momentum of the robot body about the center of mass, N_C is the number of contact points, I_{3×3} is the 3-dimensional identity matrix, r_i is the position of the ith contact point between the robot and the ground in the world coordinate system (the absolute coordinate system), f_i is the force at the ith contact point, and p_G is the centroid parameter term containing the mass parameter m_i and the centroid position parameter r_ci.
Based on this, the step of performing parameter identification on the mass parameter and the centroid position parameter in the static model to obtain identification values of the mass parameter and the centroid position parameter can be realized as follows:
collecting statics data of the robot;
according to the statics data, carrying out nonlinear optimization processing on a first model parameter in the statics model to obtain an identification value of the first model parameter, wherein the first model parameter comprises a mass parameter and a centroid position parameter;
and determining the identification values of the mass parameter and the centroid position parameter when the identification value of the first model parameter meets the preset accuracy.
Illustratively, the first model parameters include the mass parameter and the centroid position parameter.
According to the foregoing, the statics model includes a plurality of parameters. In addition to the centroid parameter term p_G containing the mass parameter and the centroid position parameter, it includes the total mass m of the robot, the angular momentum L of the robot body about the center of mass, the contact point positions r_i, and the contact forces f_i; these latter quantities may be regarded as part of the statics data of the robot. That is, the statics data includes, but is not limited to, at least one of the total mass of the robot, the angular momentum of the robot body about the center of mass, the contact point positions, and the contact forces.
Illustratively, there are various implementations of the collection of static data. The total mass of the robot is obtained, for example, by weighing.
Optionally, the step of collecting statics data of the robot may be implemented as follows: and acquiring static data according to detection data provided by an external detection instrument. Wherein the external detection instrument comprises at least one of: the system comprises a laser tracker, a visual camera, a motion capture system and a force measuring plate.
Referring to fig. 8, taking the robot as a wheel-legged robot, the contact point position and the acting force of the contact point can be obtained by the force measuring plate 03.
By placing the wheel-legged robot on the force measuring plate 03 and changing its configuration, for example from configuration 1 to configuration 2, the force f_i at each contact point between the wheel-legged robot and the force measuring plate 03 can be measured.
Meanwhile, according to the six-dimensional contact force information provided by the force measuring plate 03, the contact point position of the wheel-legged robot on the force measuring plate 03 under the current configuration can be calculated.
Schematically, the coordinate system of the force measuring plate 03 is defined as in fig. 6. The contact point position r_i can be calculated by the following equation:

\[ r_{ix} = -\frac{M_y}{f_{iz}}, \qquad r_{iy} = \frac{M_x}{f_{iz}}, \qquad r_{iz} = 0 \]

wherein M_y and M_x are the moment outputs of the force measuring plate 03 about the y-axis and the x-axis respectively, and f_iz is the component along the z-axis of the supporting force of the force measuring plate 03 on the robot (equivalent to the contact force f_i).
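This center-of-pressure computation is a one-liner; the sign convention below matches the formula above and is an assumption about the plate's axis orientation.

```python
import numpy as np

def contact_point(M_x, M_y, f_iz):
    """Contact location on the plate surface (z = 0) from the plate's
    moment outputs: r_ix = -M_y / f_iz, r_iy = M_x / f_iz, r_iz = 0."""
    return np.array([-M_y / f_iz, M_x / f_iz, 0.0])
```

Each new robot configuration yields new moment and force readings, and therefore a new contact point sample for the statics data set.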
After the statics data is collected, it is substituted into the statics model to obtain an updated statics model; then, the updated statics model is processed with a nonlinear optimization algorithm to obtain the identification value of the first model parameter.
Illustratively, the updated statics model may be represented as follows:

\[ \{m_i, r_{ci}\}^* = \arg\min_{\{m_i,\, r_{ci}\}} \sum_k \left\| \begin{bmatrix} -mg \\ 0 \end{bmatrix} - \sum_{i=1}^{N_C} \begin{bmatrix} I_{3\times3} \\ [\,r_i - p_G\,]_\times \end{bmatrix} f_i \right\|^2 \]

wherein the arg(f) function returns a parameter or parameter set of the objective function f; specifically, the argmin(f) function returns the parameter or parameter set at which the objective function f attains its minimum value.
That is, the updated statics model yields the identification value of each parameter in the parameter set at which the objective function takes its minimum over the collected static configurations (indexed by k). The parameter set comprises the mass parameters m_i and the centroid position parameters r_ci.
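As an illustration of this optimization step, the sketch below recovers a mass and a horizontal centroid coordinate from synthetic two-contact static trials using `scipy.optimize.least_squares`. It is a deliberately simplified planar special case with invented numbers: the vertical centroid coordinate is not observable from purely vertical contact forces, so only (m, c_x) are identified.

```python
import numpy as np
from scipy.optimize import least_squares

g = 9.81
m_true, cx_true = 10.0, 0.1          # ground-truth mass and CoM x-coordinate

# Synthetic static trials: two contacts on the x-axis carrying vertical forces
trials = []
for x1, x2 in [(-0.3, 0.4), (-0.2, 0.5), (-0.5, 0.2)]:
    f2 = m_true * g * (cx_true - x1) / (x2 - x1)  # lever rule (moment balance)
    f1 = m_true * g - f2                           # force balance
    trials.append((x1, x2, f1, f2))

def residual(theta):
    m, cx = theta
    res = []
    for x1, x2, f1, f2 in trials:
        res.append(f1 + f2 - m * g)                   # Newton row (z)
        res.append(f1 * (x1 - cx) + f2 * (x2 - cx))   # Euler row (moment about y)
    return res

sol = least_squares(residual, x0=[5.0, 0.0])
m_hat, cx_hat = sol.x
```

Because the residual is linear in (m, c_x) here, the optimizer recovers the exact values; for the full robot the same residual structure is simply nonlinear in the parameters.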
2. Parameter identification of the inertia and friction model.
Alternatively, the inertia and friction model may be expressed as:

\[ k_1 i = \tau_f(\dot q) + \mathrm{func}_{ID}(q, \dot q, \ddot q, I_c, r_i, f_i) \]

wherein func_ID is the inverse dynamics equation, I_c is the moment-of-inertia parameter, τ_f is the joint friction torque (which contains the sign function of \(\dot q\)), \(\dot q\) is the joint angular velocity, i is the motor current, k_1 is the torque coefficient, r_i is the position of the ith contact point between the robot and the ground in the world coordinate system, and f_i is the force at the ith contact point.
Based on this, the step of performing parameter identification on the inertia moment parameter in the inertia and friction model to obtain an identification value of the inertia moment parameter may be implemented as follows:
collecting joint motion data and motor current of a robot;
according to the joint motion data and the motor current, carrying out nonlinear optimization processing on a second model parameter in the inertia and friction model to obtain an identification value of the second model parameter, wherein the second model parameter comprises a rotary inertia parameter;
and determining the identification value of the moment of inertia parameter under the condition that the identification value of the second model parameter meets the preset precision.
Illustratively, the second model parameters include a rotational inertia parameter.
According to the foregoing, similarly to the parameter identification of the statics model, the inertia and friction model includes, in addition to the moment-of-inertia parameter, the joint angle q, the joint angular velocity \(\dot q\), the joint angular acceleration \(\ddot q\), and the motor current i, which are measured quantities. The joint angle q, the joint angular velocity \(\dot q\) and the joint angular acceleration \(\ddot q\) are included in the joint motion data.
Optionally, the step of collecting the joint motion data and the motor current of the robot may be implemented as follows: and collecting joint motion parameters and current values according to the motion track of the robot.
The motion trajectory of the robot is also called the excitation trajectory. Optionally, the excitation trajectory includes, but is not limited to, one of the following: a sinusoidal excitation trajectory and a Fourier-series excitation trajectory. Specifically, while the robot executes the excitation trajectory, the values of the joint angle q, the joint angular velocity \(\dot q\), the joint angular acceleration \(\ddot q\) and the motor current i can be collected.
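A finite Fourier-series excitation trajectory has the convenient property that its velocity and acceleration are available analytically rather than by numerical differentiation. The parameterization and coefficients below are assumptions for illustration.

```python
import numpy as np

def fourier_trajectory(t, a, b, w_f=2.0 * np.pi * 0.1):
    """q(t) = sum_k [a_k/(k*w_f) sin(k*w_f*t) - b_k/(k*w_f) cos(k*w_f*t)],
    returned together with its analytic first and second derivatives."""
    q = qd = qdd = 0.0
    for k, (ak, bk) in enumerate(zip(a, b), start=1):
        w = k * w_f
        q += ak / w * np.sin(w * t) - bk / w * np.cos(w * t)
        qd += ak * np.cos(w * t) + bk * np.sin(w * t)
        qdd += -ak * w * np.sin(w * t) + bk * w * np.cos(w * t)
    return q, qd, qdd
```

Sampling (q, q̇, q̈) along such a trajectory, together with the motor current, produces the data set consumed by the identification step below.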
After the joint motion parameters and current values are collected, they are substituted into the inertia and friction model to obtain an updated inertia and friction model; then, the updated inertia and friction model is processed with a nonlinear optimization algorithm to obtain the identification value of the second model parameter.
Illustratively, the updated inertia and friction model may be represented by the following sub-equation:

\[ \theta^* = \arg\min_{\theta} \sum_k \big\| k_1 i - \tau_f(\dot q) - \mathrm{func}_{ID}(q, \dot q, \ddot q, I_c, r_i, f_i) \big\|^2 \]

wherein the arg(f) function returns a parameter or parameter set of the objective function f; specifically, the argmin(f) function returns the parameter or parameter set at which the objective function f attains its minimum value.
That is, the updated inertia and friction model yields the identification value of each parameter in the parameter set θ at which the objective function takes its minimum over the collected samples (indexed by k). The parameter set comprises the moment-of-inertia parameter I_c (together with the friction and mapping coefficients).
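For intuition, the sketch below identifies the inertia and friction coefficients of a single simulated joint. It is a deliberately simplified special case of the optimization above: with the torque coefficient k_1 assumed known and no gravity or contact terms, the model is linear in the remaining parameters, so the minimum of the same squared-residual objective can be found by ordinary least squares. All numbers and names are illustrative.

```python
import numpy as np

I_true, fc_true, fv_true, k1 = 0.05, 0.4, 0.08, 2.0   # illustrative ground truth

# Samples along a sinusoidal excitation trajectory (1-DOF joint, no gravity)
t = np.linspace(0.0, 10.0, 200)
w = 2.0 * np.pi * 0.2
qd = 1.5 * np.cos(w * t)
qdd = -1.5 * w * np.sin(w * t)
tau = I_true * qdd + fc_true * np.sign(qd) + fv_true * qd
current = tau / k1                        # "measured" motor current

# k1*i = I*qdd + f_c*sign(qd) + f_v*qd is linear in (I, f_c, f_v)
A = np.column_stack([qdd, np.sign(qd), qd])
theta, *_ = np.linalg.lstsq(A, k1 * current, rcond=None)
I_hat, fc_hat, fv_hat = theta
```

With noise-free synthetic data the recovered parameters match the ground truth; with real measurements the same regressor is solved in the least-squares sense, or by the general nonlinear optimizer when k_1 and the friction basis are also unknown.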
In summary, the embodiment of the present application provides a specific construction method for the dynamics model of the robot: the statics model is constructed first, then the inertia and friction model; and two optional parameter identification procedures matching this construction are provided.
Specifically, by identifying the parameters of the statics model first and then constructing the inertia and friction model, the robot parameters are decoupled into a statically identifiable part and a dynamically identifiable part that are identified separately, which reduces the number of parameters tuned in each identification and thus the difficulty of parameter identification.
In combination with the above, fig. 9 shows a flowchart of a method for identifying the dynamics parameters of a robot according to another embodiment of the present application, which may be implemented in the application scenario shown in fig. 1; taking execution by the computer device 200 as an example, the method includes the following steps:
step 502: a transformation matrix between adjacent links of the robot is determined.
A robot is generally composed of a series of links and kinematic pairs, and each joint of the robot corresponds to a joint coordinate system. For example, the i-th joint corresponds to the i joint coordinate system, and the (i-1)-th joint corresponds to the i-1 joint coordinate system.
Schematically, a transformation matrix between adjacent links is used to convert link positions between two adjacent joint coordinate systems. That is, the position of a given point of the robot is represented differently in different joint coordinate systems, and the transformation matrix ^{i-1}T_i describes the change of the position of a point from the i joint coordinate system to the i-1 joint coordinate system.
Alternatively, the transformation matrix may be represented by the following sub-formula:

^{i-1}T_i = [[I_{3×3}, ^{i-1}p_i], [0, 1]] · [[^{i-1}R_i(q_i), 0], [0, 1]]

where i and i-1 indicate the i-th and (i-1)-th joints, q_i is the joint angle of the i-th joint, the first factor (containing ^{i-1}p_i) is the matrix describing the position of the origin of the i joint coordinate system in the i-1 joint coordinate system, I_{3×3} is the 3×3 identity matrix, and ^{i-1}R_i is the rotation matrix of the i joint coordinate system relative to the i-1 joint coordinate system.
Illustratively, the rotation matrix ^{i-1}R_i may be determined in a variety of forms, including but not limited to at least one of Euler angles, rotation vectors, and quaternions.
Referring to the coordinate system diagram shown in fig. 6 and taking Euler angles as an example, the rotation matrix can be represented by the following sub-formula:

R = R_Z(γ) · R_Y(β) · R_X(α)

where X, Y and Z indicate the respective coordinate axes, and α, β and γ indicate the rotation angles about them.
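As a small sketch of composing a rotation matrix from Euler angles: the Z-Y-X (roll-pitch-yaw) order used below is one common convention and is an assumption here, since the patent's exact composition order is not recoverable from the text.

```python
import math

def rot_x(a):  # rotation about X by angle a (radians)
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(b):  # rotation about Y
    c, s = math.cos(b), math.sin(b)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(g):  # rotation about Z
    c, s = math.cos(g), math.sin(g)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rpy_matrix(alpha, beta, gamma):
    # Z-Y-X (roll-pitch-yaw) composition; other conventions reorder the product
    return matmul(rot_z(gamma), matmul(rot_y(beta), rot_x(alpha)))

R = rpy_matrix(0.0, 0.0, math.pi / 2)  # a pure 90-degree yaw
```

With only the yaw angle nonzero, R reduces to the elementary rotation about Z.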
Step 504: and constructing a kinematic model of the robot according to the transformation matrix.
And the kinematic model is used for describing the change rule of the position, the speed and the acceleration of the robot along with time from the geometrical angle.
Referring to fig. 6 and the transformation matrix obtained in step 502, the pose transformation equation is needed for constructing the kinematic model.
Illustratively, step 504 may be implemented as follows:
and constructing a kinematic model according to the transformation matrix and a pose transformation equation of the robot.
The pose transformation equation is used for converting the position of the target point in a certain joint coordinate system into a pose in a base coordinate system.
Alternatively, the pose transformation equation may be represented by the following sub-equation:

^{O_0}P_c = ^{0}T_1 · ^{1}T_2 · … · ^{i-1}T_i · ^{i}P_c

where ^{O_0}P_c is the position of the target point c in the base coordinate system, ^{i-1}T_i is the matrix describing the change of the position of the target point c from the i joint coordinate system to the i-1 joint coordinate system, and ^{i}P_c is the position of the target point c in the i joint coordinate system.
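The chaining of transforms in the pose transformation equation can be sketched in the plane, where a homogeneous transform is a 3×3 matrix; the specific angles and offsets below are illustrative.

```python
import math

def planar_T(theta, dx, dy):
    # 3x3 planar homogeneous transform: rotate by theta, translate by (dx, dy)
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, dx], [s, c, dy], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transform_point(T, p):
    # Map a 2-D point through a homogeneous transform
    x, y = p
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])

# ^0T_1 rotates 90 degrees; ^1T_2 translates by (1, 0). The chained product
# maps the 2-frame origin into the base frame.
T = matmul(planar_T(math.pi / 2, 0.0, 0.0), planar_T(0.0, 1.0, 0.0))
p_base = transform_point(T, (0.0, 0.0))  # approximately (0, 1)
```

The same pattern extends to 4×4 spatial transforms: multiply the per-joint matrices left to right, then apply the result to the point expressed in the last frame.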
Subsequently, the pose transformation equation is rewritten into a functional form containing kinematic error terms, which can be expressed by the following sub-formula:

^{O_0}P_c = f(q + Δq, P + ΔP, α + Δα, β + Δβ, γ + Δγ)

where q is the joint angle, Δq is the joint angle error of the robot, ΔP is the relative positional error between adjacent joint coordinate systems, and Δα, Δβ, Δγ describe, in RPY Euler-angle form, the rotational deviation between adjacent joint coordinate systems.
Referring to fig. 5 and 6, O_i is the joint coordinate system identifying the i-th link of the robot, M0 indicates a capture point fixed on the body part, and M3 indicates a capture point fixed on an end link. As shown in fig. 5 and 6, there are three M3 capture points in total: two fixed to the ends of the left and right wheels 121, and one fixed to the tail 130.
According to position or attitude measurement data provided by an external measurement instrument, a kinematic model can be constructed by combining a transformation matrix and a pose transformation equation.
Illustratively, the kinematic model may be represented by the following sub-formula:

{Δq*, ΔP*, Δα*, Δβ*, Δγ*} = argmin Σ ‖ ^{O_0}P̂_{M3} − ^{O_0}P_{M3} ‖²

where ^{O_0}P̂_{M3} is the capture value, indicating the position of the capture point M3 at any time relative to the robot base coordinate system O_0, and ^{O_0}P_{M3} is the measured value, calculated according to the functional form described above.
Similar to the static model and the inertia and friction model, the construction of the kinematic model also needs to satisfy constraint conditions. The constraints of the kinematic model vary with the configuration of the robot and are set forth as follows:
1. A general constraint.
Illustratively, the kinematic model satisfies the second boundary constraint.
Wherein the second boundary constraint is for constraining a boundary of an error term corresponding to a structural installation of the robot.
Illustratively, the second boundary constraint may be represented as follows: min is less than delta q, delta P, delta alpha, delta beta and delta gamma is less than max.
2. Constraint conditions corresponding to the wheel-legged robot.

Referring to fig. 10 and taking the case where the robot is a wheel-legged robot as an example, the kinematic model satisfies at least one of a first form-and-position constraint and a second form-and-position constraint.

The first form-and-position constraint constrains the balance state of an open-chain link of the wheel-legged robot; the second form-and-position constraint constrains the closed configuration of a plurality of links of the wheel-legged robot.
Illustratively, the first form-and-position constraint may be represented by the following sub-formula:

^{O_0}P_{M3,L} = ^{O_0}P_{M3,R}

where M3 indicates a target point of the wheel-legged robot, O_0 is the base coordinate system, and L and R indicate the open-chain mechanism of the front leg or the rear leg of the wheel-legged robot through which the position is computed.
Illustratively, the second form-and-position constraint may be represented by the following equation:

[equation image: closed-chain constraint on the inner angles θ_L + Δθ_L and θ_R + Δθ_R]

where θ is an inner angle of the wheel-legged robot, Δθ is the error value of the inner angle, and L and R indicate the left side or the right side of the wheel-legged robot.
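A closed-chain constraint of this kind can be checked numerically. The following toy illustration (not the patent's equation) uses a parallelogram linkage: with ground link AD and equal cranks AB and DC at the same angle, the coupler BC must keep the ground-link length for any crank angle.

```python
import math

def closure_error(theta, crank=1.0, ground=1.0):
    # Positions of the moving joints of a parallelogram linkage
    bx, by = crank * math.cos(theta), crank * math.sin(theta)            # B
    cx, cy = ground + crank * math.cos(theta), crank * math.sin(theta)   # C
    # Closed-chain condition: |BC| must equal |AD| (the ground link)
    return abs(math.hypot(cx - bx, cy - by) - ground)

err = closure_error(0.7)  # zero (up to rounding) for any crank angle
```

In identification, such a closure residual can be added to the objective so that the optimized error terms keep the closed chain geometrically consistent.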
Step 506: and performing parameter identification on the kinematic model to obtain geometric information.
According to the foregoing, the geometric information includes, but is not limited to, at least one of the following: joint angle and link form and position information of the robot. Wherein the link form and position information refers to geometrical information related to the links of the robot, including at least one of shape and position.
Illustratively, step 506 may be implemented as follows:
determining a captured value and a measured value of a target joint point of the robot;
according to the capture value and the measured value, carrying out nonlinear optimization processing on a third model parameter in the kinematic model to obtain an identification value of the third model parameter, wherein the third model parameter comprises a parameter corresponding to the geometric information;
and determining the geometric information under the condition that the identification value of the third model parameter meets the preset precision.
Specifically, referring to fig. 5 and 6, the capture point M3 is regarded as a target joint point, and its capture value can be obtained by measurement by the motion capture system.
The measured values are determined as follows: when the robot moves along any planned trajectory, the joint angle values of the robot can be obtained by reading the joint encoders. Since the capture point is usually attached to a determined position on a link of the robot via an additionally designed high-precision calibration member, the position ^{i}P_{M3} of the capture point M3 in the corresponding joint coordinate system can be obtained by a corresponding instrument, for example directly from the CAD model. Then, substituting ^{i}P_{M3} and the joint angle values into the pose transformation equation yields the measured value ^{O_0}P_{M3}.
Illustratively, the step of performing nonlinear optimization processing on the third model parameter in the kinematic model according to the captured value and the measured value to obtain an identification value of the third model parameter may be implemented as follows:
substituting the captured value and the measured value into the kinematic model to obtain an updated kinematic model;
and processing the updated kinematic model by adopting a nonlinear optimization algorithm to obtain an identification value of a third model parameter.
And the third model parameters comprise geometrical information of the robot.
Similar to the parameter identification process of the static model and the inertia and friction model, the capture value and the measured value are substituted into the kinematic model to obtain an updated kinematic model. Illustratively, the updated kinematic model may be represented by the following sub-equation:

{Δq*, ΔP*, Δα*, Δβ*, Δγ*} = argmin Σ ‖ ^{O_0}P̂_{M3} − ^{O_0}P_{M3} ‖²

where the third model parameters in the updated kinematic model include the parameters corresponding to the geometric information.
In combination with the above, and similar to the static model and the inertia and friction model, the identification value of the third model parameter can be obtained by performing nonlinear optimization processing on the updated kinematic model. The accuracy of the identification value of the third model parameter is then judged, and the geometric information is determined when the preset accuracy is satisfied.
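The kinematic identification step can be sketched with a single planar link. The link length L, the error ground truth, and the grid search (standing in for the nonlinear optimization algorithm) are all illustrative assumptions; the structure of the fit — minimize the distance between capture positions and model-predicted positions over the error terms — follows the description above.

```python
import math

L = 0.5          # assumed link length
true_dq = 0.02   # ground-truth encoder offset, used only to synthesize data

# "Capture values": end-point positions measured at several joint angles
angles = [0.1, 0.6, 1.1, 1.6]
captured = [(L * math.cos(q + true_dq), L * math.sin(q + true_dq))
            for q in angles]

def residual(dq):
    # Sum of squared distances between captured and model-predicted positions
    r = 0.0
    for q, (cx, cy) in zip(angles, captured):
        mx, my = L * math.cos(q + dq), L * math.sin(q + dq)
        r += (cx - mx) ** 2 + (cy - my) ** 2
    return r

# Grid search over candidate offsets stands in for the nonlinear optimizer
dq_hat = min((i * 1e-3 for i in range(-50, 51)), key=residual)
```

The full problem simply adds more error terms (ΔP, Δα, Δβ, Δγ) and more capture points to the same residual.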
Step 508: and constructing a dynamic model of the robot according to the geometric information.
The dynamic model of the robot is used for describing the relation between the force of the robot and the motion of the object. The dynamic model has various expression forms, and the construction can be obtained through various methods, such as Lagrange equation or Newton Euler equation.
Alternatively, the dynamic model of the robot may be constructed by: according to the geometric information of the robot, a static model of the robot is constructed; and constructing an inertia and friction model of the robot according to the static model of the robot.
To describe the relationship between the forces acting on the robot and its motion, information such as the position and joint angles of the robot needs to be acquired. As stated above, the geometric information of the robot can be determined based on the kinematic model.
Illustratively, step 508 is the same as step 104, and may be referred to for further description.
Step 510: and performing parameter identification on the target parameters in the dynamic model by adopting a nonlinear optimization method to obtain identification values of the target parameters.
The parameter identification of the dynamic model can obtain the identification value of the model parameter, and the identification value can enable the numerical result obtained according to the dynamic model to be closer to the true value.
Illustratively, the target parameters may also be called dynamic parameters, the target parameters are part or all of model parameters in a dynamic model, and the target parameters include the mass of the robotAt least one of a parameter, a centroid location parameter, and a moment of inertia parameter. Wherein the mass parameter can be m i Representing, the centroid position parameter can be r ci Expressing, available as moment of inertia parameter I c And (4) showing.
Optionally, after the identification value of the model parameter is obtained, the accuracy of the identification value of the model parameter may be determined, and the identification value of the target parameter in the dynamic model is determined when the model parameter meets the accuracy requirement.
Illustratively, step 510 is the same as step 106, and may be referred to for further description.
Fig. 11 shows a flowchart of another method for identifying kinetic parameters of a robot. The method may be applied in the application scenario shown in fig. 1 and, taking execution by the computer device 200 as an example, includes the following steps:
step 601: kinematic identification data and verification data are collected.
Illustratively, the identification data is used for parameter identification of the kinematic model, and the verification data is used for verifying the identification value of the first model parameter in the kinematic model.
Specifically, the identification data and the verification data may be collected or determined from an external measurement instrument.
Step 602: and performing parameter identification of the kinematic model.
The kinematic model is used for describing the change rule of the position, the speed and the acceleration of the robot along with time from the geometric angle, and does not relate to the physical properties of the robot body, the force applied on the robot and other information.
Schematically, performing parameter identification on the kinematic model refers to processing the kinematic model by a nonlinear optimization method, which may be a nonlinear optimization algorithm. Illustratively, the nonlinear optimization algorithm includes at least one of the following: the maximum likelihood estimation method, an iterative algorithm, the variable metric method, the least squares method, the simplex search method, the complex search method, and the random search method.
Step 603: and judging whether the identification parameters meet the precision requirement.
After the parameter identification is performed on the kinematic model, the identification value of the first model parameter can be obtained. In order to improve the accuracy of the model parameters, the identification value of the first model parameter needs to be determined with accuracy.
Illustratively, the first model parameters include parameters corresponding to geometric information.
Based on this, in step 603, when the identification value of the model parameter meets the accuracy requirement, the parameter identification of the kinematic model is completed, the geometric information of the robot can be determined, and step 604 is executed; when it does not, the process returns to step 602.
Step 604: static identification data and verification data are collected.
Illustratively, the identification data is used for parameter identification of the static model, and the verification data is used for verifying the identification value of the second model parameter in the static model.
Step 605: and performing parameter identification of the static model.
And the static model is used for describing the stress condition of the robot in a static balance state. The equilibrium state is determined by taking the earth as a reference frame, and refers to a state that an object is in a static or uniform linear motion relative to an inertial reference frame, namely a state that the acceleration is zero.
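The kind of relation the static model expresses can be sketched for a single link: in equilibrium, the joint torque that balances gravity equals the link mass times g times the horizontal lever arm of its center of mass. The numeric values below are illustrative, not taken from the patent.

```python
def gravity_torque(m, r_x, g=9.81):
    """Static gravity torque (N*m) for mass m (kg) whose center of mass sits
    at horizontal offset r_x (m) from the joint axis."""
    return m * g * r_x

tau_static = gravity_torque(2.0, 0.1)
```

Measuring such torques in several static poses is what lets the mass and centroid position parameters be identified without exciting inertial effects.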
Schematically, the parameter identification of the static model refers to processing the static model by a nonlinear optimization method, which may be a nonlinear optimization algorithm. Illustratively, the nonlinear optimization algorithm includes at least one of the following: the maximum likelihood estimation method, an iterative algorithm, the variable metric method, the least squares method, the simplex search method, the complex search method, and the random search method.
Step 606: and judging whether the identification parameters meet the precision requirement.
After the parameter identification is performed on the static model, the identification value of the second model parameter can be obtained. In order to improve the accuracy of the model parameter, the identification value of the second model parameter needs to be determined with accuracy.
Illustratively, the second model parameter includes at least one of a mass parameter and a centroid position parameter.
Based on this, in step 606, when the identification value of the model parameter meets the accuracy requirement, the parameter identification of the static model is completed, the identification values of the mass parameter and the centroid position parameter of the robot can be determined, and step 607 is executed; when it does not, the process returns to step 605.
Step 607: and collecting identification data and verification data of inertia and friction.
Illustratively, the identification data is used for parameter identification of the inertia and friction model, and the verification data is used for verifying the identification values of the model parameters in the inertia and friction model.
Specifically, the identification data and the verification data can be collected according to the motion trail of the robot.
The motion trajectory of the robot is also called an excitation trajectory. Optionally, the excitation trajectory includes, but is not limited to, one of the following: sinusoidal excitation tracks and Fourier series excitation tracks.
Specifically, while the robot executes the excitation trajectory, multiple sets of data of joint angles, joint angular velocities, joint angular accelerations, and motor currents i may be collected; the identification data and the verification data are each one or more of these sets.
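A Fourier-series excitation trajectory of the kind mentioned above can be sketched as follows; the coefficient values and base frequency wf are illustrative assumptions, not the patent's. Such trajectories are smooth and exactly periodic, which simplifies data collection and averaging.

```python
import math

def fourier_trajectory(t, wf, a, b, q0=0.0):
    """Joint angle q(t) as a truncated Fourier series around offset q0;
    a[k], b[k] are the harmonic coefficients and wf the base frequency."""
    q = q0
    for k in range(1, len(a) + 1):
        q += (a[k - 1] / (wf * k)) * math.sin(wf * k * t)
        q -= (b[k - 1] / (wf * k)) * math.cos(wf * k * t)
    return q

wf = 2.0                                 # base frequency (rad/s)
a, b = [0.3, 0.1], [0.2, -0.05]          # illustrative harmonic coefficients
q_t = fourier_trajectory(0.3, wf, a, b)
q_t_next_period = fourier_trajectory(0.3 + 2 * math.pi / wf, wf, a, b)
# q_t and q_t_next_period agree: the trajectory repeats every 2*pi/wf seconds
```

In practice the coefficients are chosen (often themselves by optimization) to excite all the parameters being identified while respecting joint limits.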
Step 608: and performing parameter identification of the inertia and friction model.
And the inertia and friction model is used for describing the stress condition of the robot under the influence of joint friction force.
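The friction part of such a model is often written as Coulomb plus viscous friction. The sketch below assumes that common form with illustrative coefficients; the patent does not specify its friction model here, and real identification would estimate the coefficients from data.

```python
def friction_torque(qd, f_c=0.3, f_v=0.05):
    """Coulomb + viscous friction torque as a function of joint velocity qd:
    a constant opposing torque f_c*sign(qd) plus a term linear in speed."""
    sign = (qd > 0) - (qd < 0)   # sign(qd), zero at rest
    return f_c * sign + f_v * qd
```

This term is added to the inertial torque when predicting the total joint torque compared against the current-mapped measurement.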
Schematically, the parameter identification of the inertia and friction model refers to processing the inertia and friction model by a nonlinear optimization method, which may be a nonlinear optimization algorithm. Illustratively, the nonlinear optimization algorithm includes at least one of the following: the maximum likelihood estimation method, an iterative algorithm, the variable metric method, the least squares method, the simplex search method, the complex search method, and the random search method.
Step 609: and judging whether the identification parameters meet the precision requirement.
After parameter identification is performed on the inertia and friction model, the identification value of the rotational inertia parameter can be obtained. To improve the accuracy of the model parameters, the accuracy of this identification value needs to be determined.
Based on this, in step 609, when the identification value of the model parameter meets the accuracy requirement, the parameter identification of the inertia and friction model is completed; when it does not, the process returns to step 608.
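The identify-then-verify loops of steps 602-603, 605-606 and 608-609 share one control structure, sketched below with toy stand-ins for the optimization step and the verification metric (both are illustrative, not the patent's).

```python
def identify_until_accurate(step_fn, verify_fn, x0, tol=1e-4, max_iter=100):
    """Repeat one optimization round until the verification error meets the
    accuracy requirement tol, or give up after max_iter rounds."""
    x = x0
    for _ in range(max_iter):
        x = step_fn(x)            # one round of nonlinear optimization
        if verify_fn(x) < tol:    # accuracy requirement met -> done
            return x
    raise RuntimeError("accuracy requirement not met within max_iter rounds")

# Toy stand-ins: each round halves the distance to the "true" parameter 2.0,
# and verification measures the remaining error on held-out data.
x_hat = identify_until_accurate(lambda x: x + 0.5 * (2.0 - x),
                                lambda x: abs(x - 2.0),
                                x0=0.0)
```

Verifying on data held out from identification (as steps 601, 604 and 607 collect) is what keeps the accuracy check honest rather than rewarding overfitting.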
In summary, the embodiment of the present application provides a specific construction method of a kinematic model of a robot, and geometric information is obtained through parameter identification of the kinematic model to construct a dynamic model.
Fig. 12 is a block diagram illustrating a structure of a device for identifying kinetic parameters of a robot according to an embodiment of the present disclosure. The device includes:
a determining module 1220, configured to determine geometric information of the robot according to a kinematic model of the robot;
a building module 1240 for building a kinetic model of the robot according to the geometric information;
and the identification module 1260 is used for performing parameter identification on the target parameters in the dynamic model by adopting a nonlinear optimization method to obtain identification values of the target parameters, wherein the target parameters comprise at least one of a mass parameter, a centroid position parameter and a rotational inertia parameter of the robot.
In an alternative embodiment, the identification module 1260 is used for collecting data information for parameter identification; according to the data information, carrying out nonlinear optimization processing on model parameters in the dynamic model to obtain identification values of the model parameters, wherein the model parameters comprise target parameters; and determining the identification value of the target parameter under the condition that the identification value of the model parameter meets the preset precision.
In an optional embodiment, the identification module 1260 is configured to substitute the data information into the dynamical model to obtain an updated dynamical model; and processing the updated dynamic model by adopting a nonlinear optimization algorithm to obtain an identification value of the model parameter.
In an alternative embodiment, the building module 1240 is configured to build a static model of the robot based on the geometric information; and constructing an inertia and friction model of the robot according to the static model.
In an alternative embodiment, the building module 1240 is configured to determine the mass parameter and the centroid position parameter according to the geometric information; generate a centroid parameter item of the robot according to the mass parameter and the centroid position parameter, the centroid parameter item being used to describe the balance state of each link in the robot; and construct the statics model according to the centroid parameter item and the centroid dynamics equation.
In an alternative embodiment, the building module 1240 is configured to build an inertia and friction model according to the mass parameter and the centroid position parameter in the static model and the kinetic equation of the robot.
In an optional embodiment, the building module 1240 is configured to determine an inverse dynamic equation of the robot according to the dynamic equation, where the inverse dynamic equation is used to describe the rotational inertia of each link in the robot in the center of mass system; determining a joint friction model of the robot and a mapping model corresponding to the motor current of the robot; and generating an inertia and friction model according to the mass parameter, the centroid position parameter, the inverse dynamics equation, the joint friction model and the mapping model.
In an alternative embodiment, the identification module 1260 is configured to perform parameter identification on the mass parameter and the centroid position parameter in the static model to obtain identification values of the mass parameter and the centroid position parameter; performing parameter identification on the inertia and the rotational inertia parameters in the friction model to obtain identification values of the rotational inertia parameters; or performing parameter identification on the mass parameter and the centroid position parameter in the static model to obtain identification values of the mass parameter and the centroid position parameter; and under the condition that the identification values of the mass parameter and the centroid position parameter meet the preset precision, performing parameter identification on the rotational inertia parameter in the inertia and friction model to obtain the identification value of the rotational inertia parameter.
In an alternative embodiment, the determining module 1220 is configured to determine a transformation matrix between adjacent links of the robot; constructing a kinematic model of the robot according to the transformation matrix; and performing parameter identification on the kinematic model to obtain geometric information.
In an alternative embodiment, the determining module 1220 is configured to construct the kinematic model according to the transformation matrix and the pose transformation equation of the robot.
In an alternative embodiment, the determining module 1220 is configured to determine captured values and measured values of a target joint of the robot; according to the capture value and the measured value, carrying out nonlinear optimization processing on a third model parameter in the kinematic model to obtain an identification value of the third model parameter, wherein the third model parameter comprises a parameter corresponding to the geometric information; and determining the geometric information under the condition that the identification value of the third model parameter meets the preset precision.
In an alternative embodiment, the determining module 1220 is configured to substitute the captured values and the measured values into the kinematic model to obtain an updated kinematic model; and processing the updated kinematic model by adopting a nonlinear optimization algorithm to obtain an identification value of a third model parameter.
It should be noted that: the apparatus provided in the foregoing embodiment is only illustrated by dividing the functional modules, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the apparatus provided in the foregoing embodiment has the same concept as the method embodiment in the foregoing, and specific implementation processes thereof are described in the method embodiment and are not described herein again.
Fig. 13 shows a block diagram of an electronic device 1300 according to an exemplary embodiment of the present application.
The electronic device 1300 may be a portable mobile terminal, such as: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The electronic device 1300 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so on. In the embodiment of the present application, the electronic device 1300 is implemented as the control device portion of a robot.
In general, the electronic device 1300 includes: a processor 1301 and a memory 1302.
Processor 1301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1301 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1302 may include one or more computer-readable storage media, which may be non-transitory. The memory 1302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1302 is used to store at least one instruction for execution by processor 1301 to implement the kinetic parameter identification of a robot, or a parameter identification method of a desired mapping model, provided by method embodiments herein.
In some embodiments, the electronic device 1300 may further optionally include: a peripheral interface 1303 and at least one peripheral. The processor 1301, memory 1302 and peripheral interface 1303 may be connected by buses or signal lines. Each peripheral device may be connected to the peripheral device interface 1303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, display screen 1305, camera assembly 1306, audio circuitry 1307, positioning assembly 1308, and power supply 1309.
Peripheral interface 1303 may be used to connect at least one peripheral associated with I/O (Input/Output) to processor 1301 and memory 1302. In some embodiments, processor 1301, memory 1302, and peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral device interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1304 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the rf circuit 1304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 1304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wi-Fi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1305 is a touch display screen, the display screen 1305 also has the ability to capture touch signals on or over the surface of the display screen 1305. The touch signal may be input to the processor 1301 as a control signal for processing. At this point, the display 1305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1305 may be one, disposed on a front panel of the electronic device 1300; in other embodiments, the display 1305 may be at least two, respectively disposed on different surfaces of the electronic device 1300 or in a folded design; in other embodiments, the display 1305 may be a flexible display disposed on a curved surface or on a folded surface of the electronic device 1300. Even further, the display 1305 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display 1305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1306 is used to capture images or video. Optionally, the camera assembly 1306 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, a VR (Virtual Reality) shooting function, or other fused shooting functions. In some embodiments, the camera assembly 1306 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 1307 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input the electrical signals to the processor 1301 for processing, or to the radio frequency circuit 1304 for voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each disposed at a different location of the electronic device 1300. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1301 or the radio frequency circuit 1304 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1307 may also include a headphone jack.
The positioning component 1308 is used to determine the current geographic location of the electronic device 1300 for navigation or LBS (Location Based Service). The positioning component 1308 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 1309 is used to supply power to the various components in the electronic device 1300. The power supply 1309 may use alternating current, direct current, disposable batteries, or rechargeable batteries. When the power supply 1309 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery, charged through a wired line, or a wireless rechargeable battery, charged through a wireless coil. The rechargeable battery may also support fast-charging technology.
In some embodiments, the electronic device 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyro sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
The acceleration sensor 1311 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the electronic device 1300. For example, the acceleration sensor 1311 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1301 may control the display screen 1305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1311. The acceleration sensor 1311 may also be used to collect motion data for games or users.
The gyro sensor 1312 may detect the body direction and rotation angle of the electronic device 1300, and may cooperate with the acceleration sensor 1311 to capture the user's 3D motion on the electronic device 1300. Based on the data collected by the gyro sensor 1312, the processor 1301 may implement the following functions: motion sensing (such as changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1313 may be disposed on a side frame of the electronic device 1300 and/or at a lower layer of the display screen 1305. When the pressure sensor 1313 is disposed on the side frame of the electronic device 1300, it can detect the user's holding signal on the electronic device 1300, and the processor 1301 performs left/right-hand recognition or a shortcut operation according to the holding signal collected by the pressure sensor 1313. When the pressure sensor 1313 is disposed at a lower layer of the display screen 1305, the processor 1301 controls an operable control on the UI according to the user's pressure operation on the display screen 1305. The operable control includes at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 1314 is used to collect the user's fingerprint, and the processor 1301 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 1314, or the fingerprint sensor 1314 itself identifies the user's identity according to the collected fingerprint. When the user's identity is identified as trusted, the processor 1301 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1314 may be disposed on the front, back, or side of the electronic device 1300. When a physical button or vendor logo is provided on the electronic device 1300, the fingerprint sensor 1314 may be integrated with the physical button or vendor logo.
The optical sensor 1315 is used to collect the ambient light intensity. In one embodiment, the processor 1301 may control the display brightness of the display screen 1305 according to the ambient light intensity collected by the optical sensor 1315: when the ambient light intensity is high, the display brightness of the display screen 1305 is increased; when the ambient light intensity is low, the display brightness of the display screen 1305 is decreased. In another embodiment, the processor 1301 may also dynamically adjust the shooting parameters of the camera assembly 1306 according to the ambient light intensity collected by the optical sensor 1315.
The proximity sensor 1316, also known as a distance sensor, is typically disposed on the front panel of the electronic device 1300. The proximity sensor 1316 is used to capture the distance between the user and the front face of the electronic device 1300. In one embodiment, when the proximity sensor 1316 detects that the distance between the user and the front face of the electronic device 1300 is gradually decreasing, the processor 1301 controls the display screen 1305 to switch from the screen-on state to the screen-off state; when the proximity sensor 1316 detects that the distance between the user and the front face of the electronic device 1300 is gradually increasing, the processor 1301 controls the display screen 1305 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in Fig. 13 does not limit the electronic device 1300, which may include more or fewer components than shown, combine some components, or use a different arrangement of components.
Embodiments of the present application also provide a computer device comprising a processor and a memory, the memory having stored therein at least one program code, the program code being loaded and executed by the processor to implement kinetic parameter identification of a robot as described above.
Embodiments of the present application also provide a computer-readable storage medium having at least one program code stored thereon, the program code being loaded and executed by a processor to implement kinetic parameter identification of a robot as described above.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them to cause the computer device to perform kinetic parameter identification of the robot as described above.
Optionally, the computer-readable storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a Resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM).
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be understood that reference to "a plurality" herein means two or more. "And/or" describes the association relationship of associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method for identifying kinetic parameters of a robot, the method comprising:
determining geometric information of the robot according to a kinematic model of the robot;
constructing a dynamic model of the robot according to the geometric information;
and performing parameter identification on target parameters in the dynamic model by using a nonlinear optimization method to obtain identification values of the target parameters, wherein the target parameters comprise at least one of a mass parameter, a centroid position parameter, and a rotational inertia parameter of the robot.
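The three claimed steps — kinematics-derived geometry, a dynamics model, and nonlinear optimization of its parameters — can be sketched on a single-link toy model. All function names, the model form, and the numbers below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def dynamics_torque(theta, q, qd, qdd, g=9.81):
    # Hypothetical 1-link dynamic model: tau = I*qdd + (m*c)*g*sin(q) + fv*qd
    I, mc, fv = theta
    return I * qdd + mc * g * np.sin(q) + fv * qd

def identify(theta0, q, qd, qdd, tau_meas, iters=20):
    """Gauss-Newton iteration on the torque residual (a stand-in for
    the claimed nonlinear optimization method)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        r = dynamics_torque(theta, q, qd, qdd) - tau_meas
        # numerical Jacobian of the residual w.r.t. the parameters
        J = np.empty((len(q), len(theta)))
        for j in range(len(theta)):
            dt = np.zeros_like(theta)
            dt[j] = 1e-6
            J[:, j] = (dynamics_torque(theta + dt, q, qd, qdd) - r - tau_meas) / 1e-6
        theta -= np.linalg.lstsq(J, r, rcond=None)[0]
    return theta

# synthetic "measured" data generated from known ground-truth parameters
rng = np.random.default_rng(0)
q = rng.uniform(-1.0, 1.0, 200)
qd = rng.uniform(-2.0, 2.0, 200)
qdd = rng.uniform(-5.0, 5.0, 200)
true_theta = np.array([0.12, 0.45, 0.03])  # [inertia, mass*centroid, viscous friction]
tau = dynamics_torque(true_theta, q, qd, qdd)
theta_hat = identify([1.0, 1.0, 1.0], q, qd, qdd, tau)
print(theta_hat)  # ≈ [0.12, 0.45, 0.03]
```

For a real robot the residual would stack torques across all joints, and the optimizer would typically be a library routine (e.g. a Levenberg–Marquardt solver) rather than this bare Gauss–Newton loop.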
2. The method of claim 1, wherein the performing parameter identification on the target parameters in the dynamic model by using the nonlinear optimization method to obtain the identification values of the target parameters comprises:
collecting data information for parameter identification;
according to the data information, carrying out nonlinear optimization processing on model parameters in the dynamic model to obtain identification values of the model parameters, wherein the model parameters comprise the target parameters;
and determining the identification values of the target parameters under the condition that the identification values of the model parameters meet a preset precision.
3. The method according to claim 2, wherein the performing nonlinear optimization processing on the model parameters in the dynamic model according to the data information to obtain the identification values of the model parameters comprises:
substituting the data information into the dynamic model to obtain an updated dynamic model;
and processing the updated dynamic model by using a nonlinear optimization algorithm to obtain the identification values of the model parameters.
4. The method of any one of claims 1 to 3, wherein the constructing a dynamic model of the robot according to the geometric information comprises:
constructing a statics model of the robot according to the geometric information;
and constructing an inertia and friction model of the robot according to the statics model.
5. The method of claim 4, wherein the constructing a statics model of the robot according to the geometric information comprises:
determining the mass parameter and the centroid position parameter according to the geometric information;
generating a centroid parameter term of the robot according to the mass parameter and the centroid position parameter, wherein the centroid parameter term is used for describing a balance state of each link in the robot;
and constructing the statics model according to the centroid parameter term and a centroid dynamics equation.
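A statics model of the kind claim 5 describes can be illustrated on a 2-link planar arm, where the gravity torque at each joint is built from the mass and centroid-position parameters. The specific arm and all parameter values are hypothetical, not taken from the patent:

```python
import numpy as np

def static_torques(q, m, c, l, g=9.81):
    """Static (gravity) joint torques of a 2-link planar arm.

    m[i]: link masses, c[i]: centroid distances along each link,
    l[0]: length of link 1 -- an illustrative statics model only.
    """
    q1, q2 = q
    # horizontal centroid positions of each link (the centroid parameter term)
    x1 = c[0] * np.cos(q1)
    x2 = l[0] * np.cos(q1) + c[1] * np.cos(q1 + q2)
    tau1 = g * (m[0] * x1 + m[1] * x2)        # joint 1 balances both links
    tau2 = g * m[1] * c[1] * np.cos(q1 + q2)  # joint 2 balances link 2 only
    return np.array([tau1, tau2])

# torques needed to hold the arm still when fully stretched out horizontally
tau = static_torques([0.0, 0.0], m=[1.0, 0.5], c=[0.2, 0.15], l=[0.4])
print(tau)
```

Because these static torques are built only from the mass and centroid-position parameters, holding the arm in several poses and measuring the joint torques is enough to identify those parameters before the inertia terms are considered.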
6. The method of claim 4, wherein constructing the inertia and friction model of the robot from the statics model comprises:
and constructing the inertia and friction model according to the mass parameter and the centroid position parameter in the statics model and a dynamics equation of the robot.
7. The method of claim 6, wherein the constructing the inertia and friction model according to the mass parameter and the centroid position parameter in the statics model and a dynamics equation of the robot comprises:
determining an inverse dynamics equation of the robot according to the dynamics equation, wherein the inverse dynamics equation is used for describing the rotational inertia of each link in the robot in a centroid frame;
determining a joint friction model of the robot and a mapping model corresponding to a motor current of the robot;
and generating the inertia and friction model according to the mass parameter, the centroid position parameter, the inverse dynamics equation, the joint friction model and the mapping model.
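One common concrete choice for the joint friction model and current-to-torque mapping mentioned in claim 7 is Coulomb-plus-viscous friction with a linear motor constant. The patent does not fix these forms, so the following is only an illustrative assumption:

```python
import numpy as np

def friction_torque(qd, fc, fv):
    # Coulomb + viscous joint friction model (one possible choice;
    # the patent does not specify a particular form)
    return fc * np.sign(qd) + fv * qd

def torque_from_current(i, kt):
    # Linear current-to-torque mapping with motor constant kt (illustrative)
    return kt * i

# torque balance at one joint: motor torque = load torque + friction torque
qd, i = 1.5, 2.0                 # joint velocity [rad/s], motor current [A]
kt, fc, fv = 0.8, 0.1, 0.05      # hypothetical motor and friction constants
tau_motor = torque_from_current(i, kt)               # 0.8 * 2.0 = 1.6 N*m
tau_load = tau_motor - friction_torque(qd, fc, fv)   # 1.6 - (0.1 + 0.075)
print(tau_load)
```

In an identification run, fc, fv, and kt would be appended to the inertial parameters and estimated jointly from measured currents and motions.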
8. The method according to any one of claims 4 to 7, wherein the performing parameter identification on the target parameters in the dynamic model by using the nonlinear optimization method to obtain the identification values of the target parameters comprises:
performing parameter identification on the mass parameter and the centroid position parameter in the statics model to obtain identification values of the mass parameter and the centroid position parameter, and performing parameter identification on the rotational inertia parameter in the inertia and friction model to obtain an identification value of the rotational inertia parameter;
or,
performing parameter identification on the mass parameter and the centroid position parameter in the statics model to obtain identification values of the mass parameter and the centroid position parameter, and, under the condition that the identification values of the mass parameter and the centroid position parameter meet a preset precision, performing parameter identification on the rotational inertia parameter in the inertia and friction model to obtain an identification value of the rotational inertia parameter.
9. The method of any one of claims 1 to 3, wherein the determining geometric information of the robot according to the kinematic model of the robot comprises:
determining a transformation matrix between adjacent links of the robot;
constructing a kinematic model of the robot according to the transformation matrix;
and performing parameter identification on the kinematic model to obtain the geometric information.
10. The method of claim 9, wherein said constructing a kinematic model of the robot from the transformation matrix comprises:
and constructing the kinematic model according to the transformation matrix and a pose transformation equation of the robot.
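A transformation matrix between adjacent links, and the chained pose equation built from it, can be sketched with the standard Denavit–Hartenberg convention. This is one common choice; the patent does not specify a particular convention or parameterization:

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform between adjacent links from standard DH
    parameters (link length a, twist alpha, offset d, joint angle theta)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# chained pose equation: base -> end effector of a 2-joint planar arm,
# first joint at 90 degrees, second joint straight (link lengths 0.4 and 0.3)
T = dh_transform(0.4, 0.0, 0.0, np.pi / 2) @ dh_transform(0.3, 0.0, 0.0, 0.0)
print(T[:2, 3])  # end-effector x, y position
```

The per-joint matrices multiply left to right along the chain, so the product expresses the end-effector pose in the base frame, which is the quantity compared against measurements during kinematic identification.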
11. The method of claim 9, wherein the performing parameter identification on the kinematic model to obtain the geometric information comprises:
determining a capture value and a measured value of a target joint point of the robot;
according to the capture value and the measured value, carrying out nonlinear optimization processing on a third model parameter in the kinematic model to obtain an identification value of the third model parameter, wherein the third model parameter comprises a parameter corresponding to the geometric information;
and determining the geometric information under the condition that the identification value of the third model parameter meets the preset precision.
12. The method of claim 11, wherein the performing nonlinear optimization processing on the third model parameter in the kinematic model according to the capture value and the measured value to obtain the identification value of the third model parameter comprises:
substituting the capture value and the measured value into the kinematic model to obtain an updated kinematic model;
and processing the updated kinematic model by using a nonlinear optimization algorithm to obtain the identification value of the third model parameter.
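Claims 11–12 compare capture values (e.g. from a motion-capture system) against the kinematic model's predictions and optimize the geometric parameters until they agree. A minimal stand-in, identifying a single hypothetical link length by brute-force search instead of a full nonlinear optimizer:

```python
import numpy as np

def fk_tip(length, q):
    # Forward-kinematics prediction of a 1-link tip position (illustrative)
    return np.stack([length * np.cos(q), length * np.sin(q)], axis=-1)

# "measured" joint angles and "captured" tip positions from a hypothetical trial
q = np.linspace(0.0, np.pi / 2, 50)
true_length = 0.37
captured = fk_tip(true_length, q)

# nonlinear optimization of the geometric parameter, here reduced to a
# simple grid search over the link length (a stand-in for a real optimizer)
grid = np.linspace(0.1, 1.0, 901)
errs = [np.sum((fk_tip(L, q) - captured) ** 2) for L in grid]
L_hat = grid[int(np.argmin(errs))]
print(L_hat)  # ≈ 0.37
```

With several links the search space grows, and an iterative solver over all geometric parameters (offsets, lengths, twists) replaces the grid, but the residual — captured position minus predicted position — stays the same.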
13. A kinetic parameter identification device for a robot, the device comprising:
the determining module is used for determining the geometric information of the robot according to the kinematic model of the robot;
the building module is used for building a dynamic model of the robot according to the geometric information;
and the identification module is used for carrying out parameter identification on target parameters in the dynamic model by adopting a nonlinear optimization method so as to obtain identification values of the target parameters, wherein the target parameters comprise at least one of mass parameters, centroid position parameters and rotational inertia parameters of the robot.
14. A computer device, characterized in that it comprises a processor and a memory, in which at least one program code is stored, which is loaded and executed by the processor to implement the method for kinetic parameter identification of a robot according to any of claims 1 to 12.
15. A computer-readable storage medium, wherein at least one program code is stored in the computer-readable storage medium, and the program code is loaded and executed by a processor to implement the method for kinetic parameter identification of a robot according to any of claims 1 to 12.
CN202110605017.XA 2021-05-31 2021-05-31 Method, device, equipment and medium for identifying kinetic parameters of robot Pending CN115480483A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110605017.XA CN115480483A (en) 2021-05-31 2021-05-31 Method, device, equipment and medium for identifying kinetic parameters of robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110605017.XA CN115480483A (en) 2021-05-31 2021-05-31 Method, device, equipment and medium for identifying kinetic parameters of robot

Publications (1)

Publication Number Publication Date
CN115480483A true CN115480483A (en) 2022-12-16

Family

ID=84419957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110605017.XA Pending CN115480483A (en) 2021-05-31 2021-05-31 Method, device, equipment and medium for identifying kinetic parameters of robot

Country Status (1)

Country Link
CN (1) CN115480483A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117331311A (en) * 2023-09-21 2024-01-02 中山大学 Robot dynamics parameter estimation method based on acceleration-free recursive filtering regression


Similar Documents

Publication Publication Date Title
US11275931B2 (en) Human pose prediction method and apparatus, device, and storage medium
CN110967011B (en) Positioning method, device, equipment and storage medium
CN108245893B (en) Method, device and medium for determining posture of virtual object in three-dimensional virtual environment
CN110986930B (en) Equipment positioning method and device, electronic equipment and storage medium
US20160077166A1 (en) Systems and methods for orientation prediction
CN109712224A (en) Rendering method, device and the smart machine of virtual scene
CN109166150B (en) Pose acquisition method and device storage medium
CN113752250A (en) Method and device for controlling robot joint, robot and storage medium
CN111256676B (en) Mobile robot positioning method, device and computer readable storage medium
CN110920631B (en) Method and device for controlling vehicle, electronic equipment and readable storage medium
KR20140129285A (en) Orientation sensing computing devices
CN112150560A (en) Method and device for determining vanishing point and computer storage medium
CN108844529A (en) Determine the method, apparatus and smart machine of posture
CN111928861B (en) Map construction method and device
CN115480483A (en) Method, device, equipment and medium for identifying kinetic parameters of robot
CN108196701A (en) Determine the method, apparatus of posture and VR equipment
CN112527104A (en) Method, device and equipment for determining parameters and storage medium
CN112947474A (en) Method and device for adjusting transverse control parameters of automatic driving vehicle
WO2023130824A1 (en) Motion control method for under-actuated system robot, and under-actuated system robot
CN114764241A (en) Motion state control method, device and equipment and readable storage medium
CN114372395A (en) CAE (computer aided engineering) automatic modeling method, system, terminal and storage medium for kinematic pairs
CN115480594A (en) Jump control method, apparatus, device, and medium
CN114791729A (en) Wheeled robot control method, device, equipment and readable storage medium
CN111179628B (en) Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN114092655A (en) Map construction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination