CN113352328B - Method for identifying hinge model and robot operation method - Google Patents

Method for identifying hinge model and robot operation method

Info

Publication number
CN113352328B
CN113352328B · CN202110717749.8A
Authority
CN
China
Prior art keywords
model
pose
robot
rigid components
representing
Prior art date
Legal status: Active
Application number
CN202110717749.8A
Other languages
Chinese (zh)
Other versions
CN113352328A (en)
Inventor
程敏
张硕
桂凯
Current Assignee
Shenzhen Yijiahe Technology R & D Co ltd
Original Assignee
Shenzhen Yijiahe Technology R & D Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Yijiahe Technology R & D Co ltd filed Critical Shenzhen Yijiahe Technology R & D Co ltd
Priority to CN202110717749.8A priority Critical patent/CN113352328B/en
Publication of CN113352328A publication Critical patent/CN113352328A/en
Application granted granted Critical
Publication of CN113352328B publication Critical patent/CN113352328B/en

Classifications

    • B25J9/1669: Programme controls characterised by programming, planning systems for manipulators characterised by special application, e.g. multi-arm co-operation, assembly, grasping
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/1664: Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention discloses a method for identifying an articulation model and a robot operation method, comprising the following steps: (1) the end effector grasps and pulls the handle of the articulated object, and the pose of the end effector is recorded at each moment; (2) the pose relationship between the two rigid components at each moment is computed from these poses and used as a training sample; (3) candidate articulation models of the articulated object are set, and for a given model the probability of the pose relationship between the two rigid components occurring at a given moment is derived from the transformation degree of freedom between them; (4) a likelihood function of the pose relationships of the two rigid components is computed for each candidate model, and the model that maximizes the likelihood function is selected as the final articulation model. By actively operating the articulated object, the robot automatically generates the motion model of the object and completes parameter identification, which greatly simplifies robot operation on objects with unknown articulation models and widens the applicability of robots in complex environments.

Description

Method for identifying hinge model and robot operation method
Technical Field
The invention relates to the field of robots, and in particular to a method for identifying an articulation model and a robot operation method.
Background
The home environment is a key application area for service robots. To accomplish a given task, robots often need to manipulate various types of objects. Many of these objects, such as refrigerators, doors and drawers, are not single rigid bodies: they have movable parts and can be regarded as multiple rigid bodies connected in a certain way and able to move relative to one another. Such an object is called an articulated object, and how to operate articulated objects is a major challenge for robots.
In the prior art, the most common approach is to move the robot chassis to a specified position and acquire the motion trajectory for operating the articulated object by drag teaching. When the robot then needs to execute the corresponding operation, it only has to reproduce the taught trajectory.
An improved approach adds impedance control while the robot performs the operation, so that pure position control does not cause excessive stress between the robot and the articulated object.
Another approach is to input the parameters of the articulation model directly into the robot; on the basis of this known model, the robot can calculate the pose its end effector should take at each stage of the operation, which can likewise be used to manipulate objects.
Current implementations of articulated-object operation have a number of disadvantages. First, in the teaching-reproduction method, various errors can make the contact stress between the robot and the articulated object very large, which greatly reduces the service life of the robot or the object and may even cause damage.
The second, improved approach reduces the contact stress by impedance control, but it shares with the first approach the problem of poor applicability: corresponding teaching work is required for every articulated object to be operated.
The third approach solves the problems of the first two to a certain extent, but the autonomy of the robot is still poor: for every new articulated object, the user must manually input its model parameters, which is not acceptable for an end user.
Disclosure of Invention
Purpose of the invention: aiming at the above defects, the invention provides a method for identifying an articulation model and its application to robot operation, which greatly simplifies robot operation on objects with unknown articulation models and widens the applicability of robots in complex environments.
The technical scheme is as follows:
a method for identifying an articulated model comprises the following steps:
(1) The robot controls the end effector to grab and pull the handle of the hinged object, and the pose of the end effector at each moment is recorded and obtained in the process; wherein the hinged object is formed by connecting two rigid components;
(2) Calculating the pose of the end effector at each moment recorded in the step (1) to obtain the pose relationship between two rigid components at each moment as a training sample;
(3) Setting a hinge model of a hinge object, and obtaining the probability of appearance of the pose relationship between two rigid components of a certain hinge model at a certain moment according to the transformation freedom degree between the two rigid components, so as to obtain the expected transformation relationship between the two rigid components of the certain hinge model;
(4) And (4) calculating to obtain a likelihood function of the pose relationship of the two rigid components of the certain hinge model according to the expected transformation relationship between the two rigid components of the certain hinge model obtained in the step (3), obtaining the maximum value of the likelihood function through a maximum likelihood estimation algorithm, and selecting the hinge model which can enable the likelihood function to obtain the maximum value as a final hinge model.
In step (1), the pose of the end effector at each moment is obtained from the robot joint encoders and forward kinematics; a minimal sketch is given below.
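For illustration, the end-effector pose can be obtained by chaining per-joint homogeneous transforms computed from the encoder readings. This is a minimal sketch under assumed conventions (a simple all-revolute chain with per-joint axes and link offsets; the names `forward_kinematics`, `joint_axes` and `link_offsets` are hypothetical), not the robot's actual kinematic description:

```python
import numpy as np

def rot_about(axis, angle):
    """3x3 rotation about a unit axis, via the Rodrigues formula."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def forward_kinematics(joint_angles, joint_axes, link_offsets):
    """End-effector pose y_t in SE(3) from encoder readings (step (1)).

    joint_angles: encoder readings q_1..q_m
    joint_axes:   unit rotation axis of each revolute joint
    link_offsets: translation from each joint frame to the next
    """
    T = np.eye(4)
    for q, axis, offset in zip(joint_angles, joint_axes, link_offsets):
        T_joint = np.eye(4)
        T_joint[:3, :3] = rot_about(np.asarray(axis, dtype=float), q)
        T_joint[:3, 3] = offset
        T = T @ T_joint
    return T  # 4x4 homogeneous matrix: the gripper pose at this moment
```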
In step (3), the candidate articulation models comprise a translational (prismatic) joint model and a rotational (revolute) joint model.
For the prismatic joint model, the handle of the articulated object has a translational degree of freedom along a fixed axis, and the change of the end-effector pose at each moment relative to the initial moment, $\Delta_t$, is solved for an implicit variable $a_t$. Let $e_1$ be a unit vector along the translation axis; for a prismatic joint the implicit variable is described as

$a_t = e_1^\top \cdot \mathrm{trans}(\Delta_t)$,

where $\mathrm{trans}(\cdot)$ removes the rotational part of the transformation matrix. The articulation model of the prismatic joint is therefore described as

$\hat{\Delta}_t = \mathrm{transform}(a_t\, e_1)$.

For the revolute joint model, the handle of the articulated object has a rotational degree of freedom about a fixed axis, and the change of the end-effector pose at each moment relative to the initial moment, $\Delta_t$, is solved for an implicit variable $a_t$. Let $e_2$ be a unit vector along the rotation axis; for a revolute joint the implicit variable can be described as

$a_t = e_2^\top \cdot \mathrm{angle}(\Delta_t)$,

where $\mathrm{angle}(\cdot)$ removes the translational part of the transformation matrix. Combined with standard geometric transformations, the parameters of a revolute joint include the rotation axis $n$, the rotation center $c$ and the transformation $r$ that aligns to the handle after the rotation, so the articulation model of the revolute joint is described as

$\hat{\Delta}_t = \mathrm{transform}(c)\, \mathrm{trot}_n(a_t)\, \mathrm{transform}(c)^{-1}\, r$,

where $\mathrm{transform}(c)$ denotes the homogeneous transformation matrix obtained by translating by the vector $c$ in a coordinate system, and $\mathrm{trot}_n(a)$ is the homogeneous transformation matrix obtained by rotating by an angle $a$ about the axis $n$, computed with the Rodrigues formula.
Step (3) is specifically as follows:
Let $M_{ij}$ denote the articulation model between rigid components $i$ and $j$. Given the model, the probability of the noisy pose relationship between the two rigid components occurring at a given moment is

$p(\Delta_{ij}^t \mid M_{ij}) = \int p(\Delta_{ij}^t \mid M_{ij}, a)\, p(a \mid M_{ij})\, \mathrm{d}a$,

where $a$ is the implicit variable of the articulation model $M_{ij}$, representing the transformation degree of freedom between the two rigid components; $p(\Delta_{ij}^t \mid M_{ij}, a)$ is the probability density of the measured value $\Delta_{ij}^t$ given the known articulation model and implicit variable, computed from the given known articulation model and the transformation degree of freedom between the two rigid components; and $p(a \mid M_{ij})$ is the probability density of the implicit variable given the known articulation model.
Assuming the implicit variable obeys a uniform distribution during sampling, the above equation reduces to

$p(\Delta_{ij}^t \mid M_{ij}) \propto \int p(\Delta_{ij}^t \mid M_{ij}, a)\, \mathrm{d}a$.

After a particular articulation model is selected, the expected transformation relationship between the two rigid components is

$\hat{\Delta}_{ij}^t = f_{M_{ij}}(a_t)$,

where $\hat{\Delta}_{ij}^t$ denotes the expected transformation relationship between rigid components $i$ and $j$ under the selected articulation model $M_{ij}$ and $f_{M_{ij}}$ is the corresponding forward model defined above.
If the noise in the end-effector pose at each moment obtained in step (1) follows a Gaussian distribution, then

$p(\Delta_{ij}^t \mid M_{ij}, a) \propto \exp\!\left( -\dfrac{\lVert \Delta_{ij}^t \ominus \hat{\Delta}_{ij}^t \rVert^2}{2\sigma^2} \right)$,

where $\Delta_{ij}^t \ominus \hat{\Delta}_{ij}^t$ denotes the difference between the measured and expected values of the articulation model and $\sigma^2$ is the variance of the Gaussian distribution.
A robot operation method applying the above method for identifying an articulation model comprises the following steps:
(1) The robot moves to the target position according to the task and grasps the handle of the articulated object;
(2) The next control point of the robot is computed from the identified articulation model, giving the next expected pose of the robot end effector; the robot moves to the corresponding pose and operates the articulated object under an impedance control strategy.
The impedance control strategy in step (2) is specifically as follows:
(41) The external force $F$ currently applied to the robot end is obtained by current-loop estimation or six-axis force measurement;
(42) The acceleration of the pose correction is calculated from the current pose of the robot end and the impedance model

$F = M\, \Delta\ddot{X} + B\, \Delta\dot{X} + K\, \Delta X$,

where $\Delta\ddot{X}$ denotes the acceleration of the pose correction, $\Delta\dot{X}$ denotes the velocity of the pose correction, $\Delta X$ denotes the pose correction, $M$ denotes the mass term of the impedance model, $B$ denotes the damping term of the impedance model, and $K$ denotes the stiffness term of the impedance model;
(43) The pose correction is obtained by integrating the acceleration from step (42) twice and is fused with the next expected pose from step (2) to obtain the target pose of the robot end effector;
(44) According to the target pose of the robot end effector obtained in step (43), the inverse kinematics solution is computed and the robot is driven to the target pose to perform the operation.
Beneficial effects: by actively operating the articulated object, the robot automatically generates the motion model of the object and completes parameter identification, which greatly simplifies robot operation on objects with unknown articulation models and widens the applicability of robots in complex environments. The invention minimizes the need for human intervention while the robot builds its cognition of the environment.
Drawings
FIG. 1 is a flow chart of articulation model identification and operation.
FIG. 2 is a flow chart of the robot operating an articulated object.
FIG. 3 is a flow chart of robot impedance control.
Detailed Description
The invention is further elucidated with reference to the drawings and the embodiments.
The present invention is directed to a service robot comprising a mobile chassis and a multi-axis robot arm. Since articulated objects in a home environment generally have an operating handle, the end effector of the robot (i.e., the gripper) is assumed to grasp the handle of the articulated object by machine vision or similar means. FIG. 1 shows the flow of articulation model identification and operation. As shown in FIG. 1, the method for identifying an articulation model of the present invention comprises the following steps:
(1) The gripper of the robot slowly pulls the handle of the articulated object with a certain force, and from the start to the end of the motion the robot records the gripper pose at each moment, $y_1, \dots, y_T$, as sampled data, where $y_t \in SE(3)$ denotes the gripper pose at time $t$, $t = 1, \dots, T$, and $SE(3)$ denotes the special Euclidean group. The gripper pose at each moment can be computed from the robot joint encoders and forward kinematics;
(2) The articulation model $M$ is re-estimated from the gripper poses recorded in step (1). Taking the translational and rotational joint models as the current candidate set of articulation models, $M \in \{\mathrm{prismatic}, \mathrm{rotational}\}$, where prismatic denotes the translational joint model and rotational denotes the rotational (revolute) joint model;
For the prismatic joint model, the handle of the articulated object has a translational degree of freedom along a fixed axis, so the change of every sample relative to the first sample $y_1$, $\Delta_t = (\ominus y_1) \oplus y_t$, can be solved for an implicit variable $a_t$. Let $e$ denote the unit vector along the translation axis; for a prismatic joint, its implicit variable can be described as

$a_t = e^\top \cdot \mathrm{trans}(\Delta_t)$,

where $\mathrm{trans}(\cdot)$ removes the rotational part of the transformation matrix. The articulation model of the prismatic joint can thus be described as

$\hat{\Delta}_t = \mathrm{transform}(a_t\, e)$.

For the revolute joint model, since the handle of the articulated object has a rotational degree of freedom about a fixed axis, the change of every sample relative to the first sample $y_1$ can likewise be solved for an implicit variable $a_t$. Let $e$ denote the unit vector along the rotation axis; for a revolute joint, its implicit variable can be described as

$a_t = e^\top \cdot \mathrm{angle}(\Delta_t)$,

where $\mathrm{angle}(\cdot)$ removes the translational part of the transformation matrix. Combined with standard geometric transformations, the parameters of a revolute joint include the rotation axis $n \in \mathbb{R}^3$, the rotation center $c$ and the rigid-body transformation $r \in \mathbb{R}^6$ to be applied after the rotation (i.e., from the rotation center to the handle), so the articulation model of the revolute joint can be described as

$\hat{\Delta}_t = \mathrm{transform}(c)\, \mathrm{trot}_n(a_t)\, \mathrm{transform}(c)^{-1}\, r$,

where $\mathrm{transform}(c)$ denotes the homogeneous transformation matrix obtained by translating by the vector $c$ in a coordinate system, and $\mathrm{trot}_n(a)$ is the homogeneous transformation matrix obtained by rotating by an angle $a$ about the $n$ axis, which can be computed with the Rodrigues formula. A minimal code sketch of these two forward models follows.
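As an illustration, the two candidate forward models and the Rodrigues formula can be written compactly with 4x4 homogeneous transforms. The sketch below is a minimal interpretation of the formulas above, not the patented implementation; the helper names (`rodrigues`, `prismatic_model`, `rotational_model`) and the representation of $r$ as a 4x4 matrix are assumptions.

```python
import numpy as np

def rodrigues(n, a):
    """Rotation matrix for angle a about unit axis n (Rodrigues formula)."""
    n = n / np.linalg.norm(n)
    K = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)

def transform(v):
    """Homogeneous matrix translating by vector v (the text's transform(c))."""
    T = np.eye(4)
    T[:3, 3] = v
    return T

def trot(n, a):
    """Homogeneous matrix rotating by angle a about axis n (the text's trot_n(a))."""
    T = np.eye(4)
    T[:3, :3] = rodrigues(np.asarray(n, dtype=float), a)
    return T

def prismatic_model(a, e):
    """Expected relative transform: pure translation by a along unit axis e."""
    return transform(a * np.asarray(e, dtype=float))

def rotational_model(a, n, c, r):
    """Expected relative transform: rotate by a about axis n through center c,
    then apply the fixed rotation-center-to-handle offset r (a 4x4 matrix here)."""
    return transform(c) @ trot(n, a) @ np.linalg.inv(transform(c)) @ r
```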
the specific articulation model was estimated as follows:
(21) The articulated object is formed by two rigid components connected through a particular articulation structure, where the pose of a rigid component $i$ can be expressed as $x_i^t \in SE(3)$, with $i = 1, \dots, n$ indexing the $n$ rigid components of the articulation model (here $n = 2$) and $t = 1, \dots, T$ indexing the moments.
The pose relationship between any two rigid components $i$ and $j$ in the articulation model can be described as the difference between the two poses,

$\Delta_{ij}^t = (\ominus x_i^t) \oplus x_j^t$,

where $\oplus$ and $\ominus$ respectively denote the superposition (composition) of poses and its inverse operation;
(22) If two rigid components are not rigidly connected, an implicit variable $a_{ij} \in \mathbb{R}^d$ can be used to describe the $d$ transformation degrees of freedom between them, i.e., the amount of transformation between the two rigid components along the $d$ degrees of freedom. Owing to the lack of prior information on the articulation model, two candidate models are provided for the connection between the two rigid components, namely the revolute joint $M_{\mathrm{rotational}}$ and the prismatic joint $M_{\mathrm{prismatic}}$; the implicit variables of the two articulation models are the translation amount of the prismatic joint and the rotation angle of the revolute joint, respectively.
From the gripper poses recorded in step (1), the noisy pose relationship between the two rigid components at each moment, $D_{ij} = \{\Delta_{ij}^1, \dots, \Delta_{ij}^T\}$, can be computed, where $\Delta_{ij}^t$ denotes the noisy pose relationship between the two rigid components at time $t$. From these noisy pose-relationship data (hereinafter the training sample $D_{ij}$), the articulation model can be estimated; a minimal sketch of this computation follows.
(23) Let $M_{ij}$ denote the articulation model between rigid components $i$ and $j$. Given the model, the probability of the noisy pose relationship between the two rigid components occurring at a given moment can be described by equation (1):

$p(\Delta_{ij}^t \mid M_{ij}) = \int p(\Delta_{ij}^t \mid M_{ij}, a)\, p(a \mid M_{ij})\, \mathrm{d}a \qquad (1)$

where $a$ is the implicit variable of the articulation model $M_{ij}$, representing the transformation degree of freedom between the two rigid components; $p(\Delta_{ij}^t \mid M_{ij}, a)$ is the probability density of the measured value $\Delta_{ij}^t$ given the known articulation model and implicit variable, computed from the given known articulation model and the transformation degree of freedom between the two rigid components; and $p(a \mid M_{ij})$ is the probability density of the implicit variable given the known articulation model.
Assuming the implicit variable obeys a uniform distribution during sampling, the above equation can be simplified to equation (2):

$p(\Delta_{ij}^t \mid M_{ij}) \propto \int p(\Delta_{ij}^t \mid M_{ij}, a)\, \mathrm{d}a \qquad (2)$

After a particular articulation model is selected, the expected transformation relationship between the two rigid components is

$\hat{\Delta}_{ij}^t = f_{M_{ij}}(a_t)$,

where $\hat{\Delta}_{ij}^t$ denotes the expected transformation relationship between rigid components $i$ and $j$ under the selected articulation model $M_{ij}$ and $f_{M_{ij}}$ is the corresponding forward model defined above.
If the noise in the gripper pose at each moment obtained in step (1) follows a Gaussian distribution, then

$p(\Delta_{ij}^t \mid M_{ij}, a) \propto \exp\!\left( -\dfrac{\lVert \Delta_{ij}^t \ominus \hat{\Delta}_{ij}^t \rVert^2}{2\sigma^2} \right)$,

where $\Delta_{ij}^t \ominus \hat{\Delta}_{ij}^t$ denotes the difference between the measured and expected values of the articulation model and $\sigma^2$ denotes the variance of the Gaussian distribution;
(24) To avoid continuous integration, formula (2) is processed with the Monte Carlo integration method: values of the implicit variable are sampled at random, which converts the integral in formula (2) into a sum,

$p(\Delta_{ij}^t \mid M_{ij}) \approx \frac{1}{K} \sum_{k=1}^{K} p(\Delta_{ij}^t \mid M_{ij}, a_k)$,

with $K$ implicit-variable values $a_k$ drawn uniformly. The following likelihood function can thus be written over all training samples:

$L(M_{ij}) = \prod_{t=1}^{T} p(\Delta_{ij}^t \mid M_{ij})$;
(25) Different articulation models are selected in turn, the maximum of the likelihood function is obtained by a maximum-likelihood estimation (MLE) algorithm, and the articulation model that maximizes the likelihood function is selected as the result of model identification, giving the final articulation model. A minimal sketch of this selection follows.
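For illustration, the selection in steps (24) and (25) can be sketched as follows: each sample's marginal likelihood under formula (2) is approximated by a Monte Carlo average over uniformly drawn implicit-variable values, the log-likelihoods are summed over the training sample, and the candidate with the largest value is kept. This is a sketch under assumptions, not the patented implementation; `model.forward(a)` (the model's expected transform for implicit variable a) and `pose_distance` (a scalar pose difference) are hypothetical stand-ins.

```python
import numpy as np

def pose_distance(T_meas, T_exp):
    """Scalar difference between measured and expected transforms:
    translation error norm plus absolute rotation-angle error."""
    T_err = np.linalg.inv(T_exp) @ T_meas
    cos_a = np.clip((np.trace(T_err[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return np.linalg.norm(T_err[:3, 3]) + abs(np.arccos(cos_a))

def log_likelihood(model, D, sigma=0.01, K=200, a_range=(-1.0, 1.0)):
    """Monte Carlo approximation of sum_t log p(Delta_t | M), per formula (2).

    model: candidate with a forward(a) method returning the expected 4x4 transform
    D:     training sample, a list of measured 4x4 relative transforms
    """
    a_samples = np.random.uniform(a_range[0], a_range[1], size=K)
    total = 0.0
    for delta in D:
        # Gaussian observation model p(Delta | M, a_k) on the pose difference
        p_k = [np.exp(-pose_distance(delta, model.forward(a)) ** 2 / (2.0 * sigma ** 2))
               for a in a_samples]
        total += np.log(np.mean(p_k) + 1e-300)  # Monte Carlo sum, guarded against log(0)
    return total

# Model selection (step (25)): keep the candidate maximizing the likelihood, e.g.
# best = max(candidates, key=lambda m: log_likelihood(m, D))
```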
In the invention, once identification is complete, the articulation model can be stored as a configuration file and shared among robots. When a robot is to operate a particular articulated object, the corresponding model can be dynamically loaded into memory, after which the operation of the articulated object is completed according to the corresponding task. A possible configuration-file layout is sketched below; the operation flow is shown in FIG. 2 and is specifically as follows:
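As an illustration only, an identified revolute model could be serialized like this; the field names, values and file layout are assumptions, not specified by the patent:

```python
import json

# Hypothetical parameters of an identified revolute joint for "door 1":
# rotation axis n, rotation center c, handle offset r, and the learned noise level.
door1_model = {
    "object": "door_1",
    "model": "rotational",
    "axis_n": [0.0, 0.0, 1.0],
    "center_c": [0.35, -0.02, 0.0],
    "handle_offset_r": [0.80, 0.0, 1.05, 0.0, 0.0, 0.0],
    "sigma": 0.008,
}

with open("door_1_articulation.json", "w") as f:
    json.dump(door1_model, f, indent=2)  # shareable among robots
```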
(1) Task analysis: the robot derives a specific operation task, for example opening door No. 1 to 70 degrees;
(2) The robot navigates to the target position and grasps the handle of the articulated object by machine vision or similar means;
(3) The robot loads the articulation model of door No. 1 and judges whether door No. 1 has been opened to 70 degrees, i.e., whether the target has been reached; if so, the robot ends the operation; if not, the next control point of the robot, $x \in SE(3)$, is computed from the articulation model, giving the next expected pose of the robot end effector;
(4) The robot performs the operation;
in order to avoid excessive contact stress caused by the estimation error of the hinge model, the robot adopts an impedance control strategy; the impedance control system block diagram is shown in fig. 3, wherein S represents integration; the method comprises the following specific steps:
(41) The impedance model is initialized with the initial pose of the robot (i.e., the current pose of the robot), and the external force $F$ currently applied to the robot end is obtained by external-force estimation (current-loop estimation or six-axis force measurement can be used);
(42) The acceleration of the pose correction is calculated from the impedance model

$F = M\, \Delta\ddot{X} + B\, \Delta\dot{X} + K\, \Delta X$,

where $\Delta\ddot{X}$ denotes the acceleration of the pose correction, $\Delta\dot{X}$ denotes the velocity of the pose correction, $\Delta X$ denotes the pose correction, $M$ denotes the mass term of the impedance model, $B$ denotes the damping term of the impedance model, and $K$ denotes the stiffness term of the impedance model;
(43) The pose correction is obtained by integrating the acceleration from step (42) twice, and is fused with the next expected pose generated from the articulation model in step (3) to obtain the target pose of the robot end effector;
(44) According to the target pose of the robot end effector obtained in step (43), the inverse kinematics solution is computed and the robot is driven to move. Because the robot is in an impedance motion mode, the robot can follow the motion trajectory of the handle even if the articulation model contains some error, and the gripper pose is updated accordingly;
(45) The operation is executed under the robot's control commands. A minimal sketch of one impedance control cycle covering steps (41) to (44) follows.
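To make steps (41) to (44) concrete, one impedance control cycle can be sketched as below. This is illustrative only, with the 6-DOF pose correction treated as a plain vector and assumed parameter values; `measure_wrench`, `desired_pose_from_model`, `solve_ik` and `send_joint_command` are hypothetical stand-ins for the force estimate, the model-based expected pose, the inverse-kinematics solver and the joint interface.

```python
import numpy as np

# Diagonal impedance parameters for a 6-DOF pose correction (assumed values)
M = np.diag([2.0] * 6)    # mass term
B = np.diag([40.0] * 6)   # damping term
K = np.diag([300.0] * 6)  # stiffness term

dX = np.zeros(6)          # pose correction Delta X
dX_dot = np.zeros(6)      # its velocity
dt = 0.004                # control period in seconds

def impedance_step(F_ext, x_desired):
    """One cycle of F = M*ddX + B*dX_dot + K*dX, integrated twice (steps 42-43)."""
    global dX, dX_dot
    dX_ddot = np.linalg.solve(M, F_ext - B @ dX_dot - K @ dX)
    dX_dot = dX_dot + dX_ddot * dt   # first integration
    dX = dX + dX_dot * dt            # second integration
    return x_desired + dX            # fuse correction with the expected pose

# Control loop (steps 41 and 44):
# while not target_reached:
#     F_ext = measure_wrench()              # step (41)
#     x_des = desired_pose_from_model()     # next expected pose from the model
#     q = solve_ik(impedance_step(F_ext, x_des))
#     send_joint_command(q)
```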
The invention minimizes the need for human intervention while the robot builds its cognition of the environment. Basic environmental information can be obtained by mapping, grasping the handle of an articulated object can be achieved with image recognition and visual servoing, and the scheme proposed in this patent automatically accomplishes model learning and operation of articulated objects, greatly improving the autonomy of robots in the home environment.
Although the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the details of the foregoing embodiments. Various equivalent changes (for example in number, shape or position) may be made within the technical spirit of the present invention, and all such equivalent changes fall within the protection scope of the present invention.

Claims (5)

1. A method for identifying an articulation model, characterized by comprising the following steps:
(1) the robot controls the end effector to grasp and pull the handle of the articulated object, and the pose of the end effector at each moment is recorded during this process, the articulated object being formed by two rigid components connected together;
(2) the pose relationship between the two rigid components at each moment is computed from the end-effector poses recorded in step (1) and used as a training sample;
(3) articulation models of the articulated object are set, and for a given model the probability of the pose relationship between the two rigid components occurring at a given moment is derived from the transformation degree of freedom between them, yielding the expected transformation relationship between the two rigid components under that model;
the articulation models comprise a translational (prismatic) joint model and a rotational (revolute) joint model;
for the prismatic joint model, the handle of the articulated object has a translational degree of freedom along a fixed axis, and the change of the end-effector pose at each moment relative to the initial moment is solved for an implicit variable $a_t$; let $e_1$ be a unit vector along the translation axis; for a prismatic joint the implicit variable is described as

$a_t = e_1^\top \cdot \mathrm{trans}\big( (\ominus \Delta_{ij}^1) \oplus \Delta_{ij}^t \big)$,

wherein $\Delta_{ij}^1$ and $\Delta_{ij}^t$ respectively represent the noisy pose relationship between the two rigid components at the first sampling moment and at moment $t$, and $\mathrm{trans}(\cdot)$ removes the rotational part of the transformation matrix; the articulation model of the prismatic joint is therefore described as

$\hat{\Delta}_t = \mathrm{transform}(a_t\, e_1)$;

for the revolute joint model, the handle of the articulated object has a rotational degree of freedom about a fixed axis, and the change of the end-effector pose at each moment relative to the initial moment is solved for an implicit variable $a_t$; let $e_2$ be a unit vector along the rotation axis; for a revolute joint the implicit variable can be described as

$a_t = e_2^\top \cdot \mathrm{angle}\big( (\ominus \Delta_{ij}^1) \oplus \Delta_{ij}^t \big)$,

wherein $\mathrm{angle}(\cdot)$ removes the translational part of the transformation matrix; combined with standard geometric transformations, the parameters of a revolute joint include the rotation axis $n$, the rotation center $c$ and the transformation $r$ that aligns to the handle after the rotation, so the articulation model of the revolute joint is described as

$\hat{\Delta}_t = \mathrm{transform}(c)\, \mathrm{trot}_n(a_t)\, \mathrm{transform}(c)^{-1}\, r$,

wherein $\mathrm{transform}(c)$ represents the homogeneous transformation matrix obtained by translating by the vector $c$ in a coordinate system, and $\mathrm{trot}_n(a)$ is the homogeneous transformation matrix obtained by rotating by an angle $a$ about the axis $n$, computed with the Rodrigues formula; $a$ is the implicit variable of the articulation model, and $\oplus$ and $\ominus$ respectively represent the superposition of poses and its inverse operation;
(4) from the expected transformation relationship between the two rigid components obtained in step (3), a likelihood function of the pose relationships of the two rigid components is computed for each candidate model, the maximum of the likelihood function is obtained by a maximum-likelihood estimation algorithm, and the model that maximizes the likelihood function is selected as the final articulation model.
2. The method for identifying an articulation model according to claim 1, characterized in that: in step (1), the pose of the end effector at each moment is computed from the robot joint encoders and forward kinematics.
3. The method for identifying an articulation model according to claim 1, characterized in that step (3) is specifically as follows:
let $M_{ij}$ denote the articulation model between rigid components $i$ and $j$; given the model, the probability of the noisy pose relationship between the two rigid components occurring at a given moment is

$p(\Delta_{ij}^t \mid M_{ij}) = \int p(\Delta_{ij}^t \mid M_{ij}, a)\, p(a \mid M_{ij})\, \mathrm{d}a$,

wherein $a$ is the implicit variable of the articulation model $M_{ij}$, representing the transformation degree of freedom between the two rigid components; $p(\Delta_{ij}^t \mid M_{ij}, a)$ is the probability density of the measured value $\Delta_{ij}^t$ given the known articulation model and implicit variable, computed from the given known articulation model and the transformation degree of freedom between the two rigid components; and $p(a \mid M_{ij})$ is the probability density of the implicit variable given the known articulation model;
assuming the implicit variable obeys a uniform distribution during sampling, the above equation reduces to

$p(\Delta_{ij}^t \mid M_{ij}) \propto \int p(\Delta_{ij}^t \mid M_{ij}, a)\, \mathrm{d}a$;

after a particular articulation model is selected, the expected transformation relationship between the two rigid components is

$\hat{\Delta}_{ij}^t = f_{M_{ij}}(a_t)$,

wherein $\hat{\Delta}_{ij}^t$ represents the expected transformation relationship between rigid components $i$ and $j$ under the selected articulation model $M_{ij}$, and the pose relationship between any two rigid components $i$ and $j$ in the articulation model is described as the difference between their poses, $\Delta_{ij}^t = (\ominus x_i^t) \oplus x_j^t$;
if the noise in the end-effector pose at each moment obtained in step (1) follows a Gaussian distribution, then

$p(\Delta_{ij}^t \mid M_{ij}, a) \propto \exp\!\left( -\dfrac{\lVert \Delta_{ij}^t \ominus \hat{\Delta}_{ij}^t \rVert^2}{2\sigma^2} \right)$,

wherein $\Delta_{ij}^t \ominus \hat{\Delta}_{ij}^t$ represents the difference between the expected and measured values of the articulation model and $\sigma^2$ represents the variance of the Gaussian distribution.
4. A robot operation method using the method for identifying an articulation model according to any one of claims 1 to 3, characterized by comprising the following steps:
(1) the robot moves to the target position according to the task and grasps the handle of the articulated object;
(2) an articulation model is obtained by the identification method according to any one of claims 1 to 3, the next control point of the robot is computed from it to obtain the next expected pose of the robot end effector, the robot moves to the corresponding pose, and the articulated object is operated through an impedance control strategy.
5. The robot operation method according to claim 4, characterized in that the impedance control strategy in step (2) is specifically as follows:
(41) the external force $F$ currently applied to the robot end is obtained by current-loop estimation or six-axis force measurement;
(42) the acceleration of the pose correction is calculated from the current pose of the robot end and the impedance model

$F = M\, \Delta\ddot{X} + B\, \Delta\dot{X} + K\, \Delta X$,

wherein $\Delta\ddot{X}$ represents the acceleration of the pose correction, $\Delta\dot{X}$ represents the velocity of the pose correction, $\Delta X$ represents the pose correction, $M$ represents the mass term of the impedance model, $B$ represents the damping term of the impedance model, and $K$ represents the stiffness term of the impedance model;
(43) the pose correction is obtained by integrating the acceleration from step (42) twice and is fused with the next expected pose obtained in step (2) to obtain the target pose of the robot end effector;
(44) according to the target pose of the robot end effector obtained in step (43), the inverse kinematics solution is computed and the robot is driven to the target pose to perform the operation.
CN202110717749.8A 2021-06-28 2021-06-28 Method for identifying hinge model and robot operation method Active CN113352328B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110717749.8A CN113352328B (en) 2021-06-28 2021-06-28 Method for identifying hinge model and robot operation method


Publications (2)

Publication Number / Publication Date
CN113352328A (en) / 2021-09-07
CN113352328B (en) / 2023-04-07

Family

ID=77536660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110717749.8A Active CN113352328B (en) 2021-06-28 2021-06-28 Method for identifying hinge model and robot operation method

Country Status (1)

Country Link
CN (1) CN113352328B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114310888A (en) * 2021-12-28 2022-04-12 广东省科学院智能制造研究所 Cooperative robot variable-rigidity motor skill learning and regulating method and system
CN114711760B (en) * 2022-04-06 2023-01-24 哈尔滨工业大学 Joint axis calculation method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103085069B (en) * 2012-12-17 2015-07-15 北京邮电大学 Novel robot kinematics modeling method
CN108229416B (en) * 2018-01-17 2021-09-10 苏州科技大学 Robot SLAM method based on semantic segmentation technology
US20210081791A1 (en) * 2019-09-13 2021-03-18 Osaro Computer-Automated Robot Grasp Depth Estimation
US11691281B2 (en) * 2019-11-08 2023-07-04 Massachusetts Institute Of Technology Robot control at singular configurations
US11173610B2 (en) * 2019-11-13 2021-11-16 Vicarious Fpc, Inc. Method and system for robot control using visual feedback
EP3838503B1 (en) * 2019-12-16 2024-05-01 Robert Bosch GmbH Method for controlling a robot and robot controller
CN111872934B (en) * 2020-06-19 2023-01-31 南京邮电大学 Mechanical arm control method and system based on hidden semi-Markov model

Also Published As

Publication number Publication date
CN113352328A (en) 2021-09-07

Similar Documents

Publication Publication Date Title
CN113352328B (en) Method for identifying hinge model and robot operation method
Zhu et al. Dual-arm robotic manipulation of flexible cables
Johns Coarse-to-fine imitation learning: Robot manipulation from a single demonstration
CN111360827B (en) Visual servo switching control method and system
US11741701B2 (en) Autonomous task performance based on visual embeddings
CN111251295B (en) Visual mechanical arm grabbing method and device applied to parameterized parts
Yamada et al. Motion planner augmented reinforcement learning for robot manipulation in obstructed environments
EP2657863B1 (en) Methods and computer-program products for generating grasp patterns for use by a robot
CN107627303B (en) PD-SMC control method of visual servo system based on eye-on-hand structure
CN112605973B (en) Robot motor skill learning method and system
CN111881772A (en) Multi-mechanical arm cooperative assembly method and system based on deep reinforcement learning
JP7301034B2 (en) System and Method for Policy Optimization Using Quasi-Newton Trust Region Method
CN109108978B (en) Three-degree-of-freedom space manipulator motion planning method based on learning generalization mechanism
CN112207835B (en) Method for realizing double-arm cooperative work task based on teaching learning
Lippiello et al. Eye-in-hand/eye-to-hand multi-camera visual servoing
CN111702754A (en) Robot obstacle avoidance trajectory planning method based on simulation learning and robot
CN115319734A (en) Method for controlling a robotic device
CN115351780A (en) Method for controlling a robotic device
CN114516060A (en) Apparatus and method for controlling a robotic device
CN112809666A (en) 5-DOF mechanical arm force and position tracking algorithm based on neural network
CN112734823B (en) Image-based visual servo jacobian matrix depth estimation method
CN117340929A (en) Flexible clamping jaw grabbing and disposing device and method based on three-dimensional point cloud data
Scholz et al. Learning non-holonomic object models for mobile manipulation
Behera et al. A hybrid neural control scheme for visual-motor coordination
CN113681560B (en) Method for operating articulated object by mechanical arm based on vision fusion

Legal Events

Code / Description
PB01 / Publication
SE01 / Entry into force of request for substantive examination
GR01 / Patent grant