CN102528802A - Motion driving method for robot with nine degrees of freedom - Google Patents


Info

Publication number: CN102528802A
Authority: CN (China)
Legal status: Granted
Application number: CN2010106242769A
Other languages: Chinese (zh)
Other versions: CN102528802B (en)
Inventors: 朱登明, 谢斌, 刘华俊, 王兆其
Current Assignee: BEIJING ZHONGKE GUANGSHI TECHNOLOGY Co Ltd
Original Assignee: Institute of Computing Technology of CAS
Application filed by Institute of Computing Technology of CAS
Priority to CN201010624276.9A; published as CN102528802A, granted as CN102528802B
Legal status: Active

Landscapes

  • Manipulator (AREA)
  • Numerical Control (AREA)

Abstract

The invention provides a motion driving method for a robot with nine degrees of freedom. The method realizes motion trajectory planning and posture planning for the robot's end hand from user-supplied information about the position and posture of the end hand at multiple points in Cartesian space, yielding the position and posture of the end hand over the entire driving process. From the resulting position information, the joint space of the robot's first through sixth degrees of freedom at a given time point is solved; from the resulting posture information, the joint space of the seventh through ninth degrees of freedom at that time point is solved. The robot's motion is then driven by the resulting nine-degree-of-freedom joint space. The method has the advantage that the user need only supply the position and posture of the robot's end hand at a few points in space to drive the robot; it is simple to implement and efficient to run.

Description

Motion driving method of nine-degree-of-freedom robot
Technical Field
The invention relates to robot motion control, in particular to a motion driving method of a nine-degree-of-freedom robot.
Background
With the continuous development of science, technology, and productivity, robots are applied ever more widely in various fields; for example, they can be seen everywhere in the production workshops of large automobile manufacturers. In most cases, one needs a robot to perform a given action, such as welding in the production of a car. The discipline that studies how a robot moves is called robot kinematics. Robot kinematics can be divided into two categories: forward and inverse. Forward kinematics solves the following problem: given the robot's joint space Θ(θ) = [θ1, θ2, …, θn], find the position and posture of the robot's end. Inverse kinematics solves the converse problem: given the position and posture of the robot's end, find the robot's joint space. The solution of inverse kinematics is not unique, and its solving process is more complex than that of forward kinematics.
In the prior art, various methods for solving inverse kinematics have been proposed, such as the projection method and the analytic method mentioned in reference 1 ("Advanced Robot Control", Tan Min, Education Press, May 2007). These methods have been used in existing 6-degree-of-freedom industrial robots (also called full-degree-of-freedom robots, whose joints resemble the waist, shoulder, elbow and wrist joints of the human body), which rely on them to perform mechanical movements quickly and accurately.
Nowadays, it has become possible to apply robots in the field of photography in addition to the traditional field of industrial production. A camera-carrying robot has broad application prospects in special scenes, such as high-risk ones, and camera work performed by a robot is more stable and less prone to shake than manual operation. However, since a professional in the photography industry, such as a director, is unlikely to have knowledge of robot control, a robot applied in this field must provide the user with a simple, easy-to-operate interface; after the user inputs limited information through the interface, the robot should be able to complete the prescribed motion within the prescribed time and reach the predetermined posture. For example, a director inputs on a touch screen several key points that the camera lens must reach, together with the times at which the camera reaches them; the robot should derive the corresponding trajectory from this information and then generate the joint space that completes the trajectory. Furthermore, a full-degree-of-freedom robot with 6 degrees of freedom cannot meet the workspace requirements of a camera robot: operations such as infinite rotation around the lens cannot be realized with 6 degrees of freedom. Therefore, to support rotation around the camera and special shooting requirements, 2 degrees of freedom are added behind the wrist joint of the camera robot, and one sliding degree of freedom is added in front of all joints to enlarge the robot's motion space. The camera robot therefore needs to be a 9-degree-of-freedom robot.
Due to these characteristics of robots in the field of photography, prior-art inverse kinematics solution methods cannot be directly applied to a camera robot with 9 degrees of freedom.
Disclosure of Invention
The invention aims to overcome the defect that existing inverse kinematics solution methods cannot be directly applied to a camera robot with 9 degrees of freedom, and accordingly provides a fast and efficient robot motion driving method.
In order to achieve the above object, the present invention provides a motion driving method of a robot, the robot including 9 degrees of freedom, a moving joint of the robot having a first degree of freedom, a shoulder joint having a second degree of freedom, an elbow joint having a third degree of freedom, a wrist joint having a fourth degree of freedom, a hand joint having fifth, sixth and seventh degrees of freedom, and a rotation joint having eighth and ninth degrees of freedom; the method comprises the following steps:
step 1), realizing motion trajectory planning and posture planning of the robot tail end hand by using information of positions and postures of the robot tail end hand on a plurality of points in a Cartesian space, which is provided by a user, and obtaining the position of the robot tail end hand in the whole driving process and the posture of the robot tail end hand in the whole driving process;
step 2), solving joint space of the robot from a first degree of freedom to a sixth degree of freedom at a certain time point by using the position information of the robot end hand obtained in the step 1), and solving joint space of the robot from a seventh degree of freedom to a ninth degree of freedom at the time point by using the posture information of the robot end hand obtained in the step 1); the resulting joint space of nine degrees of freedom is used to drive the motion of the robot.
In the above technical solution, further comprising:
and 3) simulating by using the joint space information of the robot with nine degrees of freedom at a plurality of time points, which is obtained in the step 2), and selecting a better value from the joint space information of the nine degrees of freedom at the plurality of time points as a control point according to a simulation result.
In the above technical solution, in the step 1), the motion trajectory planning includes:
and performing curve fitting on the position information of the terminal hand of the robot provided by the user on a plurality of points in a Cartesian space to obtain at least one motion trail curve.
In the above technical solution, the curve fitting adopts a B-spline curve method, a Bezier curve method, or a method combining B-spline and Bezier curves.
In the above technical solution, in the step 1), the attitude planning includes:
step a), taking two adjacent points with known attitude information in Cartesian space as the initial position and the target position respectively, and converting the attitude information of the initial position and the target position from Euler-angle form to quaternion form, denoted Q1 and Q2 respectively;
step b), solving the attitude between the initial position and the target position at any moment, namely: Qn = Q1 + (Q2 − Q1) × (tn − t1);
step c), converting the attitude obtained in step b) back to Euler-angle form.
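The attitude interpolation of steps a) to c) can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; the function names, the renormalization of the interpolated quaternion, and the hemisphere check are assumptions not stated in the text:

```python
import math

def euler_to_quat(roll, pitch, yaw):
    """Convert (Roll, Pitch, Yaw) Euler angles (radians) to a quaternion (w, x, y, z)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (cr * cp * cy + sr * sp * sy,
            sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy)

def quat_to_euler(q):
    """Convert a quaternion (w, x, y, z) back to (Roll, Pitch, Yaw) in radians."""
    w, x, y, z = q
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw

def interpolate_pose(e1, e2, t1, t2, tn):
    """Step b): Q_n = Q1 + (Q2 - Q1) * s with s = (tn - t1) / (t2 - t1),
    renormalized so the result is a unit quaternion."""
    q1 = euler_to_quat(*e1)
    q2 = euler_to_quat(*e2)
    # keep the pair in the same hemisphere, so we interpolate the short way round
    if sum(a * b for a, b in zip(q1, q2)) < 0:
        q2 = tuple(-c for c in q2)
    s = (tn - t1) / (t2 - t1)
    qn = tuple(a + (b - a) * s for a, b in zip(q1, q2))
    norm = math.sqrt(sum(c * c for c in qn))
    return quat_to_euler(tuple(c / norm for c in qn))
```

For key points whose attitudes are far apart, a spherical interpolation may be preferable to this normalized linear form, but for closely spaced key points the two agree closely.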
In the above technical solution, the step 2) includes:
step 2-1), selecting position information and attitude information at a plurality of time points from the position information and attitude information in the whole driving process obtained in the step 1);
step 2-2), substituting the position information at the plurality of time points obtained in step 2-1) into the calculation formula of the gradient projection method, to obtain the joint spaces of the robot from the first degree of freedom to the sixth degree of freedom at the plurality of times;
step 2-3), calculating the adjustment amount of the attitude according to the joint space of the robot from the first degree of freedom to the sixth degree of freedom calculated in step 2-2) and the attitude information of the robot at the plurality of time points obtained in step 2-1), and calculating from this adjustment amount the joint space of the robot from the seventh degree of freedom to the ninth degree of freedom at the plurality of time points.
In the above technical solution, in the step 2-2), the calculation formula of the gradient projection method is as follows:

$$\dot{\theta} = J^{+}\dot{X} + \alpha\,(I - J^{+}J)\,\dot{\Phi}$$

In the above formula, $\dot{\theta}$ represents the joint velocity, J is the Jacobian matrix, $J^{+}$ is a pseudo-inverse matrix of J, $\dot{X}$ is the first derivative of the position with respect to time, $\dot{\Phi}$ is a set of free vectors, and α represents an amplification factor. The free vector combines the gradients of the operability function, the obstacle avoidance function and the joint constraint function with weighting coefficients $\beta_W, \beta_D, \beta_L, \beta_R$, each a number in [0, 1]; wherein:

W is a function of the degree of operability, $W = \det(JJ^{T})$;

D(θ) is an obstacle avoidance function (its closed form appears only as an image in the original), where θ represents a certain attitude in the joint space, i represents the number of the capsule-like bounding box, j represents the number of the obstacle, $d_0$ is a threshold representing a safe distance, and η is a coefficient;

L(θ) is a joint constraint function (its closed form appears only as an image in the original), where $a_i = (\theta_{i\max} + \theta_{i\min})/2$ is the median value of the allowable range of each joint, $\theta_{i\max}$ indicates the maximum value of the i-th joint angle, $\theta_{i\min}$ represents the minimum value of the i-th joint angle, and n is the number of joints;

$\dot{\Phi}$ is a 9-dimensional vector, and ∇ represents the gradient.
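As a numerical illustration of the above formula, one step of the gradient projection method can be sketched as follows; the random Jacobian, the amplification factor and the free vector below are arbitrary illustrative choices, and the patent's operability, obstacle-avoidance and joint-constraint gradients are not reproduced:

```python
import numpy as np

def gradient_projection_step(J, x_dot, phi_dot, alpha):
    """One step of the gradient projection method:
    theta_dot = J+ x_dot + alpha * (I - J+ J) phi_dot.
    The first term tracks the end-hand velocity; the second projects the
    free vector phi_dot into the Jacobian's null space (self-motion)."""
    J_pinv = np.linalg.pinv(J)            # Moore-Penrose pseudo-inverse
    n = J.shape[1]
    null_proj = np.eye(n) - J_pinv @ J    # null-space projector (I - J+ J)
    return J_pinv @ x_dot + alpha * (null_proj @ phi_dot)

def operability(J):
    """Degree-of-operability measure W = det(J J^T); W -> 0 near singularities."""
    return np.linalg.det(J @ J.T)
```

Whatever free vector is chosen, the end-hand velocity produced is the same, because the second term lies in the null space of J; this is what allows the extra terms to optimize operability, obstacle avoidance and joint limits without disturbing the planned trajectory.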
In the above technical solution, the step 2-3) includes:
step 2-3-1), calculating, from the joint space of the first through sixth degrees of freedom obtained in step 2-2), the attitude R1 of the robot at a certain moment; obtaining, from the attitude information generated by the attitude planning of step 1), the attitude R2 at the target position; and calculating the attitude adjustment R required of the robot's end hand from that moment to the target position:

$$R = R_1^{T} R_2$$
step 2-3-2), solving an equivalent rotating shaft by utilizing rotation transformation;
$$R = \mathrm{Rot}(f, \theta_t) = \begin{bmatrix} f_x f_x \mathrm{vers}\theta_t + \cos\theta_t & f_y f_x \mathrm{vers}\theta_t - f_z \sin\theta_t & f_z f_x \mathrm{vers}\theta_t + f_y \sin\theta_t & 0 \\ f_y f_x \mathrm{vers}\theta_t + f_z \sin\theta_t & f_y f_y \mathrm{vers}\theta_t + \cos\theta_t & f_z f_y \mathrm{vers}\theta_t - f_x \sin\theta_t & 0 \\ f_z f_x \mathrm{vers}\theta_t - f_y \sin\theta_t & f_y f_z \mathrm{vers}\theta_t + f_x \sin\theta_t & f_z f_z \mathrm{vers}\theta_t + \cos\theta_t & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
(f_x, f_y, f_z) is the equivalent rotation axis of the Cartesian space rotation transformation and corresponds to the seventh, eighth and ninth degrees of freedom of the robot's end hand; θ_t is the angle difference between the attitudes R_1 and R_2 at time t; vers θ_t = 1 − cos θ_t.
step 2-3-3), calculating from the equivalent rotation axis the adjustment amount of the robot relative to the initial attitude, namely the joint space of the seventh, eighth and ninth degrees of freedom.
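Steps 2-3-1) and 2-3-2) can be sketched numerically as follows; the axis-angle extraction shown is the standard one, and the degenerate cases (angle near 0 or π) are deliberately left unhandled in this sketch:

```python
import numpy as np

def rot(axis, theta):
    """Rodrigues' formula: 3x3 rotation matrix for a unit axis and an angle
    (the rotation block of Rot(f, theta_t) above, with vers(t) = 1 - cos(t))."""
    f = np.asarray(axis, dtype=float)
    K = np.array([[0.0, -f[2], f[1]],
                  [f[2], 0.0, -f[0]],
                  [-f[1], f[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def equivalent_axis_angle(R1, R2):
    """Given the current attitude R1 and target attitude R2 (3x3 rotation
    matrices), compute the equivalent rotation axis f and angle theta_t of
    the adjustment R = R1^T R2."""
    R = R1.T @ R2
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    # the antisymmetric part of R is 2 sin(theta) [f]_x
    f = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return f, theta
```

Mapping the recovered axis and angle onto the seventh, eighth and ninth joints depends on the specific geometry of the rotation joint, which the text does not detail.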
In the above technical solution, the step 3) includes:
step 3-1), solving a simulation result of the PTP motion by using the speed and the acceleration of the axes;
step 3-2), solving the fitting similarity between the simulation result of the PTP motion and the target curve, and selecting, according to the fitting similarity, a better value from the joint space information of the nine degrees of freedom at the plurality of time points as the control points.
In the above technical solution, the step 3-1) includes:
step 3-1-1), obtaining, from the speed and acceleration parameters of the axis, the longest acceleration time of the i-th joint of the robot in the PTP motion (the closed form appears only as an image in the original); wherein $t_{ai}$ indicates the acceleration completion time of the i-th joint, $v_{\max}$ represents the maximum speed of an axis motor of the robot, $a_{\max}$ represents the maximum acceleration of an axis motor of the robot, and $S_i$ represents the distance of the transformation:

$$S_i = \theta_{i\mathrm{End}} - \theta_{i\mathrm{Start}}$$

i.e. the end-point joint space minus the starting-point joint space;
step 3-1-2), calculating, from the result of the previous step, the longest acceleration completion time, the longest uniform-speed completion time and the longest end time of the whole robot (the expression for $t_{ei}$ appears only as an image in the original):

$$t_{di} = t_{ei} - t_{ai}$$

$$\max T_a = \max(t_{a1}, t_{a2}, \ldots, t_{ai})$$

$$\max T_d = \max(t_{d1}, t_{d2}, \ldots, t_{di})$$

$$\max T_e = \max(t_{e1}, t_{e2}, \ldots, t_{ei})$$

wherein $t_{ei}$ indicates the end time and $t_{di}$ represents the uniform-speed completion time;
step 3-1-3), calculating the running speed and running acceleration of each joint under the condition of uniform acceleration and uniform speed (the expression for $V_i$ appears only as an image in the original):

$$A_i = \frac{V_i}{\max T_a}$$

wherein $V_i$ is the running speed of joint i and $A_i$ is the running acceleration of joint i;
step 3-1-4), calculating, according to the above results, the expression $\theta_t$ of the joint space over time in the PTP operation, and substituting it into the forward kinematics solution to obtain the simulation result $F_{ptp}(\theta_{i\mathrm{Start}}, \theta_{i\mathrm{End}}, t)$; wherein

$$\theta_t = \begin{cases} \dfrac{1}{2} A_i t^2, & 0 < t \le \max T_a \\[4pt] V_i t - \dfrac{1}{2}\dfrac{V_i^2}{A_i}, & \max T_a < t \le \max T_d \\[4pt] V_i \max T_e - \dfrac{A_i}{2}\,(\max T_e - t)^2, & \max T_d < t \le \max T_e \end{cases}$$
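The trapezoidal profile behind $\theta_t$ can be sketched as follows. This sketch assumes the standard continuous form, whose deceleration branch reads $V_i(\max T_e - \max T_a) - \frac{A_i}{2}(\max T_e - t)^2$ so that the three segments join continuously; the fallback to a triangular profile for short moves is also an assumption not discussed in the text:

```python
def trapezoid_position(t, V, A, Ta, Td, Te):
    """Joint displacement under a trapezoidal velocity profile:
    accelerate at A until Ta, cruise at V until Td, decelerate to rest at Te.
    Assumes V = A * Ta and Td = Te - Ta (synchronized PTP motion)."""
    if t <= 0:
        return 0.0
    if t <= Ta:                       # uniform acceleration
        return 0.5 * A * t * t
    if t <= Td:                       # uniform velocity
        return V * t - 0.5 * V * V / A
    if t <= Te:                       # uniform deceleration
        return V * (Te - Ta) - 0.5 * A * (Te - t) ** 2
    return V * (Te - Ta)              # total distance covered

def plan_trapezoid(S, v_max, a_max):
    """Phase times for one joint moving a distance S >= 0 (illustrative sketch;
    falls back to a triangular profile when S is too short to reach v_max)."""
    if S >= v_max * v_max / a_max:    # full trapezoid
        Ta = v_max / a_max
        Te = S / v_max + Ta
        V = v_max
    else:                             # triangular profile (no cruise phase)
        Ta = (S / a_max) ** 0.5
        Te = 2 * Ta
        V = a_max * Ta
    return V, a_max, Ta, Te - Ta, Te
```

In the synchronized multi-joint case, the per-joint times computed this way feed the maxima of step 3-1-2), after which each joint's V and A are rescaled to the shared schedule.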
in the above technical solution, the step 3-2) includes:
step 3-2-1), calculating the degree of similarity by utilizing the principle of autocorrelation (the formula appears only as an image in the original);

step 3-2-2), comparing the calculated similarity value with a threshold; if it is within the threshold, judging that the curve fitting is successful; otherwise, supplementing a key point at the temporal midpoint of the target curve, dividing the target curve into two sections, and then repeating step 3-1) until the fitting of the whole curve is completed; the key points used in the fitting process are the control points.
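The subdivision loop of step 3-2) can be sketched as follows; the maximum-deviation error measure stands in for the autocorrelation criterion, whose closed form appears only as an image, and all function names are assumptions:

```python
def fit_error(sim_curve, target_curve):
    """Stand-in similarity measure: maximum absolute deviation between two
    sampled curves (the patent's autocorrelation-based measure is not shown)."""
    return max(abs(s - g) for s, g in zip(sim_curve, target_curve))

def fit_with_subdivision(target, t0, t1, simulate, tol, depth=0, max_depth=12):
    """Recursively bisect [t0, t1] in time until the simulated PTP motion
    between the interval's endpoint key points matches the target curve
    within tol. Returns the key-point times used as control points."""
    ts = [t0 + (t1 - t0) * k / 32.0 for k in range(33)]
    sim = simulate(t0, t1, ts)
    goal = [target(t) for t in ts]
    if fit_error(sim, goal) <= tol or depth >= max_depth:
        return [t0, t1]
    mid = 0.5 * (t0 + t1)             # supplement a key point at the midpoint
    left = fit_with_subdivision(target, t0, mid, simulate, tol, depth + 1, max_depth)
    right = fit_with_subdivision(target, mid, t1, simulate, tol, depth + 1, max_depth)
    return left[:-1] + right          # merge, dropping the duplicated midpoint
```

With a quadratic target and a straight-line "simulation" between endpoint values, each bisection quarters the error, so the recursion terminates after a few levels.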
The invention has the advantages that:
By adopting the robot motion driving method of the invention, the user only needs to give the positions and postures of the robot's end hand at a few points in space in order to drive the robot; the method is simple to implement and efficient to run.
Drawings
Fig. 1 is a schematic view of 9 degrees of freedom of a 9 degree of freedom robot according to the present invention;
FIG. 2 is a flow diagram of a robot motion driving method in one embodiment;
fig. 3 is a schematic view of a 9-degree-of-freedom robot with seventh, eighth and ninth degrees of freedom at a rotation joint according to the present invention;
FIG. 4 is a schematic diagram of an obstacle avoidance structure in one embodiment;
FIG. 5 is a schematic diagram of the acceleration, deceleration, and uniform velocity process of a joint, according to one embodiment.
Detailed Description
The invention is described below with reference to the accompanying drawings and the detailed description.
Before describing the present invention, some concepts involved in the present invention are explained in a unified manner for the understanding.
1. 9 degrees of freedom of the robot: referring to fig. 1, a mobile joint of a robot has a first degree of freedom, a shoulder joint has a second degree of freedom, an elbow joint has a third degree of freedom, a wrist joint has a fourth degree of freedom, a hand joint has fifth, sixth and seventh degrees of freedom, and a rotation joint has eighth and ninth degrees of freedom.
2. Cartesian space: refers to a linear space defined using a cartesian coordinate system. Three-dimensional cartesian space is used in this application to represent the motion space of a robot in a broad sense.
3. Joint space: assuming the robot has n degrees of freedom, the robot's joint coordinate system is an n-dimensional space; the joint space is a set of n-dimensional vectors whose forward kinematics solutions are identical (i.e. the positions and postures of the robot's end solved from each element of the joint space through forward kinematics are the same).
4. Gradient projection method: the basic formula of the gradient projection method is:

$$\dot{\theta} = J^{+}\dot{X} + \alpha\,(I - J^{+}J)\,\dot{\Phi} \qquad (1)$$

wherein $\dot{\theta}$ is the time derivative of the angles to be determined, J is the Jacobian matrix, $J^{+}$ is a pseudo-inverse matrix of J, $\dot{X}$ is the first derivative of the position-and-attitude vector with respect to time, $\dot{\Phi}$ is a set of free vectors (a free vector is an arbitrary vector of the matching dimension), and α represents a scalar coefficient.

The free vector Φ can be expressed as a function of the joint space θ; differentiating it with respect to time yields the following equation:

$$\dot{\Phi} = (\nabla\Phi)^{T}\,\dot{\theta} \qquad (2)$$

wherein ∇Φ represents the gradient of the free vector Φ with respect to the joint space and $[\cdot]^{T}$ represents a transposed matrix.

From the above equations (1) and (2), the extended equation (3) is obtained:

$$\dot{\theta} = J^{+}\dot{X} + \alpha\,(I - J^{+}J)\,\nabla\Phi \qquad (3)$$
5. Posture: refers to the pose orientation of the end of the robot hand.
6. Forward kinematics: knowing the joint coordinates of each joint of the robot, the pose of the robot's end is determined; this mapping from the joint space to the Cartesian space of the robot's end is called forward kinematics.
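As a minimal illustration of concept 6, the forward kinematics of a hypothetical planar two-joint arm (an assumption for illustration only, not the 9-degree-of-freedom robot of the invention) maps joint space to Cartesian space:

```python
import math

def forward_kinematics_2r(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar arm with two revolute joints: map the
    joint space (theta1, theta2) to the Cartesian position of the end point."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

Every joint vector yields exactly one end position, while the converse (inverse kinematics) is generally one-to-many, which is why the inverse problem is the harder one.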
After the above concepts are collectively described, a motion driving method of an image pickup robot of the present invention is explained in the following embodiments.
In the background art, it has been mentioned that the user needs to provide some basic information for the camera robot, such as several key points in the motion track of the camera lens, the time when the camera lens reaches these key points, the time when the camera lens stops at some key points, etc. The information of the key points comprises the position coordinates of the points on the x, y and z axes in the cartesian space and the attitude vector (Roll, Pitch, Yaw) of the tail end of the robot hand on the points. The information of these key points is essentially the position and posture of the camera lens (i.e. the end of the hand of the camera robot), and it is known to those skilled in the art that to realize the motion driving of the camera robot, the information of the angle change of all joints of the camera robot along with time needs to be known. How the present embodiment realizes the motion driving of the image pickup robot will be described below with reference to fig. 2 on the basis of the above basic information.
And step 1, realizing the motion trail planning and the posture planning of the camera lens.
The imaging robot according to the present application is the 9-degree-of-freedom robot shown in fig. 1. When calculating the joint velocities of all 9 degrees of freedom from the position and orientation of the end of the imaging robot, a (6 × 9) Jacobian matrix and its corresponding (9 × 6) pseudo-inverse matrix must be computed, which takes a large amount of calculation time. Referring to fig. 3, in the imaging robot according to the present invention, the axes of the seventh degree of freedom in the hand joint and of the eighth and ninth degrees of freedom intersect at one point and map to the three rotational directions of three-dimensional Cartesian space, so the imaging robot can change its attitude using only the seventh, eighth and ninth degrees of freedom; similarly, the position change of the imaging robot is related only to the first six of the 9 degrees of freedom. Based on these structural characteristics, the position and the attitude of the imaging robot can be considered separately in this embodiment, that is, the motion trajectory planning and the attitude planning of the camera lens are solved separately, which reduces the complexity of the calculation, improves efficiency, and ensures that the related calculations are completed in real time.
In view of the above, step 1 comprises the following steps:
step 1-1, the key points provided to the camera robot by the user via the relevant interface are actually discrete points in Cartesian space; therefore it is first necessary to transform these discrete points into a continuous curve representing the motion trajectory of the camera lens. This transformation process is called motion trajectory planning.
The motion trajectory planning can adopt a related prior-art method, such as the B-spline curve method, which guarantees continuity of motion and whose equations are simple and efficient to solve, or the Bezier curve method, which is widely used in computer graphics and has an intuitiveness of operation that the B-spline method lacks. In this embodiment, as a preferred implementation, the two methods may be combined, obtaining a continuous curve through the discrete key points from their position coordinates on the x, y and z axes. As mentioned before, the discrete points provided by the user lie in Cartesian space, so the continuous curve generated from them also lies in Cartesian space. Since several points may form more than one curve, the motion trajectory planning may have multiple results. In summary, through motion trajectory planning, several continuous curves containing the key points can be generated from the position coordinates of the discrete key points input by the user, and these curves can be expressed as functions.
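A Bezier-curve evaluation by De Casteljau's algorithm can serve as a sketch of the trajectory planning described above; the exact B-spline/Bezier combination used by the embodiment is not specified, so this is only an assumed illustration with hypothetical function names:

```python
def bezier_point(control_points, u):
    """Evaluate a Bezier curve at parameter u in [0, 1] by De Casteljau's
    algorithm; control_points is a list of (x, y, z) tuples."""
    pts = [list(p) for p in control_points]
    while len(pts) > 1:
        # repeatedly interpolate between consecutive points at ratio u
        pts = [[a + (b - a) * u for a, b in zip(p, q)]
               for p, q in zip(pts[:-1], pts[1:])]
    return tuple(pts[0])

def sample_trajectory(control_points, n):
    """Sample n + 1 points along the curve, giving a discrete motion trajectory."""
    return [bezier_point(control_points, k / n) for k in range(n + 1)]
```

A Bezier curve interpolates its first and last control points but only approximates the interior ones; when the user's key points must all lie on the curve, an interpolating spline (e.g. a B-spline fitted through the points) is used instead.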
And 1-2, performing attitude planning on the camera robot.
It was mentioned earlier that the information of the key points input by the user includes attitude information (Roll, Pitch, Yaw) in addition to the position information mentioned in step 1-1. However, the number of key points input by the user is limited, and the posture of a certain point between two adjacent key points cannot be directly obtained. The purpose of pose planning is to calculate pose information for these intermediate points.
In the attitude planning process, of two adjacent key points, the earlier one is taken as the initial position and the later one as the target position; the attitude of the robot at the initial position is denoted R1 and the attitude at the target position R2. Both R1 and R2 include Roll, Pitch and Yaw components. Following common general knowledge in the art, the attitude information of the initial position and the target position can first be rewritten as quaternions (see reference 2, page 190 of "3D Math Primer for Graphics and Game Development", Fletcher Dunn, Wordware Publishing, Inc.), namely R1 → Q1, R2 → Q2; then the attitude at any moment is obtained as Qn = Q1 + (Q2 − Q1) × (tn − t1); finally, the quaternion Qn is converted back to the Euler-angle form of Roll, Pitch, Yaw.
Through the attitude planning, the attitude represented by (Roll, Pitch, Yaw) at any time in the whole driving process can be obtained.
And 1-3, storing results of motion trajectory planning and posture planning.
The motion trajectory planning result and the attitude planning result obtained in the preceding steps 1-1 and 1-2 are continuous functions, while the robot driving process does not need that much information; it suffices to take a number of discrete points on the continuous functions. Since the robot control has a minimum time interval, the total time of the robot's movement can be divided at integral multiples of this interval, and these time points are then substituted into the functions generated in steps 1-1 and 1-2 representing the motion trajectory and the attitude, obtaining the positions and attitudes of the robot's end hand at a number of discrete points. This information is saved. In this step, the specific value of the integral multiple can be determined according to the precision required in actual operation.
And 2, solving the joint space of the camera robot.
As mentioned earlier, the position of the camera robot is considered separately from its attitude for ease of calculation. The generation of the joint space of the first 6 degrees of freedom of the imaging robot is related to the position information of its end hand, and the generation of the joint space of the last 3 degrees of freedom is related to the attitude information of its end hand. They are therefore described separately below.
And 2-1, solving joint space of the front 6 degrees of freedom of the camera robot.
On the basis of the known position information of the end hand of the robot, the inverse kinematics of the robot is used for solving an array corresponding to the position information in joint space, namely the joint space with the first 6 degrees of freedom.
The inverse kinematics solution must satisfy four constraints: 1) ensuring the safety of the robot; 2) avoiding collisions; 3) avoiding joint limits; 4) avoiding joint singularities and algorithm singularities.
The kinematic equations of the robot can be expressed as (see the aforementioned reference 1, page 79):
X = f(θ) (4)
In the above formula, X ∈ R^m denotes the position of the robot in Cartesian space (because position and attitude are separated, the X here carries only position information, slightly different from the X in formula (1) above, which represented both attitude and position); θ ∈ R^n denotes the joint space of the robot, where n is the number of joints and m is the dimension of the target vector. In the present application m refers to the Cartesian space dimension, which comprises 6 dimensions (x, y, z, roll, pitch, yaw).
By differentiating the above equation (4), the following relationship between the Cartesian space velocity Ẋ of the camera robot and its joint velocity θ̇ is obtained:

Ẋ = Jθ̇ (5)

where J is the known Jacobian matrix, J ∈ R^{m×n}. Given the velocity Ẋ of the end hand of the camera robot in Cartesian space (which can be obtained by differentiating the position information generated in step 1 with respect to time), solving equation (5) yields the following inverse kinematics solution:

θ̇ = J⁺Ẋ + α(I − J⁺J)φ̇

The above formula is the basic formula of the gradient projection method mentioned before as equation (1); it is the general solution of equation (5), where J⁺ is the Moore-Penrose pseudo-inverse of the Jacobian matrix J, α ∈ R is a scalar coefficient, and φ̇ is a free vector that can be chosen arbitrarily. The first term on the right-hand side is a particular solution of equation (5) and the second term is the homogeneous solution. In a physical sense, the first term defines the motion of the robot hand, while the second term defines joint-space self-motion that does not affect the motion of the hand: owing to the redundancy, this series of joints can change without affecting the mapped target value once the function mapping is determined.
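The general solution above can be sketched numerically with toy dimensions (the Jacobian and vectors below are arbitrary illustrative values, and numpy's `pinv` stands in for the Moore-Penrose pseudo-inverse J⁺):

```python
# Minimal numeric sketch of the gradient projection solution
#   theta_dot = J+ X_dot + alpha (I - J+ J) phi_dot
# for a redundant arm (toy 2-D task, 3 joints).
import numpy as np

def gradient_projection(J, x_dot, phi_dot, alpha):
    J_pinv = np.linalg.pinv(J)                    # J+ : n x m pseudo-inverse
    particular = J_pinv @ x_dot                   # tracks the hand motion
    null_proj = np.eye(J.shape[1]) - J_pinv @ J   # projector onto null(J)
    self_motion = alpha * null_proj @ phi_dot     # does not move the hand
    return particular + self_motion

J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.3]])                   # 2 task dims, 3 joints
x_dot = np.array([0.1, -0.2])
phi_dot = np.array([0.0, 0.0, 1.0])
theta_dot = gradient_projection(J, x_dot, phi_dot, alpha=0.5)

# The self-motion term must not change the hand velocity: J theta_dot == x_dot.
print(np.allclose(J @ theta_dot, x_dot))          # True
```

The final check illustrates the "self-motion" property: for a full-row-rank J, the homogeneous term lies in the null space of J, so the mapped hand velocity is unchanged for any α and φ̇.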
From the foregoing description, the basic formula of the gradient projection method can be further expanded; the expanded formula is shown in formula (3). Observing the formulas of the gradient projection method, J, J⁺, (I − J⁺J) and Ẋ are known or can be calculated from existing data; the key to calculating the joint velocity θ̇ is therefore how to obtain the free vector φ̇ and the scalar coefficient α, called the amplification coefficient.
Solving for the free vector φ̇

When solving for the free vector φ̇, the four constraints involved in the aforementioned inverse kinematics solution must be considered: ensuring the safety of the robot itself; avoiding collisions; respecting joint limits; and avoiding joint singularity and algorithm singularity. Corresponding objective functions can be established for these constraints, which are described separately below.
1. In order to avoid joint singularity and algorithm singularity, a manipulability function is introduced in the present application. Manipulability was defined by Yoshikawa in studying the working capability of redundant-degree-of-freedom robots; it is denoted by W and calculated as:

W = √(det(JJ^T)) (6)
The most direct relationship between W and J can be expressed through the m singular values of J obtained by singular value decomposition:

W = σ₁·σ₂·…·σ_m (7)

where σ₁ ≥ σ₂ ≥ … ≥ σ_m ≥ 0 are the singular values of the J matrix. According to matrix theory, all m singular values of a J matrix of full row rank are greater than zero, while a rank-deficient J matrix has at least one zero singular value. When W = 0 the corresponding J matrix is rank-deficient, i.e. the robot is at a joint singularity, which is the most undesirable situation in robot motion planning. To avoid moving to a singular position, the manipulability must be kept greater than zero; the larger the value of W, the farther the joints are from a singular position and the better the operability. Zero can therefore be regarded as a repulsive source for W, and introducing this relation into the motion control of the redundant-degree-of-freedom robot helps to avoid motion toward positions where joint singularity occurs.
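A quick numerical check of the relation between formulas (6) and (7) — W computed as √det(JJ^T) equals the product of the singular values of J (the example Jacobian is arbitrary):

```python
# Numerical check: the manipulability W = sqrt(det(J J^T)) of eq. (6)
# equals the product of the m singular values of J, as stated by eq. (7).
import numpy as np

def manipulability(J):
    return np.sqrt(np.linalg.det(J @ J.T))

J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.3]])                 # arbitrary full-row-rank Jacobian
W = manipulability(J)
sigma = np.linalg.svd(J, compute_uv=False)      # singular values, descending
print(np.isclose(W, sigma.prod()), W > 0.0)     # True True: away from singularity
```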
2. To avoid collisions, an obstacle avoidance function may be set. Considering that the robot basically consists of chained links, the simple capsule-like structure shown in Fig. 4 can be used to surround them in this embodiment. When the distance between an obstacle and a capsule of the robot is less than a certain value, the following obstacle avoidance function D(θ) = D(θ, B) is obtained:

$$D_{ij}(\theta)=\begin{cases}\dfrac{1}{2}\eta\left(\dfrac{1}{d_{ij}(\theta)}-\dfrac{1}{d_0}\right), & d_{ij}(\theta)\le d_0\\[4pt]0, & d_{ij}(\theta)>d_0\end{cases} \qquad (8)$$

In the above formula, θ represents a certain posture in the joint space, B represents an obstacle, i is the number of the capsule-like bounding box, j is the number of the obstacle, d₀ is the safe-distance threshold, and η is a coefficient whose value is usually taken as 1.
3. In order to respect the joint limits, each joint should be kept as close as possible to the middle of its constraint range. The motion is optimized by taking the middle position (θ_imax + θ_imin)/2 of each joint's range as a reference, and the joint constraint function is constructed as follows:

$$L(\theta)=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\theta_i-a_i}{a_i-\theta_{i\max}}\right)^2 \qquad (9)$$

where a_i = (θ_imax + θ_imin)/2 is the median of the allowable range of each joint, θ_imax is the maximum of the i-th joint angle, θ_imin is the minimum of the i-th joint angle, and n is the number of joints. Optimizing the range of motion of the joints means minimizing the value of the joint constraint function L(θ).
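The two objective functions above can be sketched directly (the distances and joint ranges below are illustrative scalars, not the patent's capsule-to-obstacle distances):

```python
# Sketch of eq. (8) (obstacle-avoidance potential) and eq. (9) (joint-limit
# cost). Inputs are illustrative values, not the patent's capsule distances.

def obstacle_potential(d, d0, eta=1.0):
    """Eq. (8): repulsion switches on once distance d drops below threshold d0."""
    return 0.5 * eta * (1.0 / d - 1.0 / d0) if d <= d0 else 0.0

def joint_limit_cost(theta, theta_min, theta_max):
    """Eq. (9): mean squared deviation of each joint from its mid-range a_i."""
    total = 0.0
    for th, lo, hi in zip(theta, theta_min, theta_max):
        a = 0.5 * (lo + hi)               # a_i, the mid-range reference
        total += ((th - a) / (a - hi)) ** 2
    return total / len(theta)

print(obstacle_potential(d=0.5, d0=0.2))        # 0.0: outside the threshold
print(obstacle_potential(d=0.1, d0=0.2) > 0)    # True: repulsion active
print(joint_limit_cost([0.0, 0.0], [-1.0, -2.0], [1.0, 2.0]))  # 0.0 at mid-range
```

Note that the cost of eq. (9) reaches 1 for a joint sitting exactly at its limit and 0 at mid-range, which is why minimizing L(θ) keeps joints centred.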
As mentioned above, the four constraints also include ensuring the safety of the robot itself; this constraint is satisfied essentially at the same time as the above three constraints are solved.
4. In addition to the three objective functions of formulas (6), (8) and (9), the first degree of freedom among the moving joints of the camera robot (which may be called the guide or sliding degree of freedom, since the moving joint essentially slides on the guide rail) is considered to have a great influence on the solution, and the unit of the sliding degree of freedom is not of the same order of magnitude as that of the rotational degrees of freedom. An objective function R(θ) is therefore introduced for it and treated as a separate component of the free vector; this component has an effect only on the guide rail.
After the above four objective functions are generated, how to establish a unified optimization index is discussed below.
The self-motion and the particular solution of formula (3) must be kept of the same order of magnitude so that the four objective functions remain physically consistent in meaning and all four constraints remain effective; each objective function therefore needs a parameter to bring it to the same physical level. Since the operability of the robot deteriorates near joint-singular positions, these areas must be treated as forbidden regions; the operability itself is not an effective index for avoiding joint and algorithm singularities, because W still has a large value in the neighbourhood of "zero". Therefore 1/W(θ) is selected as the parameter measuring the influence of joint singularity on the self-motion. Similarly, the target optimization for obstacle avoidance works much better with 1/D(θ). The four motion optimizations are thus represented by potential-field functions with the same trend, and hence the same physical meaning.

Not only the trend of the potential field but also its order of magnitude has to be taken into account. For example, an appropriate operability value W₀ is selected (W = W₀ corresponds to a boundary value near the joint-singular position); when W > W₀ the robot is far from joint singularity and its influence need not be considered, i.e. the joint-space self-motion need not be adjusted to increase W(θ). Conversely, if the factor of increasing W(θ) were still taken into account, it might hinder obstacle avoidance or other motion optimizations. The obstacle avoidance term should likewise take the range of its potential field into consideration. Unlike the former two, the optimization of each joint's range of motion should always be active in motion planning, and because the variation range of L(θ) is limited, its influence on the former optimizations is small. The following unified optimization index can therefore be established:

H(θ) = β_W·(1/W(θ)) + β_D·(1/D(θ)) + β_L·L(θ) + β_R·R(θ) (10)

and the free vector is taken as its gradient, φ̇ = ∇H(θ), where β_W, β_D, β_L, β_R are weighting coefficients, which may be set as required to values in [0, 1].
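A hedged sketch of evaluating the unified index and its gradient numerically; the component objectives below are stand-in callables (the exact combination is assumed from the 1/W and 1/D discussion above, not quoted from the patent):

```python
# Sketch of the unified optimization index (assumed form):
#   H(theta) = bW/W + bD/D + bL*L + bR*R,
# whose numerical gradient serves as the free vector phi_dot.
import numpy as np

def unified_index(theta, W, D, L, R, bW=0.3, bD=0.3, bL=0.2, bR=0.2):
    return bW / W(theta) + bD / D(theta) + bL * L(theta) + bR * R(theta)

def free_vector(theta, H, eps=1e-6):
    """Central-difference gradient of H, used as the free vector phi_dot."""
    theta = np.asarray(theta, dtype=float)
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        step = np.zeros_like(theta)
        step[i] = eps
        grad[i] = (H(theta + step) - H(theta - step)) / (2 * eps)
    return grad

# Stand-in objectives on a 3-joint toy arm (not the patent's functions).
H = lambda th: unified_index(
    th,
    W=lambda t: 1.0 + t[0] ** 2,     # manipulability stand-in (always > 0)
    D=lambda t: 2.0,                 # obstacle-clearance stand-in
    L=lambda t: float(np.sum(t ** 2)),
    R=lambda t: float(t[0] ** 2),
)
phi_dot = free_vector([0.5, -0.2, 0.1], H)
print(phi_dot.shape)  # (3,): one free-vector component per joint
```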
Solving the amplification coefficient α

After the unified free vector φ̇ is determined, the amplification coefficient α is considered. The amplification coefficient directly controls the effect of the optimization during path tracking: if it is too small, the optimization effect is not reflected and the constraint conditions may be violated; if it is too large, the self-motion outweighs the basic solution, so that the result is wrong and no longer satisfies the inverse kinematics solution.

The principle for selecting the amplification coefficient is to ensure that the chosen optimization index function H(θ) decreases with time. Since

Ḣ(θ) = ∇H(θ)^T θ̇ = ∇H(θ)^T J⁺Ẋ + α∇H(θ)^T (I − J⁺J)∇H(θ)

selecting the amplification coefficient α can satisfy, or at least influence, the trend of change of H(θ). To this end, let

Ḣ(θ) = λ, λ < 0

so that the corresponding H(θ) decreases with time t. α is the value that makes the above equation hold:

α = (λ − ∇H(θ)^T J⁺Ẋ) / (∇H(θ)^T (I − J⁺J)∇H(θ))
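The choice of α can be sketched numerically under the assumption (made explicit above) that the free vector is ∇H: the descent condition Ḣ = λ is linear in α and can be solved directly. All matrices and vectors below are illustrative:

```python
# Hedged sketch: choose alpha so that dH/dt = grad_H . theta_dot equals a
# prescribed lambda < 0, assuming the free vector phi_dot = grad_H.
import numpy as np

def amplification(J, x_dot, grad_H, lam=-0.1):
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(J.shape[1]) - J_pinv @ J
    drive = grad_H @ (J_pinv @ x_dot)          # dH/dt from the particular solution
    self_gain = grad_H @ (null_proj @ grad_H)  # dH/dt contributed per unit alpha
    if abs(self_gain) < 1e-12:                 # no usable self-motion direction
        return 0.0
    return (lam - drive) / self_gain

J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.3]])
x_dot = np.array([0.1, -0.2])
grad_H = np.array([0.2, -0.1, 0.4])
alpha = amplification(J, x_dot, grad_H, lam=-0.1)

# With this alpha, dH/dt along the full solution equals lambda = -0.1.
J_pinv = np.linalg.pinv(J)
theta_dot = J_pinv @ x_dot + alpha * (np.eye(3) - J_pinv @ J) @ grad_H
print(np.isclose(grad_H @ theta_dot, -0.1))    # True
```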
After the free vector φ̇ and the amplification coefficient α have been found, they can be substituted into formula (3) to calculate the joint velocities θ̇ of the first 6 axes (i.e. the first 6 degrees of freedom) of the camera robot. Once the joint velocities of the first 6 degrees of freedom are obtained, the joint angle information at a specific moment can be derived from θ̇, and the position and posture of the robot end can then be obtained by forward kinematics.
Step 2-2, obtaining the joint space of the last 3 degrees of freedom

Step 2-2-1. According to the attributes of the robot joints, the seventh, eighth and ninth degrees of freedom can be adjusted without changing the position, so these 3 degrees of freedom are used to adjust the posture. After the joint space of the first 6 degrees of freedom is obtained, the posture R₁ of the camera robot's end hand at a certain moment can be solved by forward kinematics, and from the attitude information generated by the attitude planning of step 1 the posture R₂ at the target position is obtained. The posture R by which the robot end hand must be adjusted from that moment to the target position is then:

R = R₁^T R₂ (11)
Step 2-2-2, solving the equivalent rotation axis by means of the rotation transformation:

$$R(i)=\mathrm{Rot}(f,\theta_t)=\begin{bmatrix}
f_xf_x\,\mathrm{vers}\,\theta_t+\cos\theta_t & f_yf_x\,\mathrm{vers}\,\theta_t-f_z\sin\theta_t & f_zf_x\,\mathrm{vers}\,\theta_t+f_y\sin\theta_t & 0\\
f_yf_x\,\mathrm{vers}\,\theta_t+f_z\sin\theta_t & f_yf_y\,\mathrm{vers}\,\theta_t+\cos\theta_t & f_zf_y\,\mathrm{vers}\,\theta_t-f_x\sin\theta_t & 0\\
f_zf_x\,\mathrm{vers}\,\theta_t-f_y\sin\theta_t & f_yf_z\,\mathrm{vers}\,\theta_t+f_x\sin\theta_t & f_zf_z\,\mathrm{vers}\,\theta_t+\cos\theta_t & 0\\
0 & 0 & 0 & 1
\end{bmatrix} \qquad (12)$$

where f = [f_x f_y f_z]^T is the equivalent rotation axis of the general rotation transformation, corresponding to the seventh, eighth and ninth degrees of freedom of the robot end hand; θ_t is the angle difference between the postures R₁ and R₂ at time t; and vers θ_t = 1 − cos θ_t.

Since the trigonometric functions of θ_t in equation (12) are known and the posture adjustment R can be calculated by formula (11), the equivalent rotation axis components f_x, f_y, f_z for the seventh, eighth and ninth degrees of freedom of the robot end hand can be calculated.
Step 2-2-3. After f_x, f_y, f_z are calculated, the adjustment amounts of the robot with respect to the initial attitude, that is, the joint spaces of the seventh, eighth and ninth degrees of freedom, can be obtained.

Through the above steps, all the information of the complete joint space changing with time is obtained. The calculated values of the joint space θ over time are supplemented into the data structure generated in step 1-3.
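Steps 2-2-2 and 2-2-3 amount to recovering the equivalent axis and angle from the relative rotation R; a minimal sketch of that inverse computation (valid away from θ_t = 0 or π; the test rotation is illustrative):

```python
# Sketch of the inverse of eq. (12): recover the equivalent rotation axis
# f = [fx, fy, fz] and angle theta_t from a 3x3 rotation matrix R, using
#   theta_t = acos((trace(R) - 1) / 2),  f = (R - R^T terms) / (2 sin theta_t).
import math

def axis_angle(R):
    """R is a 3x3 rotation matrix as nested lists; returns (f, theta_t)."""
    trace = R[0][0] + R[1][1] + R[2][2]
    theta = math.acos(max(-1.0, min(1.0, (trace - 1.0) / 2.0)))
    s = 2.0 * math.sin(theta)            # assumes 0 < theta_t < pi
    f = [(R[2][1] - R[1][2]) / s,
         (R[0][2] - R[2][0]) / s,
         (R[1][0] - R[0][1]) / s]
    return f, theta

# A 90-degree rotation about z: the recovered axis should be [0, 0, 1].
Rz = [[0.0, -1.0, 0.0],
      [1.0,  0.0, 0.0],
      [0.0,  0.0, 1.0]]
f, theta_t = axis_angle(Rz)
print(f, theta_t)  # axis [0, 0, 1], angle pi/2
```

The sign conventions match equation (12): subtracting the transposed off-diagonal entries cancels the vers θ_t terms and leaves 2 f · sin θ_t.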
Step 3, simulating with the joint space information of nine degrees of freedom, comparing the curve obtained from the simulation with different fitted curves, and selecting an optimal value as the control points of the driving process.
Through the previous steps, the joint spaces of the camera robot are obtained, and the robot can be driven to move by them. As mentioned above, each joint space corresponds to a position of the end hand of the camera robot in Cartesian space, and these positions are the discrete points mentioned in step 1-3. The larger the number of these discrete points, the more joint spaces correspond to them and the more control information is transmitted to the camera robot. Theoretically, the more joint space information, the better for the control accuracy of the driven camera robot. In the actual driving process, however, the devices driving the robot (such as servo motors) need a running process to start, stop or change speed: when the camera robot goes from standstill to a certain joint speed, it inevitably undergoes a process of first accelerating and then holding the speed constant. If the externally input control information is too frequent, a new acceleration or deceleration process starts before the previous joint speed has been reached; the joint motion is then consumed in continuous acceleration and deceleration, and the running speed of each joint of the robot is likely never to reach its intended value.

For the above reasons it is necessary to select among the values obtained in the previous step, and the number of selected discrete points should be as small as possible while ensuring that the driving process remains accurate.
In this embodiment, the process of solving the robot motion trajectory from the joint speeds is realized by simulated PTP (point-to-point) motion. In this process, a simulation algorithm selects from the joint space θ obtained in step 2 the minimum set of control points required for driving the robot.

As is common knowledge to those skilled in the art, there are three modes of joint linkage during robot driving: asynchronous, synchronous, and fully synchronous. In this embodiment, the simulated PTP process uses the fully synchronous cooperative mode.

PTP motion has three stages: acceleration, constant speed and deceleration. From t = 0 to t = t_a (t_a is called the acceleration time) is the acceleration stage; from t = t_a to t = t_d (t_d denotes the uniform-speed completion time) is the constant-speed stage; and finally, up to t = t_e (t_e denotes the deceleration completion time) is the deceleration stage. As shown in Fig. 5, fully synchronous PTP motion requires all joints of the robot to satisfy t_a1 = t_a2 = … = t_ai, t_d1 = t_d2 = … = t_di, t_e1 = t_e2 = … = t_ei. In this embodiment, the necessary control points are obtained by bisecting key points on the basis of the fully synchronous mode. The concrete implementation steps are as follows:
Step 3-1, solving the simulation result of the PTP motion, F_ptp(θ₁, θ₂, t) = X, from the speed and acceleration of the axes.
Step 3-1-1, obtaining the longest acceleration time of the i-th joint of the robot in the PTP motion from the speed and acceleration parameters of the axes:

t_ai = v_imax / a_imax (13)

t_ai = √(Δθ_i / a_imax) (14)

where v_imax denotes the maximum speed of the axis motor, a_imax denotes the maximum acceleration of the axis motor, and Δθ_i denotes the distance to be traversed; formula (14) applies when the distance is too short for the axis to reach its maximum speed, the smaller of the two times being taken.
Step 3-1-2, calculating the longest acceleration completion time, the longest uniform-speed completion time and the longest deceleration completion time of the whole robot from the result of the previous step.

Specifically, the uniform-speed and deceleration completion times of the i-th joint are first calculated by the following formulas (15) and (16):

t_ei = Δθ_i / v_imax + t_ai (15)

t_di = t_ei − t_ai (16)

where t_ei denotes the deceleration completion time and t_di the uniform-speed completion time. The longest acceleration, uniform-speed and deceleration completion times of the whole robot are then calculated by:

maxT_a = max(t_a1, t_a2, …, t_ai)

maxT_d = max(t_d1, t_d2, …, t_di) (17)

maxT_e = max(t_e1, t_e2, …, t_ei)
Step 3-1-3, calculating the running speed and running acceleration of each joint under uniform acceleration and uniform speed:

V_i = Δθ_i / maxT_d (18)

A_i = V_i / maxT_a (19)

where V_i is the running speed of joint i and A_i is the running acceleration of joint i.
Step 3-1-4, calculating from the above results the expression θ_t of the joint space over time in the PTP run:

$$\theta_t=\begin{cases}\dfrac{1}{2}A_i t^2, & 0<t\le \max T_a\\[4pt] V_i t-\dfrac{1}{2}\dfrac{V_i^2}{A_i}, & \max T_a<t\le \max T_d\\[4pt] V_i\max T_e-\dfrac{A_i}{2}(\max T_e-t)^2, & \max T_d<t\le \max T_e\end{cases} \qquad (20)$$

Substituting the joint space θ_t into the forward kinematics yields the simulation result F_ptp(θ_istart, θ_iend, t).
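The fully synchronous profile of equations (13)-(20) can be sketched as follows; the per-joint acceleration-time cap √(Δθ/a) and the common finish time maxT_a + maxT_d are assumptions made to keep the sketch self-consistent, and the motor limits are illustrative:

```python
# Sketch of the fully synchronous trapezoidal PTP profile (eqs. (13)-(20)):
# all joints share the switch times maxTa, maxTd, and each joint's speed V_i
# and acceleration A_i are rescaled so every joint arrives together.
def ptp_profile(dists, v_max, a_max):
    # Per-joint phase times under the motor limits (eqs. (13)-(16)).
    ta = [min(v / a, (d / a) ** 0.5) for d, v, a in zip(dists, v_max, a_max)]
    te = [d / v + t for d, v, t in zip(dists, v_max, ta)]
    td = [e - t for e, t in zip(te, ta)]
    maxTa, maxTd = max(ta), max(td)       # longest shared times (eq. (17))
    V = [d / maxTd for d in dists]        # eq. (18): synchronized speed
    A = [v / maxTa for v in V]            # eq. (19): synchronized acceleration
    Tend = maxTa + maxTd                  # common finish time of the profile

    def theta(i, t):                      # eq. (20): position of joint i at t
        if t <= maxTa:
            return 0.5 * A[i] * t * t
        if t <= maxTd:
            return V[i] * t - 0.5 * V[i] ** 2 / A[i]
        return dists[i] - 0.5 * A[i] * (Tend - t) ** 2

    return theta, Tend

theta, Tend = ptp_profile(dists=[1.0, 0.5], v_max=[0.5, 0.5], a_max=[1.0, 1.0])
print(abs(theta(0, Tend) - 1.0) < 1e-9, abs(theta(1, Tend) - 0.5) < 1e-9)  # True True
```

Both joints reach their target distances at the same instant Tend, which is the point of the fully synchronous mode.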
Step 3-2, calculating the degree of correlation of the fitting.

The fitting correlation is obtained as the ratio of the difference in Cartesian-space distance, at corresponding times, between the PTP motion and the target curve (e.g. a Bézier curve) from the motion trajectory planning of step 1, to the Cartesian-space length of the target curve:

δ = |X_ptp(t) − X_target(t)| / L_target (21)

If the δ calculated by the above formula is within a threshold ε, the curve fitting is considered successful. Otherwise, a key point is supplemented at the temporal midpoint of the target curve, dividing it into two sections, for each of which the PTP fitting is solved; step 3-1 is then repeated until the whole curve is fitted. If the fitting still cannot be completed, the threshold ε is adjusted.
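A toy sketch of this subdivision loop (1-D curves for brevity; a straight chord stands in for the PTP simulation of each segment, and the symbols of eq. (21) are the assumed ones introduced above):

```python
# Toy sketch of step 3-2: judge fit quality by max deviation / curve length;
# on failure, supplement a key point at the temporal midpoint and refit each
# half. A straight chord stands in for the PTP simulation of a segment.

def chord(target, t0, t1):
    """Stand-in for the PTP simulation on [t0, t1]: a straight chord."""
    y0, y1 = target(t0), target(t1)
    return lambda t: y0 + (y1 - y0) * (t - t0) / (t1 - t0)

def fit_ratio(sim, target, t0, t1, length):
    times = [t0 + (t1 - t0) * k / 10.0 for k in range(11)]
    return max(abs(sim(t) - target(t)) for t in times) / length

def fit_segments(target, t0, t1, length, eps, depth=0):
    sim = chord(target, t0, t1)           # "re-simulate" this segment
    if fit_ratio(sim, target, t0, t1, length) <= eps or depth >= 8:
        return [(t0, t1)]
    tm = 0.5 * (t0 + t1)                  # key point at the temporal midpoint
    return (fit_segments(target, t0, tm, length, eps, depth + 1)
            + fit_segments(target, tm, t1, length, eps, depth + 1))

target = lambda t: t + 0.05 * t * (1.0 - t)     # gently curved 1-D target
print(len(fit_segments(target, 0.0, 1.0, 1.0, eps=0.02)))  # 1: fit succeeds
print(len(fit_segments(target, 0.0, 1.0, 1.0, eps=0.01)))  # 2: one midpoint split
```

With the looser threshold the whole curve passes as one segment; tightening ε forces exactly one midpoint subdivision, mirroring the described key-point supplementation.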
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and are not limiting. Although the present invention has been described in detail with reference to the embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents substituted without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. A motion driving method for a robot having 9 degrees of freedom, wherein a moving joint of the robot provides a first degree of freedom, a shoulder joint a second degree of freedom, an elbow joint a third degree of freedom, a wrist joint a fourth degree of freedom, a hand joint a fifth, a sixth and a seventh degree of freedom, and self-rotation joints an eighth and a ninth degree of freedom; the method comprising the following steps:
step 1), realizing motion trajectory planning and posture planning of the robot tail end hand by using information of positions and postures of the robot tail end hand on a plurality of points in a Cartesian space, which is provided by a user, and obtaining the position of the robot tail end hand in the whole driving process and the posture of the robot tail end hand in the whole driving process;
step 2), solving joint space of the robot from a first degree of freedom to a sixth degree of freedom at a certain time point by using the position information of the robot end hand obtained in the step 1), and solving joint space of the robot from a seventh degree of freedom to a ninth degree of freedom at the time point by using the posture information of the robot end hand obtained in the step 1); the resulting joint space of nine degrees of freedom is used to drive the motion of the robot.
2. The motion driving method of a robot according to claim 1, further comprising:
and 3) simulating by using the joint space information of the robot with nine degrees of freedom at a plurality of time points, which is obtained in the step 2), and selecting a better value from the joint space information of the nine degrees of freedom at the plurality of time points as a control point according to a simulation result.
3. The method of driving a robot according to claim 1 or 2, wherein in the step 1), the motion trajectory planning comprises:
and performing curve fitting on the position information of the terminal hand of the robot provided by the user on a plurality of points in a Cartesian space to obtain at least one motion trail curve.
4. The method of claim 3, wherein the curve fitting is performed by a B-spline method, a Bezier method, or a combination of a B-spline and a Bezier curve.
5. The method of driving a robot according to claim 1 or 2, wherein in the step 1), the pose planning comprises:
step a), taking two adjacent points with known attitude information in Cartesian space as an initial position and a target position respectively, and converting the attitude information of the initial position and the target position from Euler-angle form to quaternion form, denoted Q₁ and Q₂ respectively;
step b), solving for the attitude between the initial position and the target position at any moment, namely: Q_n = Q₁ + (Q₂ − Q₁) × (t_n − t₁);
step c), converting the attitude obtained in step b) back to Euler-angle form.
6. The motion driving method of a robot according to claim 1 or 2, wherein the step 2) comprises:
step 2-1), selecting position information and attitude information at a plurality of time points from the position information and attitude information in the whole driving process obtained in the step 1);
step 2-2), substituting the position information at the plurality of time points obtained in step 2-1) into the calculation formula of the gradient projection method to obtain the joint spaces of the robot from the first degree of freedom to the sixth degree of freedom at the plurality of time points;
step 2-3), calculating the adjustment amount of the attitude according to the joint space of the robot from the first degree of freedom to the sixth degree of freedom calculated in the step 2-2) and the attitude information of the robot at the plurality of time points obtained in the step 2-1), and calculating the joint space of the robot from the seventh degree of freedom to the ninth degree of freedom at the plurality of time points according to the adjustment amount of the attitude.
7. The method of claim 6, wherein in the step 2-2), the gradient projection method is calculated by the following formula:

θ̇ = J⁺Ẋ + α(I − J⁺J)φ̇

In the above formula, θ̇ represents the joint velocity, J is the Jacobian matrix, J⁺ is the pseudo-inverse matrix of J, Ẋ is the first derivative of the position with respect to time, φ̇ is the free vector, and α represents the amplification coefficient; wherein,

φ̇ is the 9-dimensional vector ∇H(θ), with H(θ) = β_W·(1/W(θ)) + β_D·(1/D(θ)) + β_L·L(θ) + β_R·R(θ) and ∇ representing the gradient;

β_W, β_D, β_L, β_R are weighting coefficients taking values in [0, 1];

W is the manipulability function, W = √(det(JJ^T));

D(θ) is the obstacle avoidance function,

$$D_{ij}(\theta)=\begin{cases}\dfrac{1}{2}\eta\left(\dfrac{1}{d_{ij}(\theta)}-\dfrac{1}{d_0}\right), & d_{ij}(\theta)\le d_0\\[4pt]0, & d_{ij}(\theta)>d_0\end{cases}$$

where θ represents a certain posture in the joint space, i is the number of the capsule-like bounding box, j is the number of the obstacle, d₀ is the safe-distance threshold and η is a coefficient;

L(θ) is the joint constraint function,

$$L(\theta)=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\theta_i-a_i}{a_i-\theta_{i\max}}\right)^2$$

where a_i = (θ_imax + θ_imin)/2 is the median of the allowable range of each joint, θ_imax is the maximum of the i-th joint angle, θ_imin the minimum of the i-th joint angle, and n the number of joints;

α is chosen so that Ḣ(θ) = λ with λ < 0.
8. The motion driving method of a robot according to claim 6, wherein the step 2-3) comprises:
step 2-3-1), calculating the attitude R of the robot at a certain moment from the joint space from the first degree of freedom to the sixth degree of freedom calculated in the step 2-2)1Obtaining the attitude R at the target position from the attitude information generated by the attitude planning of the step 1)2Calculating the posture R of the tail end hand of the robot, which needs to be adjusted from a certain moment to a target position:
R = R 1 T R 2
step 2-3-2), solving for the equivalent rotation axis by means of the rotation transformation:
R = Rot(f, θ_t) =
[ f_x f_x vers θ_t + cos θ_t     f_y f_x vers θ_t − f_z sin θ_t    f_z f_x vers θ_t + f_y sin θ_t    0 ]
[ f_y f_x vers θ_t + f_z sin θ_t    f_y f_y vers θ_t + cos θ_t     f_z f_y vers θ_t − f_x sin θ_t    0 ]
[ f_z f_x vers θ_t − f_y sin θ_t    f_y f_z vers θ_t + f_x sin θ_t    f_z f_z vers θ_t + cos θ_t     0 ]
[ 0                                 0                                 0                                 1 ]
where (f_x, f_y, f_z) is the equivalent rotation axis of the Cartesian-space rotation transformation and corresponds to the seventh, eighth and ninth degrees of freedom of the end hand of the robot; θ_t is the angle difference between attitudes R_1 and R_2 at time t; and vers θ_t = 1 − cos θ_t;
step 2-3-3), calculating from the equivalent rotation axis the adjustment of the robot relative to its initial attitude, namely the joint space of the seventh, eighth and ninth degrees of freedom.
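The equivalent axis and angle of step 2-3-2) can also be recovered numerically from the relative rotation R = R_1^T R_2. The following is a minimal sketch of the standard axis-angle extraction (via the matrix trace and skew-symmetric part), not the patent's exact procedure:

```python
import numpy as np

def rotation_adjustment(R1, R2):
    """Equivalent rotation axis f and angle theta taking attitude R1 to R2."""
    R = R1.T @ R2                            # relative rotation
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if np.isclose(theta, 0.0):
        return np.zeros(3), 0.0              # attitudes already coincide
    # Axis from the skew-symmetric part of R
    f = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return f, theta

# Demo: a 0.5 rad rotation about z should be recovered exactly
c, s = np.cos(0.5), np.sin(0.5)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
f, theta = rotation_adjustment(np.eye(3), Rz)
```

The returned (f, theta) pair corresponds to the adjustment of the seventh, eighth and ninth degrees of freedom in the claim.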
9. The motion driving method of a robot according to claim 2, wherein the step 3) comprises:
step 3-1), solving a simulation result of the PTP motion from the speed and acceleration of the axis;
step 3-2), solving the fitting similarity between the simulation result of the PTP motion and the target curve, and, according to the fitting similarity, selecting better values from the joint-space information of the nine degrees of freedom at the plurality of time points as control points.
10. The motion driving method of a robot according to claim 9, wherein the step 3-1) comprises:
step 3-1-1), obtaining the longest acceleration time t_ai of the i-th joint of the robot in the PTP motion according to the speed and acceleration parameters of the axis (the two defining equations are given only as images in the original); where V_i,max represents the maximum speed of the axis motor, A_i,max represents the maximum acceleration of the axis motor, and D_i represents the distance of the transformation (its expression is likewise given only as an equation image);
step 3-1-2), calculating, from the results of the previous step, the longest acceleration-completion time, the longest uniform-speed-completion time and the longest end time of the whole robot (the expression for t_ei is given only as an equation image in the original):
t_di = t_ei − t_ai
maxT_a = max(t_a1, t_a2, …, t_ai)
maxT_d = max(t_d1, t_d2, …, t_di)
maxT_e = max(t_e1, t_e2, …, t_ei)
where t_ei denotes the end time and t_di denotes the uniform-speed-completion time;
step 3-1-3), calculating the running speed and running acceleration of each joint under uniform acceleration and uniform speed (the expression for V_i is given only as an equation image in the original):
A_i = V_i / maxT_a
where V_i is the running speed of joint i and A_i is the running acceleration of joint i;
step 3-1-4), calculating from the above results the expression θ_t of the joint space over time during the PTP run, and substituting it into the forward kinematics to obtain the simulation result F_ptp(θ_i,start, θ_i,end, t); wherein
θ_t = (1/2) A_i t²,                             0 < t ≤ maxT_a
θ_t = V_i t − (1/2) V_i²/A_i,                   maxT_a < t ≤ maxT_d
θ_t = V_i maxT_e − (A_i/2)(maxT_e − t)²,        maxT_d < t ≤ maxT_e.
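The piecewise expression for θ_t is a trapezoidal velocity profile: accelerate until maxT_a, cruise until maxT_d, decelerate until maxT_e. A minimal sketch follows; note that it writes the deceleration branch as V·maxT_d − (A/2)(maxT_e − t)² (total distance minus the remaining parabola) and assumes maxT_d = maxT_e − maxT_a, which keeps the profile continuous across the phase boundaries:

```python
def ptp_position(t, V, A, maxTa, maxTd, maxTe):
    """Joint position under a trapezoidal velocity profile.

    Phases follow the claim's segmentation:
      (0, maxTa]      uniform acceleration
      (maxTa, maxTd]  uniform (cruise) speed
      (maxTd, maxTe]  uniform deceleration
    """
    if t <= maxTa:
        return 0.5 * A * t * t               # accelerating parabola
    if t <= maxTd:
        return V * t - 0.5 * V * V / A       # cruise line
    return V * maxTd - 0.5 * A * (maxTe - t) ** 2   # decelerating parabola

# Example parameters (assumed, for illustration only)
V, A = 2.0, 4.0
maxTa = V / A                 # 0.5 s to reach cruise speed
maxTe = 3.0
maxTd = maxTe - maxTa         # deceleration mirrors acceleration
```

Sampling this function at the planning time step yields the joint trajectory that is fed into the forward kinematics to produce the PTP simulation result.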
11. the motion driving method of a robot according to claim 10, wherein the step 3-2) comprises:
step 3-2-1), calculating the degree of similarity using the principle of autocorrelation (the similarity measure is given only as equation images in the original);
step 3-2-2), comparing the calculated similarity value with a preset threshold; if it satisfies the threshold, the curve fitting is judged successful; otherwise a key point is supplemented at the time midpoint of the target curve, dividing the target curve into two segments, and step 3-1) is repeated until the whole curve is fitted; the key points used in the fitting process are the control points.
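The recursive key-point supplementation of step 3-2-2) can be sketched as follows. The patent's autocorrelation similarity measure survives only as equation images, so the `similar` predicate below (a max-deviation check in the demo) is purely an assumed stand-in, as is the chord-based `simulate`:

```python
def fit_segment(simulate, target, t0, t1, similar, keypoints):
    """Claim 11 control flow: accept the segment if the PTP simulation
    fits the target curve well enough; otherwise supplement a key point
    at the time midpoint and recurse on the two halves."""
    sim = simulate(t0, t1)
    if similar(sim, target, t0, t1):
        return
    tm = 0.5 * (t0 + t1)             # supplement a key point at the midpoint
    keypoints.append(tm)
    fit_segment(simulate, target, t0, tm, similar, keypoints)
    fit_segment(simulate, target, tm, t1, similar, keypoints)

# Toy demo: approximate t^2 by chords, splitting while error is too large
target = lambda t: t * t

def simulate(t0, t1):
    return (t0, target(t0), t1, target(t1))      # chord endpoints

def similar(sim, target, t0, t1, tol=0.05):
    _, y0, _, y1 = sim
    ts = [t0 + (t1 - t0) * k / 10 for k in range(11)]
    err = max(abs(y0 + (y1 - y0) * (t - t0) / (t1 - t0 + 1e-12) - target(t))
              for t in ts)
    return err < tol

kp = []
fit_segment(simulate, target, 0.0, 1.0, similar, kp)
```

After the run, `kp` holds the supplemented key points, which play the role of the claim's control points.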
CN201010624276.9A 2010-12-31 2010-12-31 Motion driving method for robot with nine degrees of freedom Active CN102528802B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010624276.9A CN102528802B (en) 2010-12-31 2010-12-31 Motion driving method for robot with nine degrees of freedom

Publications (2)

Publication Number Publication Date
CN102528802A true CN102528802A (en) 2012-07-04
CN102528802B CN102528802B (en) 2014-12-03

Family

ID=46337462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010624276.9A Active CN102528802B (en) 2010-12-31 2010-12-31 Motion driving method for robot with nine degrees of freedom

Country Status (1)

Country Link
CN (1) CN102528802B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106054599B (en) * 2016-05-25 2019-06-14 哈尔滨工程大学 A kind of delay control method of master-slave mode submarine mechanical arm

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0383951A1 (en) * 1988-08-31 1990-08-29 Fanuc Ltd. Vertical articulated robot
JPH04365102A (en) * 1991-06-12 1992-12-17 Mitsubishi Electric Corp Robot control method
JP2000112510A (en) * 1998-10-09 2000-04-21 Kobe Steel Ltd Robot teaching method and its device
CN1883887A (en) * 2006-07-07 2006-12-27 中国科学院力学研究所 Robot obstacle-avoiding route planning method based on virtual scene
JP2009107074A (en) * 2007-10-30 2009-05-21 Olympus Medical Systems Corp Manipulator apparatus and medical device system


Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102794767A (en) * 2012-08-31 2012-11-28 江南大学 B spline track planning method of robot joint space guided by vision
CN103853043A (en) * 2012-11-30 2014-06-11 北京配天大富精密机械有限公司 Method for realizing synchronous PTP motion in robots and device thereof
CN103853043B (en) * 2012-11-30 2017-02-22 北京配天技术有限公司 Method for realizing synchronous PTP motion in robots and device thereof
CN103538067A (en) * 2013-10-08 2014-01-29 南京航空航天大学 Kinematics positive solution method of fast-solving Stewart parallel mechanism based on quaternion
CN105252548A (en) * 2015-11-03 2016-01-20 葛洲坝易普力股份有限公司 Kinematic performance analysis method of irregular RPR, RP and PR type mechanical arm connecting rod coordinate systems
CN105252548B (en) * 2015-11-03 2017-03-08 葛洲坝易普力股份有限公司 The Kinematics Analysis method of irregular RPR, RP and PR type robot linkage coordinate system
CN105563482A (en) * 2015-12-01 2016-05-11 珞石(北京)科技有限公司 Rotation movement planning method for end effector of industrial robot
CN108121833A (en) * 2016-11-29 2018-06-05 沈阳新松机器人自动化股份有限公司 A kind of redundant degree of freedom robot is against solution method
CN107980109A (en) * 2017-01-04 2018-05-01 深圳配天智能技术研究院有限公司 Robot motion's method for planning track and relevant apparatus
CN108326844A (en) * 2017-01-20 2018-07-27 香港理工大学深圳研究院 The motion planning method and device of the operable degree optimization of redundancy mechanical arm
CN109699177A (en) * 2017-08-21 2019-04-30 韩华精密机械株式会社 Robot teaching device, method and system
CN107665616A (en) * 2017-09-15 2018-02-06 北京控制工程研究所 A kind of nine-degree of freedom motion simulator relative motion equivalent method and system
CN107665616B (en) * 2017-09-15 2019-10-22 北京控制工程研究所 A kind of nine-degree of freedom motion simulator relative motion equivalent method and system
CN108378922A (en) * 2018-02-28 2018-08-10 哈尔滨工业大学 A kind of micro-wound operation robot force feedback main manipulator with redundant degree of freedom
CN108839025A (en) * 2018-07-12 2018-11-20 杭州电子科技大学 A kind of motion planning method and device of mobile mechanical arm
CN109032081A (en) * 2018-08-10 2018-12-18 山东易码智能科技股份有限公司 Multi-axis robot point synchronization control method and system based on S curve acceleration and deceleration
CN109032081B (en) * 2018-08-10 2019-10-01 山东易码智能科技股份有限公司 Multi-axis robot point synchronization control method and system based on S curve acceleration and deceleration
CN109531573A (en) * 2018-12-25 2019-03-29 珞石(山东)智能科技有限公司 One kind being based on line transect robot pose smooth path generation method
CN113710432A (en) * 2019-04-16 2021-11-26 西门子股份公司 Method for determining a trajectory of a robot
CN110216670A (en) * 2019-04-30 2019-09-10 武汉理工大学 A kind of industrial robot automatic obstacle-avoiding method and device based on loss field
CN110216670B (en) * 2019-04-30 2022-04-15 武汉理工大学 Industrial robot automatic obstacle avoidance method and device based on loss field
CN110815226A (en) * 2019-11-15 2020-02-21 四川长虹电器股份有限公司 Method for returning to initial position at any posture and any position of robot
CN110815226B (en) * 2019-11-15 2022-03-01 四川长虹电器股份有限公司 Method for returning to initial position at any posture and any position of robot
WO2021103699A1 (en) * 2019-11-29 2021-06-03 沈阳通用机器人技术股份有限公司 Motion optimization method for robot having redundant degree of freedom
WO2021184655A1 (en) * 2020-03-19 2021-09-23 南京溧航仿生产业研究院有限公司 Method for planning motion along trajectory of end of hyper-redundant mechanical arm
CN112256023A (en) * 2020-09-28 2021-01-22 南京理工大学 Bezier curve-based airport border patrol robot local path planning method and system
CN112256023B (en) * 2020-09-28 2022-08-19 南京理工大学 Bezier curve-based airport border patrol robot local path planning method and system
CN112428275A (en) * 2020-11-30 2021-03-02 深圳市优必选科技股份有限公司 Robot motion planning method and device, movable robot and storage medium
CN112428275B (en) * 2020-11-30 2022-04-19 深圳市优必选科技股份有限公司 Robot motion planning method and device, movable robot and storage medium
CN113524171A (en) * 2021-05-26 2021-10-22 南京玖玖教育科技有限公司 Control method, system, robot, device and medium for multi-degree-of-freedom robot
CN114505846A (en) * 2022-03-02 2022-05-17 中国科学院沈阳自动化研究所 Multi-degree-of-freedom redundant mechanical arm capable of achieving positioning and posture adjustment
CN114505846B (en) * 2022-03-02 2024-02-06 中国科学院沈阳自动化研究所 Multi-degree-of-freedom redundant mechanical arm capable of achieving positioning and posture adjustment
CN114378833A (en) * 2022-03-23 2022-04-22 珞石(北京)科技有限公司 Mechanical arm track planning method based on robust constraint control

Also Published As

Publication number Publication date
CN102528802B (en) 2014-12-03

Similar Documents

Publication Publication Date Title
CN102528802A (en) Motion driving method for robot with nine degrees of freedom
CN102122172B (en) Image pickup system and control method thereof for machine motion control
CN107490965B (en) Multi-constraint trajectory planning method for space free floating mechanical arm
CN111399514B (en) Robot time optimal track planning method
CN106647282B (en) Six-degree-of-freedom robot trajectory planning method considering tail end motion error
CN108241339B (en) Motion solving and configuration control method of humanoid mechanical arm
Kabir et al. Generation of synchronized configuration space trajectories of multi-robot systems
CN105772917B (en) A kind of three joint spot welding robot's Trajectory Tracking Control methods
CN106777475B (en) A kind of injection machine arm dynamics synergy emulation method of confined space constraint
CN102554938A (en) Tracking method for mechanical arm tail end trajectory of robot
CN105773620A (en) Track planning and control method of free curve of industrial robot based on double quaternions
CN108549321B (en) Industrial robot track generation method and system integrating time energy jump degree
Rezende et al. Constructive time-varying vector fields for robot navigation
Dou et al. Inverse kinematics for a 7-DOF humanoid robotic arm with joint limit and end pose coupling
CN110861088A (en) Motion optimization method of redundant degree of freedom robot
CN113671960B (en) Autonomous navigation and control method of magnetic micro-nano robot
Luo et al. Repulsive reaction vector generator for whole-arm collision avoidance of 7-DoF redundant robot manipulator
CN113568422B (en) Four-foot robot control method based on model predictive control optimization reinforcement learning
Pikalov et al. Vector model for solving the inverse kinematics problem in the system of external adaptive control of robotic manipulators
CN114347017B (en) Curved surface motion control method of adsorption type mobile processing robot based on plane projection
Chen et al. Kinematics optimization of a novel 7-DOF redundant manipulator
Wehbe et al. Novel three-dimensional optimal path planning method for vehicles with constrained pitch and yaw
Banga Optimal Trajectory Planning Analysis of Robot Manipulator Using PSO
Chen et al. Robot autonomous grasping and assembly skill learning based on deep reinforcement learning
Han et al. Path planning for robotic manipulator in narrow space with search algorithm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: BEIJING GLISEE TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: INSTITUTE OF COMPUTING TECHNOLOGY, CHINESE ACADEMY OF SCIENCES

Effective date: 20131114

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20131114

Address after: 100190 Beijing City, Haidian District Zhongguancun Road No. 89 building 14H Hengxing

Applicant after: Beijing Zhongke Guangshi Technology Co., Ltd.

Address before: 100190 Haidian District, Zhongguancun Academy of Sciences, South Road, No. 6, No.

Applicant before: Institute of Computing Technology, Chinese Academy of Sciences

C14 Grant of patent or utility model
GR01 Patent grant