CN111152220A - Mechanical arm control method based on man-machine fusion - Google Patents
- Publication number
- CN111152220A (application number CN201911419535.1A)
- Authority
- CN
- China
- Prior art keywords
- arm
- mechanical arm
- joint angle
- joint
- human
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J3/00—Manipulators of master-slave type, i.e. both controlling unit and controlled unit perform corresponding spatial movements
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
Abstract
The invention relates to a mechanical arm control method based on man-machine fusion, and belongs to the technical field of robot control. The control method comprises the following steps: (1) acquiring a real-time scene image of the surroundings of a target object; (2) displaying the real-time scene image at an operation end, acquiring human arm joint angle data, and mapping the arm joint angle data into mapped joint angle data θH for controlling the mechanical arm; (3) planning a preset operation path from the current position data and the position data of the predicted target object with a polynomial path planning method, and obtaining planned joint angle data θR for controlling the mechanical arm; (4) calculating the currently executed joint angle data of the mechanical arm according to the formula θS = (1−α)θH + αθR. The method well combines the complementary strengths of the human arm and the mechanical arm, and is applicable to the technical field of remote robot control.
Description
Technical Field
The invention relates to the technical field of robot control, in particular to a control method of a heterogeneous mechanical arm based on man-machine fusion.
Background
With the development of robotics, to ensure the personal safety of the operator, a camera is usually mounted on the robot (for example, the panoramic imaging system disclosed in patent document CN108082415A), or cameras are arranged at the work site to obtain scene images. This spares the operator from entering hazardous environments such as deep-sea work sites or bullet-removal and mine-sweeping operations that would endanger his or her safety.
After the scene image of the environment is acquired, the human arm simulates the field operation action, and the joint angle data of the human arm during the demonstration action is acquired, for example with the target-object walking-posture acquisition method disclosed in patent document CN107967687A; the robot is then controlled to perform the field operation based on the joint angle data.
This remote control method well protects the operator's safety, but because the person judges the site environment and distances from image data, judgment errors arise when complete three-dimensional information of the site is hard to obtain. As a result, it is difficult to control the action of the mechanical arm directly from the joint data of the human arm acquired during the demonstration action.
In addition, although the mechanical arm used in the operation is of a humanoid structure, it is rarely homogeneous with the human arm: the degrees of freedom and the arm lengths differ, i.e. the two are generally heterogeneous. It is therefore difficult to control the humanoid mechanical arm directly with the joint data of the human arm. Theoretical control data for the mechanical arm can be obtained directly through a workspace matrix, but owing to the heterogeneity of the human arm and the mechanical arm, the directly calculated theoretical data sometimes cannot be executed, and the follow-up control then fails.
Disclosure of Invention
The invention mainly aims to provide a control method of a mechanical arm, which can better utilize a scene image on site to remotely control a robot;
Another object of the present invention is to provide a mechanical arm control method that, during the following control of a heterogeneous humanoid mechanical arm by a human arm, executes approximately those actions which cannot be executed exactly, thereby achieving better following control.
In order to achieve the above purpose, the mechanical arm control method provided by the invention is based on a man-machine fusion strategy and specifically comprises the following steps: (1) acquiring a real-time scene image of the surroundings of the target object; (2) displaying the real-time scene image at the operation end, acquiring the series of human arm joint angle data during the demonstration action of the human arm, and mapping the human arm joint angle data into mapped joint angle data θH for controlling the mechanical arm; (3) predicting the target object from the current position data and a human arm trajectory probability distribution model, planning a preset operation path with a polynomial path planning method, and obtaining planned joint angle data θR for controlling the mechanical arm from the preset operation path; (4) calculating the currently executed joint angle data of the mechanical arm according to the formula θS = (1−α)θH + αθR, where 0 ≤ α ≤ 1 is the human-machine weight coefficient, which increases as the prediction confidence increases; the confidence in turn increases as the distance between the execution end of the mechanical arm and the predicted target object decreases, once that distance is smaller than a preset distance; (5) controlling the mechanical arm to execute the operation action with the currently executed joint angle data as the control command.
In the control process, the joint angle data are first weighted according to the action currently input by the user and the confidence of the predicted target object, realizing man-machine fusion: at long range the motion data of the human arm dominate the control, while at short range the on-site planning scheme dominates. This effectively avoids judgment errors caused by the person viewing only an image of the field operation environment. Moreover, because the scheme is based on a polynomial path planning method, damage to the target object caused by manual control during the process is effectively avoided.
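As a minimal illustration of the fusion formula θS = (1−α)θH + αθR in step (4), the blending can be sketched as follows; the function name and list representation of the joint vector are illustrative assumptions, not the patent's implementation:

```python
def fuse_joint_angles(theta_h, theta_r, alpha):
    """Blend human-mapped angles theta_H and planner angles theta_R with weight alpha in [0, 1].

    alpha = 0 gives pure human control (long range); alpha = 1 gives pure planner control (short range).
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return [(1.0 - alpha) * h + alpha * r for h, r in zip(theta_h, theta_r)]
```

Each joint is blended independently, so the executed configuration moves continuously from the human's demonstration toward the planned path as confidence grows.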
The prediction target object is an object closest to the position of the execution end.
The specific scheme is that the position data of the target object is predicted according to a human arm track probability distribution model.
More specifically, the position data of the target object is predicted with the acquired human arm trajectory probability distribution model. The most likely target is obtained by Bayesian inference over the observed trajectory:

G* = argmax_G P(G | ξ_{S→U}, θ) = argmax_G P(ξ_{S→U} | G, θ)·P(G), with P(ξ_{S→U} | G) ∝ exp(−C_G(ξ)),

wherein G* is the most likely target location, S is the initial location, U is the user input location, ξ_{S→U} denotes the motion trajectory from S to U, θ is any parameter that may have an effect on the prediction, and C_G is the cost function. The probability function P(G) is expressed in terms of L, the distance from the initial position of the mechanical arm end to the target position, and l, the distance from the current position of the mechanical arm end to the target position.
Further, the human-machine weight coefficient α is expressed as a function of the confidence conf, where

conf = P(G* | ξ_{S→U}) · log P(G* | ξ_{S→U}),

P(G* | ξ_{S→U}) = P(ξ_{S→U} | G*) · P(G*).
Preferably, α = 0 when the confidence lies in a first confidence interval, which occupies the lower part of the range; α increases linearly with the confidence in a second confidence interval; and α increases exponentially with the confidence in a third confidence interval, which occupies the upper part of the range. The first, second and third confidence intervals are adjacent and together cover the entire confidence range of the motion.
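The piecewise weight schedule above (zero, then linear, then exponential growth with confidence) could be sketched as follows. The breakpoints c1 and c2, the intermediate value alpha_mid, and the exact exponential rate are illustrative assumptions, since the patent gives no numeric values:

```python
import math

def human_machine_weight(conf, c1=0.4, c2=0.7, alpha_mid=0.3):
    """Piecewise alpha(conf): 0 on [0, c1), linear ramp on [c1, c2), exponential rise on [c2, 1].

    The rate k is chosen so that alpha is continuous at c2 and reaches 1.0 at conf = 1.
    """
    if not 0.0 <= conf <= 1.0:
        raise ValueError("conf must lie in [0, 1]")
    if conf < c1:
        return 0.0                                      # first interval: pure human control
    if conf < c2:
        return alpha_mid * (conf - c1) / (c2 - c1)      # second interval: linear increase
    k = math.log(1.0 / alpha_mid) / (1.0 - c2)          # third interval: exponential increase
    return min(1.0, alpha_mid * math.exp(k * (conf - c2)))
```

The schedule keeps the human fully in charge at low confidence and hands authority over to the planner only as the predicted target becomes near-certain.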
Preferably, the polynomial path planning method uses a joint angle function satisfying the formula

θ(t) = a5·t^5 + a4·t^4 + a3·t^3 + a2·t^2 + a1·t + a0,

wherein t is time and T is the time of moving to the designated position (0 ≤ t ≤ T).
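With zero velocity and acceleration imposed at both endpoints, a common choice for such quintic trajectories (the patent does not state its boundary conditions, so this is an assumption), the coefficients a0..a5 collapse to a simple closed form:

```python
def quintic_coeffs(theta0, theta_t, total_t):
    """Coefficients [a0..a5] of theta(t) = a5*t^5 + ... + a0 for a rest-to-rest quintic:
    theta(0) = theta0, theta(T) = theta_t, with zero velocity and acceleration at both ends."""
    d = theta_t - theta0
    return [theta0, 0.0, 0.0,
            10.0 * d / total_t**3,
            -15.0 * d / total_t**4,
            6.0 * d / total_t**5]

def quintic_theta(t, coeffs):
    """Evaluate the polynomial at time t."""
    return sum(a * t**i for i, a in enumerate(coeffs))
```

For example, moving a joint from 0 to 1 rad in T = 2 s passes exactly through the midpoint at t = 1 s, with smooth velocity and acceleration profiles throughout.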
It is preferable that the human-machine weight coefficient α be given a value smaller than 0.5 when the detected acceleration of a human arm joint angle exceeds a preset threshold.
In order to achieve the above further object, a preferred scheme of the invention is that the mechanical arm is a heterogeneous humanoid mechanical arm, and the step of mapping the human arm joint angle data into the mapped joint angle data θH for controlling the mechanical arm comprises: (1) matching the end workspaces of the mechanical arm and the human arm to obtain a conversion matrix between the two spaces, and calculating target wrist joint coordinate data of the mechanical arm with the conversion matrix from the current wrist joint coordinate data of the human arm and the shoulder joint coordinate data of the mechanical arm; (2) calculating elbow joint coordinate data of the humanoid mechanical arm from the target wrist joint coordinate data and the shoulder joint coordinate data, taking the direction vector of the human forearm as the forearm direction vector; (3) taking the elbow joint and shoulder joint angle data of the mechanical arm as independent variables with the end workspace of the mechanical arm as their value range, constructing a fitness function at least from the deviation of the calculated wrist joint position of the mechanical arm relative to the target wrist joint coordinate data, and searching for the optimal elbow joint and shoulder joint angle data with a genetic algorithm to serve as the elbow joint and shoulder joint angle data of the mechanical arm; and (4) calculating the wrist joint angle data of the mechanical arm from the wrist joint angle data of the human arm.
With this scheme, the conversion matrix between the end workspaces of the human arm and the mechanical arm is obtained, from which the theoretical coordinate data of the mechanical arm end and of the elbow joint are derived. The optimal elbow and shoulder joint angle data are then searched out with a genetic algorithm whose fitness function chiefly considers the calculated deviation of the end position, i.e. the precision of the execution end. The search thus yields an approximate solution satisfying the fitness requirement, so the joint angle data of the humanoid mechanical arm need not follow the joint angle data of the human arm exactly; the action can be executed approximately, enabling better heterogeneous control.
Preferably, the solving step comprises: (1) calculating the approximate wrist joint position by taking the angle data represented by each individual of each generation as an approximate solution of the elbow and shoulder joint angles; (2) obtaining the calculated deviation of the wrist joint position from the target wrist joint data and the approximate wrist joint position, and taking the solution with the minimum calculated deviation as the optimal solution.
Preferably, the fitness function is constructed from the calculated deviations of both the wrist joint position and the elbow joint position, with the wrist joint deviation weighted most heavily. By chiefly considering the execution precision of the end action while partially considering the precision of the elbow joint, i.e. giving priority to the execution precision of the action, the overall motion of the humanoid mechanical arm stays closer to that of the human arm, so that the person can more easily determine the next action through the video and better guide the field operation of the mechanical arm.
Preferably, the fitness function is fit = Dh + 0.03·De + 0.03·Sum, wherein Dh is the Euclidean distance between the approximate wrist joint position and the position represented by the target wrist joint coordinate data, De is the Euclidean distance between the approximate elbow joint position calculated from the approximate elbow joint angles and the position represented by the elbow joint coordinate data, and Sum is the sum of the joint angle values represented by the current individual. While guaranteeing end accuracy, the fitness function penalizes solutions whose elbow position deviates too far from the target, as well as distorted configurations with excessively large angle values, so that such solutions are excluded and off-site control is performed better.
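A direct transcription of fit = Dh + 0.03·De + 0.03·Sum might look like the following; taking Sum as the sum of absolute joint angle values is an assumption made here to penalize large angles of either sign:

```python
import math

def euclid(p, q):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def fitness(wrist_pred, wrist_target, elbow_pred, elbow_target, joint_angles):
    """fit = D_h + 0.03*D_e + 0.03*Sum (lower is better).

    D_h: wrist position error (dominant term); D_e: elbow position error;
    Sum: magnitude of the joint angles (absolute values assumed), discouraging distorted poses.
    """
    d_h = euclid(wrist_pred, wrist_target)
    d_e = euclid(elbow_pred, elbow_target)
    s = sum(abs(a) for a in joint_angles)
    return d_h + 0.03 * d_e + 0.03 * s
```

The 0.03 weights keep the elbow and angle-magnitude terms as tie-breakers that never override end-position accuracy.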
Preferably, the step of searching for the optimal elbow and shoulder joint angle data with the genetic algorithm comprises: (1) taking the end workspace of the mechanical arm as the value range, randomly generating a number of elbow and shoulder joint angles to construct individuals and form an initial population; (2) calculating the fitness value of each individual with the fitness function, sorting the fitness values, selecting individuals with random integers drawn from an exponential distribution, and breeding the next generation from each pair of randomly selected individuals by direct fusion; (3) replacing a preset proportion of the individuals with the largest fitness values by the offspring generated by breeding, and adding new individuals generated by mutation according to the mutation probability; (4) repeating the above steps until a preset condition is met, and taking the joint angle data represented by the individual with the smallest fitness value as the approximate solution of the elbow and shoulder joint angles.
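Steps (1) to (4) above can be sketched as a minimal genetic algorithm. The exponential-selection scaling, the interpretation of "direct fusion" as averaging two parents, and all numeric parameters are illustrative assumptions:

```python
import random

def ga_search(fitness_fn, bounds, pop_size=40, generations=60,
              replace_frac=0.3, mutate_prob=0.1):
    """Minimize fitness_fn over box bounds [(lo, hi), ...] with a simple GA.

    Selection draws indices from an exponential distribution over the sorted population,
    children are the average of two parents ("direct fusion"), and offspring replace
    the worst-fitness individuals each generation.
    """
    dim = len(bounds)
    rand_ind = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness_fn)  # ascending: best (smallest fitness) first
        pick = lambda: pop[min(int(random.expovariate(1.0) * pop_size / 4), pop_size - 1)]
        n_replace = int(replace_frac * pop_size)
        children = []
        for _ in range(n_replace):
            a, b = pick(), pick()
            child = [(x + y) / 2.0 for x, y in zip(a, b)]  # direct-fusion breeding
            if random.random() < mutate_prob:              # occasional mutation
                j = random.randrange(dim)
                child[j] = random.uniform(*bounds[j])
            children.append(child)
        pop[-n_replace:] = children  # replace the individuals with the largest fitness
    return min(pop, key=fitness_fn)
```

Because only the worst individuals are replaced, the best solution found so far is always retained, matching the elitist flavor of step (3).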
A further preferred scheme is based on the acquired three-dimensional information of the mechanical arm's workplace: the value range of the independent variables is the feasible workspace obtained by removing the interference space from the end workspace of the mechanical arm. In the control process this effectively prevents the site environment from interfering with the motion of the mechanical arm, so the operator need not judge the on-site interference environment accurately during remote control, and the motion of the mechanical arm can be controlled remotely more easily.
Drawings
FIG. 1 is a flowchart illustrating a method for controlling a robotic arm according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for mapping actions according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of modeling a human arm DH in an embodiment of the invention;
FIG. 4 is a geometric relationship between a joint point and a vector of a right arm according to an embodiment of the present invention;
FIG. 5 is a geometric relationship between the joint point and the vector of the left arm according to the embodiment of the present invention;
FIG. 6 is a degree of freedom matching relationship of motion mapping between a human arm and a mechanical arm in an embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating a matching process of the human arm and the end working space of the robot in the X-Z coordinate system according to the embodiment of the present invention;
FIG. 8 is a schematic diagram illustrating a matching process of the human arm and the end working space of the robot in the Y-Z coordinate system according to the embodiment of the present invention;
FIG. 9 is a schematic diagram illustrating a matching process of the end working spaces of the human arm and the manipulator in an X-Y coordinate system according to an embodiment of the present invention;
FIG. 10 is a schematic diagram illustrating a matching process of the human arm and the end working space of the robot in the X-Y-Z three-dimensional coordinate system according to the embodiment of the present invention;
FIG. 11 is a diagram illustrating a normalization process of elbow coordinate data of a robotic arm according to an embodiment of the present invention;
FIG. 12 is a schematic diagram illustrating modeling of a DH robotic arm according to an embodiment of the present invention;
FIG. 13 shows the relationship between the human-machine weight coefficient and the confidence;
FIG. 14 is a graph showing the relationship between the human-machine weight coefficient and the confidence coefficient in the practical experiment of the present invention;
FIG. 15 is a diagram showing the matching effect of the human action, the machine action and their fusion in the X-axis direction;
FIG. 16 is a diagram showing the matching effect of the human action, the machine action and their fusion in the Y-axis direction;
FIG. 17 is a diagram showing the matching effect of the human action, the machine action and their fusion in the Z-axis direction;
FIG. 18 is a photograph of the single-target obstacle avoidance experiment, with the first row showing machine control alone, the second row human control alone, and the third and fourth rows human-machine shared control;
FIG. 19 is a photograph of the multi-target predictive capture experiment, with the first and second rows showing long-distance target capture and the third and fourth rows short-distance target capture.
The invention is further illustrated by the following examples and figures.
Detailed Description
Examples
As shown in fig. 1, the mechanical arm control method based on man-machine fusion of the present invention comprises an acquisition step S1, a mapping step S2, a planning step S3, a fusion step S4 and a control step S5, which proceed as follows:
in the acquisition step S1, a real-time scene image around the target object is acquired.
Specifically, a camera mounted on the robot or cameras arranged at the work site are used to obtain scene images during the robot's operation; the number of cameras is set according to the actual situation, and depth information of the work scene can be obtained by additionally deploying depth cameras.
A mapping step S2: displaying the real-time scene image at the operation end, acquiring the series of human arm joint angle data during the demonstration action, and mapping the human arm joint angle data into the mapped joint angle data θH for controlling the mechanical arm.
As shown in fig. 2, the mapping step S2 specifically includes an obtaining step S21, a matching step S22, a calculating step S23, and a solving step S24, and the specific process is as follows:
in the acquisition step S21, joint angle data of the human arm during the demonstration motion is acquired.
The most widely used model of the human upper limb is the 7-degree-of-freedom model shown in fig. 3. The base coordinate system (X0 Y0 Z0) is located between the trunk and the shoulder joint of the human body. Three motion coordinate systems are established at the shoulder joint, corresponding to its three motion degrees of freedom: adduction/abduction (X1 Y1 Z1), internal/external rotation (X2 Y2 Z2) and flexion/extension (X3 Y3 Z3). The elbow joint has one flexion/extension degree of freedom (X4 Y4 Z4). At the wrist joint, three motion coordinate systems are established for internal/external rotation (X5 Y5 Z5), adduction/abduction (X6 Y6 Z6) and flexion/extension (X7 Y7 Z7), respectively. The motion coordinate system of the end effector (X8 Y8 Z8) is located at the finger.
The geometric link parameters ai−1 and di in the D-H model can be determined from the proportional relationship between the sizes of the moving limbs and the body height, as shown in Table 1 below. The rotation angle θi about the axis Zi of the i-th joint degree of freedom depends on the given range of motion βi of the corresponding joint degree of freedom.
Table 1: human arm DH parameter table
A monocular camera and a ResNet-50-based deep convolutional neural network are used for the human pose estimation task in three-dimensional space; the network is trained until its gradient converges and outputs the expected coordinate values of the relevant skeletal joint points, as shown in figs. 4 and 5. For the left arm, one limb vector runs from the shoulder joint to the elbow joint and another from the elbow to the wrist; the same holds for the right arm. In addition, several auxiliary limb vectors are used in the calculation: a vector from the left shoulder to the right shoulder and a vector from the right shoulder to the left shoulder, which represent the horizontal direction of the body plane; and a vector from the neck to the pelvis, which represents the vertically downward direction of the body plane and is used in calculating the angles of both the left and right arms. Further auxiliary vectors are obtained as the cross products of these limb vectors.
The calculation formulas, taking the left arm as an example, are as follows.
The three joint degrees of freedom of the wrist are measured by an attitude sensor (IMU) deployed at the wrist; in this embodiment the IMU is of model HWT901B. In the DH model, the transforms preceding the wrist are known, and the wrist transformation is the matrix to be solved. From the IMU coordinates:

wherein A, B, C, D are the transformation matrices between the aforementioned eight coordinate systems (coordinate system No. 0 to coordinate system No. 7), and TAX, TAY, TAZ are the transformation matrices generated between the three Euler angles of the IMU and the initial coordinate system. In particular,

θ6 = arccos(−ax).
As shown in fig. 6, the human-machine action mapping algorithm maps the three degrees of freedom of the shoulder and the one degree of freedom of the elbow of the human arm to the first five degrees of freedom of YUMI. The two degrees of freedom at the end of YUMI are directly controlled by the internal/external rotation and flexion/extension of the human wrist measured by the IMU. The mapping method mainly comprises two parts: workspace matching and a genetic algorithm.
A matching step S22, matching the tail end working space of the mechanical arm and the human arm to obtain a conversion matrix between the two spaces; and calculating target wrist joint coordinate data of the mechanical arm by using the conversion matrix based on the current wrist joint coordinate data of the human arm.
All 7 joints of a single arm of the YUMI robot are revolute joints. Coordinate systems are established on the arm as shown in fig. 12, and the YUMI DH parameters are listed in Table 2 below, wherein ai−1 and αi−1 respectively represent the length and twist angle of link i−1, and di and θi represent the offset and joint angle of link i.
TABLE 2 YUMI DH parameters table
From the DH parameters in the tables, the kinematic models of the human arm and the YUMI arm can be obtained. The homogeneous transformation between two adjacent coordinate systems in fig. 12 can be expressed through the DH parameters as:

wherein c is an abbreviation for the trigonometric function cos and s for sin. Further, the relationship between the end-effector position and the base position can be calculated by chaining the link transforms:

P0 = T1·T2 ⋯ Tn·P, with P = [x, y, z, 1]T,

wherein n represents the number of degrees of freedom, P represents the position of the relevant joint, and each Ti is a homogeneous matrix of formula (1).
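Chaining DH transforms to obtain the end position can be sketched as follows. The matrix layout uses the standard modified DH convention, which is an assumption; the patent's formula (1) is not reproduced in the text:

```python
import math

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform between adjacent DH frames (modified DH convention assumed)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct,      -st,      0.0,  a],
        [st * ca,  ct * ca, -sa, -sa * d],
        [st * sa,  ct * sa,  ca,  ca * d],
        [0.0, 0.0, 0.0, 1.0],
    ]

def mat_mul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def forward_position(dh_rows):
    """Chain the per-link transforms and return the end position, i.e. T * [0, 0, 0, 1]^T."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for a, alpha, d, theta in dh_rows:
        T = mat_mul(T, dh_transform(a, alpha, d, theta))
    return [T[0][3], T[1][3], T[2][3]]
```

Feeding in the DH rows of Table 1 or Table 2 with candidate joint angles yields the wrist position used later by the fitness function.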
Motion-space matching applies a geometric transformation so that the two originally quite different motion spaces or velocity spaces of the master-slave heterogeneous manipulators, or of the human arm and the mechanical arm, can be matched as closely as possible. The motion-matching control methods commonly adopted for master-slave manipulators fall into three types: joint-to-joint matching, end-pose matching and point-to-point matching. Among these, joint-to-joint matching is mainly used for isomorphic master-slave manipulators, while point-to-point matching is the most commonly used and is applicable to the control of all isomorphic and heterogeneous master-slave manipulators. In this embodiment, the general formulation of position matching in master-slave heterogeneous manipulator motion matching is as follows:
wherein [Xp, Yp, Zp]T is the position of the slave manipulator end effector; [X, Y, Z]T is the position of the human arm end; α is the rotation angle about the Z axis of the slave manipulator base coordinate system; Sx, Sy and Sz are the magnification factors along the X, Y and Z axes of the human arm base coordinate system; and lx, ly and lz are the translation distances along the X, Y and Z axes of the human arm base coordinate system. Taking the human shoulder joint and the YUMI shoulder joint as the master and slave base coordinate systems respectively, with Sx = 0.9186, Sy = 0.8476, Sz = 0.8775 and α = 180°, the transformation relation of the formula is obtained. Fig. 10 shows the three-dimensional point cloud of the human and YUMI workspaces after the matching processing, and figs. 7 to 9 show its plane projections.
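The per-axis scaling, rotation about Z, and translation could be sketched as below. The scale factors and α = 180° come from the text; the order of operations (scale, then rotate, then translate) and the zero translation offsets are assumptions:

```python
import math

def match_position(p_human, s=(0.9186, 0.8476, 0.8775),
                   offsets=(0.0, 0.0, 0.0), alpha_deg=180.0):
    """Map a human-arm end position into the slave-arm workspace.

    Scales each axis by Sx, Sy, Sz, rotates about the Z axis by alpha,
    then translates by (lx, ly, lz); offsets here are placeholder values.
    """
    x, y, z = (c * k for c, k in zip(p_human, s))
    a = math.radians(alpha_deg)
    xr = x * math.cos(a) - y * math.sin(a)
    yr = x * math.sin(a) + y * math.cos(a)
    return (xr + offsets[0], yr + offsets[1], z + offsets[2])
```

With α = 180° the X and Y axes are mirrored, which is consistent with the master and slave base frames facing each other.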
In the calculation step S23, elbow joint coordinate data of the humanoid robot arm is calculated based on the target wrist joint coordinate data and the shoulder joint coordinate data and using the direction vector of the human forearm as the forearm direction vector.
Considering that the mapping should mimic the human arm configuration, as shown in fig. 11, once the relatively fixed shoulder position is given, the end position and the elbow position determine the basic configuration of the arm. First, the wrist position is mapped into the YUMI workspace in the manner described above; since the length of the human forearm is not consistent with the length of the YUMI forearm, the elbow position then needs to be adjusted by a normalization process: we retain the orientation of the forearm and set its length equal to that of the YUMI forearm.
Wherein l is the length of the YUMI forearm, O is the origin at the shoulder, E is the elbow coordinate and W is the wrist coordinate; after the adjustment, the elbow coordinate E' of YUMI corresponding to the current motion of the human arm is obtained.
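A minimal sketch of this normalization follows. The patent only states that the forearm orientation is retained and its length set to l; anchoring the rescaled segment at the already-mapped wrist W is therefore an assumption of this sketch.

```python
import numpy as np

def adjust_elbow(W, E, l):
    """Normalize the elbow position: keep the human forearm direction
    (E - W) but set its length to the YUMI forearm length l.
    Anchoring at the mapped wrist W is an assumption."""
    W, E = np.asarray(W, dtype=float), np.asarray(E, dtype=float)
    d = E - W
    return W + l * d / np.linalg.norm(d)

# elbow pulled along the same direction until the segment length is 0.25
E_prime = adjust_elbow(W=[0.5, 0.0, 0.3], E=[0.8, 0.0, 0.7], l=0.25)
```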
Solving step S24: the elbow joint and shoulder joint angle data of the mechanical arm are taken as independent variables and the end working space of the mechanical arm as their value range; a fitness function is constructed at least from the calculated deviation of the mechanical arm's wrist joint position relative to the target wrist joint coordinate data, and a genetic algorithm is used to search out the optimal elbow joint and shoulder joint angle data as the shoulder joint angle data and elbow joint angle data of the mechanical arm; the wrist joint angle data of the mechanical arm is then calculated from the wrist joint angle data of the human arm.
That is, the genetic algorithm is used to obtain the corresponding inverse solution S of the angles, i.e. [xh, yh, zh, xe, ye, ze] → [θ1, θ2, θ3, θ4, θ5]
The step of searching the angle data of the optimal elbow joint and shoulder joint by using a genetic algorithm as an approximate solution of the joint angle comprises the following steps:
initializing step S41, taking the end working space of the mechanical arm as the value range, randomly generating a plurality of elbow joint and shoulder joint angles to construct individuals, and forming an initial population; and constructing a fitness function.
In the mechanical arm working space S, N solutions are randomly generated to serve as the population, and real-number coding is adopted; i.e. an individual is defined as:
ind[i] = [θ1, θ2, θ3, θ4, θ5]T, i = 1, 2, …, n
for the fitness function of an individual, the DH matrix from the modeling is used to calculate, for each individual, the end position W of the mechanical arm, the elbow position E and the sum of the joint angles; the Euclidean distances between the end and elbow positions and their target positions are then calculated respectively:
Dh = ||H − H0||
De = ||E − E0||
in this embodiment, the fitness function is defined as:
fit = Dh + 0.03·De + 0.03·Sum
i.e. Dh is the Euclidean distance between the approximate wrist joint position and the position represented by the target wrist joint coordinate data, De is the Euclidean distance between the approximate elbow joint position calculated from the elbow joint angle approximate solution and the position represented by the elbow joint coordinate data, and Sum is the sum of the joint angle values represented by the current individual. Thus, on the premise of ensuring end accuracy, the fitness function rises for solutions whose elbow position deviates too much from the target and for distorted solutions in other configurations, so that such solutions are excluded. In other words, in this embodiment the fitness function fit is constructed from the calculated deviations of the wrist joint position and the elbow joint position, with the wrist deviation prioritized: "prioritized" means that its weight coefficient is at least 10 times the weight coefficient of the other terms. On the premise of mainly considering the precision of end-motion execution and partially considering the precision of the elbow joint, the motion of the whole humanoid mechanical arm is thereby made closer to the motion of the human arm. Since the precision of the executed action is considered first, the human can conveniently determine the next action through video and better guide the field operation of the mechanical arm.
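The fitness function can be sketched as below. The forward-kinematics callable `fk` stands in for the DH-matrix computation and is an assumption of this sketch, as is taking Sum as the sum of absolute joint angle values.

```python
import numpy as np

def fitness(individual, fk, wrist_target, elbow_target):
    """fit = Dh + 0.03*De + 0.03*Sum: the wrist error dominates (its weight
    is at least 10x the other terms); elbow error and total joint excursion
    are secondary. `fk` maps the 5 joint angles to (wrist_pos, elbow_pos)
    and stands in for the DH-matrix forward kinematics (assumption)."""
    wrist, elbow = fk(individual)
    d_h = np.linalg.norm(wrist - wrist_target)
    d_e = np.linalg.norm(elbow - elbow_target)
    angle_sum = np.sum(np.abs(individual))  # "Sum"; absolute values assumed
    return d_h + 0.03 * d_e + 0.03 * angle_sum
```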
And a selecting and breeding step S42: the fitness value of each generation's individuals is calculated from the fitness function fit and the individuals are sorted; random integers generated from an exponential distribution are used to select individuals, and the two randomly selected individuals breed the next generation by direct fusion.
After the individual fitness is calculated, the individuals are numbered in order of fitness from high to low, i.e. of fitness function value from small to large. Individuals are then selected by generating random integers in the form of an exponential distribution.
I.e. the probability P that the individual with number k is selected is:
P(k) = λ·e^(−λk), k = 1, 2, …, n
so that a solution with a small fitness function value is far more likely to be selected than one with a large value.
For the two individuals selected in the above manner, the next generation is generated by direct fusion:
son=k·ind[i]+(1-k)·ind[j]
where k is a randomly generated real number between 0 and 1.
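The selection and direct-fusion crossover just described can be sketched together; the value λ = 0.2 and the fixed seed are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_index(n, lam=0.2):
    """Pick an individual index with exponentially decaying probability
    P(k) ~ exp(-lam*k); low-numbered (fitter) individuals are favored."""
    p = np.exp(-lam * np.arange(1, n + 1))
    p /= p.sum()                      # normalize over the finite population
    return rng.choice(n, p=p)

def crossover(ind_i, ind_j):
    """Direct fusion: son = k*ind[i] + (1-k)*ind[j], k uniform in [0, 1)."""
    k = rng.random()
    return k * np.asarray(ind_i, dtype=float) + (1 - k) * np.asarray(ind_j, dtype=float)
```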
And a crossing and mutation step S43: the offspring individuals replace, in a preset proportion, the individuals with the largest fitness values, and new individuals generated by mutation are added according to the mutation probability.
This is repeated until offspring amounting to 20% of the population are produced, which replace the last 20% of individuals, i.e. those with the largest fitness function values. After the replacement, each individual mutates with probability 0.05: a small random increment is added to it to form a new individual.
And repeating the steps until a preset condition is met, and acquiring joint angle data represented by the individual with the minimum fitness value as an approximate solution of the elbow joint and shoulder joint angles.
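Putting steps S41 to S43 together, a minimal sketch of the whole loop might look like this. The toy fitness (distance of the joint vector to a known target), the population size, λ = 0.2, the mutation amplitude and the seed are all assumptions standing in for the patent's DH-based fitness and tuning.

```python
import numpy as np

rng = np.random.default_rng(1)
N, DIM, GENS, N_CHILD = 60, 5, 200, 12       # N_CHILD = 20% of the population

# Toy fitness: in the patent this would be fit = Dh + 0.03*De + 0.03*Sum
# computed through the DH-matrix forward kinematics of the arm.
TARGET = np.array([0.3, -0.5, 1.0, 0.2, -0.1])
def fit(ind):
    return np.linalg.norm(ind - TARGET)

pop = rng.uniform(-np.pi, np.pi, (N, DIM))   # S41: random initial population
for _ in range(GENS):
    pop = pop[np.argsort([fit(ind) for ind in pop])]   # best (smallest fit) first
    probs = np.exp(-0.2 * np.arange(1, N + 1))         # S42: exponential-rank selection
    probs /= probs.sum()
    children = []
    for _ in range(N_CHILD):
        i, j = rng.choice(N, size=2, p=probs)
        k = rng.random()
        children.append(k * pop[i] + (1 - k) * pop[j]) # direct fusion
    pop[-N_CHILD:] = children                          # S43: replace the worst 20%
    mask = rng.random(N) < 0.05                        # mutate with probability 0.05
    pop[mask] += rng.normal(0.0, 0.05, size=(int(mask.sum()), DIM))
best = min(pop, key=fit)   # individual with minimum fitness value
```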
First, the conversion matrix between the end working spaces of the human arm and the mechanical arm is acquired, the theoretical coordinate data of the mechanical arm's end is obtained from this matrix, and the theoretical elbow joint coordinate data is obtained. Then the optimal elbow joint and shoulder joint angle data are searched out with a genetic algorithm whose fitness function mainly considers the calculated deviation of the end position, i.e. it mainly attends to the action precision of the execution end. The genetic algorithm thus searches for an approximate solution meeting the fitness requirement and executes it approximately: the joint angle data of the humanoid mechanical arm during action execution need not refer completely to the joint angle data of the human arm, so the humanoid mechanical arm can perform an approximate imitation, and heterogeneous control is handled better.
In the solving step, firstly, the angle data represented by each generation of individuals is used as an approximate solution of the elbow joint angle and the shoulder joint angle, and the approximate wrist joint position is calculated; and acquiring the calculation deviation of the wrist joint position based on the target wrist joint data and the approximate wrist joint position, and acquiring an optimal solution by taking at least the minimum calculation deviation as a reference.
A planning step S3 of planning a preset operation path according to the polynomial path planning method based on the current position data and the position data of the preset target object, and acquiring planned joint angle data θR for controlling the mechanical arm according to the preset operation path.
In this embodiment, the position data of the predicted target object is predicted from a human-hand trajectory probability distribution model; the specific process is as follows:
first, we need to calculate the target position that may be reached next based on the path input by the user, i.e. predict the most likely target object G*, as shown in the formula
G* = argmax_G P(G|S, U, ξ, θ)
in the formula, G* is the most likely target location, S is the initial location, U is the user-input location, ξ represents the motion trajectory, and θ is any parameter that may have an effect on the prediction.
Considering the user-input trajectory ξS→U, the calculation formula for G* can be written, by Bayes' rule, as
G* = argmax_G P(ξS→U|G)·P(G)
For the probability of occurrence of the user-input trajectory ξS→U we make the maximum-entropy assumption: the probability of a trajectory decays exponentially as its cost value increases.
P(ξ|G)∝exp(-CG(ξ))
CG = Tx² + Ty² + Tz² + Rx² + Ry² + Rz²
wherein CG is the cost function, and Ti and Ri (i = x, y, z) are the translation speed and the rotation speed along the X, Y and Z axes respectively.
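The cost function and the maximum-entropy trajectory probability can be sketched as below; summing the squared speeds over sampled trajectory points (rather than integrating) is an assumption of this sketch.

```python
import math

def trajectory_cost(velocities):
    """C_G: sum of squared translation (Tx,Ty,Tz) and rotation (Rx,Ry,Rz)
    speeds along the sampled trajectory points."""
    return sum(tx*tx + ty*ty + tz*tz + rx*rx + ry*ry + rz*rz
               for tx, ty, tz, rx, ry, rz in velocities)

def trajectory_prob(velocities):
    """Maximum-entropy assumption: P(xi|G) proportional to exp(-C_G(xi))."""
    return math.exp(-trajectory_cost(velocities))
```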
In practical calculations, we approximate the integral over the trajectory using the Laplace method, which allows the expression to be simplified.
The calculation formula of P(G) is as follows:
wherein L is the distance from the initial position of the end of the mechanical arm to the target position, and l is the distance from the current end of the mechanical arm to the target position.
Secondly, after the position of the predicted target object is obtained, the three-dimensional positions PG (G ∈ g) of all the target objects are obtained by vision; the joint-space position θT is calculated by the inverse kinematics of the mechanical arm, and the time T for moving to the specified position is determined according to the given speed v; interpolation is then performed in joint space using a quintic function, as shown below.
θ(t) = a5·t⁵ + a4·t⁴ + a3·t³ + a2·t² + a1·t + a0
According to the constraint conditions, namely the boundary positions, velocities and accelerations at the start and end of the motion, the coefficients are determined, giving the correspondence between time and joint angle.
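Solving for the six coefficients from the standard quintic boundary conditions can be sketched as follows (zero boundary velocities and accelerations assumed in the example call):

```python
import numpy as np

def quintic_coeffs(q0, qT, T, dq0=0.0, dqT=0.0, ddq0=0.0, ddqT=0.0):
    """Solve theta(t) = a0 + a1*t + ... + a5*t^5 for the six coefficients
    from the position, velocity and acceleration constraints at t=0 and t=T."""
    A = np.array([
        [1, 0, 0,    0,       0,        0],        # theta(0)  = q0
        [0, 1, 0,    0,       0,        0],        # theta'(0) = dq0
        [0, 0, 2,    0,       0,        0],        # theta''(0)= ddq0
        [1, T, T**2, T**3,    T**4,     T**5],     # theta(T)  = qT
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],   # theta'(T) = dqT
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],  # theta''(T)= ddqT
    ], dtype=float)
    b = np.array([q0, dq0, ddq0, qT, dqT, ddqT], dtype=float)
    return np.linalg.solve(A, b)        # [a0, a1, a2, a3, a4, a5]

coeffs = quintic_coeffs(q0=0.0, qT=1.0, T=2.0)
mid = np.polyval(coeffs[::-1], 1.0)     # joint angle at t = T/2
```

With zero boundary velocities and accelerations the profile is the familiar 10s³ − 15s⁴ + 6s⁵ blend, symmetric about the midpoint.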
The spatial position of the mechanical arm joints is calculated at each motion-capture sampling instant, and man-machine fusion is performed according to the confidence and the weight of the current position to generate the new joint position.
A fusion step S4: according to the formula θS = (1 − α)θH + αθR, the current execution joint angle data of the mechanical arm is calculated, wherein 0 ≤ α ≤ 1 is the human-machine weight coefficient, which increases as the confidence between the execution end of the mechanical arm and the preset target object increases.
The task weight is adjusted using a piecewise function: a linear expression is used in the first half so that the machine moves smoothly, and an exponential function is used in the second half so that the human-machine control weight switches quickly and the response speed improves. Specifically, when the confidence between the execution end of the mechanical arm and the predicted target is in a first confidence interval (located in the first half), α = 0; when the confidence is in a second confidence interval, α increases linearly with the confidence; when the confidence is in a third confidence interval (located in the second half), α increases exponentially with the confidence. The first, second and third confidence intervals are adjacent and together form the whole motion confidence interval. In this embodiment, the relation between the confidence and the cooperative weight is shown in fig. 13; the specific expression is as follows:
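A sketch of such a piecewise weight curve and of the fusion formula follows. The interval boundaries c1, c2 and the exponential rate are assumptions; the patent only fixes the shape (zero, then linear, then exponential) and the fusion formula θS = (1 − α)θH + αθR.

```python
import numpy as np

def alpha_of_conf(conf, c1=0.3, c2=0.7):
    """Piecewise human-machine weight: 0 in the first interval, linear in the
    second, exponential in the third; c1, c2 and the rate 3.0 are assumed."""
    if conf < c1:                       # first interval: human fully in control
        return 0.0
    if conf < c2:                       # second interval: linear ramp to 0.5
        return 0.5 * (conf - c1) / (c2 - c1)
    # third interval: exponential rise toward full machine control, capped at 1
    return min(1.0, 0.5 * np.exp(3.0 * (conf - c2)))

def fuse(theta_h, theta_r, conf):
    """theta_S = (1 - alpha)*theta_H + alpha*theta_R"""
    a = alpha_of_conf(conf)
    return (1 - a) * np.asarray(theta_h, dtype=float) + a * np.asarray(theta_r, dtype=float)
```

The two branches are chosen to meet at c2 (both give 0.5 there), so the weight curve is continuous.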
the confidence coefficient calculation formula adopts a probability distribution entropy to calculate, and specifically comprises the following formula:
conf = P(G*|ξS→U)·log P(G*|ξS→U)
P(G*|ξS→U) = P(ξS→U|G*)·P(G*).
that is, in the present embodiment, when the predicted target is correct, the confidence increases, the machine weight increases, and the precise operation is completed. When the prediction is wrong, the confidence decreases and the human control weight increases; the machine then moves according to the user's input until a new task target is predicted, after which the confidence rises again, the machine control weight increases, and the task is completed.
In the control step S5, the robot arm is controlled to execute the operation using the current execution joint angle data as the control command.
In the embodiment, the advantages of a human and a machine can be brought into play in man-machine fusion, a remote rough task is planned by the human, and a close precise task is executed by the machine, so that the advantages of the actions of the human arm and the mechanical arm can be fully utilized to complement each other, and the corresponding defects are overcome.
In addition, to avoid the mechanical arm becoming difficult to manipulate when the execution end is close to the predicted target object but the human judges that a problem such as interference lies ahead, it is stipulated that when the acceleration of a human arm joint angle is detected to exceed a preset threshold, i.e. when an emergency adjustment action occurs, the human-machine weight α is assigned a value smaller than 0.5, even close to 0, for example 0 to 0.1.
As shown in figs. 13 to 18, the applicant performed a single-target obstacle-avoidance grasping test and a multi-target prediction grasping test. As shown in fig. 18, the human and the machine are isolated to simulate teleoperation: the human operates according to the target in the display screen, the human motion information is captured by the camera and converted into machine motion through the action mapping algorithm, and the YuMi robot is driven through an open-source program from Berkeley. The sampling frequency is about 1 Hz. The target position is known. The first row in the figure is the path obtained when the machine moves using quintic polynomial interpolation alone: it reaches the target position but knocks over the obstacle. The second row shows the mechanical arm controlled by action mapping alone: it avoids the obstacle but cannot reach the target position accurately to grasp. The third and fourth rows show human-machine fusion control: at the beginning the human's control weight is larger and steers the machine around the obstacle; once the distance to the target object becomes small, the machine's control weight increases, the target object is grasped accurately, and the arm returns to the initial position.
Figs. 15 to 16 compare machine-only, human-only and human-machine fusion control, i.e. the end-position change curves of the three control modes. It can be seen from the figures that machine-only control reaches the target position in about 5 s; human-machine fusion control reaches it in about 12 s; human-only control cannot reach it within 30 s. Fig. 14 shows the confidence and weight variation curves of the three fusion control modes: linear, exponential and modified. The exponential type switches to weight 1 (full machine control) earliest but fluctuates with the largest amplitude; the linear type varies most smoothly but is slowest to reach full machine control; the modified fusion control mode changes weight smoothly, switches quickly at the end, and allows the human to correct the motion of the mechanical arm's end when it is close to the target position.
As shown in fig. 19, two objects to be grasped are placed on the platforms: one near the end of the mechanical arm and one far away; the former is white and the latter is green, and the target positions are known. Without human intervention, the mechanical arm would automatically grasp the object closest to its end. The robot predicts the target object to be grasped from the current human action and the distances to the target objects, determines the confidence of grasping each object, judges which object is to be grasped according to the confidence, and calculates the human-machine task distribution weights. When grasping the green object (a): in the first 1.4 s the machine weight is 0 and the arm is fully controlled by the human; during 1.4–2.3 s the end is close to the white object, so the object to be grasped is predicted to be the white one; as the human action keeps changing, after 2.3 s the confidence of the green object increases, human-machine cooperative control takes over, and the grasp of the green object is completed in about 12 s. When grasping the white object (b): in the first 1.5 s the human steers the arm away from the target and in the next 1.5 s toward it, under full human control; after 3 s the machine weight gradually increases, and the grasp of the target object is completed in about 11 s.
In the above embodiment, the user first makes the corresponding action U according to the recognized object positions; the motion capture system calculates the angle of each arm joint from the user's motion, and the human-machine mapping algorithm yields the mechanical arm joint angles θH. Meanwhile, the machine performs intention prediction from the recognized object positions P and the current user action U to obtain the most likely target object G, then performs trajectory planning and obtains θR from the inverse kinematics solution. According to the accuracy of the predicted grasp target, the task weights are distributed, human control and machine control are fused, and the fused control angle θS is obtained to control the mechanical arm's motion; the control angle changes continuously with the motion effect of the mechanical arm, and the weight is adjusted until the task is completed.
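One cycle of the loop just summarized can be sketched as below. All five callables are placeholders (assumptions) standing in for the patent's subsystems: motion capture, human-to-robot joint mapping, intention prediction, trajectory planning and the confidence-to-weight curve.

```python
import numpy as np

def shared_control_step(capture, map_human, predict_target, plan, alpha_of_conf):
    """One control cycle of the man-machine fusion loop (callables assumed)."""
    U = capture()                       # current user action
    theta_h = map_human(U)              # theta_H from action mapping
    G, conf = predict_target(U)         # most likely target and its confidence
    theta_r = plan(G)                   # theta_R from path planning
    a = alpha_of_conf(conf)             # human-machine weight, 0 <= a <= 1
    return (1.0 - a) * theta_h + a * theta_r   # theta_S

# stub example: an equal weight blends the two joint vectors halfway
theta_s = shared_control_step(
    capture=lambda: "user-action",
    map_human=lambda U: np.zeros(5),
    predict_target=lambda U: ("green-object", 0.8),
    plan=lambda G: np.ones(5),
    alpha_of_conf=lambda c: 0.5,
)
```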
Claims (10)
1. A mechanical arm control method based on man-machine fusion is characterized by comprising the following steps:
an acquisition step, acquiring a peripheral real-time scene image of a target object;
a mapping step: displaying the real-time scene image at the control end, acquiring human arm joint angle series data of a human arm in a demonstration action process, and mapping the human arm joint angle data into mapped joint angle data θH for controlling the mechanical arm;
a planning step: drawing a preset operation path according to a polynomial path planning method from the current position data of the execution end and the position data of the predicted target object, and acquiring planned joint angle data θR for controlling the mechanical arm according to the preset operation path;
a fusion step: calculating current execution joint angle data of the mechanical arm according to the formula θS = (1 − α)θH + αθR, wherein 0 ≤ α ≤ 1 is the human-machine weight coefficient, which increases as the confidence of the prediction target increases, and the confidence increases as the distance decreases once the distance between the execution end of the mechanical arm and the prediction target is smaller than a preset distance;
and a control step of controlling the mechanical arm to execute the operation action by taking the current execution joint angle data as a control instruction.
2. The robot arm control method according to claim 1, characterized in that:
the prediction target object is an object closest to the position of the execution end.
3. A robot arm control method according to claim 1 or 2, wherein a robot arm trajectory probability distribution model is created using a maximum entropy principle for predicting the position data of the prediction target object based on the following equation:
wherein G* is the most likely target location, S is the initial location, U is the user-input location, ξ represents the motion trajectory, θ is any parameter that may have an effect on the prediction, and CG is the cost function; the probability function P(G) satisfies the following formula,
wherein, L is the distance from the initial position of the end of the mechanical arm to the target position, and L is the distance from the current end of the mechanical arm to the target position.
5. a robot arm control method according to any one of claims 1 to 3, characterized in that:
when the confidence coefficient is in a first confidence coefficient interval, α is 0, wherein the first confidence coefficient interval is positioned in the first half;
α increasing linearly as the confidence increases when the confidence is within a second confidence interval;
and when the confidence coefficient is in a third confidence coefficient interval, α exponentially increases along with the increase of the confidence coefficient, the third confidence coefficient interval is positioned in the back half, and the first confidence coefficient interval, the second confidence coefficient interval and the third confidence coefficient interval are adjacent and spliced into the whole motion confidence coefficient interval.
7. The robot arm control method according to any one of claims 1 to 6, characterized in that:
when detecting that the acceleration of the joint angle of the human arm exceeds a preset threshold, assigning a numerical value smaller than 0.5 to the human-machine weight coefficient α.
8. The mechanical arm control method according to any one of claims 1 to 7, wherein the mechanical arm is a heterogeneous humanoid mechanical arm, and the step of mapping the human arm joint angle data into the mapped joint angle data θH for controlling the mechanical arm comprises the following steps:
a matching step: matching the end working spaces of the mechanical arm and the human arm to obtain a conversion matrix between the two spaces; and calculating target wrist joint coordinate data of the mechanical arm by using the conversion matrix based on the current wrist joint coordinate data of the human arm;
calculating elbow joint coordinate data of the humanoid mechanical arm based on the target wrist joint coordinate data and the structural size data of the mechanical arm and by taking a direction vector of a human forearm as a forearm direction vector of the mechanical arm;
a solving step: taking the elbow joint and shoulder joint angle data of the mechanical arm as independent variables and the working space at the end of the mechanical arm as the independent variables' value range, constructing a fitness function at least from the calculated deviation of the mechanical arm's wrist joint position relative to the target wrist joint coordinate data, and searching out the optimal elbow joint and shoulder joint angle data with a genetic algorithm to serve as the mapped shoulder joint angle data and mapped elbow joint angle data of the mechanical arm; and calculating the mapped wrist joint angle data of the mechanical arm according to the wrist joint angle data of the human arm.
9. The robot arm control method according to claim 8, characterized in that:
calculating the approximate wrist joint position by taking the angle data represented by each generation's individuals as the approximate solution of the elbow joint and shoulder joint angles, wherein the fitness function is fit = Dh + 0.03De + 0.03Sum, wherein Dh is the Euclidean distance between the approximate wrist joint position and the position represented by the target wrist joint coordinate data, De is the Euclidean distance between the approximate elbow joint position calculated based on the elbow joint angle approximate solution and the position represented by the elbow joint coordinate data, and Sum is the sum of the joint angle values represented by the current individual.
10. A robot arm control method according to claim 8 or 9, wherein the step of searching for the optimum elbow joint and shoulder joint angle data using a genetic algorithm comprises the steps of:
taking the tail end working space of the mechanical arm as a value range, randomly generating a plurality of elbow joints and shoulder joint angles to construct individuals, and forming an initial population;
calculating individual fitness values based on the fitness function and sorting them; generating random integers in an exponential distribution mode to select individuals, and breeding the next generation from the two randomly selected individuals based on a direct fusion mode;
replacing partial individuals with the maximum fitness value by utilizing the filial generation individuals generated by propagation according to a preset proportion, and increasing new individuals generated by partial variation based on the variation probability;
and repeating the steps until a preset condition is met, and acquiring joint angle data represented by the individual with the minimum fitness value as an approximate solution of the elbow joint and shoulder joint angles.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911419535.1A CN111152220B (en) | 2019-12-31 | 2019-12-31 | Mechanical arm control method based on man-machine fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111152220A true CN111152220A (en) | 2020-05-15 |
CN111152220B CN111152220B (en) | 2021-07-06 |
Family
ID=70560510
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911419535.1A Active CN111152220B (en) | 2019-12-31 | 2019-12-31 | Mechanical arm control method based on man-machine fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111152220B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112070835A (en) * | 2020-08-21 | 2020-12-11 | 达闼机器人有限公司 | Mechanical arm pose prediction method and device, storage medium and electronic equipment |
CN112428278A (en) * | 2020-10-26 | 2021-03-02 | 北京理工大学 | Control method and device of mechanical arm and training method of man-machine cooperation model |
CN112476456A (en) * | 2020-11-25 | 2021-03-12 | 浙江工业大学 | Arm-wing cooperative variant control system and control method for simulating bird prey |
CN118269106A (en) * | 2024-05-30 | 2024-07-02 | 江西求是高等研究院 | Mechanical arm motion control system and rehabilitation robot |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070171194A1 (en) * | 2005-12-22 | 2007-07-26 | Francois Conti | Workspace expansion controller for human interface systems |
US20120283747A1 (en) * | 2009-11-16 | 2012-11-08 | Koninklijke Philips Electronics N.V. | Human-robot shared control for endoscopic assistant robot |
CN107097227A (en) * | 2017-04-17 | 2017-08-29 | 北京航空航天大学 | A kind of man-machine collaboration robot system |
CN108241339A (en) * | 2017-12-27 | 2018-07-03 | 北京航空航天大学 | The movement solution of apery mechanical arm and configuration control method |
CN108326852A (en) * | 2018-01-16 | 2018-07-27 | 西北工业大学 | A kind of space manipulator method for planning track of multiple-objection optimization |
WO2019222480A1 (en) * | 2018-05-16 | 2019-11-21 | University Of Maryland, College Park | Confidence-based robotically-assisted surgery system |
Non-Patent Citations (2)
Title |
---|
ANCA D DRAGAN ET AL: "A policy-blending formalism for shared control", 《THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH》 * |
陈友东等: "一种基于高斯过程混合模型的机械臂抓取方法", 《机器人》 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||