CN111152220B - Mechanical arm control method based on man-machine fusion - Google Patents

Mechanical arm control method based on man-machine fusion

Info

Publication number
CN111152220B
Authority
CN
China
Prior art keywords
mechanical arm
joint
arm
joint angle
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911419535.1A
Other languages
Chinese (zh)
Other versions
CN111152220A (en)
Inventor
杨灿军
朱元超
魏谦笑
武鑫
王楚璇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN201911419535.1A
Publication of CN111152220A
Application granted
Publication of CN111152220B
Legal status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J3/00: Manipulators of master-slave type, i.e. both controlling unit and controlled unit perform corresponding spatial movements
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a mechanical arm control method based on man-machine fusion, and belongs to the technical field of robot control. The control method comprises the following steps: (1) acquiring a real-time scene image of the surroundings of a target object; (2) displaying the real-time scene image at an operation end, acquiring arm joint angle data, and mapping the arm joint angle data into mapped joint angle data θH for controlling the mechanical arm; (3) according to the current position data and the position data of the predicted target object, drawing a preset operation path with a polynomial path planning method and obtaining planned joint angle data θR for controlling the mechanical arm; (4) calculating the currently executed joint angle data of the mechanical arm according to the formula θS = (1 − α)θH + αθR, where the weight α increases as the confidence in the predicted target object increases. The method combines the complementary strengths of human-arm and mechanical-arm motion and is applicable to fields such as remote robot control.

Description

Mechanical arm control method based on man-machine fusion
Technical Field
The invention relates to the technical field of robot control, in particular to a control method of a heterogeneous mechanical arm based on man-machine fusion.
Background
With the development of robotics, to ensure the personal safety of the operator, a camera is usually mounted on the robot, for example the panoramic imaging system disclosed in patent document CN108082415A, or cameras are arranged at the work site to obtain scene images. This avoids the operator having to enter hazardous environments such as deep-sea work sites or bomb-disposal and mine-clearing operations that would endanger the operator's safety.
After the environmental scene image is acquired, the human arm demonstrates the field operation action, and the joint angle data of the human arm during the demonstration are acquired, for example with the method for acquiring the walking posture of a target object disclosed in patent document CN107967687A; the robot is then controlled to perform the field operation based on the joint angle data.
This remote control method protects the operator well, but because the person judges the site environment and distances from image data, judgment errors occur when complete three-dimensional information of the site is difficult to obtain; it is therefore difficult to control the motion of the mechanical arm directly from the joint data acquired from the human arm during the demonstration action.
In addition, although the mechanical arm may be of humanoid structure, it is rarely homogeneous with the human arm: the degrees of freedom and arm lengths differ, i.e. the two are generally heterogeneous. It is therefore difficult to control the humanoid mechanical arm directly with the joint data of the human arm. Theoretical control data for the mechanical arm can be obtained directly through a workspace matrix, but owing to the heterogeneity between the human arm and the mechanical arm, the directly calculated theoretical data sometimes cannot be executed, so follow-up control fails in some cases.
Disclosure of Invention
The invention mainly aims to provide a mechanical arm control method that makes better use of scene images of the site for remote robot control;
another object of the present invention is to provide a mechanical arm control method that, during follow-up control of a heterogeneous humanoid mechanical arm by a human arm, approximately executes actions that cannot be reproduced exactly, thereby achieving better follow-up control.
In order to achieve the above purpose, the mechanical arm control method provided by the invention is based on a man-machine fusion strategy and specifically comprises the following steps: (1) acquiring a real-time scene image of the surroundings of a target object; (2) displaying the real-time scene image at an operation end, acquiring the series of human-arm joint angle data during the demonstration action of the human arm, and mapping the human-arm joint angle data into mapped joint angle data θH for controlling the mechanical arm; (3) predicting the target according to the current position data, the position data of the predicted target object and a human-arm trajectory probability distribution model, drawing a preset operation path with a polynomial path planning method, and acquiring planned joint angle data θR for controlling the mechanical arm from the preset operation path; (4) calculating the currently executed joint angle data of the mechanical arm according to the formula θS = (1 − α)θH + αθR, where 0 ≤ α ≤ 1 is the human-machine weight coefficient, which increases as the prediction confidence increases; once the distance between the execution end of the mechanical arm and the predicted target object is smaller than a preset distance, the confidence increases as that distance decreases; (5) controlling the mechanical arm to execute the operation action with the currently executed joint angle data as the control command.
In the control process, joint angle data are first weighted according to the action currently input by the user, based on the confidence in the predicted target object, to perform man-machine fusion: at long range the motion data of the human arm dominate the control, and at close range the on-site planning scheme dominates. This effectively avoids judgment errors caused by the person inspecting only images of the site environment. Moreover, because the scheme is based on a polynomial path planning method, damage to the target object from purely manual control is effectively avoided.
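A minimal sketch of this fusion rule, assuming the joint angles are handled as NumPy vectors (the function and variable names are illustrative, not from the patent):

    import numpy as np

    def fuse_joint_angles(theta_h, theta_r, alpha):
        # theta_h: joint angles mapped from the human arm (theta_H)
        # theta_r: joint angles from the planned path (theta_R)
        # alpha:   human-machine weight coefficient, clipped to [0, 1]
        alpha = float(np.clip(alpha, 0.0, 1.0))
        return (1.0 - alpha) * np.asarray(theta_h) + alpha * np.asarray(theta_r)

    # Far from the target (low confidence) the human input dominates;
    # close to the target (high confidence) the planned path dominates.
    theta_h = [0.10, 0.52, -0.31, 1.20, 0.05]
    theta_r = [0.12, 0.50, -0.28, 1.25, 0.00]
    print(fuse_joint_angles(theta_h, theta_r, alpha=0.2))  # mostly human
    print(fuse_joint_angles(theta_h, theta_r, alpha=0.9))  # mostly machine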
The predicted target object is the object closest to the position of the execution end.
In a specific scheme, the position data of the target object are predicted according to a human-arm trajectory probability distribution model.
More specifically, the position data of the target object is predicted with the acquired human-arm trajectory probability distribution model based on the following formula:

    G* = argmax_G P(G | ξS→U, θ)

(rendered as an equation image in the original) wherein G* is the most likely target position, S is the initial position, U is the user input position, ξ represents the motion trajectory, θ is any parameter that may have an effect on the prediction, and CG is the cost function. The probability function P(G) satisfies a further formula (an equation image in the original, not reproduced here), wherein L is the distance from the initial position of the end of the mechanical arm to the target position and l is the distance from the current end of the mechanical arm to the target position.
Further, the human-machine weight coefficient α is given by a piecewise expression of the confidence (an equation image in the original; its structure is described in the scheme below), wherein conf is the confidence, with

    conf = P(G* | ξS→U) log P(G* | ξS→U)

    P(G* | ξS→U) = P(ξS→U | G*) P(G*).
the preferable scheme is that when the confidence coefficient is in the first confidence coefficient interval, alpha is 0; the first confidence interval is located in the first half; when the confidence coefficient is in the second confidence coefficient interval, the alpha is linearly increased along with the increase of the confidence coefficient; when the confidence coefficient is in a third confidence coefficient interval, the alpha exponentially increases along with the increase of the confidence coefficient; the third interval is positioned in the second half, and the first confidence interval, the second confidence interval and the third confidence interval are adjacent and spliced into the whole motion confidence interval.
The preferred scheme is that the polynomial path planning method uses a joint angle function satisfying the following formula:

    θ(t) = a5×t^5 + a4×t^4 + a3×t^3 + a2×t^2 + a1×t + a0

subject to the constraint conditions (shown as an equation image in the original; for a rest-to-rest motion these are θ(0) = θ0 and θ(T) = θT with zero joint velocity and acceleration at both ends), wherein t is time and T is the time of moving to the designated position.
Preferably, when the acceleration of a human-arm joint angle is detected to exceed a preset threshold, the human-machine weight coefficient α is assigned a value smaller than 0.5.
In order to achieve the above other object, the invention provides a preferable scheme in which the mechanical arm is a heterogeneous humanoid mechanical arm, and mapping the human-arm joint angle data into the mapped joint angle data θH for controlling the mechanical arm comprises the following steps: (1) matching the end workspaces of the mechanical arm and the human arm to obtain a transformation matrix between the two spaces; calculating target wrist-joint coordinate data of the mechanical arm with the transformation matrix, based on the current wrist-joint coordinate data of the human arm and the shoulder-joint coordinate data of the mechanical arm; (2) calculating elbow-joint coordinate data of the humanoid mechanical arm based on the target wrist-joint coordinate data and the shoulder-joint coordinate data, taking the direction vector of the human forearm as the forearm direction vector; (3) taking the elbow-joint and shoulder-joint angle data of the mechanical arm as independent variables and the end workspace of the mechanical arm as their value range, constructing a fitness function at least from the computed deviation of the mechanical-arm wrist position relative to the target wrist-joint coordinate data, and searching out the optimal elbow-joint and shoulder-joint angle data with a genetic algorithm as the mapped shoulder-joint and elbow-joint angle data of the mechanical arm; and calculating the mapped wrist-joint angle data of the mechanical arm from the wrist-joint angle data of the human arm.
According to the scheme, the transformation matrix between the end workspaces of the human arm and the mechanical arm is first obtained, from which theoretical coordinate data of the mechanical-arm end and of the elbow joint are derived; the optimal elbow-joint and shoulder-joint angle data are then searched out with a genetic algorithm. Because the fitness function mainly considers the computed deviation of the end position, i.e. it attends chiefly to the motion precision of the execution end, the algorithm searches for an approximate solution satisfying the fitness requirement and executes it approximately. The joint angle data of the humanoid mechanical arm during execution therefore need not follow the human-arm joint angle data exactly, and heterogeneous control is performed better.
The preferable scheme is that the solving step comprises: (1) taking the angle data represented by each generation of individuals as approximate solutions of the elbow-joint and shoulder-joint angles and calculating the approximate wrist position; (2) obtaining the computed deviation of the wrist position from the target wrist-joint data and the approximate wrist position, and obtaining the optimal solution by minimizing that deviation.
The preferred scheme is that the fitness function is constructed from the computed deviations of both the wrist position and the elbow position, with the wrist deviation weighted preferentially. By mainly considering the execution precision of the end motion while partially considering the precision of the elbow joint, i.e. giving priority to motion-execution precision, the motion of the whole humanoid mechanical arm stays closer to the motion of the human arm, which makes it easier for the operator to decide the next action from the video and thus to guide the field operation of the mechanical arm.
The preferred approach is that the fitness function is fit = Dh + 0.03·De + 0.03·Sum, wherein Dh is the Euclidean distance between the approximate wrist position and the position represented by the target wrist-joint coordinate data, De is the Euclidean distance between the approximate elbow position calculated from the elbow-joint angle approximate solution and the position represented by the elbow-joint coordinate data, and Sum is the sum of the joint angle values represented by the current individual. On the premise of guaranteeing end accuracy, the fitness value rises for solutions whose elbow position deviates too far from the target and for distorted configurations, so that such solutions are excluded and off-site control is performed better.
The preferred scheme is that the step of searching for the optimal elbow-joint and shoulder-joint angle data with a genetic algorithm comprises: (1) taking the end workspace of the mechanical arm as the value range, randomly generating a number of elbow-joint and shoulder-joint angles to construct individuals and form an initial population; (2) calculating individual fitness values with the fitness function, sorting them, generating random integers from an exponential distribution to select individuals, and breeding the next generation from the two randomly selected individuals by direct fusion; (3) replacing, in a preset proportion, the individuals with the largest fitness values by the offspring, and adding new individuals generated by mutation according to the mutation probability; (4) repeating until a preset condition is met, and taking the joint angle data represented by the individual with the smallest fitness value as the approximate solution of the elbow-joint and shoulder-joint angles.
The optimal scheme is that, based on the acquired three-dimensional information of the mechanical arm's workplace, the value range of the independent variables is the feasible workspace obtained by removing the interference space from the end workspace of the mechanical arm. In the control process this effectively prevents the site environment from interfering with the mechanical arm's motion, so the operator does not need to judge the interfering environment precisely during remote control, and the mechanical arm's motion is controlled remotely more easily.
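As an illustration of how this feasible-workspace restriction can be enforced, a minimal sketch assuming the interference space is supplied as a set of occupied voxels (the voxel representation and all names are illustrative, not from the patent):

    import numpy as np

    def voxel_key(p, voxel=0.02):
        # Index of the voxel (edge length in metres) containing point p.
        return tuple(int(v) for v in np.floor(np.asarray(p) / voxel))

    def feasible(wrist, elbow, occupied_voxels, voxel=0.02):
        # A candidate joint solution is kept only if both the wrist and
        # the elbow positions lie outside the sensed interference space.
        return (voxel_key(wrist, voxel) not in occupied_voxels and
                voxel_key(elbow, voxel) not in occupied_voxels)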
Drawings
FIG. 1 is a flowchart illustrating a method for controlling a robotic arm according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for mapping actions according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of modeling a human arm DH in an embodiment of the invention;
FIG. 4 is a geometric relationship between a joint point and a vector of a right arm according to an embodiment of the present invention;
FIG. 5 is a geometric relationship between the joint point and the vector of the left arm according to the embodiment of the present invention;
FIG. 6 is a degree of freedom matching relationship of motion mapping between a human arm and a mechanical arm in an embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating a matching process of the human arm and the end working space of the robot in the X-Z coordinate system according to the embodiment of the present invention;
FIG. 8 is a schematic diagram illustrating a matching process of the human arm and the end working space of the robot in the Y-Z coordinate system according to the embodiment of the present invention;
FIG. 9 is a schematic diagram illustrating a matching process of the end working spaces of the human arm and the manipulator in an X-Y coordinate system according to an embodiment of the present invention;
FIG. 10 is a schematic diagram illustrating a matching process of the human arm and the end working space of the robot in the X-Y-Z three-dimensional coordinate system according to the embodiment of the present invention;
FIG. 11 is a diagram illustrating a normalization process of elbow coordinate data of a robotic arm according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of the DH modeling of the mechanical arm in an embodiment of the present invention;
FIG. 13 is the designed relationship between the human-machine weight coefficient and the confidence;
FIG. 14 is a graph showing the relationship between the human-machine weight coefficient and the confidence coefficient in the practical experiment of the present invention;
FIG. 15 is a diagram showing the matching effect of human motion, machine motion and their fusion in the X-axis direction;
FIG. 16 is a diagram showing the matching effect of human motion, machine motion and their fusion in the Y-axis direction;
FIG. 17 is a diagram showing the matching effect of human motion, machine motion and their fusion in the Z-axis direction;
FIG. 18 is a photograph of the single-target obstacle-avoidance experiment, with the first row machine-controlled alone, the second row human-controlled alone, and the third and fourth rows under human-machine shared control;
FIG. 19 is a photograph of the multi-target predictive grasping experiment, with the first and second rows showing long-distance target grasping and the third and fourth rows showing short-distance target grasping.
The invention is further illustrated by the following examples and figures.
Detailed Description
Examples
As shown in fig. 1, the mechanical arm control method based on man-machine fusion of the present invention comprises an acquisition step S1, a mapping step S2, a planning step S3, a fusion step S4 and a control step S5, specifically as follows:
in the acquisition step S1, a real-time scene image around the target object is acquired.
Specifically, a camera mounted on the robot or cameras arranged at the work site are used to obtain scene images during the robot's operation; the number of cameras is chosen according to the actual situation, and depth information of the work scene can be obtained by deploying depth cameras.
In the mapping step S2, the real-time scene image is displayed at the operation end, the series of human-arm joint angle data during the demonstration action is acquired, and the human-arm joint angle data are mapped into the mapped joint angle data θH for controlling the mechanical arm.
As shown in fig. 2, the mapping step S2 specifically includes an obtaining step S21, a matching step S22, a calculating step S23, and a solving step S24, and the specific process is as follows:
in the acquisition step S21, joint angle data of the human arm during the demonstration motion is acquired.
The most widely used model of the human upper limb is the 7-degree-of-freedom model shown in FIG. 3. The base coordinate system (X0 Y0 Z0) is located between the human trunk and the shoulder joint. Three motion coordinate systems are established at the shoulder joint, corresponding to its three degrees of freedom: adduction/abduction (X1 Y1 Z1), internal/external rotation (X2 Y2 Z2) and flexion/extension (X3 Y3 Z3). The elbow joint has a flexion/extension (X4 Y4 Z4) degree of freedom. At the wrist joint, three motion coordinate systems are established for internal/external rotation (X5 Y5 Z5), adduction/abduction (X6 Y6 Z6) and flexion/extension (X7 Y7 Z7). The motion coordinate system of the end effector (X8 Y8 Z8) is located at the fingers.
The link geometric parameters a(i−1) and di in the D-H model can be determined from the proportional relationship between the dimensions of the moving limbs and the body height, as shown in Table 1 below; the rotation angle θi about the axis Zi of the i-th joint degree of freedom depends on the given range of motion βi of the corresponding joint degree of freedom.

Table 1: Human arm DH parameters (reproduced as an image in the original and not shown here)
A monocular camera is used, and a ResNet-50 deep convolutional neural network performs the human pose estimation task in three-dimensional space; the network is trained to gradient convergence and outputs the expected coordinate values of the relevant skeletal joint points, as shown in fig. 4 and 5. The limb vectors (whose symbols appear as images in the original) are: the vector of the left arm from the shoulder to the elbow joint and the vector of the left arm from the elbow to the wrist, with the right arm defined in the same way. In addition, several auxiliary limb vectors are used in the calculation: the vector from the left shoulder to the right shoulder and the vector from the right shoulder to the left shoulder, which represent the horizontal direction of the plane of the human body; the vector from the neck to the pelvis, which represents the vertically downward direction of the plane of the body and is used in calculating the angles of both the left and right arms; and, for each arm, the cross products of these vectors with the corresponding arm vectors.

Taking the left arm as an example, the four shoulder and elbow joint angles are computed from these vectors; the four formulas appear as equation images in the original and are not reproduced here.
The three wrist joint degrees of freedom are measured by an attitude sensor (IMU), in this embodiment an HWT901B model deployed at the wrist. From the DH model, the transform from the base coordinate system to the wrist is the product of the known shoulder and elbow transforms and the wrist transform that remains to be solved (these matrix equations appear as images in the original). From the IMU coordinates, the matrices T_AX, T_AY, T_AZ, generated from the IMU's three Euler angles relative to the initial coordinate system, together with A, B, C, D, the transformation matrices between the aforementioned eight coordinate systems (coordinate system No. 0 to coordinate system No. 7), determine the three wrist joint angles; for example θ6 = arccos(−ax), with the remaining wrist-angle expressions likewise given as equation images in the original.
As shown in fig. 6, the human-machine action mapping algorithm maps the three degrees of freedom of the human shoulder and the one degree of freedom of the elbow to the first five degrees of freedom of YUMI. The two degrees of freedom at the end of YUMI are directly controlled by the internal/external rotation and extension/flexion of the human wrist as measured by the IMU. The algorithm mainly comprises two parts: workspace matching and a genetic algorithm.
In the matching step S22, the end workspaces of the mechanical arm and the human arm are matched to obtain a transformation matrix between the two spaces, and the target wrist-joint coordinate data of the mechanical arm are calculated with the transformation matrix based on the current wrist-joint coordinate data of the human arm.
All 7 joints of a single arm of the YUMI robot are revolute joints; coordinate systems are established on the arm as shown in fig. 12. The YUMI DH parameters are listed in Table 2 below, wherein a(i−1) and α(i−1) respectively denote the length and twist angle of link i−1, and di and θi denote the offset and joint angle of link i.

Table 2: YUMI DH parameters (reproduced as an image in the original and not shown here)
According to the DH parameters in the table, the kinematic models of the human arm and the YUMI arm can be obtained. The homogeneous transformation between two adjacent coordinate systems in fig. 12 can be expressed with the DH parameters as (the standard modified-DH form, reconstructed here from the parameter definitions; the original shows an equation image):

    (i-1)_(i)T = | cθi            −sθi            0          a(i−1)       |
                 | sθi·cα(i−1)    cθi·cα(i−1)    −sα(i−1)   −di·sα(i−1)  |
                 | sθi·sα(i−1)    cθi·sα(i−1)     cα(i−1)    di·cα(i−1)  |
                 | 0              0               0          1           |    (1)

where c is an abbreviation for the trigonometric function cos and s for sin. Further, the relationship of the end-effector position to the base position can be calculated as follows:

    P0 = ( 0_1T · 1_2T · … · (n−1)_nT ) · Pn

wherein n denotes the number of degrees of freedom, P = [x, y, z, 1] denotes the position of the relevant joint, and each factor is a homogeneous matrix of the form in formula (1).
Motion-space matching applies a geometric transformation so that the two originally different motion or velocity spaces of a master-slave heterogeneous manipulator pair, or of the human arm and the mechanical arm, match as closely as possible. The motion-matching control methods generally adopted for master-slave manipulators are of three kinds: joint-to-joint matching, end-pose matching and point-to-point matching. Of the three, joint-to-joint matching is mainly used for isomorphic master-slave manipulators, while point-to-point matching is the most commonly used and suits both isomorphic and heterogeneous master-slave manipulators. In this embodiment, the general position-matching method for master-slave heterogeneous manipulator motion matching combines a rotation angle α about the Z axis of the slave base coordinate system, magnification factors Sx, Sy, Sz along the X, Y, Z axes of the human-arm base coordinate system, and translation distances lx, ly, lz along those axes, mapping the human-arm end position [X, Y, Z] to the slave end-effector position [Xp, Yp, Zp] (the formula appears as an equation image in the original). Taking the human shoulder joint and the YUMI shoulder joint as the master and slave base coordinate systems respectively, Sx = 0.9186, Sy = 0.8476, Sz = 0.8775 and α = 180°; the resulting transformation relationship (also an equation image in the original) follows. Fig. 10 is a three-dimensional point cloud of the matched human and YUMI workspaces, and figs. 7 to 9 are its plane projections.
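A sketch of this position-matching map with the scale factors reported above; the composition order (scale, then rotate about Z, then translate) and the zero offsets are assumptions, since the original formula is an unreproduced image:

    import numpy as np

    def match_position(p_human, s=(0.9186, 0.8476, 0.8775),
                       alpha_deg=180.0, offset=(0.0, 0.0, 0.0)):
        # Map a human wrist position (shoulder base frame) into the YUMI
        # workspace: per-axis scaling, rotation about Z, then translation.
        a = np.deg2rad(alpha_deg)
        rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
        scaled = np.asarray(s) * np.asarray(p_human, dtype=float)
        return rz @ scaled + np.asarray(offset)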
In the calculation step S23, elbow joint coordinate data of the humanoid robot arm is calculated based on the target wrist joint coordinate data and the shoulder joint coordinate data and using the direction vector of the human forearm as the forearm direction vector.
Considering that the mapping should mimic the human arm configuration, as shown in fig. 11, once the shoulder position is fixed the end position and the elbow position determine the basic configuration of the arm. The wrist position is first mapped into the YUMI workspace in the manner above; then, because the human forearm length differs from the YUMI forearm length, the elbow position must be adjusted for normalization. We retain the orientation of the forearm and set its length to the YUMI forearm length l.
Here l is the YUMI forearm length, O is the shoulder origin, E is the elbow coordinate and W is the wrist coordinate; the adjustment yields the YUMI elbow coordinate E′ corresponding to the current arm motion, E′ = W + l·(E − W)/‖E − W‖ (reconstructed from the description; the original shows the construction in fig. 11).
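A sketch of this normalization, keeping the mapped wrist W fixed and re-placing the elbow along the original forearm direction at the YUMI forearm length l (names are illustrative):

    import numpy as np

    def normalize_elbow(wrist, elbow, l_forearm):
        # Re-place the elbow at distance l_forearm from the mapped wrist,
        # along the original wrist-to-elbow direction (E' in fig. 11).
        w, e = np.asarray(wrist, float), np.asarray(elbow, float)
        d = e - w
        n = np.linalg.norm(d)
        if n < 1e-9:
            raise ValueError("degenerate forearm: elbow coincides with wrist")
        return w + l_forearm * d / n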
In the solving step S24, the elbow-joint and shoulder-joint angle data of the mechanical arm are taken as independent variables with the end workspace of the mechanical arm as their value range; a fitness function is constructed at least from the computed deviation of the mechanical-arm wrist position relative to the target wrist-joint coordinate data, and the optimal elbow-joint and shoulder-joint angle data are searched out with a genetic algorithm as the mechanical arm's shoulder-joint and elbow-joint angle data; the wrist-joint angle data of the mechanical arm are calculated from the wrist-joint angle data of the human arm.
That is, the genetic algorithm is used to obtain the corresponding inverse solution of the angles, i.e. [xh, yh, zh, xe, ye, ze] → [θ1, θ2, θ3, θ4, θ5].
The step of searching for the optimal elbow-joint and shoulder-joint angle data with the genetic algorithm, as the approximate solution of the joint angles, comprises the following steps:
initializing step S41, taking the end working space of the mechanical arm as a value range, randomly generating a plurality of elbow joints and shoulder joint angles to construct individuals, and forming an initial population; and constructing a fitness function.
In the mechanical arm workspace S, N solutions are randomly generated as the population, with real-number coding as the encoding mode; i.e. an individual is defined as:

    ind[i] = [θ1, θ2, θ3, θ4, θ5]^T, i = 1, 2, …, n

For the fitness function of an individual, the DH matrices from the modeling are used to calculate, for each individual, the corresponding mechanical-arm end (wrist) position H, elbow position E and joint-angle sum Sum (the summation expression appears as an equation image in the original), and the Euclidean distances between the end and elbow positions and their target positions are calculated respectively:

    Dh = ‖H − H0‖
    De = ‖E − E0‖

In this embodiment, the fitness function is defined as:

    fit = Dh + 0.03·De + 0.03·Sum
That is, Dh is the Euclidean distance between the approximate wrist position and the position represented by the target wrist-joint coordinate data, De is the Euclidean distance between the approximate elbow position calculated from the elbow-joint angle approximate solution and the position represented by the elbow-joint coordinate data, and Sum is the sum of the joint angle values represented by the current individual. On the premise of guaranteeing end accuracy, the fitness value therefore rises for solutions whose elbow position deviates too far from the target and for distorted configurations, excluding such large values. In other words, in this embodiment the fitness function fit is constructed from the computed deviations of the wrist and elbow positions, with priority given to the wrist deviation; "priority" here means its weight coefficient is at least 10 times that of the other terms. By mainly considering the execution precision of the end motion while partially considering the elbow-joint precision, i.e. giving priority to motion-execution precision, the motion of the whole humanoid mechanical arm stays closer to the human arm's motion, which makes it easier for the operator to decide the next action from the video and to guide the field operation of the mechanical arm.
In the selecting and breeding step S42, the fitness value of each individual in the generation is calculated with the fitness function fit and the individuals are sorted; random integers generated from an exponential distribution select individuals, and the next generation is bred from the two randomly selected individuals by direct fusion.
After the individual fitness is calculated, the individuals are numbered in order of fitness from high to low, i.e. fitness-function value from small to large. Individuals are selected by generating random integers from an exponential distribution.
That is, the probability P that the individual numbered k is selected is:

    P(k) = λe^(−λk), k = 1, 2, …, n

so that a solution with a small fitness-function value is selected with much greater likelihood than one with a large fitness-function value.
For the two individuals selected in this way, the next generation is produced by direct fusion:

    son = k·ind[i] + (1 − k)·ind[j]

where k is a randomly generated real number between 0 and 1.
In the crossing and mutation step S43, part of the individuals with the largest fitness values are replaced by the offspring in a preset proportion, and new individuals generated by partial mutation are added according to the mutation probability.
This is repeated to produce offspring amounting to 20% of the population, replacing the 20% of individuals with the largest fitness-function values. After replacement, each individual mutates with probability 0.05: a small random increment is added to produce a new individual.
These steps are repeated until a preset condition is met, and the joint angle data represented by the individual with the smallest fitness value are taken as the approximate solution of the elbow-joint and shoulder-joint angles.
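A compact sketch of the genetic search described in steps S41 to S43, assuming a user-supplied fk that returns the wrist and elbow positions for a candidate five-angle solution (the forward-kinematics chain sketched earlier can serve as fk); the population size, λ, mutation scale and iteration count are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(ind, fk, w_target, e_target):
        # fit = Dh + 0.03*De + 0.03*Sum (smaller is better); Sum is taken
        # here as the sum of joint-angle magnitudes.
        wrist, elbow = fk(ind)
        d_h = np.linalg.norm(wrist - w_target)
        d_e = np.linalg.norm(elbow - e_target)
        return d_h + 0.03 * d_e + 0.03 * np.sum(np.abs(ind))

    def pick_rank(n, lam=0.3):
        # Exponentially distributed random rank: low ranks (small fitness
        # values, i.e. better solutions) are selected far more often.
        return min(int(rng.exponential(1.0 / lam)), n - 1)

    def solve_ik_ga(fk, w_target, e_target, lo, hi, pop_size=100, iters=200):
        pop = rng.uniform(lo, hi, size=(pop_size, 5))  # initial population
        for _ in range(iters):
            fits = np.array([fitness(p, fk, w_target, e_target) for p in pop])
            pop = pop[np.argsort(fits)]                # best first
            n_child = pop_size // 5                    # breed 20% offspring
            children = np.empty((n_child, 5))
            for c in range(n_child):
                i, j = pick_rank(pop_size), pick_rank(pop_size)
                k = rng.random()
                children[c] = k * pop[i] + (1 - k) * pop[j]  # direct fusion
            pop[-n_child:] = children                  # replace the worst 20%
            mutants = rng.random(pop_size) < 0.05      # 5% mutation chance
            pop[mutants] += rng.normal(0.0, 0.05, size=(mutants.sum(), 5))
            pop = np.clip(pop, lo, hi)                 # stay in the workspace range
        fits = np.array([fitness(p, fk, w_target, e_target) for p in pop])
        return pop[int(np.argmin(fits))]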
First, the transformation matrix between the end workspaces of the human arm and the mechanical arm is obtained, from which the theoretical coordinate data of the mechanical-arm end and of the elbow joint are derived; the optimal elbow-joint and shoulder-joint angle data are then searched out with the genetic algorithm. Because the fitness function mainly considers the computed deviation of the end position, i.e. attends chiefly to the motion precision of the execution end, an approximate solution satisfying the fitness requirement is searched out and executed approximately; the joint angle data of the humanoid mechanical arm during execution need not follow the human-arm joint angle data exactly, and heterogeneous control is performed better.
In the solving step, the angle data represented by each generation of individuals are first taken as approximate solutions of the elbow-joint and shoulder-joint angles and the approximate wrist position is calculated; the computed deviation of the wrist position is then obtained from the target wrist-joint data and the approximate wrist position, and the optimal solution is obtained at least by minimizing that deviation.
In the planning step S3, a preset operation path is drawn with the polynomial path planning method based on the current position data and the position data of the predicted target object, and the planned joint angle data θR for controlling the mechanical arm are acquired from the preset operation path.
In this embodiment, the position data of the predicted target object are predicted from a human-arm trajectory probability distribution model; the specific process is as follows:
first, we need to calculate the target position that we may reach next based on the path input by the user, i.e. predict the most likely target object
Figure BDA0002351978430000145
As shown in formula
Figure BDA0002351978430000141
In the formula G*To most probably reach the target position, S is the initial position, U is the user input position, xi represents the motion trail, thetaIs any parameter that may have an effect on the prediction.
Track xi taking user input into accountS→U,G*The calculation formula can be written as
Figure BDA0002351978430000142
Trace xi we input to userS→UProbability of occurrence makes the maximum entropy assumption that the track occurrence probability and the track cost value increase exhibit an exponential decay trend.
P(ξ|G)∝exp(-CG(ξ))
CG=Tx 2+Ty 2+Tz 2+Rx 2+Ry 2+Rz 2
Wherein, CGAs a function of cost, Ti(i ═ x, y, z) and RiAnd (i ═ x, y, z) are the translation speed and the rotation speed on the X, Y, Z axis, respectively.
Figure BDA0002351978430000143
In practical calculations, we approximate the integral on the trajectory using the laplacian method, which can be simplified as:
Figure BDA0002351978430000144
p (G) the calculation formula is as follows:
Figure BDA0002351978430000151
wherein, L is the distance from the initial position of the end of the mechanical arm to the target position, and L is the distance from the current end of the mechanical arm to the target position.
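A sketch of this prediction over a discrete set of candidate targets, with a quadratic path cost standing in for CG; the form of the prior P(G) is an assumption, since the original formula is an unreproduced image:

    import numpy as np

    def path_cost(xi):
        # C_G approximated as the sum of squared translation steps along
        # the trajectory xi (an (N, 3) array); rotation terms omitted.
        return float(np.sum(np.diff(xi, axis=0) ** 2))

    def goal_posterior(xi, goals, start):
        # P(G | xi_{S->U}) proportional to P(xi_{S->U} | G) * P(G), with
        # the maximum-entropy likelihood P(xi | G) ~ exp(-C_G(xi)).
        xi = np.asarray(xi, float)
        u = xi[-1]
        scores = []
        for g in goals:
            g = np.asarray(g, float)
            c = path_cost(np.vstack([xi, g]))        # path so far plus leg to g
            L = np.linalg.norm(g - start)            # initial distance to goal
            l = np.linalg.norm(g - u)                # current distance to goal
            prior = max(1.0 - l / max(L, 1e-9), 1e-6)  # assumed form of P(G)
            scores.append(np.exp(-c) * prior)        # scale costs in practice
        p = np.array(scores)
        return p / p.sum()

    def confidence(p_best):
        # conf = P log P, following the expression given in the description.
        return float(p_best * np.log(max(p_best, 1e-12)))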
Secondly, after the position of the predicted target object has been predicted, its three-dimensional position PG is obtained by vision. The joint-space position θT is calculated through the inverse kinematics of the mechanical arm, and the time T for moving to the designated position is determined according to the given speed v. Interpolation is then performed in joint space using a quintic function, as shown below:

    θ(t) = a5×t^5 + a4×t^4 + a3×t^3 + a2×t^2 + a1×t + a0

According to the constraint conditions (shown as an equation image in the original; for a rest-to-rest motion they are θ(0) = θ0 and θ(T) = θT together with zero joint velocity and acceleration at both ends), the coefficients are determined, giving the correspondence between time and the joint-space angle.
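A sketch that solves the six quintic coefficients from these boundary conditions as a linear system (the rest-to-rest end conditions are an assumption, as noted above):

    import numpy as np

    def quintic_coeffs(theta0, thetaT, T):
        # Solve [a5, a4, a3, a2, a1, a0] from the six boundary conditions:
        # end positions theta0/thetaT, zero velocity and acceleration at both ends.
        A = np.array([
            [0,       0,       0,      0,    0, 1],  # theta(0)
            [T**5,    T**4,    T**3,   T**2, T, 1],  # theta(T)
            [0,       0,       0,      0,    1, 0],  # theta'(0)
            [5*T**4,  4*T**3,  3*T**2, 2*T,  1, 0],  # theta'(T)
            [0,       0,       0,      2,    0, 0],  # theta''(0)
            [20*T**3, 12*T**2, 6*T,    2,    0, 0],  # theta''(T)
        ], dtype=float)
        b = np.array([theta0, thetaT, 0.0, 0.0, 0.0, 0.0])
        return np.linalg.solve(A, b)

    # np.polyval expects the highest-order coefficient first, matching a5..a0.
    coeffs = quintic_coeffs(theta0=0.0, thetaT=1.2, T=3.0)
    print(np.polyval(coeffs, 1.5))  # joint angle halfway through the motion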
The joint-space position of the mechanical arm is calculated at the sampling instants of the motion capture, and man-machine fusion is performed according to the confidence and the weight at the current position to generate the new joint position.
In the fusion step S4, the currently executed joint angle data of the mechanical arm are calculated according to the formula θS = (1 − α)θH + αθR, where 0 ≤ α ≤ 1 is the human-machine weight coefficient, which increases as the confidence in the predicted target for the execution end of the mechanical arm increases.
During man-machine fusion, the task-allocation weight must be adjusted according to the likelihood of grasping the target. In this embodiment a piecewise function adjusts the task weight: a linear expression is used in the first part so that the machine moves smoothly, and an exponential function is used in the latter part so that the human-machine control weight switches quickly and the response speed improves. Specifically, when the confidence for the execution end and the predicted target object is in the first confidence interval, α = 0, and this interval lies in the first half; when the confidence is in the second confidence interval, α increases linearly with the confidence; when the confidence is in the third confidence interval, α increases exponentially with the confidence, and this interval lies in the second half. The first, second and third confidence intervals are adjacent and together cover the whole confidence range. In this embodiment the relationship between confidence and cooperative weight is as shown in fig. 13 (the specific expression appears as an equation image in the original).
Here conf is the confidence: the human's weight is large at the start, guiding the machine through rough planning, and the machine's weight finally increases to complete precise control; the man-machine fusion weight α must be judged according to the correctness of the predicted grasping target. The confidence is calculated from the entropy of the probability distribution, specifically:

    conf = P(G* | ξS→U) log P(G* | ξS→U)

    P(G* | ξS→U) = P(ξS→U | G*) P(G*).
that is, in the present embodiment, when the predicted target is correct, the confidence increases, the machine weight increases, and the precise operation is completed. And when the prediction is wrong, the confidence coefficient is reduced, the human control weight is increased, the machine moves according to the input of the user until a new task target is predicted, the confidence coefficient is continuously increased, the machine control weight is increased, and the task is completed.
In the control step S5, the robot arm is controlled to execute the operation using the current execution joint angle data as the control command.
In this embodiment, man-machine fusion exploits the strengths of both human and machine: the human plans the rough long-range task and the machine executes the precise close-range task, so the advantages of human-arm and mechanical-arm motion complement each other and the corresponding shortcomings are overcome.
In addition, to avoid the mechanical arm becoming difficult to operate when the execution end is close to the predicted target object and the human determines that interference lies ahead, it is stipulated that when the acceleration of a human-arm joint angle is detected to exceed the preset threshold, i.e. an emergency adjustment action occurs, the human-machine weight α is assigned a value smaller than 0.5, even close to 0, for example 0 to 0.1.
As shown in figs. 13 to 18, the applicant performed a single-target obstacle-avoidance grasping test and a multi-target predictive grasping test. As shown in fig. 18, the human and the machine were isolated to simulate teleoperation: the human operated according to the target on the display screen, the human motion was captured by the camera and converted into machine motion through the motion-mapping algorithm, and the YUMI robot was controlled with an open-source program from the University of California, Berkeley. The sampling frequency was about 1 Hz. The target position was known. The first row in the figure is the path obtained when the machine moved using fifth-order polynomial interpolation alone: it reaches the target position but knocks over the obstacle. The second row is the mechanical arm controlled by the motion-mapping mode alone: it avoids the obstacle but cannot reach the target position accurately for grasping. The third and fourth rows show human-machine fusion control: at the start the human's control weight is larger, steering the machine around the obstacle; as the distance to the target object shrinks, the machine's control weight increases, the target object is grasped accurately, and the arm returns to the initial position.
The comparison of machine-only, human-only and fused control is shown in figs. 15 to 17, i.e. the end-position curves of the three control modes. Machine-only control reaches the target position in about 5 s; human-machine fusion control reaches it in about 12 s; human-only control cannot reach it within 30 s. Fig. 14 shows the confidence and weight curves of the three fusion schedules: linear, exponential and modified. The exponential type switches to weight 1 (full machine control) earliest and fluctuates the most; the linear type changes most smoothly but reaches full machine control slowest; the modified fusion mode changes weight smoothly yet switches quickly at the end, and allows the human to correct the motion of the mechanical arm's end when it is close to the target position.
As shown in fig. 19, two objects to be grasped are placed on the platforms, one near the end of the arm and one far away; the former is white and the latter green, and the target positions are known. Without human intervention the mechanical arm automatically grasps the object closest to its end. The machine predicts the target object to be grasped from the human's current motion and the distance to each object, determines the grasping confidence for each object, judges which object is to be grasped according to the confidence, and computes the human-machine task-allocation weight. When grasping the green object (a): for the first 1.4 s the machine weight is 0 and the arm is fully human-controlled; from 1.4 to 2.3 s the arm is close to the white object, which is predicted as the object to be grasped; as the human's motion continues to change, after 2.3 s the confidence for the green object rises, human-machine cooperative control takes over, and grasping of the green object is completed in about 12 s. When grasping the white object (b): for the first 1.5 s the human steers away from the target, and for the next 1.5 s toward the target object, under full human control; after 3 s the machine weight gradually increases, and grasping of the target object is completed in about 11 s.
In the above embodiment, the user first makes the corresponding motion U according to the recognized object positions; the motion-capture system calculates the angle of each arm joint from the user's motion, and the mechanical-arm joint angles θH are obtained with the human-machine mapping algorithm. Meanwhile, the machine performs intention prediction from the recognized object positions P and the current user action U to obtain the most likely target object G, then performs trajectory planning and obtains θR from the inverse kinematic solution. Task weights are allocated according to the accuracy of the predicted grasping target, the human's control and the machine's control are fused, and the fused control angle θS drives the mechanical arm's motion; the control angle keeps changing with the arm's motion and the weight is adjusted until the task is completed.

Claims (10)

1. A mechanical arm control method based on man-machine fusion is characterized by comprising the following steps:
an acquisition step, acquiring a peripheral real-time scene image of a target object;
mapping stepDisplaying the real-time scene image at a control end, acquiring human arm joint angle series data of a human arm in a demonstration action process, and mapping the human arm joint angle data into mapping joint angle data theta for controlling the mechanical armH
a planning step, drawing a preset operation path with a polynomial path planning method according to the current position data of the execution end and the position data of the predicted target object, and acquiring planned joint angle data θR for controlling the mechanical arm according to the preset operation path;
a fusion step, calculating the currently executed joint angle data of the mechanical arm according to the formula θS = (1 − α)θH + αθR; α is the human-machine weight coefficient, 0 ≤ α ≤ 1, and increases as the confidence in the predicted target increases; after the distance between the execution end of the mechanical arm and the predicted target object becomes smaller than a preset distance, the confidence increases as that distance decreases;
a control step, wherein the currently executed joint angle data are used as the control instruction to control the mechanical arm to execute the operation action;
in the planning step, a human-arm trajectory probability distribution model is established using the maximum entropy principle based on the following formula and is used to predict the position data of the predicted target object:

    G* = argmax_G P(G | ξS→U)

(rendered as an equation image in the original) wherein G* is the most likely target object position, S is the initial position, U is the user input position, ξ represents the motion trajectory, and CG is the cost function; the probability function P(G) satisfies a formula (an equation image in the original, not reproduced here) wherein L is the distance from the initial position of the mechanical-arm end to the target position and l is the distance from the current mechanical-arm end to the target position;
the human-machine weight coefficient α is given by a piecewise expression of the confidence (an equation image in the original, not reproduced here), wherein conf is the confidence, with

    conf = P(G* | ξS→U) log P(G* | ξS→U)

    P(G* | ξS→U) = P(ξS→U | G*) P(G*).
2. The mechanical arm control method according to claim 1, characterized in that:
the prediction target object is an object closest to the position of the execution end.
3. The mechanical arm control method according to claim 1 or 2, wherein the polynomial path planning method uses a joint angle function satisfying the following formula:

    θ(t) = a5×t^5 + a4×t^4 + a3×t^3 + a2×t^2 + a1×t + a0

subject to the constraint conditions (shown as an equation image in the original), where t is time, T is the time of moving to the designated position, θ(t) represents the joint-space position, θT represents the joint-space position at time T, θ0 represents the initial joint-space position, and a0, a1, a2, a3, a4, a5 are the coefficients of the fifth-order polynomial.
4. The mechanical arm control method according to claim 3, characterized in that:
and when the acceleration of the joint angle of the human arm is detected to exceed a preset threshold value, giving a numerical value smaller than 0.5 to the human-machine weight coefficient alpha.
5. The mechanical arm control method according to claim 3, wherein the mechanical arm is a heterogeneous humanoid mechanical arm, and mapping the human-arm joint angle data into the mapped joint angle data θH for controlling the mechanical arm comprises the following steps:
a matching step, matching the end workspaces of the mechanical arm and the human arm to obtain a transformation matrix between the two spaces, and calculating target wrist-joint coordinate data of the mechanical arm with the transformation matrix based on the current wrist-joint coordinate data of the human arm;
a calculating step, calculating elbow-joint coordinate data of the humanoid mechanical arm based on the target wrist-joint coordinate data and the structural dimension data of the mechanical arm, taking the direction vector of the human forearm as the forearm direction vector of the mechanical arm;
a solving step, taking the elbow-joint and shoulder-joint angle data of the mechanical arm as independent variables and the end workspace of the mechanical arm as the value range of the independent variables, constructing a fitness function at least from the computed deviation of the mechanical-arm wrist position relative to the target wrist-joint coordinate data, and searching out the optimal elbow-joint and shoulder-joint angle data with a genetic algorithm as the mapped shoulder-joint and elbow-joint angle data of the mechanical arm; and calculating the mapped wrist-joint angle data of the mechanical arm from the wrist-joint angle data of the human arm.
6. The mechanical arm control method according to claim 5, characterized in that:
calculating the approximate wrist position by taking the angle data represented by each generation of individuals as approximate solutions of the elbow-joint and shoulder-joint angles, wherein the fitness function is fit = Dh + 0.03·De + 0.03·Sum, wherein Dh is the Euclidean distance between the approximate wrist position and the position represented by the target wrist-joint coordinate data, De is the Euclidean distance between the approximate elbow position calculated from the elbow-joint angle approximate solution and the position represented by the elbow-joint coordinate data, and Sum is the sum of the joint angle values represented by the current individual.
7. The mechanical arm control method according to claim 5, wherein the step of searching for the optimal elbow-joint and shoulder-joint angle data using a genetic algorithm comprises the following steps:
taking the end workspace of the mechanical arm as the value range, randomly generating a plurality of elbow-joint and shoulder-joint angles to construct individuals and form an initial population;
calculating individual fitness values based on the fitness function, sorting them, generating random integers from an exponential distribution to select individuals, and breeding the next generation from the two randomly selected individuals by direct fusion;
replacing, in a preset proportion, part of the individuals with the largest fitness values by the offspring individuals generated by breeding, and adding new individuals generated by partial mutation based on the mutation probability;
and repeating the steps until a preset condition is met, and acquiring joint angle data represented by the individual with the minimum fitness value as an approximate solution of the elbow joint and shoulder joint angles.
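A compact Python sketch of these four steps. Two readings here are ours and not spelled out in the claim: "direct fusion" is taken as averaging two parents, and the exponential draw biases selection toward the best-ranked (lowest-fitness) individuals.

import numpy as np

rng = np.random.default_rng(0)

def ga_search(fit_fn, lo, hi, pop_size=60, n_gen=200, replace_frac=0.3, p_mut=0.1):
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))       # initial population
    for _ in range(n_gen):
        fit = np.array([fit_fn(ind) for ind in pop])
        pop = pop[np.argsort(fit)]                            # best (lowest fit) first
        n_rep = int(replace_frac * pop_size)
        idx = np.minimum(rng.exponential(pop_size / 5.0,
                                         (n_rep, 2)).astype(int), pop_size - 1)
        children = 0.5 * (pop[idx[:, 0]] + pop[idx[:, 1]])    # direct fusion
        pop[-n_rep:] = children                               # replace worst-fit tail
        mut = rng.random(pop_size) < p_mut
        mut[0] = False                                        # preserve current best
        pop[mut] = rng.uniform(lo, hi, size=(mut.sum(), len(lo)))  # fresh mutants
    fit = np.array([fit_fn(ind) for ind in pop])
    return pop[np.argmin(fit)]                                # approximate solution

Called as ga_search(fit_fn, lo=joint_lower_limits, hi=joint_upper_limits), this would return approximate elbow and shoulder joint angles for the mapping step.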
8. The robot arm control method according to claim 1 or 2, wherein the mechanical arm is a heterogeneous humanoid mechanical arm, and mapping the human-arm joint angle data into the mapped joint angle data θ_H for controlling the mechanical arm comprises the following steps:
a matching step, namely matching the end workspace of the mechanical arm with that of the human arm to obtain a transformation matrix between the two spaces, and calculating target wrist joint coordinate data of the mechanical arm from the current wrist joint coordinate data of the human arm by using the transformation matrix;
a calculating step, namely calculating elbow joint coordinate data of the humanoid mechanical arm based on the target wrist joint coordinate data and the structural dimension data of the mechanical arm, with the direction vector of the human forearm taken as the forearm direction vector of the mechanical arm; and
a solving step, namely taking the elbow joint and shoulder joint angle data of the mechanical arm as independent variables and the end workspace of the mechanical arm as their value range, constructing a fitness function at least from the deviation of the mechanical arm's wrist joint position relative to the target wrist joint coordinate data, and searching out the optimal elbow joint and shoulder joint angle data by a genetic algorithm to serve as the mapped shoulder joint and mapped elbow joint angle data of the mechanical arm; and calculating the mapped wrist joint angle data of the mechanical arm from the wrist joint angle data of the human arm.
9. The robot arm control method according to claim 8, characterized in that:
the angle data represented by each individual of each generation is taken as an approximate solution of the elbow joint and shoulder joint angles, from which the approximate wrist joint position is calculated; the fitness function is fit = D_h + 0.03·D_e + 0.03·Sum, where D_h is the Euclidean distance between the approximate wrist joint position and the position represented by the target wrist joint coordinate data, D_e is the Euclidean distance between the approximate elbow joint position, calculated from the approximate solution of the elbow joint angles, and the position represented by the elbow joint coordinate data, and Sum is the sum of the joint angle values represented by the current individual.
10. The robot arm control method according to claim 8, wherein the step of searching for the optimal elbow joint and shoulder joint angle data using a genetic algorithm comprises the steps of:
taking the end workspace of the mechanical arm as the value range, randomly generating a plurality of elbow joint and shoulder joint angles to construct individuals and form an initial population;
calculating the fitness value of each individual based on the fitness function, sorting the fitness values from high to low, selecting individuals by random integers drawn from an exponential distribution, and breeding the next generation from the two randomly selected individuals by direct fusion;
replacing, in a preset proportion, the individuals with the largest fitness values by the offspring individuals produced by breeding, and adding new individuals generated by mutation according to the mutation probability; and
repeating the above steps until a preset condition is met, then taking the joint angle data represented by the individual with the smallest fitness value as the approximate solution of the elbow joint and shoulder joint angles.
CN201911419535.1A 2019-12-31 2019-12-31 Mechanical arm control method based on man-machine fusion Active CN111152220B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911419535.1A CN111152220B (en) 2019-12-31 2019-12-31 Mechanical arm control method based on man-machine fusion


Publications (2)

Publication Number Publication Date
CN111152220A CN111152220A (en) 2020-05-15
CN111152220B true CN111152220B (en) 2021-07-06

Family

ID=70560510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911419535.1A Active CN111152220B (en) 2019-12-31 2019-12-31 Mechanical arm control method based on man-machine fusion

Country Status (1)

Country Link
CN (1) CN111152220B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070835B (en) * 2020-08-21 2024-06-25 达闼机器人股份有限公司 Mechanical arm pose prediction method and device, storage medium and electronic equipment
CN112428278B (en) * 2020-10-26 2022-11-15 北京理工大学 Control method and device of mechanical arm and training method of man-machine cooperation model
CN112476456B (en) * 2020-11-25 2022-03-25 浙江工业大学 Arm-wing cooperative variant control system and control method for simulating bird prey


Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US7626571B2 (en) * 2005-12-22 2009-12-01 The Board Of Trustees Of The Leland Stanford Junior University Workspace expansion controller for human interface systems
CN102665590B (en) * 2009-11-16 2015-09-23 皇家飞利浦电子股份有限公司 For the man-robot Compliance control of Endoscope-assisted robot

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN107097227A (en) * 2017-04-17 2017-08-29 北京航空航天大学 A kind of man-machine collaboration robot system
CN108241339A (en) * 2017-12-27 2018-07-03 北京航空航天大学 The movement solution of apery mechanical arm and configuration control method
CN108326852A (en) * 2018-01-16 2018-07-27 西北工业大学 A kind of space manipulator method for planning track of multiple-objection optimization
WO2019222480A1 (en) * 2018-05-16 2019-11-21 University Of Maryland, College Park Confidence-based robotically-assisted surgery system

Non-Patent Citations (2)

Title
A policy-blending formalism for shared control; Anca D. Dragan et al.; The International Journal of Robotics Research; 2015-05-17; Vol. 32, No. 7, pp. 790-805 *
A robotic arm grasping method based on a Gaussian process mixture model (一种基于高斯过程混合模型的机械臂抓取方法); Chen Youdong et al.; Robot (《机器人》); 2019-05-31; Vol. 41, No. 3, pp. 343-352 *

Also Published As

Publication number Publication date
CN111152220A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
CN111152218B (en) Action mapping method and system of heterogeneous humanoid mechanical arm
CN111152220B (en) Mechanical arm control method based on man-machine fusion
CN111515961B (en) Reinforcement learning reward method suitable for mobile mechanical arm
JP5210883B2 Computer-based method and system for controlling the motion of parts of a physical multi-joint system, and for tracking human motion by a physical articulated system separate from the human
Zacharias et al. Making planned paths look more human-like in humanoid robot manipulation planning
JP2019093461A (en) Holding system, learning device, holding method and model manufacturing method
US20210252714A1 (en) Control device, control method, and non-transitory computer-readable storage medium
JP6826069B2 (en) Robot motion teaching device, robot system and robot control device
CN111085996B (en) Control method, device and system of live working robot
CN115179294A (en) Robot control method, system, computer device, and storage medium
CN109807887A (en) Flexible arm Intellisense and control method and system based on deep neural network
CN113829343A (en) Real-time multi-task multi-person man-machine interaction system based on environment perception
JP7035309B2 (en) Master-slave system
CN111260649A (en) Close-range mechanical arm sensing and calibrating method
CN111152227A (en) Mechanical arm control method based on guided DQN control
JPWO2019239562A1 (en) Machine learning device and robot system equipped with it
JP7263987B2 (en) Control device, control method, and control program
Yamane Kinematic redundancy resolution for humanoid robots by human motion database
Fantacci et al. Markerless visual servoing on unknown objects for humanoid robot platforms
CN116160441A (en) Robot teleoperation collision prevention method based on human arm motion prediction
CN114536351A (en) Redundant double-arm robot teaching method and device, electronic equipment and system
Bai et al. Kinect-based hand tracking for first-person-perspective robotic arm teleoperation
Al-Junaid ANN based robotic arm visual servoing nonlinear system
Hwang et al. Real-time grasp planning based on motion field graph for human-robot cooperation
Zorić et al. Performance Comparison of Teleoperation Interfaces for Ultra-Lightweight Anthropomorphic Arms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant