CN106020494A - Three-dimensional gesture recognition method based on mobile tracking - Google Patents

Three-dimensional gesture recognition method based on mobile tracking Download PDF

Info

Publication number
CN106020494A
CN106020494A (application CN201610459875.7A)
Authority
CN
China
Prior art keywords
robot
joint angle
motion
theta
cos
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610459875.7A
Other languages
Chinese (zh)
Other versions
CN106020494B (en)
Inventor
Zhang Ping (张平)
Du Guanglong (杜广龙)
Chen Mingxuan (陈明轩)
He Ziping (何子平)
Jin Peigen (金培根)
Li Fang (李方)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201610459875.7A priority Critical patent/CN106020494B/en
Publication of CN106020494A publication Critical patent/CN106020494A/en
Application granted granted Critical
Publication of CN106020494B publication Critical patent/CN106020494B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Numerical Control (AREA)

Abstract

The invention provides a three-dimensional gesture recognition method based on mobile tracking. The method keeps the relative pose between the robot end-effector and the moving gesture invariant while the operator moves the gesture. The method comprises the following steps: (1) establishing a tracking model; (2) solving the tracking model; (3) driving the robot. By adopting a non-contact, vision-based human-machine interface, the position and pose of the operator's hand can be acquired, and the robot is driven to track and recognize the operator's moving gesture in real time.

Description

Three-dimensional gesture recognition method based on mobile tracking
Technical field
The invention belongs to the field of robot motion, and in particular relates to a three-dimensional gesture recognition method based on mobile tracking.
Background technology
As robots are widely used in numerous areas, their control modes are evolving from conventional human-machine interfaces and teach pendants toward motion-sensing control. In existing motion-sensing control, the operator mostly interacts with the hand in front of a fixed sensor. This interaction mode cannot map a moving gesture intuitively onto the robot's motion trajectory, and a fixed sensor has a fixed effective working region, so the operator's moving gesture may leave that region. This invention proposes a method in which the sensor is fixed on the robot end-effector so that the robot tracks and recognizes the operator's moving three-dimensional gesture; it allows the operator to move the gesture while the relative pose between the robot end-effector and the moving gesture remains invariant. Because the sensor fixed on the end-effector moves together with it while acquiring the position and attitude data of the moving gesture, the effective working region of the sensor is greatly expanded: as long as the operator's hand stays within the motion range of the robot end-effector, the sensor can always capture the position and attitude data of the operator's moving gesture. This solves the problem that the target of a fixed sensor may move beyond its effective working region.
Summary of the invention
This invention proposes a method that allows the robot end-effector to track and recognize the operator's moving three-dimensional gesture. The method employs a non-contact, vision-based human-machine interface that acquires the position and attitude of the operator's hand and drives the robot in real time to track and recognize the operator's moving gesture.
The three-dimensional gesture recognition method based on mobile tracking of the present invention comprises the following steps:
S1, establishing a tracking model;
S2, solving the tracking model;
S3, driving the robot.
Step S1 comprises the following steps:
For the task of tracking and recognizing the operator's gesture, the position and attitude of the operator's hand are captured by a Leap Motion fixed on the robot end-effector.
The Leap Motion has two infrared cameras.
1) Position and attitude model
The robot base, robot joint, Leap Motion, and human-hand coordinate systems are each represented by three mutually orthogonal axes (see Fig. 1). The xOy plane of the robot base coordinate system is horizontal, and the positive Z axis points straight up. In the coordinate system of robot joint i, the Z_{i-1} axis lies along the joint rotation axis according to the right-hand rule, and the X_{i-1} axis lies along the common perpendicular of Z_{i-1} and Z_i. In the Leap Motion coordinate system, the Z_L axis lies along the short side of the Leap Motion, the X_L axis along the long side, and the positive Y_L axis points upward, perpendicular to the Leap Motion plane. In the human-hand coordinate system, the Z_H axis points along the four fingers, consistent with the Z_L axis; the X_H axis points along the thumb, consistent with the X_L axis; and the Y_H axis points away from the back of the hand, consistent with the Y_L direction. The robot uses the Denavit-Hartenberg (D-H) model. Let A_i denote the homogeneous coordinate transformation matrix from coordinate system i-1 to coordinate system i; then:
$$A_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & l_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & l_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & r_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (1)$$
where θ_i is the angle rotated about Z_{i-1} during the coordinate transformation so that X_{i-1} and X_i become parallel; r_i is the distance translated along Z_{i-1} so that X_{i-1} and X_i become collinear; l_i is the distance translated along X_{i-1} so that the origins of X_{i-1} and X_i coincide; and α_i is the angle Z_{i-1} is rotated about X_i so that Z_{i-1} and Z_i coincide in origin and direction.
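For illustration only (not part of the patent): equation (1) maps directly to a small numerical routine. The function name `dh_transform` and the argument order are assumptions; a minimal sketch in Python with NumPy:

```python
import numpy as np

def dh_transform(theta, r, l, alpha):
    """Homogeneous D-H transform A_i from frame i-1 to frame i, per equation (1)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, l * ct],
        [st,  ct * ca, -ct * sa, l * st],
        [0.0,      sa,       ca,      r],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Sanity check: with all parameters zero the transform is the identity.
A = dh_transform(0.0, 0.0, 0.0, 0.0)
```

With theta = π/2 and the other parameters zero, the result is a pure rotation about the Z axis, as the first two columns of equation (1) predict.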
For a robot with six joints, the homogeneous transformation matrix from the base coordinate system to the Leap Motion coordinate system is defined as:
$$T_7 = A_1 A_2 \cdots A_6 A_7 = \begin{bmatrix} n_7^0 & s_7^0 & a_7^0 & p_7^0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (2)$$
where $n_7^0$ is the normal vector of the Leap Motion, $s_7^0$ the sliding vector, $a_7^0$ the approach vector, and $p_7^0$ the position vector.
In particular, because the Leap Motion is fixed on the robot end-effector:
$$A_7 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & h \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (3)$$
where h is the height of the Leap Motion.
The position and attitude of the Leap Motion at time t are defined by the joint-angle vector:
$$X_t = [J_{1,t}\ J_{2,t}\ J_{3,t}\ J_{4,t}\ J_{5,t}\ J_{6,t}]^T \quad (4)$$
where J_i is the i-th joint angle of the robot.
Using (1), (2), (3), and (4):
$$T_7 = X_0 \quad (5)$$
The joint-angle values of the robot at the initial time are obtained by solving (5).
The homogeneous transformation matrix from the Leap Motion coordinate system to the human-hand coordinate system is defined as:
$$A_8 = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & 0 \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & 0 \\ 0 & \sin\alpha_i & \cos\alpha_i & l \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (6)$$
where l is the distance between the hand and the Leap Motion.
Define Z_0 as the position and attitude of the hand at the initial time:
$$T_7 A_8 = Z_0 \quad (7)$$
Define X'_t as the ideal position and attitude of the Leap Motion at time t, i.e. such that the relative pose of the Leap Motion and the hand in the Cartesian state space remains constant:
$$T_7 A_8 = X'_t T \quad (8)$$
where T is the position and attitude matrix of the hand with respect to the Leap Motion, whose parameters are captured by the Leap Motion:
$$T = \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (9)$$
X'_t is obtained by solving (8) and (9).
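As an illustrative sketch only (all names and the toy values are assumptions, not the patent's), equation (8) rearranges to X'_t = T_7 A_8 T^{-1}, which is straightforward to evaluate:

```python
import numpy as np

def target_pose(T7, A8, T_hand):
    """Ideal Leap Motion pose X'_t from equation (8):
    T7 @ A8 = X't @ T  =>  X't = T7 @ A8 @ inv(T)."""
    return T7 @ A8 @ np.linalg.inv(T_hand)

# Toy values: sensor pose 500 mm above the base, hand 200 mm from the sensor.
T7 = np.eye(4); T7[2, 3] = 500.0
A8 = np.eye(4); A8[2, 3] = 200.0
# If the measured hand pose T is the identity, the target pose equals T7 @ A8.
Xp = target_pose(T7, A8, np.eye(4))
```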
2) Gesture tracking and recognition model
Define the maximum angular rate of each robot joint:
$$V_{max} = [v_1\ v_2\ v_3\ v_4\ v_5\ v_6]^T \quad (10)$$
To keep the relative pose of the robot end-effector and the hand constant, the problem is converted into solving the following quadratic programming problem:
$$\min \|X_t - X'_t\|_2 \quad \text{s.t.}\quad \|X_t - X_{t-1}\|_1 \le \Delta T \cdot V_{max} \quad (11)$$
where ΔT is the sampling interval between times t-1 and t.
Step S2 comprises the following steps:
To solve the above quadratic programming problem, two schemes are proposed: an optimal solution and an approximate solution. They can be described geometrically and are illustrated here in the two-dimensional case, mainly considering the situation where the ideal state lies outside the feasible region; the conclusions generalize naturally to 6-dimensional and even n-dimensional space, as shown in Fig. 2.
Optimal solution:
According to the quadratic programming model, the optimal solution is:
$$X_{i,t} = \max(X_{i,t-1} + \Delta T \cdot V_{i,max},\ X'_{i,t}),\quad i = 1, 2, \ldots, 6 \quad (12)$$
This is the optimal solution of the quadratic programming model: the actual state is closest to the ideal state corresponding to the optimal solution. Semantically, in the motion from the robot's current joint-angle state to the target joint-angle state, each joint angle moves to the target value corresponding to the optimal solution if it can reach it; otherwise it moves at maximum speed to the value nearest that target. This solution separates the state components and considers the optimum of each component individually. However, the position and attitude of the robot end-effector are jointly determined by the six joint angles, so the component-wise optimum in joint-angle space is not necessarily the optimum of the robot's position and attitude, and the robot's motion may be unstable. The approximate solution addresses this motion-smoothness problem.
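A sketch of the per-joint optimal solution, following the semantic description above (the symmetric clamp for motion in either direction, and all names, are assumptions; equation (12) as printed covers the one-sided case):

```python
import numpy as np

def optimal_step(x_prev, x_target, v_max, dt):
    """Move each joint toward its target, clamping the per-joint travel
    to dt * v_max (the component-wise semantics of equation (12))."""
    delta = np.clip(x_target - x_prev, -dt * v_max, dt * v_max)
    return x_prev + delta

x_prev = np.zeros(6)
x_target = np.array([0.1, 1.0, -1.0, 0.05, 0.0, 2.0])
v_max = np.array([2.71, 2.16, 3.59, 6.69, 6.82, 8.07])  # example rates from the patent's embodiment
x_next = optimal_step(x_prev, x_target, v_max, dt=0.1)
```

Joints 1, 4, and 5 reach their targets within the step; the others are clamped to dt * v_max, which is exactly the component-wise behavior the paragraph above describes.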
Approximate solution:
In the optimal solution above, because the speeds differ and the target values of the joint angles differ, the ratio between the joints' motion increments changes constantly. For the approximate-solution scheme, smooth motion of the portable Leap Motion platform is therefore defined as keeping the ratio of the joints' movement speeds equal, at every moment, to the ratio of their maximum movement speeds, so that the joints maintain their relative motion state at all times. The approximate solution is then:
$$X_t = X_{t-1} + \frac{\Delta T}{\max\left(\Delta T,\ \max_i\left((X'_{i,t} - X_{i,t-1})/V_{i,max}\right)\right)} \cdot (X'_t - X_{t-1}) \quad (13)$$
The approximate solution is an approximately optimal solution of the quadratic programming model; in essence it makes the robot move along the direction of the target point. Semantically: in the motion from the robot's current joint-angle state to the target joint-angle state, if each joint angle is considered individually, the time joint i needs to reach its target angle at maximum speed is $T_{i,t} = (X'_{i,t} - X_{i,t-1})/V_{i,max}$. If $T_{i,t} > \Delta T$, the joint cannot reach its target value within ΔT; otherwise it can. The optimal solution above determines each joint's motion precisely by whether it can reach its target, but this can make some joints move too fast and others too slowly, or make one joint stop almost immediately while another keeps moving for a long time, which conflicts with the definition of smooth motion. The approximate solution considers all the joints together and stipulates that the relative motion state of all joints is the same: during the motion, the ratio of the joints' movement speeds equals the ratio of their maximum movement speeds, and the direction of motion is toward the target point. The feasible state of the quadratic programming model then lies on the line $X_{t-1}X'_t$, at the position given by:
$$\overline{X_{t-1}X'_t}\,/\,\overline{X_{t-1}X_t} = \max_i(T_{i,t})/\Delta T \quad (14)$$
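The approximate solution can be sketched as a uniform time-scaling step consistent with equations (13) and (14); the absolute value in the per-joint time, and all names, are assumptions:

```python
import numpy as np

def approximate_step(x_prev, x_target, v_max, dt):
    """All joints move together along the line toward the target,
    scaled so that no joint exceeds its maximum angular rate (eqs. (13)-(14))."""
    t_need = np.max(np.abs(x_target - x_prev) / v_max)  # time the slowest joint needs
    scale = dt / max(dt, t_need)                        # <= 1: fraction of the segment covered
    return x_prev + scale * (x_target - x_prev)

x_prev = np.zeros(6)
x_target = np.array([0.1, 1.0, -1.0, 0.05, 0.0, 2.0])
v_max = np.array([2.71, 2.16, 3.59, 6.69, 6.82, 8.07])  # example rates from the patent's embodiment
x_next = approximate_step(x_prev, x_target, v_max, dt=0.1)
```

Unlike the per-joint clamp, every joint covers the same fraction of its remaining distance, so the new state stays on the line from x_prev to x_target and the slowest joint moves exactly at its maximum rate.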
Step S3 comprises the following steps:
After X_t is obtained by solving (5), the six joint-angle increments of the robot are:
$$M = X_t - X_{t-1} = (\theta_1, \theta_2, \ldots, \theta_6)^T \quad (15)$$
This gives the angle through which each robot joint should ideally move; after all the joints have moved, the robot end-effector arrives at the specified position.
Compared with the prior art, the present invention has the following advantages and effects:
The present invention proposes a method that allows the robot end-effector to track and recognize the operator's moving three-dimensional gesture. The vision-based interface allows the operator to move a gesture to drive the robot to the required pose. Moreover, the algorithm provides an approximate solution, so it can be applied to robots with different movement speeds without limiting the movement speed of the hand.
Brief description of the drawings
Fig. 1 is a schematic diagram of the robot, Leap Motion, and human-hand coordinate systems;
Fig. 2 shows the solution cases of the quadratic programming problem in two-dimensional space;
Fig. 3 is the trajectory plot of the embodiment results;
Fig. 4 is a flow chart of the three-dimensional gesture recognition method based on mobile tracking.
Detailed description of the invention
The present invention is described in further detail below in conjunction with an embodiment, but the embodiments of the present invention are not limited to this example.
The present invention comprises the following steps:
S1, establishing a tracking model;
S2, solving the tracking model;
S3, driving the robot.
Step S1 comprises the following steps:
For the task of tracking and recognizing the operator's gesture, the position and attitude of the operator's hand are captured by a Leap Motion fixed on the robot end-effector.
The Leap Motion has two infrared cameras.
1) Position and attitude model
The robot base, robot joint, Leap Motion, and human-hand coordinate systems are each represented by three mutually orthogonal axes. The xOy plane of the robot base coordinate system is horizontal, and the positive Z axis points straight up. In the coordinate system of robot joint i, the Z_{i-1} axis lies along the joint rotation axis according to the right-hand rule, and the X_{i-1} axis lies along the common perpendicular of Z_{i-1} and Z_i. In the Leap Motion coordinate system, the Z_L axis lies along the short side of the Leap Motion, the X_L axis along the long side, and the positive Y_L axis points upward, perpendicular to the Leap Motion plane. In the human-hand coordinate system, the Z_H axis points along the four fingers, consistent with the Z_L axis; the X_H axis points along the thumb, consistent with the X_L axis; and the Y_H axis points away from the back of the hand, consistent with the Y_L direction. The robot uses the Denavit-Hartenberg (D-H) model. Let A_i denote the homogeneous coordinate transformation matrix from coordinate system i-1 to coordinate system i; then:
$$A_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & l_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & l_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & r_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (1)$$
where θ_i is the angle rotated about Z_{i-1} during the coordinate transformation so that X_{i-1} and X_i become parallel; r_i is the distance translated along Z_{i-1} so that X_{i-1} and X_i become collinear; l_i is the distance translated along X_{i-1} so that the origins of X_{i-1} and X_i coincide; and α_i is the angle Z_{i-1} is rotated about X_i so that Z_{i-1} and Z_i coincide in origin and direction.
For the six-joint robot in this example, the D-H parameters are selected as follows:
$$\theta = [0\ 0\ 0\ 0\ 0\ 0]^T \quad (2)$$
$$R = [250\ 0\ 0\ 650\ 0\ -200]^T \quad (3)$$
$$L = [150\ 570\ 150\ 0\ 0\ 0]^T \quad (4)$$
$$\alpha = \left[-\tfrac{\pi}{2}\ \ -\pi\ \ \tfrac{\pi}{2}\ \ -\tfrac{\pi}{2}\ \ -\tfrac{\pi}{2}\ \ 0\right]^T \quad (5)$$
In particular, because the Leap Motion is fixed on the robot end-effector:
$$A_7 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & h \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (6)$$
where h is the height of the Leap Motion. The Leap Motion used in this example is 10 mm high, so:
$$A_7 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 10 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (7)$$
The position and attitude of the Leap Motion at time t are defined by the joint-angle vector:
$$X_t = [J_{1,t}\ J_{2,t}\ J_{3,t}\ J_{4,t}\ J_{5,t}\ J_{6,t}]^T \quad (8)$$
where J_i is the i-th joint angle of the robot.
Using (1), (2), (3), and (4):
$$T_7 = X_0 \quad (9)$$
In this example, the joint-angle values of the robot at the initial time are obtained by solving (5):
$$X_0 = \left[\tfrac{\pi}{2}\ \ -\tfrac{\pi}{2}\ \ 0\ \ 0\ \ \tfrac{\pi}{2}\ \ 0\right]^T \quad (10)$$
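For illustration (a sketch only; the convention that the joint variable enters as θ_i with zero offset is an assumption), the chain T_7 = A_1⋯A_6 A_7 can be evaluated numerically with the embodiment's D-H parameters:

```python
import numpy as np

def dh(theta, r, l, alpha):
    # Homogeneous D-H transform, equation (1)
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, l * ct],
                     [st,  ct * ca, -ct * sa, l * st],
                     [0.0,     sa,      ca,      r],
                     [0.0,    0.0,     0.0,    1.0]])

# D-H parameters from equations (2)-(5); lengths in mm, angles in rad
r_params = [250.0, 0.0, 0.0, 650.0, 0.0, -200.0]
l_params = [150.0, 570.0, 150.0, 0.0, 0.0, 0.0]
alphas   = [-np.pi / 2, -np.pi, np.pi / 2, -np.pi / 2, -np.pi / 2, 0.0]
X0       = [np.pi / 2, -np.pi / 2, 0.0, 0.0, np.pi / 2, 0.0]  # initial joint angles, eq. (10)
A7 = np.eye(4); A7[2, 3] = 10.0  # sensor offset h = 10 mm, eq. (7)

T7 = np.eye(4)
for theta, r, l, a in zip(X0, r_params, l_params, alphas):
    T7 = T7 @ dh(theta, r, l, a)
T7 = T7 @ A7
```

The resulting T7 is a valid homogeneous pose (orthonormal rotation block, bottom row [0 0 0 1]), which is a quick way to check the parameter table before driving hardware.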
The homogeneous transformation matrix from the Leap Motion coordinate system to the human-hand coordinate system is defined as:
$$A_8 = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & 0 \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & 0 \\ 0 & \sin\alpha_i & \cos\alpha_i & l \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (11)$$
where l is the distance between the hand and the Leap Motion. In this example the hand is 200 mm from the Leap Motion and the palm is parallel to the Leap Motion, so:
$$A_8 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 200 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (12)$$
Define Z_0 as the position and attitude of the hand at the initial time:
$$T_7 A_8 = Z_0 \quad (13)$$
Define X'_t as the ideal position and attitude of the Leap Motion at time t, i.e. such that the relative pose of the Leap Motion and the hand in the Cartesian state space remains constant:
$$T_7 A_8 = X'_t T \quad (14)$$
where T is the position and attitude matrix of the hand with respect to the Leap Motion, whose parameters are captured by the Leap Motion:
$$T = \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (15)$$
X'_t is obtained by solving (8) and (9).
2) Gesture tracking and recognition model
In this example, the maximum angular rate of each robot joint is defined as:
$$V_{max} = [2.71\ 2.16\ 3.59\ 6.69\ 6.82\ 8.07]^T \quad (16)$$
To keep the relative pose of the robot end-effector and the hand constant, the problem is converted into solving the following quadratic programming problem:
$$\min \|X_t - X'_t\|_2 \quad \text{s.t.}\quad \|X_t - X_{t-1}\|_1 \le \Delta T \cdot V_{max} \quad (11)$$
where ΔT is the sampling interval between times t-1 and t.
Step S2 comprises the following steps:
To solve the above quadratic programming problem, two schemes are proposed: an optimal solution and an approximate solution. They can be described geometrically and are illustrated here in the two-dimensional case, mainly considering the situation where the ideal state lies outside the feasible region; the conclusions generalize naturally to 6-dimensional and even n-dimensional space, as shown in Fig. 2.
Optimal solution:
According to the quadratic programming model, the optimal solution is:
$$X_{i,t} = \max(X_{i,t-1} + \Delta T \cdot V_{i,max},\ X'_{i,t}),\quad i = 1, 2, \ldots, 6 \quad (12)$$
This is the optimal solution of the quadratic programming model: the actual state is closest to the ideal state corresponding to the optimal solution. Semantically, in the motion from the robot's current joint-angle state to the target joint-angle state, each joint angle moves to the target value corresponding to the optimal solution if it can reach it; otherwise it moves at maximum speed to the value nearest that target. This solution separates the state components and considers the optimum of each component individually. However, the position and attitude of the robot end-effector are jointly determined by the six joint angles, so the component-wise optimum in joint-angle space is not necessarily the optimum of the robot's position and attitude, and the robot's motion may be unstable. The approximate solution addresses this motion-smoothness problem.
Approximate solution:
In the optimal solution above, because the speeds differ and the target values of the joint angles differ, the ratio between the joints' motion increments changes constantly. For the approximate-solution scheme, smooth motion of the portable Leap Motion platform is therefore defined as keeping the ratio of the joints' movement speeds equal, at every moment, to the ratio of their maximum movement speeds, so that the joints maintain their relative motion state at all times. The approximate solution is then:
$$X_t = X_{t-1} + \frac{\Delta T}{\max\left(\Delta T,\ \max_i\left((X'_{i,t} - X_{i,t-1})/V_{i,max}\right)\right)} \cdot (X'_t - X_{t-1}) \quad (13)$$
The approximate solution is an approximately optimal solution of the quadratic programming model; in essence it makes the robot move along the direction of the target point. Semantically: in the motion from the robot's current joint-angle state to the target joint-angle state, if each joint angle is considered individually, the time joint i needs to reach its target angle at maximum speed is $T_{i,t} = (X'_{i,t} - X_{i,t-1})/V_{i,max}$. If $T_{i,t} > \Delta T$, the joint cannot reach its target value within ΔT; otherwise it can. The optimal solution above determines each joint's motion precisely by whether it can reach its target, but this can make some joints move too fast and others too slowly, or make one joint stop almost immediately while another keeps moving for a long time, which conflicts with the definition of smooth motion. The approximate solution considers all the joints together and stipulates that the relative motion state of all joints is the same: during the motion, the ratio of the joints' movement speeds equals the ratio of their maximum movement speeds, and the direction of motion is toward the target point. The feasible state of the quadratic programming model then lies on the line $X_{t-1}X'_t$, at the position given by:
$$\overline{X_{t-1}X'_t}\,/\,\overline{X_{t-1}X_t} = \max_i(T_{i,t})/\Delta T \quad (14)$$
Step S3 comprises the following steps:
After X_t is obtained by solving (5), the six joint-angle increments of the robot are:
$$M = X_t - X_{t-1} = (\theta_1, \theta_2, \ldots, \theta_6)^T \quad (15)$$
This gives the angle through which each robot joint should ideally move; after all the joints have moved, the robot end-effector arrives at the specified position.
The data produced in this example are plotted as a trajectory diagram; see Fig. 3.
The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited by it. Any change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be an equivalent substitute and shall fall within the protection scope of the present invention.

Claims (4)

1. A three-dimensional gesture recognition method based on mobile tracking, characterized in that it comprises the following steps:
S1, establishing a tracking model;
S2, solving the tracking model;
S3, driving the robot.
2. The three-dimensional gesture recognition method based on mobile tracking according to claim 1, characterized in that step S1 specifically comprises:
for the task of tracking and recognizing the operator's gesture, capturing the position and attitude of the operator's hand by a Leap Motion fixed on the robot end-effector, the Leap Motion having two infrared cameras;
1) position and attitude model
The robot base, robot joint, Leap Motion, and human-hand coordinate systems are each represented by three mutually orthogonal axes. The xOy plane of the robot base coordinate system is horizontal, and the positive Z axis points straight up. In the coordinate system of robot joint i, the Z_{i-1} axis lies along the joint rotation axis according to the right-hand rule, and the X_{i-1} axis lies along the common perpendicular of Z_{i-1} and Z_i. In the Leap Motion coordinate system, the Z_L axis lies along the short side of the Leap Motion, the X_L axis along the long side, and the positive Y_L axis points upward, perpendicular to the Leap Motion plane. In the human-hand coordinate system, the Z_H axis points along the four fingers, consistent with the Z_L axis; the X_H axis points along the thumb, consistent with the X_L axis; and the Y_H axis points away from the back of the hand, consistent with the Y_L direction. The robot uses the Denavit-Hartenberg (D-H) model. Let A_i denote the homogeneous coordinate transformation matrix from coordinate system i-1 to coordinate system i; then:
$$A_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & l_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & l_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & r_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (1)$$
where θ_i is the angle rotated about Z_{i-1} during the coordinate transformation so that X_{i-1} and X_i become parallel; r_i is the distance translated along Z_{i-1} so that X_{i-1} and X_i become collinear; l_i is the distance translated along X_{i-1} so that the origins of X_{i-1} and X_i coincide; and α_i is the angle Z_{i-1} is rotated about X_i so that Z_{i-1} and Z_i coincide in origin and direction;
for a robot with six joints, the homogeneous transformation matrix from the base coordinate system to the Leap Motion coordinate system is defined as:
$$T_7 = A_1 A_2 \cdots A_6 A_7 = \begin{bmatrix} n_7^0 & s_7^0 & a_7^0 & p_7^0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (2)$$
where $n_7^0$ is the normal vector of the Leap Motion, $s_7^0$ the sliding vector, $a_7^0$ the approach vector, and $p_7^0$ the position vector;
because the Leap Motion is fixed on the robot end-effector:
$$A_7 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & h \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (3)$$
where h is the height of the Leap Motion;
the position and attitude of the Leap Motion at time t are defined as:
$$X_t = [J_{1,t}\ J_{2,t}\ J_{3,t}\ J_{4,t}\ J_{5,t}\ J_{6,t}]^T \quad (4)$$
where J_i is the i-th joint angle of the robot;
using formulas (1), (2), (3), and (4):
$$T_7 = X_0 \quad (5)$$
the joint-angle values of the robot at the initial time are obtained by solving (5);
the homogeneous transformation matrix from the Leap Motion coordinate system to the human-hand coordinate system is defined as:
$$A_8 = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & 0 \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & 0 \\ 0 & \sin\alpha_i & \cos\alpha_i & l \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (6)$$
where l is the distance between the hand and the Leap Motion;
define Z_0 as the position and attitude of the hand at the initial time:
$$T_7 A_8 = Z_0 \quad (7)$$
define X'_t as the ideal position and attitude of the Leap Motion at time t, i.e. such that the relative pose of the Leap Motion and the hand in the Cartesian state space remains constant:
$$T_7 A_8 = X'_t T \quad (8)$$
where T is the position and attitude matrix of the hand with respect to the Leap Motion, whose parameters are captured by the Leap Motion:
$$T = \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (9)$$
X'_t is obtained by solving (8) and (9);
2) gesture tracking and recognition model
Define the maximum angular rate of each robot joint:
$$V_{max} = [v_1\ v_2\ v_3\ v_4\ v_5\ v_6]^T \quad (10)$$
To keep the relative pose of the robot end-effector and the hand constant, the problem is converted into solving the following quadratic programming problem:
$$\min \|X_t - X'_t\|_2 \quad \text{s.t.}\quad \|X_t - X_{t-1}\|_1 \le \Delta T \cdot V_{max} \quad (11)$$
where ΔT is the sampling interval between times t-1 and t.
3. The three-dimensional gesture recognition method based on mobile tracking according to claim 1, characterized in that step S2 comprises the following steps:
to solve the above quadratic programming problem, two schemes are proposed: an optimal solution and an approximate solution, which can be described geometrically and illustrated in the two-dimensional case; the conclusions generalize to 6-dimensional and even n-dimensional space;
optimal solution:
according to the quadratic programming model, the optimal solution is:
$$X_{i,t} = \max(X_{i,t-1} + \Delta T \cdot V_{i,max},\ X'_{i,t}),\quad i = 1, 2, \ldots, 6 \quad (12)$$
This is the optimal solution of the quadratic programming model: the actual state is closest to the ideal state corresponding to the optimal solution. Semantically, in the motion from the robot's current joint-angle state to the target joint-angle state, each joint angle moves to the target value corresponding to the optimal solution if it can reach it; otherwise it moves at maximum speed to the value nearest that target. This solution separates the state components and considers the optimum of each component individually. However, the position and attitude of the robot end-effector are jointly determined by the six joint angles, so the component-wise optimum in joint-angle space is not necessarily the optimum of the robot's position and attitude, and the robot's motion may be unstable. The approximate solution addresses this motion-smoothness problem;
approximate solution:
in the optimal solution above, because the speeds differ and the target values of the joint angles differ, the ratio between the joints' motion increments changes constantly. For the approximate-solution scheme, smooth motion of the portable Leap Motion platform is therefore defined as keeping the ratio of the joints' movement speeds equal, at every moment, to the ratio of their maximum movement speeds, so that the joints maintain their relative motion state at all times. The approximate solution is then:
$$X_t = X_{t-1} + \frac{\Delta T}{\max\left(\Delta T,\ \max_i\left((X'_{i,t} - X_{i,t-1})/V_{i,max}\right)\right)} \cdot (X'_t - X_{t-1}) \quad (13)$$
The approximate solution is an approximately optimal solution of the quadratic programming model; in essence it makes the robot move along the direction of the target point. Semantically: in the motion from the robot's current joint-angle state to the target joint-angle state, if each joint angle is considered individually, the time joint i needs to reach its target angle at maximum speed is $T_{i,t} = (X'_{i,t} - X_{i,t-1})/V_{i,max}$. If $T_{i,t} > \Delta T$, the joint cannot reach its target value within ΔT; otherwise it can. The optimal solution above determines each joint's motion precisely by whether it can reach its target, but this can make some joints move too fast and others too slowly, or make one joint stop almost immediately while another keeps moving for a long time, which conflicts with the definition of smooth motion. The approximate solution considers all the joints together and stipulates that the relative motion state of all joints is the same: during the motion, the ratio of the joints' movement speeds equals the ratio of their maximum movement speeds, and the direction of motion is toward the target point. The feasible state of the quadratic programming model then lies on the line $X_{t-1}X'_t$, at the position given by:
$$\overline{X_{t-1}X'_t}\,/\,\overline{X_{t-1}X_t} = \max_i(T_{i,t})/\Delta T \quad (14).$$
Three-dimensional gesture recognition method based on mobile tracking the most according to claim 1, it is characterised in that described step S3 Specifically include:
After $X_t$ is obtained by solving (5), the angles through which the robot's six joints must move are:
$$M = X_t - X_{t-1} = (\theta_1, \theta_2, \ldots, \theta_6)^T \qquad (15)$$
This yields the desired movement angle of each robot joint; after all the joints complete these motions, the robot end-effector arrives at the specified position.
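As a minimal illustration of Eq. (15), the per-joint motion is simply the difference of consecutive joint-state vectors. The joint values below are hypothetical; the patent does not prescribe an implementation.

```python
import numpy as np

# Hypothetical previous and newly solved joint-state vectors (radians).
x_prev = np.array([0.00, 0.10, -0.20, 0.30, 0.00, 0.50])
x_t    = np.array([0.10, 0.05, -0.10, 0.35, 0.05, 0.45])

# Eq. (15): the angle each of the six joints must move through,
# M = (theta_1, theta_2, ..., theta_6)^T.
m = x_t - x_prev
```

Commanding each joint to move by its component of `m` brings the end-effector to the solved target state.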
CN201610459875.7A 2016-06-20 2016-06-20 Three-dimensional gesture recognition method based on mobile tracking Expired - Fee Related CN106020494B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610459875.7A CN106020494B (en) 2016-06-20 2016-06-20 Three-dimensional gesture recognition method based on mobile tracking

Publications (2)

Publication Number Publication Date
CN106020494A true CN106020494A (en) 2016-10-12
CN106020494B CN106020494B (en) 2019-10-18

Family

ID=57086431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610459875.7A Expired - Fee Related CN106020494B (en) 2016-06-20 2016-06-20 Three-dimensional gesture recognition method based on mobile tracking

Country Status (1)

Country Link
CN (1) CN106020494B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201813390U (en) * 2010-10-20 2011-04-27 韩旭 Network transfer behaviour based robot assembly
CN102350700A (en) * 2011-09-19 2012-02-15 华南理工大学 Method for controlling robot based on visual sense
US20120166200A1 (en) * 2010-12-23 2012-06-28 Electronics And Telecommunications Research Institute System and method for integrating gesture and sound for controlling device
CN202512439U (en) * 2012-02-28 2012-10-31 陶重犇 Human-robot cooperation system with webcam and wearable sensor


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107309872A (en) * 2017-05-08 2017-11-03 南京航空航天大学 A kind of flying robot and its control method with mechanical arm
CN107309872B (en) * 2017-05-08 2021-06-15 南京航空航天大学 Flying robot with mechanical arm and control method thereof
CN107351058A (en) * 2017-06-08 2017-11-17 华南理工大学 Robot teaching method based on augmented reality
CN107688390A (en) * 2017-08-28 2018-02-13 武汉大学 A kind of gesture recognition controller based on body feeling interaction equipment
CN108406725A (en) * 2018-02-09 2018-08-17 华南理工大学 Force feedback man-machine interactive system and method based on electromagnetic theory and mobile tracking
CN110877335A (en) * 2019-09-27 2020-03-13 华南理工大学 Self-adaptive unmarked mechanical arm track tracking method based on hybrid filter
CN113838177A (en) * 2021-09-22 2021-12-24 上海拾衷信息科技有限公司 Hand animation production method and system
CN113838177B (en) * 2021-09-22 2023-08-01 上海拾衷信息科技有限公司 Hand animation production method and system

Also Published As

Publication number Publication date
CN106020494B (en) 2019-10-18

Similar Documents

Publication Publication Date Title
CN106020494A (en) Three-dimensional gesture recognition method based on mobile tracking
Cherubini et al. A collaborative robot for the factory of the future: Bazar
Asfour et al. Toward humanoid manipulation in human-centred environments
CN105014677B (en) Vision Mechanical arm control method based on Camshift visual tracking and D-H modeling algorithm
Keshmiri et al. Image-based visual servoing using an optimized trajectory planning technique
CN106774309A (en) A kind of mobile robot is while visual servo and self adaptation depth discrimination method
CN107351058A (en) Robot teaching method based on augmented reality
Dang et al. Grasp adjustment on novel objects using tactile experience from similar local geometry
Lopez-Nicolas et al. Visual control for multirobot organized rendezvous
CN107943034A (en) Complete and Minimum Time Path planing method of the mobile robot along given path
Khatib et al. Visual coordination task for human-robot collaboration
Kragic et al. A framework for visual servoing
Lippiello et al. Visual coordinated landing of a UAV on a mobile robot manipulator
CN109048911B (en) Robot vision control method based on rectangular features
Gienger et al. Imitating object movement skills with robots—A task-level approach exploiting generalization and invariance
Babiarz et al. The concept of collision-free path planning of UAV objects
Wieland et al. Combining force and visual feedback for physical interaction tasks in humanoid robots
López-Nicolás et al. Parking objects by pushing using uncalibrated visual servoing
Haschke et al. Geometry-based grasping pipeline for bi-modal pick and place
Li et al. Vision-based formation control of a heterogeneous unmanned system
Alonso-Mora Collaborative motion planning for multi-agent systems
Yu et al. Mobile robot capable of crossing floors for library management
Xu et al. Vision‐Based Intelligent Perceiving and Planning System of a 7‐DoF Collaborative Robot
Lepora et al. Pose-based servo control with soft tactile sensing
Duc Hanh et al. Combining 3D matching and image moment based visual servoing for bin picking application

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20191018
