CN108908353A - Robot expression imitation method and device based on a smoothness-constrained inverse mechanical model - Google Patents
- Publication number
- CN108908353A CN108908353A CN201810593985.1A CN201810593985A CN108908353A CN 108908353 A CN108908353 A CN 108908353A CN 201810593985 A CN201810593985 A CN 201810593985A CN 108908353 A CN108908353 A CN 108908353A
- Authority
- CN
- China
- Prior art keywords
- robot
- motor
- moment
- mechanical model
- smoothness constraint
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
- B25J11/0015—Face robots, animated artificial faces for imitating human expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Abstract
The invention discloses a robot expression imitation method based on a smoothness-constrained inverse mechanical model. The method includes: A: extracting the robot facial feature vector; B: constructing a smoothness-constrained inverse mechanical model from facial feature sequences to motor control sequences; C: taking the performer's real-time facial features as the target, generating an optimal motor control sequence based on the smoothness-constrained inverse mechanical model and driving the robot's facial motors so that the robot presents an expression corresponding to the performer's face. The invention also discloses a robot expression imitation device based on the smoothness-constrained inverse mechanical model. The advantages of the invention are: with the embodiments of the invention, the spatio-temporal similarity of robot expression imitation and the smoothness of continuous motor motion are improved, and the time required for expression transfer is shortened.
Description
Technical field
The present invention relates to robot expression imitation methods, and more particularly to a robot expression imitation method and device based on a smoothness-constrained inverse mechanical model.
Background art
With developments in control science, sensor technology, artificial intelligence, and materials science, humanoid robots with human-like form and motion capabilities have become feasible. Although humanoid robots have reached a high level of "brain intelligence" (IQ) in imitating human behavior, natural and harmonious affective interaction is difficult to achieve with traditional interaction modes such as keyboard, mouse, and screen, and their interaction capability still falls far short of what people expect of intelligent robots. The "human-computer interaction barrier" and limited affective interaction capability are increasingly becoming bottlenecks for the practical application of robots. Therefore, how to improve the "emotional intelligence" (EQ) of robots has become a key open problem in robotics research. Aiming at the problem of "high IQ, low EQ" in current natural human-computer interaction, exploring interaction modes that match human psychological needs and are natural and rich in emotion is an urgent need for solving the problem of "emotionless" robots. Facial expression is the most important carrier of natural human-computer interaction and robot emotional expression, so enabling a humanoid robot to present the same expressions as a human is a technical problem to be solved.
At present, imitating human expressions is the most effective way to realize multi-motor cooperative control and lifelike expression presentation on a robot. Existing robot expression imitation methods mainly fall into two classes: expression-category imitation and expression-detail imitation. Expression-category imitation methods establish, based on the Facial Action Coding System, the internal relation between facial action units and head control motors, and then drive the motors to realize common expression categories such as happiness and surprise. Because the generated expressions are single and their patterns are fixed, expression-category imitation methods are only applicable to robot facial emotion expression with few head degrees of freedom. Different from expression-category imitation, expression-detail imitation methods transfer details and intensity based on performance-driven techniques. These methods all model the forward mechanical model and the motion smoothness model independently. Although this approach can take the mechanical constraints of motor motion into account, it requires inversely solving for optimal control values with an optimization algorithm during the real-time expression imitation stage, which limits the real-time speed of expression transfer. The prior art therefore suffers from poor real-time performance of expression transfer.
Summary of the invention
The technical problem to be solved by the present invention is to provide a robot expression imitation method and device based on a smoothness-constrained inverse mechanical model, so as to solve the technical problems in the prior art that the spatio-temporal similarity and smoothness of expression imitation are low and the real-time performance of expression transfer is poor.
The present invention solves the above technical problem through the following technical solutions:
An embodiment of the invention provides a robot expression imitation method based on a smoothness-constrained inverse mechanical model, the method including:
A: extracting the robot facial feature vector;
B: constructing a smoothness-constrained inverse mechanical model from facial feature sequences to motor control sequences;
C: taking the performer's real-time facial features as the target, generating an optimal motor control sequence based on the smoothness-constrained inverse mechanical model, and then driving the robot's facial motors with the optimal motor control sequence so that the robot presents an expression corresponding to the performer's face.
An embodiment of the invention also provides a robot expression imitation device based on a smoothness-constrained inverse mechanical model, the device including: an extraction module for extracting the robot facial feature vector; a construction module for constructing a smoothness-constrained inverse mechanical model from facial feature sequences to motor control sequences; and a generation module for, taking the performer's real-time facial features as the target, generating an optimal motor control sequence based on the smoothness-constrained inverse mechanical model and then driving the robot's facial motors with the optimal motor control sequence so that the robot presents an expression corresponding to the performer's face.
Compared with the prior art, the present invention has the following advantages:
With the embodiments of the invention, the performer's real-time facial features are taken as the target, and an optimal motor control sequence is generated based on the smoothness-constrained inverse mechanical model; driving the robot with this sequence transfers the performer's facial expression features to the robot. During expression transfer, the facial expression sequence can be mapped directly to the robot facial motor control sequence, which improves the spatio-temporal similarity of expression imitation and the smoothness of continuous motor motion. Compared with the prior art, the need to inversely solve for optimal control values with an optimization algorithm is eliminated, thereby shortening the time required for expression transfer.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a robot expression imitation method based on a smoothness-constrained inverse mechanical model according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the principle of a robot expression imitation method based on a smoothness-constrained inverse mechanical model according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the control motors and degrees of freedom of a robot according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the adjacency relations of robot facial feature points according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the principle of an inverse mechanical model with an LSTM encoder-decoder structure according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of the principle of d-order polynomial fitting according to an embodiment of the present invention;
Fig. 7 is a result chart of motor control deviation according to an embodiment of the present invention;
Fig. 8 is a result chart of the spatio-temporal similarity of expression transfer according to an embodiment of the present invention;
Fig. 9 is a result chart of robot facial motor motion smoothness according to an embodiment of the present invention;
Fig. 10 is a result chart of the influence of the weight parameter on spatio-temporal similarity and motion smoothness according to an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of a robot expression imitation device based on a smoothness-constrained inverse mechanical model according to an embodiment of the present invention.
Detailed description of the embodiments
The embodiments of the present invention are described in detail below. The embodiments are implemented on the premise of the technical solution of the present invention, and detailed implementation methods and specific operation processes are given, but the protection scope of the present invention is not limited to the following embodiments.
An embodiment of the present invention provides a robot expression imitation method and device based on a smoothness-constrained inverse mechanical model. The robot expression imitation method based on a smoothness-constrained inverse mechanical model is introduced first.
Fig. 1 is a schematic flowchart of the robot expression imitation method based on a smoothness-constrained inverse mechanical model provided by an embodiment of the present invention, and Fig. 2 is a schematic diagram of the principle of the method. As shown in Fig. 1 and Fig. 2, the method includes:
S101: Extract the robot facial feature vector.
Specifically, step S101 may include:
A1: Obtaining the facial expression data and the head pose data of the robot with a Kinect camera, where the facial expression data of the robot includes the parameterized feature point data of the face mesh based on the Candide-3 model and the facial action unit data, and the head pose data of the robot includes the rotation angles of the head about the three axes X, Y and Z.
Specifically, step A1 includes:
Tracking the positions of the feature points of the left and right eyeballs according to a pupil localization algorithm, and adding their adjacency relations with the surrounding feature points.
Obtaining the facial expression data of the robot using the Candide-3 model extended with the eyeball feature points,
G = (V, D), V = (v_1, v_2, ..., v_p)^T, v_i = (x_i, y_i, z_i)
where G is the parameterized representation of the Candide-3 model after adding the eyeball feature points; V is the feature point position vector; D is the adjacency matrix formed by the p feature points; v_i is the position vector of the i-th feature point; p is the number of feature points of the Candide-3 model after adding the eyeball feature points; (x_i, y_i, z_i) is the three-dimensional vector value of the i-th feature point; i is the serial number of a feature point; j is the serial number of a feature point adjacent to feature point i; and e_ij is an element of the adjacency matrix.
The head pose data (R_pitch, R_yaw, R_roll) is obtained using the Kinect API, where R_pitch is the rotation angle about the X axis, R_yaw is the rotation angle about the Y axis, and R_roll is the rotation angle about the Z axis.
In practical applications, the inventors developed a high-fidelity humanoid robot with 47 motors. To make the designed robot closer to a human, the robot imitates the muscular movements of the human head, shoulders, arms, wrists, waist and legs by means of pneumatic actuation, and uses elastic silicone to simulate the lines of human skin and blood vessels. Considering that facial expression is not only the most important carrier of emotional expression but also the most effective form for conveying subjective intent in human-computer interaction, the embodiment of the present invention takes only the head and facial motors related to expression as the research object. Fig. 3 is a schematic diagram of the control motors and degrees of freedom of a robot provided by an embodiment of the present invention; as shown in Fig. 3, the humanoid robot head has 11 control motors and corresponding degrees of freedom.
In the embodiment of the present invention, to mine the mapping relation between the motor control vector and the facial details it presents, a Microsoft Kinect 2.0 is used as the image capture device, and facial expression and head pose data are obtained in real time through the Kinect API, specifically including: the rotation angles of the head about the three axes X, Y and Z; the parameterized face mesh based on the Candide-3 model (containing 1347 feature points); and 17 facial action units. Meanwhile, considering the important role of gaze and eyeball rotation in human-computer interaction, and to improve the descriptive power for local eyeball detail, the feature points of the left and right eyeballs are tracked according to a pupil localization algorithm, and their adjacency relations with the neighboring feature points are added to realize the triangulation of the eye region. Let the Candide-3 model after adding the two eyeball feature points be:
G = (V, D), V = (v_1, v_2, ..., v_p)^T, v_i = (x_i, y_i, z_i)
where G is the parameterized representation of the Candide-3 model; V is the feature point position vector; D is the adjacency matrix formed by the p feature points; v_i is the position vector of the i-th feature point; p is the number of feature points of the Candide-3 model after adding the eyeball feature points, with value 1349; (x_i, y_i, z_i) is the three-dimensional vector value of the i-th feature point; i is the serial number of a feature point; and j is the serial number of a feature point adjacent to the feature point.
The Kinect API (Kinect Application Programming Interface) is the programming interface of the Kinect three-dimensional capture device, used to obtain and output the spatial variation data of the target captured by the Kinect; the spatial variation data can be, for example, color image two-dimensional coordinates, depth image space coordinates, and skeleton tracking space coordinates.
Fig. 4 is a schematic diagram of the adjacency relations of robot facial feature points provided by an embodiment of the present invention. As shown in Fig. 4, α_ij is the first angle adjacent to edge v_i v_j; β_ij is the second angle adjacent to edge v_i v_j; v_i is a feature point; v_j is a feature point adjacent to it. The value of each element of the adjacency matrix can then be determined as
e_ij = 1 if v_i and v_j are adjacent, and e_ij = 0 otherwise
where e_ij is an element of the adjacency matrix.
A2: Converting the feature point data based on the Candide-3 model from the Cartesian coordinate system into the Laplacian coordinate system using the Laplace transform.
Specifically, step A2 may include:
Realizing the conversion from the Cartesian coordinate system to the Laplacian coordinate system with the formula
ζ_i = L(v_i) = (1 / (2·Ω_i)) · Σ_{j∈N(i)} (cot α_ij + cot β_ij)·(v_i − v_j)
where ζ_i is the geometric feature of feature point v_i; L(v_i) is the Laplacian coordinate of feature point v_i; Ω_i is the sum of the areas of the triangles having feature point v_i as a vertex; α_ij is the first angle adjacent to edge v_i v_j; β_ij is the second angle adjacent to edge v_i v_j; v_i is a feature point; v_j is the j-th feature point adjacent to v_i; N(i) is the set of all feature points adjacent to v_i; L(·) is the Laplacian transform; ||·|| is the modulus function; and Σ is the summation function.
Illustratively, the Laplacian coordinate L(v_i) of feature point v_i of the Candide-3 face model is calculated using the convex-hull (cotangent) weight method.
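The cotangent-weight computation described above can be sketched in a few lines of code. This is an illustrative sketch only, not the patent's implementation: the function name, the argument layout, and the exact normalization by 2·Ω_i follow the standard cotangent-weight Laplacian and are assumptions consistent with the symbols defined here.

```python
import math

def cot(angle):
    return math.cos(angle) / math.sin(angle)

def laplacian_coordinate(v_i, neighbors, alphas, betas, omega_i):
    """Laplacian (geometric) coordinate of a mesh feature point.

    v_i       : (x, y, z) position of the feature point
    neighbors : list of (x, y, z) positions of adjacent feature points v_j
    alphas    : angles alpha_ij adjacent to edge v_i v_j (radians)
    betas     : angles beta_ij adjacent to edge v_i v_j (radians)
    omega_i   : sum of the areas of the triangles incident to v_i

    Implements the cotangent-weight form
        zeta_i = (1 / (2*Omega_i)) * sum_j (cot a_ij + cot b_ij) * (v_i - v_j)
    """
    zeta = [0.0, 0.0, 0.0]
    for v_j, a, b in zip(neighbors, alphas, betas):
        w = cot(a) + cot(b)
        for axis in range(3):
            zeta[axis] += w * (v_i[axis] - v_j[axis])
    return tuple(z / (2.0 * omega_i) for z in zeta)
```

For a flat, symmetric neighborhood the Laplacian coordinate is (near) zero; it grows with local bending, which is why it can serve as a per-point geometric feature.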
A3: Generating the robot facial feature vector according to the rotation angles of the head about the three axes X, Y and Z, the facial action unit data, and the facial geometric features of the robot.
Specifically, the Laplacian coordinates of the feature points can be concatenated, and the concatenated result taken as the extracted robot facial geometric feature, where ζ_i is the facial geometric feature of the i-th feature point, a three-dimensional vector.
Compared with the prior art, concatenating the Laplacian coordinates not only retains the topological information among the feature points, but also, by fusing the normal and tangential information of the feature points, reflects the bending degree and motion direction of each mesh vertex relative to its adjacent points. However, facial features based on the Laplacian transform only describe the geometric deformation of the low-level face shape, and do not measure the motion amplitude variation of the high-level facial muscles and the head pose. In the embodiment of the present invention, to accurately describe the facial muscle variation and the head pose, the facial feature vector X is constructed from the rotation angles of the three head axes (R_pitch, R_yaw, R_roll), the 17 facial action units AU_1, ..., AU_17, and the facial geometric features ζ_1, ..., ζ_p:
X = (x_1, ..., x_m) = (R_pitch, R_yaw, R_roll, AU_1, ..., AU_17, ζ_1, ..., ζ_p)
where X is the robot facial feature vector; x_i is the value of the i-th dimension of the facial feature vector; (R_pitch, R_yaw, R_roll) are the rotation angles of the robot head about the three axes X, Y and Z; AU_j is the value of the j-th facial action unit; ζ_k = L(v_k) is the Laplacian coordinate of feature point v_k; and m is the dimension of the extracted facial feature vector, m = 3 + 17 + 1349*3 = 4067. From the construction process of the facial feature vector, X not only contains the geometric deformation information of the facial muscles, but also fuses the motion amplitude of the facial muscles and the head pose variation. It can therefore provide more accurate target data for robot expression imitation.
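The concatenation that yields the m = 4067-dimensional feature vector can be sketched directly. The function name and the zero-valued dummy inputs are illustrative; only the layout (3 head angles, 17 action units, 1349 three-dimensional Laplacian coordinates) comes from the text.

```python
def build_facial_feature_vector(head_pose, action_units, laplacian_coords):
    """Concatenate head pose, AUs, and per-point Laplacian coordinates
    into the facial feature vector X described above.

    head_pose        : (R_pitch, R_yaw, R_roll)
    action_units     : the 17 facial action unit values AU_1..AU_17
    laplacian_coords : p three-dimensional Laplacian coordinates zeta_1..zeta_p
    """
    assert len(head_pose) == 3 and len(action_units) == 17
    x = list(head_pose) + list(action_units)
    for zeta in laplacian_coords:
        x.extend(zeta)  # each zeta contributes its (x, y, z) components
    return x

# With p = 1349 feature points the dimension is m = 3 + 17 + 1349*3 = 4067.
X = build_facial_feature_vector((0.0, 0.0, 0.0), [0.0] * 17, [(0.0, 0.0, 0.0)] * 1349)
```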
S102: Construct the smoothness-constrained inverse mechanical model from facial feature sequences to motor control sequences.
Specifically, step B includes:
B1: Constructing the inverse mechanical model from facial feature sequences to motor control sequences with the formula
Ŷ = Γ(X̂), where Ŷ = (Y_t, Y_{t+Δt}, ..., Y_{t+(d-1)Δt}) and X̂ = (X_{t-(k-1)Δt}, ..., X_{t-Δt}, X_t)
where Ŷ is the motor control sequence output by the constructed inverse mechanical model; Δt is the frame interval at which the Kinect camera captures expression frames; X̂ is the robot facial feature sequence; Γ(·) is the inverse mechanical model; t is the current moment; k is the number of expression frames before moment t; d is the number of control frames after moment t; Y_{t+(d-2)Δt} is the motor control data at moment t+(d-2)Δt; and X_{t-(k-2)Δt} is the facial feature vector of the robot at moment t-(k-2)Δt.
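The mapping above consumes the k feature frames up to moment t and emits the d control frames from t onward. A sketch of slicing synchronized per-frame streams into such (input, target) windows, e.g. for training the model, could look as follows; the function name and list-based frame representation are assumptions, the window semantics come from the formula above.

```python
def make_training_pairs(features, controls, k, d):
    """Slice synchronized per-frame streams into (input, target) pairs
    for the inverse mechanical model: the k feature frames up to and
    including moment t map to the d control frames from t onward.

    features : list of facial feature vectors X, one per frame
    controls : list of motor control vectors Y, one per frame
    k, d     : window lengths as defined in the text
    """
    pairs = []
    for t in range(k - 1, len(features) - d + 1):
        x_window = features[t - k + 1 : t + 1]  # X_{t-(k-1)dt} .. X_t
        y_window = controls[t : t + d]          # Y_t .. Y_{t+(d-1)dt}
        pairs.append((x_window, y_window))
    return pairs
```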
B2: Modeling the smoothness-constrained inverse mechanical model from facial feature sequences to motor control sequences with a multi-layer LSTM encoder-decoder structure, fitting the motion trend parameters of the motor control sequence with a d-order polynomial, and constructing the smoothness constraint based on the deviations of displacement, velocity and acceleration.
In practical applications, Fig. 5 is a schematic diagram of the principle of the inverse mechanical model with an LSTM encoder-decoder structure provided by an embodiment of the present invention. As shown in Fig. 5, the construction process of the smoothness-constrained inverse mechanical model can be:
1) Encoding the expression feature sequence of length k into a deep semantic vector C with an L-layer LSTM encoder. For the L-layer LSTM neural network of the encoder, the input and output of layer l at moment t+iΔt can be expressed as
s^l_{t+iΔt} = W^{l,E} · h^{l-1}_{t+iΔt} + b^{l,E},  h^l_{t+iΔt} = LSTM(s^l_{t+iΔt}, h^l_{t+(i-1)Δt})
where s^l_{t+iΔt} is the input of hidden layer l of the encoder at moment t+iΔt; h^{l-1}_{t+iΔt} is the output of hidden layer l-1 of the encoder at moment t+iΔt; h^l_{t+iΔt} is the output of hidden layer l of the encoder at moment t+iΔt; h^l_{t+(i-1)Δt} is the output of hidden layer l of the encoder at moment t+(i-1)Δt; W^{l,E} is the full connection weight of the encoder input to hidden layer l; b^{l,E} is the bias of hidden layer l of the encoder; l ∈ [1, L]; and L is the number of hidden layers.
2) On the basis of the obtained deep facial semantic vector C, an L-layer LSTM network is further used to decode it into the d-frame robot motor control sequence, as
Ŷ = Dec_L(Enc_L(X̂))
and this formula is used to solve the smoothness-constrained inverse mechanical model Γ(·), where Enc_L(·) is the L-layer encoding structure; Dec_L(·) is the L-layer decoding structure; L is the preset number of hidden layers; X̂ is the facial feature sequence of the k frames before robot moment t; Ŷ is the motor control sequence of the d frames after robot moment t; and Y_{t-Δt} is the motor control sequence of the robot at moment t-Δt.
Similar to step 1), in the embodiment of the present invention, the input and output of each LSTM node layer of the decoder can be expressed as
s^l_{t+jΔt} = W^{l,D} · h^{l-1}_{t+jΔt} + b^{l,D},  h^l_{t+jΔt} = LSTM(s^l_{t+jΔt}, h^l_{t+(j-1)Δt}),  Y_{t+jΔt} = W^{L+1,D} · h^L_{t+jΔt} + b^{L+1,D}
where s^l_{t+jΔt} is the input of hidden layer l of the decoder at moment t+jΔt; Y_{t+jΔt} is the output of the decoder at moment t+jΔt; h^l_{t+jΔt} is the output of hidden layer l of the decoder at moment t+jΔt; h^l_{t+(j-1)Δt} is the output of hidden layer l of the decoder at moment t+(j-1)Δt; W^{l,D} is the full connection weight of the decoder input to hidden layer l; b^{l,D} is the bias of the decoder input layer; l ∈ [1, L]; W^{L+1,D} is the full connection weight of the decoder output layer; b^{L+1,D} is the bias of the decoder output layer; and L is the number of hidden layers of the decoder.
The smoothness-constrained inverse mechanical model constructed in the embodiment of the present invention can translate the facial feature sequence of the k frames before robot moment t into the d-frame control sequence thereafter, Ŷ = (Y_t, Y_{t+Δt}, ..., Y_{t+(d-1)Δt}), where Y_{t+(d-1)Δt} is the output of the decoding layer at moment t+(d-1)Δt.
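The encode-then-unroll structure described above can be sketched with a minimal single-layer LSTM in NumPy. This is a shape-level illustration with random, untrained weights: a single layer instead of L, no biases, and a decoder fed with its own previous output — all simplifying assumptions, not the patent's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W):
    """One LSTM cell step; W maps the concatenated [x, h] to the
    four stacked gates (input, forget, output, candidate)."""
    z = W @ np.concatenate([x, h])
    H = h.size
    i, f, o = (1.0 / (1.0 + np.exp(-z[s * H:(s + 1) * H])) for s in range(3))
    g = np.tanh(z[3 * H:4 * H])
    c = f * c + i * g
    return np.tanh(c) * o, c

def encode_decode(X_seq, d, W_enc, W_dec, W_out, n_motors):
    """Encode k feature frames into a hidden state, then unroll d
    decoder steps to emit d motor control vectors (cf. Enc/Dec above)."""
    H = W_out.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in X_seq:                 # encoder: consume the k feature frames
        h, c = lstm_step(x, h, c, W_enc)
    outputs, y = [], np.zeros(n_motors)
    for _ in range(d):              # decoder: emit the d control frames
        h, c = lstm_step(y, h, c, W_dec)
        y = W_out @ h               # output projection, cf. W^{L+1,D}
        outputs.append(y)
    return np.stack(outputs)

# Toy shapes: feature dim 6, hidden size 8, n = 11 motors, k = 4, d = 3.
H, F, N = 8, 6, 11
W_enc = rng.normal(0, 0.1, (4 * H, F + H))
W_dec = rng.normal(0, 0.1, (4 * H, N + H))
W_out = rng.normal(0, 0.1, (N, H))
Y_hat = encode_decode([rng.normal(size=F) for _ in range(4)], 3, W_enc, W_dec, W_out, N)
```

The point of the sketch is the data flow: a length-k input sequence is compressed into a fixed state and then expanded into a length-d output sequence of n-dimensional motor control vectors.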
From the above formulas, the smoothness-constrained inverse mechanical model provided by the embodiment of the present invention completes the end-to-end translation from the facial feature sequence to the motor control sequence through the LSTM encoder-decoder network. This timing-based multi-step prediction strategy not only avoids inversely solving for the optimal motor control sequence, but also facilitates the real-time processing of motor motion smoothing. The motor control sequence obtained in this step for the j-th motor (1 ≤ j ≤ n) over the period t-(d-1)Δt to t+(d-1)Δt is:
(y_{(t-(d-1)Δt)j}, ..., y_{(t-2Δt)j}, y_{(t-Δt)j}, y_{(t)j}, y_{(t+Δt)j}, y_{(t+2Δt)j}, ..., y_{(t+(d-1)Δt)j})
where y_{(t-(d-1)Δt)j} is the control displacement of the j-th motor at moment t-(d-1)Δt.
3) Fitting the motor control sequence of the j-th motor (1 ≤ j ≤ n) over the period t-(d-1)Δt to t+(d-1)Δt with a d-order polynomial: (y_{(t-(d-1)Δt)j}, ..., y_{(t-Δt)j}, y_{(t)j}, y_{(t+Δt)j}, ..., y_{(t+(d-1)Δt)j}), where y_{(t-(d-1)Δt)j} is the control displacement of the j-th motor at moment t-(d-1)Δt; y_{(t)j} is the control displacement of the j-th motor at moment t; and y_{(t+(d-1)Δt)j} is the control displacement of the j-th motor at moment t+(d-1)Δt.
Specifically, Fig. 6 is a schematic diagram of the principle of d-order polynomial fitting provided by an embodiment of the present invention. As shown in Fig. 6, the polynomial functions fitted over the first d moments and the last d moments of the j-th motor can be constructed as
H_j(t+kΔt) = Σ_{i=0..d} a^H_i · (kΔt)^i,  k ∈ [-(d-1), 0]
F_j(t+qΔt) = Σ_{i=0..d} a^F_i · (qΔt)^i,  q ∈ [0, d-1]
where H_j(t+kΔt) is the polynomial function fitted over the first d moments of the j-th motor around moment t; a^H_i is the i-th polynomial coefficient of the fitting function of the first d moments of the j-th motor; F_j(t+qΔt) is the polynomial function fitted over the last d moments of the j-th motor; and a^F_i is the i-th polynomial coefficient of the fitting function of the last d moments of the j-th motor.
4) Calculating the smoothing coefficients of the j-th motor control sequence with the formula α_j = P^{-1}·U_j, where α_j = (a^H_0, ..., a^H_d, a^F_0, ..., a^F_d)^T collects the d+1 polynomial coefficients of the fitting function before moment t and the d+1 polynomial coefficients of the fitting function after moment t for the j-th motor; P is the coefficient matrix; and U_j is the vector formed by the displacements of the j-th motor control sequence over the period t-(d-1)Δt to t+(d-1)Δt together with zero elements,
U_j = (y_{(t-(d-1)Δt)j}, ..., y_{(t-Δt)j}, y_{(t)j}, y_{(t+Δt)j}, ..., y_{(t+(d-1)Δt)j}, 0, 0, 0)^T
where y_{(t-(d-1)Δt)j} is the control displacement of the j-th motor at moment t-(d-1)Δt; y_{(t)j} is the control displacement of the j-th motor at moment t; and y_{(t+(d-1)Δt)j} is the control displacement of the j-th motor at moment t+(d-1)Δt.
In practical applications, the first and second derivatives of H_j(t) and F_j(t) are calculated as
H'_j(t+kΔt) = Σ_{i=1..d} i·a^H_i·(kΔt)^{i-1},  H''_j(t+kΔt) = Σ_{i=2..d} i·(i-1)·a^H_i·(kΔt)^{i-2}
F'_j(t+qΔt) = Σ_{i=1..d} i·a^F_i·(qΔt)^{i-1},  F''_j(t+qΔt) = Σ_{i=2..d} i·(i-1)·a^F_i·(qΔt)^{i-2}
where H'_j(t+kΔt) is the first derivative of H_j(t), i.e. the velocity of the true control sequence of the j-th motor of the robot at moment t+kΔt; H''_j(t+kΔt) is the second derivative of H_j(t), i.e. the acceleration of the true control sequence of the j-th motor at moment t+kΔt; F'_j(t+qΔt) is the first derivative of F_j(t), i.e. the velocity of the true control sequence of the j-th motor at moment t+qΔt; and F''_j(t+qΔt) is the second derivative of F_j(t), i.e. the acceleration of the true control sequence of the j-th motor at moment t+qΔt.
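Obtaining velocity and acceleration from a fitted polynomial is mechanical, and can be sketched with NumPy's polynomial helpers. This is an illustrative stand-in (plain least-squares fitting via `np.polyfit`, not the constrained system the text solves next); the sample data are dummies.

```python
import numpy as np

def fit_motor_polynomial(times, displacements, order):
    """Fit a polynomial to one motor's control displacements and return
    callables for displacement, velocity and acceleration (cf. H_j,
    H'_j, H''_j above)."""
    coeffs = np.polyfit(times, displacements, order)
    H = np.poly1d(coeffs)
    return H, H.deriv(1), H.deriv(2)

# Example: displacements sampled from y = t^2 at moments -2..2 (dt = 1).
ts = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
H, H1, H2 = fit_motor_polynomial(ts, ts ** 2, order=2)
```

For the exact quadratic samples above, the fitted velocity at t = 1 is 2t = 2 and the acceleration is the constant 2, matching the analytic derivatives.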
Then, the equation system is constructed:
H_j(t+kΔt) = y_{(t+kΔt)j},  k ∈ [-(d-1), 0]
F_j(t+qΔt) = y_{(t+qΔt)j},  q ∈ [1, d-1]
H_j(t) − F_j(t) = 0,  H'_j(t) − F'_j(t) = 0,  H''_j(t) − F''_j(t) = 0
where y_{(t+kΔt)j} is the motor control displacement of the j-th motor over the first d moments and y_{(t+qΔt)j} is the motor control displacement of the j-th motor over the last d moments.
Since the two polynomial functions should have the same displacement, velocity and acceleration at the connecting moment t, the above equation system has 2d+2 equations and contains 2d+2 unknowns to be solved.
To simplify the expression, the above equation system can be written as P·A = U, where O_H and O_F are zero matrices; A = (α_1, ..., α_j, ..., α_n) is the matrix of smoothing coefficients of the n motor control sequences at moment t; U = (U_1, ..., U_j, ..., U_n) is the matrix formed by the n motor control sequences at moment t, with U_j = (y_{(t-(d-1)Δt)j}, ..., y_{(t-Δt)j}, y_{(t)j}, y_{(t+Δt)j}, ..., y_{(t+(d-1)Δt)j}, 0, 0, 0)^T formed by the displacements of the j-th motor over the period t-(d-1)Δt to t+(d-1)Δt together with zero elements. When the coefficient matrix P is invertible, the smoothing coefficient matrix A to be solved can be expressed as A = P^{-1}·U.
The smoothing coefficient vector of the j-th motor control sequence at moment t can then be expressed as α_j = P^{-1}·U_j, 1 ≤ j ≤ n, where α_j is the smoothing coefficient vector of the j-th motor control sequence at moment t, and A is the smoothing coefficient matrix formed by the n motors at each moment.
5) Solving the smoothing coefficient matrix A = (α_1, ..., α_j, ..., α_n), 1 ≤ j ≤ n, of the n motor control sequences at moment t, where A is the smoothing coefficient matrix formed by the n motors at each moment and α_j is the smoothing coefficient vector of the j-th motor control sequence at moment t.
Specifically, the smoothing coefficient vectors α_j calculated in step 4) are assembled into the smoothing coefficient matrix A.
6) Substituting the smoothing coefficient matrix A into the fitted polynomials to calculate the estimated control displacements Ĥ_j(t+kΔt) of the d frames before moment t of the j-th motor and the estimated control displacements F̂_j(t+qΔt) of the d frames after moment t of the j-th motor.
7) Substituting the control displacements of the d frames after moment t of the n motors calculated in step 6) into the objective function to calculate the optimal parameters of the smoothness-constrained inverse mechanical model:
J(W_E, W_D, b_E, b_D) = min Σ_{q=0..d-1} [ ||F(t+qΔt) − F̂(t+qΔt)||² + α·( ||F'(t+qΔt) − F̂'(t+qΔt)||² + ||F''(t+qΔt) − F̂''(t+qΔt)||² ) ]
where J(W_E, W_D, b_E, b_D) gives the optimal parameters of the smoothness-constrained inverse mechanical model; W_E is the matrix of the first model parameters of the smoothness-constrained inverse mechanical model; W_D is the matrix of the second model parameters; b_E is the matrix of the third model parameters; b_D is the matrix of the fourth model parameters; J(·) is the objective function; min is the minimization function; q is the index of expression frames after moment t, q ∈ [0, d-1]; F(t+qΔt) is the displacement vector of the true control sequence of the n motors of the robot at moment t+qΔt; F'(t+qΔt) is the matrix of velocity vectors of the true control sequence of the n motors at moment t+qΔt; F''(t+qΔt) is the matrix of acceleration vectors of the true control sequence of the n motors at moment t+qΔt; t is the current moment; F̂(t+qΔt) is the estimate of the displacement vector of the n motors at moment t+qΔt; F̂'(t+qΔt) is the matrix of estimated velocity vectors of the n motors at moment t+qΔt; F̂''(t+qΔt) is the matrix of estimated acceleration vectors of the n motors at moment t+qΔt; α is the weight of the smoothness constraint on velocity and acceleration, with α ≥ 0; Σ is the summation function; y_{(t+qΔt)j} is the control displacement of the j-th motor at moment t+qΔt; and ŷ_{(t+qΔt)j} is the estimated control displacement of the j-th motor at moment t+qΔt.
In practical applications, the optimal model parameters W_E*, W_D*, b_E*, b_D* can be solved from the above formula using gradient descent. The inverse mechanical model constructed with the optimal parameters can map facial expressions to the optimal control sequence of the robot facial motors.
With the above embodiment of the present invention, velocity and acceleration are incorporated into J(W_E, W_D, b_E, b_D) as smoothness constraint weights, producing continuous and smooth motor control sequences.
With the above embodiment of the present invention, the cooperative information of the facial muscles in space and time can be reflected, thereby improving the spatio-temporal similarity of expression transfer and the smoothness of motor motion.
S103: Taking the performer's real-time facial features as the target, generate the optimal motor control sequence with the smoothness-constrained inverse mechanical model, then drive the robot's facial motors with the optimal motor control sequence so that the robot displays the expression corresponding to the performer's face.
Taking the performer's facial-expression feature sequence of the k frames before time t as input, the above model yields the current optimal drive vector of the robot's facial motors, wherein the output is the optimal motor control vector of the smoothness-constrained inverse mechanical model; the encoding structure and decoding structure are L-layer LSTMs; L is the preset number of hidden layers; the input is the performer's feature sequence; W_E* is the optimal value of the first model parameter W_E of the smoothness-constrained inverse mechanical model; W_D* is the optimal value of the second model parameter W_D; b_E* is the optimal value of the third model parameter b_E; and b_D* is the optimal value of the fourth model parameter b_D.
In practical applications, the facial drive vector Y formed by the 11 motors of the robot face is:
Y = (y1, …, yj, …, yn) (n = 11), where yj ∈ [0, 1] is the normalized control value of the j-th motor.
Analogous to human muscles, driving the 11 motors not only lets the robot present various expressions and fine emotional details such as blinking, smiling, and frowning, but also lets the robot perform posture movements that convey subjective intention, such as shaking or nodding the head, while expressing emotion.
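As a small illustration of the normalized control values y_j, the helper below maps raw per-motor positions onto [0, 1]; the function name and the idea of mechanical limit ranges are illustrative assumptions, not from the patent.

```python
def normalize_controls(raw, lo, hi):
    """Map raw per-motor positions (e.g. servo ticks) onto [0, 1]
    given each motor's assumed mechanical limits lo[j]..hi[j]."""
    return [(r - l) / (h - l) for r, l, h in zip(raw, lo, hi)]

# Illustrative 3-motor example with 8-bit position ranges.
Y = normalize_controls([128, 0, 255], [0, 0, 0], [256, 255, 255])
```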
The inventors carried out the following experiments to verify the beneficial effects of the embodiments of the present invention:
First, a robot animator designed 60 motor control sequences containing different facial expressions and head poses, each sequence lasting 90 seconds and containing neutral-peak-neutral changes of expression intensity as well as head-pose changes; facial features were then captured with a Kinect 2.0 camera at 30 frames per second. Next, for each time t with 400 ≤ t ≤ 2400, the k-frame history control sequence before time t, the corresponding facial feature sequence, and the subsequent d-frame motor control sequence together constitute the sample set.
From the sample set, 100000 groups of samples were selected at random for training the model parameters; the remaining Q = 2000 groups of samples were used for testing the model.
In the experiments, the model was built and trained with the LSTM modules in TensorFlow; the relevant parameters are shown in Table 1, which lists the parameters of the model built and trained with the TensorFlow LSTM modules.
Table 1
In a first aspect, to verify the validity of the proposed model after training, the above formulas were used to compute the control deviation of the r-th sample group, r ∈ [1, 2000], at each time t+qΔt, q ∈ [0, d-1], and the control deviation of the j-th motor, where the compared quantities are the true value and the estimated value of the j-th motor of the r-th sample group at time t+qΔt.
Table 2 summarizes the average motor control deviation at each of the d (d = 5) moments over the 2000 sample groups and the average control deviation of each motor; Fig. 7 is the chart of motor control deviation results provided by the embodiment of the present invention, obtained by plotting the data of Table 2 as shown in Fig. 7.
Table 2
As shown in Table 2, the results demonstrate the multi-step prediction ability and generalization ability of the model of the embodiment of the present invention: the overall control deviation is below 4.5%, and the prediction deviation at time t is below 3.5%. As shown in Fig. 7, because the robot face is driven pneumatically, the robot head exhibits intermittent jitter, so the control deviation of the head up/down motor is larger; even so, its control deviation at time t does not exceed 5%, and at time t+4Δt does not exceed 8%. Motors such as eyeball left/right, eyeball up/down, and head tilt each drive a single facial feature, so their control deviations are smaller.
In addition, Fig. 7 also shows that as multi-step prediction advances, the control deviation at each moment tends to rise, yet the maximum motor control deviation at time t+4Δt still does not exceed 8%. This indicates that the proposed smoothness-constrained inverse mechanical model captures well the intrinsic relation between the robot's hardware control system and the facial features it presents, and fairly accurately realizes the translation of robot facial feature sequences into motor control sequences and their inverse solution.
In a second aspect, to verify the spatio-temporal similarity of expression imitation, 50 facial action sequences of different performers were first recorded with Kinect Studio V2.0 (each sequence lasting 60 seconds and containing neutral-peak-neutral expression intensity changes and head-pose changes). Facial features were then extracted from the acquired sequences, and the trained smoothness-constrained inverse mechanical model mapped the performer's expression feature sequences to robot control sequences. The model output at each time t, 5 ≤ t ≤ 1800, was sent to the control system over serial communication, driving the motors to present the imitated dynamic expressions. Synchronously, the expression imitated by the robot was captured in real time by the Kinect camera and its facial features extracted. Finally, using the formulas, the spatial similarity and the temporal similarity between the first 20 features of the robot's real-time imitation (the head pose about 3 axes plus 17 facial action units) and the performer's features were computed, where the compared quantities are: the motion amplitude of the i-th facial feature at time t of the s-th sequence for the robot; the motion amplitude of the i-th facial feature at time t of the s-th sequence for the performer; the speed of the i-th facial feature at time t of the s-th sequence for the robot; and the speed of the i-th facial feature at time t of the s-th sequence for the performer. S is the total number of frames of the robot's real-time expression imitation, with value 50*1976; Sim(x, β) is the fitting function, with Sim(x, β) = exp(-x²/β) (β > 0); β is a control parameter.
In practical applications, the fitting function converts an amplitude or speed deviation into a similarity between 0 and 1; the smaller β is, the stricter the similarity requirement, and conversely the looser.
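The fitting function given in the text can be written directly as follows; only the function name is an assumption.

```python
import math

def sim(x, beta):
    """Fitting function Sim(x, beta) = exp(-x**2 / beta), beta > 0,
    mapping an amplitude or speed deviation x to a similarity in (0, 1]."""
    assert beta > 0
    return math.exp(-x * x / beta)
```

A zero deviation yields similarity 1; a smaller β makes the same deviation score lower, matching the "stricter requirement" remark above.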
Through 10-fold cross-validation, β_I = 0.3 and β_T = 0.5 were chosen. With α = 0.45, the spatio-temporal similarity of the first 20 facial features imitated by the robot in real time is shown in Fig. 8, which is the chart of spatio-temporal similarity results of expression transfer provided by the embodiment of the present invention.
As shown in Fig. 8, the spatio-temporal similarity of every facial feature exceeds 80%; in particular, the features that embody expression details, such as cheek puffing, mouth-corner contraction, eye closing, jaw-closing amplitude, and horizontal eyeball amplitude, maintain higher similarity. This not only helps maintain the fidelity of expression imitation, but also helps improve the acceptance of the robot's affective interaction.
In a third aspect, to evaluate the smoothness of motor motion, the smoothness of motor motion can be computed with the formula, where the computed quantity is the smoothness of motor motion; T_S is the jump threshold, T_S = 10/256; and G(y_j(t)) is the coordinate of the motor at time t.
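The patent's smoothness formula itself is not reproduced in this text, so the sketch below is only one plausible reading of a jump-threshold metric: the fraction of consecutive control steps whose displacement jump stays below T_S. Both the scoring rule and the function name are assumptions.

```python
def motion_smoothness(y, ts=10 / 256):
    """Illustrative jump-threshold smoothness score for one motor's
    normalized displacement trajectory y: the fraction of consecutive
    steps whose jump |y[t+1] - y[t]| stays below the threshold ts.
    NOT the patent's exact formula, which is not shown in the text."""
    jumps = [abs(b - a) for a, b in zip(y, y[1:])]
    return sum(j < ts for j in jumps) / len(jumps)
```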
Fig. 9 is the chart of robot facial motor motion smoothness results provided by the embodiment of the present invention; as shown in Fig. 9, it compares the motor motion smoothness without the smoothness constraint against the motor motion smoothness with the smoothness constraint.
As shown in Fig. 9, with the embodiment of the present invention the motor motion smoothness stays at 0.85 or above, clearly better than the motor control model without the smoothness constraint.
In addition, with the embodiment of the present invention, the smoothness improvement is especially good for motors such as eyeball left/right, mouth open/close, eyebrow up/down, and cheek, which indicates that the method herein captures and transfers well dynamic expression details such as mouth-corner lifting and mouth opening.
In a fourth aspect, to evaluate the effect of the smoothness-constraint model added in the embodiment of the present invention, the inventors tested timing indicators such as spatial similarity, temporal similarity, and motion smoothness with α set to 0, 0.2, 0.4, 0.45, 0.5, 0.55, 0.6, 0.7, 0.8, and 1.0, respectively.
Fig. 10 shows the influence of the weight parameter on spatio-temporal similarity and motion smoothness provided by the embodiment of the present invention. As shown in Fig. 10, when α = 0, J(W_E, W_D, b_E, b_D) takes only the inversely solved motor control deviation as the optimization objective, so the spatio-temporal similarity of expression reproduction reaches its maximum; this indicates that the smoothness-constrained inverse mechanical model taking the facial feature sequence as input reflects the mechanical relation between the control motors and the facial muscles they drive, and fairly accurately realizes the inverse solution of the motor control vector.
However, as α increases, J(W_E, W_D, b_E, b_D) incorporates the velocity and acceleration motor-control constraints, and the motor smoothness rises gradually, which indicates that the introduced smoothness constraint effectively suppresses jumps of the motor control vector. The inventors found that, weighing both the spatio-temporal similarity of expression imitation and the smoothness of motor motion, the best effect is obtained at α = 0.45.
With the embodiment of Fig. 1 of the present invention, taking the performer's real-time facial features as the target, the smoothness-constrained inverse mechanical model generates the optimal motor control sequence, and the robot uses this optimal motor control sequence to realize the transfer of the performer's facial expression features. When performing robot expression transfer, the facial expression sequence can be mapped directly to the robot's facial motor control sequence, improving the spatio-temporal similarity of the robot's expression imitation and the smoothness of continuous motor motion. Relative to the prior art, this eliminates the step of inversely solving for the optimal control value through an optimization algorithm, thereby shortening the time of expression transfer.
Corresponding to the embodiment of Fig. 1 of the present invention, an embodiment of the present invention also provides a robot expression imitation device based on a smoothness-constrained inverse mechanical model.
Fig. 11 is a structural schematic diagram of a robot expression imitation device based on a smoothness-constrained inverse mechanical model provided by an embodiment of the present invention. As shown in Fig. 11, the device includes: an extraction module 1101 for extracting robot facial feature vectors; a construction module 1102 for constructing the smoothness-constrained inverse mechanical model from facial feature sequences to motor control sequences; and a generation module 1103 for taking the performer's real-time facial features as the target, generating the optimal motor control sequence based on the smoothness-constrained inverse mechanical model, and driving the robot's facial motors so that the robot displays the expression corresponding to the performer's face.
In a specific implementation of the embodiment of the present invention, the extraction module 1101 is further configured to:
A1: obtain the robot's facial expression data and head pose data with a Kinect camera, wherein the robot's facial expression data includes feature point data of the parameterized face mesh based on the Candide-3 model and facial action unit data, and the robot's head pose data includes data of the rotation angles about the three head axes X, Y, Z;
A2: convert the feature point data based on the Candide-3 model from the Cartesian coordinate system into the Laplacian coordinate system using the Laplacian transformation;
A3: generate the robot facial feature vector from the rotation angles about the three head axes X, Y, Z, the facial action unit data, and the robot's facial geometric features.
In a specific implementation of the embodiment of the present invention, the extraction module 1101 is further configured to use the Candide-3 model augmented with eyeball feature points, wherein G is the parameterized representation of the Candide-3 model after adding the eyeball feature points; V is the feature point position vector; D is the adjacency matrix formed by the p feature points; v_i is the position vector of the i-th feature point; p is the number of feature points of the Candide-3 model after adding the eyeball feature points; (x_i, y_i, z_i) is the three-dimensional vector value of the i-th feature point; i is the index of a feature point; j is the index of a feature point adjacent to feature point i; and e_ij is an element of the adjacency matrix. The head pose data (R_pitch, R_yaw, R_roll) is obtained with the Kinect API, wherein R_pitch is the rotation angle about the X axis, R_yaw is the rotation angle about the Y axis, and R_roll is the rotation angle about the Z axis.
In a specific implementation of the embodiment of the present invention, the extraction module 1101 is further configured to realize, using the formula, the conversion from the Cartesian coordinate system to the Laplacian coordinate system, wherein ζ_i is the geometric feature of feature point v_i; the transformed quantity is the Laplacian coordinate of feature point v_i; Ω_i is the sum of the areas of the triangles adjacent to vertex v_i; α_ij is the first angle adjacent to edge v_i v_j; β_ij is the second angle adjacent to edge v_i v_j; v_i is a feature point; v_j is the j-th feature point adjacent to v_i; N(i) is the set of all feature points adjacent to v_i; L(·) is the Laplacian transformation; ||·|| is the modulus function; and Σ is the summation operator.
In a specific implementation of the embodiment of the present invention, the extraction module 1101 is further configured to generate, using the formula, the robot facial feature vector, wherein X is the robot facial feature vector; x_i is the value of the i-th dimension of the facial feature vector; m is the dimension of the extracted facial feature vector; (R_pitch, R_yaw, R_roll) are the rotation angles about the robot's three axes X, Y, Z; AU_j is the value of the j-th facial action unit; and the remaining components are the Laplacian coordinates of feature points v_k.
In a specific implementation of the embodiment of the present invention, the construction module 1102 is further configured to:
B1: construct, using the formula, the inverse mechanical model from facial feature sequences to motor control sequences, wherein the output of the inverse mechanical model is the motor control sequence; Δt is determined by the frame rate at which the Kinect camera captures the robot's facial expression; the input is the facial feature sequence of the k moments before time t of the robot; Γ(·) is the inverse mechanical model; t is the current time; k is the number of expression frames before time t of the robot; d is the number of expression frames after time t of the robot; Y_{t+(d-2)Δt} is the motor control data at time t+(d-2)Δt; and X_{t-(k-2)Δt} is the robot's facial feature vector at time t-(k-2)Δt;
B2: model the smoothness-constrained inverse mechanical model from facial feature sequences to motor control sequences with a multilayer LSTM encoder-decoder structure, fit the motion-trend parameters of the motor control sequence with a d-order polynomial, and construct the smoothness-constrained inverse mechanical model from the deviations of displacement, velocity, and acceleration.
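The LSTM encoder-decoder mapping of B1/B2 can be sketched with a minimal single-layer forward pass in pure numpy: encode the k facial-feature frames, then unroll d decoder steps, feeding each emitted motor-control vector back into the decoder. All names, sizes, and the single-layer simplification are illustrative assumptions; the patent uses L-layer LSTMs trained in TensorFlow.

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gate pre-activations stacked as [i, f, o, g]."""
    n = h.size
    z = W @ x + U @ h + b
    i, f, o = _sigmoid(z[:n]), _sigmoid(z[n:2 * n]), _sigmoid(z[2 * n:3 * n])
    g = np.tanh(z[3 * n:])
    c = f * c + i * g
    return o * np.tanh(c), c

def encode_decode(X, d, params):
    """Toy single-layer encoder-decoder: encode the k feature frames
    in X, then emit d motor-control frames, squashing each output
    into (0, 1) like the normalized control values y_j."""
    (We, Ue, be), (Wd, Ud, bd), (Wo, bo) = params
    n = be.size // 4
    h, c = np.zeros(n), np.zeros(n)
    for x in X:                        # encoder over the k input frames
        h, c = lstm_step(x, h, c, We, Ue, be)
    y, ys = np.zeros(Wo.shape[0]), []
    for _ in range(d):                 # decoder over the d output frames
        h, c = lstm_step(y, h, c, Wd, Ud, bd)
        y = _sigmoid(Wo @ h + bo)      # normalized motor controls
        ys.append(y)
    return np.array(ys)
```

With untrained random weights this only demonstrates the data flow; in the patent the parameters are learned by minimizing the smoothness-constrained objective.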
In a specific implementation of the embodiment of the present invention, the construction module 1102 is further configured to:
1) solve the smoothness-constrained inverse mechanical model Γ(·) using the formula, wherein the encoding structure and decoding structure are L-layer; L is the preset number of hidden layers; the input is the facial feature sequence of the k frames before time t of the robot; the output is the motor control sequence of the d frames after time t of the robot; and Y_{t-Δt} is the motor control sequence of the robot at time t-Δt;
2) construct, using the formulas, the polynomial functions fitted over the d moments before and the d moments after for the j-th motor, wherein H_j(t+kΔt) is the polynomial function fitted over the d moments before time t for the j-th motor, with its i-th polynomial coefficients; and F_j(t+qΔt) is the polynomial function fitted over the d moments after for the j-th motor, with its i-th polynomial coefficients;
3) compute the smoothing coefficients of the j-th motor control sequence using the formula α_j = P⁻¹U_j, wherein α_j is the vector of smoothing coefficients to be solved for the j-th motor control sequence, consisting of the d polynomial coefficients of the before-fitting function and the d polynomial coefficients of the after-fitting function; P is the coefficient matrix; and U_j is the vector formed by the displacements of the j-th motor control sequence from time t-(d-1)Δt to time t+(d-1)Δt together with zero elements, U_j = (y_{(t-(d-1)Δt)j}, …, y_{(t-Δt)j}, y_{(t)j}, y_{(t+Δt)j}, …, y_{(t+(d-1)Δt)j}, 0, 0, 0)ᵀ, wherein y_{(t-(d-1)Δt)j} is the control displacement of the j-th motor at time t-(d-1)Δt, y_{(t)j} is the control displacement of the j-th motor at time t, and y_{(t+(d-1)Δt)j} is the control displacement of the j-th motor at time t+(d-1)Δt;
4) solve the smoothing coefficient matrix of the n motor control sequences at time t using the formula A = (α_1, …, α_j, …, α_n), 1 ≤ j ≤ n, wherein A is the smoothing coefficient matrix formed by the n motors at each moment, and α_j is the vector of smoothing coefficients of the j-th of the n motor control sequences at time t;
5) substitute the smoothing coefficient matrix into the formulas to compute the control displacements of the d frames before time t and of the d frames after time t for the j-th motor;
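Steps 2)-5) amount to fitting a motion-trend polynomial to each motor's recent displacements and using it to produce smoothed displacement estimates for the coming frames. The sketch below substitutes numpy's least-squares `polyfit` for the patent's P⁻¹U_j system, so it is only an illustration of the idea, not the exact construction.

```python
import numpy as np

def fit_motion_trend(times_before, y_before, times_after, deg):
    """Fit a polynomial of order `deg` to one motor's control
    displacements over the d moments before t, then evaluate it at
    the d moments after t as smoothed displacement estimates.
    np.polyfit stands in for the patent's coefficient system."""
    coeffs = np.polyfit(times_before, y_before, deg)  # smoothing coefficients
    return np.polyval(coeffs, times_after)            # estimated displacements
```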
6) substitute the control displacements of the d frames after time t of the n motors computed in step 5) into the objective function to compute the optimal parameters of the smoothness-constrained inverse mechanical model, wherein J(W_E, W_D, b_E, b_D) is the objective for the optimal parameters of the smoothness-constrained inverse mechanical model; W_E is the matrix of its first model parameters; W_D is the matrix of its second model parameters; b_E is the matrix of its third model parameters; b_D is the matrix of its fourth model parameters; J(·) is the objective function; min denotes minimization; q indexes the expression frames after time t of the robot, q ∈ [0, d-1]; F(t+qΔt) is the matrix of displacement vectors of the robot's n motors in the true control sequence at time t+qΔt; F'(t+qΔt) is the matrix of velocity vectors of the robot's n motors in the true control sequence at time t+qΔt; F''(t+qΔt) is the matrix of acceleration vectors of the robot's n motors in the true control sequence at time t+qΔt; t is the current time; the hatted counterparts are the matrices of estimated displacement, velocity, and acceleration vectors of the n motors at time t+qΔt; α is the weight of the velocity and acceleration smoothness constraints, with α ≥ 0; and Σ is the summation operator.
In a specific implementation of the embodiment of the present invention, the generation module 1103 is further configured to:
take the performer's expression feature sequence of the k frames before time t as input and obtain, using the formula, the current optimal drive vector of the robot's facial motors, wherein the output is the optimal motor control vector of the smoothness-constrained inverse mechanical model; the encoding structure and decoding structure are L-layer LSTMs; the input is the performer's feature sequence; W_E* is the optimal value of the first model parameter W_E of the smoothness-constrained inverse mechanical model; W_D* is the optimal value of the second model parameter W_D; b_E* is the optimal value of the third model parameter b_E; and b_D* is the optimal value of the fourth model parameter b_D.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (9)
1. A robot expression imitation method based on a smoothness-constrained inverse mechanical model, characterized in that the method includes:
A: extracting robot facial feature vectors;
B: constructing a smoothness-constrained inverse mechanical model from facial feature sequences to motor control sequences;
C: taking the performer's real-time facial features as a target, generating an optimal motor control sequence based on the smoothness-constrained inverse mechanical model, then driving the robot's facial motors with the optimal motor control sequence so that the robot displays the expression corresponding to the performer's face.
2. The robot expression imitation method based on a smoothness-constrained inverse mechanical model according to claim 1, characterized in that step A includes:
A1: obtaining the robot's facial expression data and head pose data with a Kinect camera, wherein the robot's facial expression data includes feature point data of the parameterized face mesh based on the Candide-3 model and facial action unit data, and the robot's head pose data includes data of the rotation angles about the three head axes X, Y, Z;
A2: converting the feature point data based on the Candide-3 model from the Cartesian coordinate system into the Laplacian coordinate system using the Laplacian transformation;
A3: generating the robot facial feature vector from the rotation angles about the three head axes X, Y, Z, the facial action unit data, and the robot's facial geometric features.
3. The robot expression imitation method based on a smoothness-constrained inverse mechanical model according to claim 2, characterized in that step A1 includes:
obtaining the robot's facial expression data using the formula of the Candide-3 model augmented with eyeball feature points, wherein G is the parameterized representation of the Candide-3 model after adding the eyeball feature points; V is the feature point position vector; D is the adjacency matrix formed by the p feature points; v_i is the position vector of the i-th feature point; p is the number of feature points of the Candide-3 model after adding the eyeball feature points; (x_i, y_i, z_i) is the three-dimensional vector value of the i-th feature point; i is the index of a feature point; j is the index of a feature point adjacent to feature point i; and e_ij is an element of the adjacency matrix;
and obtaining the head pose data (R_pitch, R_yaw, R_roll) with the Kinect API, wherein R_pitch is the rotation angle about the X axis, R_yaw is the rotation angle about the Y axis, and R_roll is the rotation angle about the Z axis.
4. The robot expression imitation method based on a smoothness-constrained inverse mechanical model according to claim 2, characterized in that step A2 includes:
realizing, using the formula, the conversion from the Cartesian coordinate system to the Laplacian coordinate system, wherein ζ_i is the geometric feature of feature point v_i; the transformed quantity is the Laplacian coordinate of feature point v_i; Ω_i is the sum of the areas of the triangles adjacent to vertex v_i; α_ij is the first angle adjacent to edge v_i v_j; β_ij is the second angle adjacent to edge v_i v_j; v_i is a feature point; v_j is the j-th feature point adjacent to v_i; N(i) is the set of all feature points adjacent to v_i; L(·) is the Laplacian transformation function; ||·|| is the modulus function; and Σ is the summation operator.
5. The robot expression imitation method based on a smoothness-constrained inverse mechanical model according to claim 2, characterized in that step A3 includes:
generating, using the formula, the robot facial feature vector, wherein X is the robot facial feature vector; x_i is the value of the i-th dimension of the facial feature vector; m is the dimension of the extracted facial feature vector; (R_pitch, R_yaw, R_roll) are the rotation angles about the robot's three axes X, Y, Z; AU_j is the value of the j-th facial action unit; and the remaining components are the Laplacian coordinates of feature points v_k.
6. The robot expression imitation method based on a smoothness-constrained inverse mechanical model according to claim 1, characterized in that step B includes:
B1: constructing, using the formula, the inverse mechanical model from facial feature sequences to motor control sequences, wherein the output of the inverse mechanical model is the motor control sequence; Δt is determined by the frame rate at which the Kinect camera captures the robot's facial expression; the input is the facial feature sequence of the k moments before time t of the robot; Γ(·) is the inverse mechanical model; t is the current time; k is the number of expression frames before time t of the robot; d is the number of expression frames after time t of the robot; Y_{t+(d-2)Δt} is the motor control data at time t+(d-2)Δt; and X_{t-(k-2)Δt} is the robot's facial feature vector at time t-(k-2)Δt;
B2: modeling the inverse mechanical model from facial feature sequences to motor control sequences with a multilayer LSTM encoder-decoder structure, fitting the motion-trend parameters of the motor control sequence with a d-order polynomial, and constructing the smoothness-constrained inverse mechanical model from the deviations of displacement, velocity, and acceleration.
7. The robot expression imitation method based on a smoothness-constrained inverse mechanical model according to claim 6, characterized in that step B2 includes:
1) solving the smoothness-constrained inverse mechanical model Γ(·) using the formula, wherein the encoding structure and decoding structure are L-layer; L is the preset number of hidden layers; the input is the facial feature sequence of the k frames before time t of the robot; the output is the motor control sequence of the d frames after time t of the robot; and Y_{t-Δt} is the motor control sequence of the robot at time t-Δt;
2) constructing, using the formulas, the polynomial functions fitted over the d moments before and the d moments after for the j-th motor, wherein H_j(t+kΔt) is the polynomial function fitted over the d moments before time t for the j-th motor, with its i-th polynomial coefficients; and F_j(t+qΔt) is the polynomial function fitted over the d moments after for the j-th motor, with its i-th polynomial coefficients;
3) computing the smoothing coefficients of the j-th motor control sequence using the formula α_j = P⁻¹U_j, wherein α_j is the vector of smoothing coefficients to be solved for the j-th motor control sequence, consisting of the d polynomial coefficients of the before-fitting function and the d polynomial coefficients of the after-fitting function; P is the coefficient matrix; and U_j is the vector formed by the displacements of the j-th motor control sequence from time t-(d-1)Δt to time t+(d-1)Δt together with zero elements, U_j = (y_{(t-(d-1)Δt)j}, …, y_{(t-Δt)j}, y_{(t)j}, y_{(t+Δt)j}, …, y_{(t+(d-1)Δt)j}, 0, 0, 0)ᵀ, wherein y_{(t-(d-1)Δt)j} is the control displacement of the j-th motor at time t-(d-1)Δt, y_{(t)j} is the control displacement of the j-th motor at time t, and y_{(t+(d-1)Δt)j} is the control displacement of the j-th motor at time t+(d-1)Δt;
4) solving the smoothing coefficient matrix of the n motor control sequences at time t using the formula A = (α_1, …, α_j, …, α_n), 1 ≤ j ≤ n, wherein A is the smoothing coefficient matrix formed by the n motors at each moment, and α_j is the vector of smoothing coefficients of the j-th of the n motor control sequences at time t;
5) substituting the smoothing coefficient matrix into the formulas to compute the control displacements of the d frames before time t and of the d frames after time t for the j-th motor;
6) substituting the control displacements of the d frames after time t of the n motors computed in step 5) into the objective function to compute the optimal parameters of the smoothness-constrained inverse mechanical model, wherein J(W_E, W_D, b_E, b_D) is the objective for the optimal parameters of the smoothness-constrained inverse mechanical model; W_E is the matrix of its first model parameters; W_D is the matrix of its second model parameters; b_E is the matrix of its third model parameters; b_D is the matrix of its fourth model parameters; J(·) is the objective function; min denotes minimization; q indexes the expression frames after time t of the robot, q ∈ [0, d-1]; F(t+qΔt) is the matrix of displacement vectors of the robot's n motors in the true control sequence at time t+qΔt; F'(t+qΔt) is the matrix of velocity vectors of the robot's n motors in the true control sequence at time t+qΔt; F''(t+qΔt) is the matrix of acceleration vectors of the robot's n motors in the true control sequence at time t+qΔt; t is the current time; the hatted counterparts are the matrices of estimated displacement, velocity, and acceleration vectors of the n motors at time t+qΔt; α is the weight of the velocity and acceleration smoothness constraints, with α ≥ 0; and Σ is the summation operator.
8. The robot expression imitation method based on a smoothness-constrained reverse mechanical model according to claim 1, characterized in that step C comprises:

taking the facial expression feature sequence G(t) of the k frames before time t as input and calculating the robot's optimal control sequence with the formula

F̂*(t) = D^L_LSTM( E^L_LSTM( G(t); W_E*, b_E* ); W_D*, b_D* ),

wherein F̂*(t) is the optimal motor control sequence output by the smoothness-constrained reverse mechanical model; E^L_LSTM is the encoding structure of the L-layer LSTM; D^L_LSTM is the decoding structure of the L-layer LSTM; G(t) is the performer feature sequence at time t; W_E* is the optimal value of the first model parameter W_E of the smoothness-constrained reverse mechanical model; W_D* is the optimal value of its second model parameter W_D; b_E* is the optimal value of its third model parameter b_E; and b_D* is the optimal value of its fourth model parameter b_D.
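As a rough illustration of the encode-then-decode computation in this claim, the sketch below substitutes single affine maps for the L-layer LSTM encoding and decoding structures; the function names, shapes, and tanh nonlinearity are assumptions for demonstration, not the patented model.

```python
import numpy as np

def encode(G, W_E, b_E):
    """Encoder stand-in: maps the k x m feature sequence G to a latent
    vector. In the claim this role is played by the L-layer LSTM
    encoding structure; a single affine map over the flattened
    sequence is used here purely for illustration."""
    return np.tanh(G.reshape(-1) @ W_E + b_E)

def decode(h, W_D, b_D, d, n):
    """Decoder stand-in: maps the latent vector to a d x n motor
    control sequence (d frames, n motors)."""
    return (h @ W_D + b_D).reshape(d, n)

def optimal_control_sequence(G, params, d, n):
    """Compose encoder and decoder with the trained optimal
    parameters (W_E*, b_E*, W_D*, b_D*)."""
    W_E, b_E, W_D, b_D = params
    return decode(encode(G, W_E, b_E), W_D, b_D, d, n)
```

The essential point mirrored here is the composition: the performer feature sequence is first encoded with the optimal encoder parameters, and the resulting representation is decoded with the optimal decoder parameters into the motor control sequence.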
9. A robot expression imitation device based on a smoothness-constrained reverse mechanical model, characterized in that the device comprises:

an extraction module, for extracting robot facial feature vectors;

a construction module, for constructing the smoothness-constrained reverse mechanical model from facial feature sequences to motor control sequences; and

a generation module, for generating an optimal motor control sequence based on the smoothness-constrained reverse mechanical model, with the performer's real-time facial features as the target, and then driving the robot's facial motors with the optimal motor control sequence so that the robot displays an expression corresponding to the performer's face.
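The three claimed modules form a simple pipeline: extract facial features, map them through the reverse mechanical model to motor commands, and drive the motors. The sketch below is purely hypothetical stand-in code for that flow; every function body is an illustrative placeholder, not the patented implementation.

```python
import numpy as np

def extract_features(frame):
    """Extraction-module stand-in: reduce an image frame (rows x cols)
    to a fixed-length facial feature vector (here, column means)."""
    return frame.mean(axis=0)

def reverse_mechanical_model(features, W, b):
    """Generation-module stand-in: map facial features to motor
    commands, squashed to [0, 1] as normalized motor displacements."""
    return 1.0 / (1.0 + np.exp(-(features @ W + b)))

def drive_motors(commands, lo, hi):
    """Scale normalized commands into each motor's travel range."""
    return lo + commands * (hi - lo)
```

Run per video frame, this loop realizes the device's behavior: each new performer frame yields a feature vector, the model turns it into normalized motor targets, and the drive step converts those into physical motor positions.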
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810593985.1A CN108908353B (en) | 2018-06-11 | 2018-06-11 | Robot expression simulation method and device based on smooth constraint reverse mechanical model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108908353A true CN108908353A (en) | 2018-11-30 |
CN108908353B CN108908353B (en) | 2021-08-13 |
Family
ID=64410836
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810593985.1A Active CN108908353B (en) | 2018-06-11 | 2018-06-11 | Robot expression simulation method and device based on smooth constraint reverse mechanical model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108908353B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2581486A (en) * | 2019-02-15 | 2020-08-26 | Hanson Robotics Ltd | Animatronic robot calibration |
CN112454390A (en) * | 2020-11-27 | 2021-03-09 | University of Science and Technology of China | Humanoid robot facial expression simulation method based on deep reinforcement learning |
CN116485964A (en) * | 2023-06-21 | 2023-07-25 | Haima Cloud (Tianjin) Information Technology Co., Ltd. | Expression processing method, device and storage medium of digital virtual object |
CN116485964B (en) * | 2023-06-21 | 2023-10-13 | Haima Cloud (Tianjin) Information Technology Co., Ltd. | Expression processing method, device and storage medium of digital virtual object |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003265869A (en) * | 2002-03-12 | 2003-09-24 | Univ Waseda | Eye-eyebrow structure of robot |
EP1988493A1 (en) * | 2007-04-30 | 2008-11-05 | National Taiwan University of Science and Technology | Robotic system and method for controlling the same |
CN106078752A (en) * | 2016-06-27 | 2016-11-09 | Xidian University | Kinect-based humanoid robot human-body behavior imitation method |
CN106919899A (en) * | 2017-01-18 | 2017-07-04 | Beijing Guangnian Wuxian Technology Co., Ltd. | Method and system for imitating human facial expression output based on intelligent robot |
CN106926258A (en) * | 2015-12-31 | 2017-07-07 | Shenzhen Kuang-Chi Hezhong Technology Co., Ltd. | Control method and device for robot emotion |
CN107392109A (en) * | 2017-06-27 | 2017-11-24 | Nanjing University of Posts and Telecommunications | Deep neural network-based neonatal pain expression recognition method |
Non-Patent Citations (2)
Title |
---|
Li Jing et al.: "Research on Micro-expression Recognition Based on Integral Projection and LSTM", Computer Era (《计算机时代》) * |
Huang Zhong: "Research on Expression Recognition and Expression Reproduction Methods for Humanoid Robots", China Doctoral Dissertations Full-text Database (《中国博士学位论文全文数据库》) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Magnenat-Thalmann et al. | Handbook of virtual humans | |
CN110599573B (en) | Method for realizing real-time human face interactive animation based on monocular camera | |
JP2022553167A (en) | MOVIE PROCESSING METHOD, MOVIE PROCESSING APPARATUS, COMPUTER PROGRAM AND ELECTRONIC DEVICE | |
Ren et al. | Automatic facial expression learning method based on humanoid robot XIN-REN | |
Zhu et al. | Human motion generation: A survey | |
CN108908353A (en) | Robot expression based on the reverse mechanical model of smoothness constraint imitates method and device | |
Morishima | Face analysis and synthesis | |
Choi et al. | Animatomy: An animator-centric, anatomically inspired system for 3d facial modeling, animation and transfer | |
Huang et al. | Facial expression imitation method for humanoid robot based on smooth-constraint reversed mechanical model (SRMM) | |
Liu et al. | Real-time robotic mirrored behavior of facial expressions and head motions based on lightweight networks | |
Tang et al. | Real-time conversion from a single 2D face image to a 3D text-driven emotive audio-visual avatar | |
Haber et al. | Facial modeling and animation | |
Chi | A motion control scheme for animating expressive arm movements | |
van Welbergen | Behavior Generation for Interpersonal Coordination with Virtual Humans: on Specifying, Scheduling and Realizing Multimodal Virtual Human Behavior | |
Tang et al. | Lip-sync in human face animation based on video analysis and spline models | |
Neff et al. | Animation of natural virtual characters | |
Thalmann | The virtual human as a multimodal interface | |
Ishikawa et al. | 3D face expression estimation and generation from 2D image based on a physically constraint model | |
Egan et al. | Neurodog: Quadruped embodiment using neural networks | |
Lu | Learning-Based, Muscle-Actuated Biomechanical Human Animation: Bipedal Locomotion Control and Facial Expression Transfer | |
Dai et al. | Research on 2D Animation Simulation Based on Artificial Intelligence and Biomechanical Modeling | |
Dwarakanath | Neuromuscular Animation and FACS-Based Expression Transfer Via Deep Learning | |
Valvoda | Virtual humanoids and presence in virtual environments | |
Zhang et al. | Implementation of Animation Character Action Design and Data Mining Technology Based on CAD Data | |
Rajendran | Understanding the Desired Approach for Animating Procedurally |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||