CN108908353B - Robot expression simulation method and device based on smooth constraint reverse mechanical model - Google Patents


Info

Publication number
CN108908353B
CN108908353B (application CN201810593985.1A)
Authority
CN
China
Prior art keywords
robot
motor
moment
mechanical model
delta
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810593985.1A
Other languages
Chinese (zh)
Other versions
CN108908353A (en)
Inventor
黄忠
刘娟
丁蕾
江巨浪
唐飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anqing Normal University
Original Assignee
Anqing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anqing Normal University filed Critical Anqing Normal University
Priority to CN201810593985.1A
Publication of CN108908353A
Application granted
Publication of CN108908353B

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00: Manipulators not otherwise provided for
    • B25J11/0005: Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • B25J11/0015: Face robots, animated artificial faces for imitating human expressions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Feedback Control In General (AREA)
  • Toys (AREA)

Abstract

The invention discloses a robot expression simulation method based on a smooth constraint reverse mechanical model, which comprises the following steps. A: extract a robot facial feature vector. B: construct a smooth constraint reverse mechanical model from a facial feature sequence to a motor control sequence. C: taking the performer's real-time facial features as the target, generate an optimal motor control sequence based on the smooth constraint reverse mechanical model to drive the robot's facial motors, so that the robot presents an expression corresponding to the performer's facial features. The invention also discloses a robot expression simulation device based on the smooth constraint reverse mechanical model. The invention has the advantages that, by applying its embodiments, the spatiotemporal similarity of robot expression simulation and the smoothness of continuous motor motion can be improved, and the expression migration time shortened.

Description

Robot expression simulation method and device based on smooth constraint reverse mechanical model
Technical Field
The invention relates to a robot expression simulation method, in particular to a robot expression simulation method and device based on a smooth constraint reverse mechanical model.
Background
With the development of control science, sensor technology, artificial intelligence, materials science and the like, humanoid robots with human-like appearance and motion capability have become possible. Although humanoid robots show a high degree of "brain intelligence" (intelligence quotient) in imitating human behaviors, natural and harmonious emotional interaction is difficult to realize with traditional interaction modes such as keyboard, mouse, screen and patterns, and this level cannot meet people's expectations for robot intelligence. Human-computer interaction obstacles and limited emotional interaction capability have gradually become the bottleneck of robot practicality. Therefore, how to improve the "mental (emotional) intelligence" of robots has become a key problem to be urgently solved in robotics research. Aiming at the problem of high brain intelligence but low mental intelligence in current natural human-computer interaction, exploring a natural interaction mode that is rich in emotion and meets psychological needs is an urgent requirement for solving the robot's "emotion loss" problem. Facial expression is the most important carrier of natural human-computer interaction and of robot emotion expression, so how to make a humanoid robot show the same expression as a human is a technical problem to be solved urgently.
At present, human expression imitation is the most effective way for a robot to realize multi-motor cooperative control and present vivid expressions. Current robot expression simulation methods mainly fall into two types: expression category simulation and expression detail simulation. Expression category simulation establishes the internal relation between facial action units and head control motors based on a facial action coding system, and realizes common expression categories such as happiness and surprise by driving the motors. Because the generated expressions are single and the patterns fixed, expression category simulation is only suitable for robot facial emotion expression with few head degrees of freedom. Unlike expression category simulation, expression detail simulation transfers details and intensity through performance-driven techniques. These methods model a forward mechanical model and a motion smoothness model independently; mechanical constraints on motor motion can be considered in this way, but the optimal control value must be solved inversely by an optimization algorithm during the real-time expression simulation stage, which restricts the speed of expression migration. The prior art therefore suffers from poor real-time performance of expression migration.
Disclosure of Invention
The invention aims to provide a robot expression simulation method and device based on a smooth constraint reverse mechanical model, so as to solve the technical problems in the prior art of low spatiotemporal similarity and smoothness in expression simulation and poor real-time performance of expression migration.
The invention solves the technical problems through the following technical scheme:
the embodiment of the invention provides a robot expression simulation method based on a smooth constraint reverse mechanical model, which comprises the following steps:
A: extracting a robot facial feature vector;
B: constructing a smooth constraint reverse mechanical model from a facial feature sequence to a motor control sequence;
C: taking the performer's real-time facial features as the target, generating an optimal motor control sequence based on the smooth constraint reverse mechanical model, and then using the optimal motor control sequence to drive the robot's facial motors so that the robot presents an expression corresponding to the performer's face.
The embodiment of the invention also provides a robot expression simulation device based on the smooth constraint reverse mechanical model, which comprises: an extraction module for extracting the robot facial feature vector; a construction module for constructing the smooth constraint reverse mechanical model from the facial feature sequence to the motor control sequence; and a generating module for generating an optimal motor control sequence based on the smooth constraint reverse mechanical model with the performer's real-time facial features as the target, and then using the optimal motor control sequence to drive the robot's facial motors so that the robot presents an expression corresponding to the performer's face.
Compared with the prior art, the invention has the following advantages:
By applying the embodiment of the invention, the performer's real-time facial features are taken as the target, an optimal motor control sequence is generated based on the smooth constraint reverse mechanical model, and the robot uses this sequence to transfer the performer's facial expression features. During robot expression transfer, the human expression sequence can be mapped directly into the robot facial motor control sequence, which improves the spatiotemporal similarity of expression simulation and the smoothness of continuous motor motion. Compared with the prior art, in which the optimal control value must be solved inversely by an optimization algorithm, the solving step is omitted and the expression transfer time is shortened.
Drawings
Fig. 1 is a schematic flowchart of the robot expression simulation method based on the smooth constraint reverse mechanical model according to an embodiment of the present invention;
Fig. 2 is a schematic principle diagram of the robot expression simulation method based on the smooth constraint reverse mechanical model according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the robot control motors and degrees of freedom provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of the adjacency relation between robot facial feature points according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the reverse mechanical model with the LSTM encoding-decoding structure according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of the d-order polynomial fitting according to an embodiment of the present invention;
Fig. 7 is a graph of the motor control deviation results provided by embodiments of the present invention;
Fig. 8 is a graph of the spatiotemporal similarity results of expression migration according to an embodiment of the present invention;
Fig. 9 is a graph of the motion smoothness results of the robot facial motors according to an embodiment of the present invention;
Fig. 10 is a graph illustrating the effect of the weight parameter on spatiotemporal similarity and motion smoothness according to an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of the robot expression simulation device based on the smooth constraint reverse mechanical model according to an embodiment of the present invention.
Detailed Description
The following examples are given for the detailed implementation and specific operation of the present invention, but the scope of the present invention is not limited to the following examples.
First, the robot expression simulation method and device based on the smooth constraint reverse mechanical model provided by the embodiment of the invention are introduced.
Fig. 1 is a schematic flowchart and Fig. 2 a schematic principle diagram of the robot expression simulation method based on the smooth constraint reverse mechanical model according to an embodiment of the present invention; as shown in Fig. 1 and Fig. 2, the method includes:
s101: extracting a robot face feature vector:
specifically, the step S101 may include: a1: utilize the Kinect camera to acquire the facial expression data of robot and the head gesture data of robot, wherein, the facial expression data of robot includes: feature point data and facial action unit data of the parameterized face mesh based on the Candide-3 model; head pose data of the robot, comprising: data of the rotation angles of the three axial directions of the head XYZ,
Specifically, step A1 includes:
Track the positions of the left and right eyeball feature points with a pupil positioning algorithm, and add adjacency relations between these feature points and the surrounding feature points.
Using the formula of the Candide-3 model after the eyeball feature points are added,

G = (V, D), V = (v_1, v_2, …, v_p), v_i = (x_i, y_i, z_i),

acquire the facial expression data of the robot, where G is the parameterized representation of the Candide-3 model with the eyeball feature points added; V is the feature point position vector; D is the adjacency matrix composed of the p feature points; v_i is the position vector of the i-th feature point; p is the number of feature points of the Candide-3 model after adding the eyeball feature points; (x_i, y_i, z_i) is the three-dimensional vector value of the i-th feature point; i is the serial number of a feature point; j is the serial number of a feature point adjacent to it; e_ij is an element of the adjacency matrix, whose value is determined from the adjacency of the feature points (see the formula given with Fig. 4 below).

Obtain the head pose data (R_pitch, R_yaw, R_roll) using the Kinect API, where R_pitch is the rotation angle about the X axis, R_yaw the rotation angle about the Y axis, and R_roll the rotation angle about the Z axis.
In practical application, the inventors developed a high-imitation humanoid robot, "Think", with 47 motors. To make the designed robot closer to a human body, the robot imitates the muscle movement of the human head, shoulders, arms, wrists, waist, legs and the like by pneumatic driving, and elastic silica gel is used to simulate human skin color and blood-vessel texture. Since facial expressions are not only the most important carrier of emotional expression but also the most effective form of expressing subjective intention in human-computer interaction, the embodiment of the invention takes only the head and facial motors related to expression as the research object. Fig. 3 is a schematic diagram of the robot control motors and degrees of freedom provided by an embodiment of the present invention; as shown in Fig. 3, it presents the 11 control motors and degrees of freedom of the humanoid robot's head.
In the embodiment of the present invention, in order to mine the mapping relationship between motor control vectors and the facial details they present, Microsoft Kinect 2.0 is used as the image capture device, and facial expression and head pose data are obtained in real time through the Kinect API, specifically including: the rotation angles about the three axes X, Y and Z of the head, the parameterized face mesh based on the Candide-3 model (containing 1347 feature points), and 17 facial action units. Meanwhile, considering functions important in human-computer interaction such as gaze fixation and eyeball rotation, and to improve the description of local eye details, the left and right eyeball feature points are tracked with a pupil positioning algorithm, and adjacency relations between these points and neighboring feature points are added to triangulate the eye region. The Candide-3 model after adding the two eyeball feature points is set as

G = (V, D), V = (v_1, v_2, …, v_p), v_i = (x_i, y_i, z_i),

where G is the parameterized representation of the Candide-3 model; V is the feature point position vector; D is the adjacency matrix composed of the p feature points; v_i is the position vector of the i-th feature point; p is the number of feature points after adding the eyeball feature points, with value 1349; (x_i, y_i, z_i) is the three-dimensional vector value of the i-th feature point; i is the serial number of a feature point; j is the serial number of a feature point adjacent to it.
The Kinect API (Kinect Application Programming Interface) is the program interface of the Kinect three-dimensional camera device and outputs the spatial variation data of the captured target acquired by the Kinect; the spatial variation data may be, for example, two-dimensional color-image coordinates, depth-image spatial coordinates, and skeleton-tracking spatial coordinates.
Fig. 4 is a schematic diagram of the adjacency relation between robot facial feature points according to an embodiment of the present invention. As shown in Fig. 4, α_ij is the first angle adjacent to edge v_i v_j; β_ij is the second angle adjacent to edge v_i v_j; v_i is a feature point; v_j is a feature point adjacent to it. The values of the elements of the adjacency matrix can then be determined: e_ij = 1 if (v_i, v_j) is an edge of the triangulated mesh, and e_ij = 0 otherwise; the angles α_ij and β_ij serve as the weights of the Laplacian transform given below.
a2: feature point data based on the Candide-3 model is transformed from a Cartesian coordinate system into a Laplacian coordinate system using a Laplacian transformation.
Specifically, step A2 may include: using the formula

ζ_i = L(v_i) = (1/(2ω_i)) Σ_{j∈N(i)} (cot α_ij + cot β_ij)(v_i − v_j),

realize the conversion from the Cartesian coordinate system to the Laplacian coordinate system, where ζ_i is the geometric feature of feature point v_i; L(v_i) is the Laplacian coordinate of feature point v_i; ω_i is the sum of the areas of the triangles that have feature point v_i as a vertex; α_ij is the first angle adjacent to edge v_i v_j; β_ij is the second angle adjacent to edge v_i v_j; v_i is a feature point; v_j is the j-th feature point adjoining v_i; N(i) is the set of all feature points adjacent to v_i; L() is the Laplacian transform; | | takes the modulus of a vector; Σ is the summation function.
Illustratively, the convex-hull weight method above is used to calculate the Laplacian coordinates ζ_i = L(v_i) of the Candide-3 face model feature points v_i.
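A minimal NumPy sketch of the per-vertex Laplacian coordinate under the cotangent-weight reading reconstructed above (the helper containers are illustrative assumptions):

```python
import numpy as np

def laplacian_coordinate(i, V, neighbors, cot_a, cot_b, area):
    """Laplacian coordinate zeta_i of vertex i.

    V          : (p, 3) vertex positions
    neighbors  : neighbors[i] -> list of adjacent vertex indices N(i)
    cot_a/cot_b: cot_a[(i, j)] = cot(alpha_ij), cot_b[(i, j)] = cot(beta_ij)
    area       : area[i] = sum of areas of triangles incident to vertex i (omega_i)
    """
    zeta = np.zeros(3)
    for j in neighbors[i]:
        w = cot_a[(i, j)] + cot_b[(i, j)]   # cotangent weight of edge (i, j)
        zeta += w * (V[i] - V[j])           # weighted umbrella vector
    return zeta / (2.0 * area[i])           # area normalization, 1 / (2 * omega_i)
```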
A3: generating the robot facial feature vector from the rotation angles about the three head axes X, Y and Z, the facial action unit data, and the robot facial geometric features.
In particular, the cascading formula

ζ = (ζ_1, ζ_2, …, ζ_p)

may be used, taking the result of cascading the Laplacian coordinates of all feature points as the extracted geometric feature of the robot face, where ζ_i, the facial geometric feature of the i-th feature point, is a three-dimensional vector.
Although the Laplacian coordinate cascading strategy of the prior art not only keeps the topology information among feature points but also, by fusing the normal and tangential information of the feature points, reflects the bending degree and movement direction of a mesh vertex relative to its adjacent points, facial features based on the Laplacian transform only describe the geometric deformation of the low-level facial shape and do not measure the movement amplitude changes of high-level facial muscles and the head pose. In the embodiment of the invention, in order to accurately describe the changes of facial muscles and head pose, the facial feature vector X is constructed from the rotation angles (R_pitch, R_yaw, R_roll) about the three head axes, the 17 facial action units AU_1, …, AU_17, and the facial geometric features ζ_1, …, ζ_p:

X = (x_1, …, x_m) = (R_pitch, R_yaw, R_roll, AU_1, …, AU_17, ζ_1, …, ζ_1349),

where X is the robot facial feature vector; x_i is the value of the i-th dimension of the facial feature vector; (R_pitch, R_yaw, R_roll) are the rotation angles of the robot about the three axes X, Y and Z; AU_j is the feature value of the j-th facial action unit; ζ_k is the Laplacian coordinate of feature point v_k; m is the dimension of the extracted facial feature vector, with m = 3 + 17 + 1349 × 3 = 4067. As the construction process shows, X not only contains the geometric deformation information of the facial muscles but also integrates their movement amplitude and the head pose changes; it can therefore provide more accurate target data for robot expression simulation.
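A minimal sketch of assembling X, assuming the head pose, action-unit values and per-vertex Laplacian coordinates are already available (function and variable names are illustrative):

```python
import numpy as np

def build_feature_vector(head_pose, action_units, zeta):
    """Concatenate head pose (3,), facial action units (17,) and
    Laplacian coordinates (1349, 3) into the m = 4067-dim vector X."""
    X = np.concatenate([
        np.asarray(head_pose),          # (R_pitch, R_yaw, R_roll)
        np.asarray(action_units),       # AU_1 ... AU_17
        np.asarray(zeta).reshape(-1),   # zeta_1 ... zeta_1349, flattened
    ])
    assert X.shape == (3 + 17 + 1349 * 3,)  # m = 4067
    return X
```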
S102: constructing a smooth constraint reverse mechanical model from a facial feature sequence to a motor control sequence.
Specifically, this step (step B) includes:
B1: using the formula

(Y_t, Y_{t+Δt}, …, Y_{t+(d−1)Δt}) = Γ(X_{t−(k−1)Δt}, …, X_{t−Δt}, X_t),

construct the reverse mechanical model from a facial feature sequence to a motor control sequence, where (Y_t, …, Y_{t+(d−1)Δt}) is the motor control sequence output by the reverse mechanical model; Δt is the frame interval at which the Kinect camera acquires expression frames; (X_{t−(k−1)Δt}, …, X_t) is the sequence of robot facial feature vectors; Γ() is the reverse mechanical model; t is the current time; k is the number of expression frames before time t; d is the number of expression frames after time t; Y_{t+(d−2)Δt} is the motor control data at time t + (d−2)Δt; X_{t−(k−2)Δt} is the robot facial feature vector at time t − (k−2)Δt;
B2: modeling the smooth constraint reverse mechanical model from the facial feature sequence to the motor control sequence with a multilayer LSTM encoding-decoding structure, fitting the motion trend parameters of the motor control sequence with a d-order polynomial, and constructing the smooth constraint reverse mechanical model based on the deviations of displacement, velocity and acceleration.
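Since the experiments below state that the model is built on the LSTM module in TensorFlow, a minimal TensorFlow/Keras sketch of such an L-layer encoding-decoding network follows; the layer sizes, the teacher-forced decoder input and all names are illustrative assumptions, not the patent's exact architecture:

```python
import tensorflow as tf

K, D, M, N = 5, 5, 4067, 11   # k input frames, d output frames, feature dim, motors
H, L = 128, 2                 # hidden size and layer count (illustrative)

# Encoder: L stacked LSTM layers compress the k-frame facial feature
# sequence into the depth semantic C (the final states of each layer).
enc_in = tf.keras.Input(shape=(K, M), name="facial_features")
x = enc_in
enc_states = []
for _ in range(L):
    x, h, c = tf.keras.layers.LSTM(H, return_sequences=True, return_state=True)(x)
    enc_states.append([h, c])

# Decoder: L stacked LSTM layers initialized with C unroll d steps; a dense
# output layer emits one n-motor control vector per step (sigmoid keeps the
# controls in the normalized [0, 1] range used by the patent).
dec_in = tf.keras.Input(shape=(D, N), name="previous_controls")
y = dec_in
for l in range(L):
    y = tf.keras.layers.LSTM(H, return_sequences=True)(y, initial_state=enc_states[l])
out = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(N, activation="sigmoid"))(y)

model = tf.keras.Model([enc_in, dec_in], out)
```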
In practical application, Fig. 5 is a schematic diagram of the reverse mechanical model with the LSTM encoding-decoding structure provided by an embodiment of the present invention. As shown in Fig. 5, the construction process of the smooth constraint reverse mechanical model with the LSTM encoding-decoding structure may be as follows.

1) An L-layer LSTM encoder is adopted to encode the expression feature sequence of length k into a depth semantic C:

C = f_E(X_{t−(k−1)Δt}, …, X_{t−Δt}, X_t),

where f_E() is the L-layer LSTM neural network at the encoding end, which can be expressed through the input and output of layer l at time t + iΔt:

z^{l,E}_{t+iΔt} = W^{l,E} h^{l−1,E}_{t+iΔt} + b^{l,E},
h^{l,E}_{t+iΔt} = LSTM(z^{l,E}_{t+iΔt}, h^{l,E}_{t+(i−1)Δt}),

where z^{l,E}_{t+iΔt} is the input of the l-th hidden layer at the encoding end at time t + iΔt; h^{l−1,E}_{t+iΔt} is the output of the (l−1)-th hidden layer at the encoding end at time t + iΔt; h^{l,E}_{t+iΔt} is the output of the l-th hidden layer at the encoding end at time t + iΔt; h^{l,E}_{t+(i−1)Δt} is the output of the l-th hidden layer at the encoding end at time t + (i−1)Δt; W^{l,E} is the full-connection weight of the l-th hidden layer at the encoding end; b^{l,E} is the bias of the l-th hidden layer at the encoding end, with l ∈ [1, L]; L is the number of hidden layers.
2) On the basis of the obtained facial depth semantic C, an L-layer LSTM network is further adopted to decode C into the robot motor control sequence of d frames:

(Y_t, Y_{t+Δt}, …, Y_{t+(d−1)Δt}) = f_D(C, Y_{t−Δt}),

so that the smooth constraint reverse mechanical model Γ() is solved as

(Y_t, Y_{t+Δt}, …, Y_{t+(d−1)Δt}) = f_D(f_E(X_{t−(k−1)Δt}, …, X_{t−Δt}, X_t), Y_{t−Δt}),

where f_E() is the L-layer encoding structure; f_D() is the L-layer decoding structure; L is the preset number of hidden layers; (X_{t−(k−1)Δt}, …, X_t) is the facial feature sequence of the k frames before time t; (Y_t, …, Y_{t+(d−1)Δt}) is the motor control sequence of the d frames after time t; Y_{t−Δt} is the motor control sequence of the robot at time t − Δt.
Similar to step 1), in the embodiment of the present invention the input and output of each LSTM layer at the decoding end can be expressed as:

z^{l,D}_{t+jΔt} = W^{l,D} h^{l−1,D}_{t+jΔt} + b^{l,D},
h^{l,D}_{t+jΔt} = LSTM(z^{l,D}_{t+jΔt}, h^{l,D}_{t+(j−1)Δt}),
Y_{t+jΔt} = W^{L+1,D} h^{L,D}_{t+jΔt} + b^{L+1,D},

where z^{l,D}_{t+jΔt} is the input of the l-th hidden layer at the decoding end at time t + jΔt; Y_{t+jΔt} is the output of the decoding end at time t + jΔt; h^{l,D}_{t+jΔt} is the output of the l-th hidden layer in the decoding stage at time t + jΔt; h^{l,D}_{t+(j−1)Δt} is the output of the l-th hidden layer in the decoding stage at time t + (j−1)Δt; W^{l,D} is the full-connection weight of the l-th hidden layer at the decoding end; b^{l,D} is its bias, with l ∈ [1, L]; W^{L+1,D} is the full-connection weight of the output layer at the decoding end; b^{L+1,D} is the bias of the output layer at the decoding end; L is the number of hidden layers at the decoding end.
The smooth constraint reverse mechanical model constructed in the embodiment of the invention can thus translate the facial feature sequence (X_{t−(k−1)Δt}, …, X_t) of the k frames before time t into its d-frame control sequence:

(Y_t, Y_{t+Δt}, …, Y_{t+(d−1)Δt}) = f_D(f_E(X_{t−(k−1)Δt}, …, X_t), Y_{t−Δt}),

where Y_{t+(d−1)Δt} is the output of the decoding layer at time t + (d−1)Δt.
As the above formula shows, the smooth constraint reverse mechanical model provided by the embodiment of the present invention completes the end-to-end translation from the facial feature sequence to the motor control sequence through the LSTM encoding-decoding network. This time-series multi-step prediction strategy not only solves inversely for the optimal motor control sequence, but also facilitates the real-time processing of motor motion smoothing. The control sequence of the j-th motor (1 ≤ j ≤ n) in the period t − (d−1)Δt to t + (d−1)Δt obtained in this step is:

(y_{(t−(d−1)Δt)j}, …, y_{(t−2Δt)j}, y_{(t−Δt)j}, y_{(t)j}, y_{(t+Δt)j}, y_{(t+2Δt)j}, …, y_{(t+(d−1)Δt)j}),

where y_{(t−(d−1)Δt)j} is the control displacement of the j-th motor at time t − (d−1)Δt.
3) A d-order polynomial is used to fit the control sequence of the j-th motor (1 ≤ j ≤ n) in the period t − (d−1)Δt to t + (d−1)Δt: (y_{(t−(d−1)Δt)j}, …, y_{(t−Δt)j}, y_{(t)j}, y_{(t+Δt)j}, …, y_{(t+(d−1)Δt)j}), where y_{(t−(d−1)Δt)j} is the control displacement of the j-th motor at time t − (d−1)Δt; y_{(t)j} is its control displacement at time t; y_{(t+(d−1)Δt)j} is its control displacement at time t + (d−1)Δt.
Specifically, Fig. 6 is a schematic diagram of the d-order polynomial fitting provided by an embodiment of the present invention. As shown in Fig. 6, d-order polynomial functions are constructed to fit the j-th motor at the d times before and the d times after time t:

H_j(t + kΔt) = Σ_{i=0}^{d} a_{ji} (kΔt)^i, k = −(d−1), …, −1, 0,
F_j(t + qΔt) = Σ_{i=0}^{d} b_{ji} (qΔt)^i, q = 0, 1, …, d−1,

where H_j(t + kΔt) is the polynomial function fitted at the d times before time t of the j-th motor; a_{ji} is the i-th polynomial coefficient of the fitting function at the d times before time t of the j-th motor; F_j(t + qΔt) is the polynomial function fitted at the d times after the j-th motor; b_{ji} is the i-th polynomial coefficient of the fitting function at the d times after the j-th motor.
4) Using the formula α_j = P^{−1} U_j, calculate the smoothing coefficients of the j-th motor control sequence, where α_j = (a_{j0}, …, a_{jd}, b_{j0}, …, b_{jd})^T is the smoothing coefficient vector to be solved of the j-th motor control sequence; a_{j0} is the 0-th polynomial coefficient of the fitting function before time t of the j-th motor; a_{jd} is its d-th polynomial coefficient; b_{j0} is the 0-th polynomial coefficient of the fitting function after time t of the j-th motor; b_{jd} is its d-th polynomial coefficient; P is the coefficient matrix; U_j is the vector formed by the displacements of the j-th motor control sequence at times t − (d−1)Δt to t + (d−1)Δt together with zero elements,

U_j = (y_{(t−(d−1)Δt)j}, …, y_{(t−Δt)j}, y_{(t)j}, y_{(t+Δt)j}, …, y_{(t+(d−1)Δt)j}, 0, 0, 0)^T,

where y_{(t−(d−1)Δt)j} is the control displacement of the j-th motor at time t − (d−1)Δt; y_{(t)j} is its control displacement at time t; y_{(t+(d−1)Δt)j} is its control displacement at time t + (d−1)Δt.
In practical application, the first and second derivatives of H_j(t) and F_j(t) are calculated:

H′_j(t + kΔt) = Σ_{i=1}^{d} i a_{ji} (kΔt)^{i−1}, H″_j(t + kΔt) = Σ_{i=2}^{d} i(i−1) a_{ji} (kΔt)^{i−2},
F′_j(t + qΔt) = Σ_{i=1}^{d} i b_{ji} (qΔt)^{i−1}, F″_j(t + qΔt) = Σ_{i=2}^{d} i(i−1) b_{ji} (qΔt)^{i−2},

where H′_j(t + kΔt) is the first derivative of H_j(t), i.e. the velocity of the real control sequence of the j-th motor at time t + kΔt; H″_j(t + kΔt) is the second derivative of H_j(t), i.e. the acceleration of the real control sequence of the j-th motor at time t + kΔt; F′_j(t + qΔt) is the first derivative of F_j(t), i.e. the velocity of the real control sequence of the j-th motor at time t + qΔt; F″_j(t + qΔt) is the second derivative of F_j(t), i.e. the acceleration of the real control sequence of the j-th motor at time t + qΔt.
Then the following equation system is constructed:

H_j(t + kΔt) = y_{(t+kΔt)j}, k = −(d−1), …, −1, 0,
F_j(t + qΔt) = y_{(t+qΔt)j}, q = 1, …, d−1,
H_j(t) − F_j(t) = 0, H′_j(t) − F′_j(t) = 0, H″_j(t) − F″_j(t) = 0,

where y_{(t+kΔt)j} is the motor control displacement at the d times before time t of the j-th motor, and y_{(t+qΔt)j} is the motor control displacement at the d times after time t of the j-th motor.
Since the two polynomial functions must have the same displacement, velocity and acceleration at the connection time t, the above system contains 2d + 2 equations and 2d + 2 unknown coefficients.
To simplify the representation of the above equation system, collecting the polynomial coefficients and displacements gives the matrix form PA = U, where O_H and O_F are zero matrices (the blocks of P corresponding to the continuity conditions); A = (α_1, …, α_j, …, α_n) is the smoothing coefficient matrix of the n motor control sequences at time t; U = (U_1, …, U_j, …, U_n) is the displacement matrix composed of the n motor control sequences at time t, with U_j = (y_{(t−(d−1)Δt)j}, …, y_{(t−Δt)j}, y_{(t)j}, y_{(t+Δt)j}, …, y_{(t+(d−1)Δt)j}, 0, 0, 0)^T containing the displacements of the j-th motor at times t − (d−1)Δt to t + (d−1)Δt and the zero elements. When the coefficient matrix P is invertible, the smoothing coefficient matrix to be solved can be expressed as A = P^{−1}U, and the smoothing coefficient of the j-th motor control sequence at time t as α_j = P^{−1}U_j, 1 ≤ j ≤ n.
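A minimal NumPy sketch of this step for a single motor, under the reconstruction above (d-order polynomials on each side, 2d − 1 displacement equations plus three continuity conditions; all names are illustrative):

```python
import numpy as np

def smoothing_coefficients(y, d, dt):
    """Solve P @ alpha_j = U_j for one motor.

    y     : array of the 2d-1 displacements y_{t-(d-1)dt}, ..., y_{t+(d-1)dt}
    alpha : (a_0..a_d, b_0..b_d), coefficients of the past fit H_j and future fit F_j
    """
    m = d + 1                                   # coefficients per polynomial
    P = np.zeros((2 * d + 2, 2 * m))
    U = np.zeros(2 * d + 2)
    row = 0
    for k in range(-(d - 1), 1):                # H_j(t + k*dt) = y_(t+k*dt)j
        P[row, :m] = [(k * dt) ** i for i in range(m)]
        U[row] = y[k + d - 1]
        row += 1
    for q in range(1, d):                       # F_j(t + q*dt) = y_(t+q*dt)j
        P[row, m:] = [(q * dt) ** i for i in range(m)]
        U[row] = y[q + d - 1]
        row += 1
    for r in range(3):                          # H, H', H'' equal F, F', F'' at t;
        P[row, r] = 1.0                         # the order-r derivative at t is r! * a_r,
        P[row, m + r] = -1.0                    # so the condition reduces to a_r = b_r
        row += 1
    return np.linalg.solve(P, U)                # alpha_j = P^{-1} U_j
```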
5) Using the formula A = (α_1, …, α_j, …, α_n), 1 ≤ j ≤ n, solve the smoothing coefficient matrix of the n motor control sequences at time t, where A is the smoothing coefficient matrix formed by the n motors at each time and α_j is the smoothing coefficient vector of the j-th motor control sequence at time t. Specifically, the smoothing coefficients α_j calculated in step 4) are assembled into the smoothing coefficient matrix A.
6) Substitute the smoothing coefficient matrix A into the fitted polynomial formulas to calculate the fitted control displacements Ĥ_j(t + kΔt) of the d frames before time t and F̂_j(t + qΔt) of the d frames after time t of the j-th motor.
7) Substitute the control displacements of the d frames after time t of the n motors calculated in step 6) into the objective function

J(W_E, W_D, b_E, b_D) = min Σ_{q=0}^{d−1} ( ||F(t+qΔt) − F̂(t+qΔt)||² + α ||F′(t+qΔt) − F̂′(t+qΔt)||² + α ||F″(t+qΔt) − F̂″(t+qΔt)||² ),

and calculate the optimal parameters of the smooth constraint reverse mechanical model, where J(W_E, W_D, b_E, b_D) gives the optimal parameters of the smooth constraint reverse mechanical model; W_E is the matrix of first model parameters of the smooth constraint reverse mechanical model; W_D is the matrix of second model parameters; b_E is the matrix of third model parameters; b_D is the matrix of fourth model parameters; J() is the objective function; min is the minimization function; q is the index of the robot expression frame after time t, q ∈ [0, d−1]; F(t+qΔt) is the displacement vector of the real control sequences of the n motors of the robot at time t+qΔt; F′(t+qΔt) is the matrix of velocity vectors of the real control sequences of the n motors at time t+qΔt; F″(t+qΔt) is the matrix of acceleration vectors of the real control sequences of the n motors at time t+qΔt; t is the current time; F̂(t+qΔt) is the matrix of estimated displacement vectors of the n motors at time t+qΔt; F̂′(t+qΔt) is the matrix of estimated velocity vectors of the n motors at time t+qΔt; F̂″(t+qΔt) is the matrix of estimated acceleration vectors of the n motors at time t+qΔt; α is the weight of the velocity and acceleration smooth constraints, α ≥ 0; Σ is the summation function; y_{(t+qΔt)j} is the control displacement of the j-th motor at time t+qΔt; ŷ_{(t+qΔt)j} is the estimated control displacement of the j-th motor at time t+qΔt; ŷ′_{(t+qΔt)j} is the estimated velocity of the j-th motor at time t+qΔt; ŷ″_{(t+qΔt)j} is the estimated acceleration of the j-th motor at time t+qΔt; F̂_j(t+qΔt) is the estimated displacement of the j-th motor at time t+qΔt.
In practical application, the optimal parameters (W*_E, W*_D, b*_E, b*_D) of the fusion model can be solved from the above formula by gradient descent. The reverse mechanical model constructed with these optimal parameters can then map human expressions into the optimal control sequences of the robot's facial motors.
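A minimal sketch of such a smoothness-regularized training loss, with finite differences standing in for the polynomial derivatives (the weight name alpha follows the text; everything else is an illustrative assumption):

```python
import tensorflow as tf

def smooth_constrained_loss(y_true, y_pred, alpha=0.45, dt=1.0):
    """Displacement deviation plus alpha-weighted velocity/acceleration deviations.

    y_true, y_pred: (batch, d, n) real and estimated motor control sequences.
    """
    def vel(y):  # first difference along the time axis, a velocity proxy
        return (y[:, 1:, :] - y[:, :-1, :]) / dt

    def acc(y):  # second difference along the time axis, an acceleration proxy
        v = vel(y)
        return (v[:, 1:, :] - v[:, :-1, :]) / dt

    disp = tf.reduce_mean(tf.square(y_true - y_pred))
    velo = tf.reduce_mean(tf.square(vel(y_true) - vel(y_pred)))
    acce = tf.reduce_mean(tf.square(acc(y_true) - acc(y_pred)))
    return disp + alpha * (velo + acce)
```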
Using the above embodiment of the present invention, velocity and acceleration are incorporated into J(W_E, W_D, b_E, b_D) as weighted smooth constraints, so that a continuous and smooth motor control sequence is obtained. Applying the embodiment of the invention reflects the spatial and temporal coordination of the facial muscles, and further improves the spatiotemporal similarity of expression migration and the smoothness of motor motion.
S103: taking the performer's real-time facial features as the target, generating an optimal motor control sequence based on the smooth constraint reverse mechanical model, and then using the optimal motor control sequence to drive the robot's facial motors so that the robot presents an expression corresponding to the performer's face.
Taking the facial expression feature sequence of the performer in the k frames before time t, (X̃_{t−(k−1)Δt}, …, X̃_{t−Δt}, X̃_t), as the input of the model, the current optimal drive vector of the robot facial motors is obtained:

(Ŷ_t, Ŷ_{t+Δt}, …, Ŷ_{t+(d−1)Δt}) = f_D(f_E(X̃_{t−(k−1)Δt}, …, X̃_t; W*_E, b*_E), Y_{t−Δt}; W*_D, b*_D),

where (Ŷ_t, …, Ŷ_{t+(d−1)Δt}) is the optimal motor control vector output by the smooth constraint reverse mechanical model; f_E() is the L-layer LSTM encoding structure; f_D() is the L-layer LSTM decoding structure; L is the preset number of hidden layers; (X̃_{t−(k−1)Δt}, …, X̃_t) is the performer's facial feature sequence; W*_E is the optimal value of the first model parameters W_E of the smooth constraint reverse mechanical model; W*_D is the optimal value of the second model parameters W_D; b*_E is the optimal value of the third model parameters b_E; b*_D is the optimal value of the fourth model parameters b_D.
In practical application, the facial control vector Y composed of the 11 motors of the robot face is Y = (y_1, …, y_j, …, y_n) (n = 11), where y_j ∈ [0, 1] is the normalized control value of the j-th motor.
Similar to human muscles, driving these 11 motors not only enables the robot to present various expressions and fine, smooth emotional details such as blinking, smiling and frowning, but also lets it accompany emotional expression with posture actions that convey subjective intention, such as shaking and nodding the head.
In order to verify the beneficial effects of the embodiments of the present invention, the inventors performed the following experiments.

First, a robot animator arranged 60 motor control sequences containing different facial expressions and head poses; each sequence lasts 90 seconds and contains neutral-peak-neutral expression intensity changes and head pose changes. Facial features were then captured with a Kinect 2.0 camera at a frame rate of 30 frames per second. Next, for each time t (400 ≤ t ≤ 2400), the historical control sequence of the k frames before time t, the corresponding facial feature sequence (X_{t−(k−1)Δt}, …, X_t), and the motor control sequence of the subsequent d frames (Y_t, …, Y_{t+(d−1)Δt}) were combined into a sample set:

{((X_{t−(k−1)Δt}, …, X_t), (Y_t, …, Y_{t+(d−1)Δt}))}.

From the sample set, 100000 groups of samples were randomly selected for training the model parameters, and the remaining Q = 2000 groups of samples were used for testing the model.
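A minimal sketch of slicing one recorded sequence into such sliding-window samples (array names and shapes are illustrative assumptions):

```python
def build_samples(X_seq, Y_seq, k=5, d=5):
    """Slice aligned feature frames X_seq (T, m) and motor frames Y_seq (T, n)
    into ((k past features), (d future controls)) training pairs."""
    samples = []
    for t in range(k - 1, len(X_seq) - d + 1):
        x = X_seq[t - k + 1 : t + 1]   # X_{t-(k-1)dt} ... X_t
        y = Y_seq[t : t + d]           # Y_t ... Y_{t+(d-1)dt}
        samples.append((x, y))
    return samples
```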
In the experiment, the model was built and trained based on the LSTM module in TensorFlow; the relevant parameters are shown in Table 1 (the parameters for building and training the model; the table is reproduced as an image in the original document).
In the first aspect, after training is completed, to verify the validity of the proposed model, the formulas

e_{t+qΔt} = (1/(2000 n)) Σ_{r=1}^{2000} Σ_{j=1}^{n} |y^{r}_{(t+qΔt)j} − ŷ^{r}_{(t+qΔt)j}|,
e_j = (1/(2000 d)) Σ_{r=1}^{2000} Σ_{q=0}^{d−1} |y^{r}_{(t+qΔt)j} − ŷ^{r}_{(t+qΔt)j}|

are used to count, respectively, the control deviation e_{t+qΔt} of the sample groups r ∈ [1, 2000] at each time t + qΔt, q ∈ [0, d−1], and the control deviation e_j of the j-th motor, where y^{r}_{(t+qΔt)j} is the real value of the j-th motor at time t + qΔt of the r-th group of samples, and ŷ^{r}_{(t+qΔt)j} is the corresponding estimate.
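A minimal NumPy sketch of these two averages, under the reconstruction above:

```python
import numpy as np

def control_deviations(y_true, y_pred):
    """y_true, y_pred: (R, d, n) real and estimated control sequences
    for R test sample groups, d predicted frames and n motors."""
    err = np.abs(y_true - y_pred)
    dev_per_time = err.mean(axis=(0, 2))    # e_{t+q*dt}: averaged over samples and motors
    dev_per_motor = err.mean(axis=(0, 1))   # e_j: averaged over samples and times
    return dev_per_time, dev_per_motor
```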
Table 2 summarizes the average motor control deviations at the d (d = 5) times over the 2000 groups of samples and the average control deviation of each motor (the table is reproduced as an image in the original document). Fig. 7 is a graph of the motor control deviation results provided in the embodiment of the present invention; the data in Table 2 are plotted as the statistical chart shown in Fig. 7.
As shown in Table 2, the model of the embodiment of the present invention exhibits good multi-step prediction and generalization capability: the overall control deviation is less than 4.5%, and the prediction deviation at time t is less than 3.5%. As shown in Fig. 7, because of the pneumatic driving mode the robot's head shakes intermittently, so the control deviation of the head up-down motor is relatively large; still, its control deviation at time t does not exceed 5%, and at time t + 4Δt does not exceed 8%. Motors such as eyeball left-right, eyeball up-down and head tilt left-right have smaller control deviations because the facial features driving them are simpler.

In addition, Fig. 7 shows that as the multi-step prediction proceeds, the control deviation at each time rises, but the maximum motor control deviation at t + 4Δt still does not exceed 8%. The smooth constraint reverse mechanical model thus reflects well the internal relation between the robot hardware control system and the facial features it presents, and realizes the translation and inverse solution from facial feature sequence to motor control sequence more accurately.
In a second aspect, to verify the spatiotemporal similarity of expression imitation, 50 facial motion sequences of different performers (each 60 seconds long, containing neutral-peak-neutral expression intensity changes and head pose changes) were first recorded with Kinect Studio V2.0. Facial feature sequences X̃ were then extracted from the recordings, the performers' expression feature sequences were mapped to robot control sequences by the trained smooth constraint reverse mechanical model, and the model output at each time t (5 ≤ t ≤ 1800) was transmitted to the control system through serial-port communication to drive the motors to present the imitated dynamic expression. Synchronously, the expression imitated by the robot was captured in real time by the Kinect camera and the robot facial features were extracted. Finally, the formulas

S_I = (1/(20S)) Σ_{s} Σ_{t} Σ_{i=1}^{20} Sim(|I^{R}_{s,t,i} − I^{H}_{s,t,i}|, β_I),
S_T = (1/(20S)) Σ_{s} Σ_{t} Σ_{i=1}^{20} Sim(|V^{R}_{s,t,i} − V^{H}_{s,t,i}|, β_T)

were used to count, respectively, the spatial similarity S_I and the temporal similarity S_T between the first 20 features (the head poses about the 3 axes and the 17 facial action units) imitated by the robot in real time and the features performed by the performer, where I^{R}_{s,t,i} is the motion amplitude of the i-th facial feature at time t of the s-th sequence of the robot; I^{H}_{s,t,i} is the motion amplitude of the i-th facial feature at time t of the s-th sequence of the performer; V^{R}_{s,t,i} is the velocity of the i-th facial feature at time t of the s-th sequence of the robot; V^{H}_{s,t,i} is the velocity of the i-th facial feature at time t of the s-th sequence of the performer; S is the total number of frames of the robot's real-time expression imitation, with value 50 × 1976; Sim(x, β) is the fitting function, with Sim(x, β) = exp(−x²/β) (β > 0); β is a control parameter.
In practical application, the fitting function converts the amplitude or velocity deviation into a similarity between 0 and 1; the smaller β is, the stricter the similarity requirement, and conversely the more relaxed it is.
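A minimal sketch of the fitting function and a per-feature spatial-similarity average (a reconstruction; array names are illustrative):

```python
import numpy as np

def sim(x, beta):
    """Map a deviation x >= 0 to a similarity in (0, 1]; smaller beta is stricter."""
    return np.exp(-np.square(x) / beta)

def spatial_similarity(amp_robot, amp_human, beta_i=0.3):
    """amp_*: (S, 20) motion amplitudes of the first 20 facial features over all
    captured frames; returns the spatial similarity of each feature."""
    return sim(np.abs(amp_robot - amp_human), beta_i).mean(axis=0)
```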
Through 10-fold cross validation, β_I = 0.3 and β_T = 0.5 were chosen. With α = 0.45, the spatiotemporal similarity of the first 20 facial features imitated by the robot in real time is shown in Fig. 8, the spatiotemporal similarity result graph of expression migration provided by the embodiment of the present invention.
As shown in Fig. 8, the spatiotemporal similarity of every facial feature exceeds 80%; in particular, high similarity is kept for the features that convey expression details, such as cheek puffing, mouth-corner contraction, eye closing, jaw opening-closing amplitude and horizontal eyeball amplitude. This not only helps maintain the fidelity of expression imitation but also improves the sense of recognition of the robot's emotion in interaction.
In a third aspect, in order to evaluate the smoothness of motor motion, the formula

Smooth_j = 1 − (1/S) Σ_{t} G(y_j(t)), with G(y_j(t)) = 1 if |y_j(t) − y_j(t − Δt)| > T_S and G(y_j(t)) = 0 otherwise,

may be used to calculate the smoothness of motor motion, where Smooth_j is the motion smoothness of the j-th motor; T_S is the jump threshold, T_S = 10/256; and G(y_j(t)) indicates whether the motor coordinate y_j(t) at time t jumps relative to the previous frame.
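A minimal sketch of this jump-counting smoothness measure, under the reconstruction above:

```python
import numpy as np

def motion_smoothness(y, t_s=10 / 256):
    """y: (S,) control displacements of one motor over S frames.
    Smoothness = 1 - fraction of frame-to-frame jumps exceeding t_s."""
    jumps = np.abs(np.diff(y)) > t_s
    return 1.0 - jumps.mean()
```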
Fig. 9 is the result graph of the motion smoothness of the robot facial motors provided by the embodiment of the present invention; it compares the motor motion smoothness with and without the smooth constraint. As shown in Fig. 9, with the embodiment of the present invention the motor motion smoothness stays above 0.85, significantly better than the motor control model without the smooth constraint.
In addition, with the embodiment of the invention the smoothing effect is better for motors such as eyelid left-right, mouth opening-closing, eyebrow up-down and cheek lifting, which shows that the method captures and transfers well the dynamic expression details that accompany actions such as mouth-corner lifting and mouth opening.
In a fourth aspect, in order to evaluate the effect of the smoothness constraint added in the embodiment of the present invention, the inventors tested the indexes of spatial similarity, temporal similarity and motion smoothness for α = 0, 0.2, 0.4, 0.45, 0.5, 0.55, 0.6, 0.7, 0.8 and 1.0.
Fig. 10 is a graph illustrating the effect of the weight parameter on spatiotemporal similarity and motion smoothness according to an embodiment of the present invention. As shown in Fig. 10, when α = 0, J(W_E, W_D, b_E, b_D) takes only the inversely solved motor control deviation as the optimization target, so the spatiotemporal similarity of expression reproduction reaches its maximum; this shows that the smooth constraint reverse mechanical model with the facial feature sequence as input reflects the mechanical relationship between the control motors and the facial muscles they drive, and accurately realizes the inverse solution of the motor control vector.

However, as α increases, J(W_E, W_D, b_E, b_D) incorporates the velocity and acceleration control constraints, and the smoothness of the motors gradually improves, which shows that the introduced smooth constraint suppresses jumps of the motor control vector well. Comprehensively considering the spatiotemporal similarity of expression imitation and the smoothness of motor motion, the inventors found that α = 0.45 gives the best effect.
By applying the embodiment shown in Fig. 1 of the present invention, the performer's real-time facial features are taken as the target, an optimal motor control sequence is generated based on the smooth constraint reverse mechanical model, and the robot uses that sequence to transfer the performer's facial expression features; during expression transfer, the human expression sequence can be mapped directly into the robot facial motor control sequence, improving the spatiotemporal similarity of robot expression simulation and the smoothness of continuous motor motion.
Corresponding to the embodiment of the invention shown in fig. 1, the embodiment of the invention also provides a robot expression simulation device based on the smooth constraint reverse mechanical model.
Fig. 11 is a schematic structural diagram of the robot expression simulation apparatus based on the smooth constraint reverse mechanical model according to an embodiment of the present invention. As shown in Fig. 11, the apparatus includes: an extraction module 1101 configured to extract the robot facial feature vector; a construction module 1102 configured to construct the smooth constraint reverse mechanical model from the facial feature sequence to the motor control sequence; and a generating module 1103 configured to generate an optimal motor control sequence based on the smooth constraint reverse mechanical model with the performer's real-time facial features as the target, and to drive the robot facial motors so that the robot presents an expression corresponding to the performer's face.
In a specific implementation manner of the embodiment of the present invention, the extracting module 1101 is further configured to:
A1: acquiring the robot's facial expression data and head pose data with a Kinect camera, wherein the facial expression data includes feature point data of the parameterized face mesh based on the Candide-3 model and facial action unit data, and the head pose data includes the rotation angles about the three axes X, Y and Z of the head;
a2: converting feature point data based on the Candide-3 model from a Cartesian coordinate system to a Laplacian coordinate system by using Laplacian transformation;
A3: generating the robot facial feature vector from the rotation angles about the three head axes X, Y and Z, the facial action unit data, and the robot facial geometric features.
In a specific implementation manner of the embodiment of the present invention, the extracting module 1101 is further configured to: use the Candide-3 model after the eyeball feature points are added,

G = (V, D), V = (v_1, v_2, …, v_p), v_i = (x_i, y_i, z_i),

where G is the parameterized representation of the Candide-3 model after adding the eyeball feature points; V is the feature point position vector; D is the adjacency matrix composed of the p feature points; v_i is the position vector of the i-th feature point; p is the number of feature points of the Candide-3 model after adding the eyeball feature points; (x_i, y_i, z_i) is the three-dimensional vector value of the i-th feature point; i is the serial number of a feature point; j is the serial number of a feature point adjacent to it; e_ij is an element of the adjacency matrix; and to obtain the head pose data (R_pitch, R_yaw, R_roll) using the Kinect API, where R_pitch is the rotation angle about the X axis, R_yaw the rotation angle about the Y axis, and R_roll the rotation angle about the Z axis.
In a specific implementation manner of the embodiment of the present invention, the extracting module 1101 is further configured to: use the formula

ζ_i = L(v_i) = (1/(2ω_i)) Σ_{j∈N(i)} (cot α_ij + cot β_ij)(v_i − v_j)

to realize the conversion from the Cartesian coordinate system to the Laplacian coordinate system, where ζ_i is the geometric feature of feature point v_i; L(v_i) is the Laplacian coordinate of feature point v_i; ω_i is the sum of the areas of the triangles that have feature point v_i as a vertex; α_ij is the first angle adjacent to edge v_i v_j; β_ij is the second angle adjacent to edge v_i v_j; v_i is a feature point; v_j is the j-th feature point adjoining v_i; N(i) is the set of all feature points adjacent to v_i; L() is the Laplacian transform; | | takes the modulus of a vector; Σ is the summation function.
In a specific implementation manner of the embodiment of the present invention, the extracting module 1101 is further configured to: use the formula

X = (x_1, …, x_m) = (R_pitch, R_yaw, R_roll, AU_1, …, AU_17, ζ_1, …, ζ_1349)

to generate the robot facial feature vector, where X is the robot facial feature vector; x_i is the value of the i-th dimension of the facial feature vector; m is the dimension of the extracted facial feature vector; (R_pitch, R_yaw, R_roll) are the rotation angles of the robot about the three axes X, Y and Z; AU_j is the feature value of the j-th facial action unit; ζ_k is the Laplacian coordinate of feature point v_k.
In a specific implementation manner of the embodiment of the present invention, the construction module 1102 is further configured to:

B1: use the formula

(Y_t, Y_{t+Δt}, …, Y_{t+(d−1)Δt}) = Γ(X_{t−(k−1)Δt}, …, X_{t−Δt}, X_t)

to construct the reverse mechanical model from a facial feature sequence to a motor control sequence, where (Y_t, …, Y_{t+(d−1)Δt}) is the motor control sequence output by the reverse mechanical model; Δt is the frame interval at which the Kinect camera acquires the robot's facial expression; (X_{t−(k−1)Δt}, …, X_t) is the facial feature sequence of the k times before time t; Γ() is the reverse mechanical model; t is the current time; k is the number of expression frames before time t; d is the number of expression frames after time t; Y_{t+(d−2)Δt} is the motor control data at time t + (d−2)Δt; X_{t−(k−2)Δt} is the robot facial feature vector at time t − (k−2)Δt;

B2: model the smooth constraint reverse mechanical model from the facial feature sequence to the motor control sequence with a multilayer LSTM encoding-decoding structure, fit the motion trend parameters of the motor control sequence with a d-order polynomial, and construct the smooth constraint reverse mechanical model based on the deviations of displacement, velocity and acceleration.
In a specific implementation manner of the embodiment of the present invention, the building module 1102 is further configured to:
1) by means of the formula (I) and (II),
Figure GDA0001788135800000231
solving a smooth constrained inverse mechanical model Γ (), wherein,
Figure GDA0001788135800000232
coding structure of L layer;
Figure GDA0001788135800000233
a decoding structure for L layer; l is a preset hidden layer number;
Figure GDA0001788135800000234
the face feature sequence of k frames before the t moment of the robot is obtained;
Figure GDA0001788135800000235
a motor control sequence of d frames after the t moment of the robot; y ist-ΔtA motor control sequence at the t-delta t moment of the robot is obtained;
2) by means of the formulas,
H_j(t+kΔt) = Σ_{i=0}^{d} h_{ji}(kΔt)^i, k ∈ [-(d-1), 0], and F_j(t+qΔt) = Σ_{i=0}^{d} f_{ji}(qΔt)^i, q ∈ [0, d-1],
constructing the polynomial functions fitted at the d moments before and the d moments after time t of the j-th motor, wherein H_j(t+kΔt) is the polynomial function fitted at the d moments before time t of the j-th motor; h_{ji} is the i-th polynomial coefficient of the fitting function at the d moments before time t of the j-th motor; F_j(t+qΔt) is the polynomial function fitted at the d moments after time t of the j-th motor; and f_{ji} is the i-th polynomial coefficient of the fitting function at the d moments after time t of the j-th motor;
3) by means of the formula, α_j = P⁻¹U_j, calculating the smoothing coefficients of the j-th motor control sequence (see the sketch below), wherein α_j = (h_{j0}, …, h_{jd}, f_{j0}, …, f_{jd})^T is the smoothing coefficient vector to be solved of the j-th motor control sequence; h_{j0} is the 0-th polynomial coefficient of the fitting function at the d moments before time t of the j-th motor; h_{jd} is the d-th polynomial coefficient of the fitting function at the d moments before time t of the j-th motor; f_{j0} is the 0-th polynomial coefficient of the fitting function at the d moments after time t of the j-th motor; f_{jd} is the d-th polynomial coefficient of the fitting function at the d moments after time t of the j-th motor; P is the coefficient matrix; U_j is the vector formed by the displacements of the j-th motor control sequence at times t-(d-1)Δt to t+(d-1)Δt together with zero elements, and U_j = (y_(t-(d-1)Δt)j, …, y_(t-Δt)j, y_(t)j, y_(t+Δt)j, …, y_(t+(d-1)Δt)j, 0, 0, 0)^T; y_(t-(d-1)Δt)j is the control displacement of the j-th motor at time t-(d-1)Δt; y_(t)j is the control displacement of the j-th motor at time t; and y_(t+(d-1)Δt)j is the control displacement of the j-th motor at time t+(d-1)Δt;
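A minimal numerical sketch of this step, assuming the coefficient matrix P is already assembled (the text does not spell out how P's rows encode the fitting and continuity constraints, so P is taken as given here):

    import numpy as np

    def smoothing_coefficients(P, displacements):
        # Solve alpha_j = P^{-1} U_j for one motor.
        #   P             : (2d+2, 2d+2) coefficient matrix (assumed given)
        #   displacements : the 2d-1 control displacements of motor j,
        #                   from time t-(d-1)dt to t+(d-1)dt
        U = np.concatenate([np.asarray(displacements), np.zeros(3)])  # pad zero elements
        return np.linalg.solve(P, U)   # numerically preferable to forming P^{-1}

Stacking the per-motor solutions column by column then yields the matrix A = (α_1, …, α_n) of step 4).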
4) by means of the formula, A = (α_1, …, α_j, …, α_n), 1 ≤ j ≤ n, solving the smoothing coefficient matrix A of the n motor control sequences at time t, wherein A is the smoothing coefficient matrix formed by the n motors at each moment; and α_j is the smoothing coefficient vector of the j-th motor control sequence at time t;
5) substituting the smoothing coefficient matrix A into the fitted polynomial functions H_j( ) and F_j( ), calculating the control displacements (H_j(t-(d-1)Δt), …, H_j(t)) of the d frames before time t of the j-th motor and the control displacements (F_j(t), …, F_j(t+(d-1)Δt)) of the d frames after time t of the j-th motor;
6) substituting the control displacements of the d frames after time t of the n motors calculated in step 5) into the objective function,
J(W_E, W_D, b_E, b_D) = min Σ_{q=0}^{d-1} ( ||F(t+qΔt) − F̂(t+qΔt)||² + α||F′(t+qΔt) − F̂′(t+qΔt)||² + α||F″(t+qΔt) − F̂″(t+qΔt)||² ),
calculating the optimal parameters of the smooth-constrained inverse mechanical model, wherein J(W_E, W_D, b_E, b_D) gives the optimal parameters of the smooth-constrained inverse mechanical model; W_E is the matrix of first model parameters of the smooth-constrained inverse mechanical model; W_D is the matrix of second model parameters; b_E is the matrix of third model parameters; b_D is the matrix of fourth model parameters; J( ) is the objective function; min is the minimum evaluation function; q is the index of the expression frame of the robot after time t, with q ∈ [0, d-1]; F(t+qΔt) is the matrix of displacement vectors of the real control sequences of the n motors of the robot at time t+qΔt; F′(t+qΔt) is the matrix of velocity vectors of the real control sequences of the n motors of the robot at time t+qΔt; F″(t+qΔt) is the matrix of acceleration vectors of the real control sequences of the n motors of the robot at time t+qΔt; t is the current time; F̂(t+qΔt) is the matrix of estimated displacement vectors of the n motors of the robot at time t+qΔt; F̂′(t+qΔt) is the matrix of estimated velocity vectors of the n motors of the robot at time t+qΔt; F̂″(t+qΔt) is the matrix of estimated acceleration vectors of the n motors of the robot at time t+qΔt; α is the weight of the velocity and acceleration smoothness constraints, with α ≥ 0; and Σ is the summation function.
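In an implementation, the velocity and acceleration deviations can be approximated with first and second finite differences of the control sequences; the sketch below is one such rendering (folding the time step Δt into the difference operator is an assumption):

    import torch

    def smooth_constrained_loss(F_true, F_pred, alpha=0.5):
        # Displacement + velocity + acceleration deviation over d frames.
        #   F_true, F_pred : (batch, d, n_motors) real and estimated displacements
        #   alpha          : weight of the velocity/acceleration terms (>= 0)
        disp = ((F_true - F_pred) ** 2).sum()
        v_true, v_pred = torch.diff(F_true, dim=1), torch.diff(F_pred, dim=1)
        vel = ((v_true - v_pred) ** 2).sum()      # 1st difference ~ velocity
        a_true, a_pred = torch.diff(v_true, dim=1), torch.diff(v_pred, dim=1)
        acc = ((a_true - a_pred) ** 2).sum()      # 2nd difference ~ acceleration
        return disp + alpha * (vel + acc)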
In a specific implementation manner of the embodiment of the present invention, the generating module 1103 is further configured to: taking the facial expression feature sequence (X′_{t-(k-1)Δt}, …, X′_{t-Δt}, X′_t) of the performer for the k frames before time t as input, and by means of the formula,
Y*_t = Decoder^L_(W*_D, b*_D)(Encoder^L_(W*_E, b*_E)(X′_{t-(k-1)Δt}, …, X′_{t-Δt}, X′_t)),
the current optimal drive vector of the robot face motors can be obtained, wherein Y*_t is the optimal motor control vector output by the smooth-constrained inverse mechanical model; Encoder^L( ) is the L-layer LSTM encoding structure; Decoder^L( ) is the L-layer LSTM decoding structure; (X′_{t-(k-1)Δt}, …, X′_t) is the facial feature sequence of the performer; W*_E is the optimal value of the first model parameter W_E of the smooth-constrained inverse mechanical model; W*_D is the optimal value of the second model parameter W_D of the smooth-constrained inverse mechanical model; b*_E is the optimal value of the third model parameter b_E of the smooth-constrained inverse mechanical model; and b*_D is the optimal value of the fourth model parameter b_D of the smooth-constrained inverse mechanical model.
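For completeness, a hypothetical end-to-end driving step, reusing the InverseMechanicalModel sketch above; send_to_motors is a stand-in for whatever servo interface the robot exposes, which the text does not specify:

    import torch

    @torch.no_grad()
    def imitate(model, performer_features, y_prev):
        # Map the performer's last k feature frames to the next motor commands.
        model.eval()
        x = performer_features.unsqueeze(0)    # (1, k, feat_dim)
        y_seq = model(x, y_prev.unsqueeze(0))  # (1, d, motor_dim)
        return y_seq[0, 0]                     # optimal drive vector Y*_t for time t

    # drive = imitate(model, feats, last_cmd); send_to_motors(drive)  # hypothetical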
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (7)

1. A robot expression simulation method based on a smooth constraint inverse mechanical model is characterized by comprising the following steps:
A: extracting a robot facial feature vector;
B: constructing a smooth-constrained inverse mechanical model from the facial feature sequence to the motor control sequence, wherein step B comprises:
B1: by means of the formula,
(Y_t, Y_{t+Δt}, …, Y_{t+(d-2)Δt}, Y_{t+(d-1)Δt}) = Γ(X_{t-(k-1)Δt}, X_{t-(k-2)Δt}, …, X_{t-Δt}, X_t),
constructing an inverse mechanical model from the facial feature sequence to the motor control sequence, wherein (Y_t, Y_{t+Δt}, …, Y_{t+(d-1)Δt}) is the motor control sequence output by the inverse mechanical model; Δt is the frame interval at which the Kinect camera acquires the robot's facial expression; (X_{t-(k-1)Δt}, …, X_{t-Δt}, X_t) is the facial feature sequence of the robot at the k moments before time t; Γ( ) is the inverse mechanical model; t is the current time; k is the number of expression frames of the robot before time t; d is the number of expression frames of the robot after time t; Y_{t+(d-2)Δt} is the motor control data at time t+(d-2)Δt; and X_{t-(k-2)Δt} is the facial feature vector of the robot at time t-(k-2)Δt;
B2: modeling the inverse mechanical model from the facial feature sequence to the motor control sequence with a multilayer LSTM encoding-decoding structure, fitting the motion trend parameters of the motor control sequence with a d-th order polynomial, and constructing the smooth-constrained inverse mechanical model from the deviations of displacement, velocity and acceleration, wherein step B2 comprises:
1) by means of the formula,
(Y_t, Y_{t+Δt}, …, Y_{t+(d-1)Δt}) = Decoder^L(Encoder^L(X_{t-(k-1)Δt}, …, X_{t-Δt}, X_t), Y_{t-Δt}),
solving the smooth-constrained inverse mechanical model Γ( ), wherein Encoder^L( ) is the L-layer encoding structure; Decoder^L( ) is the L-layer decoding structure; L is the preset number of hidden layers; (X_{t-(k-1)Δt}, …, X_t) is the facial feature sequence of the k frames before time t of the robot; (Y_t, …, Y_{t+(d-1)Δt}) is the motor control sequence of the d frames after time t of the robot; and Y_{t-Δt} is the motor control sequence of the robot at time t-Δt;
2) by means of the formulas,
H_j(t+kΔt) = Σ_{i=0}^{d} h_{ji}(kΔt)^i, k ∈ [-(d-1), 0], and F_j(t+qΔt) = Σ_{i=0}^{d} f_{ji}(qΔt)^i, q ∈ [0, d-1],
constructing the polynomial functions fitted at the d moments before and the d moments after time t of the j-th motor, wherein H_j(t+kΔt) is the polynomial function fitted at the d moments before time t of the j-th motor; h_{ji} is the i-th polynomial coefficient of the fitting function at the d moments before time t of the j-th motor; F_j(t+qΔt) is the polynomial function fitted at the d moments after time t of the j-th motor; and f_{ji} is the i-th polynomial coefficient of the fitting function at the d moments after time t of the j-th motor;
3) by means of the formula, α_j = P⁻¹U_j, calculating the smoothing coefficients of the j-th motor control sequence, wherein α_j = (h_{j0}, …, h_{jd}, f_{j0}, …, f_{jd})^T is the smoothing coefficient vector to be solved of the j-th motor control sequence; h_{j0} is the 0-th polynomial coefficient of the fitting function at the d moments before time t of the j-th motor; h_{jd} is the d-th polynomial coefficient of the fitting function at the d moments before time t of the j-th motor; f_{j0} is the 0-th polynomial coefficient of the fitting function at the d moments after time t of the j-th motor; f_{jd} is the d-th polynomial coefficient of the fitting function at the d moments after time t of the j-th motor; P is the coefficient matrix; U_j is the vector formed by the displacements of the j-th motor control sequence at times t-(d-1)Δt to t+(d-1)Δt together with zero elements, and U_j = (y_(t-(d-1)Δt)j, …, y_(t-Δt)j, y_(t)j, y_(t+Δt)j, …, y_(t+(d-1)Δt)j, 0, 0, 0)^T; y_(t-(d-1)Δt)j is the control displacement of the j-th motor at time t-(d-1)Δt; y_(t)j is the control displacement of the j-th motor at time t; and y_(t+(d-1)Δt)j is the control displacement of the j-th motor at time t+(d-1)Δt;
4) by means of the formula, A = (α_1, …, α_j, …, α_n), 1 ≤ j ≤ n, solving the smoothing coefficient matrix A of the n motor control sequences at time t, wherein A is the smoothing coefficient matrix formed by the n motors at each moment; and α_j is the smoothing coefficient vector of the j-th motor control sequence at time t;
5) substituting the smoothing coefficient matrix A into the fitted polynomial functions H_j( ) and F_j( ), calculating the control displacements (H_j(t-(d-1)Δt), …, H_j(t)) of the d frames before time t of the j-th motor and the control displacements (F_j(t), …, F_j(t+(d-1)Δt)) of the d frames after time t of the j-th motor;
6) substituting the control displacements of the d frames after time t of the n motors calculated in step 5) into the objective function,
J(W_E, W_D, b_E, b_D) = min Σ_{q=0}^{d-1} ( ||F(t+qΔt) − F̂(t+qΔt)||² + α||F′(t+qΔt) − F̂′(t+qΔt)||² + α||F″(t+qΔt) − F̂″(t+qΔt)||² ),
calculating the optimal parameters of the smooth-constrained inverse mechanical model, wherein J(W_E, W_D, b_E, b_D) gives the optimal parameters of the smooth-constrained inverse mechanical model; W_E is the matrix of first model parameters of the smooth-constrained inverse mechanical model; W_D is the matrix of second model parameters; b_E is the matrix of third model parameters; b_D is the matrix of fourth model parameters; J( ) is the objective function; min is the minimum evaluation function; q is the index of the expression frame of the robot after time t, with q ∈ [0, d-1]; F(t+qΔt) is the matrix of displacement vectors of the real control sequences of the n motors of the robot at time t+qΔt; F′(t+qΔt) is the matrix of velocity vectors of the real control sequences of the n motors of the robot at time t+qΔt; F″(t+qΔt) is the matrix of acceleration vectors of the real control sequences of the n motors of the robot at time t+qΔt; t is the current time; F̂(t+qΔt) is the matrix of estimated displacement vectors of the n motors of the robot at time t+qΔt; F̂′(t+qΔt) is the matrix of estimated velocity vectors of the n motors of the robot at time t+qΔt; F̂″(t+qΔt) is the matrix of estimated acceleration vectors of the n motors of the robot at time t+qΔt; α is the weight of the velocity and acceleration smoothness constraints, with α ≥ 0; and Σ is the summation function;
c: and generating an optimal motor control sequence based on a smooth constraint inverse mechanical model by taking the real-time facial features of the performer as a target, and then driving a robot facial motor by using the optimal motor control sequence to enable the robot to present an expression corresponding to the face of the performer.
2. The method for simulating the robot expression based on the smooth constrained inverse mechanical model according to claim 1, wherein the step A comprises:
A1: acquiring facial expression data and head pose data of the robot with the Kinect camera, wherein the facial expression data of the robot comprises feature point data and facial action unit data of the parameterized face mesh based on the Candide-3 model, and the head pose data of the robot comprises the rotation angles of the head about the three axes X, Y and Z;
A2: converting the feature point data based on the Candide-3 model from the Cartesian coordinate system to the Laplacian coordinate system by the Laplacian transform;
a3: and generating a robot face feature vector according to the rotation angles of the three axial directions of the head XYZ, the face action unit data and the robot face geometric feature.
3. The method for simulating the robot expression based on the smooth constrained inverse mechanical model according to claim 2, wherein the step A1 comprises:
by means of the formula of the Candide-3 model after the eyeball feature points are added,
g = (V, D), V = (v_1, v_2, …, v_p)^T, v_i = (x_i, y_i, z_i), D = (e_ij)_{p×p},
facial expression data of the robot is acquired, wherein g is the parameterized representation of the Candide-3 model with the eyeball feature points added; V is the feature point position vector; D is the adjacency matrix formed by the p feature points; v_i is the position vector of the i-th feature point; p is the number of feature points of the Candide-3 model after the eyeball feature points are added; (x_i, y_i, z_i) is the three-dimensional vector value of the i-th feature point; i is the index of a feature point; j is the index of a feature point adjacent to it; e_ij is an element of the adjacency matrix, with e_ij = 1 if feature points v_i and v_j are adjacent and e_ij = 0 otherwise; and
head pose data (R_pitch, R_yaw, R_roll) is obtained using the Kinect API, wherein R_pitch is the rotation angle about the X axis, R_yaw is the rotation angle about the Y axis, and R_roll is the rotation angle about the Z axis.
4. The method for simulating the robot expression based on the smooth constrained inverse mechanical model according to claim 2, wherein the step A2 comprises:
by means of the formula,
ζ_i = ||δ_i|| = ||L(v_i)|| = ||(1/(2ω_i)) Σ_{j∈N(i)} (cot α_ij + cot β_ij)(v_i − v_j)||,
the conversion from the Cartesian coordinate system to the Laplacian coordinate system is realized, wherein ζ_i is the geometric feature of feature point v_i; δ_i is the Laplacian coordinate of feature point v_i; ω_i is the sum of the areas of the triangles having feature point v_i as a vertex; α_ij is the first angle adjacent to edge v_iv_j; β_ij is the second angle adjacent to edge v_iv_j; v_i is a feature point; v_j is the j-th feature point adjacent to v_i; N(i) is the set of all feature points adjacent to v_i; L( ) is the Laplacian transform function; | | is the modulus function; and Σ is the summation function.
5. The method for simulating the robot expression based on the smooth constrained inverse mechanical model according to claim 2, wherein the step A3 comprises:
by means of the formula,
X = (x_1, x_2, …, x_m),
generating the robot facial feature vector, wherein X is the robot facial feature vector, assembled from the head rotation angles, the facial action unit values and the Laplacian geometric features; x_i is the value of the i-th dimension of the facial feature vector; m is the dimension of the extracted facial feature vector; (R_pitch, R_yaw, R_roll) are the rotation angles of the robot head about the three axes X, Y and Z; AU_j is the feature value of the j-th facial action unit; and δ_k is the Laplacian coordinate of feature point v_k.
6. The method for simulating the robot expression based on the smooth constrained inverse mechanical model according to claim 1, wherein the step C comprises:
taking the facial expression feature sequence (X′_{t-(k-1)Δt}, …, X′_{t-Δt}, X′_t) of the performer for the k frames before time t as input, and by means of the formula,
Y*_t = Decoder^L_(W*_D, b*_D)(Encoder^L_(W*_E, b*_E)(X′_{t-(k-1)Δt}, …, X′_{t-Δt}, X′_t)),
calculating the optimal control sequence of the robot, wherein Y*_t is the optimal motor control sequence output by the smooth-constrained inverse mechanical model; Encoder^L( ) is the L-layer LSTM encoding structure; Decoder^L( ) is the L-layer LSTM decoding structure; (X′_{t-(k-1)Δt}, …, X′_t) is the facial feature sequence of the performer at time t; W*_E is the optimal value of the first model parameter W_E of the smooth-constrained inverse mechanical model; W*_D is the optimal value of the second model parameter W_D of the smooth-constrained inverse mechanical model; b*_E is the optimal value of the third model parameter b_E of the smooth-constrained inverse mechanical model; and b*_D is the optimal value of the fourth model parameter b_D of the smooth-constrained inverse mechanical model.
7. A robot expression simulation device based on a smooth constrained inverse mechanical model, the device comprising:
the extraction module is used for extracting the robot facial feature vector;
the construction module is used for constructing a smooth-constrained inverse mechanical model from the facial feature sequence to the motor control sequence, wherein the model building process comprises:
B1: by means of the formula,
(Y_t, Y_{t+Δt}, …, Y_{t+(d-2)Δt}, Y_{t+(d-1)Δt}) = Γ(X_{t-(k-1)Δt}, X_{t-(k-2)Δt}, …, X_{t-Δt}, X_t),
constructing an inverse mechanical model from the facial feature sequence to the motor control sequence, wherein (Y_t, Y_{t+Δt}, …, Y_{t+(d-1)Δt}) is the motor control sequence output by the inverse mechanical model; Δt is the frame interval at which the Kinect camera acquires the robot's facial expression; (X_{t-(k-1)Δt}, …, X_{t-Δt}, X_t) is the facial feature sequence of the robot at the k moments before time t; Γ( ) is the inverse mechanical model; t is the current time; k is the number of expression frames of the robot before time t; d is the number of expression frames of the robot after time t; Y_{t+(d-2)Δt} is the motor control data at time t+(d-2)Δt; and X_{t-(k-2)Δt} is the facial feature vector of the robot at time t-(k-2)Δt;
B2: modeling the inverse mechanical model from the facial feature sequence to the motor control sequence with a multilayer LSTM encoding-decoding structure, fitting the motion trend parameters of the motor control sequence with a d-th order polynomial, and constructing the smooth-constrained inverse mechanical model from the deviations of displacement, velocity and acceleration, wherein step B2 comprises:
1) by means of the formula,
(Y_t, Y_{t+Δt}, …, Y_{t+(d-1)Δt}) = Decoder^L(Encoder^L(X_{t-(k-1)Δt}, …, X_{t-Δt}, X_t), Y_{t-Δt}),
solving the smooth-constrained inverse mechanical model Γ( ), wherein Encoder^L( ) is the L-layer encoding structure; Decoder^L( ) is the L-layer decoding structure; L is the preset number of hidden layers; (X_{t-(k-1)Δt}, …, X_t) is the facial feature sequence of the k frames before time t of the robot; (Y_t, …, Y_{t+(d-1)Δt}) is the motor control sequence of the d frames after time t of the robot; and Y_{t-Δt} is the motor control sequence of the robot at time t-Δt;
2) by means of the formulas,
H_j(t+kΔt) = Σ_{i=0}^{d} h_{ji}(kΔt)^i, k ∈ [-(d-1), 0], and F_j(t+qΔt) = Σ_{i=0}^{d} f_{ji}(qΔt)^i, q ∈ [0, d-1],
constructing the polynomial functions fitted at the d moments before and the d moments after time t of the j-th motor, wherein H_j(t+kΔt) is the polynomial function fitted at the d moments before time t of the j-th motor; h_{ji} is the i-th polynomial coefficient of the fitting function at the d moments before time t of the j-th motor; F_j(t+qΔt) is the polynomial function fitted at the d moments after time t of the j-th motor; and f_{ji} is the i-th polynomial coefficient of the fitting function at the d moments after time t of the j-th motor;
3) by means of the formula, α_j = P⁻¹U_j, calculating the smoothing coefficients of the j-th motor control sequence, wherein α_j = (h_{j0}, …, h_{jd}, f_{j0}, …, f_{jd})^T is the smoothing coefficient vector to be solved of the j-th motor control sequence; h_{j0} is the 0-th polynomial coefficient of the fitting function at the d moments before time t of the j-th motor; h_{jd} is the d-th polynomial coefficient of the fitting function at the d moments before time t of the j-th motor; f_{j0} is the 0-th polynomial coefficient of the fitting function at the d moments after time t of the j-th motor; f_{jd} is the d-th polynomial coefficient of the fitting function at the d moments after time t of the j-th motor; P is the coefficient matrix; U_j is the vector formed by the displacements of the j-th motor control sequence at times t-(d-1)Δt to t+(d-1)Δt together with zero elements, and U_j = (y_(t-(d-1)Δt)j, …, y_(t-Δt)j, y_(t)j, y_(t+Δt)j, …, y_(t+(d-1)Δt)j, 0, 0, 0)^T; y_(t-(d-1)Δt)j is the control displacement of the j-th motor at time t-(d-1)Δt; y_(t)j is the control displacement of the j-th motor at time t; and y_(t+(d-1)Δt)j is the control displacement of the j-th motor at time t+(d-1)Δt;
4) by means of the formula, A = (α_1, …, α_j, …, α_n), 1 ≤ j ≤ n, solving the smoothing coefficient matrix A of the n motor control sequences at time t, wherein A is the smoothing coefficient matrix formed by the n motors at each moment; and α_j is the smoothing coefficient vector of the j-th motor control sequence at time t;
5) substituting the smoothing coefficient matrix A into the fitted polynomial functions H_j( ) and F_j( ), calculating the control displacements (H_j(t-(d-1)Δt), …, H_j(t)) of the d frames before time t of the j-th motor and the control displacements (F_j(t), …, F_j(t+(d-1)Δt)) of the d frames after time t of the j-th motor;
6) substituting the control displacements of the d frames after time t of the n motors calculated in step 5) into the objective function,
J(W_E, W_D, b_E, b_D) = min Σ_{q=0}^{d-1} ( ||F(t+qΔt) − F̂(t+qΔt)||² + α||F′(t+qΔt) − F̂′(t+qΔt)||² + α||F″(t+qΔt) − F̂″(t+qΔt)||² ),
calculating the optimal parameters of the smooth-constrained inverse mechanical model, wherein J(W_E, W_D, b_E, b_D) gives the optimal parameters of the smooth-constrained inverse mechanical model; W_E is the matrix of first model parameters of the smooth-constrained inverse mechanical model; W_D is the matrix of second model parameters; b_E is the matrix of third model parameters; b_D is the matrix of fourth model parameters; J( ) is the objective function; min is the minimum evaluation function; q is the index of the expression frame of the robot after time t, with q ∈ [0, d-1]; F(t+qΔt) is the matrix of displacement vectors of the real control sequences of the n motors of the robot at time t+qΔt; F′(t+qΔt) is the matrix of velocity vectors of the real control sequences of the n motors of the robot at time t+qΔt; F″(t+qΔt) is the matrix of acceleration vectors of the real control sequences of the n motors of the robot at time t+qΔt; t is the current time; F̂(t+qΔt) is the matrix of estimated displacement vectors of the n motors of the robot at time t+qΔt; F̂′(t+qΔt) is the matrix of estimated velocity vectors of the n motors of the robot at time t+qΔt; F̂″(t+qΔt) is the matrix of estimated acceleration vectors of the n motors of the robot at time t+qΔt; α is the weight of the velocity and acceleration smoothness constraints, with α ≥ 0; and Σ is the summation function;
and the generating module is used for generating an optimal motor control sequence with the smooth-constrained inverse mechanical model, taking the performer's real-time facial features as the target, and then driving the robot facial motors with the optimal motor control sequence, so that the robot presents an expression corresponding to the performer's face.
CN201810593985.1A 2018-06-11 2018-06-11 Robot expression simulation method and device based on smooth constraint reverse mechanical model Active CN108908353B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810593985.1A CN108908353B (en) 2018-06-11 2018-06-11 Robot expression simulation method and device based on smooth constraint reverse mechanical model


Publications (2)

Publication Number Publication Date
CN108908353A CN108908353A (en) 2018-11-30
CN108908353B true CN108908353B (en) 2021-08-13

Family

ID=64410836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810593985.1A Active CN108908353B (en) 2018-06-11 2018-06-11 Robot expression simulation method and device based on smooth constraint reverse mechanical model

Country Status (1)

Country Link
CN (1) CN108908353B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2581486A (en) * 2019-02-15 2020-08-26 Hanson Robotics Ltd Animatronic robot calibration
CN112454390B (en) * 2020-11-27 2022-05-17 中国科学技术大学 Humanoid robot facial expression simulation method based on deep reinforcement learning
CN116485964B (en) * 2023-06-21 2023-10-13 海马云(天津)信息技术有限公司 Expression processing method, device and storage medium of digital virtual object


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003265869A (en) * 2002-03-12 2003-09-24 Univ Waseda Eye-eyebrow structure of robot
EP1988493A1 (en) * 2007-04-30 2008-11-05 National Taiwan University of Science and Technology Robotic system and method for controlling the same
CN106926258A (en) * 2015-12-31 2017-07-07 深圳光启合众科技有限公司 The control method and device of robot emotion
CN106078752A (en) * 2016-06-27 2016-11-09 西安电子科技大学 Method is imitated in a kind of anthropomorphic robot human body behavior based on Kinect
CN106919899A (en) * 2017-01-18 2017-07-04 北京光年无限科技有限公司 The method and system for imitating human face expression output based on intelligent robot
CN107392109A (en) * 2017-06-27 2017-11-24 南京邮电大学 A kind of neonatal pain expression recognition method based on deep neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on micro-expression recognition based on integral projection and LSTM; Li Jing et al.; Computer Era; 2017-04-30; pp. 13-16, 20 *
Research on facial expression recognition and expression reproduction methods for humanoid robots; Huang Zhong; China Doctoral Dissertations Full-text Database; 2017-12-15; pp. 93-115 *
Huang Zhong. Research on facial expression recognition and expression reproduction methods for humanoid robots. China Doctoral Dissertations Full-text Database. 2017 *

Also Published As

Publication number Publication date
CN108908353A (en) 2018-11-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant