CN108564642A - Markerless performance capture system based on UE engine - Google Patents

Markerless performance capture system based on UE engine

Info

Publication number
CN108564642A
CN108564642A
Authority
CN
China
Prior art keywords
expression
weight parameter
performer
skeleton
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810217894.8A
Other languages
Chinese (zh)
Inventor
车武军
吴泽烨
谷卓
徐波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201810217894.8A
Publication of CN108564642A
Legal status: Pending (Current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to the field of image processing and proposes a markerless performance capture system based on the UE engine. It aims to solve the problem that, in methods which capture a performer's motion and expression simultaneously to generate character animation, marker points feel intrusive to the performer and interfere with the performance. The system includes: a facial performance capture module, configured to acquire facial image data of the performer and to compute the weight parameters of the performer's facial expression from the facial image data; a motion performance capture module, configured to acquire skeleton image data of the performer and to determine the performer's body pose parameters from the skeleton image data; and an animation generation module, configured to generate the motion and expression of the character's 3D model in a UE graphics program according to the facial expression weight parameters and the body pose parameters. The present invention captures the performer's motion and expression, and from the motion and expression data endows a virtual character with realistic, plausible motion and lively expressions.

Description

Markerless performance capture system based on UE engine
Technical field
The present invention relates to computer graphics, computer vision, and virtual reality, and more particularly to a markerless performance capture system based on the UE engine.
Background technology
Performance capture technology captures a performer's motion and facial expression and is widely used in film, animation, games, and other fields. By endowing a virtual character with realistic, plausible motion and lively expressions, performance capture can give the user an excellent viewing experience. Motion capture technology includes optical capture and inertial capture. Optical capture films the performer with optical cameras and computes the performer's joints by analysis, e.g. Kinect; inertial capture obtains the motion state of the joints from sensors worn by the performer and analyzes the performer's current pose, e.g. Noitom, OptiTrack.
At present, one existing performance capture scheme places markers on the performer's whole body and face, captures body motion and facial expression with optical cameras, and in post-production replaces the filmed performer's image with a virtual character model according to the captured marker points. But the marker points feel intrusive to the performer, making natural performance more difficult. Alternatively, expression capture and motion capture are carried out separately and then composited, which increases the difficulty of combining the two in post-production and limits the user's ability to edit other characters.
Summary of the invention
In order to solve the above problems in the prior art, namely that in methods which capture the performer's motion and expression simultaneously to generate character animation, marker points feel intrusive to the performer and make natural performance more difficult, or that carrying out expression capture and motion capture separately and then compositing them increases the difficulty of combining the two in post-production and limits the user's ability to edit other characters, the present invention adopts the following technical solution:
This application provides a markerless performance capture system based on the UE engine (Unreal Engine). The system includes: a facial performance capture module, configured to acquire facial image data of a performer and to compute the weight parameters of the performer's facial expression from the facial image data, denoted the first weight parameters; a motion performance capture module, configured to acquire skeleton image data of the performer and to determine the performer's body pose parameters from the skeleton image data; and an animation generation module, configured to generate, according to the first weight parameters and the body pose parameters, the motion and expression of the 3D model of the character corresponding to the performer in a UE graphics program.
In some embodiments, the facial performance capture module includes a facial image acquisition unit and an expression computation unit. The facial image acquisition unit is configured to acquire facial image data of the performer's frontal face; the expression computation unit is configured to track feature points in the facial image data and to compute the weight parameters of the performer's facial expression.
In some embodiments, the motion performance capture module includes a skeleton image acquisition unit and a body pose determination unit. The skeleton image acquisition unit includes multiple Kinect sensors and is configured to acquire multiple frames of skeleton image data of the performer from different angles; each frame of skeleton image data includes the joint coordinates of each joint forming the human skeleton and the tracking attribute of each joint, and a confidence value is assigned to each joint of each frame of skeleton image data according to the tracking attribute. The body pose determination unit is configured to determine the performer's body pose parameters from the joint coordinates in the performer's skeleton image data and the changes of those joint coordinates.
In some embodiments, the body pose determination unit is further configured to: convert the skeleton image data acquired by each Kinect sensor into a common coordinate system using preset coordinate transformation matrices, generating reference skeleton data; and synthesize the performer's average skeleton data from each set of reference skeleton data using a weighted-average algorithm.
In some embodiments, "synthesizing the performer's average skeleton data from each set of reference skeleton data using a weighted-average algorithm" includes: taking the confidence of each joint of the reference skeleton data as that joint's weight factor; computing the average of each joint coordinate from the joint coordinates of each set of reference skeleton data and the joints' weight factors; and determining the performer's average skeleton data from the averages of all the joint coordinates forming the human skeleton.
In some embodiments, the animation generation module includes a skeleton motion control unit and an expression control unit. The skeleton motion control unit is configured to generate the motion animation of the character's 3D model in the UE graphics program according to the body pose parameters determined by the motion performance capture module; the expression control unit is configured to generate the expression animation of the character's 3D model in the UE graphics program according to the facial expression weight parameters determined by the facial performance capture module.
In some embodiments, the skeleton motion control unit is further configured to: convert the average skeleton data into the character model data of the character in the UE4 graphics program using preset mapping relations; assign the character model data to the character's 3D model through the UE4 engine using quaternion blending; compute the change of each bone from the initial skeleton to the current skeleton; and apply each change to the parent joint of the corresponding bone, thereby determining the motion animation of the character's 3D model.
In some embodiments, the expression control unit is further configured to: match the first weight parameters to the basic expressions in a preset character expression library to determine the combination of basic expressions corresponding to the facial expression; and determine the expression animation of the character's 3D model corresponding to the facial expression using a preset morph-target function and the correspondence of the basic expressions in the character expression library.
In some embodiments, "matching the first weight parameters to the basic expressions in the preset character expression library to determine the combination of basic expressions corresponding to the facial expression" includes: computing the character expression weight parameters of the basic expressions in the character expression library using a preset expression weight computation program, denoted the second weight parameters; mapping the first weight parameters to the second weight parameters and determining, according to the mapping result, the second weight parameters corresponding to the facial expression; and determining, from the correspondence between the second weight parameters and the basic expressions in the character expression library, the combination of basic expressions in the character expression library corresponding to the facial expression.
In some embodiments, "mapping the first weight parameters to the second weight parameters and determining, according to the mapping result, the second weight parameters corresponding to the facial expression" includes: comparing the number of first weight parameters in the UE graphics program with the number of basic expressions in the character expression library. If the numbers are equal, the second weight parameters whose indices match the first weight parameters are selected as the second weight parameters corresponding to the facial expression. If the number of first weight parameters in the UE graphics program is smaller than the number of basic expressions in the character expression library, an equal number of basic expressions is selected from the character's basic expression library as an expression subset according to the number of first weight parameters; the character expression weight parameters of the basic expressions in this subset are computed and denoted the new second weight parameters, and the new second weight parameters whose indices match the first weight parameters are selected as the second weight parameters corresponding to the facial expression. Otherwise, the second weight parameters with the smallest difference from the first weight parameters are selected as the second weight parameters corresponding to the facial expression.
In the markerless performance capture system based on the UE engine provided by this application, the facial performance capture module captures the performer's facial expression, the motion performance capture module captures the performer's body motion, and the animation generation module generates the motion animation and expression animation of the character's 3D model in the UE graphics program from the performer's facial expression and body motion. The present invention can capture the performer's motion and expression data simultaneously and render them in real time in the UE engine in the form of character animation, and the user can define a custom character model. It solves the problem that, in methods which capture the performer's motion and expression simultaneously to generate character animation, marker points feel intrusive to the performer and interfere with the performance of the animated character.
Description of the drawings
Fig. 1 is an exemplary system architecture diagram to which the present application can be applied;
Fig. 2 is an implementation flowchart of the markerless performance capture system based on the UE engine according to the present application;
Fig. 3 is a schematic diagram of the helmet-mounted network camera used in the present application for capturing facial expression;
Fig. 4a and Fig. 4b are capture renderings of the motion performance and the expression performance, respectively.
Detailed description of the embodiments
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. Those skilled in the art will appreciate that these embodiments are merely used to explain the technical principles of the present invention and are not intended to limit the scope of protection of the invention.
It should be noted that, provided there is no conflict, the embodiments of the present application and the features in those embodiments may be combined with one another. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture of an embodiment to which the markerless performance capture system based on the UE engine of the present application can be applied.
As shown in Fig. 1, the system includes: a facial performance capture module, configured to acquire facial image data of the performer and to compute the weight parameters of the performer's facial expression from the facial image data, denoted the first weight parameters; a motion performance capture module, configured to acquire skeleton image data of the performer and to determine the performer's body pose parameters from the skeleton image data; and an animation generation module, configured to generate, according to the first weight parameters and the body pose parameters, the motion and expression of the 3D model of the character corresponding to the performer in a UE graphics program.
With continued reference to Fig. 2, an implementation diagram of the system of this embodiment is shown. In this embodiment, the facial performance capture module and the motion performance capture module respectively obtain information on the performer's facial expression and body pose, and send the performance-related facial expression information and body pose information to the animation generation module; the animation generation module generates the motion animation and expression animation of the character's 3D model in the UE graphics program from the facial expression information and body pose information. The input to the facial performance capture module and the motion performance capture module can be information entered by the user in real time; the user can prepare character data in advance and generate the character's motion animation in real time from the real-time input.
In this embodiment, the facial performance capture module includes a facial image acquisition unit and an expression computation unit. The facial image acquisition unit is configured to acquire facial image data of the performer's frontal face; the expression computation unit is configured to track and analyze feature points in the facial image data and to compute the weight parameters of the performer's facial expression.
The facial image acquisition unit can be a video or image acquisition device, for example a helmet-mounted network camera. As shown in Fig. 3, the main structure of the helmet-mounted network camera is as follows: an adjustable bracket is mounted on the front of the helmet, with a network camera at the end of the bracket; a power supply is mounted on the back of the helmet and connected to the camera by a data cable. During facial image capture, the helmet-mounted network camera films the target user's frontal face in real time. The captured images or video stream are transmitted over a wired or wireless network to the expression computation unit on the PC side for computation of the expression parameters.
The expression computation unit is configured to track and analyze feature points in the facial image data and to compute the weight parameters of the performer's facial expression. An expression parameter computation program is preset in the expression computation unit; it tracks the feature points in the acquired facial image data of the performer and computes the performer's facial expression weight parameters.
As an example, the performer's weight parameters can be computed as follows. A Kinect sensor can be connected to the PC; FaceShift automatically detects the Kinect sensor and connects to it, and the facial expression depth data captured by the Kinect sensor is transmitted to FaceShift in real time. FaceShift compares the facial expression depth data obtained from the Kinect sensor with the user's basic expression models and computes 51 weight parameters of the current expression, denoted {wi, i = 1, 2, ..., 51}.
Specifically, taking a blendshape expression model composed of n basic expressions as an example, each basic expression is represented by a three-dimensional mesh face model with p vertices; each vertex has three components x, y, z, i.e. the spatial coordinates of each vertex are (x, y, z). The vertex coordinates of each basic expression are unrolled into a long vector in an arbitrary order, but the unrolling order must be the same for the vertex coordinates of every basic expression; the order can be (xxx...yyy...zzz...) or (xyzxyz...xyz), etc. This yields n vectors bk of length 3p, k = 1, 2, ..., n. Let b0 denote the neutral expression; bk - b0 is then the difference between the k-th basic expression bk and the neutral expression b0, and the current expression can be expressed as f = b0 + Σk wk(bk - b0), where wk takes an arbitrary value in the interval [0, 1]. Accordingly, the 51 basic expression models can be expressed as Fi = bi - b0 (i = 1, ..., 51), and the above formula simplifies to F = Σi wi Fi, where F = f - b0.
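To make the blendshape combination concrete, the following sketch computes the current expression vector from the neutral expression, the basic expression vectors, and the weight parameters. It is a minimal illustration with hypothetical names; the patent does not provide code:

    #include <vector>

    // Computes f = b0 + sum_k w[k] * (b[k] - b0) for a blendshape model.
    // b0:    neutral expression, vertices unrolled to a vector of length 3p.
    // basis: n basic expressions b_k, each of length 3p in the same unrolling order.
    // w:     n weights, each in [0, 1].
    std::vector<float> BlendExpression(const std::vector<float>& b0,
                                       const std::vector<std::vector<float>>& basis,
                                       const std::vector<float>& w) {
        std::vector<float> f = b0;                      // start from the neutral face
        for (size_t k = 0; k < basis.size(); ++k) {     // accumulate weighted deltas
            for (size_t j = 0; j < f.size(); ++j) {
                f[j] += w[k] * (basis[k][j] - b0[j]);   // F_k = b_k - b0
            }
        }
        return f;
    }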
In this embodiment, the motion performance capture module includes a skeleton image acquisition unit and a body pose determination unit. The skeleton image acquisition unit includes multiple Kinect sensors and is configured to acquire multiple frames of skeleton image data of the performer from different angles; each frame of skeleton image data includes the joint coordinates of each joint forming the human skeleton and the tracking attribute of each joint, and a confidence value is assigned to each joint of each frame of skeleton image data according to the tracking attribute. The body pose determination unit is configured to determine the performer's body pose parameters from the joint coordinates in the performer's skeleton image data and the changes of those joint coordinates.
Multiple Kinect sensors are installed at different positions in the data acquisition area where the performer's skeleton motion data is collected, so as to capture the performer's motion from different angles. The skeleton image data of the performer collected by the Kinect sensors includes the joint coordinates of each joint forming the human skeleton and the tracking attribute of each joint. As an example, each frame of data acquired by a Kinect sensor contains one skeleton and the tracking attribute of each joint; the skeleton can be expressed as {vij}, where j is the joint index and vij is the coordinate of the j-th joint in the coordinate system of the i-th Kinect sensor. The tracking attribute of each joint is one of tracked, inferred, or not tracked; these three states are assigned successively decreasing confidence values, denoted {wij}, where wij is the confidence of the j-th joint of the skeleton in the coordinate system of the i-th Kinect sensor. The skeleton image acquisition unit sends the skeleton image data over the network to the body pose determination unit, which computes the performer's body pose parameters.
In this embodiment, the body pose determination unit is further configured to: convert the skeleton image data acquired by each Kinect sensor into a common coordinate system using preset coordinate transformation matrices, generating reference skeleton data; and synthesize the performer's average skeleton data from each set of reference skeleton data using a weighted-average algorithm. Here, the coordinate system conversion transforms the data acquired by each Kinect sensor into the same reference coordinate system. First, the coordinate system of one designated Kinect sensor is taken as the reference coordinate system; then, the joints of the human skeleton captured by each of the remaining Kinect sensors serve as matching points between that sensor's local coordinate system and the reference coordinate system; finally, the transformation matrix from each Kinect sensor's coordinate system to the reference coordinate system is determined so that the sum of the distances between the matching points after transformation is minimized. Using these transformation matrices, the skeleton image data acquired by each Kinect sensor is converted into the reference coordinate system, generating the reference skeleton data.
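The transformation that minimizes the summed distance between matched joints is a classical rigid-alignment problem. The following is a minimal sketch of one standard solution (the Kabsch algorithm via SVD, using the Eigen library); the patent does not specify the solver, so this stands in as an assumed implementation:

    #include <Eigen/Dense>
    #include <vector>

    // Estimates rotation R and translation t mapping local joints onto the
    // reference joints, minimizing sum_i ||R * local[i] + t - ref[i]||^2.
    void EstimateRigidTransform(const std::vector<Eigen::Vector3d>& local,
                                const std::vector<Eigen::Vector3d>& ref,
                                Eigen::Matrix3d& R, Eigen::Vector3d& t) {
        const double n = static_cast<double>(local.size());
        Eigen::Vector3d cl = Eigen::Vector3d::Zero(), cr = Eigen::Vector3d::Zero();
        for (size_t i = 0; i < local.size(); ++i) { cl += local[i]; cr += ref[i]; }
        cl /= n; cr /= n;                            // centroids of both joint sets

        Eigen::Matrix3d H = Eigen::Matrix3d::Zero(); // cross-covariance matrix
        for (size_t i = 0; i < local.size(); ++i)
            H += (local[i] - cl) * (ref[i] - cr).transpose();

        Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
        R = svd.matrixV() * svd.matrixU().transpose();
        if (R.determinant() < 0) {                   // guard against a reflection
            Eigen::Matrix3d V = svd.matrixV();
            V.col(2) *= -1.0;
            R = V * svd.matrixU().transpose();
        }
        t = cr - R * cl;                             // aligns the centroids
    }

The resulting (R, t) for each Kinect sensor plays the role of the preset coordinate transformation matrix applied to that sensor's skeleton image data.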
In this embodiment, synthesizing the performer's average skeleton data from each set of reference skeleton data using a weighted-average algorithm includes: taking the confidence of each joint of the reference skeleton data as that joint's weight factor; computing the average of each joint coordinate from the joint coordinates of each set of reference skeleton data and the joints' weight factors; and determining the performer's average skeleton data from the averages of all the joint coordinates forming the skeleton. Here, computing the performer's average skeleton data means computing the average of each joint coordinate forming the human skeleton. The average of any joint coordinate can be computed as a weighted average of that joint's coordinates in the reference coordinate system, with the confidence of the joint coordinate as the weight factor. As an example, one frame of human skeleton data transformed into the reference coordinate system can be written {vij, wij}, where j is the joint index, i is the Kinect sensor index, vij is the coordinate of the j-th joint of the skeleton captured by the i-th Kinect sensor, and wij is the confidence of that joint. Taking the confidence as the weight, a weighted average is computed over the joint coordinates of the multiple Kinect skeletons for the same skeleton, yielding one average skeleton.
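A minimal sketch of the confidence-weighted joint averaging described above (the types and names are illustrative, not from the patent):

    #include <array>
    #include <vector>

    struct JointSample {
        std::array<double, 3> v; // joint coordinate v_ij in the reference frame
        double w;                // tracking confidence w_ij, used as the weight
    };

    // Weighted average of one joint across all Kinect sensors:
    // v_avg = sum_i(w_i * v_i) / sum_i(w_i).
    std::array<double, 3> AverageJoint(const std::vector<JointSample>& samples) {
        std::array<double, 3> acc = {0.0, 0.0, 0.0};
        double wsum = 0.0;
        for (const auto& s : samples) {
            for (int c = 0; c < 3; ++c) acc[c] += s.w * s.v[c];
            wsum += s.w;
        }
        if (wsum > 0.0)
            for (int c = 0; c < 3; ++c) acc[c] /= wsum;
        return acc; // one joint of the average skeleton
    }

Repeating this for every joint yields the average skeleton; joints reported as not tracked contribute little because of their low confidence.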
In this embodiment, the animation generation module includes a skeleton motion control unit and an expression control unit. The skeleton motion control unit is configured to generate the motion animation of the character's 3D model in the UE graphics program according to the body pose parameters determined by the motion performance capture module; the expression control unit is configured to generate the expression animation of the 3D model of the character corresponding to the performer in the UE graphics program according to the facial expression weight parameters determined by the facial performance capture module. Fig. 4a is a motion rendering generated from the motion performance described above; Fig. 4b is an expression animation generated from the facial performance described above.
The skeleton motion control unit generates, in the UE graphics program, the motion animation of the 3D model of the character corresponding to the performer according to the body pose parameters determined by the motion performance capture module. Specifically: using preset mapping relations, the average skeleton data is converted into the character model data of the character in the UE4 graphics program; the character model data is assigned to the character's 3D model through the UE4 engine using quaternion blending; the change of each bone from the initial skeleton to the current skeleton is computed; and each change is applied to the parent joint of the corresponding bone, determining the motion animation of the character's 3D model.
A skeleton mapping can be maintained for the 3D model in the UE graphics program to convert the average skeleton data of the human skeleton motion from the Kinect sensors into the form required by the 3D model. The skeleton mapping puts the skeleton joints of the Kinect sensor in correspondence with the skeleton joints of the 3D model, matching them one by one according to the similarity between the 3D model's skeleton structure and the skeleton structure in the Kinect sensor; joints that are extra or missing in the 3D model are left unmapped. The mapping can be matched automatically by joint name or bound manually. In the UE graphics program, the 3D model's skeleton consists of a series of joints and their connections, and each joint has a unique name, so the two skeletons can be mapped automatically by comparing the joint names of the Kinect sensor skeleton with the joint names of the 3D model skeleton in the UE graphics program; parts that cannot be mapped automatically can be matched manually. The matched result is attached to the 3D model to be used, presenting the skeleton motion as a motion animation. Attaching the matched result means assigning the transformed 3D model skeleton data to the 3D model in use. The assignment uses quaternion blending: the change of every bone from the initial skeleton to the current skeleton is computed (expressed as a quaternion), and each change is then applied to the parent joint of the corresponding bone. Joints absent from the mapping follow the change of position and orientation of their parent joint in the skeleton animation.
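A minimal sketch of the per-bone change computation under these assumptions (Eigen quaternions; the names are illustrative and are not the UE4 API):

    #include <Eigen/Geometry>
    #include <vector>

    // Rotation change of each joint from the initial (bind) skeleton to the
    // current skeleton, expressed as a quaternion; each change is later
    // applied to the parent joint of the corresponding bone.
    std::vector<Eigen::Quaterniond> BoneDeltas(
            const std::vector<Eigen::Quaterniond>& initial,
            const std::vector<Eigen::Quaterniond>& current) {
        std::vector<Eigen::Quaterniond> deltas(initial.size());
        for (size_t j = 0; j < initial.size(); ++j) {
            // delta * initial = current  =>  delta = current * initial^-1
            deltas[j] = (current[j] * initial[j].inverse()).normalized();
        }
        return deltas;
    }

Quaternions are used here because, unlike Euler angles, they blend smoothly (e.g. by spherical linear interpolation) and avoid gimbal lock when the changes are accumulated along the joint hierarchy.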
In this embodiment, the expression control unit is further configured to: match the first weight parameters to the basic expressions in a preset character expression library to determine the combination of basic expressions corresponding to the facial expression; and determine the expression animation of the 3D model of the character corresponding to the facial expression using a preset morph-target function and the correspondence of the basic expressions in the character expression library.
In this embodiment, "matching the first weight parameters to the basic expressions in the preset character expression library to determine the combination of basic expressions corresponding to the facial expression" includes: computing the character expression weight parameters of the basic expressions in the character expression library using a preset expression weight computation program, denoted the second weight parameters; mapping the first weight parameters to the second weight parameters and determining, according to the mapping result, the second weight parameters corresponding to the facial expression; and determining, from the correspondence between the second weight parameters and the basic expressions in the character expression library, the combination of basic expressions in the character expression library corresponding to the facial expression.
In this embodiment, "mapping the first weight parameters to the character expression weight parameters and determining, according to the mapping result, the second weight parameters corresponding to the facial expression" includes: comparing the number of first weight parameters in the UE graphics program with the number of basic expressions in the character expression library. If the numbers are equal, i.e. the basic expressions corresponding to all the facial image data acquired in the UE graphics program are set consistently with the basic expressions in the character expression library, the second weight parameters whose indices match the first weight parameters are selected as the second weight parameters corresponding to the facial expression. If the number of first weight parameters in the UE graphics program is smaller than the number of basic expressions in the character expression library, an equal number of basic expressions is selected from the character's basic expression library as an expression subset according to the number of first weight parameters, i.e. the basic expressions corresponding to all the facial image data acquired in the UE graphics program are set consistently with the basic expressions in this expression subset; the character expression weight parameters of the basic expressions in the expression subset are computed and denoted the new second weight parameters, and the new second weight parameters whose indices match the first weight parameters are selected as the second weight parameters corresponding to the facial expression. Otherwise, the second weight parameters with the smallest difference from the first weight parameters are selected as the second weight parameters corresponding to the facial expression.
As an example, suppose the character expression library contains N basic expressions, converted into the character's weight parameters and denoted the second weight parameters {vi, i = 1, 2, ..., N}. Suppose the UE graphics program can receive M basic expressions corresponding to all the facial image data, converted into the performer's weight parameters and denoted the first weight parameters {wi, i = 1, 2, ..., M}; M is preferably 51. If the character expression library is fully consistent with the basic expressions corresponding to all the facial image data, then N = M and the character's expression weights are vi = wi, i = 1, 2, ..., M. If the character expression library contains fewer basic expression types, then for the i-th basic expression in the character expression library, the weight parameter wj of the closest basic expression j is assigned to vi, i.e. vi = wj. If the character expression library contains more basic expression types, a subset of M basic expressions of the character's basic expression library is selected and put in one-to-one correspondence with the basic expressions corresponding to all the facial image data; the weight parameters in the subset are set to the corresponding first weight parameters, and the weight parameters of the remaining expressions are set to 0. The second weight parameters corresponding to the facial expression are thus determined from the correspondence between the first weight parameters of the basic expressions corresponding to all the facial image data in the UE graphics program and the second weight parameters of the basic expressions in the character expression library. The UE engine in the UE graphics program computes the character's final expression weight parameters by calling the weight-parameter conversion function; the final weight parameters are input into the morph-target setting function, which controls the deformation of the character's facial vertices or feature points so that the character makes the corresponding expression, presenting the expression animation.
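A minimal sketch of the three mapping cases (equal counts, fewer character expressions, more character expressions). The closest-expression and subset correspondences are assumed to be precomputed, e.g. by expression-name matching; all names are illustrative:

    #include <vector>

    // Maps M performer weights w (first weight parameters) onto N character
    // weights v (second weight parameters).
    // closest: for each character expression, the index of its closest
    //          performer expression (used when N < M).
    // subset:  for each performer expression, the index of the matched
    //          character expression (used when N > M).
    std::vector<float> MapExpressionWeights(const std::vector<float>& w,
                                            int N,
                                            const std::vector<int>& closest,
                                            const std::vector<int>& subset) {
        const int M = static_cast<int>(w.size());
        std::vector<float> v(N, 0.0f);
        if (N == M) {
            for (int i = 0; i < M; ++i) v[i] = w[i];         // identical libraries
        } else if (N < M) {
            for (int i = 0; i < N; ++i) v[i] = w[closest[i]];
        } else {
            for (int i = 0; i < M; ++i) v[subset[i]] = w[i]; // the rest stay 0
        }
        return v;
    }

The resulting vector v is the final expression weight vector passed to the morph-target setting function, which deforms the character's facial vertices or feature points accordingly.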
Heretofore, the technical solutions of the present invention have been described with reference to the preferred embodiments shown in the accompanying drawings; however, those skilled in the art will readily appreciate that the scope of protection of the present invention is clearly not limited to these specific embodiments. Without departing from the principles of the present invention, those skilled in the art may make equivalent changes or substitutions to the relevant technical features, and the technical solutions after such changes or substitutions will fall within the scope of protection of the present invention.

Claims (10)

1. A markerless performance capture system based on a UE engine, characterized in that the system comprises:
a facial performance capture module, configured to acquire facial image data of a performer and to compute weight parameters of the performer's facial expression from the facial image data, denoted first weight parameters;
a motion performance capture module, configured to acquire skeleton image data of the performer and to determine the performer's body pose parameters from the skeleton image data;
an animation generation module, configured to generate, according to the first weight parameters and the body pose parameters, the motion and expression of the 3D model of the character corresponding to the performer in a UE graphics program.
2. The markerless performance capture system based on a UE engine according to claim 1, characterized in that the facial performance capture module comprises a facial image acquisition unit and an expression computation unit,
the facial image acquisition unit is configured to acquire facial image data of the performer's frontal face;
the expression computation unit is configured to track feature points in the facial image data and to compute the weight parameters of the performer's facial expression.
3. The markerless performance capture system based on a UE engine according to claim 1, characterized in that the motion performance capture module comprises a skeleton image acquisition unit and a body pose determination unit;
the skeleton image acquisition unit comprises multiple Kinect sensors and is configured to acquire multiple frames of skeleton image data of the performer from different angles, each frame of said skeleton image data including the joint coordinates of each joint forming the human skeleton and the tracking attribute of each said joint, a confidence value being assigned to each joint of each frame of skeleton image data according to the tracking attribute;
the body pose determination unit is configured to determine the performer's body pose parameters from the joint coordinates in the performer's skeleton image data and the changes of those joint coordinates.
4. The markerless performance capture system based on a UE engine according to claim 3, characterized in that the body pose determination unit is further configured to:
convert the skeleton image data acquired by each Kinect sensor into a common coordinate system using preset coordinate transformation matrices, generating reference skeleton data;
synthesize the performer's average skeleton data from each set of reference skeleton data using a weighted-average algorithm.
5. The markerless performance capture system based on a UE engine according to claim 4, characterized in that "synthesizing the performer's average skeleton data from each set of reference skeleton data using a weighted-average algorithm" comprises:
taking the confidence of each joint of the reference skeleton data as that joint's weight factor;
computing the average of each joint coordinate from the joint coordinates of each set of reference skeleton data and the joints' weight factors;
determining the performer's average skeleton data from the averages of all the joint coordinates forming the human skeleton.
6. The markerless performance capture system based on a UE engine according to claim 1, characterized in that the animation generation module comprises a skeleton motion control unit and an expression control unit;
the skeleton motion control unit is configured to generate the motion animation of the character's 3D model in the UE graphics program according to the body pose parameters determined by the motion performance capture module;
the expression control unit is configured to generate the expression animation of the character's 3D model in the UE graphics program according to the facial expression weight parameters determined by the facial performance capture module.
7. The markerless performance capture system based on a UE engine according to claim 6, characterized in that the skeleton motion control unit is further configured to:
convert the average skeleton data into the character model data of said character in the UE4 graphics program using preset mapping relations;
assign the character model data to the character's 3D model through the UE4 engine using quaternion blending;
compute the change of each bone from the initial skeleton to the current skeleton;
apply each said change to the parent joint of the corresponding bone, determining the motion animation of the character's 3D model.
8. The markerless performance capture system based on a UE engine according to claim 6, characterized in that the expression control unit is further configured to:
match the first weight parameters to the basic expressions in a preset character expression library, determining the combination of basic expressions corresponding to the facial expression;
determine the expression animation of the character's 3D model corresponding to the facial expression using a preset morph-target function and the correspondence of the basic expressions in the character expression library.
9. The markerless performance capture system based on a UE engine according to claim 8, characterized in that "matching the first weight parameters to the basic expressions in the preset character expression library, determining the combination of basic expressions corresponding to the facial expression" comprises:
computing the character expression weight parameters of the basic expressions in the character expression library using a preset expression weight computation program, denoted second weight parameters;
mapping the first weight parameters to the second weight parameters and determining, according to the mapping result, the second weight parameters corresponding to the facial expression;
determining, from the correspondence between the second weight parameters and the basic expressions in the character expression library, the combination of basic expressions in the character expression library corresponding to the facial expression.
10. The markerless performance capture system based on a UE engine according to claim 9, characterized in that "mapping the first weight parameters to the second weight parameters and determining, according to the mapping result, the second weight parameters corresponding to the facial expression" comprises:
comparing the number of first weight parameters in the UE graphics program with the number of basic expressions in the character expression library;
if the numbers are equal, selecting the second weight parameters whose indices match the first weight parameters as the second weight parameters corresponding to the facial expression;
if the number of first weight parameters in the UE graphics program is smaller than the number of basic expressions in the character expression library, selecting, according to the number of first weight parameters, an equal number of basic expressions from the character's basic expression library as an expression subset, computing the character expression weight parameters of the basic expressions in the expression subset, denoting them the new second weight parameters, and selecting the new second weight parameters whose indices match the first weight parameters as the second weight parameters corresponding to the facial expression;
otherwise, selecting the second weight parameters with the smallest difference from the first weight parameters as the second weight parameters corresponding to the facial expression.
CN201810217894.8A 2018-03-16 2018-03-16 Markerless performance capture system based on UE engine Pending CN108564642A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810217894.8A CN108564642A (en) 2018-03-16 2018-03-16 Markerless performance capture system based on UE engine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810217894.8A CN108564642A (en) 2018-03-16 2018-03-16 Markerless performance capture system based on UE engine

Publications (1)

Publication Number Publication Date
CN108564642A true CN108564642A (en) 2018-09-21

Family

ID=63531818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810217894.8A Pending CN108564642A (en) 2018-03-16 2018-03-16 Markerless performance capture system based on UE engine

Country Status (1)

Country Link
CN (1) CN108564642A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104978764A (en) * 2014-04-10 2015-10-14 华为技术有限公司 Three-dimensional face mesh model processing method and three-dimensional face mesh model processing equipment
US20170039750A1 (en) * 2015-03-27 2017-02-09 Intel Corporation Avatar facial expression and/or speech driven animations
CN105654537A (en) * 2015-12-30 2016-06-08 中国科学院自动化研究所 Expression cloning method and device capable of realizing real-time interaction with virtual character
WO2017115937A1 (en) * 2015-12-30 2017-07-06 단국대학교 산학협력단 Device and method synthesizing facial expression by using weighted value interpolation map
US20170256098A1 (en) * 2016-03-02 2017-09-07 Adobe Systems Incorporated Three Dimensional Facial Expression Generation
CN106228119A (en) * 2016-07-13 2016-12-14 天远三维(天津)科技有限公司 A kind of expression catches and Automatic Generation of Computer Animation system and method
CN106778563A (en) * 2016-12-02 2017-05-31 江苏大学 A kind of quick any attitude facial expression recognizing method based on the coherent feature in space
CN106373142A (en) * 2016-12-07 2017-02-01 西安蒜泥电子科技有限责任公司 Virtual character on-site interaction performance system and method
CN107277599A (en) * 2017-05-31 2017-10-20 珠海金山网络游戏科技有限公司 A kind of live broadcasting method of virtual reality, device and system
CN107563295A (en) * 2017-08-03 2018-01-09 中国科学院自动化研究所 Comprehensive human body method for tracing and processing equipment based on more Kinect
CN107577451A (en) * 2017-08-03 2018-01-12 中国科学院自动化研究所 More Kinect human skeletons coordinate transformation methods and processing equipment, readable storage medium storing program for executing

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020063009A1 (en) * 2018-09-25 2020-04-02 Oppo广东移动通信有限公司 Image processing method and apparatus, storage medium, and electronic device
CN109859297A (en) * 2019-03-07 2019-06-07 灵然创智(天津)动画科技发展有限公司 One kind is unmarked to put facial capture device and method
CN110213521A (en) * 2019-05-22 2019-09-06 创易汇(北京)科技有限公司 A kind of virtual instant communicating method
CN110189404A (en) * 2019-05-31 2019-08-30 重庆大学 Virtual facial modeling method based on real human face image
CN110189404B (en) * 2019-05-31 2023-04-07 重庆大学 Virtual face modeling method based on real face image
CN110517337A (en) * 2019-08-29 2019-11-29 成都数字天空科技有限公司 Cartoon role expression generation method, animation method and electronic equipment
CN110570498A (en) * 2019-08-30 2019-12-13 常熟理工学院 Movie & TV animation trail tracking capture system
CN111488861A (en) * 2020-05-13 2020-08-04 吉林建筑大学 Ski athlete gesture recognition system based on multi-feature value fusion
CN111399662B (en) * 2020-06-04 2020-09-29 之江实验室 Human-robot interaction simulation device and method based on high-reality virtual avatar
CN111399662A (en) * 2020-06-04 2020-07-10 之江实验室 Human-robot interaction simulation device and method based on high-reality virtual avatar
CN111968207A (en) * 2020-09-25 2020-11-20 魔珐(上海)信息科技有限公司 Animation generation method, device, system and storage medium
CN111968207B (en) * 2020-09-25 2021-10-29 魔珐(上海)信息科技有限公司 Animation generation method, device, system and storage medium
US11893670B2 (en) 2020-09-25 2024-02-06 Mofa (Shanghai) Information Technology Co., Ltd. Animation generation method, apparatus and system, and storage medium
CN112308910A (en) * 2020-10-10 2021-02-02 达闼机器人有限公司 Data generation method and device and storage medium
CN112308910B (en) * 2020-10-10 2024-04-05 达闼机器人股份有限公司 Data generation method, device and storage medium
CN113421286A (en) * 2021-07-12 2021-09-21 北京未来天远科技开发有限公司 Motion capture system and method
CN113421286B (en) * 2021-07-12 2024-01-02 北京未来天远科技开发有限公司 Motion capturing system and method
CN113781611A (en) * 2021-08-25 2021-12-10 北京壳木软件有限责任公司 Animation production method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108564642A (en) Markerless performance capture system based on UE engine
CN107349594B (en) A kind of action evaluation method of virtual Dance System
CN111460872B (en) Image processing method and device, image equipment and storage medium
JP5244951B2 (en) Apparatus and system for image processing based on 3D spatial dimensions
Zhao A survey on virtual reality
CN108564643A (en) Performance based on UE engines captures system
CN106600709A (en) Decoration information model-based VR virtual decoration method
CN108564641B (en) Expression capturing method and device based on UE engine
CN108572731A (en) Dynamic based on more Kinect and UE4 catches Data Representation method and device
US20130170715A1 (en) Garment modeling simulation system and process
CN100557639C (en) Three-dimensional virtual human body movement generation method based on key frame and space-time restriction
CN109829976A (en) One kind performing method and its system based on holographic technique in real time
CN105243375B (en) A kind of motion characteristic extracting method and device
CN109472795A (en) A kind of image edit method and device
CN112950769A (en) Three-dimensional human body reconstruction method, device, equipment and storage medium
CN108320330A (en) Real-time three-dimensional model reconstruction method and system based on deep video stream
CN109523615B (en) Data processing method and device for virtual animation character actions
Wu et al. 3D film animation image acquisition and feature processing based on the latest virtual reconstruction technology
US20170193677A1 (en) Apparatus and method for reconstructing experience items
JPH10240908A (en) Video composing method
Qinping A survey on virtual reality
CN116681854A (en) Virtual city generation method and device based on target detection and building reconstruction
CN115914660A (en) Method for controlling actions and facial expressions of digital people in meta universe and live broadcast
CN110853131A (en) Virtual video data generation method for behavior recognition
CN116485953A (en) Data processing method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180921