CN107154069B - Data processing method and system based on virtual roles - Google Patents

Data processing method and system based on virtual roles

Info

Publication number
CN107154069B
Authority
CN
China
Prior art keywords
virtual
character
facial
data
expression
Prior art date
Legal status
Active
Application number
CN201710331551.XA
Other languages
Chinese (zh)
Other versions
CN107154069A (en)
Inventor
郑辰
Current Assignee
Shanghai Wecomics Network Technology Co ltd
Original Assignee
Shanghai Wecomics Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Wecomics Network Technology Co ltd filed Critical Shanghai Wecomics Network Technology Co ltd
Priority to CN201710331551.XA
Publication of CN107154069A
Application granted
Publication of CN107154069B
Status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a data processing method and system based on virtual characters. The data processing method comprises the following steps: S100, acquiring facial expression image information of a performer; S200, converting the facial expression image information into character expression data; S300, processing the character expression data according to a data interpolation algorithm to obtain virtual expression data of the virtual character. According to the invention, the facial movements of a real face can be transferred to any virtual character selected by the user for demonstration, and the performer's current facial expression changes are reflected by the virtual character in real time, making the virtual character livelier and more vivid, adding interest and improving the user experience; in addition, when recognition fails, the character's movements are kept stably within an interval, so that the virtual character's demonstration remains natural and lifelike.

Description

Data processing method and system based on virtual roles
Technical Field
The invention relates to the field of image recognition and the field of computer animation, in particular to a data processing method and a data processing system based on virtual roles.
Background
Nowadays, video playback on audio-visual communication equipment is very common, and the communication objects in videos are usually the users' real images. With the continuous development of face acquisition and modeling technologies, these technologies are applied in many important fields such as animation, film and games, and especially in entertainment games, virtual live broadcasting and 3D animation, which require human-computer interaction. The animation of a virtual character comprises body animation and expression animation; to make a virtual character lifelike, body animation alone cannot meet users' requirements, and vivid expression animation is an important factor in improving the user experience. For example, a live broadcast of a real person can be replaced by a live broadcast of a cartoon character, an animal or a celebrity, and two strangers can each select different virtual avatars for a video chat, and so on.
To achieve the above, the facial expressions and movements of a real person in the real world are needed to control the facial expressions and movements of a virtual character in the virtual world. In the prior art, although the collected facial movements of a real face can be transferred to a virtual character for demonstration, they cannot be demonstrated on a special virtual character chosen according to the user's needs, for example a virtual character such as a giraffe whose facial features differ greatly from those of a human face. In addition, the prior art cannot solve the problems of inaccurate recognition, wrong recognition or failed recognition caused by environmental factors such as light interference and poor network conditions, so that expression data cannot be acquired normally and control of the virtual character is unstable.
Disclosure of Invention
The invention provides a data processing method and a data processing system based on virtual characters, which aim, on the one hand, to transfer the real-time facial movements of a human face to any virtual character selected by the user for smooth demonstration; and, on the other hand, to keep the character's movements stably controlled within an interval when recognition fails, so that the virtual character's demonstration remains lifelike and natural.
The technical scheme provided by the invention is as follows:
the invention provides a data processing method based on virtual roles, which comprises the following steps: s100, acquiring facial expression image information of a performer; s200, converting the facial expression image information into role expression data; s300, processing the character expression data according to a data interpolation algorithm to obtain virtual expression data of the virtual character.
According to the invention, the facial movements of a real face can be transferred to any virtual character selected by the user for demonstration, and the performer's current facial expression changes are reflected by the virtual character in real time, making the virtual character livelier and more vivid, adding interest and improving the user experience; moreover, when recognition fails, the character's movements are kept stably within an interval, so that the virtual character's demonstration remains lifelike and natural.
Further, the step S300 is followed by the step of: s310, judging whether the virtual role of the performer is initialized or not; if yes, go to step S330, otherwise go to step S320; s320, initializing the virtual role according to initialization data; s330, judging whether the current moment receives the character expression data within a preset receiving time; if yes, go to step S340, otherwise go to step S350; s340, performing interpolation operation on the character expression data at the current moment and the character expression data at the previous moment to obtain first virtual expression data of the virtual character; s350, performing interpolation operation on the character expression data at the previous moment and the initialization data to obtain second virtual expression data of the virtual character.
According to the invention, when recognition is lost or wrong, smooth transition processing keeps the character's movements stably controlled within an interval, so that the later demonstration of the virtual character is more natural and lifelike, unnatural phenomena such as jitter caused by the limited precision of the recording equipment are avoided, and the user experience is improved.
Further, the step S200 includes the steps of: s210, according to the facial expression image information of the performer, identifying the facial expression information at the current moment to obtain facial feature points and current space coordinate values of the facial feature points; s220, calculating the current space coordinate value of the facial feature point and the corresponding initial space coordinate to obtain a displacement difference value; s230, calculating the displacement difference value and a preset displacement value to obtain expression parameters of the performer at the current moment; s240, converting the expression parameters at the current moment according to a preset conversion mode to obtain the character expression data; wherein the facial feature points include eyes, eyeballs, eyebrows, nose, mouth, ears, and a neck bone.
According to the invention, the facial movements of a real face can be transferred to any virtual character selected by the user for demonstration, and the performer's current facial expression changes are reflected by the virtual character in real time, making the virtual character livelier and more vivid, adding interest and improving the user experience.
Further, the step S300 is followed by the step of:
s400, controlling the virtual character to demonstrate according to the virtual expression data.
Further, the step S400 is followed by the step of:
s500, acquiring control parameter information and controlling the virtual role to demonstrate;
the control parameter information comprises any one or more of action parameter information, prop parameter information, hair parameter information and background parameter information.
In the present invention, motion parameter information includes, but is not limited to, displacement, degree of scaling, angle of rotation, and entrance and exit movements. Compared with preset cartoon characters, the method provides a large number of real-time actions for users to use, and these actions are activated through buttons of the control window, so that the virtual character looks more lively. Moreover, the hairstyle of the virtual character can be selected according to the user's preference to obtain a more attractive and personalized virtual character. Props can likewise be selected according to the user's preference, which adds interest.
Further, before the step S100, the method comprises the steps of: S010, modeling to obtain a virtual character model of the performer; S020, acquiring facial image data of the performer and extracting facial feature information of the performer; and S030, performing skeleton binding or feature point binding on the virtual character model according to the facial feature information to generate the virtual character.
In the invention, the user can create a favorite virtual character by himself, and the user's skeleton (or feature points) can be bound to the skeleton (or feature points) of the virtual character to be created according to the user's requirements, so that even a user-created virtual character whose facial features differ greatly from a human face can be demonstrated in a lively and vivid manner.
The invention also provides a data processing system based on virtual characters, which comprises: a first acquisition module, which acquires facial expression image information of a performer; a conversion module, in communication connection with the first acquisition module, which converts the facial expression image information acquired by the first acquisition module into character expression data; and a processing module, in communication connection with the conversion module, which processes the character expression data converted by the conversion module according to a data interpolation algorithm to obtain virtual expression data of the virtual character.
According to the invention, the facial movements of a real face can be transferred to any virtual character selected by the user for demonstration, and the performer's current facial expression changes are reflected by the virtual character in real time, making the virtual character livelier and more vivid, adding interest and improving the user experience; moreover, when recognition fails, the character's movements are kept stably within an interval, so that the virtual character's demonstration remains lifelike and natural.
Further, the processing module comprises: a storage submodule, which stores the initialization data and the character expression data at each moment in chronological order; a receiving submodule, which receives the character expression data; a first judgment submodule, which judges whether the performer's virtual character has been initialized; an initialization submodule, in communication connection with the first judgment submodule, which initializes the virtual character according to the initialization data when the first judgment submodule judges that the performer's virtual character has not been initialized; a second judgment submodule, in communication connection with the receiving submodule and the first judgment submodule respectively, which, when the first judgment submodule judges that the virtual character has been initialized, judges whether the receiving submodule receives the character expression data at the current moment within a preset receiving duration; a first processing submodule, in communication connection with the second judgment submodule and the storage submodule, which, when the second judgment submodule judges that the receiving submodule receives the character expression data at the current moment within the preset receiving duration, performs an interpolation operation on the character expression data at the current moment and the character expression data at the previous moment stored by the storage submodule to obtain first virtual expression data of the virtual character; and a second processing submodule, in communication connection with the second judgment submodule and the storage submodule, which, when the second judgment submodule judges that the receiving submodule does not receive the character expression data at the current moment within the preset receiving duration, performs an interpolation operation on the character expression data at the previous moment and the initialization data stored by the storage submodule to obtain second virtual expression data of the virtual character.
According to the invention, when a fault occurs in recognition, for example when recognition fails or recognition errors occur, interpolation and smoothing are carried out on the data, so that the character's expressions and movements are stably controlled within an interval and the virtual character's demonstration is more vivid and natural.
Further, the conversion module further includes: a recognition submodule, which recognizes the facial expression information at the current moment according to the facial expression image information of the performer acquired by the first acquisition module, to obtain the facial feature points and the current spatial coordinate values of the facial feature points; an operation submodule, in communication connection with the recognition submodule, which calculates the current spatial coordinate values of the facial feature points obtained by the recognition submodule against the corresponding initial spatial coordinates to obtain a displacement difference value, and calculates the displacement difference value against a preset displacement value to obtain the expression parameters of the performer at the current moment; and a conversion submodule, in communication connection with the operation submodule, which converts the expression parameters at the current moment obtained by the operation submodule according to a preset conversion mode to obtain the character expression data; wherein the facial feature points include the eyes, eyeballs, eyebrows, nose, mouth, ears and neck bone.
According to the invention, not only can the performer's real-time facial expression changes be mapped onto the virtual character for demonstration, but the user's expression parameters can also be converted according to the user's requirements in a conversion mode preset by the user, so that more exaggerated and amusing character expression data are obtained and the user's experience and enjoyment are improved.
Further, the data processing system based on virtual characters also comprises: a modeling module, which models to obtain a virtual character model of the performer; an acquisition module, which acquires facial image data of the performer; an extraction module, in communication connection with the acquisition module, which extracts the facial feature information of the performer from the facial image data of the performer acquired by the acquisition module; a generation module, in communication connection with the modeling module and the extraction module respectively, which performs bone binding or feature point binding on the virtual character model obtained by the modeling module, according to the facial feature information extracted by the extraction module, to generate the virtual character; a control module, in communication connection with the processing module, which controls the performer's virtual character to demonstrate according to the virtual expression data obtained by the processing module; and a second acquisition module, which acquires control parameter information, the control module also being in communication connection with the second acquisition module and controlling the virtual character to demonstrate according to the control parameter information acquired by the second acquisition module; wherein the control parameter information comprises any one or more of action parameter information, prop parameter information, hair parameter information and background parameter information.
In the invention, besides demonstrating the performer's facial expression changes synchronously on the virtual character in real time, relationships between preset control keys and scene parameters are also added, so that selecting a control key makes the virtual character perform additional body actions, and the user's favorite props can be selected for the virtual character, so that the virtual character's performance is not rigid and is sufficiently personalized.
Compared with the prior art, the invention provides a data processing method and a system based on virtual roles, which at least bring the following technical effects:
1. The invention can transfer the real-time facial movements of a human face onto any virtual character selected by the user for a vivid demonstration. The performer's current facial expression changes are reflected through the virtual character in real time, making the created virtual character livelier, adding interest and improving the user experience.
2. The invention can avoid the privacy disclosure of the user and improve the safety in the application fields with strong interactivity such as video chat, network live broadcast and the like.
3. The method and the device can perform interpolation processing on the captured data when recognition is lost or wrong, and the smooth transition keeps the character's movements stably controlled within an interval, so that the character's demonstration is more natural and lifelike, unnatural phenomena such as jitter caused by the limited precision of the recording equipment are avoided, and the user experience is improved.
4. The invention can acquire rich expression information and eye movement information, combines emotion and facial animation for demonstration, and improves the use experience of performers and the watching experience of audiences.
5. The invention can fuse and render the facial action of the virtual character with at least one of the preset body animation, the facial animation, the props, the hairs, the displacement, the zoom degree, the rotation angle and the scene access animation, and has higher interestingness.
Drawings
The features, technical characteristics, advantages and implementations of the data processing method and system based on virtual characters will be further described below in a clearly understandable manner with reference to the accompanying drawings and the preferred embodiments.
FIG. 1 is a flow chart of one embodiment of a virtual role based data processing method of the present invention;
FIG. 2 is a flow chart of another embodiment of a virtual role based data processing method of the present invention;
FIG. 3 is a diagram illustrating the data interpolation effect of an embodiment of the data processing method based on virtual roles according to the present invention;
FIG. 4 is a flow diagram of another embodiment of a virtual role based data processing method of the present invention;
FIG. 5 is a diagram illustrating the effect of a grid probe according to an embodiment of the data processing method based on virtual roles;
FIG. 6 is a diagram illustrating the effect of a grid probe according to an embodiment of the data processing method based on virtual roles;
FIG. 7 is a flow diagram of another embodiment of a virtual role based data processing method of the present invention;
FIG. 8 is a flow diagram of another embodiment of a virtual role based data processing method of the present invention;
FIG. 9 is a flow diagram of another embodiment of a virtual role based data processing method of the present invention;
FIG. 10 is a flow diagram of another embodiment of a virtual role based data processing method of the present invention;
FIG. 11 is a flow diagram of another embodiment of a virtual role based data processing method of the present invention;
FIG. 12 is a diagram of a hair control interface for an embodiment of a data processing method based on avatars in accordance with the present invention;
FIG. 13 is a hair probe effect diagram of an embodiment of a virtual character based data processing method of the present invention;
FIG. 14 is a diagram of an action and prop control interface of an embodiment of a data processing method based on virtual roles according to the present invention;
FIG. 15 is a block diagram of one embodiment of a virtual role based data processing system of the present invention;
FIG. 16 is a schematic image of a virtual magic cat character in one embodiment of a virtual character-based data processing system of the present invention;
FIG. 17 is a block diagram of another embodiment of a virtual role based data processing system in accordance with the present invention;
FIG. 18 is a block diagram of another embodiment of a virtual role based data processing system in accordance with the present invention.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure as a product. In addition, in order to make the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically illustrated or only labeled. In this document, "one" means not only "only one" but also a case of "more than one".
Referring to fig. 1, an embodiment of a data processing method based on a virtual role according to the present invention includes:
s100, acquiring facial expression image information of a performer;
s200, converting the facial expression image information into role expression data;
s300, processing the character expression data according to a data interpolation algorithm to obtain virtual expression data of the virtual character.
Specifically, in this embodiment of the present invention, a motion capture device such as a camera is used to capture the facial expression image information of the performer, and a processor such as a computer is used to process the data. The camera can be the computer's built-in camera, or it can be connected through a physical connection such as USB (universal serial bus), or through a communication connection such as WiFi (wireless fidelity) or Bluetooth. According to the invention, the facial movements of a real face can be transferred to any virtual character selected by the user for demonstration, and the performer's current facial expression changes are reflected by the virtual character in real time, making the virtual character livelier and more vivid, adding interest and improving the user experience; moreover, when recognition fails, the character's movements are kept stably within an interval, so that the virtual character's demonstration remains lifelike and natural.
Referring to fig. 2, another embodiment of the data processing method based on a virtual character according to the present invention includes:
s010, modeling to obtain a virtual role model of the performer;
s020 acquires the facial image data of the performer, and extracts facial feature information of the performer;
s030 performs bone binding or feature point binding on the virtual character model according to the facial feature information to generate the virtual character;
s100, acquiring facial expression image information of a performer;
s200, converting the facial expression image information into role expression data;
s310, judging whether the virtual role of the performer is initialized or not; if yes, go to step S330, otherwise go to step S320;
s320, initializing the virtual role according to initialization data;
s330, judging whether the current moment receives the character expression data within a preset receiving time; if yes, go to step S340, otherwise go to step S350;
s340, performing interpolation operation on the character expression data at the current moment and the character expression data at the previous moment to obtain first virtual expression data of the virtual character;
s350, performing interpolation operation on the character expression data at the previous moment and the initialization data to obtain second virtual expression data of the virtual character.
In particular, in this embodiment of the present invention, the greatest benefit of the Faceshift capture technology is that captured data can be sent to a computer or another processor in the form of network packets; however, the captured data often suffers from inaccurate recognition, wrong recognition or failed recognition for special reasons (such as light interference). Therefore, the network packets need to be parsed in Unity, and erroneous packets can be handled during parsing to make up for the bad results caused by recognition loss and recognition errors. Even when recognition is lost, when part of the recognition data deviates excessively, or when recognition fails, Unity performs smooth transition processing and can keep the character's movements stably controlled within an interval, so that the character's demonstration is more natural and lifelike and unnatural phenomena such as jitter caused by the limited precision of the recording equipment are avoided. The format of the data packets transmitted by Faceshift is: protocol number + packet body. The protocol number is one of two values, 33633 and 33433.
Specifically, when the protocol number is 33633, the packet body is the initialization data of the animation and the skeleton (an initialization packet). When the protocol number is 33433, the packet body is the driving data of the animation and the skeleton (a drive data packet, where the driving data refers to the character expression data of the virtual character obtained from the performer's facial expression image information after the virtual character has been initialized).
If an error is identified before initialization, no 33633 packet can be received. In this case, the client uses the last correctly identified initialization data, and does not replace it until a correct identification arrives. If no initialization data has ever been acquired, the thread is suspended, all 33433 packets are ignored, and the process waits until correct initialization data drives it again.
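For illustration, the dispatch on the protocol number and the gating on initialization described above could look roughly as follows in Unity3d (C#). This is only a sketch under the assumption of a 2-byte protocol number; the class, field and method names are illustrative and not taken from the patent.

using System.IO;

public class FaceshiftPacketRouter
{
    const int InitProtocol = 33633;   // packet body is initialization data of animation and skeleton
    const int DriveProtocol = 33433;  // packet body is driving data of animation and skeleton

    byte[] _lastValidInit;            // last correctly identified initialization data

    public void OnPacket(byte[] packet)
    {
        using (var reader = new BinaryReader(new MemoryStream(packet)))
        {
            int protocol = reader.ReadUInt16();
            byte[] body = reader.ReadBytes(packet.Length - 2);

            if (protocol == InitProtocol)
            {
                _lastValidInit = body;    // replace only when a correctly identified init packet arrives
            }
            else if (protocol == DriveProtocol)
            {
                if (_lastValidInit == null)
                    return;               // no initialization data yet: ignore all 33433 packets
                ParseDriveData(body);     // otherwise parse the drive data and drive the character
            }
        }
    }

    void ParseDriveData(byte[] body) { /* parse "packet ID + packet size + packet start bit + drive data" blocks */ }
}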
An identification error is typically discovered by parsing the 33433 packet.
The format of the 33433 packet body is:
packet ID + packet size + packet start bit + drive data + packet ID + packet size + packet start bit + drive data ...
When the packet ID equals 101, the drive data is a boolean value indicating whether the identification of that frame is normal. If an error is identified, the data of that frame is skipped: it is not used to drive the character, and the character is only smoothly transitioned to the initial state. In Unity3d, the orientation, displacement and skin deformation data of the character's frame are smoothly interpolated (DamperLerp): the orientation and displacement of the character are smoothly transitioned to their initial values within 2 seconds, and the skin deformation data of the character is transitioned to the initial state.
The code is as follows:
Translation = Vector3.Lerp(_prevValidTranslation, m_prev_track.Translation(), _lefpT);
Rotation = Quaternion.Slerp(_prevValidRotation, m_prev_track.dRotation(), _lefpT);
where the float _lefpT increases from 0 to 1 over 2 seconds.
This process is referred to as "loss reduction".
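As a sketch of how the 2-second "loss reduction" blend above could be driven per frame in Unity3d, assuming the field names from the snippet above and standing in for the track object with plain fields:

using UnityEngine;

public class LossReduction : MonoBehaviour
{
    Vector3 _prevValidTranslation;                        // last correctly identified translation
    Quaternion _prevValidRotation = Quaternion.identity;  // last correctly identified rotation
    Vector3 _targetTranslation;                           // stands in for m_prev_track.Translation()
    Quaternion _targetRotation = Quaternion.identity;     // stands in for m_prev_track.dRotation()
    float _lefpT;                                         // interpolation factor, 0 -> 1 over 2 seconds

    void Update()
    {
        _lefpT = Mathf.Clamp01(_lefpT + Time.deltaTime / 2f);
        transform.localPosition = Vector3.Lerp(_prevValidTranslation, _targetTranslation, _lefpT);
        transform.localRotation = Quaternion.Slerp(_prevValidRotation, _targetRotation, _lefpT);
    }
}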
If a packet whose identification failed is received during loss reduction, it is uniformly skipped.
If a correctly identified packet is received during loss reduction, the current motion is smoothly transitioned to the data of the new packet:
Translation = Vector3.Lerp(_prevValidTranslation, m_current_track.Translation(), _lefpT);
Rotation = Quaternion.Slerp(_prevValidRotation, m_current_track.dRotation(), _lefpT);
where the float _lefpT is reset to 0 and again increases from 0 to 1 over 2 seconds.
There is a special case in which Faceshift acquires data by recognizing something other than a face as a face, for example a face pattern printed on clothing, or hair being identified as eyebrows when the light is too dark. In this case Faceshift considers its data collection to be correct and marks the 101 field of the 33433 packet as true, so the case must be handled in Unity3d. Each time such data is acquired, the received data is not suitable for directly driving the character's skeleton animation and skin deformation; instead, the current data and the previous data are interpolated:
data=Math.Lerp(prevData,currentData,0.5)
The interpolated value is used as the data acquired this time and is also assigned to prevData, where it is kept for processing when the next data packet is acquired. The advantage is that when recognition is unstable, the character's movements can be controlled almost stably within an interval. The effect of interpolating the acquired data is shown in fig. 3: the character's demonstration is more natural and lifelike, and unnatural phenomena such as jitter caused by the precision of the recording equipment are avoided. In this way, when recognition is lost or wrong, smooth transition processing keeps the character's movements stably controlled within an interval, so that the character's demonstration is more natural and lifelike, unnatural jitter caused by the limited precision of the recording equipment is avoided, and the user experience is improved.
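A minimal sketch of this frame-to-frame smoothing, under the assumption that the drive data can be treated channel by channel as floats (the class and method names are illustrative):

using UnityEngine;

public class DriveDataSmoother
{
    float _prevData;      // interpolated value kept from the previous packet
    bool _hasPrev;

    public float Smooth(float currentData)
    {
        if (!_hasPrev)
        {
            _prevData = currentData;
            _hasPrev = true;
        }
        // data = Lerp(prevData, currentData, 0.5): halfway between the previous and current values
        float data = Mathf.Lerp(_prevData, currentData, 0.5f);
        _prevData = data;  // reserve the interpolated value for the next packet
        return data;
    }
}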
Referring to fig. 4, another embodiment of the data processing method based on a virtual character according to the present invention includes:
s010, modeling to obtain a virtual role model of the performer;
s020 acquires the facial image data of the performer, and extracts facial feature information of the performer;
s030 performs bone binding or feature point binding on the virtual character model according to the facial feature information to generate the virtual character;
s100, acquiring facial expression image information of a performer;
s210, according to the facial expression image information of the performer, identifying the facial expression information at the current moment to obtain facial feature points and current space coordinate values of the facial feature points;
s220, calculating the current space coordinate value of the facial feature point and the corresponding initial space coordinate to obtain a displacement difference value;
s230, calculating the displacement difference value and a preset displacement value to obtain expression parameters of the performer at the current moment;
s240, converting the expression parameters at the current moment according to a preset conversion mode to obtain the character expression data;
s400, controlling the virtual character to demonstrate according to the virtual expression data;
wherein the facial feature points include eyes, eyeballs, eyebrows, nose, mouth, ears, and a neck bone.
Specifically, in this embodiment of the present invention, the user can create a favorite virtual character by himself. After the virtual character model is created, a basic expression library of the virtual character needs to be created: bones are bound in a preset manner to obtain the initial expression of the virtual character, and N basic expressions of the virtual character are obtained by training adjustment or from pre-stored expression ratios. In the training adjustment, the spatial coordinates of all facial feature key points of the user's real expressions are obtained, and the virtual character is then trained and adjusted according to the spatial coordinates of each basic expression to obtain its N basic expressions, where the number of basic expressions of the virtual character is the same as the number of basic expressions of the real face. As shown in fig. 5, a grid and grid probes are created on the acquired face according to a preset number of key feature points of the real face, so that the facial expression ratios are obtained; later, the facial expression changes acquired by the camera yield the performer's current facial expression ratio, which is mapped to the virtual character in real time for a vivid demonstration.

A motion capture device such as a camera captures the facial expression image information of the performer in front of the lens, which includes a series of changes such as expression changes, eyeball changes and neck movements. The captured facial expression image information is analyzed to obtain the current spatial coordinate values of the facial feature points; comparing them with the initial spatial coordinates gives a displacement difference value, from which the performer's expression parameters at the current moment are obtained. The current expression parameters are converted to obtain the character expression data, which is mapped onto the virtual character model selected by the user, so that the performer's current facial expression changes are reflected through the virtual character in real time.

If only the facial expression changes are acquired, essentially only the outline positions of the five sense organs are obtained, which does not capture the changes of a real face vividly enough. Therefore, the movement of the eyeballs and of the neck is also acquired: the spatial coordinate information of the eyeball movement is classified and compared with the eyeball emotion coordinates in a pre-established emotion information database to obtain the emotional meaning represented by the current eyeball coordinates. In this way, rich eye movement information can be acquired and the performer's emotional state analyzed, and the emotional state can be mapped onto the virtual character for demonstration together with pre-bound emoticons and body actions representing the corresponding emotion, which increases interest and improves the performer's experience and the audience's viewing experience.
Specifically, the initial spatial coordinates of the initial expression are obtained by capturing the facial key feature points of the performer's initial expression in front of the lens (i.e., the expression in which the performer's face shows no change). The performer's current expression changes in front of the lens are then captured in real time, and the collected facial key feature points of the current expression are analyzed to obtain the current spatial coordinate values; the difference between the current spatial coordinate values and the initial spatial coordinates is then calculated, and a ratio is computed. For example, suppose the facial expression collected by the camera in real time is happiness: the spatial coordinate values of the eyebrows, eyes, nose, mouth, ears and so on for the initial expression are taken as the origin values So, the spatial coordinate values of these features when a basic expression is at its maximum (i.e., the maximum degree of a common expression of the performer such as joy, sadness or anger) are taken as Smax, and the spatial coordinate values of the current expression are taken as St; then the ratio is Q = |St − So| ÷ |Smax − So|.

As another example, suppose the performer's current facial expression is mixed joy and sorrow, i.e., the emotions of sadness and joy are interlaced. The complex expression is split into the basic expressions of joy and sadness; assuming the joy ratio is S1 = 0.5 and the sadness ratio is S2 = 0.5, the corresponding ratios of the virtual character are obtained from the previously established basic expressions of joy and sadness, i.e., a joy ratio P1 = 0.5 and a sadness ratio P2 = 0.5, and the displacement changes of the corresponding facial feature points of the virtual character are controlled accordingly, so that the virtual character shows the same mixture of joy and sorrow as the performer. Of course, the ratios of joy and sadness in such a mixed expression are not necessarily both equal to 0.5, and the ratios can be calculated accordingly.

In addition, suppose the performer's current facial expression is a grin with a ratio S3 = 0.2. The grin ratio of the virtual character can be calculated according to a preset relation P = nS; for example, when n = 2, P3 = 0.2 × 2 = 0.4, and the degree of the virtual character's grin is greater than that of the performer. In this way, not only can the facial movements of the real face be transferred to any avatar selected by the user, such as an animated character, an enterprise mascot or an idol, but the performer can also conveniently give exaggerated, amusing performances as required, improving the performer's experience and the audience's viewing experience. Because many live-broadcast actions are acquired through Faceshift and all actions can be controlled freely, a program is often required to calculate mesh deformation in real time instead of having artists produce the animations.
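A small C# sketch of the ratio calculation and the preset amplification described above. It follows the reconstruction Q = |St − So| ÷ |Smax − So| given here, and the type and method names are illustrative rather than taken from the patent.

using UnityEngine;

public static class ExpressionRatio
{
    // so:   spatial coordinates of the feature point in the initial (neutral) expression
    // smax: spatial coordinates of the feature point at the maximum degree of the basic expression
    // st:   spatial coordinates of the feature point in the current expression
    public static float Ratio(Vector3 so, Vector3 smax, Vector3 st)
    {
        float range = (smax - so).magnitude;
        return range > 0f ? Mathf.Clamp01((st - so).magnitude / range) : 0f;
    }

    // Map the performer's ratio S onto the character's ratio P with a preset factor n,
    // e.g. n = 2 turns a grin ratio of 0.2 into 0.4 on the virtual character.
    public static float Amplify(float s, float n)
    {
        return n * s;
    }
}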
Using a similar probe: setting the positions and the radiuses of a plurality of key probes, and simulating various skin effects by controlling the vertex coordinates in the ball with the probes as the center of the ball in real time. For example, as shown in fig. 6, if the virtual character created by the performer is a dog model, a separate probe is added to the nose of the dog model because the dog's nature likes to smell the nose. The probe ball covers the entire nose. At random intervals (typically 1 to 3 seconds), the x coordinate of the vertex inside the probe sphere is shifted toward the center and restored, creating the effect of squeezing. The y coordinate of the vertex is shifted and restored towards the positive direction of the y axis to form the effect of lifting the nose, so that the nose of the virtual character can be more vividly and vividly controlled to change according to the nose change of the user, and the interestingness can be improved. Due to the characteristic of real-time interaction of internet live broadcast, the method can quickly establish contact between live broadcast images and audiences and obtain feedback, thereby greatly improving the entertainment and the interestingness of live broadcast. The application field of the virtual technology of the invention can comprise various commercial operations, such as brand authorization, advertisement implantation, peripheral product development, film and television work production, artists, network red brokers and the like. But also from the complete industrial chain building related to media, PGC video, film, concert, games and animation elements. In addition, virtual live broadcasting can also be applied to education, museums, tourist attractions and the like. Only the facial movement changes of the five sense organs of the performer are collected for demonstration, so that if eyeball changes are bound with emotion information due to insufficient vividness, the eyeball changes are collected and mapped to virtual roles for demonstration, live broadcast demonstration can be more vivid and interesting, and the watching experience of audiences is improved. The invention can also avoid the privacy disclosure of the user and improve the safety in the application fields with strong interactivity such as video chat, network live broadcast and the like.
Referring to fig. 7, another embodiment of the data processing method based on a virtual character according to the present invention includes:
s010, modeling to obtain a virtual role model of the performer;
s020 acquires the facial image data of the performer, and extracts facial feature information of the performer;
s030 performs bone binding or feature point binding on the virtual character model according to the facial feature information to generate the virtual character;
s100, acquiring facial expression image information of a performer;
s210, according to the facial expression image information of the performer, identifying the facial expression information at the current moment to obtain facial feature points and current space coordinate values of the facial feature points;
s220, calculating the current space coordinate value of the facial feature point and the corresponding initial space coordinate to obtain a displacement difference value;
s230, calculating the displacement difference value and a preset displacement value to obtain expression parameters of the performer at the current moment;
s240, converting the expression parameters at the current moment according to a preset conversion mode to obtain the character expression data;
s400, controlling the virtual character to demonstrate according to the virtual expression data;
s500, acquiring control parameter information and controlling the virtual role to demonstrate;
wherein the facial feature points include eyes, eyeballs, eyebrows, nose, mouth, ears, and neck bones; the control parameter information comprises any one or more of action parameter information, prop parameter information, hair parameter information and background parameter information.
Specifically, in this embodiment of the present invention, if control parameter information is obtained while the virtual character is being demonstrated in S400, the facial animation of the virtual character needs to be demonstrated fused with the animation corresponding to the control parameter information selected by the performer, as shown in fig. 8.
S10, according to a preset torso action instruction input by the user, performing a fused animation demonstration of the virtual character's facial animation and the torso action parameter information selected by the user, wherein the torso motion information comprises displacement, rotation and scaling; and/or,
S30, according to a preset facial action instruction input by the user, performing a fused animation demonstration of the virtual character's facial animation and the facial action selected by the user; and/or,
S40, according to a preset prop instruction input by the user, performing a fused animation demonstration of the virtual character's facial animation and the prop parameter information selected by the user; and/or,
S50, according to a preset hair instruction input by the user, performing a fused animation demonstration of the virtual character's facial animation and the hair parameter information selected by the user; and/or,
S60, according to a preset background instruction input by the user, performing a fused animation demonstration of the virtual character's facial animation and the background parameter information selected by the user.
Compared with preset cartoon characters, the technical feature here is that a large number of real-time actions are available for the user, and these actions are activated through the buttons of the control window, so that the virtual character looks lively. Moreover, the hairstyle of the virtual character can be selected according to the user's preference to obtain a more attractive and personalized virtual character. Props such as glasses and blush can likewise be selected according to the user's preference, which adds interest.
Further, the step S10 includes the steps of:
s11, judging whether the virtual character receives the preset action command within a preset standby time; if not, executing step S12; otherwise, go to step S17;
s12, outputting the current state of the virtual role as a standby state, and controlling the virtual role to demonstrate according to a preset standby action;
s17, outputting the current state of the virtual character as an active state, calling the action parameter information corresponding to the current preset action instruction according to the current preset action instruction, and controlling the virtual character to demonstrate.
Further, after the step S12, before the step S17, the method further includes the steps of:
s13, judging whether the virtual character receives the preset action command in the standby state, if so, executing step S14; otherwise, go to step S12;
s14, judging whether the current preset action command is the first preset action command, if yes, executing step S15; otherwise, go to step S16;
s15, stopping the standby action, calling action parameter information corresponding to the first preset action instruction according to the first preset action instruction, and controlling the virtual role to demonstrate;
and S16, continuing the standby action, calling action parameter information corresponding to a second preset action instruction according to the second preset action instruction, and controlling the virtual character to demonstrate.
Further, the step S17 is followed by the step of:
s18, judging whether the virtual character receives the next preset action command within the completion duration corresponding to the current preset action command in the active state, if so, executing the step S19; otherwise, go to step S20;
s19, the virtual character stops the current action, and according to the next preset action instruction, the action parameter information corresponding to the next preset action instruction is called to control the virtual character to demonstrate;
and S20, continuing to control the virtual character to demonstrate according to the current preset action instruction.
The overall flow chart of the above steps is shown in fig. 9. The preset action instructions include a first preset action instruction and a second preset action instruction. The first preset action instruction is a preset action instruction that interrupts the standby body action and is demonstrated through the body action associated with the first preset action instruction; the second preset action instruction is a preset action instruction that does not interrupt the standby body action and is demonstrated by fusing the body action associated with the second preset action instruction with the standby body action. In this embodiment, only the torso action parameter information corresponding to the torso action instruction is described in detail; the processing of the facial action instruction is the same as that of the torso action instruction and is not repeated here.
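The distinction could be sketched as follows, as a compressed C# sketch of steps S11 to S20; the enum, class and method names are illustrative and the animation calls are left as stubs.

public enum ActionKind { First, Second }

public class ActionController
{
    public bool InStandby { get; private set; } = true;

    public void OnActionInstruction(ActionKind kind)
    {
        if (InStandby)
        {
            if (kind == ActionKind.First)
            {
                StopStandbyAction();      // S15: interrupt the standby action
                PlayAction(kind);         //      and demonstrate the associated body action alone
                InStandby = false;
            }
            else
            {
                PlayAction(kind);         // S16: keep the standby action running and fuse the two
            }
        }
        else
        {
            StopCurrentAction();          // S19: a new instruction received before the current action
            PlayAction(kind);             //      completes interrupts it and is demonstrated instead
        }
    }

    void StopStandbyAction() { /* stop the standby animation */ }
    void StopCurrentAction() { /* stop the currently playing action */ }
    void PlayAction(ActionKind kind) { /* call the action parameter information for this instruction */ }
}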
In this embodiment of the invention, the head and the body are separated when the model is made, and the head is mounted on the body in Unity. Faceshift controls only the neck skeleton, eyeball rotation and facial skin deformation animation. The body actions include standby actions and non-standby actions; the non-standby actions include 3Dmax prefabricated animations and Unity3d program animations. A prefabricated animation controls only the body, and the prefabricated animation and the facial expression are superimposed on each other; for example, the character can cast a charming glance while doing a handstand. For a Unity3d program animation, the new skeleton animation is loaded on top after the head animation and the prefabricated animation have been processed. Take clicking the "pop eyeball" button as an example: in each frame Update, the eye's coordinates are first returned to the original coordinates, then the eye's orientation is turned according to the Faceshift data, and finally the eye's "skeleton" is translated forward. In addition, because long waits make the character appear stiff, a special "small action" is played every N × t seconds, such as touching the head or the belly, making the virtual character appear more lively.
Compared with the preset cartoon roles, the invention has the technical characteristics that a large number of real-time actions can be used by users, and the action control is divided into 3 parts: faceshift captured facial motion, 3Dmax pre-fabricated animation, and programmatically controlled animation in Unity 3D.
Faceshift captured animation: facial expression, eye movement, neck rotation
3Dmax prefabricated animation: clapping, surging mouth, love with both hands, falling, dancing, standing upside down, knocking head, bowing, leg shaking, split, yawning, tantrum and the like
Unity3d procedural animation: the head revolves at a high speed, falls off, is pulled and pops out of the eyeball, and the like.
Wherein the 3Dmax pre-animation and the Unity3d procedural animation are activated by a button of the control window.
The animations are prefabricated in 3Dmax and controlled programmatically in Unity3D. The head is detached from the body when the model is made, and the head is then attached to the body inside Unity, with the neck skeleton as the head's parent. Faceshift controls only the neck skeleton, eyeball rotation and facial skin deformation, while the prefabricated animation controls only the body animation, and the two are superimposed on each other; for example, the character can cast a charming glance while standing upside down. For the Unity3d program animation, the new skeleton animation is loaded on top after the Faceshift animation and the prefabricated animation have been processed. Take clicking the "pop eyeball" button as an example: in each frame Update, the eye's coordinates are first returned to the original coordinates, then the eye's orientation is turned according to the Faceshift data, and finally the eye's "skeleton" is translated forward.
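A sketch of that per-frame sequence; the field names and the pop distance are assumptions:

using UnityEngine;

public class PopEyeball : MonoBehaviour
{
    public Transform eyeBone;                 // the eye "skeleton"
    public float popDistance = 0.05f;

    Vector3 _originalLocalPosition;
    Quaternion _faceshiftRotation = Quaternion.identity; // orientation taken from the parsed Faceshift data

    void Start()
    {
        _originalLocalPosition = eyeBone.localPosition;
    }

    public void SetFaceshiftRotation(Quaternion rotation)
    {
        _faceshiftRotation = rotation;
    }

    void Update()
    {
        eyeBone.localPosition = _originalLocalPosition;   // 1. return the eye to its original coordinates
        eyeBone.localRotation = _faceshiftRotation;       // 2. turn the eye per the Faceshift data
        eyeBone.localPosition += _faceshiftRotation * Vector3.forward * popDistance; // 3. translate the eye "skeleton" forward
    }
}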
The standby actions are divided into 5 actions. The main standby action is 0.5 seconds long and is used to restore the character's skeleton. The special standby actions are: breathing in place, touching the head, touching the belly, and swaying the body back and forth. Because a long standby would make the character appear stiff, a special "small action" is played every N × 0.5 seconds (N being a random number between 1 and 10); touching the head or the belly from time to time makes the cartoon image look more lively. As soon as the character receives a play signal for a prefabricated animation, in any state, it starts playing that prefabricated animation; after the animation finishes, it automatically enters the main standby animation and randomly plays the special standby animations. The running motion is a special prefabricated animation: after a running signal is received, if the character is in the standby state at that moment, the running animation interrupts the standby state and plays immediately; otherwise, the character only performs the displacement and does not play the running animation. For example, if the character is performing a horse-riding dance and is controlled to move to the left of the screen, the character jumps to the left of the screen in the horse-riding dance posture instead of interrupting the dance.
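The standby scheduling could be sketched roughly as follows; the animation state names are placeholders, not from the patent.

using System.Collections;
using UnityEngine;

public class StandbyScheduler : MonoBehaviour
{
    public Animator animator;
    public bool InStandby = true;

    static readonly string[] SpecialStandby =
        { "BreatheInPlace", "TouchHead", "TouchBelly", "SwayBody" };

    IEnumerator Start()
    {
        while (true)
        {
            if (InStandby)
            {
                animator.Play("MainStandby");       // 0.5 s main standby action: restores the skeleton
                int n = Random.Range(1, 11);        // N is a random number between 1 and 10
                yield return new WaitForSeconds(n * 0.5f);
                if (InStandby)                      // a special "small action" every N x 0.5 seconds
                    animator.Play(SpecialStandby[Random.Range(0, SpecialStandby.Length)]);
            }
            else
            {
                yield return null;                  // a prefabricated animation is playing
            }
        }
    }
}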
As shown in fig. 10, further, the step S30 includes the steps of:
s31, judging whether the virtual character receives the preset prop instruction, if yes, executing step S32;
s32, judging whether the virtual character receives the preset action command, if yes, executing step S33; otherwise, go to step S37;
s33, judging whether the prop corresponding to the preset prop instruction is a first preset prop, if so, executing step S34; otherwise, go to step S36;
s34 hiding the first preset prop within a preset hiding duration;
s35, calling action parameter information corresponding to the preset action instruction according to the preset action instruction, and controlling the virtual character to demonstrate;
s36 reserving a second preset prop, calling action parameter information corresponding to the preset action instruction according to the preset action instruction, and controlling the virtual character to demonstrate;
s37, calling prop parameter information corresponding to the preset prop instruction according to the preset prop instruction, and controlling the virtual character to demonstrate;
The prop types include a first preset prop and a second preset prop. The first preset prop is a prop that is hidden for a preset hiding duration when the preset action instruction is received; the second preset prop is a prop that is retained when the preset action instruction is received. Although props can be switched on and off directly through buttons, some props need to be invoked in conjunction with events.
The first preset props include, but are not limited to: the love heart, the cucumber, the bone, and so on. For example, if the action of making a love heart is interrupted by a dancing action, the love heart is immediately hidden.
The second preset props include, but are not limited to: blush, glasses, sunglasses, the screen-cracking special effect, the cucumber-stain special effect, and so on. For example, a red love heart is displayed when the two hands come together during the love-heart action, so the trigger condition for displaying it is the action; when a bone is thrown toward the screen, the screen-cracking picture should be displayed when the bone reaches the front of the screen, so the trigger condition of the screen-cracking special effect is the position of the bone. When the bone hits the screen and the animation immediately switches to a handstand, the screen-cracking special effect and the glasses still remain. The glasses can only be turned off through a button of the control panel, whereas the screen-cracking special effect has its own life cycle: it gradually becomes transparent and is automatically hidden after a few seconds.
After the 3Dmax animations are imported into Unity3D, the key frames are edited in Unity3D. For the love heart, for example, an event is thrown when the animation timeline reaches the frame where the hands come together; for the bone throw, an event is thrown when the timeline reaches the frame where the bone flies to the front of the screen. These events behave exactly like button-click events: each one can drive a prop to be shown or hidden, so editing control is convenient and all kinds of trigger conditions can be managed quickly and uniformly.
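A minimal sketch of this keyframe-event wiring in Unity C# is shown below; the clip name, event time and prop names are hypothetical, while AnimationEvent and AnimationClip.AddEvent are standard Unity APIs. Because the Animator dispatches the event to a method of the same name on its GameObject, a button's OnClick handler can call the same method, which is what makes buttons and animation events interchangeable triggers. The code is as follows:

using UnityEngine;

// Sketch of registering a keyframe event and receiving it on the character.
public class PropEventReceiver : MonoBehaviour
{
    [SerializeField] private GameObject loveHeart;

    // Register an event on the imported clip (this could equally be done in the editor).
    public static void AddShowHeartEvent(AnimationClip loveClip, float handsClosedTime)
    {
        var evt = new AnimationEvent
        {
            time = handsClosedTime,        // moment the hands come together (assumed value)
            functionName = "ShowProp",     // method below, looked up on the Animator's GameObject
            stringParameter = "loveHeart"
        };
        loveClip.AddEvent(evt);
    }

    // Called by the Animator when the event fires; the same code path a UI button could call.
    public void ShowProp(string propName)
    {
        if (propName == "loveHeart") loveHeart.SetActive(true);
    }
}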
As shown in fig. 11, further, the step S40 includes the steps of:
s41, judging whether the virtual character receives the preset hair instruction, if yes, executing step S42;
s42, judging whether the virtual character receives the preset action command, if yes, executing step S43; otherwise, go to step S47;
s43, calling action parameter information corresponding to the preset action instruction according to the preset action instruction, and controlling the virtual character to demonstrate;
s44, searching the hair change part of the virtual character according to a preset searching mode;
s45, judging whether the space coordinate variation value of the hair variation part exceeds a preset variation range, if so, executing a step S46;
s46, adjusting the hair at the hair change part according to a preset adjustment mode;
and S47, calling hair parameter information corresponding to the preset hair instruction according to the preset hair instruction, and controlling the virtual character to demonstrate.
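The patent leaves the search and adjustment of steps S44 to S46 abstract. One possible reading, sketched below in C#, tracks each hair region through a probe transform and clamps any per-frame change that exceeds the preset variation range; the threshold value and the clamping strategy are assumptions. The code is as follows:

using UnityEngine;

// Hedged sketch of S44 to S46: find hair regions whose spatial coordinates changed
// more than the preset range and adjust them before the hair is redrawn.
public class HairRegionCheck : MonoBehaviour
{
    [SerializeField] private Transform[] hairProbes;    // probes bound to the skeleton
    [SerializeField] private float presetRange = 0.05f; // preset variation range (assumed value)

    private Vector3[] lastPositions;

    private void Start()
    {
        lastPositions = new Vector3[hairProbes.Length];
        for (int i = 0; i < hairProbes.Length; i++) lastPositions[i] = hairProbes[i].position;
    }

    private void LateUpdate()
    {
        for (int i = 0; i < hairProbes.Length; i++)
        {
            Vector3 delta = hairProbes[i].position - lastPositions[i];
            if (delta.magnitude > presetRange)
            {
                // S46: adjust the hair at this region, here simply by clamping the change.
                hairProbes[i].position = lastPositions[i] + delta.normalized * presetRange;
            }
            lastPositions[i] = hairProbes[i].position;
        }
    }
}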
Wherein the hair selection information comprises one or any more of hair color, hair length, hair transparency and hair coloring.
In the present invention, every rendering task in Unity3D goes through a shader. A shader is in fact a small program responsible for combining the input mesh with the input maps or colors in a specified way and drawing the resulting image on the screen. The input maps or colors, the corresponding shader, and the shader's specific parameter settings are packed and stored together to form a Material; the material can then be assigned to a suitable renderer for rendering output. Unlike an animated film, in which hair is rendered in advance at great cost in time, the hair of the virtual character in the present invention is rendered in real time. Because rendering quality and rendering efficiency have to be balanced, a hair-probe technique is used to select the hair length, color, transparency, and so on of the virtual character. In addition, the character's movement often changes the acceleration acting on the hair, twisting the hair in a local area. Adjusting every single hair, as an animated film would, would incur a large computational cost. The hair probe system is therefore used to search for and process these distortions, so that local hair swing can be controlled while the whole character uses only one shader, which greatly improves rendering efficiency.
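As a small, hedged illustration of the chain just described (shader plus inputs packed into a Material, which is assigned to a renderer), the following Unity C# builds a material from a fur shader and hands it to a renderer; the shader name and property names are assumptions. The code is as follows:

using UnityEngine;

// Sketch of packing a shader and its inputs into a Material and assigning it for output.
public class FurMaterialSetup : MonoBehaviour
{
    [SerializeField] private Texture2D baseMap;
    [SerializeField] private Texture2D lengthMap;

    private void Start()
    {
        var shader = Shader.Find("Custom/FurShell");   // assumed shader name
        var material = new Material(shader);
        material.SetTexture("_BaseMap", baseMap);       // assumed property names
        material.SetTexture("_LengthMap", lengthMap);
        material.SetColor("_FurColor", new Color(0.9f, 0.8f, 0.6f));

        GetComponent<Renderer>().material = material;   // assign to the renderer for output
    }
}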
The hair shader is written using Unity3D's vertex & fragment functions. Composing the hair requires at least two maps: a Base map and a Length map. As shown in fig. 12, the Base map gives the color of the hair and is created by the 3D artist in the modeling software; shading and a dot-pattern filter are then added in ps (Photoshop) so that the map has irregular shading. The Length map gives the length of the hair: the artist takes the hair map made in the modeling software, desaturates it, increases the contrast, and fine-tunes it, with white indicating the longest hair and black the shortest. The Noise map is shared by all cartoon characters and indicates the transparency of the hair; it is usually tiled 15 to 20 times. The less it is tiled, the coarser the hair clumps; the more it is tiled, the finer the clumps. The Base map multiplied by the Base color gives the character's "base color", i.e. the color of the areas without hair (such as the lips, the mouth, and around the eyes). The Base map multiplied by FurColor, and then by the transparency of the tiled Noise map, produces a "translucent, granular" map. This map is offset along the model's normals by an amount scaled by the gray level of the FurLength (Length) map, which yields the outermost contour of the hair. The process is repeated 20 times in total, with the i-th pass offset by i/20 of that amount, forming a 20-layer "translucent" hair shell. The transparency of each layer is given by the following formula:
the transparency alpha of each layer is alpha = 1 - pow(multiplier, _FurThickness).
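Assuming "multiplier" is the layer fraction i/20, the per-layer offset and transparency of the 20 shells can be tabulated with the small C# helper below; the furLength and furThickness values are illustrative only. The code is as follows:

using UnityEngine;

// Worked example of the 20-layer shell parameters: each layer is pushed further out
// along the normal and becomes more transparent toward the outer shells.
public static class FurShellTable
{
    public static void Print(float furLength = 0.02f, float furThickness = 2.0f)
    {
        for (int i = 1; i <= 20; i++)
        {
            float multiplier = i / 20f;                              // layer fraction i/20
            float offset = multiplier * furLength;                   // displacement along the vertex normal
            float alpha = 1f - Mathf.Pow(multiplier, furThickness);  // alpha = 1 - pow(multiplier, _FurThickness)
            Debug.Log($"layer {i}: offset {offset:F4}, alpha {alpha:F3}");
        }
    }
}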
Of course, if the hair should look more cartoon-like, a lighting method with sharper transitions between light and dark can be used. There are two coloring methods:
A. Cartoon coloring with a static light source
A cube map is sampled using the model's normal transformed into view space, and the result is multiplied by the existing hair color. The code is as follows:
float3 cubenormal=mul(UNITY_MATRIX_MV,float4(v.normal,0));
float4 cube=texCUBE(_ToonShade,i.cubenormal);
return col * cube; // col is the hair/base color calculated in the preceding steps
B. Cartoon coloring with a dynamic light source
The dot product of the model's normal and the direction of the main directional light gives a value d between -1 and 1; d is remapped to the range 0 to 1 by d = d * 0.5 + 0.5. The lighting color is then obtained by using d as the uv abscissa to look up a ramp texture one pixel high, and the result is multiplied by the existing hair color. The code is as follows:
#ifndef USING_DIRECTIONAL_LIGHT
lightDir=normalize(lightDir);
#endif
half d=dot(normal,lightDir)*0.5+0.5;
half4 ramp=tex2D(_Ramp,float2(d,d));
return _LightColor0*ramp*(atten*2);
in addition, the character's movement often changes the acceleration acting on the hair, twisting the hair in a local area. Adjusting every single hair, as an animated film would, would incur a large computational cost. A hair probe system is therefore used for the search and processing. As shown in fig. 13, the ball markers are hair probes. Each probe is bound to the character's skeleton; when a bone moves or rotates it produces an acceleration, which the probe captures and feeds back to the hair. The closer the hair is to a probe, the more it is affected by that acceleration; the farther away, the less it is affected. At locations between probes, the acceleration is an interpolation of several probes. After the CPU has computed the accelerations, the probes pass several Vector3 values to the GPU. The hair shader originally offsets the hair along the normal by i/20; this is modified to normal * i/20 + accelerate * pow2(i/20). pow2 is used because the closer the hair is to the root, the less it is affected by acceleration. Female characters often have long hair, and because the hair swings with the head, whose orientation is computed in real time, the hair motion cannot be produced by pre-recording animation. Here the grid probes are used again. For shorter hair (e.g. the fringe over the forehead), a probe is placed at the hair root with the character's head as its parent; the probe captures the head's translational and rotational acceleration in real time. Mesh vertices closer to the probe's origin are less affected by the acceleration, and vertices farther away are more affected. The code is as follows:
Vector3 offset=distance*distance*accelerate;
where distance is the distance from the vertex to the probe's origin. For longer hair (e.g. a ponytail), two probes are used: probe A is placed at the root of the hair and probe B at the tip. The distal probe B swings with the hair and naturally droops under physical gravity. A spatial Bezier curve is interpolated between the two probes:
Vector3 Evaluate(Vector3 v0,Vector3 v1,Vector3 v2,Vector3 v3,float t){
float mf=1-t;
return v0*mf*mf*mf+3*v1*t*mf*mf+3*v2*t*t*mf+v3*t*t*t;}
where v0 = m_start; // position of the probe at the root of the hair
v1 = m_start + (m_end - m_start).normalized * m_startPriority; // root position plus the unit vector toward the tip times the initial weight (the larger the weight, the stiffer the hair; the smaller the weight, the more the hair droops)
v2 = m_accelerate * m_endPriority; // acceleration of the hair's movement times the endpoint weight (the larger the weight, the stiffer the hair; the smaller the weight, the softer the hair)
v3 = m_end; // position of the probe at the tip of the hair
t is the distance from each vertex of the hair model to probe A divided by the total length between probes A and B; its physical meaning is the vertex's relative weight between the two probes.
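A hedged usage example of the Bezier evaluation for one ponytail vertex is sketched below in Unity C#. The patent text lists v2 as m_accelerate * m_endPriority; adding the tip position before the acceleration term here is an interpretation, and all field names and weight values are assumptions. The code is as follows:

using UnityEngine;

// Usage sketch: build v0..v3 from the two probes and evaluate the curve at a vertex.
public class PonytailSample : MonoBehaviour
{
    [SerializeField] private Transform probeA;             // probe at the root of the hair
    [SerializeField] private Transform probeB;             // probe at the tip of the hair
    [SerializeField] private float startPriority = 0.3f;   // stiffness near the root (assumed)
    [SerializeField] private float endPriority = 0.1f;     // stiffness near the tip (assumed)
    [SerializeField] private Vector3 accelerate;            // acceleration captured by the probes

    public Vector3 CurvePointFor(Vector3 vertexPosition, float totalLength)
    {
        Vector3 v0 = probeA.position;
        Vector3 v1 = v0 + (probeB.position - v0).normalized * startPriority;
        Vector3 v2 = probeB.position + accelerate * endPriority; // interpretation, see lead-in
        Vector3 v3 = probeB.position;

        // t: distance from this vertex to probe A over the total A-B length.
        float t = Vector3.Distance(vertexPosition, probeA.position) / totalLength;
        return Evaluate(v0, v1, v2, v3, Mathf.Clamp01(t));
    }

    private static Vector3 Evaluate(Vector3 v0, Vector3 v1, Vector3 v2, Vector3 v3, float t)
    {
        float mf = 1f - t;
        return v0 * mf * mf * mf + 3f * v1 * t * mf * mf + 3f * v2 * t * t * mf + v3 * t * t * t;
    }
}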
Therefore, local hair swing can be controlled under the condition that the whole character only uses one shader, and the rendering efficiency is greatly improved.
As shown in fig. 14, the 3Dmax prefabricated animations and the Unity3D procedural animations are activated through the buttons of the control window for actions, props and so on, which makes the virtual character's image look more vivid. The hairstyle of the virtual character can be chosen according to the user's preference to obtain a more attractive, personalized character, and props such as glasses, blush, bones, love hearts and cucumbers can likewise be chosen and used whenever they do not interfere with the character's body action, which adds interest. In short, any one or more of body action, props and hair can be selected as required and fused and rendered together with the facial changes. Alternatively, the user may input no body-animation instruction at all, in which case the background plays body motion according to the preset standby actions once the preset standby time is reached; the standby-animation function can also be switched off in advance so that the virtual character's animation is played only when the user wants it. All of these events behave like button-click events and can drive a prop to be shown or hidden, so they are easy to edit and all kinds of trigger conditions can be managed uniformly.
Referring to FIG. 15, the present invention provides one embodiment of a virtual role based data processing system comprising:
the first acquisition module is used for acquiring facial expression image information of a performer;
the conversion module is in communication connection with the acquisition module; converting the facial expression image information acquired by the acquisition module into role expression data;
the processing module is in communication connection with the conversion module; and processing the character expression data converted by the conversion module according to a data interpolation algorithm to obtain virtual expression data of the virtual character.
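For orientation only, a minimal sketch of the three modules' contracts is given below; the interface and method names are illustrative and not part of the patent. The code is as follows:

using UnityEngine;

// Illustrative contracts for the acquisition, conversion and processing modules.
public interface IFaceCaptureModule           // first acquisition module
{
    Texture2D CaptureExpressionImage();       // facial expression image information
}

public interface IConversionModule            // conversion module
{
    float[] ToCharacterExpressionData(Texture2D expressionImage);
}

public interface IProcessingModule            // processing module
{
    float[] Interpolate(float[] characterExpressionData); // virtual expression data
}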
Specifically, in the embodiment of the invention, the first acquisition module captures the facial expression image information of the performer in front of the camera; the conversion module processes this image information to obtain the spatial coordinates of the current facial features and thereby the character expression data; and the processing module performs interpolation and related operations to obtain the virtual expression data of the virtual character. For example, "Night Meow", hosted by a well-known emotional-topic host, uses a completely new virtual male idol image (fig. 16) to tell emotional stories, share the ups and downs in each person's relationships, answer questions online for the audience, and soothe their wounds. The embodiment of the invention can transfer the facial motion of a human face to any virtual character selected by the user for demonstration, reflect the performer's current facial expression changes through the virtual character in real time, make the virtual character more vivid, increase interest, and improve the user experience; and when recognition fails, the character's motion can be kept stably within an interval, so the virtual character's demonstration remains natural and lifelike.
Referring to fig. 17, in another embodiment of the present invention, the processing module includes:
the storage submodule stores the initialization data and the role expression data at each moment according to the time sequence;
the receiving submodule receives character expression data;
the first judgment submodule judges whether the virtual role of the performer is initialized or not;
the initialization submodule is in communication connection with the first judgment submodule; when the first judging submodule judges that the virtual role of the performer is not initialized, initializing the virtual role according to initialization data;
the second judgment submodule is in communication connection with the receiving submodule and the first judgment submodule respectively; when the first judgment submodule judges that the virtual character is initialized, judging whether the receiving submodule receives character expression data at the current moment in a preset receiving time length;
the first processing sub-module is in communication connection with the second judging sub-module and the storage sub-module; when the second judgment submodule judges that the receiving submodule receives the role expression data of the current moment in a preset receiving duration; performing interpolation operation on the role expression data at the current moment stored by the storage submodule and the role expression data at the last moment stored by the storage submodule to obtain first virtual expression data of the virtual role;
the second processing submodule is in communication connection with the second judging submodule and the storage submodule; when the second judgment submodule judges that the receiving submodule does not receive the role expression data at the current moment within the preset receiving duration; and performing interpolation operation on the character expression data of the last moment stored by the storage submodule and the initialization data stored by the storage submodule to obtain second virtual expression data of the virtual character.
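The first and second processing sub-modules can be summarized by the hedged C# sketch below: when a frame of character expression data arrives within the receive window, the previous frame is interpolated toward it, otherwise the previous frame is interpolated back toward the initialization data. The blend weight is an assumption. The code is as follows:

using UnityEngine;

// Sketch of the fallback interpolation performed by the processing sub-modules.
public static class ExpressionSmoother
{
    public static float[] Step(float[] previous, float[] current, float[] initialization,
                               bool receivedThisFrame, float blend = 0.5f)
    {
        // First virtual expression data: lerp(previous, current);
        // second virtual expression data: lerp(previous, initialization).
        float[] target = receivedThisFrame ? current : initialization;
        var result = new float[previous.Length];
        for (int i = 0; i < previous.Length; i++)
        {
            result[i] = Mathf.Lerp(previous[i], target[i], blend);
        }
        return result;
    }
}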
The conversion module further comprises:
the recognition sub-module is used for recognizing the facial expression information at the current moment according to the facial expression image information of the performer acquired by the first acquisition module to obtain the facial feature points and the current space coordinate values of the facial feature points;
the operation submodule is in communication connection with the identification submodule; calculating the current space coordinate value of the facial feature point, which is obtained by the recognition of the recognition submodule, with the corresponding initial space coordinate to obtain a displacement difference value; and calculating the displacement difference value and a preset displacement value to obtain the expression parameters of the performer at the current moment;
the conversion submodule is in communication connection with the operation submodule; converting the expression parameters of the current moment obtained by the operation of the operation submodule according to a preset conversion mode to obtain the character expression data;
wherein the facial feature points include eyes, eyeballs, eyebrows, nose, mouth, ears, and a neck bone.
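Consistent with steps S220 and S230 of claim 1's dependent claim 3, the operation sub-module's computation can be sketched as the displacement of a feature point from its initial coordinate divided by a preset displacement value; the clamping and the names below are assumptions. The code is as follows:

using UnityEngine;

// Sketch of turning a feature point's displacement into an expression parameter.
public static class ExpressionParameters
{
    public static float Compute(Vector3 currentCoordinate, Vector3 initialCoordinate,
                                float presetDisplacement)
    {
        float displacement = Vector3.Distance(currentCoordinate, initialCoordinate); // displacement difference value
        return Mathf.Clamp01(displacement / presetDisplacement);                     // ratio used as the parameter
    }
}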
Specifically, in the embodiment of the present invention, the facial key feature points of the performer's initial expression in front of the camera are first captured to obtain the initial spatial coordinates of the initial expression. The performer's current expression changes are then captured in real time, the collected key feature points of the current expression are analyzed to obtain the current spatial coordinate values, the difference between the two is computed, and the ratio is calculated. The invention can thus transfer the facial motion of the human face in real time to any virtual character selected by the user for demonstration. Collecting only the movement of the performer's facial features is not lively enough, so eyeball changes can be bound to emotional information: rapid eye movement suggests an impulsive decision, a dull gaze suggests absent-mindedness, slowly rolling eyes convey how tired the performer is, and so on. After the eyeball changes are collected and mapped onto the virtual character, the live demonstration becomes more vivid and interesting and the audience's viewing experience improves. The greatest benefit of the capture technique is that the captured data can be sent to other clients as network packets; however, for special reasons (such as light interference), the captured data is often recognized inaccurately, recognized incorrectly, or not recognized at all. The network packets therefore need to be parsed in Unity, and erroneous packets can be handled during parsing to compensate for lost or incorrect recognition. Even if recognition is lost, part of the recognition data deviates too much, or recognition fails, Unity can transition smoothly and keep the character's motion stably within an interval, so the character's demonstration is more natural and lifelike, and the face and facial motion collected in real time can be transferred smoothly and rapidly to the virtual character.
Referring to fig. 18, the present invention provides another embodiment of a virtual character-based data processing system, which further includes: a modeling module for modeling a virtual character model of the performer; an acquisition module that acquires facial image data of the performer; an extraction module, in communication connection with the acquisition module, that extracts the performer's facial feature information from the facial image data acquired by the acquisition module; a generation module, in communication connection with the modeling module and the extraction module, that performs bone binding or feature-point binding on the virtual character model built by the modeling module according to the extracted facial feature information to generate the virtual character; a control module, in communication connection with the processing module, that controls the performer's virtual character to perform according to the virtual expression data produced by the processing module; and a second acquisition module that acquires control parameter information, the control module also being in communication connection with the second acquisition module and controlling the virtual character to perform according to the acquired control parameter information. The control parameter information includes any one or more of action parameter information, prop parameter information, hair parameter information and background parameter information.
Specifically, in the embodiment of the invention, the user can create a favorite virtual character. After the virtual character model is built, a basic expression library of the virtual character needs to be established: bones are bound in a preset way to obtain the character's initial expression, and then N basic expressions of the character are obtained either through training adjustment or from pre-stored expression ratio values. Training adjustment means obtaining the spatial coordinates of all facial key feature points of the user's real expressions and then training and adjusting the virtual character according to those coordinates to obtain its N basic expressions, where the number of the character's basic expressions equals the number of basic expressions of the real face. In this way, the facial expression changes captured later by the camera can be mapped onto the virtual character in real time and demonstrated vividly. Compared with preset cartoon characters, a technical feature of the invention is that a large number of real-time actions are available to the user: animations are prefabricated in 3Dmax and controlled by the program in Unity3D, so the 3Dmax prefabricated animations and Unity3D procedural animations are activated through the buttons of the control window, making the virtual character's image more vivid. The hairstyle of the virtual character can also be chosen according to the user's preference to obtain a more attractive, personalized character, and the chosen props can be used whenever they do not interfere with the character's body action, which adds interest. The character's facial motion is therefore fused and rendered with at least one of preset body actions, props and hair, making the result more interesting and the user experience better. After the 3Dmax animation is imported into Unity3D, the key frames are edited in Unity3D; for the love heart, an event is thrown when the timeline reaches the frame where the hands come together, and for the bone throw, an event is thrown when the timeline reaches the frame where the bone flies to the front of the screen. All these events behave like button-click events and can drive a prop to be shown or hidden, so editing control is convenient and all kinds of trigger conditions can be managed quickly and uniformly.
It should be noted that the above embodiments can be freely combined as required. The foregoing is only a preferred embodiment of the present invention, and for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention; such modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A data processing method based on virtual roles is characterized by comprising the following steps:
s100, acquiring facial expression image information of a performer;
s200, converting the facial expression image information into role expression data;
s300, processing the character expression data according to a data interpolation algorithm to obtain virtual expression data of a virtual character;
the step S300 includes:
s310, judging whether the virtual role of the performer is initialized or not; if yes, go to step S330, otherwise go to step S320;
s320, initializing the virtual role according to initialization data;
s330, judging whether the current moment receives the character expression data within a preset receiving time; if yes, go to step S340;
s340, performing interpolation operation on the character expression data at the current moment and the character expression data at the previous moment to obtain first virtual expression data of the virtual character;
the step S340 includes:
if the character expression data at the current moment is marked as a frame with an identification error, discarding the character expression data at the current moment, and smoothly transitioning the character expression data at the previous moment to an initial state through smooth interpolation;
and if the character expression data at the current moment is marked as a frame with normal recognition but the actual recognition is wrong, performing interpolation operation on the character expression data at the current moment and the character expression data at the previous moment, and using the obtained interpolation result as the first virtual expression data of the virtual character.
2. The virtual character-based data processing method according to claim 1, characterized in that:
the step S330 further includes: otherwise, go to step S350;
s350, performing interpolation operation on the character expression data at the previous moment and the initialization data to obtain second virtual expression data of the virtual character.
3. The virtual character-based data processing method according to claim 1, wherein the step S200 comprises the steps of:
s210, according to the facial expression image information of the performer, identifying the facial expression information at the current moment to obtain facial feature points and current space coordinate values of the facial feature points;
s220, calculating the current space coordinate value of the facial feature point and the corresponding initial space coordinate to obtain a displacement difference value;
s230, calculating the displacement difference value and a preset displacement value to obtain expression parameters of the performer at the current moment;
s240, converting the expression parameters at the current moment according to a preset conversion mode to obtain the character expression data; wherein the facial feature points include eyes, eyeballs, eyebrows, nose, mouth, ears, and a neck bone.
4. The virtual character-based data processing method according to claim 1, further comprising, after the step S300, the steps of:
s400, controlling the virtual character to demonstrate according to the virtual expression data.
5. The virtual character-based data processing method according to claim 4, wherein the step S400 is followed by the step of:
s500, acquiring control parameter information and controlling the virtual role to demonstrate; the control parameter information comprises any one or more of action parameter information, prop parameter information, hair parameter information and background parameter information;
the step S500 includes:
s10, according to a preset trunk action instruction input by a user, carrying out fusion animation demonstration on the virtual character facial animation and trunk action parameter information selected by the user; wherein the torso motion information comprises displacement, rotation, and scaling; and/or,
s20, according to the preset face action command input by the user, carrying out fusion animation demonstration on the virtual character face animation and the face action selected by the user; and/or,
s30, according to the preset prop instruction input by the user, carrying out fusion animation demonstration on the facial animation of the virtual character and the prop parameter information selected by the user; and/or,
s40, according to a preset hair instruction input by a user, carrying out fusion animation demonstration on the facial animation of the virtual character and the hair parameter information selected by the user; and/or,
and S50, performing fusion animation demonstration on the virtual character facial animation and the background parameter information selected by the user according to a preset background instruction input by the user.
6. The virtual character-based data processing method according to any one of claims 1-5, wherein the step S100 is preceded by the steps of:
s010, modeling to obtain a virtual role model of the performer;
s020 acquires the facial image data of the performer, and extracts facial feature information of the performer;
and S030, according to the facial feature information, performing skeleton binding or feature point binding on the virtual character model to generate the virtual character.
7. A virtual character-based data processing system, comprising:
the first acquisition module is used for acquiring facial expression image information of a performer;
the conversion module is in communication connection with the acquisition module; converting the facial expression image information acquired by the acquisition module into role expression data;
the processing module is in communication connection with the conversion module; processing the character expression data converted by the conversion module according to a data interpolation algorithm to obtain virtual expression data of a virtual character;
the processing module comprises:
the storage submodule stores the initialization data and the role expression data at each moment according to the time sequence;
the receiving submodule receives character expression data;
the first judgment submodule judges whether the virtual role of the performer is initialized or not;
the initialization submodule is in communication connection with the first judgment submodule; when the first judging submodule judges that the virtual role of the performer is not initialized, initializing the virtual role according to initialization data;
the second judgment submodule is in communication connection with the receiving submodule and the first judgment submodule respectively; when the first judgment submodule judges that the virtual character is initialized, judging whether the receiving submodule receives character expression data at the current moment in a preset receiving time length;
the first processing sub-module is in communication connection with the second judging sub-module and the storage sub-module; when the second judgment submodule judges that the receiving submodule receives the role expression data of the current moment in a preset receiving duration, the role expression data of the current moment stored by the storage submodule and the role expression data of the previous moment stored by the storage submodule are subjected to interpolation operation to obtain first virtual expression data of the virtual role;
the interpolation operation of the role expression data of the current moment stored by the storage submodule and the role expression data of the last moment stored by the storage submodule is performed to obtain the first virtual expression data of the virtual role, and the method comprises the following steps:
if the character expression data at the current moment is marked as a frame with an identification error, discarding the character expression data at the current moment, and smoothly transitioning the character expression data at the previous moment to an initial state through smooth interpolation;
and if the character expression data at the current moment is marked as a frame with normal recognition but the actual recognition is wrong, performing interpolation operation on the character expression data at the current moment and the character expression data at the previous moment, and using the obtained interpolation result as the first virtual expression data of the virtual character.
8. The virtual character-based data processing system of claim 7, wherein the processing module further comprises:
the second processing submodule is in communication connection with the second judging submodule and the storage submodule; when the second judgment submodule judges that the receiving submodule does not receive the role expression data at the current moment within the preset receiving duration; and performing interpolation operation on the character expression data of the last moment stored by the storage submodule and the initialization data stored by the storage submodule to obtain second virtual expression data of the virtual character.
9. The virtual character-based data processing system of claim 7, wherein the translation module further comprises:
the recognition sub-module is used for recognizing the facial expression information at the current moment according to the facial expression image information of the performer acquired by the first acquisition module to obtain the facial feature points and the current space coordinate values of the facial feature points;
the operation submodule is in communication connection with the identification submodule; calculating the current space coordinate value of the facial feature point, which is obtained by the recognition of the recognition submodule, with the corresponding initial space coordinate to obtain a displacement difference value; and calculating the displacement difference value and a preset displacement value to obtain expression parameters of the performer at the current moment;
the conversion submodule is in communication connection with the operation submodule; converting the expression parameters of the current moment obtained by the operation of the operation submodule according to a preset conversion mode to obtain the character expression data; wherein the facial feature points include eyes, eyeballs, eyebrows, nose, mouth, ears, and a neck bone.
10. A virtual character-based data processing system according to any one of claims 7-9, further comprising:
the modeling module is used for modeling to obtain a virtual character model of the performer;
an acquisition module that acquires facial image data of the performer;
the extraction module is in communication connection with the acquisition module; extracting facial feature information of the performer according to the facial image data of the performer acquired by the acquisition module;
the generation module is respectively in communication connection with the modeling module and the extraction module; according to the facial feature information extracted by the extraction module, carrying out bone binding or feature point binding on the virtual character model obtained by modeling of the modeling module to generate the virtual character;
the control module is in communication connection with the processing module; controlling the virtual role of the performer to demonstrate according to the virtual expression data obtained after the processing of the processing module;
the second acquisition module is used for acquiring control parameter information;
the control module is also in communication connection with the second acquisition module; controlling the virtual role to demonstrate according to the control parameter information acquired by the second acquisition module;
the control parameter information comprises any one or more of action parameter information, prop parameter information, hair parameter information and background parameter information;
the control module is further used for carrying out fusion animation demonstration on the virtual character facial animation and the trunk action parameter information selected by the user according to a preset trunk action instruction input by the user; wherein the torso motion information comprises displacement, rotation, and scaling; and/or performing fusion animation demonstration on the virtual character facial animation and the facial action selected by the user according to a preset facial action instruction input by the user; and/or performing fusion animation demonstration on the facial animation of the virtual character and the prop parameter information selected by the user according to a preset prop instruction input by the user; and/or performing fusion animation demonstration on the virtual character facial animation and hair parameter information selected by a user according to a preset hair instruction input by the user; and/or performing fusion animation demonstration on the virtual character facial animation and the background parameter information selected by the user according to a preset background instruction input by the user.
CN201710331551.XA 2017-05-11 2017-05-11 Data processing method and system based on virtual roles Active CN107154069B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710331551.XA CN107154069B (en) 2017-05-11 2017-05-11 Data processing method and system based on virtual roles

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710331551.XA CN107154069B (en) 2017-05-11 2017-05-11 Data processing method and system based on virtual roles

Publications (2)

Publication Number Publication Date
CN107154069A CN107154069A (en) 2017-09-12
CN107154069B true CN107154069B (en) 2021-02-02

Family

ID=59793268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710331551.XA Active CN107154069B (en) 2017-05-11 2017-05-11 Data processing method and system based on virtual roles

Country Status (1)

Country Link
CN (1) CN107154069B (en)

Families Citing this family (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633270A (en) * 2017-09-29 2018-01-26 上海与德通讯技术有限公司 Intelligent identification Method, robot and computer-readable recording medium
CN107945255A (en) * 2017-11-24 2018-04-20 北京德火新媒体技术有限公司 A kind of virtual actor's facial expression driving method and system
CN108335345B (en) * 2018-02-12 2021-08-24 北京奇虎科技有限公司 Control method and device of facial animation model and computing equipment
CN108399383B (en) * 2018-02-14 2021-03-23 深圳市商汤科技有限公司 Expression migration method, device storage medium, and program
CN108564641B (en) * 2018-03-16 2020-09-25 中国科学院自动化研究所 Expression capturing method and device based on UE engine
CN108525306B (en) * 2018-03-26 2020-03-06 Oppo广东移动通信有限公司 Game implementation method and device, storage medium and electronic equipment
CN108305309B (en) * 2018-04-13 2021-07-20 腾讯科技(成都)有限公司 Facial expression generation method and device based on three-dimensional animation
CN108632660A (en) * 2018-05-28 2018-10-09 深圳Tcl新技术有限公司 Method for displaying image, television set and the storage medium of television set
CN108961367A (en) * 2018-06-21 2018-12-07 珠海金山网络游戏科技有限公司 The method, system and device of role image deformation in the live streaming of three-dimensional idol
CN108810561A (en) * 2018-06-21 2018-11-13 珠海金山网络游戏科技有限公司 A kind of three-dimensional idol live broadcasting method and device based on artificial intelligence
CN109147017A (en) * 2018-08-28 2019-01-04 百度在线网络技术(北京)有限公司 Dynamic image generation method, device, equipment and storage medium
CN109445573A (en) * 2018-09-14 2019-03-08 重庆爱奇艺智能科技有限公司 A kind of method and apparatus for avatar image interactive
CN109120985B (en) * 2018-10-11 2021-07-23 广州虎牙信息科技有限公司 Image display method and device in live broadcast and storage medium
CN111200747A (en) * 2018-10-31 2020-05-26 百度在线网络技术(北京)有限公司 Live broadcasting method and device based on virtual image
CN109410298B (en) * 2018-11-02 2023-11-17 北京恒信彩虹科技有限公司 Virtual model manufacturing method and expression changing method
CN109542389B (en) * 2018-11-19 2022-11-22 北京光年无限科技有限公司 Sound effect control method and system for multi-mode story content output
CN111291151A (en) * 2018-12-06 2020-06-16 阿里巴巴集团控股有限公司 Interaction method and device and computer equipment
CN111353842A (en) * 2018-12-24 2020-06-30 阿里巴巴集团控股有限公司 Processing method and system of push information
CN109727303B (en) * 2018-12-29 2023-07-25 广州方硅信息技术有限公司 Video display method, system, computer equipment, storage medium and terminal
CN111460870A (en) * 2019-01-18 2020-07-28 北京市商汤科技开发有限公司 Target orientation determination method and device, electronic equipment and storage medium
CN109840019B (en) * 2019-02-22 2023-01-10 网易(杭州)网络有限公司 Virtual character control method, device and storage medium
CN109922355B (en) * 2019-03-29 2020-04-17 广州虎牙信息科技有限公司 Live virtual image broadcasting method, live virtual image broadcasting device and electronic equipment
CN110139115B (en) * 2019-04-30 2020-06-09 广州虎牙信息科技有限公司 Method and device for controlling virtual image posture based on key points and electronic equipment
CN110062267A (en) * 2019-05-05 2019-07-26 广州虎牙信息科技有限公司 Live data processing method, device, electronic equipment and readable storage medium storing program for executing
CN110321008B (en) * 2019-06-28 2023-10-24 北京百度网讯科技有限公司 Interaction method, device, equipment and storage medium based on AR model
CN110308792B (en) * 2019-07-01 2023-12-12 北京百度网讯科技有限公司 Virtual character control method, device, equipment and readable storage medium
CN110399825B (en) * 2019-07-22 2020-09-29 广州华多网络科技有限公司 Facial expression migration method and device, storage medium and computer equipment
CN110570499B (en) * 2019-09-09 2023-08-15 珠海金山数字网络科技有限公司 Expression generating method, device, computing equipment and storage medium
CN110557625A (en) * 2019-09-17 2019-12-10 北京达佳互联信息技术有限公司 live virtual image broadcasting method, terminal, computer equipment and storage medium
CN110784676B (en) * 2019-10-28 2023-10-03 深圳传音控股股份有限公司 Data processing method, terminal device and computer readable storage medium
CN111063339A (en) * 2019-11-11 2020-04-24 珠海格力电器股份有限公司 Intelligent interaction method, device, equipment and computer readable medium
CN111111199A (en) * 2019-11-19 2020-05-08 江苏名通信息科技有限公司 Role three-dimensional modeling system and method based on picture extraction
CN111111154B (en) * 2019-12-04 2023-06-06 北京代码乾坤科技有限公司 Modeling method and device for virtual game object, processor and electronic device
CN111179384A (en) * 2019-12-30 2020-05-19 北京金山安全软件有限公司 Method and device for showing main body
CN111401921B (en) * 2020-03-05 2023-04-18 成都威爱新经济技术研究院有限公司 Virtual human-based remote customer service method
WO2021208330A1 (en) * 2020-04-17 2021-10-21 完美世界(重庆)互动科技有限公司 Method and apparatus for generating expression for game character
CN111432267B (en) * 2020-04-23 2021-05-21 深圳追一科技有限公司 Video adjusting method and device, electronic equipment and storage medium
CN112419465A (en) * 2020-12-09 2021-02-26 网易(杭州)网络有限公司 Rendering method and device of virtual model
CN113255457A (en) * 2021-04-28 2021-08-13 上海交通大学 Animation character facial expression generation method and system based on facial expression recognition
CN113656638B (en) * 2021-08-16 2024-05-07 咪咕数字传媒有限公司 User information processing method, device and equipment for watching live broadcast
CN113744374B (en) * 2021-09-03 2023-09-22 浙江大学 Expression-driven 3D virtual image generation method
CN114219878B (en) * 2021-12-14 2023-05-23 魔珐(上海)信息科技有限公司 Animation generation method and device for virtual character, storage medium and terminal
CN114554111B (en) * 2022-02-22 2023-08-01 广州繁星互娱信息科技有限公司 Video generation method and device, storage medium and electronic equipment


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101944163A (en) * 2010-09-25 2011-01-12 德信互动科技(北京)有限公司 Method for realizing expression synchronization of game character through capturing face expression
CN105654537A (en) * 2015-12-30 2016-06-08 中国科学院自动化研究所 Expression cloning method and device capable of realizing real-time interaction with virtual character

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Facial Expression Animation System Based on the Candide-3 Model; Zhang Zeqiang et al.; Fujian Computer; 2016-02-25; pages 9-11 *
Expressive Facial Animation Generation Based on Visual Feature Extraction; Zhang Yi; China Master's Theses Full-text Database, Information Science and Technology; 2008-08-15; pages 34-53 *

Also Published As

Publication number Publication date
CN107154069A (en) 2017-09-12

Similar Documents

Publication Publication Date Title
CN107154069B (en) Data processing method and system based on virtual roles
US11478709B2 (en) Augmenting virtual reality video games with friend avatars
JP5785254B2 (en) Real-time animation of facial expressions
CN106170083B (en) Image processing for head mounted display device
WO2022062678A1 (en) Virtual livestreaming method, apparatus, system, and storage medium
US20180373413A1 (en) Information processing method and apparatus, and program for executing the information processing method on computer
US20160110922A1 (en) Method and system for enhancing communication by using augmented reality
KR20220017903A (en) An entertaining mobile application that animates a single image of the human body and applies effects
JP2012519333A (en) Image conversion system and method
CN102470273A (en) Visual representation expression based on player expression
CN107248185A (en) A kind of virtual emulation idol real-time live broadcast method and system
CN101968891A (en) System for automatically generating three-dimensional figure of picture for game
Gonzalez-Franco et al. Movebox: Democratizing mocap for the microsoft rocketbox avatar library
KR102353556B1 (en) Apparatus for Generating Facial expressions and Poses Reappearance Avatar based in User Face
Maraffi Maya character creation: modeling and animation controls
KR20140065762A (en) System for providing character video and method thereof
Huang et al. A process for the semi-automated generation of life-sized, interactive 3D character models for holographic projection
de Dinechin et al. Automatic generation of interactive 3D characters and scenes for virtual reality from a single-viewpoint 360-degree video
TWI814318B (en) Method for training a model using a simulated character for animating a facial expression of a game character and method for generating label values for facial expressions of a game character using three-imensional (3d) image capture
KR102553432B1 (en) System for creating face avatar
Doroski Thoughts of Spirits in Madness: Virtual Production Animation and Digital Technologies for the Expansion of Independent Storytelling
KR20060040118A (en) Method and appartus for producing customized three dimensional animation and system for distributing thereof
US8896607B1 (en) Inverse kinematics for rigged deformable characters
WO2022229639A2 (en) Computer-implemented method for controlling a virtual avatar
CN117504296A (en) Action generating method, action displaying method, device, equipment, medium and product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant