CN113192165A - Control information generation method and device based on personalized expression base, electronic equipment and readable storage medium - Google Patents

Control information generation method and device based on personalized expression base, electronic equipment and readable storage medium

Info

Publication number
CN113192165A
Authority
CN
China
Prior art keywords: expression; base; control information; personalized; expression base
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110519161.1A
Other languages
Chinese (zh)
Inventor
李团辉
王擎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huya Technology Co Ltd
Original Assignee
Guangzhou Huya Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huya Technology Co Ltd
Priority to CN202110519161.1A
Publication of CN113192165A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed

Abstract

The application provides a control information generation method and device based on personalized expression bases, an electronic device, and a readable storage medium. A plurality of face images of a reference object in different states are first obtained, and the facial features contained in each face image are split based on a plurality of preset expression base templates to generate a plurality of personalized expression bases. Video frames of the reference object under different set expressions are then obtained, control information that contains the personalized expression bases and corresponds to each set expression is generated according to the video frames and the personalized expression bases, and the control information is stored. In this scheme, personalized expression bases that embody the characteristics of the reference object can be generated from the expression base templates, and control information expressed in terms of those bases can be obtained from the video frames of the different set expressions and the personalized expression bases, so that the control information can later be used to control a virtual image. Equipment complexity is thereby reduced, excessive interference is avoided, and the obtained control information accurately reflects each set expression.

Description

Control information generation method and device based on personalized expression base, electronic equipment and readable storage medium
Technical Field
The application relates to the technical field of live broadcast, in particular to a control information generation method and device based on a personalized expression base, electronic equipment and a readable storage medium.
Background
With the popularity of the live broadcast industry, more and more people have entered it as anchors. The traditional live broadcast industry relies mainly on an anchor's personal talent to attract viewers; the anchor always broadcasts with a single image (his or her own appearance), which lacks diversity and rules out many interesting live broadcast modes and ways of playing. A new live broadcast form, virtual live broadcast, has therefore recently emerged, allowing an anchor to drive a number of different virtual images behind the scenes for personalized live broadcasting.
Expression animation for a high-precision, lifelike avatar is mainly produced in two ways. One is to drive the avatar with information manually drawn by an animator frame by frame; this way is very inefficient and unsuitable for mass production. The other is to generate the control information for driving the avatar from marker points added to a subject's face; because marker points must be added, this way suffers from complex equipment and excessive interference.
Disclosure of Invention
The objects of the present application include, for example, providing a control information generation method and apparatus based on a personalized expression base, an electronic device, and a readable storage medium, which can obtain control information that accurately represents each expression while reducing equipment complexity and avoiding excessive interference.
The embodiment of the application can be realized as follows:
in a first aspect, the present application provides a method for generating control information based on a personalized expression base, where the method includes:
obtaining a plurality of face images of a reference object in different states;
splitting facial features contained in each facial image based on a plurality of preset expression base templates and the plurality of facial images to generate a plurality of personalized expression bases;
obtaining video frames of the reference object under different set expressions;
and generating, according to the video frames and the personalized expression bases, control information that contains the personalized expression bases and corresponds to each set expression, and storing the control information.
In an alternative embodiment, the method further comprises:
in the live broadcast process, obtaining a control expression instruction for the constructed virtual image, and obtaining the set expression corresponding to the control expression;
extracting the stored control information corresponding to the set expression;
and controlling the virtual image to display the control expression according to the control information.
In an optional embodiment, the step of generating, according to the video frame and the personalized expression base, control information including a personalized expression base corresponding to the set expression includes:
assigning a base coefficient to each personalized expression base and combining the bases to obtain a combined expression base;
comparing the video frame with the combined expression base, and adjusting each base coefficient based on a comparison result until a base coefficient combination finally corresponding to the set expression is obtained when a preset requirement is met;
and generating control information based on the base coefficient combination and the personalized expression base.
In an alternative embodiment, the video frame contains point cloud information of three-dimensional feature points;
the step of comparing the video frame with the combined expression base and adjusting each base coefficient based on the comparison result until a base coefficient combination finally corresponding to the set expression is obtained when a preset requirement is met comprises the following steps:
aligning the point cloud information of the three-dimensional feature points with the feature points in the combined expression base;
and adjusting each base coefficient according to the result of the alignment operation until a base coefficient combination finally corresponding to the set expression is obtained when a preset requirement is met.
In an optional embodiment, the step of aligning the point cloud information of the three-dimensional feature points with the feature points in the combined expression base includes:
determining three-dimensional feature points corresponding to the feature points in the point cloud information of the three-dimensional feature points aiming at the feature points in the combined expression base, and carrying out position alignment constraint on the corresponding feature points and the three-dimensional feature points on coordinate values;
determining a plane where the three-dimensional feature points corresponding to the feature points are located, and performing distance constraint on the feature points and the plane;
and obtaining an alignment operation result according to the results of the position alignment constraint and the distance constraint.
In an alternative embodiment, the video frame comprises two-dimensional image information;
the step of comparing the video frame with the combined expression base, adjusting each base coefficient based on a comparison result, and obtaining a base coefficient combination finally corresponding to the set expression when a preset requirement is met comprises the following steps:
comparing the two-dimensional image information with the two-dimensional information contained in the combined expression base;
and adjusting each base coefficient according to the comparison operation result until a base coefficient combination finally corresponding to the set expression is obtained when a preset requirement is met.
In an optional embodiment, the step of comparing the two-dimensional image information with the two-dimensional information included in the combined expression base includes:
rendering the combined expression base based on a preset texture map to obtain a comparison image;
respectively carrying out position comparison operation on the key points in the comparison image and the corresponding key points in the two-dimensional image information on coordinate values and carrying out pixel comparison operation on pixel values;
and obtaining a comparison operation result based on the results of the position comparison operation and the pixel comparison operation.
In an optional embodiment, the video frame with the set expression includes multiple continuous video frames, and the control information corresponding to the set expression includes multiple pieces of control sub-information respectively corresponding to the video frames;
the step of controlling the avatar to display the control expression according to the control information comprises the following steps:
aiming at continuous multi-frame live video frames to be controlled in the live broadcast process, obtaining control sub-information corresponding to each frame of live video frame in the control information;
and controlling the virtual image in the corresponding live video frame by utilizing each piece of control sub-information so as to display the control expression by the virtual image in the continuous multi-frame live video frame.
In an optional embodiment, the step of splitting facial features included in each of the facial images based on a plurality of preset expression base templates and the plurality of facial images to generate a plurality of personalized expression bases includes:
constructing three-dimensional models in different states according to the plurality of face images;
splitting facial features in each three-dimensional model based on a plurality of preset expression base templates and a plurality of three-dimensional models to generate a plurality of personalized expression bases;
the personalized expression bases comprise a reference expression base and a plurality of variable expression bases, and each variable expression base has a facial feature which is different from the corresponding facial feature in the reference expression base compared with the reference expression base.
In an alternative embodiment, the step of constructing three-dimensional models in different states according to the plurality of facial images includes:
for each state, acquiring a plurality of target face images in the state, wherein the target face images are images shot from different perspectives;
and performing three-dimensional reconstruction based on the multiple target face images, and performing topology processing on a model obtained by the three-dimensional reconstruction to obtain the three-dimensional model in the state.
In a second aspect, the present application provides a control information generating apparatus based on a personalized expression base, the apparatus comprising:
the first obtaining module is used for obtaining a plurality of face images of the reference object in different states;
the splitting module is used for splitting facial features contained in each facial image based on a plurality of preset expression base templates and the plurality of facial images to generate a plurality of personalized expression bases;
the second obtaining module is used for obtaining video frames of the reference object under different set expressions;
and the generating module is used for generating control information which corresponds to each set expression and contains the personalized expression base according to the video frame and the personalized expression base, and storing the control information.
In a third aspect, the present application provides an electronic device comprising one or more storage media and one or more processors in communication with the storage media, the one or more storage media storing machine-executable instructions that, when the electronic device runs, are executed by the processors to perform the method steps of any one of the preceding embodiments.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon machine-executable instructions which, when executed, implement the method steps of any one of the preceding embodiments.
The beneficial effects of the embodiment of the application include, for example:
the application provides a control information generation method and device based on personalized expression bases, electronic equipment and a readable storage medium. And then obtaining video frames of the reference object under different set expressions, generating control information containing the personalized expression bases corresponding to the set expressions according to the video frames and the personalized expression bases, and storing the control information. According to the scheme, the personalized expression base which reflects the characteristics of the reference object can be generated based on the expression base template, and the control information represented by the personalized expression base can be obtained based on the video frames with different set expressions and the personalized expression base, so that the control information can be used for controlling the subsequent virtual image. Therefore, the complexity of the equipment can be reduced, excessive interference is avoided, and the obtained control information can accurately reflect each set expression.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting its scope; for those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
Fig. 1 is a schematic view of an application scenario of a control information generation method according to an embodiment of the present application;
fig. 2 is a schematic view of another application scenario of the control information generation method according to the embodiment of the present application;
fig. 3 is a flowchart of a control information generation method according to an embodiment of the present application;
FIG. 4 is a flowchart of sub-steps included in step S120 of FIG. 3;
FIG. 5 shows facial images taken from multiple perspectives according to an embodiment of the present application;
fig. 6 is a schematic diagram of a model obtained by three-dimensional reconstruction provided in an embodiment of the present application;
fig. 7 is a schematic diagram of a three-dimensional model obtained after topology processing according to an embodiment of the present application;
fig. 8 is a schematic diagram of an expression base template provided in an embodiment of the present application;
fig. 9 is a schematic diagram of a personalized expression base according to an embodiment of the present application;
FIG. 10 is a flowchart of sub-steps included in step S140 of FIG. 3;
FIG. 11 is a schematic diagram of a combined expression base obtained in an embodiment of the present application;
FIG. 12 is a flowchart of the sub-steps involved in step S142A of FIG. 10;
FIG. 13 is a flowchart of sub-steps involved in step S1421A of FIG. 12;
fig. 14 is a schematic diagram of an alignment operation performed based on point cloud information according to an embodiment of the present disclosure;
FIG. 15 is another flowchart of the substeps involved in step S142 of FIG. 10;
FIG. 16 is a flowchart of sub-steps involved in step S1421B of FIG. 15;
fig. 17 is a schematic diagram of a comparison operation performed based on two-dimensional image information according to an embodiment of the present application;
fig. 18 is a flowchart of a control method in a control information generation method according to an embodiment of the present application;
FIG. 19 is a schematic diagram of controlling an avatar according to an embodiment of the present application;
FIG. 20 is a flowchart of sub-steps involved in step S230 of FIG. 18;
fig. 21 is a block diagram of an electronic device according to an embodiment of the present application;
fig. 22 is a functional block diagram of a control information generation device based on a personalized expression base according to an embodiment of the present application.
Reference numerals: 100-server; 200-image acquisition device; 300-live broadcast providing end; 400-live broadcast receiving end; 110-storage medium; 120-processor; 130-control information generating device based on personalized expression base; 131-first obtaining module; 132-splitting module; 133-second obtaining module; 134-generating module; 140-communication interface.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
It should be noted that the features of the embodiments of the present application may be combined with each other without conflict.
Referring to fig. 1, a schematic view of an application scenario of the method for generating control information based on a personalized expression base according to the embodiment of the present application is shown, where the application scenario includes a server 100 and a plurality of image capturing devices 200 communicatively connected to the server 100. The plurality of image capturing apparatuses 200 may include an apparatus for capturing a two-dimensional image, such as a camera, and may further include an apparatus for capturing a depth image, such as a depth camera.
In this embodiment, each image capturing device 200 may send the captured image information or video information to the server 100, and analyze and process the received image information or video information through the server 100 to obtain control information that may be used to control the avatar.
Referring to fig. 2, an application scenario of the control information generation method according to the embodiment of the present application may further include a live broadcast providing end 300 and a live broadcast receiving end 400, and the server 100 may be a live broadcast server. The live broadcast providing end 300 and the live broadcast receiving end 400 may be communicatively connected to the live broadcast server. The live broadcast providing end 300 may be a terminal device (such as a mobile phone, a tablet computer, or a computer) used by the anchor during live broadcasting, and the live broadcast receiving end 400 may be a terminal device (such as a mobile phone, a tablet computer, or a computer) used by viewers to watch the live broadcast.
The live broadcast providing end 300 may send a live video stream to the live broadcast server, and viewers may access the live broadcast server through the live broadcast receiving end 400 to watch the live video. The live broadcast server can also receive a control instruction for the avatar sent by the live broadcast providing end 300, correspondingly control the avatar based on the obtained control information to generate a live video stream, and push the live video stream to the live broadcast receiving end 400.
Referring to fig. 3, an embodiment of the present application further provides a control information generation method based on a personalized expression base, which is applicable to an electronic device, such as the server 100 described above, for generating control information used to drive an avatar. The method steps defined by the flow of the control information generation method may be implemented by the electronic device. The specific flow shown in fig. 3 is described in detail below.
In step S110, a plurality of face images of the reference object in different states are obtained.
Step S120, splitting facial features contained in each facial image based on a plurality of preset expression base templates and the plurality of facial images to generate a plurality of personalized expression bases.
Step S130, obtaining video frames of the reference object under different set expressions.
Step S140, generating control information containing the personalized expression bases corresponding to the set expressions according to the video frames and the personalized expression bases, and storing the control information.
In this embodiment, a plurality of preset expression base templates are pre-stored in the electronic device, and each expression base template may be a base template drawn by an animation producer. Each expression base template contains a plurality of facial features, such as eyes, nose, mouth, and eyebrows. The corresponding facial features in different expression base templates may be in different states. For example, in some expression base templates the eyes are open, in others the eyes are closed; likewise, in some the mouth is open, in others the mouth is pouting, and so on.
By setting each facial feature to a different state, the preset expression base templates can each individually represent a different facial feature state.
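By way of illustration only (nothing below is from the patent), the shared-topology convention that makes such templates comparable can be sketched in a few lines of Python; every name and shape here is an assumption:

```python
# Minimal sketch, not the patent's implementation: expression base templates
# as meshes sharing one topology, so each template is fully described by its
# (N, 3) vertex array and templates can be compared vertex by vertex.
import numpy as np
from dataclasses import dataclass

@dataclass
class ExpressionBaseTemplate:
    name: str             # e.g. "neutral", "eyes_closed", "mouth_open"
    vertices: np.ndarray  # shape (N, 3); identical vertex order in every base

N = 5000                  # illustrative vertex count
neutral = ExpressionBaseTemplate("neutral", np.zeros((N, 3)))
eyes_closed = ExpressionBaseTemplate("eyes_closed", neutral.vertices.copy())
mouth_open = ExpressionBaseTemplate("mouth_open", neutral.vertices.copy())
templates = [neutral, eyes_closed, mouth_open]  # one feature varies per base
```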
Because the preset expression base templates are usually produced as uniform templates rather than for a particular person, it is difficult to determine the control information by comparing them directly against a specific person's facial information. Therefore, in this embodiment, personalized expression bases for specific persons need to be established based on the preset expression base templates.
In this embodiment, the image capturing apparatus 200 may be used to capture a plurality of facial images of the reference object in different states. The reference object may be any human subject, for example a female model or a male model. Of course, the reference object can be chosen with the avatar to be driven in mind. For example, the avatar may be a cartoon character, a lolita-style character, a hero character, and so on. If a lolita-style avatar needs to be driven, a female model can be selected as the reference object, which reduces the complexity of the subsequent adjustment operations to a certain extent.
In this embodiment, the face images of the reference object in different states, such as an angry state, a laugh state, a sad state, and the like, may be collected. The obtained face images in various states contain a plurality of face features. Based on the setting mode of each facial feature in a plurality of preset expression base templates, the facial features in the facial image can be split to generate a plurality of personalized expression bases.
For example, in a face image of a reference object which appears as an angry state, an eyebrow feature, a mouth feature, an eye feature, a nose feature, and the like in that state are included. The features are extracted separately, and each feature can correspondingly obtain a personalized expression base, that is, each personalized expression base can be used for reflecting information of a certain facial feature.
By the method, the facial features of the reference object in different states can be separated independently, and each facial feature can be characterized independently in a personalized expression base mode.
On the basis, video frames of the reference object under different set expressions can be collected, wherein the set expressions can be set according to requirements, such as yawning, blinking and other expressions. And subsequently, correspondingly controlling the virtual image to display the corresponding set expression based on the information of the set expression.
The obtained video frame contains the facial information of the reference object under the set expression, facial information that is jointly embodied by all of the facial features of the face. The obtained personalized expression bases, in turn, each independently characterize one facial feature. Therefore, through comparative analysis of the video frame against the personalized expression bases, the information in the video frame of the reference object can be expressed in terms of the personalized expression bases. Moreover, the video frame of the reference object should be expressible as some combination of the personalized expression bases; that is, control information containing the personalized expression bases and corresponding to the set expression can be obtained.
By processing different set expressions in the above manner, control information corresponding to each set expression can be obtained, and the control information can be stored. During subsequent live broadcast, the virtual image can be controlled by utilizing the stored control information, so that the virtual image can show corresponding expressions, and the live broadcast interest is increased.
According to the control information generation method provided by the embodiment, the personalized expression base which reflects the characteristics of the reference object can be generated based on the expression base template, and the set expression information represented by the personalized expression base can be obtained based on the video frames and the personalized expression base of the reference object under different set expressions, so that the control information generation method is used for controlling the subsequent virtual image. By adopting the mode, the control information can be obtained only by adopting the image acquisition equipment 200 and combining the processing mode, the equipment complexity is reduced, excessive interference is avoided, and the obtained control information can accurately reflect each set expression.
In this embodiment, in order to enable the avatar to vividly display different expressions, the constructed avatar is generally a 3D avatar, and accordingly, each preset expression base template is a 3D model. Therefore, referring to fig. 4, in this embodiment, when generating the personalized expression base, the following method may be implemented:
and step S121, constructing three-dimensional models in different states according to the plurality of face images.
And S122, splitting facial features in the three-dimensional models based on a plurality of preset expression base templates and a plurality of three-dimensional models to generate a plurality of personalized expression bases.
In this embodiment, a three-dimensional model of the reference object in different states can be first constructed. For each state, a plurality of target face images in that state are obtained, the plurality of target face images being images taken from different perspectives. And performing three-dimensional reconstruction based on the multiple target face images, and performing topology processing on a model obtained by the three-dimensional reconstruction to obtain a three-dimensional model in the state.
In the present embodiment, when image capturing of the reference object is performed, a plurality of image capturing apparatuses 200 may be used to capture images from different angles, for example, a view angle facing the face of the object, a left side view angle, a bottom view angle, a right side view angle, and the like, as shown in fig. 5. Thus, in a certain state, for example, an angry state, a plurality of face images of the reference object in the state can be captured at different viewing angles. Based on the face images from different perspectives, the face of the reference object can be reconstructed three-dimensionally, and the resulting model is shown in fig. 6.
In addition, the model obtained by three-dimensional reconstruction is composed of a large number of points, whose distribution may be non-uniform and whose positions may not be consistent with standard reference positions. To ease subsequent processing and to normalize the constructed model, topology processing can therefore be applied to the reconstructed model. The topology processing may include adjusting the points of the three-dimensional model so that, after adjustment, the points are uniformly distributed and their positions are consistent with the standard reference positions, as shown in fig. 7.
By the above method, three-dimensional models in different states can be constructed from the multiple facial images, and the facial features in each three-dimensional model can be split based on the multiple preset expression base templates and the obtained three-dimensional models, thereby generating the multiple personalized expression bases.
In this embodiment, the preset expression base template may include a reference template and a plurality of variation templates, as shown in fig. 8, where the reference template is a template in the case of a non-expressive face (each facial feature is also non-expressive), and each variation template is a template having a facial feature that varies from the corresponding facial feature of the reference template.
For example, as shown in fig. 8, the first expression base template may be the reference template, in which the face is expressionless, and the following expression base templates are variation templates; in the second expression base template, only the mouth differs from the first, while the other facial features are unchanged. That is, each variation template can individually represent the information of one facial feature.
Accordingly, in this embodiment, the captured facial image of the reference object also includes a facial image of the reference object in a case where the reference object is not expressive, the plurality of created personalized expression bases include a reference expression base and a plurality of modified expression bases, and each modified expression base has a facial feature different from a corresponding facial feature in the reference expression base compared with the reference expression base.
For example, as shown in fig. 9, the first personalized expression base is a reference expression base, and the latter personalized expression bases are variable expression bases. Each facial feature in the first personalized expression base characterizes a feature in a non-expressive state, for example, only the mouth feature in the second personalized expression base is different from the first personalized expression base.
By the method, the expression base group comprising the reference expression base and the plurality of variable expression bases can be obtained.
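The patent does not spell out the splitting algorithm itself. A common way to realize "each variable base differs from the reference in exactly one facial feature" is to carry each template's per-vertex offset over to the subject's retopologized neutral scan; the following Python sketch assumes that convention and that all meshes share one topology:

```python
import numpy as np

def personalize_bases(subject_neutral, template_neutral, template_variants):
    """Carry each template's per-vertex offset onto the subject's neutral
    mesh. All meshes must share one topology (same vertex count and order).

    subject_neutral:   (N, 3) reconstructed, retopologized neutral face
    template_neutral:  (N, 3) reference expression base template
    template_variants: list of (N, 3) variation templates
    Returns the reference expression base plus one variable base per template.
    """
    personalized = [subject_neutral]
    for variant in template_variants:
        offset = variant - template_neutral     # the one feature that changed
        personalized.append(subject_neutral + offset)
    return personalized
```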
On this basis, the image capturing apparatus 200 may be reused to capture a video frame of the reference object, the image capturing apparatus 200 may be a depth camera, and the captured video frame may be a 3D image. Alternatively, video frames of the reference object at different set expressions may be captured.
For a certain set expression of the reference object, the set expression can be embodied by the expression base set through analysis of the video frame. In detail, referring to fig. 10, in the present embodiment, the control information that can embody each set expression can be obtained in the following manner.
And step S141, endowing each expression base with a base coefficient, and combining to obtain a combined expression base.
And step S142, comparing the video frame with the combined expression base, and adjusting each base coefficient based on the comparison result until a base coefficient combination finally corresponding to the set expression is obtained when a preset requirement is met.
And step S143, generating control information based on the base coefficient combination and the personalized expression base.
As can be seen from the above, each personalized expression base can individually represent a certain facial feature, and under a given set expression, the facial information corresponding to that expression should be jointly represented by the personalized expression bases. The personalized expression bases reflect the set expression in certain coefficient proportions; that is, each expression base is assigned a base coefficient and the bases are combined to obtain a combined expression base.
The combined expression base includes the reference expression base and the variable expression bases, where the base coefficient of the reference expression base may be fixed at 1; what this embodiment needs to do is optimize the base coefficients of the variable expression bases. The final purpose of the optimization is to make the difference between the combined expression base and the video frame as small as possible; that is, when the difference between the combined expression base and the video frame falls below a preset threshold, the preset requirement can be judged to be met, and the base coefficients at that moment are the base coefficients finally corresponding to the set expression.
In this embodiment, the initial base coefficient of each variable expression base may be set to 0. The resulting combined expression base is compared with the video frame, each base coefficient is adjusted according to the comparison result, and the combined expression base produced by the adjusted coefficients is compared with the video frame again. After multiple such adjustments, once the preset requirement is met, the base coefficient combination meeting the requirement is obtained. As shown in fig. 11, the base coefficient combination may consist of Pexp in fig. 11 together with the coefficient 1 of the first, reference expression base, and the resulting combined expression base is the rightmost one in fig. 11.
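As a hedged illustration of steps S141 and S142 (the patent does not disclose a concrete solver), the combination and the coefficient search can be sketched as follows; the gradient step, learning rate, and tolerance are placeholders, and the captured frame is assumed to be already in vertex correspondence with the bases:

```python
import numpy as np

def combine(reference, variants, coeffs):
    """Combined expression base: the reference base (coefficient fixed at 1)
    plus the coefficient-weighted offset of each variable expression base."""
    combined = reference.copy()
    for w, variant in zip(coeffs, variants):
        combined += w * (variant - reference)
    return combined

def fit_coefficients(frame_points, reference, variants,
                     n_iters=200, lr=0.05, tol=1e-4):
    """Start every variable coefficient at 0 and adjust until the combined
    base matches the captured frame; a plain gradient step stands in for
    whatever solver is actually used. All arrays are (N, 3)."""
    coeffs = np.zeros(len(variants))
    deltas = np.stack([v - reference for v in variants])   # (K, N, 3)
    for _ in range(n_iters):
        residual = combine(reference, variants, coeffs) - frame_points
        if np.mean(residual ** 2) < tol:                   # preset requirement
            break
        # gradient of 0.5 * sum(residual**2) w.r.t. each coefficient
        grad = np.tensordot(deltas, residual, axes=([1, 2], [0, 1]))
        coeffs = np.clip(coeffs - lr * grad, 0.0, 1.0)     # keep weights sane
    return coeffs
```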
And the combination of the plurality of personalized expression bases and the base coefficients can jointly form control information, and the virtual image can be controlled based on the control information subsequently.
In this embodiment, as can be seen from the above description, the video frame acquired by the depth camera is a 3D image, and the video frame includes point cloud information of three-dimensional feature points. When the video frame is compared with the combined expression base to determine the base coefficient, the base coefficient can be determined based on point cloud information in the video frame. In detail, referring to fig. 12, in the present embodiment, the base coefficient may be determined in the following manner.
Step S1421A, performing alignment operation on the point cloud information of the three-dimensional feature points and the feature points in the combined expression base.
Step S1422A, adjusting each of the basis coefficients according to the result of the alignment operation until a basis coefficient combination finally corresponding to the set expression is obtained when a preset requirement is met.
In this embodiment, the combined expression base obtained in the above manner is also a 3D model, the combined expression base includes a plurality of feature points, and information of each feature point is three-dimensional information. The point cloud information obtained based on the shot video frame includes a plurality of three-dimensional points of the face model of the reference object under the set expression.
Because the combined expression base is obtained by combining a plurality of personalized expression bases with corresponding base coefficients, and the face model in the video frame is obtained by shooting, the two are greatly different under the condition that the base coefficients are not optimized. Therefore, the characteristic points in the combined expression base and the three-dimensional points in the point cloud information are aligned continuously, namely the base coefficient is adjusted continuously. Alternatively, referring to fig. 13, the process of aligning based on the point cloud information may be implemented as follows:
step S14211A, for each feature point in the combined expression base, determining a three-dimensional feature point corresponding to the feature point in the point cloud information of the three-dimensional feature point, and performing position alignment constraint on the corresponding feature point and the three-dimensional feature point on a coordinate value.
Step S14212A, determining a plane where the three-dimensional feature point corresponding to the feature point is located, and performing distance constraint on the feature point and the plane.
Step S14213A, obtaining an alignment operation result according to the results of the position alignment constraint and the distance constraint.
In this embodiment, the number of the feature points and the positions of the feature points in the obtained combined expression base may be different from the three-dimensional feature points in the point cloud information. For each feature point in the combined expression base, the three-dimensional feature point with the minimum difference with the feature point in the three-dimensional feature points can be found, and the found three-dimensional feature point is used as the three-dimensional feature point corresponding to the feature point.
And performing position alignment constraint on the coordinate values of the feature points and the corresponding three-dimensional feature points in a point-to-point manner, namely, reducing the difference between the feature points and the three-dimensional feature points on the coordinate values.
In addition, for a certain three-dimensional feature point, the three-dimensional feature point and other adjacent three-dimensional feature points can form a plane, and the difference between the corresponding feature point and the three-dimensional feature point is also reflected in the distance between the feature point and the plane where the three-dimensional feature point is located. Therefore, the distance between the feature point and the plane where the corresponding three-dimensional feature point is located can be continuously reduced by adjusting the base coefficient and further adjusting the mode of combining the expression bases.
With reference to fig. 14, the point-to-point position alignment constraint and the point-to-plane distance constraint described above are combined, and the base coefficients in the combined expression base are adjusted until the results of the position alignment constraint and the distance constraint meet the requirements.
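A possible form of this joint 3D term is sketched below, assuming per-point normals estimated from neighbouring scan points stand in for the plane construction described above; the brute-force nearest-point search and the weight lam are illustrative assumptions:

```python
import numpy as np

def alignment_loss(base_points, cloud_points, cloud_normals, lam=0.5):
    """3D alignment term: for every feature point of the combined expression
    base, find its nearest scan point, then penalize the point-to-point gap
    (position alignment constraint) and the distance to the plane through
    that scan point (distance constraint). cloud_normals holds one unit
    normal per scan point, standing in for the plane formed with its
    neighbouring points."""
    d2 = ((base_points[:, None, :] - cloud_points[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)                  # closest 3D feature points
    q = cloud_points[nearest]
    n = cloud_normals[nearest]
    point_term = ((base_points - q) ** 2).sum(axis=1)        # point-to-point
    plane_term = ((base_points - q) * n).sum(axis=1) ** 2    # point-to-plane
    return float((point_term + lam * plane_term).mean())
```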
In addition, in the present embodiment, by mapping the video frame in the form of a 3D image into the two-dimensional coordinate system, corresponding two-dimensional image information can be obtained. That is, two-dimensional image information is also included in the video frame. When optimizing the basis coefficients in the combined expression basis, the optimization can also be performed in a manner based on two-dimensional image information, and in detail, referring to fig. 15, the optimization can be performed in the following manner:
step S1421B, comparing the two-dimensional image information with the two-dimensional information included in the combined expression base.
Step S1422B, adjusting each of the basis coefficients according to the result of the comparison operation until a basis coefficient combination finally corresponding to the set expression is obtained when a preset requirement is met.
In this embodiment, considering that two-dimensional image information is generally embodied by the positions of points and the pixel values of those points, while the obtained combined expression base is a model without pixel values, the comparison operation between the two-dimensional image information and the combined expression base can be realized in the following way; refer to fig. 16.
Step S14211B, rendering the combined expression base based on a preset texture map to obtain a comparison image.
Step S14212B, performing position comparison operation on the key points in the comparison image and the corresponding key points in the two-dimensional image information on coordinate values, and performing pixel comparison operation on pixel values.
Step S14213B, obtaining a comparison operation result based on the results of the position comparison operation and the pixel comparison operation.
In this embodiment, a preset texture map, i.e., a map carrying pixel information, may be obtained. The combined expression base is rendered with this texture map to obtain a comparison image, which is a two-dimensional image with pixel information.
The comparison image rendered from the preset texture map and the two-dimensional image information obtained from the captured video frame of the reference object differ in their pixel values, and this difference is mainly caused by differences in the positions of corresponding key points. The key points may be points at positions such as the eyeballs, the nose tip, the mouth corners, and the heads and tails of the eyebrows.
Therefore, the corresponding key points in the comparison image and the two-dimensional image information can be obtained, and the corresponding key points are compared on their coordinate values as well as on their pixel values. The final purpose of the comparison is to make the coordinate values of corresponding key points as consistent as possible and the pixel values as consistent as possible.
With reference to fig. 17, through the above key point constraint and pixel value constraint, the base coefficients in the combined expression base can be adjusted continuously until the base coefficient combination meeting the preset requirement is finally obtained.
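A sketch of this 2D comparison, with the rendering step assumed done and illustrative array shapes, might look like this:

```python
import numpy as np

def image_loss(rendered, frame_image, rendered_kps, frame_kps, lam=1.0):
    """2D comparison term: rendered is the combined expression base drawn
    with the preset texture map; frame_image is the captured video frame;
    the *_kps arrays hold (x, y) coordinates of corresponding key points
    (eyeballs, nose tip, mouth corners, brow heads and tails, ...)."""
    kp_term = ((rendered_kps - frame_kps) ** 2).sum(axis=1).mean()  # positions
    px_term = ((rendered.astype(np.float32)
                - frame_image.astype(np.float32)) ** 2).mean()      # pixels
    return float(kp_term + lam * px_term)
```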
When optimizing the base coefficients, the optimization may be performed based on the point cloud information alone, based on the two-dimensional image information alone, or based on a combination of the point cloud information and the two-dimensional image information.
If a mode of optimizing the base coefficients by combining the point cloud information and the two-dimensional image information is adopted, the base coefficients can be adjusted together according to the result of the alignment operation obtained based on the point cloud information and the result of the comparison operation obtained based on the two-dimensional image information until the preset requirement is met, and the base coefficient combination finally corresponding to the set expression is obtained.
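For example, a joint objective could simply weight the two terms; the weights below are assumptions, not values from the patent:

```python
def total_loss(loss_3d, loss_2d, w_3d=1.0, w_2d=0.1):
    # Hypothetical weighting of the point cloud alignment result and the
    # two-dimensional comparison result; tune per application.
    return w_3d * loss_3d + w_2d * loss_2d
```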
By the above method, the base coefficient combination corresponding to each set expression can be obtained, and the plurality of personalized expression bases are fixed. Different basis coefficient combinations are combined with the personalized expression basis, and then a plurality of different control information can be obtained. The obtained control information can be stored and can be directly called subsequently.
The above process is an offline production process that generates the corresponding control information for the different set expressions. In practical applications, for example during live broadcasting, the control information obtained by the above process can be used to control the constructed virtual image. Referring to fig. 18, in the present embodiment, the following method can be adopted:
step S210, in the live broadcast process, a control expression instruction aiming at the constructed virtual image is obtained, and a set expression corresponding to the control expression is obtained.
And step S220, extracting the stored control information corresponding to the set expression.
And step S230, controlling the virtual image to display the control expression according to the control information.
In this embodiment, the virtual image may be pre-constructed, and one or more virtual images may be constructed, for example a lolita-style character, a cartoon character, a hero character, and the like. During live broadcasting, in order to enrich the diversity and interest of the broadcast, the anchor can trigger the virtual image to display different expressions.
In this embodiment, the anchor may initiate a control expression instruction, and the control expression instruction may be used to instruct the avatar to display the control expression. After receiving the control expression instruction, the electronic device can obtain a set expression corresponding to the control expression.
For example, different trigger buttons can be set in the live application to correspond to different control expressions, and the anchor can click a trigger button to issue the corresponding control expression. Alternatively, an information input box can be provided in the live application; the anchor can enter the corresponding control expression in the input box and then confirm it in the corresponding trigger window to initiate the corresponding control expression instruction.
After the set expression corresponding to the control expression is determined, the control information corresponding to that set expression may be extracted from the control information generated in advance. For example, if the control expression is a smiling expression for the avatar, the control information corresponding to the smiling expression may be extracted from the plurality of pieces of control information generated in advance. The avatar is then controlled, based on the extracted control information, to display the control expression, for example a smiling expression, as shown in fig. 19.
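The storage and lookup step can be pictured as a simple keyed store; the structure, keys, coefficient values, and driver callback below are all assumptions for illustration:

```python
# Hypothetical keyed store: one entry per set expression, each holding the
# base coefficient combination produced offline. Keys and fields are
# illustrative, not the patent's data model.
control_store = {
    "smile": {"coeffs": [0.8, 0.1, 0.0]},
    "yawn":  {"coeffs": [0.0, 0.9, 0.4]},
}

def handle_expression_instruction(instruction, drive_avatar):
    """Map a control expression instruction to its set expression, extract
    the stored control information, and hand it to an avatar driver
    callback supplied by the caller."""
    set_expression = instruction["expression"]   # e.g. "smile"
    info = control_store[set_expression]         # extract stored control info
    drive_avatar(info)

# usage sketch
handle_expression_instruction({"expression": "smile"},
                              lambda info: print("applying", info["coeffs"]))
```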
In this embodiment, it is considered that an expression may not be limited to a single image frame but may need to be represented by multiple consecutive frames; that is, the expression is a dynamic process that may last several seconds. For example, a yawning expression, if it is to be displayed completely, may require multiple consecutive image frames.
Therefore, the video frames of the pre-collected reference object under a certain set expression may include a plurality of continuous video frames, and the control information corresponding to the set expression includes a plurality of control sub-information respectively corresponding to each video frame. That is, each control sub information may correspond to each video frame.
In this case, referring to fig. 20, when the avatar is controlled to display the control expression based on the control information, the following steps may be performed:
and S231, aiming at continuous multi-frame live video frames to be controlled in the live broadcasting process, obtaining control sub-information corresponding to each frame of live video frame in the control information.
Step S232, the virtual images in the corresponding live video frames are controlled by utilizing the control sub-information, so that the virtual images in the continuous multi-frame live video frames can display the control expression.
In this embodiment, the current time point is taken as a boundary point, and the continuous multi-frame live video frame to be controlled may be a multi-frame live video frame after the current time point. For example, if the avatar needs to be controlled to show the yawned expression, and the expression needs to be completely shown through continuous 5-frame live video frames, the continuous multi-frame live video frames to be controlled may be continuous 5-frame live video frames after the current time point.
It should be noted that the above is only an example, and the determination of the live video frame to be controlled may be performed according to an actual control requirement.
In this embodiment, the obtained control information includes a plurality of pieces of control sub information, and the control sub information corresponding to each frame of the live video frame can be obtained. And then, controlling the corresponding live video frame by utilizing each piece of control sub-information. Therefore, each piece of control sub-information can control the virtual image in the corresponding live video frame to be in a certain state, and the required control expression can be constructed in a plurality of continuous states, so that the virtual image in the continuous multi-frame video frame can display the control expression.
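A sketch of this per-frame application, under the assumption of a simple avatar driver API (set_coefficients and render are hypothetical names, not calls from the patent):

```python
def apply_control_info(control_info, live_frames, avatar):
    """Each piece of control sub-information (one per captured video frame of
    the set expression) drives the avatar in one of the consecutive live
    video frames after the current time point."""
    for frame, sub_info in zip(live_frames, control_info["sub_info"]):
        avatar.set_coefficients(sub_info)   # avatar state for this frame
        frame.render(avatar)                # composite avatar into the frame
```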
Optionally, as described above, the control information for a given set expression includes the personalized expression bases and a base coefficient combination. Weighting the personalized expression bases in the control information by their respective base coefficients yields a comprehensive expression base model.
The feature points in the comprehensive expression base model are defined in the same way as the feature points on the virtual image, and corresponding feature points in the two can be marked with the same serial number. For example, if an eye feature is constructed from N feature points in the comprehensive expression base model, the eye feature in the avatar is likewise constructed from N feature points.
Therefore, the coordinate information of the feature points in the comprehensive expression base model can be utilized to control the coordinate values of the corresponding feature points in the virtual image, so that each facial feature of the virtual image can be consistent with the state of the facial feature in the comprehensive expression base model, and further the full-face expression of the virtual image is consistent with the full-face expression of the comprehensive expression base.
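Under the shared-numbering assumption above, driving the avatar can be sketched as weighting the bases and copying coordinates across; shapes and names are illustrative:

```python
import numpy as np

def drive_avatar_points(avatar_points, reference, variants, coeffs, shared_ids):
    """Weight the personalized bases by their base coefficients to get the
    comprehensive expression base model, then copy the coordinates of the
    feature points that share a serial number with the avatar's feature
    points. All arrays are (N, 3); shared_ids indexes the common points."""
    composite = reference.copy()
    for w, variant in zip(coeffs, variants):
        composite += w * (variant - reference)
    avatar_points = avatar_points.copy()
    avatar_points[shared_ids] = composite[shared_ids]  # same numbering scheme
    return avatar_points
```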
The control information generation method based on the personalized expression base provided by this embodiment can generate personalized expression bases that embody the characteristics of the reference object from the expression base templates, and obtain control information expressed in terms of the personalized expression bases from the video frames of the different set expressions, so that the virtual image can subsequently be controlled with the control information to display the corresponding expressions. In this scheme, the control information can be generated without auxiliary equipment, which reduces equipment complexity and avoids excessive interference; and because the personalized expression bases generated from the expression base templates are more standardized, the obtained control information is more standardized and accurately reflects each set expression.
Furthermore, in this embodiment, the pre-generated control information can be extracted directly during live broadcasting to control the virtual image. This control method is convenient and quick, does not need to be bound to the anchor's own state, and avoids interfering with the anchor. It can also enhance the interactivity and interest of the broadcast and improve viewers' immersive experience.
Referring to fig. 21, a schematic diagram of exemplary components of an electronic device according to an embodiment of the present application is shown, where the electronic device may be the server 100 shown in fig. 1. The electronic device may include a storage medium 110, a processor 120, a control information generating apparatus 130 based on a personalized emoticon, and a communication interface 140. In this embodiment, the storage medium 110 and the processor 120 are both located in the electronic device and are separately disposed. However, it should be understood that the storage medium 110 may be separate from the electronic device and may be accessed by the processor 120 through a bus interface. Alternatively, the storage medium 110 may be integrated into the processor 120, for example, may be a cache and/or general purpose registers.
The control information generating device 130 based on the personalized expression base may be understood as the electronic device, or the processor 120 of the electronic device, or may be understood as a software functional module that is independent of the electronic device or the processor 120 and implements the control information generating method based on the personalized expression base under the control of the electronic device.
As shown in fig. 22, the control information generating apparatus 130 based on the personalized expression base may include a first obtaining module 131, a splitting module 132, a second obtaining module 133, and a generating module 134. The functions of the individual function modules of the control information generating device 130 based on the personalized expression base will be described in detail below.
The first obtaining module 131 is configured to obtain multiple face images of the reference object in different states.
It is understood that the first obtaining module 131 can be used to execute the step S110, and for the detailed implementation of the first obtaining module 131, reference can be made to the contents related to the step S110.
The splitting module 132 is configured to split facial features included in each facial image based on a plurality of preset expression base templates and the plurality of facial images, so as to generate a plurality of personalized expression bases.
It is understood that the splitting module 132 can be used to perform the step S120, and for the detailed implementation of the splitting module 132, reference can be made to the above-mentioned contents related to the step S120.
A second obtaining module 133, configured to obtain video frames of the reference object under different set expressions.
It is understood that the second obtaining module 133 can be used to perform the step S130, and for the detailed implementation of the second obtaining module 133, reference can be made to the content related to the step S130.
A generating module 134, configured to generate, according to the video frame and the personalized expression base, control information including the personalized expression base corresponding to each set expression, and store the control information.
It is understood that the generating module 134 can be used to execute the step S140, and for the detailed implementation of the generating module 134, reference can be made to the above description about the step S140.
In a possible implementation manner, the control information generating apparatus further includes a control module, where the control module is configured to:
in the live broadcast process, obtaining a control expression instruction for the constructed virtual image, and obtaining the set expression corresponding to the control expression;
extracting the stored control information corresponding to the set expression;
and controlling the virtual image to display the control expression according to the control information; a minimal sketch of this flow follows.
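For illustration, the extract-then-control flow might look like the following (the storage format and all names are hypothetical; the patent leaves them open):

```python
# Pre-generated control information stored per set expression
# (hypothetical structure and contents).
control_store: dict[str, dict] = {
    "smile": {"coeffs": [0.8, 0.1, 0.0], "bases": "personalized_v1"},
}

def handle_control_expression_instruction(expression_name: str) -> dict:
    """Extract the stored control information matching the set expression
    named in a control expression instruction received during the live
    broadcast; the caller then drives the virtual image with it."""
    return control_store[expression_name]
```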
In a possible implementation manner, the generating module 134 may specifically be configured to:
assigning a base coefficient to each personalized expression base and combining the personalized expression bases to obtain a combined expression base;
comparing the video frame with the combined expression base, and adjusting each base coefficient based on the comparison result until a base coefficient combination finally corresponding to the set expression is obtained when a preset requirement is met;
and generating control information based on the base coefficient combination and the personalized expression base; a sketch of this fitting loop follows.
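Under a common linear blendshape assumption (combined geometry equals the reference geometry plus coefficient-weighted offsets), the coefficient adjustment can be sketched as a small gradient-descent fit; this is illustrative only, not the patent's prescribed solver:

```python
import numpy as np

def combine(reference: np.ndarray, deltas: np.ndarray,
            coeffs: np.ndarray) -> np.ndarray:
    """Combined expression base: reference geometry (N, 3) plus
    coefficient-weighted variable-base offsets deltas (K, N, 3)."""
    return reference + np.tensordot(coeffs, deltas, axes=1)

def fit_coefficients(reference, deltas, target, iters=200, lr=0.05, tol=1e-3):
    """Adjust each base coefficient until the combined expression base is
    close enough to the geometry observed in the video frame (the 'preset
    requirement'); returns the final base coefficient combination."""
    coeffs = np.zeros(len(deltas))
    for _ in range(iters):
        residual = combine(reference, deltas, coeffs) - target  # comparison result
        if np.linalg.norm(residual) < tol:                      # preset requirement met
            break
        grad = np.tensordot(deltas, residual, axes=([1, 2], [0, 1]))
        coeffs = np.clip(coeffs - lr * grad, 0.0, 1.0)          # keep weights in [0, 1]
    return coeffs
```

In practice the adjustment could just as well be a nonlinear least-squares solver; plain gradient descent is used here only to keep the sketch short.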
In a possible implementation manner, the video frame includes point cloud information of three-dimensional feature points, and the generating module 134 may generate the base coefficient combination by:
aligning the point cloud information of the three-dimensional feature points with the feature points in the combined expression base;
and adjusting each base coefficient according to the result of the alignment operation until a base coefficient combination finally corresponding to the set expression is obtained when a preset requirement is met.
In a possible implementation manner, the generating module 134 may specifically be configured to:
for each feature point in the combined expression base, determining the corresponding three-dimensional feature point in the point cloud information of the three-dimensional feature points, and applying a position alignment constraint on the coordinate values of the feature point and its corresponding three-dimensional feature point;
determining the plane in which the corresponding three-dimensional feature point lies, and applying a distance constraint between the feature point and that plane;
and obtaining the alignment operation result from the results of the position alignment constraint and the distance constraint; a schematic energy combining the two constraints is sketched below.
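One plausible formulation of the two constraints is a point-to-point term plus a point-to-plane term (the weights and the use of a single summed energy are assumptions of this sketch):

```python
import numpy as np

def alignment_energy(model_pts: np.ndarray, cloud_pts: np.ndarray,
                     cloud_normals: np.ndarray,
                     w_point: float = 1.0, w_plane: float = 0.5) -> float:
    """model_pts:     (M, 3) feature points of the combined expression base
    cloud_pts:     (M, 3) corresponding three-dimensional feature points
    cloud_normals: (M, 3) unit normals of the planes containing those points
    """
    diff = model_pts - cloud_pts
    # Position alignment constraint: coordinate-value distance between each
    # feature point and its corresponding three-dimensional feature point.
    e_point = np.sum(diff ** 2)
    # Distance constraint: distance from each feature point to the plane in
    # which its corresponding three-dimensional feature point lies.
    e_plane = np.sum(np.einsum("ij,ij->i", diff, cloud_normals) ** 2)
    return w_point * e_point + w_plane * e_plane  # alignment operation result
```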
In a possible implementation manner, the video frame includes two-dimensional image information, and the generating module 134 is further configured to generate the base coefficient combination by:
comparing the two-dimensional image information with the two-dimensional information contained in the combined expression base;
and adjusting each base coefficient according to the comparison operation result until a base coefficient combination finally corresponding to the set expression is obtained when a preset requirement is met.
In a possible implementation manner, the generating module 134 may specifically be configured to:
rendering the combined expression base based on a preset texture map to obtain a comparison image;
performing a position comparison of the coordinate values of the key points in the comparison image against the corresponding key points in the two-dimensional image information, and a pixel comparison of the pixel values of the two;
and obtaining the comparison operation result based on the results of the position comparison and the pixel comparison; both terms are sketched below.
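Assuming the combined expression base has already been rendered with the preset texture map by some external renderer, the two comparison terms might be combined as follows (the weights are illustrative):

```python
import numpy as np

def comparison_result(render_img: np.ndarray, frame_img: np.ndarray,
                      render_kpts: np.ndarray, frame_kpts: np.ndarray,
                      w_pos: float = 1.0, w_pix: float = 0.01) -> float:
    """render_img / frame_img: (H, W, 3) comparison image and video frame
    render_kpts / frame_kpts: (P, 2) corresponding key-point coordinates
    """
    # Position comparison: coordinate-value differences of corresponding key points.
    pos_term = np.sum((render_kpts - frame_kpts) ** 2)
    # Pixel comparison: pixel-value differences between the two images.
    pix_term = np.mean(np.abs(render_img.astype(np.float64) -
                              frame_img.astype(np.float64)))
    return w_pos * pos_term + w_pix * pix_term  # comparison operation result
```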
In a possible implementation manner, the video frames under a set expression comprise multiple consecutive video frames, and the control information corresponding to the set expression comprises multiple pieces of control sub-information, each corresponding to one of those video frames. The control module is specifically configured to:
obtaining, for the consecutive live video frames to be controlled in the live broadcast process, the control sub-information in the control information that corresponds to each live video frame;
and controlling the virtual image in each corresponding live video frame with its piece of control sub-information, so that the virtual image displays the control expression across the consecutive live video frames; a per-frame sketch follows.
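Continuing the linear-blendshape assumption from the earlier sketch, each piece of control sub-information could simply be that frame's base coefficient combination, applied frame by frame (names hypothetical):

```python
import numpy as np

def apply_sub_info(reference: np.ndarray, deltas: np.ndarray,
                   sub_info: np.ndarray) -> np.ndarray:
    """One piece of control sub-information: the base coefficient
    combination for a single video frame (assumed representation)."""
    return reference + np.tensordot(sub_info, deltas, axes=1)

def drive_live_segment(reference: np.ndarray, deltas: np.ndarray,
                       sub_infos: list) -> list:
    """Drive the virtual image over consecutive live video frames: each
    frame gets the geometry produced by its own control sub-information."""
    return [apply_sub_info(reference, deltas, s) for s in sub_infos]
```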
In one possible implementation, the splitting module 132 may be configured to:
constructing three-dimensional models in different states according to the plurality of face images;
splitting facial features in each three-dimensional model based on a plurality of preset expression base templates and a plurality of three-dimensional models to generate a plurality of personalized expression bases;
the personalized expression bases comprise a reference expression base and a plurality of variable expression bases, each variable expression base having one facial feature that differs from the corresponding facial feature in the reference expression base; a minimal data-layout sketch follows.
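A minimal way to hold such bases, assuming the reference base is a neutral face and each variable base is stored as its offset from the reference (a layout this sketch assumes, not one the patent fixes):

```python
import numpy as np

class PersonalizedExpressionBases:
    """Reference expression base plus variable expression bases, each
    differing from the reference in exactly one facial feature."""

    def __init__(self, reference: np.ndarray, variants: list):
        self.reference = reference  # (N, 3) neutral-face geometry
        # Store each variable base as its offset from the reference, so a
        # base coefficient of 1.0 reproduces that variable base exactly.
        self.deltas = np.stack([v - reference for v in variants])  # (K, N, 3)
```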
In a possible implementation manner, the splitting module 132 may specifically be configured to:
for each state, acquiring a plurality of target face images in the state, wherein the target face images are images shot from different perspectives;
and performing three-dimensional reconstruction based on the multiple target face images, and performing topology processing on a model obtained by the three-dimensional reconstruction to obtain the three-dimensional model in the state.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Further, an embodiment of the present application also provides a computer-readable storage medium storing machine-executable instructions which, when executed, implement the control information generation method based on the personalized expression base provided by the foregoing embodiments.
Specifically, the computer-readable storage medium can be a general storage medium, such as a removable disk or a hard disk, and when the computer program on it is executed, the control information generation method based on the personalized expression base can be performed. For the processes involved when the executable instructions in the computer-readable storage medium are executed, reference may be made to the related descriptions in the above method embodiments, which are not repeated here.
To sum up, the embodiments of the present application provide a control information generation method and device based on personalized expression bases, an electronic device, and a readable storage medium. A plurality of facial images of a reference object in different states are obtained, and the facial features contained in each facial image are split based on a plurality of preset expression base templates to generate a plurality of personalized expression bases. Video frames of the reference object under different set expressions are then obtained, control information containing the personalized expression bases is generated for each set expression from the video frames and the personalized expression bases, and the control information is stored. In this scheme, personalized expression bases reflecting the characteristics of the reference object can be generated from the expression base templates, and control information expressed in terms of the personalized expression bases can be obtained from those bases and the video frames of different set expressions, so that the control information can subsequently be used to control the virtual image. Equipment complexity is thereby reduced, excessive interference is avoided, and the obtained control information accurately reflects each set expression.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A control information generation method based on a personalized expression base is characterized by comprising the following steps:
obtaining a plurality of face images of a reference object in different states;
splitting facial features contained in each facial image based on a plurality of preset expression base templates and the plurality of facial images to generate a plurality of personalized expression bases;
obtaining video frames of the reference object under different set expressions;
and generating control information containing the personalized expression bases corresponding to the set expressions according to the video frames and the personalized expression bases, and storing the control information.
2. The method for generating control information based on the personalized expression base according to claim 1, wherein the method further comprises:
in the live broadcast process, obtaining a control expression instruction for the constructed virtual image, and obtaining the set expression corresponding to the control expression;
extracting the stored control information corresponding to the set expression;
and controlling the virtual image to display the control expression according to the control information.
3. The method for generating control information based on the personalized expression base according to claim 1, wherein the step of generating the control information containing the personalized expression base corresponding to the set expression according to the video frame and the personalized expression base comprises:
assigning a base coefficient to each personalized expression base and combining the personalized expression bases to obtain a combined expression base;
comparing the video frame with the combined expression base, and adjusting each base coefficient based on a comparison result until a base coefficient combination finally corresponding to the set expression is obtained when a preset requirement is met;
and generating control information based on the base coefficient combination and the personalized expression base.
4. The method for generating control information based on the personalized expression base according to claim 3, wherein the video frame comprises point cloud information of three-dimensional feature points;
the step of comparing the video frame with the combined expression base and adjusting each base coefficient based on the comparison result until a base coefficient combination finally corresponding to the set expression is obtained when a preset requirement is met comprises the following steps:
aligning the point cloud information of the three-dimensional feature points with the feature points in the combined expression base;
and adjusting each base coefficient according to the result of the alignment operation until a base coefficient combination finally corresponding to the set expression is obtained when a preset requirement is met.
5. The method for generating control information based on the personalized expression base according to claim 4, wherein the step of aligning the point cloud information of the three-dimensional feature points with the feature points in the combined expression base comprises:
for each feature point in the combined expression base, determining the corresponding three-dimensional feature point in the point cloud information of the three-dimensional feature points, and applying a position alignment constraint on the coordinate values of the feature point and its corresponding three-dimensional feature point;
determining the plane in which the corresponding three-dimensional feature point lies, and applying a distance constraint between the feature point and that plane;
and obtaining the alignment operation result from the results of the position alignment constraint and the distance constraint.
6. The method for generating control information based on personalized expression bases according to claim 3 or 4, wherein the video frame comprises two-dimensional image information;
the step of comparing the video frame with the combined expression base and adjusting each base coefficient based on the comparison result until a base coefficient combination finally corresponding to the set expression is obtained when a preset requirement is met comprises:
comparing the two-dimensional image information with the two-dimensional information contained in the combined expression base;
and adjusting each base coefficient according to the comparison operation result until a base coefficient combination finally corresponding to the set expression is obtained when a preset requirement is met.
7. The method for generating control information based on the personalized expression base according to claim 6, wherein the step of comparing the two-dimensional image information with the two-dimensional information contained in the combined expression base comprises:
rendering the combined expression base based on a preset texture map to obtain a comparison image;
performing a position comparison of the coordinate values of the key points in the comparison image against the corresponding key points in the two-dimensional image information, and a pixel comparison of the pixel values of the two;
and obtaining a comparison operation result based on the results of the position comparison operation and the pixel comparison operation.
9. The method according to claim 2, wherein the video frames under the set expression comprise a plurality of consecutive video frames, and the control information corresponding to the set expression comprises a plurality of pieces of control sub-information respectively corresponding to the video frames;
the step of controlling the avatar to display the control expression according to the control information comprises the following steps:
obtaining, for the consecutive live video frames to be controlled in the live broadcast process, the control sub-information in the control information that corresponds to each live video frame;
and controlling the virtual image in each corresponding live video frame with its piece of control sub-information, so that the virtual image displays the control expression across the consecutive live video frames.
9. The method for generating control information based on personalized expression bases according to claim 1, wherein the step of splitting facial features included in each facial image based on a plurality of preset expression base templates and the plurality of facial images to generate a plurality of personalized expression bases comprises:
constructing three-dimensional models in different states according to the plurality of face images;
splitting facial features in each three-dimensional model based on a plurality of preset expression base templates and a plurality of three-dimensional models to generate a plurality of personalized expression bases;
the personalized expression bases comprise a reference expression base and a plurality of variable expression bases, each variable expression base having one facial feature that differs from the corresponding facial feature in the reference expression base.
10. The method for generating control information based on personalized expression bases according to claim 9, wherein the step of constructing three-dimensional models in different states according to the plurality of facial images comprises:
for each state, acquiring a plurality of target face images in the state, wherein the target face images are images shot from different perspectives;
and performing three-dimensional reconstruction based on the multiple target face images, and performing topology processing on a model obtained by the three-dimensional reconstruction to obtain the three-dimensional model in the state.
11. An apparatus for generating control information based on a personalized expression base, the apparatus comprising:
the first obtaining module is used for obtaining a plurality of face images of the reference object in different states;
the splitting module is used for splitting facial features contained in each facial image based on a plurality of preset expression base templates and the plurality of facial images to generate a plurality of personalized expression bases;
the second obtaining module is used for obtaining video frames of the reference object under different set expressions;
and the generating module is used for generating control information which corresponds to each set expression and contains the personalized expression base according to the video frame and the personalized expression base, and storing the control information.
12. An electronic device comprising one or more storage media and one or more processors in communication with the storage media, the one or more storage media storing machine-executable instructions which, when the electronic device runs, are executed by the processors to perform the method steps of any one of claims 1-10.
13. A computer-readable storage medium, characterized in that it stores machine-executable instructions which, when executed, implement the method steps of any one of claims 1-10.
CN202110519161.1A 2021-05-12 2021-05-12 Control information generation method and device based on personalized expression base, electronic equipment and readable storage medium Pending CN113192165A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110519161.1A CN113192165A (en) 2021-05-12 2021-05-12 Control information generation method and device based on personalized expression base, electronic equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN113192165A true CN113192165A (en) 2021-07-30

Family

ID=76981523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110519161.1A Pending CN113192165A (en) 2021-05-12 2021-05-12 Control information generation method and device based on personalized expression base, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113192165A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination