CN116958496A - Virtual character expression driving method and related device - Google Patents


Info

Publication number
CN116958496A
CN116958496A
Authority
CN
China
Prior art keywords
driving parameter
parameter assignment
driving
assignment
facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310097500.0A
Other languages
Chinese (zh)
Inventor
唐敏凯
宋巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN202310097500.0A
Publication of CN116958496A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G06T13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 - Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T17/205 - Re-meshing
    • G06T2219/00 - Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 - Indexing scheme for editing of 3D models
    • G06T2219/2021 - Shape modification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses a virtual character expression driving method and a related apparatus. The method comprises: acquiring face pose data of a target object; comparing the face pose data with multiple sets of basic face pose data respectively, and determining a reference driving parameter assignment according to the comparison results; detecting whether the reference facial expression corresponding to the reference driving parameter assignment satisfies a preset facial state condition, and if so, determining the reference driving parameter assignment as the target driving parameter assignment, or if not, correcting the reference driving parameter assignment based on a driving parameter correction algorithm to obtain the target driving parameter assignment, where the driving parameter correction algorithm comprises a parameter assignment limiting algorithm and a parameter assignment adjustment algorithm determined based on normal facial expressions; and driving the virtual character corresponding to the target object to present the facial expression according to the target driving parameter assignment. The method can improve the accuracy of the driving parameter assignments and thereby the presentation effect of the virtual character's facial expression.

Description

Virtual character expression driving method and related device
Technical Field
The application relates to the technical field of computers, in particular to a virtual character expression driving method and a related device.
Background
Driving a virtual character through Unreal Engine (UE) so that it presents a facial expression consistent with that of a real person is one of the research hotspots in the field of virtual object control. In general, a virtual character may be driven to present a corresponding facial expression as follows: a facial acquisition system captures the facial expression of the real person and compares it with a number of preset basic facial expressions, and the assignments of the relevant driving parameters are determined from the comparison results; Unreal Engine then controls the virtual character to present the corresponding facial expression according to those driving parameter assignments.
However, in practical applications, the facial acquisition system only estimates the facial expression of the real person, so deviations easily occur; the determined driving parameter assignments are therefore of low accuracy, and the facial expression that Unreal Engine drives the virtual character to present is unsatisfactory.
Disclosure of Invention
The embodiment of the application provides a virtual character expression driving method and a related device, which can improve the accuracy of driving parameter assignment and further improve the presentation effect of the facial expression of a virtual character.
In view of the above, a first aspect of the present application provides a virtual character expression driving method, the method comprising:
Acquiring face posture data of a target object; the facial pose data is used to characterize a facial expression of the target object;
comparing the face posture data with a plurality of groups of basic face posture data respectively, and determining a reference driving parameter assignment according to a comparison result; each set of the basic facial pose data corresponds to a basic facial expression;
detecting whether the reference facial expression corresponding to the reference driving parameter assignment meets the preset facial state condition, if so, determining the reference driving parameter assignment as a target driving parameter assignment, and if not, correcting the reference driving parameter assignment based on a driving parameter correction algorithm to obtain a target driving parameter assignment; the driving parameter correction algorithm comprises a parameter assignment limit value algorithm and a parameter assignment adjustment algorithm which are determined based on the normal facial expression;
and driving the virtual character corresponding to the target object to present the facial expression according to the target driving parameter assignment.
A second aspect of the present application provides a virtual character expression driving apparatus, the apparatus comprising:
a data acquisition module for acquiring face pose data of a target object; the facial pose data is used to characterize a facial expression of the target object;
The reference assignment module is used for comparing the face posture data with a plurality of groups of basic face posture data respectively and determining reference driving parameter assignment according to a comparison result; each set of the basic facial pose data corresponds to a basic facial expression;
the assignment detection module is used for detecting whether the reference facial expression corresponding to the reference driving parameter assignment meets the preset facial state condition, if yes, determining the reference driving parameter assignment as a target driving parameter assignment, and if not, carrying out correction processing on the reference driving parameter assignment based on a driving parameter correction algorithm to obtain a target driving parameter assignment; the driving parameter correction algorithm comprises a parameter assignment limit value algorithm and a parameter assignment adjustment algorithm which are determined based on the normal facial expression;
and the role driving module is used for driving the virtual role corresponding to the target object to present the facial expression according to the target driving parameter assignment.
A third aspect of the application provides an electronic device comprising a processor and a memory:
the memory is used for storing a computer program;
the processor is configured to execute the steps of the virtual character expression driving method according to the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium storing a computer program for executing the steps of the virtual character expression driving method of the first aspect described above.
A fifth aspect of the application provides a computer program product or computer program comprising computer instructions stored on a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the steps of the virtual character expression driving method described in the first aspect.
From the above technical solutions, the embodiment of the present application has the following advantages:
The embodiment of the application provides a virtual character expression driving method. When the virtual character is to be driven to present the facial expression of a target object, face pose data characterizing the target object's facial expression is first acquired; the face pose data is then compared with multiple sets of basic face pose data, and a reference driving parameter assignment is determined from the comparison results, where each set of basic face pose data corresponds to one basic facial expression. Next, it is detected whether the reference facial expression corresponding to the reference driving parameter assignment satisfies a preset facial state condition, i.e., whether the reference facial expression that would be generated from the reference driving parameter assignment meets the relevant requirements, such as looking natural and matching the facial expression of the target object. If so, the reference driving parameter assignment can be directly determined as the target driving parameter assignment according to which the virtual character is driven to present the facial expression. If not, the reference driving parameter assignment must first be corrected based on a driving parameter correction algorithm to obtain the target driving parameter assignment; this correction algorithm comprises a parameter assignment limiting algorithm and a parameter assignment adjustment algorithm determined based on normal facial expressions, so that abnormal reference driving parameter assignments can be confined to a normal, reasonable value range, and reference driving parameter assignments can be adjusted so that the corresponding facial expression better matches that of the target object. Finally, based on the target driving parameter assignment obtained through the above processing, the virtual character corresponding to the target object is driven to present the target object's facial expression. By checking the reference driving parameter assignments and correcting, via the driving parameter correction algorithm, those whose corresponding reference facial expressions fail the preset facial state condition, the method improves the accuracy of the target driving parameter assignments used to drive the virtual character, which in turn ensures that the finally driven facial expression of the virtual character is natural and matches the facial expression of the target object, i.e., the presentation effect of the virtual character's facial expression is improved.
Drawings
Fig. 1 is an application scenario schematic diagram of a virtual character expression driving method provided by an embodiment of the present application;
fig. 2 is a schematic flow chart of a virtual character expression driving method according to an embodiment of the present application;
fig. 3 is a schematic diagram of assignment results of mouth driving parameters according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a blueprint script of a driving parameter correction algorithm according to an embodiment of the present application;
FIG. 5 is a graph showing the effects before and after adding the superposition state according to the embodiment of the present application;
FIG. 6 is a schematic diagram of a driving parameter assignment change procedure according to an embodiment of the present application;
FIG. 7 is a graph showing the comparison of effects before and after increasing the intermediate state according to the embodiment of the present application;
FIG. 8 is a schematic diagram of a test effect provided by an embodiment of the present application;
fig. 9 is a schematic structural diagram of a virtual character expression driving device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In order to make the present application better understood by those skilled in the art, the following description will clearly and completely describe the technical solutions in the embodiments of the present application with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and extend human intelligence, sense the environment, acquire knowledge and use the knowledge to obtain optimal results. In other words, artificial intelligence is an integrated technology of computer science that attempts to understand the essence of intelligence and to produce a new intelligent machine that can react in a similar way to human intelligence. Artificial intelligence, i.e. research on design principles and implementation methods of various intelligent machines, enables the machines to have functions of sensing, reasoning and decision.
The artificial intelligence technology is a comprehensive subject, and relates to the technology with wide fields, namely the technology with a hardware level and the technology with a software level. Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a voice processing technology, a natural language processing technology, machine learning/deep learning, automatic driving, intelligent traffic and other directions.
Virtual characters (also called virtual humans) are movable objects in a virtual environment. When the virtual environment is three-dimensional, a virtual character is a three-dimensional model created with skeletal-animation techniques; each virtual character has its own shape and volume in the three-dimensional virtual environment and occupies part of its space.
A face model (also known as a head model) is the model located at the face (or head) of a virtual character. The face model includes bones (Bone) and a mesh (Mesh): the bones build up a skeleton that supports the virtual character's appearance and drives the virtual character to move; the mesh (also called the skin or skin mesh) is a polygonal mesh whose many vertices are bound to the bones. For facial models, the bones control the position of each vertex while the virtual character performs a facial expression or facial movement. That is, a positional change of several bones in the skeleton displaces the vertices on the mesh. When the bones in the face model change position, the displacement of each vertex on the mesh may differ: each vertex has its own skin weight (Weight), which represents how much the transformation of the control points on each bone contributes to that vertex's transformation. Optionally, the skin weight of each vertex may be computed with the bounded biharmonic weights (Bounded Biharmonic Weights) method or the moving least squares (Moving Least Squares) method.
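As a concrete illustration of how skin weights act on mesh vertices, the following C++ sketch implements plain linear blend skinning; the patent does not prescribe a concrete implementation, so all type and function names here are illustrative assumptions.

```cpp
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

// One bone's influence on a vertex; the weights of a vertex's influences sum to 1.
struct BoneInfluence {
    int   boneIndex;  // which bone drives this vertex
    float weight;     // skin weight: that bone's contribution to the vertex
};

struct Mat4 {
    float m[4][4];
    Vec3 TransformPoint(const Vec3& p) const {
        Vec3 r;
        r.x = m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3];
        r.y = m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3];
        r.z = m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3];
        return r;
    }
};

// bindPos: vertex position in the bind pose; boneMatrices: current bone
// transforms (already multiplied by the inverse bind matrices). The skinned
// position is the skin-weight-blended result over all influencing bones.
Vec3 SkinVertex(const Vec3& bindPos,
                const std::vector<BoneInfluence>& influences,
                const std::vector<Mat4>& boneMatrices) {
    Vec3 out;
    for (const BoneInfluence& inf : influences) {
        const Vec3 p = boneMatrices[inf.boneIndex].TransformPoint(bindPos);
        out.x += inf.weight * p.x;
        out.y += inf.weight * p.y;
        out.z += inf.weight * p.z;
    }
    return out;
}
```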
In skeletal skinning animation, the bones in the face model can be organized into multiple levels of parent and child bones, and the positions of the parent and child bones are computed under the drive of animation keyframe data. Each frame is rendered from the positions of the mesh vertices controlled by the skeleton, and a continuously changing effect is presented through multiple consecutive frames.
It should be understood that the technical solution provided by the embodiment of the present application may be applied to various scenarios that need to drive a virtual human to present a corresponding facial expression, such as face-customization ("face pinching") scenarios, virtual character editing scenarios, video shooting scenarios, and so on; more specifically, the technical solution may be used to edit a virtual character in a game and to drive a virtual character in a game to present a desired facial expression. No limitation is placed on the application scenarios to which the technical solution is applicable.
The scheme provided by the embodiment of the application relates to artificial intelligence technology and virtual human technology, and is described in detail through the following embodiments:
The virtual character expression driving method provided by the embodiment of the application can be executed by an electronic device that supports running Unreal Engine, and the electronic device may be a terminal device or a server. The terminal device includes, but is not limited to, a mobile phone, a computer, an intelligent voice interaction device, a smart home appliance, a vehicle-mounted terminal, an aircraft, a virtual reality (VR) device, an augmented reality (AR) device, and the like. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server.
It should be noted that, the information (including, but not limited to, object device information, object account information, object operation information, etc.), the data (including, but not limited to, stored data, object feature data, etc.), and the signals related to the embodiments of the present application are all authorized by the relevant object or fully authorized by each party, and the collection, use and processing of the relevant data all comply with relevant laws and regulations and standards of relevant countries and regions. For example, the face posture data of the target object according to the embodiment of the present application is acquired with sufficient authorization.
In order to facilitate understanding of the virtual character expression driving method provided by the embodiment of the present application, an application scenario of the virtual character expression driving method is described below by taking an execution subject of the virtual character expression driving method as a terminal device as an example.
Referring to fig. 1, fig. 1 is a schematic application scenario diagram of a virtual character expression driving method according to an embodiment of the present application. As shown in fig. 1, the application scenario includes a terminal device 110. The terminal device 110 is equipped with a depth camera and runs a facial acquisition system for capturing face pose data of a real person as well as Unreal Engine for driving the virtual character to present the corresponding facial expression.
In practical applications, the terminal device 110 may acquire an image including the face of the target object through the equipped depth camera, and further, the face acquisition system in the terminal device 110 may determine face pose data of the target object, which can represent the facial expression of the target object, from the image acquired by the depth camera.
Then, the face acquisition system in the terminal device 110 may compare the face pose data of the target object with the plurality of sets of basic face pose data, respectively, and determine the reference driving parameter assignment according to the comparison result. The basic facial gesture data of the multiple groups are in one-to-one correspondence with the basic facial expressions preset in the facial acquisition system, namely, each group of basic facial gesture data corresponds to one basic facial expression and is used for representing the basic facial expression. By comparing the facial pose data of the target object with the basic facial pose data, a degree of correlation between the facial expression of the target object and the basic facial expression corresponding to the basic facial pose data can be determined, and further, the weight assignment corresponding to the set of basic facial pose data, that is, the reference driving parameter assignment, is determined accordingly.
Further, the terminal device 110 may detect whether the reference facial expression corresponding to the reference driving parameter assignment satisfies the preset facial state condition, that is, whether the reference facial expression generated based on the reference driving parameter assignment driving satisfies the relevant requirement, such as whether the reference facial expression is natural, whether the reference facial expression matches the facial expression of the target object, and so on. If the reference facial expression corresponding to the reference driving parameter assignment is detected and determined to meet the preset facial state condition, the reference driving parameter assignment can be directly determined to serve as a target driving parameter assignment according to which the virtual character is driven to present the facial expression. If the reference facial expression corresponding to the reference driving parameter assignment is detected and determined not to meet the preset facial state condition, the reference driving parameter assignment is required to be corrected based on a driving parameter correction algorithm to obtain a target driving parameter assignment, wherein the driving parameter correction algorithm comprises a parameter assignment limit value algorithm and a parameter assignment adjustment algorithm which are determined based on a normal facial expression; namely, through the driving parameter correction algorithm, abnormal reference driving parameter assignment is limited in a normal and reasonable value range, or the reference driving parameter assignment is correspondingly adjusted, so that the corresponding facial expression is more matched with the facial expression of the target object.
Finally, the terminal device 110 transmits the determined target driving parameter assignment to Unreal Engine through the general-purpose interface that Unreal Engine provides, and Unreal Engine drives the virtual character corresponding to the target object to present the facial expression of the target object according to the target driving parameter assignment.
It should be understood that the application scenario shown in fig. 1 is merely an example, and in practical application, the method for driving an expression of a virtual character provided in the embodiment of the present application may also be applied to other scenarios, for example, the method for driving an expression of a virtual character may be cooperatively executed by a terminal device and a server, which does not limit the application scenario of the method for driving an expression of a virtual character provided in the embodiment of the present application.
The method for driving the expression of the virtual character provided by the application is described in detail by the embodiment of the method.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for driving an expression of a virtual character according to an embodiment of the present application. For convenience of description, the following embodiments will be described by taking an execution subject of the virtual character expression driving method as a terminal device. As shown in fig. 2, the virtual character expression driving method includes the steps of:
step 201: acquiring face posture data of a target object; the facial pose data is used to characterize a facial expression of the target object.
In the embodiment of the application, the target object may be a real person facing the camera of the terminal device, and the image collected by the camera of the terminal device includes the face of the target object. The facial pose data of the target object is capable of characterizing a facial expression of the target object; for example, the facial pose data may include location information of a number of key points on the face of the target object, where a key point is a point that can provide reference information when describing a facial expression, such as a facial bone key point, etc.
In a specific implementation, the terminal device may collect an image including the face of the target object through its own depth camera; a facial acquisition system running on the terminal device (e.g., the face tracking provided by ARKit) may then determine, from the image collected by the depth camera, face pose data describing the facial expression of the target object.
Of course, in practical applications, the terminal device may acquire the face pose data of the target object in other manners, and the embodiment of the present application does not limit any manner of acquiring the face pose data of the target object herein.
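For illustration, face pose data of the kind described above might be represented as follows; the patent only states that such data can include position information of facial key points, so this structure and its field names are assumptions.

```cpp
#include <string>
#include <vector>

struct FaceKeyPoint {
    std::string name;  // e.g. "left_mouth_corner" (hypothetical key-point name)
    float x, y, z;     // position in the capture device's face space
};

struct FacePoseData {
    std::vector<FaceKeyPoint> keyPoints;  // characterizes one facial expression
    double timestampSeconds;              // capture time of the frame
};
```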
Step 202: comparing the face posture data with a plurality of groups of basic face posture data respectively, and determining a reference driving parameter assignment according to a comparison result; each set of the base facial pose data corresponds to a base facial expression.
After the terminal equipment acquires the face posture data of the target object, the face posture data and a plurality of groups of basic face posture data can be respectively compared through a face acquisition system to obtain corresponding comparison results, and the comparison results can reflect the matching degree between the facial expression represented by the face posture data and the basic face posture corresponding to each group of basic face posture data; furthermore, the facial acquisition system may determine the reference driving parameter assignment based on the comparison.
It should be noted that the facial acquisition system stores in advance basic face pose data corresponding to each of several basic facial expressions, where the basic facial expressions are facial expressions that frequently occur on a real person, such as smiling, laughing, crying, puckering, and the like. Each basic facial expression corresponds to one set of basic face pose data, and that data is used to characterize the basic facial expression; for example, the basic face pose data may include position information of a number of key points on the face of a model object (which may be a standard character face model), and the facial expression determined from those key-point positions is the basic facial expression corresponding to that data. Taking ARKit's facial acquisition system as an example, it stores basic face pose data corresponding to each of 52 basic facial expressions.
It should be noted that, the assignment of the reference driving parameters essentially refers to the assignment of weights corresponding to each group of basic face pose data; that is, the terminal device may determine, by comparing the face pose data with the base face pose data, a degree of matching between the facial expression represented by the face pose data and the set of base face pose data, and according to the degree of matching, determine a specific gravity occupied by the base facial expression corresponding to the set of base face pose data in the facial expression represented by the face pose data, where the specific gravity is a weight assignment corresponding to the set of base face pose data.
It should be appreciated that the range of weights assigned to the set of base facial pose data is 0 to 1; when the weight corresponding to the set of basic facial gesture data is assigned to 0, representing that the basic facial expression corresponding to the set of basic facial gesture data is not covered in the facial expression represented by the facial gesture data; when the weight assignment corresponding to the set of basic facial pose data is greater than 0 and less than 1, representing that a part of basic facial expressions corresponding to the set of basic facial pose data are covered in the facial expressions represented by the facial pose data, for example, when the weight assignment corresponding to the basic facial pose data corresponding to the basic smile expression is 0.3, representing that a certain smile expression is covered in the facial expressions represented by the facial pose data, wherein the amplitude of the covered smile expression is 30% of the amplitude of the basic smile expression; when the weight corresponding to the set of basic face pose data is assigned a value of 1, the facial expression representing the representation of the face pose data fully encompasses the basic facial expression corresponding to the set of basic face pose data.
In one possible implementation manner, the terminal device may directly compare the face pose data obtained in step 201 with each set of basic face pose data stored in the face acquisition system one by one, and determine, according to the comparison result, a weight assignment corresponding to each set of basic face pose data, as a reference driving parameter assignment.
In another possible implementation, the terminal device may first perform a preliminary comparison between the face pose data acquired in step 201 and each set of basic face pose data stored in the facial acquisition system, and select, according to the preliminary results, the several sets of basic face pose data with the highest correlation to the face pose data. It then compares the face pose data a second time with the selected sets and determines the weight assignment corresponding to each selected set as the reference driving parameter assignment. It will be appreciated that the preliminary comparison is a simpler, coarser process than the secondary comparison.
It should be understood that, in practical applications, the terminal device may determine the above-mentioned reference driving parameter assignment in other manners, and the embodiment of the present application does not limit any manner of determining the reference driving parameter assignment herein.
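The following sketch illustrates how reference driving parameter assignments could be derived by comparison, reusing the FacePoseData structure from the earlier sketch. The similarity metric, cut-off threshold, and normalization constant are assumptions, since the patent leaves the comparison method open.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Score one set of basic face pose data against the captured pose: the mean
// key-point distance is mapped into [0, 1] (assumes both sets list the same
// key points in the same order).
float PoseSimilarity(const FacePoseData& captured, const FacePoseData& base) {
    if (captured.keyPoints.empty()) return 0.0f;
    float sum = 0.0f;
    for (size_t i = 0; i < captured.keyPoints.size(); ++i) {
        const FaceKeyPoint& a = captured.keyPoints[i];
        const FaceKeyPoint& b = base.keyPoints[i];
        sum += std::sqrt((a.x - b.x) * (a.x - b.x) +
                         (a.y - b.y) * (a.y - b.y) +
                         (a.z - b.z) * (a.z - b.z));
    }
    const float kMaxMeanDist = 1.0f;  // assumed normalization constant
    const float meanDist = sum / captured.keyPoints.size();
    return std::clamp(1.0f - meanDist / kMaxMeanDist, 0.0f, 1.0f);
}

// One weight in [0, 1] per set of basic face pose data; near-zero scores are
// dropped, mirroring the coarse preliminary pass described above.
std::vector<float> ReferenceDrivingAssignments(
        const FacePoseData& captured,
        const std::vector<FacePoseData>& basePoses) {
    std::vector<float> weights(basePoses.size(), 0.0f);
    for (size_t i = 0; i < basePoses.size(); ++i) {
        const float s = PoseSimilarity(captured, basePoses[i]);
        if (s > 0.05f) weights[i] = s;  // assumed cut-off threshold
    }
    return weights;
}
```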
Step 203: detecting whether the reference facial expression corresponding to the reference driving parameter assignment meets the preset facial state condition, if so, determining the reference driving parameter assignment as a target driving parameter assignment, and if not, correcting the reference driving parameter assignment based on a driving parameter correction algorithm to obtain a target driving parameter assignment; the driving parameter correction algorithm comprises a parameter assignment limit value algorithm and a parameter assignment adjustment algorithm, wherein the parameter assignment limit value algorithm is determined based on a normal facial expression.
After the terminal equipment determines the assignment of the reference driving parameters, whether the reference facial expression corresponding to the assignment of the reference driving parameters meets the preset facial state condition or not can be detected; the preset facial state condition here is a condition for measuring whether or not the reference facial expression is natural or matches the facial expression of the target object. If the reference facial expression corresponding to the reference driving parameter assignment is detected to meet the preset facial state condition, the reference driving parameter assignment can be directly determined to be used as a target driving parameter assignment, and the target driving parameter assignment is a driving parameter assignment according to which the virtual character is driven to present the corresponding facial expression. If the reference facial expression corresponding to the reference driving parameter assignment is detected and determined not to meet the preset facial state condition, the reference driving parameter assignment needs to be corrected based on a driving parameter correction algorithm so as to obtain a target driving parameter assignment.
It should be noted that the driving parameter correction algorithm includes a parameter assignment limit algorithm determined based on a normal facial expression and a parameter assignment adjustment method. The parameter assignment limit algorithm is used for carrying out limit processing on the reference driving parameter assignment corresponding to the reference facial expression which does not meet the preset facial state condition, so that the reference driving parameter assignment is limited in the driving parameter assignment range corresponding to the normal facial expression. The parameter assignment adjustment algorithm is used for adjusting the reference driving parameter assignment corresponding to the reference facial expression which does not meet the preset facial state condition, so that the reference facial expression corresponding to the adjusted reference driving parameter assignment is more matched with the facial expression of the target object.
In one possible implementation manner, the terminal device may detect whether the reference facial expression corresponding to the reference driving parameter satisfies the preset facial state condition by: for a plurality of reference driving parameter assignments corresponding to the same face part, detecting whether at least two reference driving parameter assignments belong to relative driving parameter assignments; here, the multiple reference driving parameter assignments respectively correspond to different poses of the face part, and at least two corresponding poses of the reference driving parameter assignments belonging to the relative driving parameter assignments cannot be simultaneously present in the normal facial expression.
As described above, a reference driving parameter assignment is essentially a weight assignment for the basic face pose data of a basic facial expression, and different basic facial expressions may mainly involve the same facial part; for example, the facial part mainly involved in basic expressions such as the left mouth pull (mouthLeft), the right mouth pull (mouthRight), and the pucker (mouthPucker) is the mouth in every case. Accordingly, for several basic facial expressions that mainly involve the same facial part, the weight assignments of their respective basic face pose data all bear on that facial part, and these weight assignments (i.e., reference driving parameter assignments) may be regarded as several reference driving parameter assignments corresponding to the same facial part.
For the several reference driving parameter assignments corresponding to the same facial part, the terminal device can detect whether any of them conflict with one another by detecting whether at least two of them belong to relative driving parameter assignments, that is, whether the facial-part poses they respectively correspond to are in conflict, including whether they contain at least two poses of the facial part that cannot be present simultaneously in a normal facial expression.
In a specific implementation, the terminal device may consult a relative-driving-parameter-assignment record file (which records which reference driving parameters are relative to one another and under what assignments the relative condition holds) in order to detect, among the several reference driving parameter assignments corresponding to the same facial part, whether at least two belong to relative driving parameter assignments. For example, suppose the record file states that the left mouth pull and the right mouth pull are relative basic facial expressions, and that when the left or right mouth pull reaches its limit position, the pucker expression cannot occur. Then, for the several reference driving parameter assignments corresponding to the mouth, the terminal device can detect whether assignments for the left and right mouth pulls are both present; if so, those two assignments belong to relative driving parameter assignments. In addition, when the assignments for the mouth include a mouth-pull assignment (for either the left or the right pull) indicating that the pull has reached its limit, the terminal device further detects whether the assignments for the mouth also include a pucker assignment; if so, the mouth-pull assignment and the pucker assignment belong to relative driving parameter assignments.
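A minimal sketch of such a record-file-driven conflict check follows; the rule structure and all names are assumptions, since the patent only describes the record file's role.

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// One rule per pair of relative assignments: 'second' conflicts with 'first'
// once 'first' exceeds its threshold (0 means any activation conflicts).
struct RelativePairRule {
    std::string first;
    std::string second;
    float firstThreshold;
};

// Returns the conflicting pairs among the reference driving parameter
// assignments (name -> value in [0, 1]) of one facial part.
std::vector<std::pair<std::string, std::string>> FindRelativeAssignments(
        const std::map<std::string, float>& assignments,
        const std::vector<RelativePairRule>& rules) {
    std::vector<std::pair<std::string, std::string>> conflicts;
    for (const RelativePairRule& rule : rules) {
        const auto a = assignments.find(rule.first);
        const auto b = assignments.find(rule.second);
        if (a != assignments.end() && b != assignments.end() &&
            a->second > rule.firstThreshold && b->second > 0.0f) {
            conflicts.emplace_back(rule.first, rule.second);
        }
    }
    return conflicts;
}
```

With the rules {"mouthLeft", "mouthRight", 0.0f} and {"mouthLeft", "mouthPucker", 0.99f}, this check catches both mouth examples above: simultaneous left and right pulls, and a pucker present while the left pull is at its limit.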
If the terminal device determines that the reference driving parameter assignment belonging to the relative driving parameter assignment does not exist in the multiple reference driving parameter assignments corresponding to the same face portion through the detection, the multiple reference driving parameter assignments corresponding to the face portion can be directly used as target driving parameter assignments. If the terminal device determines that at least two reference driving parameter assignments belonging to the relative driving parameter assignments exist in the plurality of reference driving parameter assignments corresponding to the same face part through the detection, correction processing is required to be performed on the at least two reference driving parameter assignments belonging to the relative driving parameter assignments, and the following two exemplary correction modes are provided in the embodiment of the present application:
In the first correction mode, at least one of the at least two reference driving parameter assignments belonging to relative driving parameter assignments is determined as a parameter assignment to be corrected, and the reference driving parameter assignments among the at least two other than the parameter assignment to be corrected are determined as non-correction parameter assignments; then the parameter assignment to be corrected is limited to a preset value based on the parameter assignment limiting algorithm, yielding its corresponding target driving parameter assignment, and each non-correction parameter assignment is determined as a target driving parameter assignment.
In one possible case, the respective facial part poses of at least two reference drive parameter assignments belonging to the relative drive parameter assignments cannot occur simultaneously, i.e. as long as the facial part assumes the pose corresponding to one of the reference drive parameter assignments, the poses corresponding to the other reference drive parameter assignments cannot occur at all. Aiming at the situation, the terminal equipment can select the largest reference driving parameter assignment from the reference driving parameter assignments corresponding to the gestures as non-correction parameter assignments and select other reference driving parameter assignments as parameter assignments to be corrected; further, preserving the non-correction parameter assignment itself, namely determining the non-correction parameter assignment as a target driving parameter assignment; and based on a parameter assignment limit algorithm, setting the parameter assignment limit to be corrected to 0, namely avoiding presenting the gesture corresponding to the parameter assignment to be corrected.
For example, the left and right mouth pulls are two basic facial expressions that cannot occur at the same time, and when either pull reaches its limit position, the pucker expression cannot occur. Suppose the reference driving parameter assignments corresponding to the mouth simultaneously include assignments for the left mouth pull, the right mouth pull, and the pucker, and the left-pull assignment indicates that the left pull has reached its limit. The terminal device may then use the left-pull assignment as the non-correction parameter assignment and the right-pull and pucker assignments as the parameter assignments to be corrected, retaining the left-pull assignment as a target driving parameter assignment and limiting the right-pull and pucker assignments to 0. For the several reference driving parameter assignments corresponding to the mouth, the limiting can be performed with the following formula (1):
(1 - Clamp(0.0, 1.0, V_mouthLeft + V_mouthRight)) * V_mouthPucker        (1)
where Clamp() is a clamping (limiting) function, V_mouthLeft denotes the reference driving parameter assignment corresponding to the left mouth pull, V_mouthRight the assignment corresponding to the right mouth pull, and V_mouthPucker the assignment corresponding to the pucker. A more complete set of mouth driving parameter assignment results can be seen in fig. 3, whose bar chart shows the driving parameter assignment results for the various mouth-related basic facial expressions.
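A direct transcription of formula (1) into C++ as a sketch; note that std::clamp takes (value, min, max) whereas the formula lists the bounds first, and the function name here is illustrative.

```cpp
#include <algorithm>

// Formula (1): the pucker assignment is attenuated by how far the combined
// left/right mouth-pull assignments have progressed toward their limit.
float CorrectMouthPucker(float vMouthLeft, float vMouthRight, float vMouthPucker) {
    const float pull = std::clamp(vMouthLeft + vMouthRight, 0.0f, 1.0f);
    return (1.0f - pull) * vMouthPucker;  // pull at its limit forces the pucker to 0
}
```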
In the second correction mode, at least one of the at least two reference driving parameter assignments is determined as a reference parameter assignment, and the reference driving parameter assignments among the at least two other than the reference parameter assignment are determined as non-reference parameter assignments; then, based on the parameter assignment limiting algorithm and the reference parameter assignment, the non-reference parameter assignments are limited to obtain their corresponding target driving parameter assignments, and the reference parameter assignment itself is determined as a target driving parameter assignment.
In one possible case, at least two reference driving parameter assignments belonging to the relative driving parameter assignments may cause the same facial part to exhibit different motion magnitudes, while the facial part cannot simultaneously exhibit the motion magnitudes indicated by the at least two reference driving parameter assignments respectively; for example, for a mouth, when the lips are closed in a mouth-opening state, the closing amplitude of the lips needs to be limited by the amplitude of the mouth-opening, and it is impossible to make the closing amplitude of the lips larger than the amplitude of the mouth-opening. For this case, the terminal device may select at least one reference drive parameter assignment from at least two reference drive parameter assignments belonging to the relative drive parameter assignments as a reference parameter assignment, and determine that other reference drive parameter assignments of the at least two reference drive parameter assignments are non-reference parameter assignments; the specific selection reference parameter assignment can be selected according to a preset selection rule; and further, based on a parameter assignment limit algorithm, the non-reference parameter assignment limit value is in a range corresponding to the reference parameter assignment, so that a target driving parameter assignment corresponding to the non-reference parameter assignment limit value is obtained, and the reference parameter assignment itself is reserved as the target driving parameter assignment.
For example, still taking the constraint that the lip-closing amplitude must be limited by the mouth-opening amplitude: suppose the reference driving parameter assignments corresponding to the mouth simultaneously include an assignment for mouth opening and an assignment for lip closing, and the closing amplitude represented by the lip-closing assignment is greater than the opening amplitude represented by the mouth-opening assignment. The terminal device may then take the mouth-opening assignment as the reference parameter assignment, take the lip-closing assignment as the non-reference parameter assignment, and limit the lip-closing assignment to the range allowed by the mouth-opening assignment, i.e., make the closing amplitude it represents less than or equal to the opening amplitude represented by the mouth-opening assignment. Specifically, the limiting can be performed with the following formula (2):
V_mouthClose <= V_mouthOpen        (2)
where V_mouthClose denotes the reference driving parameter assignment corresponding to lip closing, and V_mouthOpen denotes the reference driving parameter assignment corresponding to mouth opening.
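Formula (2) amounts to clamping the non-reference assignment by the reference assignment; a minimal sketch (function name illustrative):

```cpp
#include <algorithm>

// Formula (2): the lip-closing assignment may never exceed the mouth-opening
// assignment, so the non-reference value is capped at the reference value.
float CorrectMouthClose(float vMouthClose, float vMouthOpen) {
    return std::min(vMouthClose, vMouthOpen);
}
```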
It should be appreciated that in practical applications, the at least two reference driving parameter assignments belonging to the relative driving parameter assignments may be modified in other manners, which are not limited in any way by the embodiments of the present application.
In another possible implementation, the terminal device may detect whether the reference facial expression corresponding to the reference driving parameter assignment satisfies the preset facial state condition by: determining a reference pose of the target facial part according to the reference driving parameter assignments corresponding to the target facial part; and detecting whether the reference pose of the target facial part matches the preset pose of the target facial part, where the preset pose is the pose presented by the target facial part in the facial expression of the target object.
Specifically, the terminal device may treat each facial part in turn as the target facial part. When checking whether the reference driving parameter assignments corresponding to a target facial part are acceptable, the terminal device can determine, from those assignments, the reference pose that the target facial part would present; for example, it superposes the reference driving parameter assignments corresponding to the target facial part and derives the likely reference pose from the superposition result. It then detects whether this reference pose matches the preset pose of the target facial part, i.e., the pose actually presented by that part of the target object's face.
If the reference pose of the target face portion is matched with the preset pose of the target face portion, it can be determined that the reference driving parameter assignment corresponding to the target face portion meets the preset face state condition, and the reference driving parameter assignment corresponding to the target face portion can be used as the target driving parameter assignment. If the reference pose of the target face portion is not matched with the preset pose of the target face portion, it may be determined that the reference driving parameter assignment corresponding to the target face portion does not satisfy the preset face state condition, and further, the reference driving parameter assignment corresponding to the target face portion may be corrected by:
based on a parameter assignment adjustment algorithm, adjusting the reference driving parameter assignment corresponding to the target face part by using a parameter assignment adjustment coefficient to obtain a target driving parameter assignment; the parameter assignment adjustment coefficient is determined based on a difference between the reference pose of the target face portion and the preset pose of the target face portion.
Specifically, the reference pose of the target face portion is not matched with the preset pose of the target face portion, that is, the reference pose determined based on the reference driving parameter assignment corresponding to the target face portion is excessively large or insufficiently large relative to the preset pose, at this time, a parameter assignment adjustment coefficient for adjusting the reference driving parameter assignment may be determined according to the difference between the reference pose and the preset pose, and further, based on the parameter assignment adjustment algorithm, the reference driving parameter assignment corresponding to the target face portion is adjusted by using the parameter assignment adjustment coefficient, so as to obtain the target driving parameter assignment. Specifically, the adjustment can be performed by the following formula (3):
V_fn = V_get * k        (3)
where V_fn denotes the target driving parameter assignment, V_get denotes the reference driving parameter assignment, and k denotes the parameter assignment adjustment coefficient; the value of k is not fixed and can be adjusted according to the difference between the reference pose and the preset pose.
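A sketch of formula (3); how k follows from the pose difference is left open by the text, so the amplitude ratio used below is an assumption, as are the names.

```cpp
// Formula (3): V_fn = V_get * k.
float AdjustAssignment(float vGet,
                       float referencePoseAmplitude,  // pose implied by vGet
                       float presetPoseAmplitude) {   // pose actually observed
    // k > 1 strengthens an undershooting pose, k < 1 softens an overshooting one.
    const float k = (referencePoseAmplitude > 0.0f)
                        ? presetPoseAmplitude / referencePoseAmplitude
                        : 1.0f;
    return vGet * k;
}
```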
In practical applications, the terminal device can check the reference driving parameter assignments corresponding to each facial part along both of the above dimensions, that is, detect whether any of them belong to relative driving parameter assignments, and detect whether the corresponding reference pose matches the facial part's preset pose; whenever the assignments for a facial part fail the check in one dimension, the corresponding correction is applied, ensuring that the facial part's reference driving parameter assignments neither conflict with one another nor mismatch the pose of the target object's facial part.
It should be noted that the driving parameter correction algorithm may be built with the Blueprint visual scripting system provided by Unreal Engine; fig. 4 is a partial schematic diagram of the blueprint script corresponding to the driving parameter correction algorithm. The blueprint script includes a control frame for each facial part, and each control frame contains several nodes corresponding to different basic facial expressions; a basic-facial-expression node represents the reference driving parameter assignment of the basic face pose data corresponding to that basic facial expression. As shown in fig. 4, when two nodes of the blueprint script are connected, the reference driving parameter assignments they represent influence one another: the connection may indicate that the two assignments need to be checked as relative driving parameter assignments, or that the determination of one assignment is affected by the other, and so on.
Step 204: drive the virtual character corresponding to the target object to present the facial expression according to the target driving parameter assignment.
After obtaining the target driving parameter assignment, the terminal device can input it to the Unreal Engine through a general-purpose interface provided by the engine (such as the Live Link interface), and the Unreal Engine then drives the virtual character corresponding to the target object to present the facial expression of the target object according to the target driving parameter assignment.
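The hand-off to the engine can be pictured as pushing named curve values frame by frame. In the sketch below, `LiveLinkSender` and its `push` method are hypothetical stand-ins for whatever Live Link client the project actually uses; only the shape of the data (per-frame named driving parameter values for a subject) follows the description above, and the curve names are assumptions.

```python
# Hedged sketch of handing target driving parameter assignments to the
# engine. LiveLinkSender is a hypothetical transport wrapper, not a
# real Unreal Engine API.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ExpressionFrame:
    subject: str                          # Live Link subject name
    curves: Dict[str, float] = field(default_factory=dict)

class LiveLinkSender:                     # hypothetical transport
    def push(self, frame: ExpressionFrame) -> None:
        # Stand-in for the real send; prints the frame instead.
        print(f"[{frame.subject}] {frame.curves}")

sender = LiveLinkSender()
sender.push(ExpressionFrame(
    subject="FaceCapture",
    curves={"JawOpen": 0.42, "MouthSmileLeft": 0.30, "EyeBlinkRight": 0.05},
))
```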
In one possible implementation, the terminal device may make the virtual character present the corresponding facial expression as follows: determine, according to the target driving parameter assignments, a plurality of driving parameter assignments for each of a plurality of facial motion units; for each motion unit, superpose the driving parameter assignments corresponding to that motion unit to obtain its superimposed driving parameter assignment; then fuse the superimposed driving parameter assignments of the motion units to obtain the face driving parameter assignment, which comprises the comprehensive driving parameter assignment of each motion unit produced by the fusion; finally, drive the virtual character corresponding to the target object to present the facial expression of the target object according to the face driving parameter assignment.
Specifically, a Facial Action Coding System (FACS) running on the terminal device can decompose an ordinary facial expression into a plurality of motion units (AUs) and combine the AUs to generate the corresponding expression effect. More specifically, FACS may determine the driving parameter assignments of the AUs from the target driving parameter assignments input to the Unreal Engine, where each driving parameter assignment of an AU corresponds to one basic facial expression, that is, it is determined from the target driving parameter assignment corresponding to one set of basic face pose data. Then, for each AU, the driving parameter assignments of that AU are superimposed to obtain its superimposed driving parameter assignment. Next, to account for the squeezing and pulling effects of facial muscles, the superimposed driving parameter assignments of the AUs are fused into the face driving parameter assignment; this comprises a comprehensive driving parameter assignment for each AU, obtained by adjusting the AU's superimposed driving parameter assignment in light of muscle squeezing and pulling. Finally, the Unreal Engine can drive the virtual character to present the corresponding expression according to the face driving parameter assignment.
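The superposition-then-fusion pipeline can be sketched as follows. The capped-sum superposition rule and the per-AU coupling factors standing in for muscle squeezing and pulling are illustrative assumptions, not values from the application.

```python
# Minimal sketch of the superposition-then-fusion pipeline. The
# superposition rule (capped sum) and the coupling factors are
# placeholder assumptions for illustration.

from typing import Dict, List

def superpose(assignments: List[float]) -> float:
    """Superpose the driving parameter assignments of one motion unit."""
    return min(sum(assignments), 1.0)   # naive cap; placeholder rule

def fuse(superposed: Dict[str, float],
         coupling: Dict[str, float]) -> Dict[str, float]:
    """Scale each AU's superimposed assignment by a muscle-coupling factor."""
    return {au: value * coupling.get(au, 1.0)
            for au, value in superposed.items()}

# Per-AU assignments derived from the target driving parameter assignments.
per_au = {"AU12": [0.5, 0.4], "AU25": [0.3, 0.3, 0.2]}
superposed = {au: superpose(vals) for au, vals in per_au.items()}
face_assignment = fuse(superposed, coupling={"AU12": 0.85, "AU25": 0.9})
```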
As described above, a facial expression is generally formed by superimposing and fusing multiple basic AUs, and this process is not simply "1+1=2": the driving parameter assignments of the AUs cannot just be added together. Because of the squeezing and pulling of facial muscles, different AUs superimpose and fuse differently when stitched together; for some AUs the superposition may amount to "1+1=1.5", for others to "1+1=1.8". If the driving parameter assignments of individual AUs were simply added and the AUs then stitched together, the resulting facial expression would very likely look unnatural. To solve this problem, the embodiment of the present application adds a "superposition state" as follows:
specifically, when a first facial expression corresponding to the facial driving parameter assignment does not meet a first natural state condition, adjusting comprehensive driving parameter assignment in the facial driving parameter assignment to obtain corrected facial driving parameter assignment; and driving the virtual character to present the corresponding facial expression according to the corrected facial driving parameter assignment.
That is, after obtaining the face driving parameter assignment, the terminal device may determine the first facial expression that would be presented based on it and judge whether this expression meets the first natural state condition, a condition for measuring whether a facial expression is natural. If the first facial expression does not meet the first natural state condition, the comprehensive driving parameter assignments within the face driving parameter assignment are adjusted, that is, a superposition state is added on top of the face driving parameter assignment, yielding the corrected face driving parameter assignment. The adjustment may be performed manually by relevant technicians, automatically by a pre-trained neural network model, or automatically according to preset parameter adjustment rules; the embodiment of the present application does not limit this. Finally, the Unreal Engine can drive the virtual character to present the corresponding facial expression according to the corrected face driving parameter assignment.
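A minimal sketch of this correction step is given below. The natural-state test (all comprehensive assignments within [0, 1]) and the damping rule that plays the role of the superposition state are assumptions for illustration; as noted above, the actual adjustment may be manual, model-based, or rule-based.

```python
# Sketch of the "superposition state" correction. Both the condition
# and the correction rule are illustrative assumptions.

from typing import Dict

def meets_natural_condition(face: Dict[str, float]) -> bool:
    # Assumed check: every comprehensive assignment must lie in [0, 1].
    return all(0.0 <= v <= 1.0 for v in face.values())

def apply_superposition_state(face: Dict[str, float]) -> Dict[str, float]:
    # Assumed rule: clamp back into range and damp slightly, mimicking
    # the "1+1=1.5"-style behavior of real muscle superposition.
    return {au: min(max(v, 0.0), 1.0) * 0.9 for au, v in face.items()}

face = {"AU12": 1.3, "AU25": 0.7}
if not meets_natural_condition(face):
    face = apply_superposition_state(face)   # corrected face assignment
```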
Fig. 5 compares the effects before and after adding the superposition state. As shown in fig. 5, when the two poses corresponding to the same AU (shown in (a) and (b) of fig. 5) are superimposed, the pose shown in (c) of fig. 5 is obtained without the superposition state, and the pose shown in (d) of fig. 5 is obtained with it. The comparison shows that adding the superposition state makes the superposition effect of the AU more natural and avoids abnormal AU poses.
In addition, when the facial expression presented by the virtual character changes from a first expression to a second expression, the terminal device can control the target driving parameter assignment to change linearly, obtaining an intermediate driving parameter assignment for each intermediate moment of the change from the first expression to the second expression, and drive the virtual character to present the change from the first expression to the second expression based on these intermediate driving parameter assignments.
Specifically, when the facial expression of the virtual character is to change from the first expression to the second expression, the terminal device may control the target driving parameter assignment to change linearly, that is, make the target driving parameter assignment corresponding to the first expression change linearly into the target driving parameter assignment corresponding to the second expression, and record the intermediate driving parameter assignment at each intermediate moment of the change; an intermediate moment may be any moment sampled at a preset time interval during the change. The virtual character is then driven, based on the intermediate driving parameter assignment of each intermediate moment, to present the facial expression at that moment, thereby presenting the change from the first expression to the second expression.
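A sketch of this linear (p2p) interpolation between two target driving parameter assignments follows, sampling the intermediate moments at a fixed interval as described; the curve names are assumptions.

```python
# Sketch of linear interpolation between two target driving parameter
# assignments; each returned frame is one intermediate moment.

from typing import Dict, List

def lerp_assignments(first: Dict[str, float],
                     second: Dict[str, float],
                     steps: int) -> List[Dict[str, float]]:
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)               # strictly intermediate moments
        frames.append({k: (1 - t) * first[k] + t * second[k] for k in first})
    return frames

neutral = {"JawOpen": 0.0, "MouthSmileLeft": 0.1}
smile   = {"JawOpen": 0.2, "MouthSmileLeft": 0.8}
intermediate = lerp_assignments(neutral, smile, steps=4)
```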
However, in many cases the expression change of the virtual character may not look natural and may lack dynamism. The inventors found through research that this problem arises because the driving parameter assignments during the change are determined purely by linear interpolation, which can hardly reflect the three-dimensional squeezing and pulling effects of facial muscles. To solve this problem, the embodiment of the present application innovatively upgrades p2p (pose-to-pose) to pbp (pose-between-pose), that is, it pays more attention to the intermediate effects of the expression change so that more natural facial expressions are presented at the intermediate moments of the change.
Specifically, for each intermediate moment, the terminal device may determine, according to the intermediate driving parameter assignment corresponding to the intermediate moment, a second facial expression presented by the virtual character at the intermediate moment; if the second facial expression does not meet the second natural state condition, adjusting the intermediate driving parameter assignment corresponding to the intermediate moment to obtain a target intermediate driving parameter assignment corresponding to the intermediate moment; if the second facial expression meets a second natural state condition, determining an intermediate driving parameter assignment corresponding to the intermediate moment as a target intermediate driving parameter assignment corresponding to the intermediate moment; and driving the virtual character to present a change process from the first expression to the second expression based on the corresponding target intermediate driving parameter assignment at each intermediate moment.
For each intermediate moment, the terminal device may determine, from the corresponding intermediate driving parameter assignment, the second facial expression that would be generated by driving with that assignment, and then judge whether it satisfies the second natural state condition; like the first natural state condition above, this is a condition for measuring whether a facial expression is natural. If the second facial expression satisfies the condition, the expression driven by the intermediate driving parameter assignment at that moment is natural, and that assignment can be determined directly as the target intermediate driving parameter assignment for the moment. If it does not, the expression is unnatural, and the intermediate driving parameter assignment needs to be adjusted, that is, an intermediate-state effect is added, to obtain the target intermediate driving parameter assignment for the moment. The adjustment may be performed manually by relevant technicians, automatically by a pre-trained neural network model, or automatically according to preset parameter adjustment rules; the embodiment of the present application does not limit this. Finally, having obtained the target intermediate driving parameter assignment for every intermediate moment, the terminal device can drive the virtual character to present the facial expression at each moment, thereby presenting the change of the virtual character's facial expression.
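The pbp refinement can be sketched as follows: each linearly interpolated frame is tested against a natural-state condition and adjusted if it fails. The specific test and the cosine-eased adjustment below are assumed stand-ins for the manual, model-based, or rule-based adjustment the text allows.

```python
# Sketch of the pbp (pose-between-pose) refinement over the linearly
# interpolated frames from the previous sketch. Both the natural-state
# test and the adjustment profile are illustrative assumptions.

import math
from typing import Dict, List

def second_natural_condition(frame: Dict[str, float]) -> bool:
    # Assumed test: these two mouth curves must not both saturate
    # mid-transition.
    return frame.get("JawOpen", 0.0) + frame.get("MouthSmileLeft", 0.0) < 1.5

def adjust_intermediate(frame: Dict[str, float], t: float) -> Dict[str, float]:
    ease = 0.5 - 0.5 * math.cos(math.pi * t)   # smooth nonlinear profile
    return {k: v * (0.8 + 0.2 * ease) for k, v in frame.items()}

def refine(frames: List[Dict[str, float]]) -> List[Dict[str, float]]:
    out = []
    for i, frame in enumerate(frames):
        t = (i + 1) / (len(frames) + 1)        # position within the change
        out.append(frame if second_natural_condition(frame)
                   else adjust_intermediate(frame, t))
    return out
```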
Fig. 6 shows how the driving parameter assignments change from the first expression to the second expression in the p2p and pbp modes. As shown in fig. 6, in the p2p mode the intermediate driving parameter assignments change linearly over the intermediate moments, whereas in the pbp mode the target intermediate driving parameter assignments change non-linearly. Fig. 7 compares the effects before and after adding the intermediate state, with (a) of fig. 7 showing a front view and (b) of fig. 7 a side view; the comparison shows that the presented expression is more natural after the intermediate state is added.
When the virtual character expression driving method provided by the embodiment of the present application drives a virtual character to present the facial expression of a target object, the method first acquires face pose data used to characterize the facial expression of the target object; the face pose data is then compared with several sets of basic face pose data, each set corresponding to one basic facial expression, and a reference driving parameter assignment is determined from the comparison result. Next, the method detects whether the reference facial expression corresponding to the reference driving parameter assignment meets the preset facial state condition, that is, whether the reference facial expression generated by driving with the reference driving parameter assignment meets the relevant requirements, for example whether it is natural and whether it matches the facial expression of the target object. If it does, the reference driving parameter assignment can be directly determined as the target driving parameter assignment by which the virtual character is driven to present the facial expression. If it does not, the reference driving parameter assignment is corrected with a driving parameter correction algorithm to obtain the target driving parameter assignment; the driving parameter correction algorithm comprises a parameter assignment limit value algorithm and a parameter assignment adjustment algorithm determined based on normal facial expressions, so abnormal reference driving parameter assignments can be confined to a normal, reasonable value range, and reference driving parameter assignments can be adjusted so that the corresponding facial expression better matches that of the target object. Finally, based on the target driving parameter assignment determined through the above processing, the virtual character corresponding to the target object is driven to present the facial expression of the target object. By detecting the reference driving parameter assignments, and correcting with the driving parameter correction algorithm those whose corresponding reference facial expressions fail the preset facial state condition, the method improves the accuracy of the target driving parameter assignments used to drive the virtual character, which in turn ensures that the finally driven facial expression of the virtual character is natural and matches the facial expression of the target object; that is, the presentation effect of the virtual character's facial expression is improved.
To verify the validity and reliability of the method provided by the embodiment of the present application, the inventors ran tests on the same virtual character; the results are shown in fig. 8. The left side of fig. 8 shows the expression of the virtual character before optimization, and the right side shows the expression after optimization with the method provided by the embodiment of the present application. Comparison shows that the mouth movement of the right-hand virtual character is more natural than that of the left-hand one.
For the virtual character expression driving method described above, the present application further provides a corresponding virtual character expression driving apparatus, so that the method can be applied and realized in practice.
Referring to fig. 9, fig. 9 is a schematic structural view of a virtual character expression driving apparatus 900 corresponding to the virtual character expression driving method shown in fig. 2 above. As shown in fig. 9, the virtual character expression driving apparatus 900 includes:
a data acquisition module 901 for acquiring face pose data of a target object; the facial pose data is used to characterize a facial expression of the target object;
a reference assignment module 902, configured to compare the face pose data with multiple groups of basic face pose data, and determine a reference driving parameter assignment according to a comparison result; each set of the basic facial pose data corresponds to a basic facial expression;
The assignment detection module 903 is configured to detect whether a reference facial expression corresponding to the reference driving parameter assignment meets a preset facial state condition, if yes, determine the reference driving parameter assignment as a target driving parameter assignment, and if not, correct the reference driving parameter assignment based on a driving parameter correction algorithm to obtain a target driving parameter assignment; the driving parameter correction algorithm comprises a parameter assignment limit value algorithm and a parameter assignment adjustment algorithm which are determined based on the normal facial expression;
and the character driving module 904 is configured to drive the virtual character corresponding to the target object to present the facial expression according to the target driving parameter assignment.
Optionally, the assignment detection module 903 is specifically configured to:
for a plurality of reference driving parameter assignments corresponding to the same face part, detecting whether at least two of the reference driving parameter assignments belong to relative driving parameter assignments; the plurality of reference driving parameter assignments correspond to different poses of the face part, and the poses corresponding to the at least two reference driving parameter assignments belonging to relative driving parameter assignments cannot appear simultaneously in a normal facial expression.
Optionally, the assignment detection module 903 is specifically configured to:
if at least two reference driving parameter assignments among the plurality of reference driving parameter assignments belong to relative driving parameter assignments, determining at least one of the at least two reference driving parameter assignments as a to-be-corrected parameter assignment, and determining the reference driving parameter assignments among the at least two other than the to-be-corrected parameter assignment as non-correction parameter assignments;
limiting the parameter assignment to be corrected to a preset value based on the parameter assignment limiting algorithm to obtain a target driving parameter assignment corresponding to the parameter assignment to be corrected; and determining the non-correction parameter assignment as a target driving parameter assignment.
Optionally, the assignment detection module 903 is specifically configured to:
if at least two reference driving parameter assignments belonging to the relative driving parameter assignments exist in the plurality of reference driving parameter assignments, determining that at least one reference driving parameter assignment in the at least two reference driving parameter assignments is a reference parameter assignment, and determining that reference driving parameter assignments except the reference parameter assignments in the at least two reference driving parameter assignments are non-reference parameter assignments;
Performing limit processing on the non-reference parameter assignment based on the parameter assignment limit algorithm and the reference parameter assignment to obtain a target driving parameter assignment corresponding to the non-reference parameter assignment; and determining the reference parameter assignment as a target driving parameter assignment.
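The two limit strategies in the optional embodiments above can be pictured with a short sketch. The table of relative (mutually exclusive) assignments, the preset value, and the choice of which assignment to keep are all illustrative assumptions.

```python
# Sketch of the parameter assignment limit value algorithm for
# conflicting ("relative") assignments on one face part. Conflict
# table and values are assumptions for illustration.

from typing import Dict, Tuple

RELATIVE_PAIRS = {("MouthSmileLeft", "MouthFrownLeft")}   # assumed table

def correct_by_preset(assignments: Dict[str, float],
                      pair: Tuple[str, str],
                      preset: float = 0.0) -> None:
    """Strategy 1: force the to-be-corrected assignment to a preset value."""
    weaker = min(pair, key=lambda name: assignments[name])
    assignments[weaker] = preset          # the non-correction one is kept

def correct_by_reference(assignments: Dict[str, float],
                         pair: Tuple[str, str]) -> None:
    """Strategy 2: limit the non-reference assignment based on the reference one."""
    ref = max(pair, key=lambda name: assignments[name])
    other = pair[0] if ref == pair[1] else pair[1]
    assignments[other] = min(assignments[other], 1.0 - assignments[ref])

vals = {"MouthSmileLeft": 0.8, "MouthFrownLeft": 0.5}
for pair in RELATIVE_PAIRS:
    if all(vals[name] > 0.0 for name in pair):    # conflict detected
        correct_by_preset(vals, pair)
```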
Optionally, the assignment detection module 903 is specifically configured to:
determining a reference pose of a target face part according to the reference driving parameter assignment corresponding to the target face part;
detecting whether the reference pose of the target face part matches the preset pose of the target face part; the preset pose of the target face part is the pose presented by that part in the facial expression of the target object.
Optionally, the assignment detection module 903 is specifically configured to:
if the reference pose of the target face part does not match the preset pose of the target face part, adjusting the reference driving parameter assignment corresponding to the target face part by a parameter assignment adjustment coefficient based on the parameter assignment adjustment algorithm to obtain a target driving parameter assignment; the parameter assignment adjustment coefficient is determined according to the difference between the reference pose and the preset pose.
Optionally, the role driving module 904 is specifically configured to:
determining a plurality of driving parameter assignments corresponding to each of a plurality of motion units of the face according to the target driving parameter assignments;
for each motion unit, performing superposition processing on the plurality of driving parameter assignments corresponding to that motion unit to obtain the superimposed driving parameter assignment corresponding to the motion unit;
fusion processing is carried out on the superimposed driving parameter assignment corresponding to each of the plurality of motion units, so that face driving parameter assignment is obtained; the face driving parameter assignment comprises comprehensive driving parameter assignment corresponding to each of the plurality of motion units, which is obtained through the fusion processing;
and driving the virtual character to present the facial expression according to the facial driving parameter assignment.
Optionally, the role driving module 904 is further configured to:
when the first facial expression corresponding to the facial driving parameter assignment does not meet the first natural state condition, adjusting the comprehensive driving parameter assignment in the facial driving parameter assignment to obtain a corrected facial driving parameter assignment;
and driving the virtual character to present the facial expression according to the corrected facial driving parameter assignment.
Optionally, the role driving module 904 is further configured to:
when the face expression presented by the virtual character is changed from a first expression to a second expression, controlling the target driving parameter assignment to change linearly, and obtaining respective corresponding intermediate driving parameter assignment at each intermediate moment in the changing process from the first expression to the second expression;
and driving the virtual character to present the change process based on the corresponding intermediate driving parameter assignment of each intermediate moment.
Optionally, the role driving module 904 is specifically configured to:
for each intermediate moment, determining a second facial expression presented by the virtual character at the intermediate moment according to intermediate driving parameter assignment corresponding to the intermediate moment; if the second facial expression does not meet the second natural state condition, adjusting the intermediate driving parameter assignment corresponding to the intermediate moment to obtain a target intermediate driving parameter assignment corresponding to the intermediate moment; if the second facial expression meets the second natural state condition, determining an intermediate driving parameter assignment corresponding to the intermediate moment as a target intermediate driving parameter assignment corresponding to the intermediate moment;
And driving the virtual character to present the change process based on the target intermediate driving parameter assignment corresponding to each intermediate time.
When the virtual character expression driving device provided by the embodiment of the present application drives a virtual character to present the facial expression of a target object, the device first acquires face pose data used to characterize the facial expression of the target object; the face pose data is then compared with several sets of basic face pose data, each set corresponding to one basic facial expression, and a reference driving parameter assignment is determined from the comparison result. Next, the device detects whether the reference facial expression corresponding to the reference driving parameter assignment meets the preset facial state condition, that is, whether the reference facial expression generated by driving with the reference driving parameter assignment meets the relevant requirements, for example whether it is natural and whether it matches the facial expression of the target object. If it does, the reference driving parameter assignment can be directly determined as the target driving parameter assignment by which the virtual character is driven to present the facial expression. If it does not, the reference driving parameter assignment is corrected with a driving parameter correction algorithm to obtain the target driving parameter assignment; the driving parameter correction algorithm comprises a parameter assignment limit value algorithm and a parameter assignment adjustment algorithm determined based on normal facial expressions, so abnormal reference driving parameter assignments can be confined to a normal, reasonable value range, and reference driving parameter assignments can be adjusted so that the corresponding facial expression better matches that of the target object. Finally, based on the target driving parameter assignment determined through the above processing, the virtual character corresponding to the target object is driven to present the facial expression of the target object. By detecting the reference driving parameter assignments, and correcting with the driving parameter correction algorithm those whose corresponding reference facial expressions fail the preset facial state condition, the device improves the accuracy of the target driving parameter assignments used to drive the virtual character, which in turn ensures that the finally driven facial expression of the virtual character is natural and matches the facial expression of the target object; that is, the presentation effect of the virtual character's facial expression is improved.
The embodiment of the present application further provides an electronic device for driving a virtual character, which may specifically be a terminal device or a server; the terminal device and the server provided by the embodiment of the present application are introduced below from the perspective of hardware implementation.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 10, for convenience of explanation, only the portions related to the embodiment of the present application are shown; for specific technical details not disclosed, please refer to the method portions of the embodiments of the present application. The terminal may be any terminal device, including a smart phone, a tablet computer, a personal digital assistant (PDA), a point-of-sale (POS) terminal, a vehicle-mounted computer, and the like. Taking a smart phone as an example:
fig. 10 is a block diagram illustrating a part of a structure of a smart phone related to a terminal provided by an embodiment of the present application. Referring to fig. 10, the smart phone includes: radio Frequency (RF) circuitry 1010, memory 1020, input unit 1030 (including touch panel 1031 and other input devices 1032), display unit 1040 (including display panel 1041), sensor 1050, audio circuit 1060 (which may be connected to speaker 1061 and microphone 1062), wireless fidelity (wireless fidelity, wiFi) module 1070, processor 1080, and power source 1090. Those skilled in the art will appreciate that the smartphone structure shown in fig. 10 is not limiting of the smartphone and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The memory 1020 may be used to store software programs and modules, and the processor 1080 performs the various functional applications and data processing of the smartphone by running the software programs and modules stored in the memory 1020. The memory 1020 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, application programs required for at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the smartphone (such as audio data and phonebooks). In addition, the memory 1020 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
Processor 1080 is the control center of the smartphone, connects the various parts of the entire smartphone with various interfaces and lines, performs various functions of the smartphone and processes the data by running or executing software programs and/or modules stored in memory 1020, and invoking data stored in memory 1020. Optionally, processor 1080 may include one or more processing units; preferably, processor 1080 may integrate an application processor primarily handling operating systems, user interfaces, applications, etc., with a modem processor primarily handling wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1080.
In the embodiment of the present application, the processor 1080 included in the terminal is further configured to execute the steps of any implementation manner of the virtual character expression driving method provided in the embodiment of the present application.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a server 1100 according to an embodiment of the present application. The server 1100 may vary considerably in configuration or performance and may include one or more central processing units (CPUs) 1122 (e.g., one or more processors), memory 1132, and one or more storage media 1130 (e.g., one or more mass storage devices) storing application programs 1142 or data 1144. The memory 1132 and the storage medium 1130 may provide transient or persistent storage. The programs stored on the storage medium 1130 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Further, the central processing unit 1122 may be configured to communicate with the storage medium 1130 and execute, on the server 1100, the series of instruction operations in the storage medium 1130.
The server 1100 may also include one or more power supplies 1126, one or more wired or wireless network interfaces 1150, one or more input/output interfaces 1158, and/or one or more operating systems, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps performed by the server in the above embodiments may be based on the server structure shown in fig. 11.
The CPU 1122 can also be used to execute the steps of any implementation of the virtual character expression driving method provided by the embodiment of the present application.
The embodiments of the present application also provide a computer-readable storage medium storing a computer program for executing any one of the foregoing virtual character expression driving methods according to the foregoing embodiments.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs any one of the virtual character expression driving methods described in the foregoing respective embodiments.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or various other media that can store a computer program.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of" or similar expressions means any combination of the listed items, including any combination of single items or plural items. For example, at least one of a, b or c may mean: a; b; c; "a and b"; "a and c"; "b and c"; or "a and b and c", where a, b and c may each be single or multiple.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (14)

1. A virtual character expression driving method, the method comprising:
acquiring face posture data of a target object; the facial pose data is used to characterize a facial expression of the target object;
comparing the face posture data with a plurality of groups of basic face posture data respectively, and determining a reference driving parameter assignment according to a comparison result; each set of the basic facial pose data corresponds to a basic facial expression;
detecting whether the reference facial expression corresponding to the reference driving parameter assignment meets the preset facial state condition, if so, determining the reference driving parameter assignment as a target driving parameter assignment, and if not, correcting the reference driving parameter assignment based on a driving parameter correction algorithm to obtain a target driving parameter assignment; the driving parameter correction algorithm comprises a parameter assignment limit value algorithm and a parameter assignment adjustment algorithm which are determined based on the normal facial expression;
And driving the virtual character corresponding to the target object to present the facial expression according to the target driving parameter assignment.
2. The method according to claim 1, wherein detecting whether the reference facial expression corresponding to the reference driving parameter assignment satisfies a preset facial state condition comprises:
for a plurality of reference driving parameter assignments corresponding to the same face part, detecting whether at least two of the reference driving parameter assignments belong to relative driving parameter assignments; the plurality of reference driving parameter assignments correspond to different poses of the face part, and the poses corresponding to the at least two reference driving parameter assignments belonging to relative driving parameter assignments cannot appear simultaneously in a normal facial expression.
3. The method of claim 2, wherein the correcting the reference drive parameter assignment based on the drive parameter correction algorithm to obtain a target drive parameter assignment comprises:
if at least two reference driving parameter assignments among the plurality of reference driving parameter assignments belong to relative driving parameter assignments, determining at least one of the at least two reference driving parameter assignments as a to-be-corrected parameter assignment, and determining the reference driving parameter assignments among the at least two other than the to-be-corrected parameter assignment as non-correction parameter assignments;
Limiting the parameter assignment to be corrected to a preset value based on the parameter assignment limiting algorithm to obtain a target driving parameter assignment corresponding to the parameter assignment to be corrected; and determining the non-correction parameter assignment as a target driving parameter assignment.
4. The method of claim 2, wherein the correcting the reference drive parameter assignment based on the drive parameter correction algorithm to obtain a target drive parameter assignment comprises:
if at least two reference driving parameter assignments belonging to the relative driving parameter assignments exist in the plurality of reference driving parameter assignments, determining that at least one reference driving parameter assignment in the at least two reference driving parameter assignments is a reference parameter assignment, and determining that reference driving parameter assignments except the reference parameter assignments in the at least two reference driving parameter assignments are non-reference parameter assignments;
performing limit processing on the non-reference parameter assignment based on the parameter assignment limit algorithm and the reference parameter assignment to obtain a target driving parameter assignment corresponding to the non-reference parameter assignment; and determining the reference parameter assignment as a target driving parameter assignment.
5. The method according to claim 1, wherein detecting whether the reference facial expression corresponding to the reference driving parameter assignment satisfies a preset facial state condition comprises:
determining a reference pose of a target face part according to the reference driving parameter assignment corresponding to the target face part;
detecting whether the reference pose of the target face part matches the preset pose of the target face part; the preset pose of the target face part is the pose presented by that part in the facial expression of the target object.
6. The method of claim 5, wherein the correcting the reference drive parameter assignment based on the drive parameter correction algorithm to obtain a target drive parameter assignment comprises:
if the reference pose of the target face part does not match the preset pose of the target face part, adjusting the reference driving parameter assignment corresponding to the target face part by a parameter assignment adjustment coefficient based on the parameter assignment adjustment algorithm to obtain a target driving parameter assignment; the parameter assignment adjustment coefficient is determined according to the difference between the reference pose and the preset pose.
7. The method of claim 1, wherein driving the virtual character corresponding to the target object to present the facial expression according to the target driving parameter assignment, comprises:
determining a plurality of driving parameter assignments corresponding to each of a plurality of motion units of the face according to the target driving parameter assignments;
for each motion unit, performing superposition processing on the plurality of driving parameter assignments corresponding to that motion unit to obtain the superimposed driving parameter assignment corresponding to the motion unit;
fusion processing is carried out on the superimposed driving parameter assignment corresponding to each of the plurality of motion units, so that face driving parameter assignment is obtained; the face driving parameter assignment comprises comprehensive driving parameter assignment corresponding to each of the plurality of motion units, which is obtained through the fusion processing;
and driving the virtual character to present the facial expression according to the facial driving parameter assignment.
8. The method of claim 7, wherein the method further comprises:
when the first facial expression corresponding to the facial driving parameter assignment does not meet the first natural state condition, adjusting the comprehensive driving parameter assignment in the facial driving parameter assignment to obtain a corrected facial driving parameter assignment;
And driving the virtual character to present the facial expression according to the facial driving parameter assignment, including:
and driving the virtual character to present the facial expression according to the corrected facial driving parameter assignment.
9. The method according to claim 1, wherein the method further comprises:
when the face expression presented by the virtual character is changed from a first expression to a second expression, controlling the target driving parameter assignment to change linearly, and obtaining respective corresponding intermediate driving parameter assignment at each intermediate moment in the changing process from the first expression to the second expression;
and driving the virtual character to present the change process based on the corresponding intermediate driving parameter assignment of each intermediate moment.
10. The method of claim 9, wherein driving the virtual character to present the course of change based on the respective intermediate driving parameter assignments for the respective intermediate moments comprises:
for each intermediate moment, determining a second facial expression presented by the virtual character at the intermediate moment according to intermediate driving parameter assignment corresponding to the intermediate moment; if the second facial expression does not meet the second natural state condition, adjusting the intermediate driving parameter assignment corresponding to the intermediate moment to obtain a target intermediate driving parameter assignment corresponding to the intermediate moment; if the second facial expression meets the second natural state condition, determining an intermediate driving parameter assignment corresponding to the intermediate moment as a target intermediate driving parameter assignment corresponding to the intermediate moment;
And driving the virtual character to present the change process based on the target intermediate driving parameter assignment corresponding to each intermediate time.
11. A virtual character expression driving apparatus, the apparatus comprising:
a data acquisition module for acquiring face pose data of a target object; the facial pose data is used to characterize a facial expression of the target object;
the reference assignment module is used for comparing the face posture data with a plurality of groups of basic face posture data respectively and determining reference driving parameter assignment according to a comparison result; each set of the basic facial pose data corresponds to a basic facial expression;
the assignment detection module is used for detecting whether the reference facial expression corresponding to the reference driving parameter assignment meets the preset facial state condition, if yes, determining the reference driving parameter assignment as a target driving parameter assignment, and if not, carrying out correction processing on the reference driving parameter assignment based on a driving parameter correction algorithm to obtain a target driving parameter assignment; the driving parameter correction algorithm comprises a parameter assignment limit value algorithm and a parameter assignment adjustment algorithm which are determined based on the normal facial expression;
And the role driving module is used for driving the virtual role corresponding to the target object to present the facial expression according to the target driving parameter assignment.
12. An electronic device, the device comprising a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the virtual character expression driving method according to any one of claims 1 to 10 according to the computer program.
13. A computer-readable storage medium storing a computer program for executing the virtual character expression driving method according to any one of claims 1 to 10.
14. A computer program product comprising a computer program or instructions which, when executed by a processor, implement the virtual character expression driving method of any one of claims 1 to 10.