WO2024027285A1 - Facial expression processing method, apparatus, computer device and storage medium - Google Patents

Facial expression processing method, apparatus, computer device and storage medium

Info

Publication number
WO2024027285A1
Authority
WO
WIPO (PCT)
Prior art keywords
expression
target
control
style
expression control
Application number
PCT/CN2023/095567
Other languages
English (en)
French (fr)
Inventor
刘凯
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Publication of WO2024027285A1


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T3/04
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Definitions

  • the present application relates to the field of computer technology, and in particular to a facial expression processing method, device, computer equipment and storage medium.
  • the face of the virtual object can have an art style, and the art style includes but is not limited to cartoon, mobile game or two-dimensional.
  • a facial expression processing method device, computer equipment, computer-readable storage medium and computer program product are provided.
  • the present application provides a facial expression processing method.
  • The method includes: determining the skeletal structure of a three-dimensional facial model belonging to a target art style; determining a style expression template matching the target art style, and binding the motion units in the style expression template to relevant bones in the skeletal structure; displaying a target expression control combination, and associating each expression control in the target expression control combination with at least one of the motion units in the style expression template; skinning the skeletal structure to generate a virtual object face; and, in response to an adjustment operation on at least one of the expression controls, having the information of the motion unit associated with the expression control drive the bound bones to move, so as to control the face of the virtual object to produce an expression that conforms to the target art style.
  • the present application also provides a facial expression processing device.
  • The device includes: a skeleton determination module for determining the skeletal structure of a three-dimensional facial model belonging to a target art style; a skeleton binding module for determining a style expression template that matches the target art style and binding the motion units in the style expression template to relevant bones in the skeletal structure; a control display module for displaying a target expression control combination and associating each expression control in the target expression control combination with at least one of the motion units in the style expression template; a skeleton skinning module for skinning the skeletal structure to generate a virtual object face; and an expression control module for responding to an adjustment operation on at least one of the expression controls, so that the information of the motion unit associated with the expression control drives the bound bones to move and controls the face of the virtual object to produce an expression that conforms to the target art style.
  • this application also provides a computer device.
  • the computer device includes a memory and one or more processors.
  • the memory stores computer-readable instructions; when executed by the one or more processors, the computer-readable instructions cause the one or more processors to implement the steps of the above facial expression processing method.
  • the present application also provides one or more non-volatile readable storage media.
  • the computer-readable storage media have computer-readable instructions stored thereon; when the computer-readable instructions are executed by one or more processors, they cause the one or more processors to implement the steps of the above facial expression processing method.
  • the computer program product includes computer readable instructions, which when executed by a processor, implement the steps in the facial expression processing method.
  • Figure 1 is an application environment diagram of facial expression processing methods in some embodiments.
  • Figure 2 is a schematic flowchart of a facial expression processing method in some embodiments.
  • Figure 3 is an interface operation diagram of model matching in some embodiments.
  • Figure 4 is a schematic diagram of a motion unit in some embodiments.
  • Figure 5 is a schematic diagram of a motion unit in some embodiments.
  • Figure 6 is a diagram showing the relationship between expressions and motor units in some embodiments.
  • Figure 7 is a schematic diagram of facial muscles in some embodiments.
  • Figure 8 is a schematic diagram of a motion unit in some embodiments.
  • Figure 9 is a schematic diagram of a motion unit in some embodiments.
  • Figure 10 is a schematic diagram of expression parameters in some embodiments.
  • Figure 11 is a schematic diagram of movement intensity in some embodiments.
  • Figure 12 is a schematic diagram of expression controls controlling skeletal movement in some embodiments.
  • Figure 13 is a schematic diagram of an expression control in some embodiments.
  • Figure 14 is a schematic diagram of an expression control in some embodiments.
  • Figure 15 is a schematic diagram of an expression control in some embodiments.
  • Figure 16 is a schematic diagram of an expression control in some embodiments.
  • Figure 17 is a schematic diagram of a template determination window in some embodiments.
  • Figure 18 is a rendering after binding in some embodiments.
  • Figure 19 is an interface diagram for skinning processing in some embodiments.
  • Figure 20 is an interface diagram for generating a virtual object's face in some embodiments.
  • Figure 21 is a schematic diagram of adjusting the facial expression of a virtual object in some embodiments.
  • Figure 22 is a schematic diagram showing virtual object facial and expression controls in some embodiments.
  • Figure 23 is a schematic diagram of how style expression templates control different expression tensions in some embodiments.
  • Figure 24 is an interface diagram for updating a motion unit in some embodiments.
  • Figure 25 is an interface diagram for updating a motion unit in some embodiments.
  • Figure 26 is a schematic diagram of freezing and activating motor units in some embodiments.
  • Figure 27 is an interface diagram for exporting expression templates in some embodiments.
  • Figure 28 is an interface diagram for importing expression templates in some embodiments.
  • Figure 29 is an interface diagram for updating expression templates in some embodiments.
  • Figure 30 is a schematic flowchart of a facial expression processing method in some embodiments.
  • Figure 31 is a structural block diagram of a facial expression processing device in some embodiments.
  • Figure 32 is an internal structure diagram of a computer device in some embodiments.
  • Figure 33 is an internal structure diagram of a computer device in some embodiments.
  • the facial expression processing method provided by the embodiment of the present application can be applied in the application environment as shown in Figure 1.
  • the terminal 102 communicates with the server 104 over a network.
  • the data storage system may store data that server 104 needs to process.
  • the data storage system can be integrated on the server 104, or placed on the cloud or other servers.
  • the terminal 102 can determine the skeletal structure of the three-dimensional facial model belonging to the target art style; determine the style expression template that matches the target art style and bind the motion units in the style expression template to relevant bones in the skeletal structure; display a target expression control combination and associate the expression controls in the target expression control combination with at least one motion unit in the style expression template; skin the skeletal structure to generate a virtual object face; and, in response to an adjustment operation on at least one expression control, have the information of the motion unit associated with the expression control drive the bound bones to move, so as to control the face of the virtual object to produce an expression that conforms to the target art style. In this way, a virtual object face with expression can be generated.
  • the virtual object with expression can be exported, and the exported virtual object can be stored in the server.
  • the terminal 102 can respond to the export operation for the face of the virtual object with expression by generating a file that stores the face of the virtual object with expression, and sending the file to the server 104.
  • the server 104 can store the file, and can also send the file to other devices.
  • the virtual object face is the face of the virtual object.
  • the art style includes but is not limited to at least one of cartoon style and mobile game style.
  • art software may be installed on the terminal 102, and the facial expression processing solution provided in this application may be implemented by the terminal 102 using the art software.
  • the art software can generate expressions suitable for the art styles for virtual objects with different art styles.
  • the art software can include customized art tools.
  • the customized art tools are the result of independent research and development by the inventor, and can be, for example, plug-ins developed within art software.
  • Art software includes but is not limited to Autodesk Maya or 3Dmax.
  • the customized art tool can be used to generate expressions suitable for the art style for virtual objects belonging to the art style.
  • for example, if the virtual object belongs to the cartoon style, the art tool can be used to generate expressions for the virtual object that conform to the cartoon style; if the virtual object belongs to the mobile game style, the art tool can be used to generate expressions for the virtual object that conform to the mobile game style.
  • the terminal 102 can be, but is not limited to, various desktop computers, laptops, smartphones, tablets, Internet of Things devices, and portable wearable devices.
  • the Internet of Things devices can be smart speakers, smart TVs, smart air conditioners, smart vehicle-mounted devices, etc.
  • Portable wearable devices can be smart watches, smart bracelets, head-mounted devices, etc.
  • the server 104 can be implemented as an independent server or a server cluster composed of multiple servers.
  • a facial expression processing method is provided.
  • the method can be executed by a terminal or a server, or can also be executed jointly by a terminal and a server.
  • The following description takes the application of this method to the terminal 102 in Figure 1 as an example; the method includes the following steps:
  • Step 202 Determine the bone structure of the three-dimensional facial model belonging to the target art style.
  • the art style is used to characterize the artistic characteristics of virtual objects.
  • the art style includes but is not limited to at least one of cartoon style or two-dimensional style.
  • the characteristics of cartoon-style virtual objects are quite different from those of two-dimensional-style virtual objects.
  • the target art style can be any art style.
  • a virtual object is an object in a virtual scene, and a virtual scene refers to a digital scene outlined by a computer through digital communication technology.
  • the virtual scene includes but is not limited to at least one of a two-dimensional virtual scene or a three-dimensional virtual scene.
  • the virtual scene may be, for example, a scene in a game, a scene in VR (Virtual Reality), or a scene in animation.
  • the virtual object may be a virtual object with a face.
  • the virtual object may be any one of a virtual character, a virtual animal, or a virtual doll in the virtual scene.
  • a three-dimensional facial model is a three-dimensional mesh model (3D mesh) used to represent the surface of a virtual object's face.
  • the virtual object face refers to the face of the virtual object.
  • the three-dimensional mesh model is a polygonal mesh composed of a series of basic geometric figures, used to simulate the surface of virtual objects.
  • the basic geometric figure is the smallest shape that constitutes the three-dimensional mesh model.
  • the basic geometric figure can be any one of triangles or quadrilaterals.
  • the basic geometric figures include multiple vertices; for example, a triangle includes 3 vertices and a quadrilateral includes 4 vertices.
  • the skeletal structure of the three-dimensional facial model refers to the skeletal structure of the face of the virtual object.
  • the skeletal structure of the three-dimensional facial model includes at least one of a nasal bone, a maxillary bone, a palatine bone, a lacrimal bone, an inferior turbinate, a vomer bone, or a mandibular bone.
  • the bone structure of the three-dimensional facial model can be generated based on the standard bone structure.
  • the bones in the standard bone structure can be bound to the vertices in the three-dimensional facial model, and the bones bound to the vertices in the three-dimensional facial model constitute the bone structure of the three-dimensional facial model.
  • the standard bone structure refers to the bone structure bound to the standard three-dimensional facial model.
  • the standard three-dimensional facial model is a pre-generated three-dimensional facial model, and the vertices in the standard three-dimensional facial model are pre-bound to the bones in the standard bone structure; each vertex in the standard three-dimensional facial model is bound to at least one bone in the standard bone structure.
  • the three-dimensional facial model belonging to the target art style may be called the target three-dimensional facial model.
  • the terminal can display the target three-dimensional facial model in response to triggering the operation of displaying the target three-dimensional facial model, and in the case of detecting the model matching operation, perform affine transformation on the standard three-dimensional facial model based on the target three-dimensional facial model to obtain the affine transformation The final 3D facial model.
  • Affine transformation is used to change the positions of the vertices in the standard three-dimensional facial model so that the standard three-dimensional facial model approaches the target three-dimensional facial model in shape and position. The shape of the affine-transformed three-dimensional facial model is basically the same as that of the target three-dimensional facial model, and the space it encloses is basically the same as the space enclosed by the target three-dimensional facial model. Therefore, the affine-transformed three-dimensional facial model can represent the surface of the virtual object's face.
  • the target three-dimensional facial model includes a larger number of vertices than the three-dimensional facial model after affine transformation.
  • the standard three-dimensional facial model has multiple key points, and multiple refers to at least two.
  • the key points of the standard three-dimensional facial model are called standard key points.
  • the key points represent the positions of key parts on the face. Key parts include but are not limited to at least one of eyebrows, eyes, nose or chin.
  • the target 3D facial model has multiple key points, and the key points of the target 3D facial model are called target key points.
  • the target key point can correspond to the standard key point one-to-one.
  • the corresponding target key point and the standard key point represent the position of the same key part.
  • For example, if target key point A is the point that represents the position of the nose of the target three-dimensional facial model, and standard key point B is the point that represents the position of the nose of the standard three-dimensional facial model, then target key point A corresponds to standard key point B.
  • the nose of the target three-dimensional facial model is the nose of the virtual object's face corresponding to the target three-dimensional facial model.
  • the terminal can obtain the target key points of the target three-dimensional facial model, and obtain the affine transformation matrix based on the coordinate transformation relationship between the target key points and the corresponding standard key points. It is used to transform the coordinates of a standard key point to the coordinates of a target key point corresponding to the standard key point. It can also be used to transform the coordinates of a target key point to the coordinates of a standard key point corresponding to the target key point.
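  • As an illustration of this step, the sketch below (a minimal Python/NumPy example; the function names are chosen here for illustration and are not from the patent) estimates a 3x4 affine matrix from the corresponding key points by least squares and applies it to the vertices of the standard three-dimensional facial model. The inverse direction described above can be obtained simply by swapping the two point sets.

```python
import numpy as np

def estimate_affine(standard_keypoints, target_keypoints):
    """Estimate a 3x4 affine matrix that maps each standard key point onto its
    corresponding target key point, via least squares.

    standard_keypoints, target_keypoints: (N, 3) arrays of corresponding points.
    """
    src = np.asarray(standard_keypoints, dtype=float)
    dst = np.asarray(target_keypoints, dtype=float)
    # Homogeneous source coordinates: [x, y, z, 1]
    src_h = np.hstack([src, np.ones((src.shape[0], 1))])
    # Solve src_h @ M.T ≈ dst for the 3x4 affine matrix M.
    m_t, _, _, _ = np.linalg.lstsq(src_h, dst, rcond=None)
    return m_t.T  # shape (3, 4)

def apply_affine(affine, points):
    """Apply the 3x4 affine matrix to an (N, 3) array of vertices."""
    pts = np.asarray(points, dtype=float)
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return pts_h @ affine.T
```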
  • the model matching operation is used to trigger the terminal to generate the affine-transformed three-dimensional facial model.
  • the terminal when the terminal detects the model matching operation, it performs affine transformation on the standard three-dimensional facial model based on the target three-dimensional facial model to obtain the affine-transformed three-dimensional facial model.
  • the terminal can display the target three-dimensional facial model.
  • the 3D facial model after affine transformation can be displayed.
  • For example, the model matching operation is a click operation on the control 302 in Figure 3, that is, the "model adaptation" control. When the terminal detects the click operation on the "model adaptation" control, it generates the affine-transformed three-dimensional facial model and displays it together with the corresponding target three-dimensional facial model. In Figure 3, the solid line part represents the target three-dimensional facial model, such as the part indicated by 304, and the dotted lines and dots represent the affine-transformed three-dimensional facial model, such as the part indicated by 306.
  • the vertices in the affine-transformed three-dimensional facial model are obtained by transforming the coordinates of the vertices of the standard three-dimensional facial model. For example, if the standard three-dimensional facial model has 100 vertices, these 100 vertices each undergo affine transformation, that is, their coordinates are changed; the 100 vertices with changed coordinates are the vertices that constitute the affine-transformed three-dimensional facial model. The vertices in the standard three-dimensional facial model are bound to the bones in the standard bone structure.
  • The terminal can move the bones in the standard bone structure and bind the moved bones to the vertices in the affine-transformed three-dimensional facial model, so as to obtain the bones in the bone structure corresponding to the affine-transformed three-dimensional facial model. For example, vertex A1 in the standard three-dimensional facial model is bound to bone 1 in the standard bone structure, and vertex B1 in the affine-transformed three-dimensional facial model is the vertex obtained after the coordinates of vertex A1 undergo affine transformation, i.e. the vertex obtained after moving vertex A1; therefore, the coordinates of bone 1 can be moved, and the moved bone 1 is determined to be the bone bound to vertex B1. The way a bone moves is determined based on the movement of its bound vertices, so that the relative relationship between the bone and the vertices remains unchanged before and after the movement.
  • The terminal may determine, based on the bones bound to the vertices in the affine-transformed three-dimensional facial model, the bones bound to the vertices in the target three-dimensional facial model, so as to obtain the bone structure of the target three-dimensional facial model. A vertex in the target three-dimensional facial model is bound to at least one bone in the bone structure of the target three-dimensional facial model. Specifically, the vertices in the target three-dimensional facial model can be called target vertices, and the vertices in the affine-transformed three-dimensional facial model can be called affine vertices. For each target vertex, the terminal can determine the affine vertex closest to that target vertex in the affine-transformed three-dimensional facial model, determine the bone bound to that affine vertex, and take it as the bone bound to the target vertex. The bones bound to all the target vertices form the bone structure of the target three-dimensional facial model.
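  • A minimal sketch of this nearest-vertex transfer is shown below; it assumes the bones bound to each affine vertex are already known, and the data layout is an illustrative assumption rather than something specified in the patent.

```python
import numpy as np

def transfer_bone_bindings(target_vertices, affine_vertices, affine_vertex_bones):
    """For each target vertex, find the closest affine vertex and reuse the
    bone(s) bound to that affine vertex as the bones bound to the target vertex.

    target_vertices: (M, 3) vertices of the target 3D facial model.
    affine_vertices: (N, 3) vertices of the affine-transformed model.
    affine_vertex_bones: list of length N; bones bound to each affine vertex.
    Returns: list of length M with the bones bound to each target vertex.
    """
    tgt = np.asarray(target_vertices, dtype=float)
    aff = np.asarray(affine_vertices, dtype=float)
    bindings = []
    for v in tgt:
        # Index of the affine vertex closest to this target vertex.
        nearest = int(np.argmin(np.linalg.norm(aff - v, axis=1)))
        bindings.append(affine_vertex_bones[nearest])
    return bindings
```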
  • Step 204 Determine a style expression template that matches the target art style, and bind the motion units in the style expression template to relevant bones in the skeletal structure.
  • the style expression template is an expression template suitable for art style. Different art styles match different style expression templates.
  • the expression templates matching the art style are pre-generated, and the expression templates matching the art style can be modified.
  • the style expression template includes multiple motion units; multiple refers to at least two, and may be, for example, 200.
  • Each motion unit (AU, Action Unit) can have a number and a name; different motion units have different numbers and different names. Figures 4 and 5 show multiple motion units, and the number following "AU" in Figure 5 is the AU number; for example, AU11 represents the AU numbered 11.
  • the combination of multiple motion units can represent an expression, that is, an expression can be characterized by a combination of multiple motion units.
  • For example, the combination of motion units corresponding to a certain expression includes the AUs numbered 1, 4, 15, and 23.
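  • The idea that an expression is characterized by a combination of motion units can be captured with a simple lookup table, as in the sketch below; the expression name and the AU list are placeholders, not values defined in the patent.

```python
# Hypothetical table: each named expression maps to the AU numbers whose
# combination characterizes it (e.g. one expression combines AUs 1, 4, 15, 23).
EXPRESSION_TO_AUS = {
    "example_expression": [1, 4, 15, 23],
}

def motion_units_for(expression_name):
    """Return the list of AU numbers whose combination represents the expression."""
    return EXPRESSION_TO_AUS.get(expression_name, [])
```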
  • the motor unit can measure facial muscles in a standardized manner, and the facial muscles can be, for example, the muscles at the positions of each letter in Figure 7 .
  • the face can be divided into upper face and lower face, so the motion unit can be divided into upper face motion unit and lower face motion unit, as shown in Figure 8, which shows part of the upper face motion unit and part of the lower face motion unit.
  • the AUs in the above examples are basic AUs. Since the precision of the basic AUs is not fine enough, an AU can be split or a new AU can be added: a basic AU can be split into the left side and the right side, into the left side and the middle, or into the middle and the right side, so as to obtain AUs with finer precision.
  • Figure 9 shows the results of splitting multiple AUs. In Figure 9, "R" is the abbreviation of right and represents the right side, "L" is the abbreviation of left and represents the left side, and "M" is the abbreviation of middle and represents the middle. For example, AU1 is split according to the left and right sides to obtain AU1L and AU1R, and AU10 is split according to the middle and left sides to obtain AU10M and AU10L.
  • the motion units in the style expression template can include the motion units obtained by splitting, so as to describe the direction and strength of muscles more scientifically and comprehensively, thereby improving the accuracy of expressions.
  • Each movement unit has information respectively.
  • the information of the movement unit includes movement intensity and movement direction. Movement intensity can also be called action intensity, and movement direction can also be called action direction.
  • the style expression template may include information of the motor unit, and the information of the motor unit may be preset and may be modified. Expression parameters can be used in the style expression template to represent a motor unit. For example, AU1L refers to the inner eyebrow raising action of the left eyebrow. An expression parameter can be used in the style expression template to describe the information of this motor unit. Different motor units Represented by different expression parameters, the expression parameters can also be called animation parameters. As shown in Figure 10, multiple expression parameters are displayed, each expression parameter represents a motor unit, for example, the expression parameter 00 represents a motor unit.
  • Movement intensity reflects different degrees of movement. The value range of movement intensity can be preset, for example, 0%-100%. Taking a movement unit used to control the opening of the mouth as an example, as shown in Figure 11, A, B, C, D and E respectively represent different ranges of movement intensity: A represents 0-20%, B represents 20%-40%, C represents 40%-65%, D represents 65%-85%, and E represents 85%-100%. It can be seen from the figure that the greater the movement intensity, the greater the degree of mouth opening.
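  • As a small illustration of these ranges, the sketch below maps a movement intensity value to the corresponding range label. It assumes the boundaries listed above, with the upper boundary of range C taken as 65% so that the ranges are contiguous (the original text only gives C's lower boundary); the function name is illustrative.

```python
def intensity_grade(intensity_percent):
    """Map a movement intensity value (0-100, in percent) to the labels A-E."""
    if intensity_percent <= 20:
        return "A"      # 0-20%
    if intensity_percent <= 40:
        return "B"      # 20%-40%
    if intensity_percent <= 65:
        return "C"      # 40%-65% (upper boundary assumed)
    if intensity_percent <= 85:
        return "D"      # 65%-85%
    return "E"          # 85%-100%
```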
  • Binding can be understood as establishing a connection between the movement unit and the bones, and controlling the movement of the bound bones through the movement unit.
  • When the terminal determines the style expression template, it can bind the motion units in the style expression template to the bones in the bone structure of the target three-dimensional facial model. The style expression template can pre-specify the bones that each motion unit can be bound to; for example, the motion unit AU1L may be specified to be bound to bone 1 and bone 2. The terminal can determine, from the skeletal structure of the target three-dimensional facial model, the bones that can be bound to a motion unit, and bind the determined bones to that motion unit. After binding, the terminal can control the movement of the bones bound to the motion unit based on the motion unit, thereby controlling the face of the virtual object to produce expressions that conform to the target art style.
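  • A minimal sketch of this template-driven binding is shown below; the motion-unit names, bone names and data structures are illustrative assumptions, not taken from the patent.

```python
# Hypothetical structures: the style expression template pre-specifies, for each
# motion unit, the names of the bones it may bind to (e.g. AU1L -> two brow bones).
template_bindings = {
    "AU1L": ["brow_inner_L", "brow_mid_L"],   # illustrative bone names
}

def bind_motion_units(template_bindings, skeleton_bones):
    """Bind each motion unit to the bones from the target model's skeletal
    structure that the template says it may bind to.

    skeleton_bones: dict mapping bone name -> bone object in the target skeleton.
    Returns: dict mapping motion unit name -> list of bound bone objects.
    """
    bound = {}
    for au_name, bone_names in template_bindings.items():
        bound[au_name] = [skeleton_bones[n] for n in bone_names if n in skeleton_bones]
    return bound
```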
  • Step 206 Display the target expression control combination, and associate the expression controls in the target expression control combination with at least one motion unit in the style expression template.
  • the expression control combination is a collection composed of multiple expression controls, and the expression control is a control used to control expressions.
  • the target expression control combination is the expression control combination that matches the target business scenario.
  • the target business scenario is the business scenario to which the target virtual object belongs.
  • the target virtual object refers to the virtual object whose face is represented by the target three-dimensional facial model.
  • Business scenarios include but are not limited to at least one of mobile games, client games, or digital human games. Different business scenarios have different levels of complexity.
  • the same business scenario can have multiple art styles, so the process of determining a style expression template that matches the target art style may include: determining a style expression template that matches the target art style in the target business scenario.
  • for different business scenarios, the matching expression control combinations can be the same or different.
  • the expression control combination matching the business scenario is preset. "Associating the expression control in the target expression control combination with at least one motor unit in the style expression template" may be performed before displaying the target expression control combination, or may be performed after displaying the target expression control combination.
  • after an expression control is associated with a motion unit, operations on the expression control can trigger movement of the bones bound to the motion unit associated with that expression control.
  • the controller refers to the expression control.
  • the expression control drives the skeletal motion through the information of the motion unit, thereby controlling the facial expression of the virtual object.
  • Expression controls can also be called controllers.
  • the expression control combination may include multiple types of expression controls; for example, it may include multiple first expression controls, multiple second expression controls, and multiple shortcut expression controls, where multiple refers to at least two. The first expression controls are used to globally adjust the facial expression of the virtual object.
  • the first expression control can quickly adjust the expression posture, so the first expression control can also be called the master control.
  • the second expression control is used to adjust the details of the facial expression of the virtual object, so the second expression control can also be called a secondary control.
  • the first expression control may be displayed at a position other than the face of the virtual object, and the second expression control may be displayed at the face of the virtual object.
  • the shortcut expression control matches a specified expression, which includes but is not limited to at least one of a specified emotion or a specified mouth shape.
  • the shortcut expression control is used to adjust the expression of the virtual object's face so that the virtual object's face produces the specified expression matched by the shortcut expression control. When the specified expression is a specified emotion, the shortcut expression control may be called an emotion control; when the specified expression is a specified mouth shape, the shortcut expression control may be called a mouth shape control.
  • the emotion includes, but is not limited to, at least one of smile, anger, surprise, fear, disgust, sadness or contempt.
  • Mouth shapes include, but are not limited to, AAA, AHH and other mouth shapes. As shown in Figure 13, 4 types of expression controls are displayed.
  • the expression control displayed in area 1302 is the main control
  • the expression control displayed in area 1304 is the mouth shape control
  • the control displayed in area 1306 is the secondary control.
  • the controls displayed in area 1308 are emotion controls, such as the emotion controls corresponding to smiles.
  • the expression control combination is divided into main controls, secondary controls, mouth shape controls and emotion controls, which reflects a hierarchical division of controls; the controls can be mixed and used together to improve the accuracy of the adjusted expressions.
  • the number of first expression controls may be determined based on the scene complexity of the target business scenario; for example, the number of first expression controls is positively correlated with the scene complexity of the target business scenario, i.e. the more complex the target business scenario, the greater the number of first expression controls. For example, if the complexity of the mobile game scenario is less than that of the client game scenario, and the complexity of the client game scenario is less than that of the digital human scenario, then the number of master controls in the mobile game scenario is less than the number of master controls in the client game scenario, which in turn is less than the number of master controls in the digital human scenario.
  • the target three-dimensional facial model belongs to the target virtual object, and the target virtual object is the virtual object in the target business scenario.
  • the terminal can display the art style in each business scenario, including the target art style in the target business scenario.
  • when the terminal determines that the target art style in the target business scenario is selected, it determines the style expression template that matches the target art style and the target expression control combination that matches the target business scenario.
  • the terminal can associate the expression control in the target expression control combination with at least one movement unit in the style expression template.
  • the terminal can pre-store the association relationships between expression controls and motion units; for example, it may specify that expression control 1 is associated with motion unit 1. The terminal can then associate the expression controls in the target expression control combination with at least one motion unit in the style expression template according to the pre-stored association relationships.
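  • The pre-stored association between expression controls and motion units can be modelled as a simple table, as in the sketch below; the control names and AU names are illustrative placeholders, not names defined in the patent.

```python
# Hypothetical pre-stored association: each expression control drives one or
# more motion units (e.g. "expression control 1" is associated with motion unit 1).
CONTROL_TO_MOTION_UNITS = {
    "master_mouth": ["AU25", "AU26"],     # illustrative names
    "emotion_smile": ["AU6", "AU12"],
}

def associate_controls(control_names, control_to_units):
    """Associate each control in the target expression control combination with
    at least one motion unit, according to the pre-stored association table."""
    return {name: control_to_units.get(name, []) for name in control_names}
```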
  • when the terminal determines the style expression template that matches the target art style, it may bind the motion units in the style expression template to the relevant bones in the skeletal structure, determine the target expression control combination that matches the target art style, and display the target expression control combination that matches the target art style.
  • the terminal displays a template determination window in response to the binding operation.
  • the template determination window is used to determine a style expression template that matches the target art style.
  • the terminal, in response to the selection completion operation triggered in the template determination window, obtains the style expression template that matches the target art style; the selection completion operation indicates that the selection of the style expression template matching the target art style has been completed. The terminal then binds the motion units in the style expression template to the relevant bones in the skeletal structure, determines the target expression control combination matching the target art style, and displays the target expression control combination matching the target art style.
  • the template determination window is, for example, the window in Figure 17.
  • the binding operation can be, for example, a click operation on the control 308 in Figure 3, that is, the control "automatic binding".
  • the selection completion operation can be, for example, a click operation on the control 1702 in Figure 17.
  • when the terminal receives a click operation on the control 308, the template determination window in Figure 17 is displayed; when the terminal receives a click operation on the control 1702, the rendering after binding shown in Figure 18 is displayed, in which the expression controls are displayed in the left window.
  • Step 208 Skin the skeletal structure to generate the face of the virtual object.
  • skinning processing refers to the processing of binding the vertices in the facial skin model to the bones in the bone structure.
  • the facial skin model is a three-dimensional mesh model representing facial skin.
  • the face of the virtual object is the result of skinning.
  • Facial skin models can be pre-generated.
  • for each vertex in the facial skin model, the terminal can determine the bone closest to that vertex from the bone structure and bind that bone to the vertex, thereby binding the vertices in the facial skin model to the bones in the bone structure to implement skinning; the face of the virtual object is obtained after skinning.
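  • Since the application mentions Autodesk Maya as one possible host for the art tool, the following is a minimal Maya-side sketch of the skinning step. It assumes the facial skin mesh and the facial joints already exist in the scene under the hypothetical names shown, and it uses the standard skinCluster command rather than any custom tool described in the patent.

```python
# A minimal Maya sketch of the skinning step; node names are illustrative.
from maya import cmds

def skin_face(skin_mesh="face_skin_mesh", root_joint="face_root_jnt"):
    # Collect the root joint and all joints below it as the bind skeleton.
    joints = [root_joint] + (cmds.listRelatives(root_joint, allDescendents=True,
                                                type="joint") or [])
    # Bind the skin mesh to those joints; each vertex receives weights from the
    # nearby bones, which realizes the skinning described above.
    return cmds.skinCluster(joints, skin_mesh, toSelectedBones=True,
                            maximumInfluences=4)
```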
  • the terminal may display a skin processing control and a face generation control, bind the selected facial skin model to the bone structure in response to a triggering operation on the skin processing control, and generate and display the face of the virtual object in response to a triggering operation on the face generation control.
  • the skin of the virtual object's face is the skin represented by the facial skin model bound to the bone structure.
  • the skin processing control is, for example, control 1902 in Figure 19, which is the control "Save Skin”.
  • the face generation control is, for example, control 1904 in Figure 19, which is the control "Auto Generate”.
  • When the terminal receives a click operation on control 1904, it generates and displays the face of the virtual object, for example, the face of the virtual object in Figure 20.
  • the generated virtual object face is an asset that can be controlled by the artist.
  • the virtual object face can be saved through the asset file.
  • the terminal can provide a service for downloading the asset file.
  • the virtual object face can be controlled through the expression control and can be changed according to the artist's needs.
  • Step 210 In response to an adjustment operation on at least one expression control, the information of the motion unit associated with the expression control drives the bound skeletal motion to control the face of the virtual object to produce an expression that conforms to the target art style.
  • the adjustment operation on an expression control is used to change the attribute value of the expression control. Different attribute values of the expression control correspond to different degrees of expression adjustment: the larger the attribute value, the greater the degree of adjustment of the expression. For example, when the expression control is a "surprise" emotion control, the greater the attribute value of the expression control, the greater the degree of surprise.
  • Expression controls can be controls of any form, including but not limited to sliding controls or movable controls. For example, the first expression control and the second expression control can be movable controls, and the mouth shape control and the emotion control can be sliding controls. The adjustment operation on a sliding control is a sliding operation on the control, and the adjustment operation on a movable control is an operation of moving the control.
  • when an expression control is adjusted, the positions of the bones bound to the motion unit associated with the expression control will change. Since the bones are bound to the vertices in the facial skin model, the movement of the bones will cause the positions of the vertices of the facial skin model to change, thereby causing the facial expression of the virtual object to change and produce an expression that conforms to the target art style. Specifically, the terminal changes the positions of the bones bound to the motion unit associated with the expression control based on the information of that motion unit, thereby realizing skeletal motion, and changes the positions of the vertices in the facial skin model according to the motion of the bones, so as to control the face of the virtual object to produce expressions that conform to the target art style. Figure 21 shows the face of a virtual object in a game of a certain style, where a control drives the expression at the corner of the mouth; the expression tension conforms to the game style setting, being neither exaggerated nor overly restrained.
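  • The chain from control adjustment to vertex movement can be sketched as follows: the control value scales the motion unit's movement direction, the bound bones are offset accordingly, and each skin vertex follows the bones it is weighted to. This is a simplified, translation-only illustration of the principle, not the patent's actual driving logic; all names and data layouts are assumptions.

```python
import numpy as np

def drive_expression(control_value, au_direction, bound_bones, skin_vertices, weights):
    """One simplified update step driven by an expression control.

    control_value: 0.0-1.0 value set by the adjustment operation.
    au_direction:  (3,) movement direction of the associated motion unit.
    bound_bones:   list of names of the bones bound to that motion unit.
    skin_vertices: (N, 3) rest positions of the facial skin model vertices.
    weights:       (N, B) skin weights of each vertex for each bound bone (rows sum to 1).
    """
    direction = np.asarray(au_direction, dtype=float)
    # Offset every bound bone along the motion unit's direction, scaled by the control.
    bone_offsets = np.array([control_value * direction for _ in bound_bones])   # (B, 3)
    # Each vertex moves by the weighted sum of the offsets of its bones.
    vertex_offsets = np.asarray(weights, dtype=float) @ bone_offsets            # (N, 3)
    return np.asarray(skin_vertices, dtype=float) + vertex_offsets
```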
  • the terminal may display expression controls and movement unit information while displaying the virtual object's face. As shown in Figure 22, when the virtual object's face is displayed, the expression control and motor unit information are displayed.
  • In the above facial expression processing method, the skeletal structure of the three-dimensional facial model belonging to the target art style is determined; the style expression template matching the target art style is determined, and the motion units in the style expression template are bound to the relevant bones in the skeletal structure; the target expression control combination is displayed, and the expression controls in the target expression control combination are associated with at least one motion unit in the style expression template; the skeletal structure is skinned to generate a virtual object face; and, in response to an adjustment operation on at least one expression control, the information of the motion unit associated with the expression control drives the bound bones to move, so as to control the virtual object's face to produce an expression that conforms to the target art style. In this way, for each art style, the expression of the virtual object's face can be processed quickly to generate expressions that conform to the corresponding art style, thus improving the efficiency of facial expression processing.
  • Determining a style expression template that matches the target art style and binding the motion units in the style expression template to relevant bones in the skeletal structure includes: displaying object types corresponding to multiple art styles, where the multiple art styles include the target art style; in the case that a target object type under the target art style is selected, determining facial skeleton information corresponding to the target object type, and determining a style expression template that matches the target art style; and binding, based on the facial skeleton information, the motion units in the style expression template to the relevant bones in the skeletal structure.
  • the multiple art styles refer to at least two art styles, and the multiple art styles include the target art style.
  • the object type refers to the type of the virtual object.
  • the object types can be divided according to the identity of the virtual object in the virtual scene. Taking a game scene as the virtual scene as an example, the object types can be divided into protagonist and NPC (non-player character); of course, they can also be divided in other ways, which is not limited here.
  • Each art style can correspond to at least one object type. For example, it can include two object types. Taking the virtual scene as a game and the art style as a cartoon style as an example, the object types corresponding to the cartoon style can include protagonists and NPCs.
  • the target object type can be any object type in the target art style.
  • the facial skeleton information may include at least one of a bone number or a bone identification of the face of the virtual object.
  • the bone identifier may be, for example, a bone name, as shown in (b) of Figure 23 , which shows the number of bones on the virtual object's face in various art styles.
  • Facial skeleton information is preset for each object type. In the case of different art styles, the facial skeleton information corresponding to the same object type can be different or the same.
  • the terminal displays a template determination window when a binding operation is detected, and displays in the template determination window the object types corresponding to multiple art styles, including the object types corresponding to the target art style. In response to the selection completion operation, the terminal determines the facial skeleton information corresponding to the target object type under the target art style, determines the style expression template matching the target art style, and binds the motion units in the style expression template to the relevant bones in the skeletal structure based on the facial skeleton information.
  • the binding operation is used to trigger the display template determination window.
  • the binding operation can be, for example, a click operation on the control 308 in Figure 3, that is, the control "automatic binding".
  • the template determination window is as shown in Figure 17.
  • When the terminal detects a click operation on the control 308, the template determination window shown in Figure 17 is displayed. In Figure 17, "Cartoon Game", "Default Game" and "Digital Human Game" respectively represent game projects with different art styles, and therefore represent different art styles; "Protagonist" and "Non-Player Character" are object types. It can be seen that each art style corresponds to at least one object type.
  • the selection completion operation may be, for example, a click operation on the control 1702 in FIG. 17 .
  • the mapping relationship between art styles and style expression templates may be pre-stored in the terminal, and the mapping relationship between object types and facial skeleton information under each art style may also be pre-stored in the terminal; both mapping relationships are pre-generated.
  • the terminal can determine the style expression template that matches the target art style based on the mapping relationship between art styles and style expression templates, and can determine the facial skeleton information corresponding to the target object type based on the mapping relationship between object types and facial skeleton information under the art style.
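  • The two pre-stored mapping relationships can be modelled as simple dictionaries, as in the sketch below; the style names, object types, bone counts and bone names are illustrative placeholders, not values from the patent.

```python
# Hypothetical pre-stored mappings, mirroring the two relationships described above:
# art style -> style expression template, and (art style, object type) -> facial
# skeleton information (here just a bone count and some bone names).
STYLE_TO_TEMPLATE = {
    "cartoon": "cartoon_expression_template",
    "digital_human": "digital_human_expression_template",
}
SKELETON_INFO = {
    ("cartoon", "protagonist"): {"bone_count": 80, "bone_names": ["jaw", "brow_L"]},
    ("cartoon", "npc"): {"bone_count": 40, "bone_names": ["jaw"]},
}

def resolve(target_style, target_object_type):
    """Look up the style expression template and the facial skeleton information."""
    template = STYLE_TO_TEMPLATE[target_style]
    skeleton_info = SKELETON_INFO[(target_style, target_object_type)]
    return template, skeleton_info
```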
  • the associated bones corresponding to each motor unit may be pre-specified in the style expression template.
  • the associated bones of the motor unit refer to the bones that are required to realize the action of the motor unit.
  • the terminal may determine the bone identifiers of the facial bones of the virtual object of the target object type based on the facial skeleton information. Where the facial skeleton information includes bone identifiers, the terminal can extract the bone identifiers from the facial skeleton information. Where the facial skeleton information includes the number of bones, each bone identifier corresponding to that number of bones is generated in advance; the bone identifier can be, for example, a bone name, and the terminal can determine each bone identifier corresponding to the number of bones as a bone identifier of the facial bones of the virtual object of the target object type.
  • the terminal can determine, from the bone structure of the target three-dimensional facial model, the bones identified by the bone identifiers. If a determined bone is an associated bone of a certain motion unit in the style expression template, the determined bone is bound to that motion unit, thereby binding the motion units in the style expression template to the relevant bones in the skeletal structure. After binding, the bound bones can be driven to move based on the motion units to produce an expression that conforms to the corresponding art style. If the style expression templates are different, the information of at least one motion unit in the templates will be different, so different style expression templates produce different expressions.
  • In the above embodiments, object types corresponding to multiple art styles are displayed, so that for each art style, expressions that conform to that art style can be quickly generated based on the facial expression processing method provided in this application, which improves the efficiency of generating expressions that meet the needs of the art style.
  • the method further includes: displaying an information editing area of the motion units in the style expression template while displaying the face of the virtual object; in response to an information editing operation triggered in the information editing area of a target motion unit, updating the information of the target motion unit; and driving, based on the updated information of the target motion unit, the bones bound to the target motion unit to move, so as to update the facial expression of the virtual object.
  • the target motion unit can be any motion unit in the style expression template.
  • the information editing area of the target motor unit is an area used to edit the information of the target motor unit, and the information editing operation is an operation of editing the information of the target motor unit.
  • the information editing area can be any area that implements information editing, including but not limited to an input box or a sliding control.
  • the information editing operation includes, but is not limited to, inputting information into the input box or sliding operation on the sliding control.
  • Each motion unit can have its own information editing area, as shown in Figure 24.
  • Controls 2402 and 2404 are the two information editing areas of the motion unit 01. Control 2402 is an input box, and control 2404 is a sliding control.
  • the terminal displays the editing area of the style expression template in response to the template editing operation, and displays the information editing area of the motor unit in the editing area of the style expression template.
  • the template editing operation can be triggered in the art tool.
  • the art tool can provide a template editing entrance.
  • the template editing operation is a triggering operation for the template editing entrance.
  • when the terminal displays the template editing entrance in the art tool, it responds to the trigger operation for the template editing entrance by displaying the editing area of the style expression template. The template editing entrance is, for example, the control 2400 in Figure 24, that is, the control "expression editor". When the trigger operation on the control 2400 is detected, the editing area 2406 of the style expression template on the right is displayed, and the information editing areas of the motion units are displayed in the editing area 2406; for example, the two information editing areas of the motion unit 01, that is, the control 2402 and the control 2404, are displayed.
  • when displaying the template editing entrance in the art tool, the terminal displays the editing area of the style expression template in response to a triggering operation for the template editing entrance while displaying the face of the virtual object; that is, while the virtual object's face is displayed, the information editing area of the motion units in the style expression template is also displayed.
  • the window on the left shows the face of the virtual object
  • the window on the right shows the information editing area of the motion unit.
  • the information editing area has two states: a frozen state and an activated state.
  • when the information editing area is in the frozen state, the terminal stops responding to operations on the information editing area, for example, stops responding to the information editing operation.
  • when the information editing area is in the activated state, the terminal can respond to operations on the information editing area, for example, respond to the information editing operation.
  • the frozen state can also be understood as exiting the editing mode, and the activated state can also be understood as entering the editing mode.
  • when the information editing area of the target motion unit is in the frozen state, the terminal can display an activation control for the target motion unit. When the terminal detects a trigger operation on the activation control, it updates the activation control to a freezing control and updates the information editing area of the target motion unit to an editable mode, so that the terminal can respond to a trigger operation, such as a click operation, on the information editing area.
  • when the information editing area of the target motion unit is in the activated state, the terminal can display a freezing control for the target motion unit. When the terminal detects a trigger operation on the freezing control, it updates the freezing control to an activation control and updates the information editing area of the target motion unit to a non-editable mode, so that the terminal stops responding to trigger operations, such as click operations, on the information editing area.
  • for example, the information editing area of motion unit 01 in the first row is in an editable mode, that is, it has entered the editing mode, and the control "freeze data" is a freezing control; the information editing area of motion unit 01 in the second row is in a non-editable mode, that is, it has exited the editing mode, and the control "activate data" is an activation control.
  • In the above embodiments, the status of the information editing area is updated from the frozen state to the activated state so that the information editing area enters the editing mode, thereby providing the ability to edit the information of the motion unit; when there is no need to edit the information of the motion unit, the status of the information editing area is kept in the frozen state so that the information editing area is in a non-editable mode, thus preventing the information from being changed by misoperation of the information editing area.
  • In the above embodiments, the editing area of the motion units in the style expression template is displayed, and the expression of the virtual object's face is updated based on the updated information of the motion units, so that after the style expression template has been determined, the information of the motion units in the style expression template can still be updated in a visual way to optimize the style expression template, thereby optimizing the expression, improving the flexibility of facial expression processing, and improving the efficiency of facial expression processing.
  • the method further includes: displaying a bone update area of the motion units in the style expression template; and in response to a bone update operation triggered in the bone update area of a target motion unit, updating the bones bound to the target motion unit.
  • the bone update area can be a bone addition area or a bone deletion area.
  • the bone adding area is used to bind at least one new bone to the movement unit, and the bone deletion area is used to release the binding relationship between the movement unit and one or more bound bones. Multiple refers to at least two.
  • Each motion unit can have its own bone update area.
  • the bone update operation is an operation used to add or delete bones bound to the motion unit. For example, it can be a click operation on the bone update area.
  • the terminal may display the bones that are not bound to the target motion unit in the skeletal structure of the target three-dimensional facial model on the face of the virtual object, and the terminal may display the bones selected by the selection operation as selected in response to the selection operation for the displayed bones.
  • the terminal detects a bone adding operation triggered in the bone adding area of the target motion unit, it can bind the selected bone to the target motion unit.
  • the target motion unit can be, for example, the motion unit 29 in Figure 25.
  • the bone adding area can be, for example, the control 2502 in Figure 25, that is, the control "Add Bones".
  • When the terminal detects a click operation on the control 2502, it binds the selected bones on the face of the virtual object that are not yet bound to motion unit 29 to motion unit 29.
  • the corresponding English representation of "add bones” can be addJoints.
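  • To make the add/remove-bone operations concrete, the following is a minimal Python sketch of a motion unit as a small data structure with add and remove bindings. The class and method names (MotionUnit, add_bones, remove_bones) and the sample bone names are illustrative assumptions rather than identifiers used by the art tool.

```python
from dataclasses import dataclass, field

@dataclass
class MotionUnit:
    """A facial action unit (AU) together with the bones it drives."""
    au_id: str                           # e.g. "AU29"
    intensity: float = 0.0               # motion strength, 0.0 - 1.0
    direction: tuple = (0.0, 0.0, 0.0)   # motion direction
    bones: set = field(default_factory=set)

    def add_bones(self, bone_names):
        """Bind additional bones to this motion unit (the 'addJoints' action)."""
        self.bones.update(bone_names)

    def remove_bones(self, bone_names):
        """Release the binding between this motion unit and the given bones."""
        self.bones.difference_update(bone_names)

# Example: bind two selected jaw bones to motion unit 29, then unbind one.
au29 = MotionUnit(au_id="AU29")
au29.add_bones({"jaw_l", "jaw_r"})
au29.remove_bones({"jaw_r"})
print(au29.bones)  # {'jaw_l'}
```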
  • the terminal can display the skeleton display control of the motor unit in the style expression template while displaying the face of the virtual object.
  • the skeleton display control is used to trigger the display of the bones bound to the corresponding motor unit.
  • Each motion unit can correspond to its own skeleton display control.
  • the terminal receives a trigger operation of the skeleton display control for the target motion unit, the bones bound to the target motion unit can be displayed on the face of the virtual object.
  • the target movement unit may be, for example, the movement unit 29 in FIG. 25
  • the skeleton display control may be, for example, the control 2504 in FIG. 25 , that is, the control "enable gesture".
  • the corresponding English representation of "enable pose" can be enablePose.
  • The terminal may respond to a selection operation on the displayed bones bound to the target motion unit and display the bones selected by the selection operation as selected; when the terminal detects a bone deletion operation triggered in the bone deletion area of the target motion unit, it unbinds the selected bones from the target motion unit.
  • The terminal can display the bones bound to the target motion unit, update the position of the bones when detecting a position change operation on the displayed bones, and update the expression of the virtual object's face as the bone positions are updated.
  • Each motion unit can have its own corresponding posture recording control; the terminal can display the posture recording control corresponding to the target motion unit, and the posture recording control of the target motion unit is used to record the updated positions of the bones bound to the target motion unit.
  • the gesture recording control can be the control 2506 in Figure 25, that is, the control "Record Posture”.
  • the corresponding English representation of "record pose" can be recordPose.
  • In the above embodiment, the bone update area of the motion unit in the style expression template is displayed, so that the bone update area can be used to optimize the bones bound to the motion unit and improve the accuracy of expression control.
  • In some embodiments, the information editing area and the skeleton update area are displayed in an editing window; the method further includes: in response to a window closing operation on the editing window, displaying an expression template export trigger control; and, in response to a trigger operation on the expression template export trigger control, exporting the updated expression template; the updated expression template is a template obtained by updating the information of at least one motion unit, or its bound skeleton, in the style expression template.
  • the editing window is a window provided by the art tool
  • the editing window refers to the window that displays the editing area of the style expression template
  • the window closing operation is an operation that triggers the closing of the editing window.
  • the expression template export trigger control is used to trigger the export of updated expression templates.
  • the updated expression template is a template obtained by updating the information of the motion units in the style expression template.
  • Exporting refers to copying and storing the updated expression template from the art software including the art tool to outside the art software. Exporting can also be understood as saving and storing the updates to the style expression template.
  • the editing window may be, for example, the area displaying the information editing area and the bone update area in FIG. 27
  • The window closing operation may be, for example, a trigger operation on the control 2702 in FIG. 27, that is, the control "×".
  • When the terminal detects a window closing operation on the editing window, it displays the expression template export window, and the expression template export trigger control is displayed in the expression template export window.
  • When a trigger operation, such as a click operation, on the expression template export trigger control is detected, an updated expression template is generated and the updated expression template is displayed.
  • The expression template export window is, for example, window 2704 in Figure 27, and the expression template export trigger control is, for example, control 2706 in window 2704.
  • When the window closing operation is detected, window 2704 is displayed.
  • When a trigger operation on control 2706 is detected, the updated expression template 2708 is displayed.
  • the terminal can import a style expression template external to the art software into the art software, and use the imported style expression template as an expression template corresponding to the specified art style.
  • the specified art style can be specified as needed.
  • For example, when the style expression template in the cartoon style is updated, the updated style expression template is stored, and the updated expression template is imported as a new style expression template in the cartoon style, so that the updated expression template can be used to process the expressions of other characters in the cartoon style.
  • For example, after clicking control 2800, that is, the control "Import Expression Template", and determining the expression template 2802 to be imported, the expression template 2802 can be imported.
  • the "expression movement unit editor” is an editor that provides the function of editing movement units.
  • The expression movement unit editor is an editor in the art tool. After the expression movement unit editor obtains the updated expression template, the template can be imported into the art tool as a new style expression template and edited again, so that the style expression template can be continuously optimized and its accuracy improved.
  • the updated expression template is exported, so that the updated result of the style expression template can be stored, so that it can be reused, and the processing efficiency of facial expressions is improved.
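  • As an illustration of the export/import flow, the sketch below serializes an edited template to a file and reads it back as a new style expression template. The application does not specify the asset format, so JSON, the file name, and the field names are assumptions for demonstration only.

```python
import json

def export_expression_template(template: dict, path: str) -> None:
    """Save the updated style expression template outside the art software."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(template, f, ensure_ascii=False, indent=2)

def import_expression_template(path: str) -> dict:
    """Load a previously exported template as a new style expression template."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

# Example: a cartoon-style template whose motion unit 29 was re-bound to new bones.
updated = {
    "art_style": "cartoon",
    "motion_units": {
        "AU29": {"intensity": 0.6, "bones": ["jaw_l", "jaw_m"]},
    },
}
export_expression_template(updated, "cartoon_expression_template.json")
reimported = import_expression_template("cartoon_expression_template.json")
assert reimported == updated
```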
  • In some embodiments, the three-dimensional facial model belongs to a virtual object in a target business scenario; the target expression control combination is an expression control combination that matches the target business scenario; the target expression control combination includes a plurality of first expression controls, and the number of first expression controls is determined based on the scene complexity of the target business scenario; the plurality of first expression controls are used to globally adjust the facial expression of the virtual object. In response to the adjustment operation for at least one expression control, causing the information of the motion unit associated with the expression control to drive the bound skeletal motion to control the face of the virtual object to produce an expression that conforms to the target art style includes: in response to an adjustment operation for at least one expression control, when the targeted expression control is a first expression control, determining the adjusted attribute value of the first expression control; and, based on the adjusted attribute value of the first expression control and the information of the motion unit associated with the first expression control, driving the movement of the bones bound to the associated motion unit to control the face of the virtual object to produce an expression that conforms to the target art style.
  • the three-dimensional facial model is a three-dimensional mesh model representing the face of the virtual object in the target business scene.
  • Business scenarios include but are not limited to at least one of mobile games, client games, or digital human games. Different business scenarios have different levels of complexity. The same business scenario can have multiple art styles, so the process of determining a style expression template that matches the target art style may include: determining a style expression template that matches the target art style in the target business scenario.
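  • As a rough illustration of how the number of first expression controls might follow scene complexity, the mapping below uses the mobile-game / client-game / digital-human main-control counts (38, 54, 75) given as an example elsewhere in this description; the dictionary, its keys, and the function name are hypothetical.

```python
# Hypothetical mapping from business scenario to the number of first expression
# controls ("main controls"); the counts echo the mobile game / client game /
# digital human example in this description and are illustrative, not normative.
MAIN_CONTROL_COUNT = {"mobile_game": 38, "client_game": 54, "digital_human": 75}

def main_controls_for(scenario: str) -> int:
    """Number of first expression controls to display for a business scenario."""
    return MAIN_CONTROL_COUNT[scenario]

print(main_controls_for("client_game"))  # 54
```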
  • The adjustment operation on an expression control is used to change the attribute value of the expression control. Different attribute values produce different degrees of adjustment of the expression: the larger the attribute value, the greater the degree of adjustment. For example, when the expression control is the surprised emotion control, the greater the attribute value of the expression control, the greater the degree of surprise.
  • the property values of the expression control change when the expression control is adjusted.
  • When the terminal detects an adjustment operation for the first expression control, it determines the adjusted attribute value of the first expression control and, based on the adjusted attribute value of the first expression control and the information of the motion unit associated with the first expression control, drives the movement of the bones bound to the associated motion unit to control the face of the virtual object to produce an expression that conforms to the target art style.
  • the terminal can determine the new position of the bone bound to the movement unit associated with the first expression control based on the adjusted attribute value and the information of the movement unit associated with the first expression control, and update the position of the bone to The new position is used to realize the movement of the bones, and the movement of the vertices in the facial skin model bound by the bones is controlled according to the movement of the bones, thereby achieving control of the facial expressions of the virtual object.
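  • One simplified way to read this driving chain (control attribute value → associated motion units → bound bones) is as a linear scaling of per-bone offsets stored in each motion unit. The sketch below illustrates that reading only; the exact mapping used by the tool is not disclosed, and the data layout here is assumed.

```python
import numpy as np

def drive_bones(control_value, motion_units, rest_positions):
    """Compute new bone positions from an expression control's attribute value.

    control_value  : float in [0, 1], the adjusted attribute value of the control
    motion_units   : list of dicts {"bones": [...], "offsets": {bone: (dx, dy, dz)}}
                     associated with the control
    rest_positions : dict bone -> np.array(3,) rest position
    """
    new_positions = {b: p.copy() for b, p in rest_positions.items()}
    for au in motion_units:
        for bone in au["bones"]:
            offset = np.asarray(au["offsets"][bone], dtype=float)
            # Scale the full-intensity offset by the control's attribute value.
            new_positions[bone] += control_value * offset
    return new_positions

# Example: a "mouth open" control at 40% drives the jaw bone downward.
rest = {"jaw": np.array([0.0, -1.0, 0.0])}
aus = [{"bones": ["jaw"], "offsets": {"jaw": (0.0, -0.5, 0.0)}}]
print(drive_bones(0.4, aus, rest)["jaw"])  # [ 0.  -1.2  0. ]
```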
  • The first expression control in area 1302 in Figure 13 is a movable control, so the adjustment of the first expression control can be realized by moving it; in response to the movement operation on the first expression control in area 1302 of Figure 13, the terminal can update the facial expression of the virtual object.
  • Since the number of first expression controls is determined based on the scene complexity, a smaller number of first expression controls can be used for simple business scenarios and a larger number for complex business scenarios, which improves the processing efficiency of facial expressions in different business scenarios.
  • In some embodiments, the target expression control combination also includes a second expression control, and the second expression control is used to adjust the details of the facial expression of the virtual object; the method also includes: when the targeted expression control is a second expression control, determining the adjusted attribute value of the second expression control; and, based on the adjusted attribute value of the second expression control and the information of the motion unit associated with the second expression control, driving the movement of the skeleton bound to the associated motion unit to adjust the details of the facial expression of the virtual object.
  • each second expression control is used to control the movement of a single bone to control the dynamic changes of facial muscles to produce expressions.
  • When the terminal detects an adjustment operation for the second expression control, it determines the adjusted attribute value of the second expression control and, based on the adjusted attribute value of the second expression control and the information of the motion unit associated with the second expression control, drives the skeletal movement bound to the associated motion unit to adjust the details of the facial expression of the virtual object.
  • the terminal can determine the new position of the skeleton bound to the motion unit associated with the second expression control based on the adjusted attribute value and the information of the motion unit associated with the second expression control, and update the position of the skeleton to The new position is used to realize the movement of the bones, and the movement of the vertices in the facial skin model bound by the bones is controlled according to the movement of the bones, thereby achieving control of the facial expressions of the virtual object.
  • Taking the second expression control in area 1306 in Figure 13 as an example, the second expression control in area 1306 is a movable control, so its adjustment can be realized by moving it; in response to the movement operation on the second expression control in area 1306, the terminal can update the details of the facial expression of the virtual object.
  • In the above embodiment, since the second expression control is used to adjust the details of the facial expression of the virtual object, a function of adjusting the details of the facial expression of the virtual object is provided, thereby improving the accuracy of facial expression processing.
  • In some embodiments, the target expression control combination includes a shortcut expression control, and the shortcut expression control matches a specified expression. In response to the adjustment operation for at least one expression control, causing the information of the motion unit associated with the expression control to drive the bound skeletal movement to control the face of the virtual object to produce an expression that conforms to the target art style includes: in response to the adjustment operation for the shortcut expression control, obtaining the adjusted attribute value of the shortcut expression control; and, based on the adjusted attribute value and the information of the motion unit associated with the shortcut expression control, driving the movement of the bones bound to the associated motion unit to control the face of the virtual object to generate the specified expression matched by the shortcut expression control.
  • the shortcut expression control matches a specified expression.
  • One shortcut expression control is used to call up a specified expression.
  • For example, a shortcut expression control matching a surprised expression is used to evoke a surprised expression.
  • The shortcut expression control includes, but is not limited to, at least one of an emotion control or a mouth shape control.
  • When the terminal detects an adjustment operation for the shortcut expression control, it determines the adjusted attribute value of the shortcut expression control and, based on the adjusted attribute value of the shortcut expression control and the information of the motion unit associated with the shortcut expression control, drives the skeletal movement bound to the associated motion unit to control the face of the virtual object to generate the specified expression matched by the shortcut expression control.
  • the terminal can determine the new position of the bone bound to the motion unit associated with the shortcut expression control based on the adjusted attribute value and the information of the motion unit associated with the shortcut expression control, and update the position of the bone to the new position. position, thereby realizing the movement of the bones, and controlling the movement of vertices in the facial skin model bound by the bones according to the movement of the bones, to control the face of the virtual object to produce the specified expression matched by the shortcut expression control.
  • In the above embodiment, since the shortcut expression control matches a specified expression, the shortcut expression control can be used to control the virtual object's face to generate the specified expression matched by the shortcut expression control, thereby improving the efficiency of generating the specified expression.
  • In some embodiments, driving the skeletal movement bound to the associated motion unit to control the face of the virtual object to generate the specified expression matched by the shortcut expression control includes: when the shortcut expression control is an emotion control, determining the emotion change information corresponding to the emotion control; and, based on the emotion change information and the information of the motion unit associated with the emotion control, driving the movement of the bones bound to the associated motion unit to control the virtual object's face to present the specified emotion matched by the emotion control.
  • the specified expression matched by the emotion control is an expression generated by the specified emotion, which includes but is not limited to at least one of smile, anger, surprise, fear, disgust, sadness, contempt or anger.
  • When the terminal detects an adjustment operation for the emotion control, it determines the adjusted attribute value of the emotion control and, based on the adjusted attribute value and the information of the motion unit associated with the emotion control, determines the new positions of the bones bound to the motion unit associated with the emotion control; it then updates the positions of the bones to the new positions to realize the movement of the bones, and controls the movement of the vertices in the facial skin model bound to the bones according to the movement of the bones, so as to control the face of the virtual object to produce an expression matching the specified emotion.
  • Take the emotion control corresponding to the smile in area 1308 in Figure 13 as an example.
  • the emotion control in area 1308 is a sliding control.
  • the adjustment of the emotion control can be realized by sliding the emotion control.
  • In response to the sliding operation on the emotion control corresponding to the smile in area 1308, the terminal can control the face of the virtual object to produce a smiling expression.
  • In the above embodiment, since the emotion control is used to generate an expression of a specified emotion, the emotion control can be used to control the face of the virtual object to generate the specified-emotion expression matched by the emotion control, thereby improving the efficiency of generating an expression of the specified emotion.
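  • A shortcut or emotion control can be pictured as a preset that fans one slider value out to several associated motion units at fixed relative strengths, as in the AU combinations per expression shown in Figure 6. The presets below are hypothetical; the AU numbers for "smile" and "surprise" follow the common FACS convention rather than values given in this application.

```python
# Hypothetical presets: emotion -> {AU id: relative intensity at slider value 1.0}
EMOTION_PRESETS = {
    "smile":    {"AU6": 0.8, "AU12": 1.0},
    "surprise": {"AU1": 1.0, "AU2": 1.0, "AU5": 0.7, "AU26": 0.9},
}

def apply_emotion(slider_value: float, emotion: str) -> dict:
    """Turn one emotion-control slider value into per-AU intensities."""
    preset = EMOTION_PRESETS[emotion]
    return {au: slider_value * weight for au, weight in preset.items()}

# Example: sliding the smile control to 0.5 drives AU6 to 0.4 and AU12 to 0.5;
# each AU then moves its bound bones as in the earlier sketch.
print(apply_emotion(0.5, "smile"))  # {'AU6': 0.4, 'AU12': 0.5}
```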
  • In some embodiments, driving the skeletal movement bound to the associated motion unit to control the face of the virtual object to generate the specified expression matched by the shortcut expression control includes: when the shortcut expression control is a mouth shape control, determining the mouth shape change information corresponding to the mouth shape control; and, based on the mouth shape change information and the information of the motion unit associated with the mouth shape control, driving the skeletal movement bound to the associated motion unit to control the virtual object's face to present the specified mouth shape matched by the mouth shape control.
  • The specified expression matched by the mouth shape control is the expression produced by the specified mouth shape, and the specified mouth shape includes but is not limited to at least one of the AHH, AAA, EH and other mouth shapes. Mouth shape controls can also be voice-driven.
  • When the terminal detects an adjustment operation for the mouth shape control, it determines the adjusted attribute value of the mouth shape control and, based on the adjusted attribute value and the information of the motion unit associated with the mouth shape control, determines the new positions of the bones bound to the motion unit associated with the mouth shape control; it then updates the positions of the bones to the new positions to realize the movement of the bones, and controls the movement of the vertices in the facial skin model bound to the bones according to the movement of the bones.
  • the mouth shape control in area 1304 is a sliding control.
  • the adjustment of the mouth shape control can be realized by sliding the mouth shape control.
  • In response to the sliding operation on the mouth shape control corresponding to the AAA mouth shape in area 1304, the terminal can control the face of the virtual object to produce the AAA mouth shape expression.
  • In the above embodiment, since the mouth shape control is used to generate an expression with a specified mouth shape, the mouth shape control can be used to control the face of the virtual object to generate the specified-mouth-shape expression matched by the mouth shape control, improving the efficiency of generating the expression of the specified mouth shape.
  • The affine transformation is used to change the positions of the vertices in the standard three-dimensional facial model, so that the standard three-dimensional facial model becomes close to the target three-dimensional facial model in shape and position; as a result, the shape of the affine-transformed three-dimensional facial model is basically consistent with the shape of the target three-dimensional facial model, and the space enclosed by the affine-transformed three-dimensional facial model is basically the same as the space enclosed by the target three-dimensional facial model, so the affine-transformed three-dimensional facial model can represent the surface of the virtual object's face.
  • the key points of the target three-dimensional facial model are called target key points.
  • There are multiple target key points, where multiple refers to at least two.
  • Target key points can be automatically generated or predicted based on user-marked key points.
  • the key points represent the positions of key parts on the face, and the key parts include but are not limited to at least one of eyebrows, eyes, nose, or chin.
  • the terminal can obtain the target key points of the target three-dimensional facial model, and obtain an affine transformation matrix based on the coordinate transformation relationship between the target key points and the corresponding standard key points.
  • The affine transformation matrix is used to transform the coordinates of a standard key point into the coordinates of the target key point corresponding to that standard key point; it can also be used to transform the coordinates of a target key point into the coordinates of the standard key point corresponding to that target key point.
  • The model matching operation is used to trigger the terminal to generate the affine-transformed three-dimensional facial model.
  • the vertices in the target three-dimensional facial model may be called target vertices, and the vertices in the three-dimensional facial model after affine transformation may be called affine vertices.
  • The terminal can determine, from the affine-transformed three-dimensional facial model, the affine vertex closest to each target vertex and determine the bones bound to that affine vertex as the bones bound to the target vertex; the bones bound to all target vertices together form the skeletal structure of the target three-dimensional facial model.
  • In the above embodiment, affine transformation is performed on the standard three-dimensional facial model, so that the bones bound to the vertices in the target three-dimensional facial model are determined based on the bones bound to the vertices of the affine-transformed three-dimensional facial model; in this way, the skeletal structure of the target three-dimensional facial model can be determined quickly and accurately, as illustrated in the sketch below.
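  • The bone transfer just described is essentially a nearest-neighbour lookup from target vertices to affine vertices. A minimal sketch under that assumption follows; SciPy's KD-tree is used only as a convenient lookup structure and is not part of the application.

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_bone_bindings(target_vertices, affine_vertices, affine_bones):
    """For each target vertex, copy the bones bound to the closest affine vertex.

    target_vertices : (N, 3) array of vertices of the target 3D facial model
    affine_vertices : (M, 3) array of vertices of the affine-transformed model
    affine_bones    : list of length M; affine_bones[i] is the bone list of vertex i
    """
    tree = cKDTree(affine_vertices)
    _, nearest = tree.query(target_vertices)       # index of closest affine vertex
    return [affine_bones[i] for i in nearest]      # bones bound to each target vertex

# Example with toy data: two target vertices, two affine vertices.
affine_v = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
affine_b = [["jaw"], ["cheek_l"]]
target_v = np.array([[0.1, 0.0, 0.0], [0.9, 0.1, 0.0]])
print(transfer_bone_bindings(target_v, affine_v, affine_b))  # [['jaw'], ['cheek_l']]
```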
  • In some embodiments, performing affine transformation on the standard three-dimensional facial model based on the target three-dimensional facial model to obtain the affine-transformed three-dimensional facial model includes: predicting the target key points corresponding to the target three-dimensional facial model based on the reference key points of the target three-dimensional facial model; determining the coordinate transformation relationship between the target key points and the corresponding standard key points to obtain the affine transformation matrix, where the standard key points are the key points of the standard three-dimensional facial model; and using the affine transformation matrix to perform affine transformation on the vertex positions in the standard three-dimensional facial model to obtain the affine-transformed three-dimensional facial model.
  • the reference key points may be key points obtained by marking.
  • the reference key points may be key points obtained by marking the positions of the mouth, chin, and nose.
  • Target key points are predicted by reference key points.
  • the affine transformation matrix reflects the coordinate transformation relationship between the target key points and the corresponding standard key points.
  • The terminal can respond to a key point labeling operation; the key point labeling operation is used to mark key points on the target three-dimensional facial model, and the key points marked by the key point labeling operation can be determined as reference key points.
  • For example, the terminal can respond to a key point labeling operation, determine the marked key points as reference key points and, in response to the model matching operation, predict the target key points corresponding to the target three-dimensional facial model based on the reference key points of the target three-dimensional facial model.
  • the key points marked by the key point marking operation are determined as reference key points, which improves the efficiency of determining the reference key points.
  • The terminal can perform a product operation on the coordinates of the vertices in the standard three-dimensional facial model and the affine transformation matrix; the result of the multiplication is a set of new coordinates, and the points at the new coordinates serve as the vertices of the affine-transformed three-dimensional facial model.
  • In the above embodiment, the affine transformation matrix is obtained according to the coordinate transformation relationship between key points, and the resulting affine transformation matrix can be used to perform affine transformation on the standard three-dimensional facial model to obtain a three-dimensional facial model that is close to the target three-dimensional facial model in position and shape, thereby improving the accuracy of the affine transformation.
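  • The coordinate-transformation relationship between corresponding key points can, for example, be estimated by least squares in homogeneous coordinates and then applied to every vertex of the standard model. The sketch below illustrates that idea; it assumes the key points are already paired and is not necessarily the solver used by the art tool.

```python
import numpy as np

def fit_affine(standard_pts, target_pts):
    """Least-squares 3D affine transform mapping standard key points to target key points.

    standard_pts, target_pts : (K, 3) arrays of corresponding key points (K >= 4).
    Returns a (4, 4) matrix A such that [x, y, z, 1] @ A ~= the target point.
    """
    K = standard_pts.shape[0]
    src = np.hstack([standard_pts, np.ones((K, 1))])          # homogeneous coordinates
    A, *_ = np.linalg.lstsq(src, target_pts, rcond=None)      # (4, 3) solution
    return np.hstack([A, np.array([[0.0], [0.0], [0.0], [1.0]])])  # pad to 4x4

def apply_affine(vertices, A):
    """Apply the affine matrix to all vertices of the standard 3D facial model."""
    V = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (V @ A)[:, :3]

# Example: a pure translation by (1, 2, 3) recovered from four key-point pairs.
std = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
tgt = std + np.array([1.0, 2.0, 3.0])
A = fit_affine(std, tgt)
print(np.round(apply_affine(std, A) - tgt, 6))  # all zeros
```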
  • In some embodiments, a facial expression processing method is provided. The method can be executed by a terminal or a server, or executed jointly by a terminal and a server. Taking the application of the method to a terminal as an example, it includes the following steps:
  • Step 3002 Based on the reference key points of the target three-dimensional facial model, predict the target key points corresponding to the target three-dimensional facial model.
  • Step 3004 Determine the coordinate transformation relationship between the target key point and the corresponding standard key point, and obtain the affine transformation matrix; the standard key point is the key point of the standard three-dimensional facial model.
  • Step 3006 Use an affine transformation matrix to perform affine transformation on the vertex positions in the standard three-dimensional facial model to obtain a three-dimensional facial model after affine transformation.
  • Step 3008 Based on the bones bound to the vertices in the affine transformed three-dimensional facial model, determine the bones bound to the vertices in the target three-dimensional facial model to obtain the bone structure of the target three-dimensional facial model.
  • Step 3010 Display the object types corresponding to multiple art styles; the multiple art styles include the target art style.
  • Step 3012 When the target object type under the target art style is selected, determine the facial skeleton information corresponding to the target object type, and determine a style expression template that matches the target art style.
  • Step 3014 Bind the motion units in the style expression template to related bones in the skeletal structure based on the facial skeleton information.
  • Step 3016 Display the target expression control combination, and associate the expression controls in the target expression control combination with at least one motor unit in the style expression template.
  • Step 3018 Skin the skeletal structure to generate the face of the virtual object.
  • Step 3020 In response to the adjustment operation for at least one expression control, information of the motion unit associated with the expression control drives the bound skeletal motion to control the face of the virtual object to produce an expression that conforms to the target art style.
  • the facial expression processing method provided by this application defines different style accuracy for different projects.
  • Each template maps to different configuration information (different numbers of bones, different numbers of expression AU parameters, and so on), so that it can cope with the expressive tension of different styles, and templates of different precision are customized with exclusive controller components. Therefore, using the facial expression processing method provided by this application, expression data can be automatically generated, adapted to the expression styles and technical levels of different games, and converted into asset files that can be manipulated by artists to facilitate further production of expression animation sequences.
  • The facial expression processing method provided by this application can be adapted to the business requirements of various mobile games, client games, consoles, and CG (computer graphics) productions. It has strong versatility, a high degree of tool automation, and simple, free animation control that is neither exaggerated nor stiff. It effectively helps a business achieve 1-to-N mass production and greatly improves the production efficiency and the accuracy and quality of expression data.
  • The facial expression processing method provided by this application can be applied to any scene that requires facial expression processing, including but not limited to at least one of scenarios such as film and television special effects, visual design, games, animation, virtual reality (VR), industrial simulation and digital cultural creation.
  • When used in scenes such as film and television special effects, visual design, games, animation, virtual reality, industrial simulation, and digital cultural creation, the method can speed up facial expression processing while ensuring the accuracy of facial expressions.
  • the facial expression processing method provided by this application can generate the required expressions for virtual objects in the game.
  • the facial expression processing method provided by this application can generate the required expressions for virtual objects in animation.
  • Embodiments of the present application also provide a facial expression processing device for implementing the above-mentioned facial expression processing method. The solution to the problem provided by this device is similar to the solution described in the above method; therefore, for the specific limitations in the embodiments of one or more facial expression processing devices provided below, reference may be made to the above limitations on the facial expression processing method, which will not be repeated here.
  • A facial expression processing device is provided, including: a skeleton determination module 3102, a skeleton binding module 3104, a control display module 3106, a bone skinning module 3108 and an expression control module 3110, wherein:
  • the skeleton determination module 3102 is used to determine the skeleton structure of the three-dimensional facial model belonging to the target art style.
  • the skeleton binding module 3104 is used to determine a style expression template that matches the target art style, and bind the motion units in the style expression template to relevant bones in the skeletal structure.
  • the control display module 3106 is used to display the target expression control combination and associate the expression control in the target expression control combination with at least one motor unit in the style expression template.
  • the bone skinning module 3108 is used to skin the bone structure to generate the face of the virtual object.
  • the expression control module 3110 is configured to respond to the adjustment operation of at least one expression control, so that the information of the motion unit associated with the expression control drives the bound skeletal motion to control the face of the virtual object to produce an expression that conforms to the target art style.
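  • For orientation only, the module layout above (skeleton determination 3102, skeleton binding 3104, control display 3106, bone skinning 3108, expression control 3110) could be sketched as a class whose methods stand in for the modules; the names are placeholders and the bodies are stubs, not an implementation of the device.

```python
class FacialExpressionProcessingDevice:
    """Structural sketch mirroring the module layout described above; bodies are stubs."""

    def determine_skeleton(self, target_face_model):
        """Skeleton determination module 3102: derive the skeletal structure."""
        ...

    def bind_style_template(self, art_style, skeleton):
        """Skeleton binding module 3104: pick the style expression template and bind its motion units."""
        ...

    def display_controls(self, business_scenario, template):
        """Control display module 3106: show the target expression control combination."""
        ...

    def skin(self, skeleton, skin_model):
        """Bone skinning module 3108: skin the skeletal structure to generate the virtual object's face."""
        ...

    def drive_expression(self, expression_control, adjusted_value):
        """Expression control module 3110: drive bound bones from a control adjustment."""
        ...
```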
  • In some embodiments, the skeleton binding module is also used to: display the object types corresponding to multiple art styles, the multiple art styles including the target art style; when the target object type under the target art style is selected, determine the facial skeleton information corresponding to the target object type and determine the style expression template that matches the target art style; and bind the motion units in the style expression template to the relevant bones in the skeletal structure based on the facial skeleton information.
  • In some embodiments, the device further includes a motion unit update module, which is configured to: display the information editing area of the motion unit in the style expression template while displaying the face of the virtual object; in response to an information editing operation triggered in the information editing area of the target motion unit, update the information of the target motion unit; and, based on the updated information of the target motion unit, drive the skeletal motion bound to the target motion unit to update the facial expression of the virtual object.
  • In some embodiments, the motion unit update module is also used to: display the bone update area of the motion unit in the style expression template; and, in response to a bone update operation triggered in the bone update area of the target motion unit, update the bones bound to the target motion unit.
  • the information editing area and the skeleton update area are displayed in the editing window;
  • the device also includes a template export module, the template export module is configured to: in response to a window closing operation for the editing window, display the expression template export trigger control ; In response to the triggering operation of the expression template export trigger control, export the updated expression template;
  • the updated expression template is a template obtained by updating the information of at least one motion unit or the bound skeleton in the style expression template.
  • the information editing area has two states: a frozen state and an activated state.
  • the device is also configured to: when the information editing area is in a frozen state, stop responding to operations on the information editing area; when the information editing area is in a frozen state, In the activated state, respond to operations on the information editing area.
  • In some embodiments, the three-dimensional facial model belongs to a virtual object in a target business scenario; the target expression control combination is an expression control combination that matches the target business scenario; the target expression control combination includes a plurality of first expression controls, and the number of first expression controls is determined based on the scene complexity of the target business scenario; the plurality of first expression controls are used to globally adjust the facial expression of the virtual object; the expression control module is also used to: in response to the adjustment operation for at least one expression control, when the targeted expression control is a first expression control, determine the adjusted attribute value of the first expression control; and, based on the adjusted attribute value of the first expression control and the information of the motion unit associated with the first expression control, drive the movement of the bones bound to the associated motion unit to control the face of the virtual object to produce an expression that conforms to the target art style.
  • the target expression control combination also includes a second expression control
  • the second expression control is used to adjust the details of the facial expression of the virtual object
  • The expression control module is also used to: when the targeted expression control is a second expression control, determine the adjusted attribute value of the second expression control; and, based on the adjusted attribute value of the second expression control and the information of the motion unit associated with the second expression control, drive the skeletal movement bound to the associated motion unit to adjust the details of the virtual object's facial expression.
  • the target expression control combination includes a shortcut expression control, and the shortcut expression control matches the specified expression; the expression control module is also used to: in response to the adjustment operation for the shortcut expression control, obtain the adjusted properties of the shortcut expression control Value; based on the adjusted attribute value and the information of the motion unit associated with the shortcut expression control, drive the skeletal movement bound to the associated motion unit to control the face of the virtual object to produce the specified expression that matches the shortcut expression control.
  • the expression control module is also used to: when the shortcut expression control is an emotion control, determine the adjusted attribute value corresponding to the emotion control; based on the adjusted attribute value and the movement unit associated with the emotion control Information drives the skeletal movement bound to the associated motor unit to control the virtual object's facial presentation to match the specified emotion of the emotion control.
  • The expression control module is also used to: when the shortcut expression control is a mouth shape control, determine the mouth shape change information corresponding to the mouth shape control; and, based on the mouth shape change information and the information of the motion unit associated with the mouth shape control, drive the skeletal movement bound to the associated motion unit to control the virtual object's face to present the specified mouth shape matched by the mouth shape control.
  • the three-dimensional facial model belonging to the target art style is the target three-dimensional facial model; the skeleton determination module is also used to: perform affine transformation on the standard three-dimensional facial model based on the target three-dimensional facial model to obtain the affine-transformed three-dimensional facial model. Facial model; based on the bones bound to the vertices in the affine transformed three-dimensional facial model, determine the bones bound to the vertices in the target three-dimensional facial model to obtain the bone structure of the target three-dimensional facial model.
  • the skeleton determination module is also used to: predict the target key points corresponding to the target three-dimensional facial model based on the reference key points of the target three-dimensional facial model; determine the coordinates between the target key points and the corresponding standard key points The transformation relationship is used to obtain the affine transformation matrix; the standard key points are the key points of the standard three-dimensional facial model; the affine transformation matrix is used to perform affine transformation on the vertex positions in the standard three-dimensional facial model, and the three-dimensional facial model after affine transformation is obtained.
  • the device is further configured to: respond to a key point annotation operation when the target three-dimensional facial model is displayed; the key point annotation operation is used to annotate key points on the target three-dimensional facial model; and convert the key point annotation operations to The marked key points are determined as reference key points.
  • Each module in the above facial expression processing device can be implemented in whole or in part by software, hardware and combinations thereof.
  • Each of the above modules may be embedded in or independent of the processor of the computer device in the form of hardware, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in Figure 32.
  • the computer device includes a processor, a memory, an input/output interface (Input/Output, referred to as I/O), and a communication interface.
  • the processor, memory and input/output interface are connected through the system bus, and the communication interface is connected to the system bus through the input/output interface.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes non-volatile storage media and internal memory.
  • the non-volatile storage medium stores an operating system, computer-readable instructions and a database.
  • The internal memory provides an environment for the running of the operating system and the computer-readable instructions stored in the non-volatile storage medium.
  • the database of the computer device is used to store data involved in the facial expression processing method.
  • the input/output interface of the computer device is used to exchange information between the processor and external devices.
  • the communication interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer readable instructions when executed by the processor implement a facial expression processing method.
  • a computer device is provided.
  • the computer device may be a terminal, and its internal structure diagram may be as shown in Figure 33.
  • the computer device includes a processor, memory, input/output interface, communication interface, display unit and input device.
  • the processor, memory and input/output interface are connected through the system bus, and the communication interface, display unit and input device are connected to the system bus through the input/output interface.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes non-volatile storage media and internal memory.
  • The non-volatile storage medium stores an operating system and computer-readable instructions; the internal memory provides an environment for the running of the operating system and the computer-readable instructions stored in the non-volatile storage medium.
  • the input/output interface of the computer device is used to exchange information between the processor and external devices.
  • the communication interface of the computer device is used for wired or wireless communication with external terminals.
  • the wireless mode can be implemented through WIFI, mobile cellular network, NFC (Near Field Communication) or other technologies.
  • the computer readable instructions when executed by the processor implement a facial expression processing method.
  • The display unit of the computer device is used to form a visually visible picture, and can be a display screen, a projection device, or a virtual reality imaging device; the display screen can be a liquid crystal display or an electronic ink display.
  • the input device of the computer device can be a touch layer covered on the display screen, or it can be a button, trackball or touch control provided on the computer device shell. It can also be an external keyboard, trackpad or mouse.
  • Figures 32 and 33 are only block diagrams of partial structures related to the solution of the present application, and do not constitute a limitation on the computer equipment to which the solution of the present application is applied.
  • Computer equipment may include more or fewer components than shown in the figures, or some combinations of components, or have different arrangements of components.
  • In some embodiments, a computer device is provided, including a memory and one or more processors; computer-readable instructions are stored in the memory, and when executed by the processors, the computer-readable instructions cause the one or more processors to implement the steps in the above facial expression processing method.
  • In some embodiments, one or more non-volatile readable storage media are provided, having computer-readable instructions stored thereon which, when executed by one or more processors, cause the one or more processors to implement the steps in the above facial expression processing method.
  • a computer program product including computer-readable instructions that, when executed by a processor, implement the steps in the above facial expression processing method.
  • The user information involved in this application includes but is not limited to user device information and user personal information; the data involved includes but is not limited to data used for analysis, stored data, and displayed data.
  • The computer-readable instructions can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments.
  • Any reference to memory, database or other media used in the embodiments provided in this application may include at least one of non-volatile and volatile memory.
  • Non-volatile memory can include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like.
  • Volatile memory may include random access memory (Random Access Memory, RAM) or external cache memory, etc.
  • the databases involved in the various embodiments provided in this application may include at least one of a relational database and a non-relational database.
  • Non-relational databases may include blockchain-based distributed databases, etc., but are not limited thereto.
  • the processors involved in the various embodiments provided in this application may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing-based data processing logic devices, etc., and are not limited to this.

Abstract

The present application relates to a facial expression processing method and apparatus, a computer device, and a storage medium, and relates to the field of games. The method includes: determining the skeletal structure of a three-dimensional facial model belonging to a target art style (202); determining a style expression template matching the target art style, and binding the motion units in the style expression template to related bones in the skeletal structure (204); displaying a target expression control combination, and associating the expression controls in the target expression control combination with at least one of the motion units in the style expression template (206); skinning the skeletal structure to generate the face of a virtual object (208); and, in response to an adjustment operation on at least one of the expression controls, causing the information of the motion unit associated with the expression control to drive the bound bones to move, so as to control the face of the virtual object to produce an expression conforming to the target art style (210).

Description

Facial expression processing method and apparatus, computer device, and storage medium
This application claims priority to Chinese patent application No. 202210934106.3, filed with the China Patent Office on August 4, 2022 and entitled "Facial expression processing method and apparatus, computer device, and storage medium", the entire contents of which are incorporated herein by reference.
技术领域
本申请涉及计算机技术领域,特别是涉及一种面部表情处理方法、装置、计算机设备和存储介质。
背景技术
随着计算机技术的发展,虚拟对象在多个领域广泛应用,例如在动画、电影、沉浸交互、VR/AR等领域广泛应用,如何呈现生动细腻的虚拟对象面部的表情显得尤为重要。虚拟对象面部可以是具有美术风格的,美术风格包括但不限于是卡通、手游或二次元。
传统技术中,通常是美术人员采用手工的方式,对具有美术风格的虚拟对象面部的表情进行处理,以生成符合该美术风格的表情,从而耗费较多的时间,导致面部表情处理效率较低。
发明内容
根据本申请提供的各种实施例,提供一种面部表情处理方法、装置、计算机设备、计算机可读存储介质和计算机程序产品。
In one aspect, the present application provides a facial expression processing method, executed by a computer device. The method includes: determining the skeletal structure of a three-dimensional facial model belonging to a target art style; determining a style expression template matching the target art style, and binding the motion units in the style expression template to related bones in the skeletal structure; displaying a target expression control combination, and associating the expression controls in the target expression control combination with at least one of the motion units in the style expression template; skinning the skeletal structure to generate the face of a virtual object; and, in response to an adjustment operation on at least one of the expression controls, causing the information of the motion unit associated with the expression control to drive the bound bones to move, so as to control the face of the virtual object to produce an expression conforming to the target art style.
In another aspect, the present application further provides a facial expression processing apparatus. The apparatus includes: a skeleton determination module, configured to determine the skeletal structure of a three-dimensional facial model belonging to a target art style; a skeleton binding module, configured to determine a style expression template matching the target art style and bind the motion units in the style expression template to related bones in the skeletal structure; a control display module, configured to display a target expression control combination and associate the expression controls in the target expression control combination with at least one of the motion units in the style expression template; a bone skinning module, configured to skin the skeletal structure to generate the face of a virtual object; and an expression control module, configured to, in response to an adjustment operation on at least one of the expression controls, cause the information of the motion unit associated with the expression control to drive the bound bones to move, so as to control the face of the virtual object to produce an expression conforming to the target art style.
In another aspect, the present application further provides a computer device. The computer device includes a memory and one or more processors; the memory stores computer-readable instructions which, when executed by the processors, cause the one or more processors to implement the steps in the above facial expression processing method.
In another aspect, the present application further provides one or more non-volatile readable storage media. The computer-readable storage media store computer-readable instructions which, when executed by one or more processors, cause the one or more processors to implement the steps in the above facial expression processing method.
In another aspect, the present application further provides a computer program product. The computer program product includes computer-readable instructions which, when executed by a processor, implement the steps in the above facial expression processing method.
The details of one or more embodiments of the present application are set forth in the following drawings and description. Other features, objects and advantages of the present application will become apparent from the specification, the drawings and the claims.
Brief Description of the Drawings
In order to more clearly describe the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Figure 1 is an application environment diagram of a facial expression processing method in some embodiments;
Figure 2 is a schematic flowchart of a facial expression processing method in some embodiments;
Figure 3 is an interface operation diagram of model matching in some embodiments;
Figure 4 is a schematic diagram of motion units in some embodiments;
Figure 5 is a schematic diagram of motion units in some embodiments;
Figure 6 is a diagram of the relationship between expressions and motion units in some embodiments;
Figure 7 is a schematic diagram of facial muscles in some embodiments;
Figure 8 is a schematic diagram of motion units in some embodiments;
Figure 9 is a schematic diagram of motion units in some embodiments;
Figure 10 is a schematic diagram of expression parameters in some embodiments;
Figure 11 is a schematic diagram of motion intensity in some embodiments;
Figure 12 is a schematic diagram of the principle by which expression controls drive bone movement in some embodiments;
Figure 13 is a schematic diagram of expression controls in some embodiments;
Figure 14 is a schematic diagram of expression controls in some embodiments;
Figure 15 is a schematic diagram of expression controls in some embodiments;
Figure 16 is a schematic diagram of expression controls in some embodiments;
Figure 17 is a schematic diagram of a template determination window in some embodiments;
Figure 18 is an effect diagram after binding in some embodiments;
Figure 19 is an interface diagram of skinning processing in some embodiments;
Figure 20 is an interface diagram of generating the face of a virtual object in some embodiments;
Figure 21 is a schematic diagram of adjusting the expression of a virtual object's face in some embodiments;
Figure 22 is a schematic diagram of displaying the face of a virtual object and expression controls in some embodiments;
Figure 23 is a schematic diagram of the principle by which style expression templates cope with different expressive tension in some embodiments;
Figure 24 is an interface diagram of updating a motion unit in some embodiments;
Figure 25 is an interface diagram of updating a motion unit in some embodiments;
Figure 26 is a schematic diagram of freezing and activating a motion unit in some embodiments;
Figure 27 is an interface diagram of exporting an expression template in some embodiments;
Figure 28 is an interface diagram of importing an expression template in some embodiments;
Figure 29 is an interface diagram of updating an expression template in some embodiments;
Figure 30 is a schematic flowchart of a facial expression processing method in some embodiments;
Figure 31 is a structural block diagram of a facial expression processing apparatus in some embodiments;
Figure 32 is an internal structure diagram of a computer device in some embodiments;
Figure 33 is an internal structure diagram of a computer device in some embodiments.
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请实施例提供的面部表情处理方法,可以应用于如图1所示的应用环境中。其中,终端102通过 网络与服务器104进行通信。数据存储系统可以存储服务器104需要处理的数据。数据存储系统可以集成在服务器104上,也可以放在云上或其他服务器上。
具体地,终端102可以确定属于目标美术风格的三维面部模型的骨骼结构,确定与目标美术风格匹配的风格表情模板,将风格表情模板中的运动单元与骨骼结构中相关的骨骼进行绑定,展示目标表情控件组合,将目标表情控件组合中的表情控件与风格表情模板中的至少一个运动单元进行关联,对骨骼结构进行蒙皮处理,生成虚拟对象面部,响应于针对至少一个表情控件的调整操作,使得与表情控件关联的运动单元的信息驱动相绑定的骨骼运动,以控制虚拟对象面部产生符合目标美术风格的表情,从而可以生成具有表情的虚拟对象面部。在得到具有表情的虚拟对象面部的情况下,可以将具有表情的虚拟对象导出,导出后的虚拟对象可以存储到服务器中,例如,终端102可以将响应于针对具有表情的虚拟对象面部的导出操作,生成存储该具有表情的虚拟对象面部的文件,将该文件发送至服务器104,服务器104可以存储该文件,还可以将该文件发送至其他设备。虚拟对象面部为虚拟对象的面部。美术风格包括但不限于是卡通风格、手游风格中的至少一种。
在一些实施例中,终端102上可以安装有美术软件,本申请提供的面部表情处理方案可以是终端102利用该美术软件实现的。该美术软件可以为具有不同美术风格的虚拟对象生成适合该美术风格的表情,例如,该美术软件中可以包括自定义的美术工具,该自定义的美术工具是发明人自主研发的成果,该自定义的美术工具例如可以在美术软件中开发出的插件。美术软件包括但不限于是Autodesk Maya或3Dmax。该自定义的美术工具可以用于为属于美术风格的虚拟对象生成适合所属美术风格的表情,例如,在虚拟对象属于卡通风格的情况下,可以采用该美术工具为该虚拟对象生成符合卡通风格的表情,在虚拟对象属于手游风格的情况下,可以采用该美术工具为该虚拟对象生成符合手游风格的表情。
终端102可以但不限于是各种台式计算机、笔记本电脑、智能手机、平板电脑、物联网设备和便携式可穿戴设备,物联网设备可为智能音箱、智能电视、智能空调、智能车载设备等。便携式可穿戴设备可为智能手表、智能手环、头戴设备等。服务器104可以用独立的服务器或者是多个服务器组成的服务器集群来实现。
在一些实施例中,如图2所示,提供了一种面部表情处理方法,该方法可以由终端或服务器执行,还可以由终端和服务器共同执行,以该方法应用于图1中的终端102为例进行说明,包括以下步骤:
步骤202,确定属于目标美术风格的三维面部模型的骨骼结构。
其中,美术风格用于表征在虚拟对象在美术上的特点,例如,美术风格包括但不限于是卡通风格或二次元风格中的至少一种,卡通风格的虚拟对象的特点与二次元风格的虚拟对象的特点有较大差别。目标美术风格可以是任意的美术风格。
虚拟对象是虚拟场景中的对象,虚拟场景,是指计算机通过数字通讯技术勾勒出的数字化场景,虚拟场景包括但不限于是二维虚拟场景或三维虚拟场景中的至少一种。虚拟场景例如可以是游戏中的场景、VR(Virtual Reality,虚拟现实)中的场景或动漫中的场景。虚拟对象可以具有面部的虚拟对象,例如,虚拟对象可以为虚拟场景中的虚拟人物、虚拟动物或虚拟玩偶中的任意一种。
三维面部模型是用于表征虚拟对象面部的表面的三维网格模型(3D mesh)。虚拟对象面部是指虚拟对象的面部。其中,三维网格模型是由一系列基本几何图形组成的多边形网格,用于模拟虚拟对象的表面。基本几何图形是构成三维网格模型的最小的形状,基本几何图形例如可以为三角形或四边形等中的任意一种,基本几何图形包括多个顶点,例如,三角形包括3个顶点,四边形包括4个顶点。
三维面部模型的骨骼结构是指虚拟对象的面部的骨骼结构,三维面部模型的骨骼结构包括鼻骨、上颌骨、腭骨、泪骨、下鼻甲、犁骨或下颌骨等中的至少一种。三维面部模型的骨骼结构可以是根据标准骨骼结构生成的,例如可以将标准骨骼结构中的骨骼与三维面部模型中的顶点进行绑定,与三维面部模型中的 顶点所绑定的各骨骼,构成该三维面部模型的骨骼结构。其中,标准骨骼结构是指标准三维面部模型所绑定的骨骼结构,标准三维面部模型是预先生成的三维面部模型,并且标准三维面部模型中的顶点与标准骨骼结构中的骨骼预先进行了绑定,标准三维面部模型中的每个顶点与标准骨骼结构中的至少一个骨骼绑定。本申请实施例中,该属于目标美术风格的三维面部模型可以称为目标三维面部模型。
具体地,终端可以响应于触发展示目标三维面部模型的操作,展示目标三维面部模型,在检测模型匹配操作的情况下,基于目标三维面部模型对标准三维面部模型进行仿射变换,得到仿射变换后的三维面部模型。其中,仿射变换用于改变标准三维面部模型中的顶点的位置,从而使得标准三维面部模型在形状和位置上与目标三维面部模型接近,以使得仿射变换后的三维面部模型的形状与目标三维面部模型的形状基本上一致,并且,仿射变换后的三维面部模型所包围的空间与目标三维面部模型所包围的空间基本上相同,从而仿射变换后的三维面部模型,可以表征虚拟对象面部的表面。目标三维面部模型包括的顶点的数量比仿射变换后的三维面部模型包括的顶点的数量多。
在一些实施例中,标准三维面部模型具有多个关键点,多个是指至少两个,标准三维面部模型的关键点称为标准关键点,关键点代表面部上的关键部位的位置的点,关键部位包括但不限于是眉毛、眼睛、鼻子或下巴等中的至少一个。同样的,目标三维面部模型具有多个关键点,目标三维面部模型的关键点称为目标关键点。目标关键点可以与标准关键点一一对应,具有对应关系的目标关键点和标准关键点代表同一关键部位的位置的点,例如,目标关键点A为代表目标三维面部模型的鼻子的位置的点,标准关键点B为代表标准三维面部模型的鼻子的位置的点,则目标关键点A与标准关键点B对应。目标三维面部模型的鼻子即目标三维面部模型对应的虚拟对象面部的鼻子。在终端检测到模型匹配操作的情况下,终端可以获取目标三维面部模型的目标关键点,基于目标关键点与对应的标准关键点之间的坐标变换关系,得到仿射变换矩阵,仿射变换矩阵用于将标准关键点的坐标变换到该标准关键点对应的目标关键点的坐标,还可以用于将目标关键点的坐标变换到该目标关键点对应的标准关键点的坐标,其中,模型匹配操作用于触发终端生成仿射变换后的三维面部模型。
在一些实施例中,终端在检测模型匹配操作的情况下,基于目标三维面部模型对标准三维面部模型进行仿射变换,得到仿射变换后的三维面部模型,终端可以在展示目标三维面部模型的情况下,可以展示仿射变换后的三维面部模型。例如,模型匹配操作为图3中的针对控件302即“模型适配”控件的点击操作,终端在检测到对“模型适配”控件的点击操作的情况下,生成仿射变换后的三维面部模型,并展示对应于目标三维面部模型展示仿射变换后的三维面部模型,图3中,实线部分代表目标三维面部模型,例如304指示的部分,虚线和点代表仿射变换后的三维面部模型,例如306指示的部分。
在一些实施例中,由于仿射变换后的三维面部模型中的顶点,是通过将标准三维面部模型的顶点的坐标进行变换后得到的顶点,例如,标准三维面部模型有100个顶点,则将这100个顶点分别进行仿射变换,即改变这100个顶点的坐标,改变坐标后的这100个顶点即为构成仿射变换后的三维面部模型的各个顶点,而标准三维面部模型中的顶点与标准骨骼结构中的骨骼绑定,因此,终端可以将标准骨骼结构中的骨骼进行移动,将移动的骨骼与仿射变换后的三维面部骨骼中的顶点进行绑定,从而得到仿射变换后的三维面部骨骼对应的骨骼结构中的骨骼,例如,标准三维面部模型中的顶点A1与标准骨骼结构中的骨骼1绑定,仿射变换后的三维面部模型中的顶点B1是对顶点A1的坐标进行仿射变换即移动后得到的顶点,故可以将骨骼1的坐标进行移动,将移动后的骨骼1确定为顶点B1绑定的骨骼。其中,骨骼的移动方式是根据所绑定的顶点的移动确定的,以使得移动前后,骨骼与顶点之间的相对关系保持不变。
在一些实施例中,终端在生成仿射变换后的三维面部模型后,可以基于仿射变换后的三维面部模型中的顶点所绑定的骨骼,确定目标三维面部模型中的顶点所绑定的骨骼,得到目标三维面部模型的骨骼结构。目标三维面部模型中的顶点与目标三维面部模型的骨骼结构中的至少一个骨骼绑定。具体地,目标三维面 部模型中的顶点可以称为目标顶点,仿射变换后的三维面部模型中的顶点可以称为仿射顶点,针对目标三维面部模型中的每个目标顶点,终端可以从仿射变换后的三维面部模型中确定与目标顶点最近的仿射顶点,将该仿射顶点所绑定的骨骼确定为该目标顶点所绑定的骨骼,各目标顶点所绑定的骨骼形成目标三维面部模型的骨骼结构。
步骤204,确定与目标美术风格匹配的风格表情模板,将风格表情模板中的运动单元与骨骼结构中相关的骨骼进行绑定。
其中,风格表情模板是适用于美术风格的表情模板。不同的美术风格匹配不同的风格表情模板。美术风格匹配的风格表情模板是预先生成的,且美术风格匹配的风格表情模板是可以被修改的。风格表情模板中包括多个运动单元分别对应的,多个是指至少两个,例如可以为200个。每个运动单元(AU,Action Unit)可以具有编号和名称,不同的运动单元的编号不同,不同的运动单元的名称不同,如图4和图5所示,展示了多个运动单元。图5中的AU后面跟的数字为AU的编号,例如AU11代表编号为11的AU。多个运动单元的组合可以代表一种表情,即表情可以通过多个运动单元的组合来表征。如图6所示,展示了多种表情分别对应的运动单元的组合,例如,“悲伤”对应的运动单元的组合中包括编号为1、4、15以及23的AU。运动单元可以标准化的测量面部肌肉,面部肌肉例如可以是图7中各字母在位置处的肌肉。面部可以划分为上面部和下面部,故运动单元可以划分为上面部运动单元和下面部运动单元,如图8所示,展示了部分的上面部运动单元和部分的下面部运动单元。以上举例中的AU为基础AU,由于基础的AU的精度不够精细,故可以对AU进行拆分或增加新的AU,比如可以将基础AU拆分为左边和右边,或者拆分为左边和中间,还可以拆分为中间和右边,从而得到精度更精细的AU。如图9所示,展示了对多个AU进行拆分的结果,图9中的“R”为right的缩写,代表右边,“L”为left的缩写,代表左边,“M”为middle的缩写,代表中间,图9中,按照左边和右边对AU1划分得到AU1L和AU1R,按照中间和左边对AU10划分得到AU10M和AU10L。风格表情模板中的运动单元可以包括划分所得到的运动单元,从而更加科学全面的描述肌肉走向与强度,以提升表情的精度。
每个运动单元分别具有信息,运动单元的信息包括运动强度和运动方向,运动强度也可以称为动作强度,运动方向也可以称为动作方向。风格表情模板中可以包括运动单元的信息,运动单元的信息可以是预先设置的且可以被修改的。风格表情模板中可以采用表情参数来代表一个运动单元,例如,AU1L指的是左边眉毛的内侧抬眉动作,在风格表情模板中采用一个表情参数就可以描述这个运动单元的信息,不同的运动单元采用不同的表情参数代表,表情参数也可以称为动画参数。如图10所示,展示了多个表情参数,每个表情参数代表一个运动单元,例如,表情参数00代表一个运动单元。运动强度可以反映不同的运动程度,运动强度越大,则运动程度越大,运动强度的取值范围可以预设,例如可以为0%-100%,以运动单元为控制嘴巴张开的运动单元为例,如图11所示,展示了A、B、C、D和E分别代表不同的运动强度的取值范围,A代表0-20%,B代表20%-40%,C代表40%-65%,D代表65%-85%,E代表85%-100%,从图中可以看出,运动强度越大,嘴巴张开的程度越大。绑定可以理解为在运动单元与骨骼之间建立联系,并可以通过运动单元控制所绑定的骨骼进行运动。
具体地,终端在确定了风格表情模板的情况下,可以将风格表情模板中的运动单元与目标三维面部模型的骨骼结构中的骨骼进行绑定。风格表情模板中可以预先规定了可以与每个运动单元分别绑定的骨骼,例如规定了运动单元AU1L与骨骼1和骨骼2绑定,在为运动单元绑定骨骼时,终端可以从目标三维面部模型的骨骼结构中确定可以与该运动单元绑定的骨骼,将确定出的骨骼与该运动单元进行绑定。绑定后,终端可以基于运动单元控制该运动单元所绑定的骨骼运动,从而控制虚拟对象面部产生符合目标美术风格的表情。
步骤206,展示目标表情控件组合,将目标表情控件组合中的表情控件与风格表情模板中的至少一个 运动单元进行关联。
其中,表情控件组合是由多个表情控件组成的集合,表情控件是用于控制表情的控件。目标表情控件组合是与目标业务场景所匹配的表情控件组合。目标业务场景为目标虚拟对象所属的业务场景。目标虚拟对象是指目标三维面部模型代表的是目标虚拟对象的面部。业务场景包括但不限于是手游、端游或数字人游戏中的至少一个,不同的业务场景的复杂程度不同。相同的业务场景下可以具有多种美术风格,故确定与目标美术风格匹配的风格表情模板的过程可以包括:确定与目标业务场景下的目标美术风格匹配的风格表情模板。业务场景不同,所匹配的表情控件组合可以相同也可以不同。业务场景匹配的表情控件组合是预设的。“将目标表情控件组合中的表情控件与风格表情模板中的至少一个运动单元进行关联”可以是在展示目标表情控件组合之前执行的,也可以是在展示目标表情控件组合之后执行的。
将表情控件与运动单元进行关联后,针对表情控件的操作能够触发该表情控件相关联的运动单元所绑定的骨骼进行运动。如图12所示,展示了表情控件与骨骼运动之间的关系,控制器指的是表情控件,表情控件通过运动单元的信息驱动骨骼运动,从而控制虚拟对象面部的表情。表情控件也可以称为控制器。
表情控件组合可以包括多种类型的表情控件，例如可以包括多个第一表情控件、多个第二表情控件和多个快捷表情控件，多个是指至少两个，该多个第一表情控件用于在全局上调整虚拟对象面部的表情，通过第一表情控件可以快速调出表情姿态，故第一表情控件也可以称为主控。第二表情控件用于调整虚拟对象面部的表情的细节，故第二表情控件也可以称为次控。第一表情控件可以是展示在虚拟对象面部之外的位置的，第二表情控件可以是展示在虚拟对象面部处的。快捷表情控件匹配有指定表情，指定表情包括但不限于是指定的情绪或指定的口型中的至少一种，快捷表情控件用于调整虚拟对象面部的表情，使得虚拟对象面部生成该快捷表情控件所匹配的指定表情。在快捷表情控件匹配的指定表情为情绪的情况下，快捷表情控件可以称为情绪控件，在快捷表情控件匹配的指定表情为口型的情况下，快捷表情控件可以称为口型控件。其中，情绪包括但不限于是微笑、愤怒、惊讶、恐惧、厌恶、悲伤、蔑视或生气中的至少一种。口型包括但不限于是AAA、AHH等口型。如图13所示，展示了4种类型的表情控件，其中，区域1302中展示的表情控件为主控，区域1304中展示的表情控件为口型控件，区域1306中展示的控件为次控，区域1308中展示的控件为情绪控件，例如微笑对应的情绪控件。将表情控件组合划分为主控、次控、口型控件和情绪控件，体现了控件的分层设计，各控件可以混合使用，使得调节出的表情精度得到提升。
第一表情控件的数量可以是根据目标业务场景的场景复杂程度确定的，例如，第一表情控件的数量与目标业务场景的场景复杂程度成正相关关系，目标业务场景的场景越复杂，则第一表情控件的数量越多。例如，手游的复杂程度小于端游，端游的复杂程度小于数字人场景，则手游的主控数量少于端游的主控数量，端游的主控数量少于数字人场景的主控数量。例如，手游的主控为38个，端游的主控为54个，数字人角色的主控为75个，如图14所示，展示了手游的主控，如图15所示，展示了端游的主控，如图16所示，展示了数字人角色的各类表情控件，其中，区域1602和区域1608为主控，区域1604为情绪控件，区域1606为口型控件。
具体地,目标三维面部模型属于目标虚拟对象,目标虚拟对象为目标业务场景中的虚拟对象,终端可以展示各业务场景下的美术风格,包括目标业务场景下的目标美术风格,终端在确定目标业务场景下的目标美术风格被选中的情况下,确定与目标美术风格匹配的风格表情模板以及与目标业务场景匹配的目标表情控件组合。终端可以将目标表情控件组合中的表情控件与风格表情模板中的至少一个运动单元进行关联,例如,终端中可以预先存储了表情控件与运动单元之间的关联关系,例如规定了表情控件1与运动单元1关联,则终端可以根据预先存储的关联关系,将目标表情控件组合中的表情控件与风格表情模板中的至少一个运动单元进行关联。
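作为示意，下面给出一段依据预先存储的关联关系、将目标表情控件组合中的表情控件与风格表情模板中的运动单元进行关联的Python代码草图。关联关系的具体内容为示意性假设。

```python
# 示意：按照预先存储的“表情控件-运动单元”关联关系，为目标表情控件组合中的
# 每个表情控件关联风格表情模板中的至少一个运动单元。
def associate_controls(control_ids: list, stored_relation: dict, template_aus: set) -> dict:
    """control_ids: 目标表情控件组合中各表情控件的标识；
    stored_relation: 预先存储的 {表情控件标识: [运动单元编号, ...]}；
    template_aus: 风格表情模板中实际包含的运动单元编号集合；
    返回 {表情控件标识: [运动单元编号, ...]}，只保留模板中存在的运动单元。"""
    return {cid: [au for au in stored_relation.get(cid, []) if au in template_aus]
            for cid in control_ids}
```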
在一些实施例中，终端在确定与目标美术风格匹配的风格表情模板的情况下，将风格表情模板中的运动单元与骨骼结构中相关的骨骼进行绑定，并确定与目标美术风格匹配的目标表情控件组合，展示目标美术风格匹配的目标表情控件组合。
在一些实施例中，终端响应于绑定操作，展示模板确定窗口，模板确定窗口用于确定与目标美术风格匹配的风格表情模板，终端响应于在模板确定窗口触发的选择完成操作，得到与目标美术风格匹配的风格表情模板，选择完成操作是表征与目标美术风格匹配的风格表情模板已选择完成的操作，将风格表情模板中的运动单元与骨骼结构中相关的骨骼进行绑定，并确定与目标美术风格匹配的目标表情控件组合，展示目标美术风格匹配的目标表情控件组合。模板确定窗口例如为图17中的窗口。绑定操作例如可以是对图3中的控件308即控件“自动绑定”的点击操作，选择完成操作例如可以是对图17中的控件1702的点击操作，终端在接收到针对控件308的点击操作的情况下，展示图17中的模板确定窗口，在接收到针对控件1702的点击操作的情况下，展示图18，图18中左边窗口中展示了表情控件。
步骤208,对骨骼结构进行蒙皮处理,生成虚拟对象面部。
其中,蒙皮处理是指将面部皮肤模型中的顶点与骨骼结构中的骨骼进行绑定的处理。面部皮肤模型是代表面部皮肤的三维网格模型。虚拟对象面部是进行蒙皮处理后得到的结果。面部皮肤模型可以是预先生成的。
具体地,对于面部皮肤模型中的每个顶点,终端可以从骨骼结构中确定与该顶点距离最近的骨骼,将该骨骼与该顶点绑定,从而将面部皮肤模型中的顶点与骨骼结构中的骨骼进行绑定,实现了蒙皮处理,蒙皮处理后得到虚拟对象面部。
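作为示意，下面给出一段将面部皮肤模型中的顶点与骨骼结构中距离最近的骨骼进行绑定（即蒙皮处理）的Python代码草图。其中假设以骨骼的位置坐标参与距离计算，并假设可以使用numpy库。

```python
# 示意：蒙皮处理——将面部皮肤模型中的每个顶点与骨骼结构中距离最近的骨骼绑定。
import numpy as np

def skin_binding(skin_vertices: np.ndarray, bone_positions: np.ndarray, bone_ids: list) -> list:
    """skin_vertices: (V, 3) 面部皮肤模型顶点坐标；
    bone_positions: (B, 3) 骨骼结构中各骨骼的位置；bone_ids: 各骨骼的标识；
    返回长度为 V 的列表，第 i 项为第 i 个皮肤顶点所绑定的骨骼标识。"""
    # 计算每个顶点到每个骨骼的距离，取距离最近的骨骼
    dists = np.linalg.norm(skin_vertices[:, None, :] - bone_positions[None, :, :], axis=-1)
    nearest = dists.argmin(axis=1)
    return [bone_ids[i] for i in nearest]
```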
在一些实施例中，终端可以展示蒙皮处理控件以及面部生成控件，响应于针对蒙皮处理控件的触发操作，将选中的面部皮肤模型与骨骼结构进行绑定，响应于针对面部生成控件的触发操作，生成并展示虚拟对象面部，虚拟对象面部的皮肤为与骨骼结构绑定的面部皮肤模型所代表的皮肤。蒙皮处理控件例如为图19中的控件1902即控件“保存蒙皮”，面部生成控件例如为图19中的控件1904即控件“自动生成”，终端在接收到对控件1904的点击操作的情况下，生成并展示虚拟对象面部，例如生成并展示图20中的虚拟对象面部。生成的虚拟对象面部是可以被美术师操控的资产，虚拟对象面部可以通过资产文件保存，终端可以提供下载资产文件的服务，通过表情控件可以控制虚拟对象面部按照美术师的需求进行变化。
步骤210,响应于针对至少一个表情控件的调整操作,使得与表情控件关联的运动单元的信息驱动相绑定的骨骼运动,以控制虚拟对象面部产生符合目标美术风格的表情。
其中，表情控件的调整操作，用于改变表情控件的属性值，表情控件的属性值不同，则对表情的调整程度也不同，属性值越大，则对表情的调整程度越大，例如当表情控件为惊讶的情绪控件的情况下，表情控件的属性值越大，则惊讶的程度越大。表情控件可以是任意形式的控件，包括但不限于是滑动式的控件或移动式的控件，例如，第一表情控件和第二表情控件可以为移动式的控件，口型控件和情绪控件可以为滑动式的控件。滑动式的控件的调整操作是对控件的滑动操作，移动式的控件的调整操作是移动控件的操作。
具体地,在表情控件发生变化的情况下,会使得与表情控件相关联的运动单元所绑定的骨骼的位置发生变化,而由于骨骼与面部皮肤模型中的顶点是绑定的,故骨骼的运动会带动面部皮肤模型的顶点的位置发生变化,从而使得虚拟对象面部的表情发生变化,以产生符合目标美术风格的表情。因此,终端响应于针对表情控件的调整操作,基于与表情控件关联的运动单元的信息改变与表情控件关联的运动单元所绑定的骨骼的位置,从而实现骨骼运动,根据骨骼的运动改变面部皮肤模型中顶点的位置,以实现控制虚拟对象面部产生符合目标美术风格的表情。如图21所示,展示了某种风格的游戏中的虚拟对象的面部,控制一个嘴巴的控件,嘴角表情被驱动,表情张力符合游戏风格设定,不夸张也不拘谨。
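作为示意，下面给出一段响应表情控件的调整操作、按控件属性值与关联运动单元的信息移动所绑定骨骼、再带动皮肤顶点的Python代码草图。其中“位移=属性值×运动强度×运动方向”的计算方式以及数据结构均为示意性假设，并非本申请限定的驱动方式。

```python
# 示意：按表情控件调整后的属性值与关联运动单元的信息移动所绑定的骨骼，
# 再按骨骼的位移带动所绑定的皮肤顶点，从而改变虚拟对象面部的表情。
import numpy as np

def drive_expression(attr_value: float, au_infos: list, bone_positions: dict,
                     skin_vertices: np.ndarray, vertex_bone: list) -> np.ndarray:
    """attr_value: 调整后的属性值（0.0~1.0）；
    au_infos: 控件关联的各运动单元信息，每项含 bones（骨骼名列表）、direction、intensity；
    bone_positions: {骨骼名: 位置(3,)}，会被原地更新；
    vertex_bone[i]: 第 i 个皮肤顶点绑定的骨骼名；返回更新后的皮肤顶点坐标。"""
    offsets = {}
    for au in au_infos:
        # 骨骼位移 = 属性值 × 运动强度 × 运动方向（示意性计算方式）
        delta = attr_value * au["intensity"] * np.asarray(au["direction"])
        for name in au["bones"]:
            bone_positions[name] = bone_positions[name] + delta
            offsets[name] = offsets.get(name, np.zeros(3)) + delta
    # 皮肤顶点随其绑定骨骼的位移而运动
    moved = skin_vertices.copy()
    for i, name in enumerate(vertex_bone):
        if name in offsets:
            moved[i] += offsets[name]
    return moved
```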
在一些实施例中，终端可以在展示虚拟对象面部的情况下，展示表情控件以及运动单元的信息。如图22所示，在展示虚拟对象面部的情况下，展示了表情控件以及运动单元的信息。
上述面部表情处理方法中,确定属于目标美术风格的三维面部模型的骨骼结构,确定与目标美术风格匹配的风格表情模板,将风格表情模板中的运动单元与骨骼结构中相关的骨骼进行绑定,展示目标表情控件组合,将目标表情控件组合中的表情控件与风格表情模板中的至少一个运动单元进行关联,对骨骼结构进行蒙皮处理,生成虚拟对象面部,响应于针对至少一个表情控件的调整操作,使得与表情控件关联的运动单元的信息驱动相绑定的骨骼运动,以控制虚拟对象面部产生符合目标美术风格的表情,从而对于每种美术风格,可以快捷对虚拟对象面部的表情进行处理,以生成符合对应美术风格的表情,从而提升了面部表情处理效率。
在一些实施例中，确定与目标美术风格匹配的风格表情模板，将风格表情模板中的运动单元与骨骼结构中相关的骨骼进行绑定包括：展示多种美术风格分别对应的对象类型；多种美术风格包括目标美术风格；在目标美术风格下的目标对象类型为选中的情况下，确定目标对象类型对应的面部骨骼信息，以及确定与目标美术风格匹配的风格表情模板；基于面部骨骼信息将风格表情模板中的运动单元与骨骼结构中相关的骨骼进行绑定。
其中，多种美术风格是指至少两种美术风格，且该多种美术风格包括目标美术风格。对象类型是指虚拟对象所属的类型，对象类型可以是按照虚拟对象在虚拟场景中的身份划分的，以虚拟场景为游戏场景为例，则对象类型可以划分为主角和NPC(non-player character,非玩家角色)，当然还可以按照其他方式划分，这里不做限定。每种美术风格可以对应有至少一种对象类型，例如可以包括两种对象类型，以虚拟场景为游戏且美术风格为卡通风格为例，卡通风格对应的对象类型可以包括主角和NPC。目标对象类型可以为目标美术风格下的对象类型中的任意一种。如图23中的(a)所示，展示了多种美术风格分别对应的对象类型，图23中“游戏项目1”、“卡通游戏”、“默认游戏”、“数字人游戏”分别代表不同美术风格的游戏，从而代表了不同的美术风格，“主角”和“非玩家角色”为对象类型，可以看出，不同的美术风格分别对应有至少一个对象类型。
面部骨骼信息可以包括虚拟对象面部的骨骼数量或骨骼标识中的至少一种。骨骼标识例如可以为骨骼名称,如图23中的(b)所示,展示了多种美术风格下的虚拟对象面部的骨骼数量。每种对象类型的面部骨骼信息为预先设置的。在美术风格不同的情况下,同一种对象类型所对应的面部骨骼信息可以不同,也可以相同。
具体地，终端在确定属于目标美术风格的三维面部模型的骨骼结构之后，在检测到绑定操作的情况下，展示模板确定窗口，在模板确定窗口中展示多种美术风格分别对应的对象类型，包括目标美术风格对应的对象类型，在目标美术风格下的目标对象类型被选中的情况下，终端响应于选择完成操作，确定目标美术风格下的目标对象类型对应的面部骨骼信息、并确定与目标美术风格匹配的风格表情模板，基于面部骨骼信息将风格表情模板中的运动单元与骨骼结构中相关的骨骼进行绑定。绑定操作用于触发展示模板确定窗口，绑定操作例如可以是对图3中的控件308即控件“自动绑定”的点击操作，模板确定窗口如图17所示，在终端检测到对控件308的点击操作的情况下，展示图17中的模板确定窗口。图17中“游戏项目1”、“卡通游戏”、“默认游戏”、“数字人游戏”分别代表不同美术风格的游戏，从而代表了不同的美术风格，“主角”和“非玩家角色”为对象类型，可以看出，不同的美术风格分别对应有至少一个对象类型。选择完成操作例如可以是对图17中的控件1702的点击操作。
在一些实施例中，终端中可以预先存储有美术风格与风格表情模板之间的映射关系，还可以预先存储有美术风格下的对象类型与面部骨骼信息之间的映射关系，美术风格与风格表情模板之间的映射关系是预先生成的，美术风格下的对象类型与面部骨骼信息之间的映射关系也是预先生成的。在目标美术风格下的目标对象类型为选中的情况下，终端可以根据美术风格与风格表情模板之间的映射关系，确定与目标美术风格匹配的风格表情模板，并可以基于美术风格下的对象类型与面部骨骼信息之间的映射关系，确定目标对象类型对应的面部骨骼信息。
在一些实施例中,风格表情模板中可以预先规定了每个运动单元分别对应的关联骨骼,运动单元的关联骨骼是指实现运动单元的动作所需要依赖的骨骼。终端可以基于面部骨骼信息,确定目标对象类型的虚拟对象的面部骨骼的骨骼标识。在面部骨骼信息中包括目标对象类型的虚拟对象的面部骨骼的骨骼标识的情况下,终端可以从面部骨骼信息中提取得到骨骼标识,当面部骨骼信息包括骨骼数量的情况下,终端可以获取该骨骼数量对应的各骨骼标识,骨骼数量对应的各骨骼标识是预先生成的,骨骼标识例如可以是骨骼名称,终端可以将骨骼数量对应的各骨骼标识,确定为目标对象类型的虚拟对象的面部骨骼的骨骼标识。终端可以从目标三维面部模型的骨骼结构中确定骨骼标识所标识的骨骼,在确定出的骨骼为风格表情模板中的某个运动单元的关联骨骼的情况下,将确定出的骨骼与该运动单元进行绑定,从而实现了将风格表情模板中的运动单元与骨骼结构中相关的骨骼进行绑定。绑定后,可以基于运动单元驱动绑定的骨骼进行运动,以产生符合对应的美术风格的表情,风格表情模板不同,则风格表情模板中的至少一个运动单元的信息存在差别,故风格表情模板不同,所生成的表情也存在差异,如图23中的(c)所示,数字人与卡通人物所属的美术风格不同,故两者所匹配的风格表情模板也不同,从而同一种情绪,表情也存在差异,以生气为例,卡通人物的表情比数字人的表情较为夸张。
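作为示意，下面给出一段按照预先存储的映射关系确定风格表情模板与面部骨骼信息、并由面部骨骼信息解析出骨骼标识的Python代码草图。映射关系的组织方式与字段名均为示意性假设。

```python
# 示意：根据美术风格确定风格表情模板，根据美术风格下的对象类型确定面部骨骼信息，
# 再由面部骨骼信息（骨骼名称或骨骼数量）解析出骨骼标识。
def resolve_bones(style: str, object_type: str,
                  style_to_template: dict, type_to_bone_info: dict,
                  count_to_names: dict) -> tuple:
    """style_to_template: {美术风格: 风格表情模板}；
    type_to_bone_info: {(美术风格, 对象类型): 面部骨骼信息}，骨骼信息含骨骼数量或骨骼名称；
    count_to_names: {骨骼数量: [骨骼名称, ...]}，预先生成；
    返回 (风格表情模板, 骨骼名称列表)。"""
    template = style_to_template[style]
    bone_info = type_to_bone_info[(style, object_type)]
    if "bone_names" in bone_info:            # 骨骼信息中直接包含骨骼标识
        names = bone_info["bone_names"]
    else:                                    # 仅包含骨骼数量时，取该数量对应的预生成标识
        names = count_to_names[bone_info["bone_count"]]
    return template, names
```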
本实施例中，展示多种美术风格分别对应的对象类型，从而对于每种美术风格，可以基于本申请提供的面部表情处理方法，快捷地生成符合该美术风格的表情，提升了生成符合所需美术风格的表情的效率。
在一些实施例中,该方法还包括:在展示虚拟对象面部的情况下,展示风格表情模板中的运动单元的信息编辑区域;响应于在目标运动单元的信息编辑区域触发的信息编辑操作,对目标运动单元的信息进行更新;基于目标运动单元的更新后的信息,驱动与目标运动单元相绑定的骨骼运动,以更新虚拟对象面部的表情。
其中,目标运动单元可以为风格表情模板中的任一运动单元。目标运动单元的信息编辑区域是用于对目标运动单元的信息进行编辑的区域,信息编辑操作是对目标运动单元的信息进行编辑的操作。信息编辑区域可以是任意的实现信息编辑的区域,包括但不限于是输入框或滑动控件,信息编辑操作包括但不限于是向输入框中输入信息或对滑动控件的滑动操作。每个运动单元可以分别具有各自的信息编辑区域。如图24所示,每个运动单元可以分别具有各自的信息编辑区域,控件2402以及控件2404为运动单元01的两个信息编辑区域,控件2402为输入框,控件2404为滑动控件。
具体地，终端响应于模板编辑操作，展示风格表情模板的编辑区域，并在风格表情模板的编辑区域中展示运动单元的信息编辑区域。模板编辑操作可以是在美术工具中触发的，例如，美术工具可以提供模板编辑入口，模板编辑操作是针对模板编辑入口的触发操作，终端在展示美术工具中的模板编辑入口的情况下，响应于针对模板编辑入口的触发操作，展示风格表情模板的编辑区域。模板编辑入口例如为图24中的控件2400即控件“表情编辑器”，在终端检测到针对控件2400的点击操作的情况下，展示右侧的风格表情模板的编辑区域2406，并在编辑区域2406中展示运动单元的信息编辑区域，例如展示运动单元01的两个信息编辑区域，即展示控件2402以及控件2404。
在一些实施例中,终端在展示美术工具中的模板编辑入口的情况下,响应于针对模板编辑入口的触发操作,在展示虚拟对象面部的情况下,展示风格表情模板的编辑区域,即在展示虚拟对象面部的情况下,展示风格表情模板中的运动单元的信息编辑区域。如图25所示,左边的窗口展示了虚拟对象面部,右边的窗口展示了运动单元的信息编辑区域。
在一些实施例中,信息编辑区域存在冻结状态和激活状态两种状态,在信息编辑区域处于冻结状态的 情况下,终端停止响应于针对信息编辑区域的操作,例如停止响应于信息编辑操作,在信息编辑区域处于激活状态的情况下,终端可以响应于针对信息编辑区域的操作,例如响应于信息编辑操作。冻结状态也可以理解为退出编辑模式,激活状态也可以理解为进入编辑模式。在目标运动单元处于冻结状态的情况下,终端可以展示目标运动单元的激活控件,在终端检测到对激活控件的触发操作时,将激活控件更新为冻结控件,并将目标运动单元的信息编辑区域更新为可编辑的模式,从而终端可以响应于针对信息编辑区域的触发操作例如点击操作。在目标运动单元处于激活状态的情况下,终端可以展示目标运动单元的冻结控件,在终端检测到对冻结控件的触发操作时,将冻结控件更新为激活控件,并将目标运动单元的信息编辑区域更新为不可编辑的模式,从而终端停止响应于针对信息编辑区域的触发操作例如点击操作。如图26所示,第一行中运动单元01的信息编辑区域处于可编辑的模式,即进入编辑模式,控件“冻结数据”为冻结控件,第二行中运动单元01的信息编辑区域处于不可编辑的模式,即退出编辑模式,控件“激活数据”为激活控件。本实施例中,在需要对运动单元的信息进行编辑的情况下,将信息编辑区域的状态由冻结状态更新为激活状态,以使得信息编辑区域进入编辑模式,从而可以提供对运动单元的信息进行编辑的功能,在不需要对运动单元的信息进行编辑的情况下,保持信息编辑区域的状态为冻结状态,以使得信息编辑区域处于不可编辑模式,从而可以避免了由于对信息编辑区域的误操作而导致信息被变更的情况。
本实施例中,在展示虚拟对象面部的情况下,展示风格表情模板中的运动单元的编辑区域,并基于运动单元的更新后的信息对虚拟对象面部的表情进行更新,从而在所确定的风格表情模板不满足需求的情况下,可以采用可视化的方式对风格表情模板中的运动单元的信息进行更新,以优化风格表情模板,从而优化表情,提高了面部表情处理的灵活度,从而提高了面部表情处理的效率。
在一些实施例中,该方法还包括:展示风格表情模板中的运动单元的骨骼更新区域;响应于在目标运动单元的骨骼更新区域触发的骨骼更新操作,对目标运动单元绑定的骨骼进行更新。
其中,骨骼更新区域可以为骨骼添加区域或骨骼删除区域。骨骼添加区域用于为运动单元绑定至少一个新的骨骼,骨骼删除区域用于解除运动单元与所绑定的一个或多个骨骼之间的绑定关系,多个是指至少两个。每个运动单元可以分别具有骨骼更新区域。骨骼更新操作是用于添加或删除与运动单元绑定的骨骼的操作,例如可以是针对骨骼更新区域的点击操作。
具体地,终端可以在虚拟对象面部显示目标三维面部模型的骨骼结构中目标运动单元未绑定的骨骼,并且终端可以响应于针对显示的骨骼的选择操作,将选择操作所选的骨骼显示为选中状态,终端检测到在目标运动单元的骨骼添加区域触发的骨骼添加操作的情况下,可以将处于选中状态的骨骼与目标运动单元进行绑定。目标运动单元例如可以为图25中的运动单元29,骨骼添加区域例如可以为图25中的控件2502即控件“添加骨骼”,当终端检测到对控件2502的点击操作时,将虚拟对象面部上处于选中状态的未与运动单元29绑定的骨骼与运动单元29进行绑定。“添加骨骼”对应的英文表示可以为addJoints。
在一些实施例中，终端可以在展示虚拟对象面部的情况下，展示风格表情模板中的运动单元的骨骼展示控件，骨骼展示控件用于触发展示对应的运动单元所绑定的骨骼，每个运动单元分别对应有骨骼展示控件，在终端接收到针对目标运动单元的骨骼展示控件的触发操作的情况下，可以在虚拟对象面部显示目标运动单元所绑定的骨骼。目标运动单元例如可以为图25中的运动单元29，骨骼展示控件例如可以为图25中的控件2504即控件“启用姿势”。“启用姿势”对应的英文表示可以为enablePose。
在一些实施例中,终端可以响应于针对显示的目标运动单元所绑定的骨骼的选择操作,将选择操作所选的骨骼显示为选中状态,终端检测到在目标运动单元的骨骼删除区域触发的骨骼删除操作的情况下,将处于选中状态的骨骼与目标运动单元解除绑定。
在一些实施例中，终端可以显示目标运动单元所绑定的骨骼，在检测到对显示的骨骼的位置变更操作的情况下，更新骨骼的位置，并随着骨骼位置的更新对虚拟对象面部的表情进行更新。每个运动单元可以分别对应有姿势记录控件，终端可以显示目标运动单元对应的姿势记录控件，目标运动单元的姿势记录控件，用于记录目标运动单元绑定的骨骼的更新后的位置。以目标运动单元为图25中的运动单元29为例，则姿势记录控件可以为图25中控件2506即控件“记录姿势”。“记录姿势”对应的英文表示可以为recordPose。
本实施例中,展示风格表情模板中的运动单元的骨骼更新区域,从而可以利用骨骼更新区域对运动单元所绑定的骨骼进行优化,提高表情控制的精准度。
在一些实施例中,信息编辑区域以及骨骼更新区域是展示在编辑窗口中的;该方法还包括:响应于针对编辑窗口的窗口关闭操作,展示表情模板导出触发控件;响应于针对表情模板导出触发控件的触发操作,导出更新表情模板;更新表情模板,是对风格表情模板中的至少一个运动单元的信息或绑定的骨骼进行更新后所得到的模板。
其中，编辑窗口是美术工具提供的窗口，编辑窗口是指显示风格表情模板的编辑区域的窗口，窗口关闭操作是触发关闭编辑窗口的操作。表情模板导出触发控件用于触发导出更新表情模板，更新表情模板，是对风格表情模板中的运动单元的信息进行更新后所得到的模板。导出是指将更新表情模板从包括该美术工具的美术软件内部复制到该美术软件之外进行存储，导出也可以理解为将对风格表情模板进行的更新进行存储。编辑窗口例如可以是图27中展示信息编辑区域以及骨骼更新区域的区域，窗口关闭操作例如可以为对图27中的控件2702即控件“×”的点击操作。
具体地,在对至少一个运动单元更新后,运动单元的更新包括运动单元的信息或绑定的骨骼中的至少一个的更新,终端检测到针对编辑窗口的窗口关闭操作,展示表情模板导出窗口,在表情模板导出窗口中展示表情模板导出触发控件,终端检测到对表情模板导出触发控件的触发操作例如点击操作后,生成更新表情模板,并展示更新表情模板。表情模板导出窗口例如为图27中的窗口2704,表情模板导出触发控件例如为窗口2704中的控件2706,终端检测到对控件2702的点击操作后,展示窗口2704,在检测到对窗口中的控件2706的点击操作后,展示更新表情模板2708。
在一些实施例中,终端可以将美术软件外部的风格表情模板导入到美术软件中,并将导入的风格表情模板作为指定美术风格所对应的表情模板。指定美术风格可以根据需要指定。例如,在对卡通风格下的主角进行表情控制的过程中,对卡通风格下的风格表情模板进行了更新,通过生成且导出了更新表情模板实现对更新后的风格表情模板的存储,将更新表情模板导入为卡通风格下的新的风格表情模板,以便于利用更新表情模板处理卡通风格下的其他角色的表情。如图28所示,点击控件2800即控件“导入表情模板”,然后确定要导入的表情模板2802,则可以将表情模板2802导入。如图29所示,“表情运动单元编辑器”为提供对运动单元进行编辑的功能的编辑器,表情运动单元编辑器为美术工具中的编辑器,表情运动单元编辑器得到更新表情模板后,可以作为新的风格表情模板导入到美术工具中,并可以再次的被编辑,从而可以实现对风格表情模板的不断优化,提高风格表情模板的精准度。
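作为示意，下面给出一段将更新表情模板导出为文件、再导入为指定美术风格对应的风格表情模板的Python代码草图。其中以JSON文件保存模板内容仅为示意性假设，实际可采用任意文件格式与路径。

```python
# 示意：把更新表情模板导出到美术软件之外保存，之后可再导入为指定美术风格的
# 新的风格表情模板，以便重复利用。
import json

def export_template(template: dict, path: str) -> None:
    """template 中记录各运动单元的信息及其绑定的骨骼（组织方式为示意性假设）。"""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(template, f, ensure_ascii=False, indent=2)

def import_template(path: str, style_to_template: dict, style: str) -> None:
    """将导入的表情模板作为指定美术风格所对应的风格表情模板。"""
    with open(path, "r", encoding="utf-8") as f:
        style_to_template[style] = json.load(f)
```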
本实施例中,导出更新表情模板,从而可以将对风格表情模板进行更新后的结果进行存储,使得可以重复的利用,提高了面部表情的处理效率。
在一些实施例中,三维面部模型属于目标业务场景中的虚拟对象,目标表情控件组合是与目标业务场景匹配的表情控件组合,目标表情控件组合包括多个第一表情控件,多个第一表情控件的控件数量,是基于目标业务场景的场景复杂程度确定的;多个第一表情控件用于在全局上调整虚拟对象面部的表情;响应于针对至少一个表情控件的调整操作,使得与表情控件关联的运动单元的信息驱动相绑定的骨骼运动,以控制虚拟对象面部产生符合目标美术风格的表情包括:响应于针对至少一个表情控件的调整操作,在所针对的表情控件为第一表情控件的情况下,确定第一表情控件的调整后的属性值;基于第一表情控件的调整后的属性值以及第一表情控件相关联的运动单元的信息,驱动相关联的运动单元所绑定的骨骼运动,以控制虚拟对象面部产生符合目标美术风格的表情。
其中,三维面部模型为代表目标业务场景中的虚拟对象的面部的三维网格模型。业务场景包括但不限于是手游、端游或数字人游戏中的至少一个,不同的业务场景的复杂程度不同。相同的业务场景下可以具有多种美术风格,故确定与目标美术风格匹配的风格表情模板的过程可以包括:确定与目标业务场景下的目标美术风格匹配的风格表情模板。
表情控件的调整操作,用于改变表情控件的属性值,表情控件的属性值不同,则对表情的调整程度也不同,属性值越大,则对表情的调整程度越大,例如当表情控件为惊讶的情绪控件的情况下,表情控件的属性值越大,则惊讶的程度越大。表情控件的属性值在调整表情控件的情况下发生变化。
具体地,终端在检测到针对第一表情控件的调整操作的情况下,确定第一表情控件的调整后的属性值,基于第一表情控件的调整后的属性值以及第一表情控件相关联的运动单元的信息,驱动相关联的运动单元所绑定的骨骼运动,以控制虚拟对象面部产生符合目标美术风格的表情。例如,终端可以根据调整后的属性值以及第一表情控件相关联的运动单元的信息,确定第一表情控件相关联的运动单元所绑定的骨骼的新的位置,将骨骼的位置更新为该新的位置,从而实现骨骼的运动,根据骨骼的运动控制骨骼绑定的面部皮肤模型中的顶点运动,从而实现了对虚拟对象面部的表情的控制。以图13中的第一表情控件为例,图13中区域1302中的第一表情控件为移动式的控件,通过移动第一表情控件可以实现对第一表情控件的调整,终端响应于针对图13中的第一表情控件的移动操作,可以对区域1302中的虚拟对象面部的表情进行更新。
本实施例中,由于第一表情控件的数量是根据场景复杂程度确定的,故对于简单的业务场景可以采用数量较少的第一表情控件,对于复杂的业务场景可以采用数量较多的第一表情控件,提高了不同的业务场景下的面部表情的处理效率。
在一些实施例中,目标表情控件组合还包括第二表情控件,第二表情控件用于调整虚拟对象面部的表情的细节;该方法还包括:在所针对的表情控件为第二表情控件的情况下,确定第二表情控件的调整后的属性值;基于第二表情控件的调整后的属性值以及第二表情控件相关联的运动单元的信息,驱动相关联的运动单元所绑定的骨骼运动,以调整虚拟对象面部的表情的细节。
具体地,每个第二表情控件用于控制单个骨骼进行运动,以控制面部肌肉动态变化从而产生表情。终端在检测到针对第二表情控件的调整操作的情况下,确定第二表情控件的调整后的属性值,基于第二表情控件的调整后的属性值以及第二表情控件相关联的运动单元的信息,驱动相关联的运动单元所绑定的骨骼运动,以调整虚拟对象面部的表情的细节。例如,终端可以根据调整后的属性值以及第二表情控件相关联的运动单元的信息,确定第二表情控件相关联的运动单元所绑定的骨骼的新的位置,将骨骼的位置更新为该新的位置,从而实现骨骼的运动,根据骨骼的运动控制骨骼绑定的面部皮肤模型中的顶点运动,从而实现了对虚拟对象面部的表情的控制。以图13中区域1306中的第二表情控件为例,区域1306中的第二表情控件为移动式的控件,通过移动第二表情控件可以实现对第二表情控件的调整,终端响应于针对区域1306中的第二表情控件的移动操作,可以对区域1306中的虚拟对象面部的表情的细节进行更新。
本实施例中,由于第二表情控件用于调整虚拟对象面部的表情的细节,从而提供了对虚拟对象面部的表情的细节进行调整的功能,提高了面部表情处理的精度。
在一些实施例中,目标表情控件组合包括快捷表情控件,快捷表情控件匹配有指定表情;响应于针对至少一个表情控件的调整操作,使得与表情控件关联的运动单元的信息驱动相绑定的骨骼运动,以控制虚拟对象面部产生符合目标美术风格的表情包括:响应于针对快捷表情控件的调整操作,获取快捷表情控件的调整后的属性值;基于调整后的属性值以及快捷表情控件相关联的运动单元的信息,驱动相关联的运动单元所绑定的骨骼运动,以控制虚拟对象面部产生快捷表情控件匹配的指定表情。
其中，快捷表情控件可以为多个，快捷表情控件匹配有指定表情，一种快捷表情控件用于调出一种指定表情，例如，在快捷表情控件匹配的指定表情为惊讶的情况下，快捷表情控件用于调出惊讶的表情。快捷表情控件包括但不限于是情绪控件或口型控件中的至少一个。
具体地,终端在检测到针对快捷表情控件的调整操作的情况下,确定快捷表情控件的调整后的属性值,基于快捷表情控件的调整后的属性值以及快捷表情控件相关联的运动单元的信息,驱动相关联的运动单元所绑定的骨骼运动,以控制虚拟对象面部产生快捷表情控件匹配的指定表情。例如,终端可以根据调整后的属性值以及快捷表情控件相关联的运动单元的信息,确定快捷表情控件相关联的运动单元所绑定的骨骼的新的位置,将骨骼的位置更新为该新的位置,从而实现骨骼的运动,根据骨骼的运动控制骨骼绑定的面部皮肤模型中的顶点运动,以控制虚拟对象面部产生快捷表情控件匹配的指定表情。
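作为示意，下面给出一段快捷表情控件按其匹配的运动单元组合驱动骨骼的Python代码草图，其中以图6中“悲伤”对应的AU1、AU4、AU15、AU23组合为例，drive_expression为前文示意代码中的函数，组合内容与函数名均为示意性假设。

```python
# 示意：快捷表情控件匹配一组运动单元的组合，滑动控件时按调整后的属性值
# 同时驱动这组运动单元所绑定的骨骼，从而产生匹配的指定表情。
SAD_AUS = ["AU1", "AU4", "AU15", "AU23"]   # “悲伤”对应的运动单元组合（见图6）

def drive_quick_expression(attr_value, au_combo, au_info_lookup, bone_positions,
                           skin_vertices, vertex_bone):
    """au_combo: 快捷表情控件匹配的运动单元编号列表；
    au_info_lookup: {运动单元编号: 运动单元信息}；其余参数同 drive_expression。"""
    au_infos = [au_info_lookup[au] for au in au_combo]
    return drive_expression(attr_value, au_infos, bone_positions,
                            skin_vertices, vertex_bone)
```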
本实施例中,由于快捷表情控件匹配有指定表情,从而可以采用快捷表情控件,控制虚拟对象面部产生快捷表情控件匹配的指定表情,提高了生成指定表情的效率。
在一些实施例中,基于调整后的属性值以及快捷表情控件相关联的运动单元的信息,驱动相关联的运动单元所绑定的骨骼运动,以控制虚拟对象面部产生快捷表情控件匹配的指定表情包括:在快捷表情控件为情绪控件的情况下,确定情绪控件对应的情绪变化信息;基于情绪变化信息以及情绪控件相关联的运动单元的信息,驱动相关联的运动单元所绑定的骨骼运动,以控制虚拟对象面部呈现情绪控件匹配的指定情绪。
其中,情绪控件匹配的指定表情为指定情绪产生的表情,指定情绪包括但不限于是微笑、愤怒、惊讶、恐惧、厌恶、悲伤、蔑视或生气中的至少一种。
具体地,终端在检测到针对情绪控件的调整操作的情况下,确定情绪控件的调整后的属性值,根据调整后的属性值以及情绪控件相关联的运动单元的信息,确定情绪控件相关联的运动单元所绑定的骨骼的新的位置,将骨骼的位置更新为该新的位置,从而实现骨骼的运动,根据骨骼的运动控制骨骼绑定的面部皮肤模型中的顶点运动,以控制虚拟对象面部产生情绪控件匹配的指定情绪的表情。以图13中区域1308中的微笑对应的情绪控件为例,区域1308中的情绪控件为滑动式的控件,通过滑动情绪控件可以实现对情绪控件的调整,终端响应于针对区域1308中的微笑对应的情绪控件的滑动操作,可以控制区域1308中的虚拟对象面部产生微笑的表情。
本实施例中,由于情绪控件用于产生指定情绪的表情,从而可以采用情绪控件,控制虚拟对象面部产生情绪控件匹配的指定情绪的表情,提高了生成指定情绪的表情的效率。
在一些实施例中,基于调整后的属性值以及快捷表情控件相关联的运动单元的信息,驱动相关联的运动单元所绑定的骨骼运动,以控制虚拟对象面部产生快捷表情控件匹配的指定表情包括:在快捷表情控件为口型控件的情况下,确定口型控件对应的口型变化信息;基于口型变化信息以及口型控件相关联的运动单元的信息,驱动相关联的运动单元所绑定的骨骼运动,以控制虚拟对象面部呈现口型控件匹配的指定口型。
其中,口型控件匹配的指定表情为指定口型产生的表情,指定口型包括但不限于是AHH、AAA、EH等口型中的至少一种。口型控件还可以被语音驱动。
具体地,终端在检测到针对口型控件的调整操作的情况下,确定口型控件的调整后的属性值,根据调整后的属性值以及口型控件相关联的运动单元的信息,确定口型控件相关联的运动单元所绑定的骨骼的新的位置,将骨骼的位置更新为该新的位置,从而实现骨骼的运动,根据骨骼的运动控制骨骼绑定的面部皮肤模型中的顶点运动,以控制虚拟对象面部产生口型控件匹配的指定口型的表情。以图13中区域1304中的AAA口型对应的口型控件为例,区域1304中的口型控件为滑动式的控件,通过滑动口型控件可以实现对口型控件的调整,终端响应于针对区域1304中的AAA口型对应的口型控件的滑动操作,可以控制区域1304中的虚拟对象面部产生AAA口型的表情。
本实施例中，由于口型控件用于产生指定口型的表情，从而可以采用口型控件，控制虚拟对象面部产生口型控件匹配的指定口型的表情，提高了生成指定口型的表情的效率。
在一些实施例中,属于目标美术风格的三维面部模型为目标三维面部模型;确定属于目标美术风格的三维面部模型的骨骼结构包括:基于目标三维面部模型对标准三维面部模型进行仿射变换,得到仿射变换后的三维面部模型;基于仿射变换后的三维面部模型中的顶点所绑定的骨骼,确定目标三维面部模型中的顶点所绑定的骨骼,得到目标三维面部模型的骨骼结构。
其中,仿射变换用于改变标准三维面部模型中的顶点的位置,从而使得标准三维面部模型在形状和位置上与目标三维面部模型接近,以使得仿射变换后的三维面部模型的形状与目标三维面部模型的形状基本上一致,并且,仿射变换后的三维面部模型所包围的空间与目标三维面部模型所包围的空间基本上相同,从而仿射变换后的三维面部模型,可以表征虚拟对象面部的表面。
具体地,目标三维面部模型的关键点称为目标关键点。目标关键点为多个,多个是指至少两个。目标关键点可以是自动生成的,也可以是根据用户标注的关键点预测得到的。关键点代表面部上的关键部位的位置的点,关键部位包括但不限于是眉毛、眼睛、鼻子或下巴等中的至少一个。在检测到模型匹配操作的情况下,终端可以获取目标三维面部模型的目标关键点,基于目标关键点与对应的标准关键点之间的坐标变换关系,得到仿射变换矩阵,仿射变换矩阵用于将标准关键点的坐标变换到该标准关键点对应的目标关键点的坐标,还可以用于将目标关键点的坐标变换到该目标关键点对应的标准关键点的坐标,其中,模型匹配操作用于触发终端生成仿射变换后的三维面部模型。
在一些实施例中,目标三维面部模型中的顶点可以称为目标顶点,仿射变换后的三维面部模型中的顶点可以称为仿射顶点,针对目标三维面部模型中的每个目标顶点,终端可以从仿射变换后的三维面部模型中确定与目标顶点最近的仿射顶点,将该仿射顶点所绑定的骨骼确定为该目标顶点所绑定的骨骼,各目标顶点所绑定的骨骼形成目标三维面部模型的骨骼结构。
本实施例中,对标准三维面部模型进行仿射变换,从而基于仿射变换后的三维面部模型的顶点所绑定的骨骼,确定目标三维面部模型中的顶点所绑定的骨骼,以得到目标三维面部模型的骨骼结构,从而快速且准确地确定出了目标三维面部模型的骨骼结构。
在一些实施例中,基于目标三维面部模型对标准三维面部模型进行仿射变换,得到仿射变换后的三维面部模型包括:基于目标三维面部模型的参考关键点,预测得到目标三维面部模型对应的目标关键点;确定目标关键点与对应的标准关键点之间的坐标变换关系,得到仿射变换矩阵;标准关键点为标准三维面部模型的关键点;利用仿射变换矩阵对标准三维面部模型中的顶点位置进行仿射变换,得到仿射变换后的三维面部模型。
其中,参考关键点可以是通过标注的方式得到的关键点,例如参考关键点可以是对嘴巴、下巴、鼻子的位置进行标注所得到的关键点。目标关键点是通过参考关键点预测得到的。仿射变换矩阵反映了目标关键点与对应的标准关键点之间的坐标变换关系。
具体地,在展示目标三维面部模型的情况下,终端可以响应于关键点标注操作;关键点标注操作用于在目标三维面部模型上标注关键点,并可以将关键点标注操作所标注的关键点确定为参考关键点。例如,终端在展示目标三维面部模型的情况下,可以响应于关键点标注操作,关键点标注操作用于在目标三维面部模型上标注关键点,终端可以响应于模型匹配操作,将关键点标注操作所标注的关键点确定为参考关键点,基于目标三维面部模型的参考关键点,预测得到目标三维面部模型对应的目标关键点。本实施例中,将关键点标注操作所标注的关键点确定为参考关键点,提高了确定参考关键点的效率。
在一些实施例中,得到仿射变换矩阵后,终端可以将标准三维面部模型中的顶点的坐标与仿射变换矩阵进行乘积运算,乘积运算的结果为新的坐标,将该新的坐标处的点作为仿射变换后的三维面部模型中的顶点。
本实施例中，根据关键点之间的坐标变换关系得到仿射变换矩阵，从而得到的仿射变换矩阵可以用于将标准三维面部模型进行仿射变换，得到在位置和形状上与目标三维面部模型相近的三维面部模型，提高了仿射变换的准确度。
在一些实施例中,如图30所示,提供了一种面部表情处理方法,该方法可以由终端或服务器执行,还可以由终端和服务器共同执行,以该方法应用于终端为例进行说明,包括以下步骤:
步骤3002,基于目标三维面部模型的参考关键点,预测得到目标三维面部模型对应的目标关键点。
步骤3004,确定目标关键点与对应的标准关键点之间的坐标变换关系,得到仿射变换矩阵;标准关键点为标准三维面部模型的关键点。
步骤3006,利用仿射变换矩阵对标准三维面部模型中的顶点位置进行仿射变换,得到仿射变换后的三维面部模型。
步骤3008,基于仿射变换后的三维面部模型中的顶点所绑定的骨骼,确定目标三维面部模型中的顶点所绑定的骨骼,得到目标三维面部模型的骨骼结构。
步骤3010,展示多种美术风格分别对应的对象类型;多种美术风格包括目标美术风格。
步骤3012,在目标美术风格下的目标对象类型为选中的情况下,确定目标对象类型对应的面部骨骼信息,以及确定与目标美术风格匹配的风格表情模板。
步骤3014,基于面部骨骼信息将风格表情模板中的运动单元与骨骼结构中相关的骨骼进行绑定。
步骤3016,展示目标表情控件组合,将目标表情控件组合中的表情控件与风格表情模板中的至少一个运动单元进行关联。
步骤3018,对骨骼结构进行蒙皮处理,生成虚拟对象面部。
步骤3020,响应于针对至少一个表情控件的调整操作,使得与表情控件关联的运动单元的信息驱动相绑定的骨骼运动,以控制虚拟对象面部产生符合目标美术风格的表情。
本申请提供的面部表情处理方法,定义了不同项目不同风格精度,每个模板映射了不同的配置信息(不同的骨骼数和表情AU参数数量等),从而可以应对不同风格的表现张力,为不同精度模板定制了专属的控制器组件。因此,采用本申请提供的面部表情处理方法,能实现表情数据自动生成,并适配不同游戏的表情风格与技术级别,转化为美术师可以操控的资产文件,方便进一步制作表情动画序列。本申请提供的面部表情处理方法,可适配各类手游、端游、主机以及CG(Computer Graphics,计算机动画)业务要求,通用性强,配套的工具自动化程度高,且动画操控简单自由,不会过激和穿帮。有效帮助业务实现1-N的量产,极大的提升了表情数据的制作效率与精度质量。
本申请提供的面部表情处理方法,可以应用于任何的需要进行面部表情处理的场景中,包括但不限于是影视特效、可视化设计、游戏、动漫、虚拟现实(Virtual Reality,VR)、工业仿真和数字文创等场景中的至少一种。本申请提供的面部表情处理方法,应用于影视特效、可视化设计、游戏、动漫、虚拟现实、工业仿真和数字文创等场景中,可以在保证面部表情的精度的情况下加快面部表情的处理效率。例如,在游戏场景中,本申请提供的面部表情处理方法,可以为游戏中的虚拟对象产生所需要的表情。在动漫场景中,本申请提供的面部表情处理方法,可以为动漫中的虚拟对象产生所需要的表情。
应该理解的是,虽然如上所述的各实施例所涉及的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,如上所述的各实施例所涉及的流程图中的至少一部分步骤可以包括多个步骤或者多个阶段,这些步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤中的步骤或者阶段的至少一部分轮流或者交替地执行。
基于同样的发明构思，本申请实施例还提供了一种用于实现上述所涉及的面部表情处理方法的面部表情处理装置。该装置所提供的解决问题的实现方案与上述方法中所记载的实现方案相似，故下面所提供的一个或多个面部表情处理装置实施例中的具体限定可以参见上文中对于面部表情处理方法的限定，在此不再赘述。
在一些实施例中,如图31所示,提供了一种面部表情处理装置,包括:骨骼确定模块3102、骨骼绑定模块3104、控件展示模块3106、骨骼蒙皮模块3108和表情控制模块3110,其中:
骨骼确定模块3102,用于确定属于目标美术风格的三维面部模型的骨骼结构。
骨骼绑定模块3104,用于确定与目标美术风格匹配的风格表情模板,将风格表情模板中的运动单元与骨骼结构中相关的骨骼进行绑定。
控件展示模块3106,用于展示目标表情控件组合,将目标表情控件组合中的表情控件与风格表情模板中的至少一个运动单元进行关联。
骨骼蒙皮模块3108,用于对骨骼结构进行蒙皮处理,生成虚拟对象面部。
表情控制模块3110,用于响应于针对至少一个表情控件的调整操作,使得与表情控件关联的运动单元的信息驱动相绑定的骨骼运动,以控制虚拟对象面部产生符合目标美术风格的表情。
在一些实施例中,骨骼绑定模块,还用于:展示多种美术风格分别对应的对象类型;多种美术风格包括目标美术风格;在目标美术风格下的目标对象类型为选中的情况下,确定目标对象类型对应的面部骨骼信息,以及确定与目标美术风格匹配的风格表情模板;基于面部骨骼信息将风格表情模板中的运动单元与骨骼结构中相关的骨骼进行绑定。
在一些实施例中,装置还包括运动单元更新模块,运动单元更新模块用于:在展示虚拟对象面部的情况下,展示风格表情模板中的运动单元的信息编辑区域;响应于在目标运动单元的信息编辑区域触发的信息编辑操作,对目标运动单元的信息进行更新;基于目标运动单元的更新后的信息,驱动与目标运动单元相绑定的骨骼运动,以更新虚拟对象面部的表情。
在一些实施例中,运动单元更新模块还用于:展示风格表情模板中的运动单元的骨骼更新区域;响应于在目标运动单元的骨骼更新区域触发的骨骼更新操作,对目标运动单元绑定的骨骼进行更新。
在一些实施例中,信息编辑区域以及骨骼更新区域是展示在编辑窗口中的;装置还包括模板导出模块,模板导出模块用于:响应于针对编辑窗口的窗口关闭操作,展示表情模板导出触发控件;响应于针对表情模板导出触发控件的触发操作,导出更新表情模板;更新表情模板,是对风格表情模板中的至少一个运动单元的信息或绑定的骨骼进行更新后所得到的模板。
在一些实施例中,信息编辑区域存在冻结状态和激活状态两种状态,装置还用于:在信息编辑区域处于冻结状态的情况下,停止响应于针对信息编辑区域的操作;在信息编辑区域处于激活状态的情况下,响应于针对信息编辑区域的操作。
在一些实施例中,三维面部模型属于目标业务场景中的虚拟对象,目标表情控件组合是与目标业务场景匹配的表情控件组合,目标表情控件组合包括多个第一表情控件,多个第一表情控件的控件数量,是基于目标业务场景的场景复杂程度确定的;多个第一表情控件用于在全局上调整虚拟对象面部的表情;表情控制模块,还用于:响应于针对至少一个表情控件的调整操作,在所针对的表情控件为第一表情控件的情况下,确定第一表情控件的调整后的属性值;基于第一表情控件的调整后的属性值以及第一表情控件相关联的运动单元的信息,驱动相关联的运动单元所绑定的骨骼运动,以控制虚拟对象面部产生符合目标美术风格的表情。
在一些实施例中,目标表情控件组合还包括第二表情控件,第二表情控件用于调整虚拟对象面部的表情的细节;表情控制模块,还用于在所针对的表情控件为第二表情控件的情况下,确定第二表情控件的调整后的属性值;基于第二表情控件的调整后的属性值以及第二表情控件相关联的运动单元的信息,驱动相关联的运动单元所绑定的骨骼运动,以调整虚拟对象面部的表情的细节。
在一些实施例中,目标表情控件组合包括快捷表情控件,快捷表情控件匹配有指定表情;表情控制模块,还用于:响应于针对快捷表情控件的调整操作,获取快捷表情控件的调整后的属性值;基于调整后的属性值以及快捷表情控件相关联的运动单元的信息,驱动相关联的运动单元所绑定的骨骼运动,以控制虚拟对象面部产生快捷表情控件匹配的指定表情。
在一些实施例中,表情控制模块还用于:在快捷表情控件为情绪控件的情况下,确定情绪控件对应的调整后的属性值;基于调整后的属性值以及情绪控件相关联的运动单元的信息,驱动相关联的运动单元所绑定的骨骼运动,以控制虚拟对象面部呈现情绪控件匹配的指定情绪。
在一些实施例中,表情控制模块还用于:在快捷表情控件为口型控件的情况下,确定口型控件对应的口型变化信息;基于口型变化信息以及口型控件相关联的运动单元的信息,驱动相关联的运动单元所绑定的骨骼运动,以控制虚拟对象面部呈现口型控件匹配的指定口型。
在一些实施例中,属于目标美术风格的三维面部模型为目标三维面部模型;骨骼确定模块,还用于:基于目标三维面部模型对标准三维面部模型进行仿射变换,得到仿射变换后的三维面部模型;基于仿射变换后的三维面部模型中的顶点所绑定的骨骼,确定目标三维面部模型中的顶点所绑定的骨骼,得到目标三维面部模型的骨骼结构。
在一些实施例中,骨骼确定模块,还用于:基于目标三维面部模型的参考关键点,预测得到目标三维面部模型对应的目标关键点;确定目标关键点与对应的标准关键点之间的坐标变换关系,得到仿射变换矩阵;标准关键点为标准三维面部模型的关键点;利用仿射变换矩阵对标准三维面部模型中的顶点位置进行仿射变换,得到仿射变换后的三维面部模型。
在一些实施例中,装置还用于:在展示目标三维面部模型的情况下,响应于关键点标注操作;关键点标注操作用于在目标三维面部模型上标注关键点;将关键点标注操作所标注的关键点确定为参考关键点。
上述面部表情处理装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一些实施例中,提供了一种计算机设备,该计算机设备可以是服务器,其内部结构图可以如图32所示。该计算机设备包括处理器、存储器、输入/输出接口(Input/Output,简称I/O)和通信接口。其中,处理器、存储器和输入/输出接口通过系统总线连接,通信接口通过输入/输出接口连接到系统总线。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质和内存储器。该非易失性存储介质存储有操作系统、计算机可读指令和数据库。该内存储器为非易失性存储介质中的操作系统和计算机可读指令的运行提供环境。该计算机设备的数据库用于存储面部表情处理方法中涉及到的数据。该计算机设备的输入/输出接口用于处理器与外部设备之间交换信息。该计算机设备的通信接口用于与外部的终端通过网络连接通信。该计算机可读指令被处理器执行时以实现一种面部表情处理方法。
在一些实施例中,提供了一种计算机设备,该计算机设备可以是终端,其内部结构图可以如图33所示。该计算机设备包括处理器、存储器、输入/输出接口、通信接口、显示单元和输入装置。其中,处理器、存储器和输入/输出接口通过系统总线连接,通信接口、显示单元和输入装置通过输入/输出接口连接到系统总线。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统和计算机可读指令。该内存储器为非易失性存储介质中的操作系统和计算机可读指令的运行提供环境。该计算机设备的输入/输出接口用于处理器与外部设备之间交换信息。该计算机设备的通信接口用于与外部的终端进行有线或无线方式的通信,无线方式可通过WIFI、移动蜂窝网络、NFC(近场通信)或其他技术实现。该计算机可读指令被处理器执行时以实现一种面部表情处理方法。该计算机设备的显示单元用于形成视觉可见的画面,可以是显示屏、投影装置或虚 拟现实成像装置,显示屏可以是液晶显示屏或电子墨水显示屏,该计算机设备的输入装置可以是显示屏上覆盖的触摸层,也可以是计算机设备外壳上设置的按键、轨迹球或触控板,还可以是外接的键盘、触控板或鼠标等。
本领域技术人员可以理解,图32和图33中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一些实施例中,提供了一种计算机设备,包括存储器和一个或多个处理器,存储器中存储有计算机可读指令,计算机可读指令被该处理器执行时,使得一个或多个处理器执行上述面部表情处理方法中的步骤。
在一些实施例中,提供了一个或多个非易失性可读存储介质,其上存储有计算机可读指令,计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器实现上述面部表情处理方法中的步骤。
在一些实施例中,提供了一种计算机程序产品,包括计算机可读指令,该计算机可读指令被处理器执行时实现上述面部表情处理方法中的步骤。
需要说明的是,本申请所涉及的用户信息(包括但不限于用户设备信息、用户个人信息等)和数据(包括但不限于用于分析的数据、存储的数据、展示的数据等),均为经用户授权或者经过各方充分授权的信息和数据,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,所述的计算机可读指令可存储于一非易失性计算机可读取存储介质中,该计算机可读指令在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory,ROM)、磁带、软盘、闪存、光存储器、高密度嵌入式非易失性存储器、阻变存储器(ReRAM)、磁变存储器(Magnetoresistive Random Access Memory,MRAM)、铁电存储器(Ferroelectric Random Access Memory,FRAM)、相变存储器(Phase Change Memory,PCM)、石墨烯存储器等。易失性存储器可包括随机存取存储器(Random Access Memory,RAM)或外部高速缓冲存储器等。作为说明而非局限,RAM可以是多种形式,比如静态随机存取存储器(Static Random Access Memory,SRAM)或动态随机存取存储器(Dynamic Random Access Memory,DRAM)等。本申请所提供的各实施例中所涉及的数据库可包括关系型数据库和非关系型数据库中至少一种。非关系型数据库可包括基于区块链的分布式数据库等,不限于此。本申请所提供的各实施例中所涉及的处理器可为通用处理器、中央处理器、图形处理器、数字信号处理器、可编程逻辑器、基于量子计算的数据处理逻辑器等,不限于此。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请的保护范围应以所附权利要求为准。

Claims (18)

  1. 一种面部表情处理方法,其特征在于,由计算机设备执行,所述方法包括:
    确定属于目标美术风格的三维面部模型的骨骼结构;
    确定与所述目标美术风格匹配的风格表情模板,将所述风格表情模板中的运动单元与所述骨骼结构中相关的骨骼进行绑定;
    展示目标表情控件组合,将所述目标表情控件组合中的表情控件与所述风格表情模板中的至少一个所述运动单元进行关联;
    对所述骨骼结构进行蒙皮处理,生成虚拟对象面部;
    响应于针对至少一个所述表情控件的调整操作,使得与所述表情控件关联的运动单元的信息驱动相绑定的骨骼运动,以控制所述虚拟对象面部产生符合所述目标美术风格的表情。
  2. 根据权利要求1所述的方法,其特征在于,所述确定与所述目标美术风格匹配的风格表情模板,将所述风格表情模板中的运动单元与所述骨骼结构中相关的骨骼进行绑定包括:
    展示多种美术风格分别对应的对象类型;所述多种美术风格包括所述目标美术风格;
    在所述目标美术风格下的目标对象类型为选中的情况下,确定所述目标对象类型对应的面部骨骼信息,以及确定与所述目标美术风格匹配的风格表情模板;
    基于所述面部骨骼信息将所述风格表情模板中的运动单元与所述骨骼结构中相关的骨骼进行绑定。
  3. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    在展示所述虚拟对象面部的情况下,展示所述风格表情模板中的运动单元的信息编辑区域;
    响应于在目标运动单元的信息编辑区域触发的信息编辑操作,对所述目标运动单元的信息进行更新;
    基于所述目标运动单元的更新后的信息,驱动与所述目标运动单元相绑定的骨骼运动,以更新所述虚拟对象面部的表情。
  4. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    展示所述风格表情模板中的运动单元的骨骼更新区域;
    响应于在所述目标运动单元的骨骼更新区域触发的骨骼更新操作,对所述目标运动单元绑定的骨骼进行更新。
  5. 根据权利要求4所述的方法,其特征在于,所述信息编辑区域以及骨骼更新区域是展示在编辑窗口中的;所述方法还包括:
    响应于针对编辑窗口的窗口关闭操作,展示表情模板导出触发控件;
    响应于针对表情模板导出触发控件的触发操作,导出更新表情模板;所述更新表情模板,是对所述风格表情模板中的至少一个运动单元的信息或绑定的骨骼进行更新后所得到的模板。
  6. 根据权利要求3所述的方法,其特征在于,所述信息编辑区域存在冻结状态和激活状态两种状态,所述方法还包括:
    在所述信息编辑区域处于冻结状态的情况下,停止响应于针对所述信息编辑区域的操作;
    在所述信息编辑区域处于激活状态的情况下,响应于针对所述信息编辑区域的操作。
  7. 根据权利要求1所述的方法,其特征在于,所述三维面部模型属于目标业务场景中的虚拟对象,所述目标表情控件组合是与所述目标业务场景匹配的表情控件组合,所述目标表情控件组合包括多个第一表情控件,所述多个第一表情控件的控件数量,是基于所述目标业务场景的场景复杂程度确定的;所述多个第一表情控件用于在全局上调整所述虚拟对象面部的表情;
    所述响应于针对至少一个所述表情控件的调整操作,使得与所述表情控件关联的运动单元的信息驱动相绑定的骨骼运动,以控制所述虚拟对象面部产生符合所述目标美术风格的表情包括:
    响应于针对至少一个所述表情控件的调整操作,在所针对的表情控件为所述第一表情控件的情况下,确定所述第一表情控件的调整后的属性值;
    基于所述第一表情控件的调整后的属性值以及所述第一表情控件相关联的运动单元的信息,驱动所述相关联的运动单元所绑定的骨骼运动,以控制所述虚拟对象面部产生符合所述目标美术风格的表情。
  8. 根据权利要求7所述的方法,其特征在于,所述目标表情控件组合还包括第二表情控件,所述第二表情控件用于调整所述虚拟对象面部的表情的细节;所述方法还包括:
    在所针对的表情控件为所述第二表情控件的情况下,确定所述第二表情控件的调整后的属性值;
    基于所述第二表情控件的调整后的属性值以及所述第二表情控件相关联的运动单元的信息,驱动所述相关联的运动单元所绑定的骨骼运动,以调整所述虚拟对象面部的表情的细节。
  9. 根据权利要求1所述的方法,其特征在于,所述目标表情控件组合包括快捷表情控件,所述快捷表情控件匹配有指定表情;
    所述响应于针对至少一个所述表情控件的调整操作,使得与所述表情控件关联的运动单元的信息驱动相绑定的骨骼运动,以控制所述虚拟对象面部产生符合所述目标美术风格的表情包括:
    响应于针对所述快捷表情控件的调整操作,获取所述快捷表情控件的调整后的属性值;
    基于所述调整后的属性值以及所述快捷表情控件相关联的运动单元的信息,驱动所述相关联的运动单元所绑定的骨骼运动,以控制所述虚拟对象面部产生所述快捷表情控件匹配的指定表情。
  10. 根据权利要求9所述的方法,其特征在于,所述基于所述调整后的属性值以及所述快捷表情控件相关联的运动单元的信息,驱动所述相关联的运动单元所绑定的骨骼运动,以控制所述虚拟对象面部产生所述快捷表情控件匹配的指定表情包括:
    在所述快捷表情控件为情绪控件的情况下,确定所述情绪控件对应的调整后的属性值;
    基于所述调整后的属性值以及所述情绪控件相关联的运动单元的信息,驱动所述相关联的运动单元所绑定的骨骼运动,以控制所述虚拟对象面部呈现所述情绪控件匹配的指定情绪。
  11. 根据权利要求9所述的方法,其特征在于,所述基于所述调整后的属性值以及所述快捷表情控件相关联的运动单元的信息,驱动所述相关联的运动单元所绑定的骨骼运动,以控制所述虚拟对象面部产生所述快捷表情控件匹配的指定表情包括:
    在所述快捷表情控件为口型控件的情况下,确定所述口型控件对应的口型变化信息;
    基于所述口型变化信息以及所述口型控件相关联的运动单元的信息,驱动所述相关联的运动单元所绑定的骨骼运动,以控制所述虚拟对象面部呈现所述口型控件匹配的指定口型。
  12. 根据权利要求1至11任意一项所述的方法,其特征在于,所述属于目标美术风格的三维面部模型为目标三维面部模型;所述确定属于目标美术风格的三维面部模型的骨骼结构包括:
    基于所述目标三维面部模型对标准三维面部模型进行仿射变换,得到仿射变换后的三维面部模型;
    基于所述仿射变换后的三维面部模型中的顶点所绑定的骨骼,确定所述目标三维面部模型中的顶点所绑定的骨骼,得到所述目标三维面部模型的骨骼结构。
  13. 根据权利要求12所述的方法,其特征在于,所述基于所述目标三维面部模型对标准三维面部模型进行仿射变换,得到仿射变换后的三维面部模型包括:
    基于所述目标三维面部模型的参考关键点,预测得到所述目标三维面部模型对应的目标关键点;
    确定所述目标关键点与对应的标准关键点之间的坐标变换关系,得到仿射变换矩阵;所述标准关键点为所述标准三维面部模型的关键点;
    利用所述仿射变换矩阵对所述标准三维面部模型中的顶点位置进行仿射变换,得到仿射变换后的三维面部模型。
  14. 根据权利要求13所述的方法,其特征在于,所述方法还包括:
    在展示所述目标三维面部模型的情况下,响应于关键点标注操作;所述关键点标注操作用于在所述目标三维面部模型上标注关键点;
    将所述关键点标注操作所标注的关键点确定为参考关键点。
  15. 一种面部表情处理装置,其特征在于,所述装置包括:
    骨骼确定模块,用于确定属于目标美术风格的三维面部模型的骨骼结构;
    骨骼绑定模块,用于确定与所述目标美术风格匹配的风格表情模板,将所述风格表情模板中的运动单元与所述骨骼结构中相关的骨骼进行绑定;
    控件展示模块,用于展示目标表情控件组合,将所述目标表情控件组合中的表情控件与所述风格表情模板中的至少一个所述运动单元进行关联;
    骨骼蒙皮模块,用于对所述骨骼结构进行蒙皮处理,生成虚拟对象面部;
    表情控制模块,用于响应于针对至少一个所述表情控件的调整操作,使得与所述表情控件关联的运动单元的信息驱动相绑定的骨骼运动,以控制所述虚拟对象面部产生符合所述目标美术风格的表情。
  16. 一种计算机设备,包括存储器和一个或多个处理器,所述存储器存储有计算机可读指令,其特征在于,所述计算机可读指令被所述处理器执行时,使得所述一个或多个处理器执行权利要求1至14中任一项所述的方法的步骤。
  17. 一个或多个非易失性可读存储介质,其上存储有计算机可读指令,其特征在于,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器实现权利要求1至14中任一项所述的方法的步骤。
  18. 一种计算机程序产品,包括计算机可读指令,其特征在于,该计算机可读指令被处理器执行时实现权利要求1至14中任一项所述的方法的步骤。
PCT/CN2023/095567 2022-08-04 2023-05-22 面部表情处理方法、装置、计算机设备和存储介质 WO2024027285A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210934106.3 2022-08-04
CN202210934106.3A CN117576270A (zh) 2022-08-04 2022-08-04 面部表情处理方法、装置、计算机设备和存储介质

Publications (1)

Publication Number Publication Date
WO2024027285A1 true WO2024027285A1 (zh) 2024-02-08

Family

ID=89848465

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/095567 WO2024027285A1 (zh) 2022-08-04 2023-05-22 面部表情处理方法、装置、计算机设备和存储介质

Country Status (2)

Country Link
CN (1) CN117576270A (zh)
WO (1) WO2024027285A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109621419A (zh) * 2018-12-12 2019-04-16 网易(杭州)网络有限公司 游戏角色表情的生成装置方法及装置、存储介质
US20190325633A1 (en) * 2018-04-23 2019-10-24 Magic Leap, Inc. Avatar facial expression representation in multidimensional space
CN110717974A (zh) * 2019-09-27 2020-01-21 腾讯数码(天津)有限公司 展示状态信息的控制方法、装置、电子设备和存储介质
CN111798550A (zh) * 2020-07-17 2020-10-20 网易(杭州)网络有限公司 模型表情处理的方法及装置
CN114299205A (zh) * 2021-12-29 2022-04-08 完美世界(北京)软件科技发展有限公司 表情动画制作方法及装置、存储介质、计算机设备

Also Published As

Publication number Publication date
CN117576270A (zh) 2024-02-20

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23848995

Country of ref document: EP

Kind code of ref document: A1