CN111798550A - Method and device for processing model expressions - Google Patents


Info

Publication number: CN111798550A
Application number: CN202010690729.1A
Authority: CN (China)
Prior art keywords: face model, skeleton, expression, model, generating
Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Inventor: 陈锦禹
Current and original assignee: Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202010690729.1A
Publication of CN111798550A
Other languages: Chinese (zh)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene


Abstract

An embodiment of the present application provides a method and a device for processing model expressions. The method includes: acquiring a face model to be processed; in response to a starting operation for a target plug-in, starting the target plug-in and displaying an operation interface for the target plug-in on a graphical user interface; generating at least one alignment point of the face model in response to a triggering operation for a first control on the operation interface; in response to a triggering operation for a second control on the operation interface, generating a skeleton of the face model and binding information between the skeleton and the face model according to the position of the at least one alignment point; and controlling the expression of the face model according to the skeleton of the face model and the binding information. Because the alignment points, the skeleton, and the binding information between the skeleton and the face model are all generated automatically by the target plug-in, the expression of the face model can be controlled without manual operation, which effectively improves the efficiency of processing the facial expressions of virtual characters.

Description

Method and device for processing model expressions
Technical Field
The embodiment of the application relates to computer technologies, and in particular relates to a method and a device for processing model expressions.
Background
As games become increasingly refined, the demands on the facial expressions of virtual characters in games keep rising.
At present, when generating a facial expression of a virtual character in the prior art, a model of the virtual object is generally established first, the bones of the model are then set up by manual operation, and a controller, attribute associations, and expressions are created by hand for each bone; only after this manual creation is complete can the expression of the virtual object be generated.
However, because the expression must be generated manually for every model to be processed, the facial expressions of virtual characters are processed inefficiently.
Disclosure of Invention
The embodiment of the application provides a method and a device for processing model expressions, so as to solve the problem of low processing efficiency of facial expressions of virtual characters.
In a first aspect, an embodiment of the present application provides a method for processing a model expression, which provides a graphical user interface through a terminal device, and includes:
acquiring a face model to be processed;
in response to a starting operation for a target plug-in, starting the target plug-in, and displaying an operation interface for the target plug-in on the graphical user interface;
generating at least one alignment point of the face model in response to a triggering operation for a first control on the operation interface;
generating a skeleton of the face model and binding information between the skeleton and the face model according to the position of the at least one alignment point in response to a triggering operation of a second control on the operation interface;
and controlling the expression of the face model according to the skeleton of the face model and the binding information.
In one possible design, after generating the at least one alignment point of the face model, the method further comprises:
adjusting the position of the at least one alignment point in response to an adjustment operation for the at least one alignment point.
In one possible design, the adjusting the position of the at least one alignment point comprises:
adjusting the position of the at least one alignment point according to the wiring of the face model.
In one possible design, the adjusting the position of the at least one alignment point in response to the adjustment operation for the at least one alignment point comprises:
in response to an adjustment operation for the at least one alignment point, determining the alignment point selected by the adjustment operation;
and adjusting the position of the alignment point selected by the adjustment operation.
In one possible design, the binding information between the skeleton and the face model includes:
a controller, an attribute association, and an expression corresponding to each bone, wherein each bone has a first performance parameter used for indicating the performance of the bone, and each controller has a second performance parameter; the controller is used for controlling the first performance parameter according to the second performance parameter, the attribute association is used for associating the first performance parameter with the second performance parameter, and the expression is used for connecting the first performance parameter with the second performance parameter through a mathematical formula.
In one possible design, the controlling the expression of the facial model according to the skeleton of the facial model and the binding information includes:
adding the skeleton to the face model, and skinning the face model to obtain a processed face model;
and adjusting the skeleton of the face model according to the binding information so as to control the expression of the processed face model.
In one possible design, the generating the skeleton of the face model according to the position of the at least one docking point includes:
and generating bones of the face model within a preset number range according to the position of the at least one alignment point and a preset picture precision.
In a second aspect, an embodiment of the present application provides an apparatus for processing a model expression, which provides a graphical user interface through a terminal device, and includes:
the acquisition module is used for acquiring a face model to be processed;
the control module is used for responding to the starting operation of the target plug-in, starting the target plug-in and displaying an operation interface aiming at the target plug-in on the graphical user interface;
the generating module is used for responding to the triggering operation of a first control on the operation interface and generating at least one contraposition point of the face model;
the generating module is further used for responding to a triggering operation of a second control on the operation interface, and generating a skeleton of the face model and binding information between the skeleton and the face model according to the position of the at least one alignment point;
the control module is further used for controlling the expression of the face model according to the skeleton of the face model and the binding information.
In one possible design, the control module is further configured to:
after generating the at least one alignment point of the face model, adjusting the position of the at least one alignment point in response to an adjustment operation for the at least one alignment point.
In one possible design, the control module is specifically configured to:
adjusting the position of the at least one alignment point according to the wiring of the face model.
In one possible design, the control module is specifically configured to:
in response to an adjustment operation for the at least one alignment point, determining the alignment point selected by the adjustment operation;
and adjusting the position of the alignment point selected by the adjustment operation.
In one possible design, the binding information between the skeleton and the face model includes:
a controller, an attribute association, and an expression corresponding to each bone, wherein each bone has a first performance parameter used for indicating the performance of the bone, and each controller has a second performance parameter; the controller is used for controlling the first performance parameter according to the second performance parameter, the attribute association is used for associating the first performance parameter with the second performance parameter, and the expression is used for connecting the first performance parameter with the second performance parameter through a mathematical formula.
In one possible design, the control module is specifically configured to:
adding the skeleton to the face model, and skinning the face model to obtain a processed face model;
and adjusting the skeleton of the face model according to the binding information so as to control the expression of the processed face model.
In one possible design, the generating module is specifically configured to:
and generating bones of the face model within a preset number range according to the position of the at least one alignment point and a preset picture precision.
In a third aspect, an embodiment of the present application provides an apparatus for processing model expressions, including:
a memory for storing a program;
a processor for executing the program stored in the memory; when the program is executed, the processor performs the method described above in the first aspect and any one of its possible designs.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, comprising instructions which, when executed on a computer, cause the computer to perform the method as described above in the first aspect and any one of the various possible designs of the first aspect.
An embodiment of the present application provides a method and a device for processing model expressions. The method includes: acquiring a face model to be processed; in response to a starting operation for a target plug-in, starting the target plug-in and displaying an operation interface for the target plug-in on the graphical user interface; generating at least one alignment point of the face model in response to a triggering operation for a first control on the operation interface; in response to a triggering operation for a second control on the operation interface, generating a skeleton of the face model and binding information between the skeleton and the face model according to the position of the at least one alignment point; and controlling the expression of the face model according to the skeleton of the face model and the binding information. Because the alignment points, the skeleton, and the binding information between the skeleton and the face model are all generated automatically by the target plug-in, no manual operation is needed to control the expression of the face model, which effectively improves the efficiency of processing the facial expressions of virtual characters, allows the virtual characters to have richer expressions, and improves the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a model for which a small number of bones are created, provided by an embodiment of the present application;
fig. 2 is a schematic diagram of an expression obtained with a small number of bones, provided by an embodiment of the present application;
FIG. 3 is a flowchart of a method for processing model expressions according to an embodiment of the present disclosure;
FIG. 4 is a schematic view of an operation interface of a target plug-in provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of generating alignment points provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of generating a skeleton and binding information according to an embodiment of the present disclosure;
FIG. 7 is a flowchart of a method for processing model expressions according to another embodiment of the present application;
FIG. 8 is a schematic illustration of an embodiment of the present application for adding bone;
fig. 9 is an expression diagram provided in an embodiment of the present application;
fig. 10 is a schematic structural diagram of an apparatus for processing model expressions according to an embodiment of the present application;
fig. 11 is a schematic diagram of a hardware structure of a device for processing model expressions provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The background to which this application relates is described in further detail below:
with the continuous development of the game industry, production requirements for games are higher and higher. A game usually includes at least one virtual character, and the virtual character may need to make corresponding facial expressions in the game.
At present, when generating a facial expression of a virtual character in the prior art, a model of the virtual object is generally established first, the bones of the model are set up by manual operation, and a controller, attribute associations, expressions, and the like are created by hand for each bone; the expression of the virtual character can then be generated by relying on the created bones, controllers, constraint commands, and expressions.
Therefore, in the prior art, the expression of each virtual character needs to be generated manually. This manual processing method makes the processing of the facial expressions of virtual characters inefficient, and the resulting expressions are relatively monotonous.
In one possible implementation, a smaller number of bones may be created during manual processing in order to reduce the workload and increase the processing speed. However, when the number of bones is small, the expressions that can be generated are limited, as can be understood with reference to fig. 1 and fig. 2: fig. 1 is a schematic diagram of creating model bones in the prior art provided by an embodiment of the present application, and fig. 2 is a schematic diagram of an expression obtained with those bones.
As shown in fig. 1, assume that for a certain model two bones 101 and 102 are manually created to control the eyelid of the left eye and the eyelid of the right eye respectively, so that the eyes of the virtual character can only be opened or closed; the opened eyes are shown in fig. 1 and the closed eyes in fig. 2. A virtual character rigged with only the two bones shown in fig. 1 can therefore perform only a blinking animation, which results in a monotonous expression effect and cannot present a vivid image of the virtual character.
In summary, with the manual processing method of the prior art, improving the vividness of the expression of a virtual character requires creating a large number of bones, controllers, and the like, so the facial expressions of virtual characters are processed inefficiently.
Based on the above problems in the prior art, the present application proposes the following technical concept: the large number of repetitive commands, the complicated attribute associations, and other operations in the skeleton generating process are written into a script, and the written script is used directly to process each model; each model then no longer needs to be processed manually, which effectively improves the efficiency of processing the expressions of virtual characters.
The method for processing model expressions provided by the present application is described below with reference to specific embodiments. Fig. 3 is a flowchart of the method for processing model expressions provided by an embodiment of the present application, fig. 4 is a schematic view of an operation interface of a target plug-in provided by an embodiment of the present application, fig. 5 is a schematic view of generating alignment points provided by an embodiment of the present application, and fig. 6 is a schematic view of generating bones and binding information provided by an embodiment of the present application.
As shown in fig. 3, the method includes:
s301, obtaining a face model to be processed.
The face model is a model of the face of a virtual character that needs expression processing; the face of the virtual character can be modeled to obtain the face model to be processed.
S302, responding to the starting operation of the target plug-in, starting the target plug-in, and displaying an operation interface of the target plug-in on the graphical user interface.
In this embodiment, the target plug-in is a plug-in for generating the alignment points, skeleton, controllers, attribute associations, and expressions of a model.
In a possible implementation, the series of operations for generating a model's alignment points, bones, controllers, attribute associations, and expressions may, for example, be written into a script using the MAXScript language of 3ds Max, and the target plug-in of this embodiment is produced from the written script.
Therefore, in this embodiment, the target plug-in may include the logic code for generating the alignment points, skeleton, controllers, attribute associations, and expressions of a model; whenever a related trigger operation is detected, the target plug-in automatically executes the corresponding logic code in the background to generate the corresponding data.
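The core idea, collecting the repetitive rigging commands into one scripted routine that is reused for every face model, can be illustrated with a minimal sketch. The actual plug-in is written in MAXScript for 3ds Max; the Python below only illustrates the control flow, and every name and data shape in it is hypothetical.

```python
def build_rig(alignment_points):
    """Generate one bone plus its controller binding per alignment point."""
    rig = []
    for name, position in alignment_points.items():
        bone = {"name": f"bone_{name}", "position": position}
        # the controller's parameter is wired to the bone's parameter
        # (the "attribute association" in the patent's terminology)
        controller = {"name": f"ctrl_{name}", "drives": bone["name"]}
        rig.append({"bone": bone, "controller": controller})
    return rig

# the same routine is reused for every face model, so no manual rigging
points = {"mouth_corner_L": (-1.0, 0.0, 0.0), "mouth_corner_R": (1.0, 0.0, 0.0)}
rig = build_rig(points)
print(len(rig))  # → 2
```

Because the routine is data-driven, processing a new face model is a matter of feeding it a new set of alignment points rather than repeating the rigging commands by hand.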
The starting operation for the target plug-in may be, for example, a click operation on the graphical user interface, or may also be an operation of triggering a command line for starting the target plug-in to run.
In one possible implementation, the operation interface of the target plug-in may be as shown in fig. 4, referring to fig. 4, the operation interface of the target plug-in may include a first control 401 and a second control 402, wherein the first control 401 may be used to trigger the corresponding logic code to generate at least one alignment point of the face model, and the second control may be used to trigger the corresponding logic code to generate the skeleton of the face model and the binding information between the skeleton and the face model.
In other possible implementations, the positions, shapes, sizes, and the like of the first control and the second control can be chosen according to actual requirements; alternatively, the target plug-in may include only one control that expands into two sub-controls when operated, or the target plug-in may include controls in addition to the first control and the second control.
And S303, responding to the trigger operation of the first control on the operation interface, and generating at least one alignment point of the face model.
In a possible implementation manner, the first control is a clickable control, and the triggering operation for the first control may be, for example, a click operation; or the trigger operation for the first control may also be a long-press operation, and the like, and the implementation manner of the trigger operation is not limited in this embodiment as long as the trigger operation can trigger the function corresponding to the first control.
In one possible implementation, the logic code that generates the alignment points of the model may be triggered in response to a click operation on the first control; after the logic code is automatically executed, at least one alignment point of the face model is generated.
One possible implementation of the face model is shown in fig. 5. As shown in fig. 5, the target plug-in 501 includes a first control 502, where the first control 502 is used to generate at least one alignment point of the face model.
For example, after the logic code that generates the alignment points of the model is executed, at least one alignment point of the face model is generated, such as 503 in fig. 5; each circular point on the face model in fig. 5 is an alignment point. In this embodiment, an alignment point may be used to indicate the position of a generated bone on the face model.
In an actual implementation, the position of each alignment point, the number of alignment points, and the like can be set according to actual requirements; this embodiment does not specifically limit how the alignment points are set.
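One way such a plug-in could generate the alignment points is from a fixed template of facial landmark positions, instantiating one point per entry when the first control is triggered. The sketch below assumes this template approach; the landmark names and coordinates are invented for illustration and are not taken from the patent.

```python
# Hypothetical template: each entry marks where a bone will later be placed.
FACE_TEMPLATE = {
    "brow_L": (-0.8, 1.2, 0.1),
    "brow_R": (0.8, 1.2, 0.1),
    "eyelid_L": (-0.8, 0.9, 0.2),
    "eyelid_R": (0.8, 0.9, 0.2),
    "mouth_corner_L": (-0.6, -0.8, 0.3),
    "mouth_corner_R": (0.6, -0.8, 0.3),
}

def generate_alignment_points(template=FACE_TEMPLATE):
    """Instantiate one alignment point per template entry."""
    return [{"name": n, "position": p} for n, p in template.items()]

points = generate_alignment_points()
print(len(points))  # → 6
```

A template of this kind is what makes the generated points identical across face models, which is also why a later adjustment step is provided for individual models.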
And S304, responding to the triggering operation of a second control on the operation interface, and generating the skeleton of the face model and the binding information between the skeleton and the face model according to the position of the at least one alignment point.
In a possible implementation, the binding information between the skeleton and the face model may include the controllers corresponding to the bones, the attribute associations between the bones and the controllers, and the expressions, where each bone corresponds to one controller; in another possible implementation, a plurality of bones that have an association relationship may share one controller.
In this embodiment, each bone corresponds to a first performance parameter used to indicate the performance of the bone, and each controller corresponds to a second performance parameter, so the controller can control the first performance parameter of the bone according to the second performance parameter. For example, the controller may be the component in 3ds Max used for handling animation tasks: it can control performance parameters such as the position, rotation, and scaling of a bone so as to drive the bone to perform the corresponding action, and it can also store the parameter values of animation keyframes.
The attribute association in this embodiment is used to associate the first performance parameter with the second performance parameter. For example, the attribute association may be realized with the Wire Parameters tool of 3ds Max: through this tool, the performance parameters of any controller and any bone can be linked, so that adjusting the performance parameters of a controller automatically changes the performance parameters of the corresponding bone.
The expression in this embodiment is likewise used to associate the first performance parameter with the second performance parameter: an expression mathematically connects the related attributes of different objects. For example, a related parameter of a controller may be set by a mathematical formula; as long as that parameter changes, the parameters of the associated bones change automatically, so the performance of the corresponding bones changes accordingly.
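The controller, attribute association, and expression described above can be sketched as follows: the expression is a formula that recomputes a bone parameter whenever the controller parameter changes. This is an illustrative Python model, not the 3ds Max API; the class names, the `height` parameter, and the formula are hypothetical.

```python
class Bone:
    def __init__(self, name):
        self.name = name
        self.height = 0.0  # "first performance parameter" of the bone

class Controller:
    def __init__(self, bone, expression):
        self.bone = bone
        self.expression = expression  # formula mapping controller value to bone value
        self._value = 0.0             # "second performance parameter"

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, v):
        # attribute association: changing the controller updates the bound bone
        self._value = v
        self.bone.height = self.expression(v)

mouth_corner = Bone("mouth_corner_L")
# expression: the bone rises by half the controller value (arbitrary formula)
smile = Controller(mouth_corner, expression=lambda v: 0.5 * v)
smile.value = 2.0
print(mouth_corner.height)  # → 1.0
```

The point of the design is that an animator (or the plug-in's logic code) only ever touches controller values; the wiring and the formula take care of moving the bones.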
In this embodiment, the second control is a clickable control; the triggering operation for the second control may be, for example, a click operation, or another operation that triggers the function corresponding to the second control.
In one possible implementation, the logic code for generating the skeleton, controllers, attribute associations, and expressions of the model may be triggered in response to a triggering operation for the second control. For example, referring to fig. 6, the target plug-in 601 includes a second control 602, where the second control 602 is used to generate the skeleton, controllers, attribute associations, and expressions of the face model.
After the logic code is automatically executed, the skeleton, controllers, attribute associations, and expressions of the face model are generated; the final result is shown on the face model 603 in fig. 6.
In the present embodiment, the alignment points indicate the positions of the bones on the face model. In one possible implementation, bones of the face model within a preset number range may be generated according to the position of the at least one alignment point and a preset picture precision, as shown in fig. 6; the positions of the generated bones on the face model coincide with the positions of the at least one alignment point in fig. 5.
If the preset picture precision is higher, a larger number of bones can be set for the face model; if it is lower, a smaller number of bones can be set. The preset picture precision can be selected according to actual requirements.
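The relation between the preset picture precision and the number of generated bones can be sketched as a simple interpolation clamped to a preset number range. The range bounds and the linear mapping below are illustrative assumptions, not values from the patent.

```python
# Hypothetical preset number range for the face-model bones.
MIN_BONES, MAX_BONES = 20, 80

def bone_count(precision):
    """precision in [0.0, 1.0]; higher precision yields more bones,
    clamped to the preset number range."""
    count = round(MIN_BONES + precision * (MAX_BONES - MIN_BONES))
    return max(MIN_BONES, min(MAX_BONES, count))

print(bone_count(0.0))  # → 20
print(bone_count(1.0))  # → 80
print(bone_count(0.5))  # → 50
```

Any monotonic mapping would do; the clamp simply guarantees the count stays inside the preset range even for out-of-range precision values.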
Then, the controllers, attribute associations, and expressions for the bones of the face model can be generated according to the corresponding logic code in the target plug-in.
In an actual implementation, the position and number of the bones, as well as the specific realization of each bone's controller, attribute association, and expression, can be set in the logic code according to actual requirements; this embodiment does not specifically limit these settings.
The controllers, attribute associations, and expressions of the bones are used to process the related logical operations when generating the expression of the virtual object. Taking the mouth in a facial expression as an example, a controller, an attribute association, and an expression may be set for the bones of the mouth: when the virtual object is required to make a smiling expression, the mouth-corner bones are controlled to move upward, and when the virtual object is required to make another expression, the mouth bones are controlled to move to the specified positions, so that the expression of the virtual object is controlled. The logic code that generates the controller, attribute association, expression, and the like of each bone in the target plug-in can be written according to actual requirements, which is not specifically limited in this embodiment.
And S305, controlling the expression of the face model according to the skeleton and the binding information of the face model.
After the skeleton and the binding information of the face model are generated, the generated bones can be added at the corresponding positions of the face model, and the movement of the bones is controlled according to the binding information between the skeleton and the face model, thereby controlling the expression of the face model.
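This final control step can be sketched as applying a named pose: setting several controller values at once, each of which the binding information would propagate to the corresponding bone. The pose and controller names below are hypothetical.

```python
def apply_pose(controllers, pose):
    """Set several controller values at once; in the real rig each
    assignment would be propagated to the bound bone by the binding
    information (attribute association and expression)."""
    for name, value in pose.items():
        controllers[name] = value
    return controllers

# neutral face: all controllers at rest
controllers = {"mouth_corner_L": 0.0, "mouth_corner_R": 0.0, "eyelid_L": 0.0}
# "smile" pose: both mouth-corner controllers raised
smile = {"mouth_corner_L": 1.0, "mouth_corner_R": 1.0}
apply_pose(controllers, smile)
print(controllers["mouth_corner_L"], controllers["eyelid_L"])  # → 1.0 0.0
```

Defining expressions as controller-value dictionaries keeps the expression library independent of any particular face model, which is the efficiency gain the patent claims.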
The method for processing model expressions provided by the embodiment of the present application includes: acquiring a face model to be processed; in response to a starting operation for a target plug-in, starting the target plug-in and displaying an operation interface for the target plug-in on the graphical user interface; generating at least one alignment point of the face model in response to a triggering operation for a first control on the operation interface; in response to a triggering operation for a second control on the operation interface, generating a skeleton of the face model and binding information between the skeleton and the face model according to the position of the at least one alignment point; and controlling the expression of the face model according to the skeleton of the face model and the binding information. Because the alignment points, the skeleton, and the binding information between the skeleton and the face model are all generated automatically by the target plug-in, the expression of the face model can be controlled without manual operation, which effectively improves the efficiency of processing the facial expressions of virtual characters.
On the basis of the foregoing embodiment, an implementation of generating corresponding data according to a received operation is described in detail below with reference to a specific embodiment. Fig. 7 is a flowchart of a method for processing a model expression according to another embodiment of the present application; fig. 8 is a schematic diagram of adding a skeleton according to an embodiment of the present application; and fig. 9 is a schematic diagram of an expression according to an embodiment of the present application.
The present embodiment is described below:
as shown in fig. 7, the method includes:
and S701, acquiring a face model to be processed.
S702, responding to the starting operation of the target plug-in, starting the target plug-in, and displaying an operation interface of the target plug-in on the graphical user interface.
And S703, responding to the trigger operation of the first control on the operation interface, and generating at least one alignment point of the face model.
The implementation manners of S701-S703 are similar to those of S301-S303, and are not described herein again.
And S704, in response to an adjustment operation for the at least one alignment point, adjusting the position of the at least one alignment point.
In the present application, the target plug-in processes each face model with the same set of logic code, so the alignment points generated for different face models are at the same positions. However, there are certain differences between face models, so the positions of some alignment points can be adjusted individually after the at least one alignment point is generated by the target plug-in.
In a possible implementation, the position of the at least one alignment point may be adjusted according to the wiring of the face model, where the wiring of the face model may be used to indicate the direction of the facial muscles of the face model. Adjusting the at least one alignment point according to the wiring can therefore effectively improve the vividness of the expressions produced by the face model.
In another possible implementation, in response to the adjustment operation for the at least one alignment point, the alignment point selected by the adjustment operation is determined, and the position of the selected alignment point is adjusted.
For example, if it is determined that some alignment points need to be adjusted, an adjustment operation may be performed on those alignment points. The adjustment operation may, for example, first select an alignment point to be adjusted and then move it, thereby adjusting the position of the alignment point selected by the adjustment operation.
In a possible implementation, only the left side or the right side of the face model needs to be adjusted, and the other side can be processed automatically by mirroring the adjustment. Individually adjusting the alignment points for different face models can further make the expression of the virtual object more vivid.
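The one-sided adjustment with automatic mirroring described above can be sketched as reflecting the edited point across the face's symmetry plane. The `_L`/`_R` naming convention and the choice of `x = 0` as the symmetry plane are assumptions for illustration; the patent does not specify either.

```python
# Sketch: adjust a left-side alignment point by hand and mirror the
# edit to its right-side twin across the YZ plane (assumed symmetry
# plane x = 0).

def mirror_adjust(points: dict, name_l: str, new_pos: tuple) -> None:
    """Move a left-side point and mirror the edit to its right twin."""
    points[name_l] = new_pos
    name_r = name_l.replace("_L", "_R")  # assumed naming convention
    if name_r in points:
        x, y, z = new_pos
        points[name_r] = (-x, y, z)  # reflect across the YZ plane


pts = {"brow_L": (0.3, 1.0, 0.1), "brow_R": (-0.3, 1.0, 0.1)}
mirror_adjust(pts, "brow_L", (0.35, 1.05, 0.1))
print(pts["brow_R"])  # prints (-0.35, 1.05, 0.1)
```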
S705, in response to a triggering operation for a second control on the operation interface, generating a skeleton of the face model and binding information between the skeleton and the face model according to the position of the at least one alignment point.
The implementation manner of S705 is the same as that of S304, and is not described herein again.
S706, adding the skeleton to the face model, and performing skinning processing on the face model to obtain a processed face model.
In this embodiment, after the skeleton of the face model is generated, the skeleton needs to be added to the face model so that it can drive the face model to produce the corresponding expression.
Fig. 8 shows a schematic diagram of the face model after its skeleton has been added; the operation interface illustrated in fig. 8 is an interface for skinning. Skinning may then be performed on the face model to obtain the processed face model. Skinning refers to matching each point on the face model to the skeleton so that the motion of the skeleton drives the motion of the face model, and it is an important step in processing character animation.
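As a rough illustration of the skinning relationship the passage describes (each mesh point matched to bones, with bone motion driving mesh motion), here is a minimal linear-blend sketch using bone translations only. Production tools work with full transform matrices and painted per-vertex weight maps; the weights and positions below are invented for the example.

```python
# Minimal linear-blend skinning sketch (translations only): a vertex
# is deformed by the weighted sum of its influencing bones' motions.

def skin_vertex(rest_pos, influences):
    """Deform one vertex from weighted bone translations.

    influences: list of (weight, bone_translation) pairs; weights
    should sum to 1 so the vertex is rigid at the bind pose.
    """
    x, y, z = rest_pos
    dx = sum(w * t[0] for w, t in influences)
    dy = sum(w * t[1] for w, t in influences)
    dz = sum(w * t[2] for w, t in influences)
    return (x + dx, y + dy, z + dz)


# A lip vertex influenced 70/30 by jaw and mouth-corner bones:
moved = skin_vertex((0.0, -0.2, 1.0),
                    [(0.7, (0.0, -0.1, 0.0)),   # jaw opens downward
                     (0.3, (0.0, 0.2, 0.0))])   # mouth corner lifts
print(moved)  # approximately (0.0, -0.21, 1.0)
```

With many bones and smoothly varying weights, this blending is what lets a relatively sparse facial skeleton produce continuous, vivid deformation of the face mesh.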
And S707, adjusting the skeleton of the face model according to the binding information to control the expression of the processed face model.
After the skeleton addition and the skinning processing are completed, the skeleton of the face model can be adjusted according to the controller, the attribute association, and the expression, so as to control the expression of the processed face model. Because this embodiment uses the target plug-in for automatic processing, a large number of skeletons can be set. The finally generated expression of the face model may, for example, be as shown in fig. 9; it can be seen from fig. 9 that when a large number of skeletons are set, the model of the virtual character is vivid, while the automatic processing of the target plug-in also guarantees the processing efficiency for the virtual character.
The method for processing model expressions provided by this embodiment of the application includes: acquiring a face model to be processed; in response to a starting operation for a target plug-in, starting the target plug-in and displaying an operation interface for the target plug-in on the graphical user interface; generating at least one alignment point of the face model in response to a triggering operation for a first control on the operation interface; adjusting the position of the at least one alignment point in response to an adjustment operation for the at least one alignment point; generating the skeleton of the face model and the binding information between the skeleton and the face model according to the position of the at least one alignment point in response to a triggering operation for a second control on the operation interface; adding the skeleton to the face model and performing skinning processing on the face model to obtain a processed face model; and adjusting the skeleton of the face model according to the binding information so as to control the expression of the processed face model. Because the generation of the alignment points, the skeleton, the controller, the attribute association, the expression, and other operations is written into a script language and made into the target plug-in, all the steps required for expression binding can be executed automatically in the background by operating the target plug-in, so that model expressions can be processed efficiently and quickly; at the same time, using the target plug-in for processing effectively ensures the accuracy of the processing.
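The summarized pipeline can be condensed into a toy driver like the following. Every function name and data shape here is an illustrative assumption; the patent fixes only the order of the steps (points, adjustment, skeleton plus binding, skinning, expression control), not how any step is implemented.

```python
# Toy end-to-end pipeline mirroring the summarized steps. Data shapes
# (dicts of named 3-tuples) and all identifiers are assumptions.

def generate_alignment_points(model):
    # First control: one point per feature region (positions are
    # placeholders standing in for the plug-in's fixed template).
    return {f: (i * 0.1, 0.0, 0.0)
            for i, f in enumerate(model["features"])}


def generate_skeleton(points):
    # Second control: one bone per alignment point, plus binding
    # information naming a controller for each bone.
    bones = {f"bone_{name}": pos for name, pos in points.items()}
    binding = {b: {"controller": f"ctrl_{b}"} for b in bones}
    return bones, binding


def control_expression(bones, binding, targets):
    # Final step: move each bound bone to the position its controller
    # requests, which in turn would deform the skinned face model.
    for bone, pos in targets.items():
        if bone in binding:
            bones[bone] = pos
    return bones


model = {"features": ["mouth", "brow_L", "brow_R"]}
points = generate_alignment_points(model)
bones, binding = generate_skeleton(points)
bones = control_expression(bones, binding, {"bone_mouth": (0.0, 0.3, 0.0)})
print(bones["bone_mouth"])  # prints (0.0, 0.3, 0.0)
```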
Fig. 10 is a schematic structural diagram of a device for processing model expressions according to an embodiment of the present application. As shown in fig. 10, the apparatus 100 includes: an acquisition module 1001, a control module 1002, and a generation module 1003.
An obtaining module 1001, configured to obtain a face model to be processed;
the control module 1002 is configured to respond to a starting operation for a target plug-in, start the target plug-in, and display an operation interface for the target plug-in on the graphical user interface;
a generating module 1003, configured to generate at least one alignment point of the face model in response to a triggering operation for a first control on the operation interface;
the generating module 1003 is further configured to generate, in response to a triggering operation for a second control on the operation interface, a skeleton of the face model and binding information between the skeleton and the face model according to the position of the at least one alignment point;
the control module 1002 is further configured to control an expression of the face model according to the skeleton of the face model and the binding information.
In one possible design, the control module 1002 is further configured to:
after generating at least one alignment point of the face model, adjust the position of the at least one alignment point in response to an adjustment operation for the at least one alignment point.
In one possible design, the control module 1002 is specifically configured to:
adjust the position of the at least one alignment point according to the wiring of the face model.
In one possible design, the control module 1002 is specifically configured to:
in response to an adjustment operation for the at least one alignment point, determine the alignment point selected by the adjustment operation;
and adjust the position of the alignment point selected by the adjustment operation.
In one possible design, the binding information between the skeleton and the face model includes:
a controller corresponding to each skeleton, and an attribute association and an expression between the skeleton and the controller, where a first performance parameter of each skeleton is used to indicate the behavior of that skeleton, a second performance parameter of each controller corresponds to the first performance parameter, the controller is used to control the first performance parameter according to the second performance parameter, and the attribute association and the expression are used to associate the first performance parameter with the second performance parameter.
In one possible design, the control module 1002 is specifically configured to:
adding the skeleton to the face model, and performing skinning treatment on the face model to obtain a treated face model;
and adjusting the skeleton of the face model according to the binding information so as to control the expression of the processed face model.
In one possible design, the generating module 1003 is specifically configured to:
and generate skeletons of the face model within a preset number range according to the position of the at least one alignment point and a preset picture precision.
Fig. 11 is a schematic diagram of a hardware structure of a device for processing model expressions provided in an embodiment of the present application, and as shown in fig. 11, the device 110 for processing model expressions in this embodiment includes: a processor 1101 and a memory 1102; wherein
A memory 1102 for storing computer-executable instructions;
a processor 1101 for executing computer-executable instructions stored in the memory to implement the steps performed by the method of processing model expressions in the above-described embodiments. Reference may be made in particular to the description relating to the method embodiments described above.
Alternatively, the memory 1102 may be separate or integrated with the processor 1101.
When the memory 1102 is separately provided, the device for model expression processing further includes a bus 1103 for connecting the memory 1102 and the processor 1101.
An embodiment of the present application further provides a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when a processor executes the computer-executable instructions, the method for processing model expressions, which is executed by the apparatus for processing model expressions, is implemented.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules is only one logical division, and other divisions may be realized in practice, for example, a plurality of modules may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The integrated module implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods according to the embodiments of the present application.
It should be understood that the Processor may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor, or in a combination of the hardware and software modules within the processor.
The memory may comprise a high-speed RAM memory, and may further comprise a non-volatile storage NVM, such as at least one disk memory, and may also be a usb disk, a removable hard disk, a read-only memory, a magnetic or optical disk, etc.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method for processing model expressions is characterized in that a graphical user interface is provided through a terminal device, and the method comprises the following steps:
acquiring a face model to be processed;
in response to a starting operation for a target plug-in, starting the target plug-in, and displaying an operation interface for the target plug-in on the graphical user interface;
generating at least one alignment point of the face model in response to a triggering operation for a first control on the operation interface;
generating a skeleton of the face model and binding information between the skeleton and the face model according to the position of the at least one alignment point in response to a triggering operation for a second control on the operation interface;
and controlling the expression of the face model according to the skeleton of the face model and the binding information.
2. The method of claim 1, wherein after generating at least one alignment point for the face model, the method further comprises:
and adjusting the position of the at least one alignment point in response to an adjustment operation for the at least one alignment point.
3. The method of claim 2, wherein the adjusting the position of the at least one alignment point comprises:
adjusting the position of the at least one alignment point according to the wiring of the face model.
4. The method of claim 2, wherein the adjusting the position of the at least one alignment point in response to the adjustment operation for the at least one alignment point comprises:
in response to the adjustment operation for the at least one alignment point, determining the alignment point selected by the adjustment operation;
and adjusting the position of the alignment point selected by the adjustment operation.
5. The method of claim 1, wherein the binding information between the skeleton and the face model comprises:
the controller corresponding to the skeleton, and the attribute association and the expression between the skeleton and the controller.
6. The method of claim 1, wherein the controlling the expression of the facial model according to the skeleton of the facial model and the binding information comprises:
adding the skeleton to the face model, and performing skinning treatment on the face model to obtain a treated face model;
and adjusting the skeleton of the face model according to the binding information so as to control the expression of the processed face model.
7. The method of claim 1, wherein the generating a skeleton of the face model according to the position of the at least one alignment point comprises:
generating skeletons of the face model within a preset number range according to the position of the at least one alignment point and a preset picture precision.
8. An apparatus for processing model expressions, wherein a graphical user interface is provided through a terminal device, the apparatus comprising:
the acquisition module is used for acquiring a face model to be processed;
the control module is used for responding to the starting operation of the target plug-in, starting the target plug-in and displaying an operation interface aiming at the target plug-in on the graphical user interface;
the generating module is used for generating at least one alignment point of the face model in response to a triggering operation for a first control on the operation interface;
the generating module is further used for generating, in response to a triggering operation for a second control on the operation interface, a skeleton of the face model and binding information between the skeleton and the face model according to the position of the at least one alignment point;
the control module is further used for controlling the expression of the face model according to the skeleton of the face model and the binding information.
9. An apparatus for model expression processing, comprising:
a memory for storing a program;
a processor for executing the program stored in the memory, the processor being configured to perform the method of any of claims 1 to 7 when the program is executed.
10. A computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 7.
CN202010690729.1A 2020-07-17 2020-07-17 Method and device for processing model expressions Pending CN111798550A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010690729.1A CN111798550A (en) 2020-07-17 2020-07-17 Method and device for processing model expressions

Publications (1)

Publication Number Publication Date
CN111798550A 2020-10-20

Family

ID=72807528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010690729.1A Pending CN111798550A (en) 2020-07-17 2020-07-17 Method and device for processing model expressions

Country Status (1)

Country Link
CN (1) CN111798550A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112807688A (en) * 2021-02-08 2021-05-18 网易(杭州)网络有限公司 Method and device for setting expression in game, processor and electronic device
CN113658307A (en) * 2021-08-23 2021-11-16 北京百度网讯科技有限公司 Image processing method and device
WO2024027285A1 (en) * 2022-08-04 2024-02-08 腾讯科技(深圳)有限公司 Facial expression processing method and apparatus, computer device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109727302A (en) * 2018-12-28 2019-05-07 网易(杭州)网络有限公司 Bone creation method, device, electronic equipment and storage medium
CN110675475A (en) * 2019-08-19 2020-01-10 腾讯科技(深圳)有限公司 Face model generation method, device, equipment and storage medium
CN110689604A (en) * 2019-05-10 2020-01-14 腾讯科技(深圳)有限公司 Personalized face model display method, device, equipment and storage medium
CN111161427A (en) * 2019-12-04 2020-05-15 北京代码乾坤科技有限公司 Self-adaptive adjustment method and device of virtual skeleton model and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chen Shihong: "3ds max 命令参考大全" [Complete 3ds Max Command Reference], Ordnance Industry Press, 31 December 2006, pages 668-669 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination