CN112843704A - Animation model processing method, device, equipment and storage medium

Info

Publication number: CN112843704A (application CN202110270424.XA); granted as CN112843704B
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: model, detail, group, animation, patches
Inventors: 刘纯一 (Liu Chunyi), 杨韧 (Yang Ren)
Assignee: Tencent Technology (Shenzhen) Co., Ltd.
Application filed by Tencent Technology (Shenzhen) Co., Ltd.; priority to CN202110270424.XA
Publication of CN112843704A; application granted; publication of CN112843704B
Legal status: Active (granted)

Classifications

    • A63F 13/52: Video games; controlling the output signals based on the game progress, involving aspects of the displayed game scene
    • A63F 13/63: Video games; generating or modifying game content before or while executing the game program, by the player, e.g. using a level editor
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 15/02: 3D [Three Dimensional] image rendering; non-photorealistic rendering


Abstract

The application provides an animation model processing method and apparatus, an electronic device, and a computer-readable storage medium. The method comprises: obtaining a multi-detail model, wherein the multi-detail model comprises a plurality of model patches; grouping the plurality of model patches in the multi-detail model to obtain the group of each model patch; storing the animation vector corresponding to each group into the vertex colors of that group; and performing animation simulation based on the vertex colors of each group and the material parameters corresponding to the multi-detail model to generate an animated image of the multi-detail model. The method and apparatus save computing resources of graphics processing hardware.

Description

Animation model processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to computer graphics and image technologies, and in particular, to an animation model processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of computer technology, computer animation is widely used in game production, animation production, and other fields. For example, current animation production relies mainly on three-dimensional rendering and authoring software (e.g., 3D Studio Max): after a virtual model is imported into 3D Studio Max, a model skeleton can be obtained, and the virtual model is then skinned with that skeleton to obtain the virtual character corresponding to the model and to produce the character's animation.
However, for a multi-detail model, the skeleton-skinning technique requires binding a bone to, and skinning, each model patch in the multi-detail model to achieve distinct animation effects, which consumes a large amount of resources (including hardware resources and computing resources).
Disclosure of Invention
The embodiments of the application provide an animation model processing method and apparatus, an electronic device, and a computer-readable storage medium, which can save computing resources of graphics processing hardware.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an animation model processing method, which comprises the following steps:
obtaining a multi-detail model, wherein the multi-detail model comprises a plurality of model patches;
grouping a plurality of model patches in the multi-detail model to obtain a group of each model patch;
storing the animation vector corresponding to each group into the vertex color of each group;
and performing animation simulation based on the vertex color of each group and the material parameters corresponding to the multi-detail model to generate an animation image of the multi-detail model.
An embodiment of the present application provides an animation model processing apparatus, including:
an obtaining module, configured to obtain a multi-detail model, wherein the multi-detail model comprises a plurality of model patches;
the grouping module is used for grouping a plurality of model patches in the multi-detail model to obtain the group of each model patch;
the storage module is used for storing the animation vector corresponding to each group into the vertex color of each group;
and the simulation module is used for carrying out animation simulation based on the vertex color of each group and the material parameters corresponding to the multi-detail model to generate an animation image of the multi-detail model.
In the above technical solution, before the obtaining the multi-detail model, the apparatus further includes:
the display module is used for displaying the candidate models in the editing interface;
in response to a first selection operation for the candidate model, the selected portion of the candidate model is treated as the multi-detail model.
In the above technical solution, the candidate model includes a plurality of types of model patches; the display module is further used for responding to a first selection operation of at least one model patch in the candidate model, and determining a plurality of model patches in the candidate model, wherein the model patches are of the same type as the at least one model patch;
constructing the multi-detail model based on a plurality of model patches of the same type as the at least one model patch.
In the foregoing technical solution, the grouping module is further configured to, in response to a grouping operation for a plurality of model patches in the multi-detail model, divide the plurality of model patches into different groups according to the grouping operation.
In the above technical solution, the grouping operation includes a first triggering operation and a second selecting operation; the grouping module is further used for responding to the first trigger operation aiming at any model patch, and displaying a plurality of candidate groups corresponding to any model patch;
in response to the second selection operation for the plurality of candidate groups, treating the selected candidate group as a group of any of the model patches.
In the above technical solution, the grouping module is further configured to display grouping trigger entries for a plurality of model patches in the multi-detail model;
and in response to a triggering operation for the grouping trigger entry, clustering the plurality of model patches based on the similarity among the plurality of model patches to obtain the group of each model patch.
In the above technical solution, the grouping module is further configured to construct a connection relationship diagram of a plurality of model patches based on a connection relationship between the plurality of model patches in the multi-detail model;
and clustering the plurality of model patches based on the connection relation graphs of the plurality of model patches to obtain the group of each model patch.
In the above technical solution, the obtaining module is further configured to obtain a pivot point corresponding to each group; the storage module is further used for normalizing the animation vectors of the pivot points corresponding to each group to obtain the motion vectors of the groups;
and storing the group motion vectors into the vertex colors of the corresponding group.
In the above technical solution, the obtaining module is further configured to take the selected candidate pivot point as the pivot point of each group in response to a selection operation for the candidate pivot point in each group.
In the above technical solution, the obtaining module is further configured to display an input interface for each of the groups of pivot points;
and responding to the input operation of the input interface, and taking the input vertex data as the pivot point corresponding to each group.
In the above technical solution, the simulation module is further configured to perform inverse normalization on the group motion vectors included in the vertex color of each group through a vertex shader, so as to obtain inverse normalized motion vectors;
carrying out coordinate system transformation on the inverse normalized motion vector to obtain a transformed motion vector;
and applying the material parameters corresponding to the multi-detail model in the transformed motion vector to obtain an animation image of the multi-detail model.
In the above technical solution, the simulation module is further configured to determine an Alpha channel in the vertex color of each group;
phase shifting the starting positions of the groups based on Alpha channels in the vertex colors of the groups to distinguish the starting motion state of each group.
In the above technical solution, the apparatus further includes:
the interaction module is used for responding to the interaction operation aiming at the multi-detail model and determining the interaction speed of the multi-detail model;
and carrying out animation simulation based on the interaction speed of the multi-detail model and the material parameters corresponding to the multi-detail model, and generating an animation image of the multi-detail model in the interaction.
In the above technical solution, the simulation module is further configured to normalize the interaction speed of the multi-detail model, and use the normalized interaction speed as an Alpha parameter;
determining a motion parameter value of the multi-detail model based on a material parameter and the Alpha parameter corresponding to the multi-detail model;
and applying the motion parameter values in the multi-detail model to obtain an animation image of the multi-detail model in interaction.
In the above technical solution, the simulation module is further configured to determine a value range of a material parameter corresponding to the multi-detail model, where the value range includes a maximum value and a minimum value of the material parameter;
determining a difference between the maximum value and the minimum value;
determining a product of the difference and the Alpha parameter;
and taking the sum of the product and the minimum value as the motion parameter value of the multi-detail model.
In the above technical solution, the obtaining module is further configured to obtain a table including the material parameter through a logic interface;
and reading the table to obtain the material parameters corresponding to the multi-detail model.
An embodiment of the present application provides an electronic device for processing an animation model, where the electronic device includes:
a memory for storing executable instructions;
and the processor is used for realizing the animation model processing method provided by the embodiment of the application when the executable instructions stored in the memory are executed.
The embodiment of the application provides a computer-readable storage medium, which stores executable instructions for causing a processor to execute the method for processing an animation model provided by the embodiment of the application.
The embodiment of the application has the following beneficial effects:
By grouping the model patches of the multi-detail model and performing animation simulation with the per-group vertex colors in which the animation vectors are stored, the animation effect of each model patch in the multi-detail model can be controlled efficiently while significantly saving the computing resources of graphics processing hardware.
Drawings
FIG. 1 is a schematic diagram of a multi-feather model fitted to a generic skeletal frame provided by the related art;
FIGS. 2A-2B are schematic diagrams illustrating an application mode of an animation model processing method according to an embodiment of the application;
FIG. 3 is a schematic structural diagram of an electronic device for processing an animation model according to an embodiment of the present application;
FIGS. 4A-4B are flow diagrams of animation model processing methods provided by embodiments of the present application;
FIGS. 5A-5B are schematic diagrams of editing interfaces provided by embodiments of the present application;
FIG. 6 is an information interface diagram of the Houdini tool provided by an embodiment of the present application;
FIG. 7 is a schematic view of a parametric material panel according to an embodiment of the present disclosure;
FIG. 8 is a flowchart illustrating an animation model processing method according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a grouping of patches provided by an embodiment of the present application;
FIGS. 10-11 are logic diagrams of Houdini storing vertex colors provided by embodiments of the present application;
FIGS. 12-13 are schematic diagrams of vertex shader logic according to an embodiment of the present application;
FIG. 14 is a logic diagram of a logical frame update provided by an embodiment of the present application;
FIG. 15 is a logic diagram of a read table provided by an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, the terms "first", "second", and the like are used only to distinguish similar objects and do not denote a particular order or importance; where permissible, the specific order or sequence may be interchanged, so that the embodiments of the present application described herein can be practiced in orders other than those illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Client: an application program running on a terminal device to provide various services, such as a video playing client or a game client.
2) "In response to": denotes the condition or state on which a performed operation depends; when the condition or state is satisfied, the one or more operations performed may occur in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are executed.
3) The virtual scene is a virtual scene displayed (or provided) when an application program runs on the terminal device. The virtual scene may be a simulation environment of a real world, a semi-simulation semi-fictional virtual environment, or a pure fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application. For example, the virtual scene may include sky, land, ocean, etc., the land may include environmental elements such as desert, city, etc., and the user may control the virtual character to move in the virtual scene.
4) Virtual object: the image of any person or thing in the virtual scene that can be interacted with, or a movable or non-movable object in the virtual scene, such as a character, an animal, a plant, an oil drum, a wall, or a stone displayed in the virtual scene. A movable object may be a virtual character, a virtual animal, an animated character, and the like. The virtual object may be an avatar that represents the user in the virtual scene. The virtual scene may include multiple virtual objects, each having its own shape and volume in the virtual scene and occupying a portion of the space in the virtual scene.
For example, the virtual object may be a user Character controlled by an operation on the client, an Artificial Intelligence (AI) set in a virtual scene match by training, or a Non-user Character (NPC) set in a virtual scene interaction. For example, the virtual object may be a virtual character that is confrontationally interacted with in a virtual scene. For example, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients participating in the interaction.
Taking a shooting game as an example, the user may control a virtual object to free-fall, glide, open a parachute to descend, run, jump, climb, or bend over on land, or control a virtual object to swim, float, or dive in the sea; the user may also control a virtual object to move in the virtual scene by riding a virtual vehicle, e.g., a virtual car, a virtual aircraft, or a virtual yacht, the above scenes being merely examples to which the present application is not limited. The user can also control the virtual object to interact antagonistically with other virtual objects through virtual props, e.g., throwing-type virtual props such as grenades, flash grenades, and sticky grenades, or shooting-type virtual props such as machine guns, pistols, and rifles; the type of virtual prop is not specifically limited in the present application.
5) Multi-detail model: a virtual model comprising multiple model patches, where a model patch is the patch corresponding to a detail of the virtual model. The multi-detail model may be a virtual object, a virtual vehicle, a virtual prop, or the like, e.g., a tree with many leaves (each model patch is a leaf), a bird with many feathers (each model patch is a feather), or a firearm carrying many coins (each model patch is a coin).
6) Skeletal animation (Skeleton Animation): divides a three-dimensional model into two parts: a skin (Skin) used for drawing the virtual character, and a skeleton (Skeleton) used for controlling the character's motion. Each virtual model has a basic skeleton comprising bones and joints; a bone corresponds to a coordinate space, and the bone hierarchy is a nesting of coordinate spaces. A joint merely describes the position of a bone, i.e., the position of the origin of the bone's own coordinate space within its parent space, and rotation around a joint means rotation of the bone's coordinate space itself (including all of its subspaces). Skinning refers to attaching (binding) the vertices of a mesh (Mesh) to bones, where each vertex can be controlled by multiple bones; vertices at the joints are then pulled by the parent and child bones simultaneously, which changes their positions and eliminates cracks.
7) Blueprint: a special type of resource in the Unreal Engine that provides an intuitive, node-based interface for creating new types of Actors and script-level events; it gives level designers and game developers a tool to quickly create and iterate on gameplay within the Unreal editor without writing a single line of code.
8) Skeletal animation comprises two kinds of data: bones (Bone) and a skinned mesh (Skinned Mesh). The interconnected bones form the model skeleton, and animation is generated by changing the position and orientation of the bones to drive the model's motion.
9) Shader: an editable program used for image rendering that can replace part of the fixed rendering pipeline; the vertex shader mainly computes the geometric relationships of the vertices and the like.
10) Vertex Color (Vertex Color): color information saved on the model vertex data.
In the related art, a 3D model motion scheme is implemented through bone-skinning animation. Its basic principle is as follows: under the control of the bones, the vertices of the skinned mesh are computed dynamically through vertex blending, and the bones, driven by animation keyframe data, move relative to their parent bones. A bone-skinning animation includes bone hierarchy data, 3D mesh (Mesh) data, skinning data, and animation keyframe information.
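For reference, the per-vertex cost of skinning can be made explicit with the standard linear-blend-skinning formula (a textbook formulation added here for clarity, not quoted from the patent):

$$v' = \sum_{i=1}^{n} w_i \, M_i \, B_i^{-1} \, v, \qquad \sum_{i=1}^{n} w_i = 1$$

where $v$ is the bind-pose vertex position, $B_i^{-1}$ maps it into the bind space of bone $i$, $M_i$ is that bone's animated world transform (itself the product of nested parent transforms), and $w_i$ are the skin weights. Every additional influencing bone adds a matrix transform per vertex per frame, which is exactly the CPU overhead the scheme below avoids.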
The applicant found the following problems in the course of implementing the embodiments of the present application: first, the skinning operation itself requires a lot of manpower to author and optimize; second, for each child-level bone motion, multiple coordinate-system transformations are needed to compute the final coordinates in the animation. The more bones a model has, the more complex the nested parent-child relationships, and hence the greater the computational overhead on the Central Processing Unit (CPU).
As shown in fig. 1, taking a multi-feather fashion model as an example: to achieve distinct feather movement of the fashion model in a game, each feather must be bound to a bone and skinned, which causes a large amount of CPU consumption and puts heavy pressure on many-players-on-screen situations and on low-end devices. Under the limit on the number of model bones (the game project limits a single model to within 70 bones), lengthy tuning is needed to balance performance and effect. If a generic character skeleton frame is used, independent movement of the detailed feather model cannot be achieved; for example, in fig. 1, bone 101 is a bone of the generic character skeleton frame and feather 102 is one feather of the fashion model.
In order to solve the above problem, embodiments of the present application provide an animation model processing method and apparatus, an electronic device, and a computer-readable storage medium, which can save computing resources of graphics processing hardware. An exemplary application of the electronic device provided in the embodiments of the present application is described below; the electronic device may be implemented as various types of user terminals such as a notebook computer, a tablet computer, a desktop computer, a set-top box, or a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, or a portable game device), and may also be implemented as a server. In the following, an exemplary application is described for the case where the device is implemented as a terminal.
To facilitate understanding of the animation model processing method provided in the embodiments of the present application, an exemplary implementation scenario is first described; the virtual scene may be output entirely by the terminal, or through cooperation of a terminal and a server.
In some embodiments, the virtual scene may be a picture presented in a military exercise simulation; a user may simulate strategies and tactics through virtual objects belonging to different teams in the virtual scene, which is of great guiding significance for the command of military operations.
In some embodiments, the virtual scene may be an environment in which game characters interact, e.g., in which game characters battle; by controlling the actions of the virtual objects, both sides can interact in the virtual scene, allowing the user to relieve everyday stress during the game.
In an implementation scenario, referring to fig. 2A, fig. 2A is an application mode schematic diagram of the animation model processing method provided in the embodiment of the present application, and is applicable to some application modes that can complete the calculation of related data of the virtual scenario 100 completely depending on the computing capability of the terminal 400, for example, a single-computer/offline mode game, and the terminal 400 completes the output of the virtual scenario through a smart phone, a tablet computer, a virtual reality/augmented reality device, and the like.
To form visual perception of the virtual scene 100, the terminal 400 computes the required display data through graphics computing hardware, completes loading, parsing, and rendering of the display data, and outputs, on graphics output hardware, video frames capable of forming visual perception of the virtual scene, e.g., two-dimensional video frames displayed on the screen of a smartphone, or video frames projected onto the lenses of augmented reality/virtual reality glasses to realize a three-dimensional display effect; in addition, to enrich the perception effect, the device may also form one or more of auditory perception, tactile perception, motion perception, and taste perception by means of different hardware.
As an example, the terminal 400 runs a client 410 (e.g. a standalone version of a game application), and outputs a virtual scene 100 including role play during the running process of the client 410, wherein the virtual scene is an environment for interaction of game characters, such as a plain, a street, a valley, and the like for fighting the game characters; the multi-detail model 110 is included in the virtual scene, and the multi-detail model 110 may be a game garment controlled by a user (or player), that is, the multi-detail model 110 is controlled by a real user, and will operate in the virtual scene in response to the real user's operation of buttons (including a rocker button 130, an attack button, a defense button, etc.) or gestures, for example, when the real user moves the rocker button to the left, the multi-detail model 110 will move to the left in the virtual scene.
In another implementation scenario, referring to fig. 2B, fig. 2B is a schematic diagram of an application mode of the animation model processing method provided in the embodiment of the present application, applied to a terminal 400 and a server 200, and adapted to complete virtual scene calculation depending on the calculation capability of the server 200 and output an application mode of a virtual scene at the terminal 400.
Taking the formation of visual perception of the virtual scene 100 as an example, the server 200 computes the display data related to the virtual scene and sends it to the terminal 400; the terminal 400 relies on graphics computing hardware to load, parse, and render the computed display data, and on graphics output hardware to output the virtual scene and form visual perception, e.g., two-dimensional video frames presented on the screen of a smartphone, or video frames projected onto the lenses of augmented reality/virtual reality glasses to realize a three-dimensional display effect. For other forms of perception of the virtual scene, it is understood that corresponding hardware outputs of the terminal may be used, e.g., a speaker output for auditory perception, a vibrator output for tactile perception, and so on.
As an example, the terminal 400 runs a client 410 (e.g. a web-based game application), and by connecting the game server (i.e. the server 200) to perform game interaction with other users, the terminal 400 outputs a virtual scene 100 of the client 410, including the multi-detail model 110, the multi-detail model 110 may be a game garment controlled by the user, i.e. the multi-detail model 110 is controlled by a real user, and will operate in the virtual scene in response to the real user's operation on buttons (including a rocker button 130, an attack button, a defense button, etc.) or gestures, for example, when the real user moves the rocker to the left, the multi-detail model 110 will move to the left in the virtual scene.
In some embodiments, the terminal 400 may implement the animation model processing method provided by the embodiments of the present application by running a computer program, for example, the computer program may be a native program or a software module in an operating system; may be a local (Native) Application (APP), i.e. a program that needs to be installed in an operating system to run, such as a game APP (i.e. the above-mentioned client 410); or may be an applet, i.e. a program that can be run only by downloading it to the browser environment; but also a game applet that can be embedded in any APP. In general, the computer programs described above may be any form of application, module or plug-in.
The embodiments of the present application may be implemented by means of cloud technology (Cloud Technology), which refers to a hosting technology that unifies hardware, software, network, and other resources within a wide-area or local-area network to implement computation, storage, processing, and sharing of data.
Cloud technology is a general term for the network, information, integration, management-platform, application, and other technologies based on the cloud-computing business model; it can form a resource pool to be used on demand, flexibly and conveniently. Cloud-computing technology will become an important support, since the background services of a technical network system require a large amount of computing and storage resources.
As an example, the server 200 may be an independent physical server, may be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present application is not limited thereto.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device for processing an animation model according to an embodiment of the present application, described by taking the electronic device as a terminal as an example. The electronic device shown in fig. 3 includes: at least one processor 410, memory 450, at least one network interface 420, and a user interface 430. The various components in electronic device 400 are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable communications among the components; in addition to a data bus, the bus system 440 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as bus system 440 in fig. 3.
The Processor 410 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable the presentation of media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 450, for example, comprises one or more storage devices physically located remote from processor 410.
The memory 450 includes either volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 450 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 451, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
a network communication module 452 for communicating to other computing devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
a presentation module 453 for enabling presentation of information (e.g., user interfaces for operating peripherals and displaying content and information) via one or more output devices 431 (e.g., display screens, speakers, etc.) associated with user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the animation model processing apparatus provided in the embodiments of the present application may be implemented in software, and fig. 3 illustrates an animation model processing apparatus 455 stored in a memory 450, which may be software in the form of programs and plug-ins, and includes the following software modules: an acquisition module 4551, a grouping module 4552, a storage module 4553, a simulation module 4554, a display module 4555, and an interaction module 4556, which are logical and thus may be arbitrarily combined or further divided according to functions implemented, and functions of the respective modules will be described hereinafter.
As described above, the animation model processing method provided in the embodiments of the present application may be implemented by various types of electronic devices, such as a terminal. Referring to fig. 4A, fig. 4A is a schematic flowchart of an animation model processing method according to an embodiment of the present application, and is described with reference to the steps shown in fig. 4A.
In the following steps, the multi-detail model is a virtual model including a multi-model patch, where the model patch represents a patch corresponding to details in the virtual model, and the multi-detail model may be a virtual model such as a virtual object, a virtual carrier, a virtual prop, and the like, for example, a garment including many feathers (the model patch is a feather), a firearm carrying many coins (the model patch is a coin), and the like.
In the following steps, before the multi-detail model is obtained, candidate models are displayed in an editing interface; in response to a first selection operation on a candidate model, the selected portion of the candidate model is treated as the multi-detail model. Animation simulation is thus performed only on the part that needs to be edited (the multi-detail model), saving graphics computing resources.
As shown in fig. 5A, a candidate model 501, which is a virtual character with a multi-feather garment, is displayed in the editing interface, and in response to a selection operation for the candidate model, a selected portion 502 of the candidate model is made to be a multi-detail model, that is, the multi-detail model is a multi-feather garment. The selection operation may be a specific mouse track, a specific touch gesture, a trigger for an edit entry of the shortcut toolbar, or the like.
Wherein the candidate model comprises a plurality of types of model patches; in response to a first selection operation on the candidate model, treating a selected portion of the candidate model as a multi-detail model, including: in response to a first selection operation for at least one model patch in the candidate model, determining a plurality of model patches in the candidate model of the same type as the at least one model patch; and constructing the multi-detail model based on a plurality of model patches of the same type as the at least one model patch.
As shown in fig. 5A, when the user manually selects at least one feather included in the candidate model 501, the selected feather is used as a standard model patch, and the other model patches in the candidate model 501 that match the standard patch are identified as feathers, so that all the feathers in the candidate model 501 are assembled into the multi-detail model.
In step 101, a multi-detail model is obtained.
The multi-detail model may include a plurality of model patches of the same type, and may further include model patches of different types.
In step 102, a plurality of model patches in the multi-detail model are grouped to obtain a group of each model patch.
The model patches can be grouped manually, e.g., after the multi-detail model is obtained, the group of each model patch is selected by manual triggering; alternatively, the model patches can be grouped automatically, e.g., after the multi-detail model is obtained, the patches are clustered automatically to obtain the group of each model patch.
In some embodiments, grouping a plurality of model patches in the multi-detail model to obtain a group for each model patch comprises: in response to a grouping operation for a plurality of model patches in the multi-detail model, the plurality of model patches are divided into different groups according to the grouping operation, namely, the grouping is performed by means of manual grouping.
Continuing the above example, the grouping operation includes a first triggering operation and a second selection operation; in response to a grouping operation for a plurality of model patches in the multi-detail model, partitioning the plurality of model patches into different groups according to the grouping operation includes: in response to the first triggering operation for any model patch, displaying a plurality of candidate groups corresponding to that model patch; and in response to the second selection operation for the plurality of candidate groups, treating the selected candidate group as the group of that model patch.
The triggering operation and the selection operation may be a click, a long press, and the like. The grouping operation may be a composite operation, such as a triggering operation plus a selection operation: in response to the triggering operation for any model patch, the candidate groups of the triggered model patch are determined, and in response to the selection operation for those candidate groups, the selected candidate group is taken as the group of that model patch. The grouping operation may also be a specific mouse track, a specific touch gesture, or the triggering of a selection entry in a shortcut toolbar.
In some embodiments, grouping a plurality of model patches in the multi-detail model to obtain a group for each model patch comprises: displaying a grouping trigger entry for the plurality of model patches in the multi-detail model; and in response to a triggering operation for the grouping trigger entry, clustering the plurality of model patches based on the similarity among them to obtain the group of each model patch.
The triggering operation may be a click, a long press, and the like; it triggers the electronic device to group the model patches by means of artificial intelligence, i.e., to determine the similarity among the model patches and cluster them, which simplifies tedious manual operations and improves grouping efficiency.
The grouping trigger entry may be presented in multiple ways, e.g., always displayed in a toolbar of the editing interface; as shown in fig. 5B, the grouping trigger entry may also be hidden by default and called out as entry 503 by a right-click.
In some embodiments, grouping a plurality of model patches in the multi-detail model to obtain a group for each model patch comprises: constructing a connection relation graph of a plurality of model patches based on connection relations among a plurality of model patches in the multi-detail model; and clustering the plurality of model patches based on the connection relation graphs of the plurality of model patches to obtain the group of each model patch.
For example, with the model patches of the multi-detail model as nodes, the connection relationships between patches as edges, and the edge weights determined by the distances between patches, a connection-relationship graph of the patches is constructed from these nodes and edges; the graph is then clustered with a graph-clustering method such as a community-discovery algorithm or the K-means algorithm.
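As an illustration of this step, the following Python sketch builds a connection-relationship graph from patch centroids and clusters it by connected components, a simple stand-in for the community-discovery or K-means algorithms named above; the centroid representation and the distance threshold max_dist are assumptions for the example, not details specified by the patent.

```python
from itertools import combinations

def centroid(patch):
    # Mean position of the patch's vertices, given as (x, y, z) tuples.
    xs, ys, zs = zip(*patch)
    n = len(patch)
    return (sum(xs) / n, sum(ys) / n, sum(zs) / n)

def group_patches(patches, max_dist=1.0):
    """Cluster model patches: each patch is a node, an edge links two
    patches whose centroids lie within max_dist, and every connected
    component of the resulting graph becomes one group."""
    cents = [centroid(p) for p in patches]
    adj = {i: set() for i in range(len(patches))}
    for i, j in combinations(range(len(patches)), 2):
        d = sum((a - b) ** 2 for a, b in zip(cents[i], cents[j])) ** 0.5
        if d < max_dist:
            adj[i].add(j)
            adj[j].add(i)
    group_of = {}
    group_id = 0
    for start in range(len(patches)):
        if start in group_of:
            continue
        stack = [start]  # flood-fill one connected component
        while stack:
            node = stack.pop()
            if node not in group_of:
                group_of[node] = group_id
                stack.extend(adj[node])
        group_id += 1
    return group_of  # maps patch index -> group id
```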
In step 103, the animation vector corresponding to each group is stored into the vertex color of each group.
For example, the animation vector corresponding to each group is stored into the vertex color of each group through the vertex shader, so that the animation simulation is performed based on the vertex color.
Referring to fig. 4B, fig. 4B is an alternative flowchart of the animation model processing method according to the embodiment of the present application. FIG. 4B shows that FIG. 4A further includes step 105: in step 105, the pivot point (i.e., the reference point) corresponding to each group is acquired. FIG. 4B also shows that step 103 of FIG. 4A can be implemented by steps 1031-1032: in step 1031, the animation vector of the pivot point corresponding to each group is normalized to obtain the group's motion vector; in step 1032, the group's motion vector is stored into the vertex colors of the corresponding group.
Because the vertex-color range of the vertices in each group of model patches is [0, 1], the animation vector (the xyz direction information) of the pivot point (Pivot) must be normalized into [0, 1] before the normalized vector is stored in the vertex color, which comprises four channels of information (Red, Green, Blue, and Alpha).
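A minimal sketch of this baking step, assuming the per-axis bounds (the VC_Scale range thresholds) are known for the model; the channel assignment, RGB for the normalized vector and Alpha for the per-group phase offset, follows the description above, while the function and variable names are illustrative only.

```python
def bake_vertex_color(anim_vec, bounds):
    """Map a pivot animation vector with per-axis range [-m, m]
    into the [0, 1] range of the R, G, B vertex-color channels."""
    return tuple((v / m + 1.0) * 0.5 for v, m in zip(anim_vec, bounds))

def bake_group_vertex_colors(group_vectors, bounds):
    """Normalize each group's pivot animation vector into RGB and
    give each group a distinct Alpha, used later as a phase offset."""
    n = max(len(group_vectors), 1)
    baked = {}
    for k, (gid, vec) in enumerate(sorted(group_vectors.items())):
        r, g, b = bake_vertex_color(vec, bounds)
        baked[gid] = (r, g, b, k / n)  # Alpha staggers the groups
    return baked

# Two groups with opposite motion vectors and bounds of 2 units per axis:
colors = bake_group_vertex_colors({0: (1.0, -0.5, 0.0),
                                   1: (-1.0, 0.5, 0.0)}, (2.0, 2.0, 2.0))
# colors[0] == (0.75, 0.375, 0.5, 0.0); colors[1] == (0.25, 0.625, 0.5, 0.5)
```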
In some embodiments, obtaining the pivot point corresponding to each group comprises: in response to a selection operation for a candidate pivot point in each group, the selected candidate pivot point is treated as the group's pivot point.
For example, the pivot point may be obtained by manual selection, e.g., chosen per group according to requirements; it may be entered manually, by displaying an input interface for each group's pivot point and, in response to an input operation on that interface, using the entered vertex data as the group's pivot point; or each group's pivot point may be obtained automatically, e.g., chosen from the candidate pivot points by artificial intelligence, or computed by averaging each group's candidate pivot points.
In step 104, an animation simulation is performed based on the vertex color of each group and the material parameter corresponding to the multi-detail model, and an animation image of the multi-detail model is generated.
For example, after the vertex colors of each group and the material parameters corresponding to the multi-detail model are obtained, animation simulation can be performed based on them to generate an animated image of the multi-detail model; the animation effect is thus realized without bone skinning, saving animation computing resources while maintaining animation quality.
In some embodiments, performing animation simulation based on the vertex color of each group and the material parameter corresponding to the multi-detail model, and generating an animated image of the multi-detail model, includes: performing reverse normalization on the group motion vectors included in the vertex color of each group through a vertex shader to obtain reverse normalized motion vectors; carrying out coordinate system transformation on the inverse normalized motion vector to obtain a transformed motion vector; and applying the material parameters corresponding to the multi-detail model in the transformed motion vector to obtain the animation image of the multi-detail model.
For example, the motion vector (the xyz direction) contained in each group's vertex colors is inverse-normalized by the vertex shader from [0, 1] back to [-xm, xm], [-ym, ym], and [-zm, zm], and converted between the left-handed and right-handed coordinate systems to ensure the motion vector is oriented correctly in the Unreal Engine. After the motion direction is obtained from this conversion, the RotateAboutAxis function (rotation around an axis) drives all vertices in the corresponding group, computing the rotation and flapping animation along the vector direction and thereby realizing the animation effect of the multi-detail model.
In some embodiments, before applying the material parameters corresponding to the multi-detail model to the transformed motion vector to obtain the animated image of the multi-detail model, the method further includes: determining Alpha channels in the vertex color of each group; the start positions of the groups are phase-shifted based on Alpha channels in the vertex colors of the groups to distinguish the start motion state of each group.
For example, the Alpha channel of the vertex color is applied before the sine function so that each group of model patches has a phase offset of its starting position; the motions of the groups then do not visually repeat in lockstep, making the animation of the multi-detail model more vivid.
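The Python sketch below mirrors the vertex-shader math just described; it is a CPU-side illustration, not the engine's RotateAboutAxis material node. Which axis is flipped for the left-handed/right-handed conversion is an assumption, as are the parameter names.

```python
import math

def rotate_about_axis(p, axis, angle):
    # Rodrigues' rotation of point p around a unit axis by angle radians.
    ax, ay, az = axis
    px, py, pz = p
    c, s = math.cos(angle), math.sin(angle)
    dot = ax * px + ay * py + az * pz
    cross = (ay * pz - az * py, az * px - ax * pz, ax * py - ay * px)
    return (px * c + cross[0] * s + ax * dot * (1 - c),
            py * c + cross[1] * s + ay * dot * (1 - c),
            pz * c + cross[2] * s + az * dot * (1 - c))

def animate_vertex(pos, color, bounds, time, strength, freq):
    r, g, b, a = color
    # Inverse-normalize RGB from [0, 1] back to [-m, m] on each axis.
    vx, vy, vz = ((c * 2.0 - 1.0) * m for c, m in zip((r, g, b), bounds))
    # Left-handed to right-handed: flip one axis (assumed convention).
    vy = -vy
    norm = math.sqrt(vx * vx + vy * vy + vz * vz) or 1.0
    axis = (vx / norm, vy / norm, vz / norm)
    # The group's Alpha shifts the sine phase so groups move out of step.
    angle = strength * math.sin(2.0 * math.pi * (freq * time + a))
    return rotate_about_axis(pos, axis, angle)
```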
In some embodiments, after performing the animation simulation based on the vertex color of each group and the material parameter corresponding to the multi-detail model, and generating the animation image of the multi-detail model, the method further includes: determining an interaction speed of the multi-detail model in response to an interaction operation for the multi-detail model; and performing animation simulation based on the interaction speed of the multi-detail model and the material parameters corresponding to the multi-detail model to generate an animation image of the multi-detail model in the interaction.
For example, to achieve a better simulation effect, the animation need not repeat one motion forever: the user can interact with the multi-detail model, and the interaction speed controls the material parameters so as to cooperate with the skeletal animation and achieve a better simulation.
The interaction operation may be, without limitation, a specific mouse track or a specific touch gesture.
In some embodiments, performing animation simulation based on the interaction speed of the multi-detail model and the material parameters corresponding to the multi-detail model, and generating an animation image of the multi-detail model during interaction includes: normalizing the interaction speed of the multi-detail model, and taking the normalized interaction speed as an Alpha parameter; determining motion parameter values of the multi-detail model based on the material parameters and the Alpha parameters corresponding to the multi-detail model; and applying the motion parameter values in the multi-detail model to obtain an animation image of the multi-detail model in interaction.
For example, since an interpolation Alpha is itself a value between 0 and 1, the interaction speed must be normalized into [0, 1]; the normalized value serves as the Alpha parameter of the subsequent interpolation function, through which the animation of the multi-detail model during interaction is simulated. That is, the motion parameter value of the multi-detail model is determined from the material parameters and the Alpha parameter, and applying that value to the multi-detail model yields the animated image of the model during interaction, so that the animation changes with the motion parameter value over the course of the interaction.
In some embodiments, determining the motion parameter value of the multi-detail model based on the material parameter and the Alpha parameter corresponding to the multi-detail model includes: determining a value range of the material parameter corresponding to the multi-detail model, wherein the value range comprises a maximum value and a minimum value of the material parameter; determining a difference between the maximum value and the minimum value; determining the product of the difference and the Alpha parameter; and taking the sum of the product and the minimum value as the motion parameter value of the multi-detail model.
For example, if the value range of a material parameter is [a, b], interpolating with the previously computed Alpha parameter yields the motion parameter value to be set, which is then written into the corresponding parameter of the dynamic material instance to realize the animated motion. The motion parameter value is computed as value = a + (b - a) * Alpha. The motion parameter value is not limited to this formula; other variants may be used.
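A short sketch of this interpolation, with the speed normalization from the preceding paragraphs folded in; max_speed and the sample numbers are made-up values for illustration.

```python
def speed_to_alpha(speed, max_speed):
    """Normalize the interaction speed into [0, 1] for use as the
    interpolation Alpha."""
    return max(0.0, min(1.0, speed / max_speed))

def motion_parameter(a, b, alpha):
    """value = a + (b - a) * Alpha, the formula given above."""
    return a + (b - a) * alpha

# A material parameter ranging over [0.2, 1.5] at 60% of maximum speed:
alpha = speed_to_alpha(3.0, 5.0)           # -> 0.6
value = motion_parameter(0.2, 1.5, alpha)  # -> 0.98
```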
To make it convenient for the relevant personnel to adjust the animation amplitude range, the material parameters can be configured through a table: before animation simulation is performed based on the vertex colors of each group and the material parameters corresponding to the multi-detail model to generate the animated images, the table containing the material parameters is obtained through a logic interface and read to obtain the material parameters corresponding to the multi-detail model.
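The patent does not fix the table format; the sketch below assumes a simple CSV with a model_id column plus one column per material parameter, standing in for whatever the logic interface actually returns.

```python
import csv

def load_material_params(path, model_id):
    """Read the per-model material-parameter table so designers can
    tune amplitude ranges without touching code (CSV layout assumed)."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["model_id"] == model_id:
                return {k: float(v) for k, v in row.items()
                        if k != "model_id"}
    raise KeyError(f"no material parameters found for {model_id}")
```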
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
Bone-skinning animation is widely used in the game, film, and other industries: interconnected bones form a skeletal structure, and animation is generated by changing the orientation and position of the bones. The key to bone-skinning animation is skinning, i.e., attaching the mesh vertices to bones and specifying the blend weights with which multiple bones control each vertex.
The applicant found the following problems in the course of implementing the embodiments of the present application: first, the skinning operation itself requires a lot of manpower from the animation designer to author and tune; second, for each child-level bone motion, multiple coordinate-system transformations are needed to compute the final coordinates in the animation. The more bones a model has, the more complex the nested parent-child relationships, and hence the greater the computational overhead on the Central Processing Unit (CPU).
To solve the above problems while keeping performance consumption low and achieving a better animation art effect, embodiments of the present application provide a vertex animation scheme (the animation model processing method) that can simulate dynamic skeletal behavior: model motion is implemented in the vertex shader, and the physical skeletal response during 3D model rotation is simulated through Blueprint logic. Artists need only a tool that automatically generates vertex colors according to rules to simulate the rotation, translation, and other movements of detailed parts of a model. The scheme therefore suits multi-detail models, such as multi-feather fashion models and coin-covered firearm models; it guarantees the animation effect while reducing consumption, reduces the workload of animation designers, and lets the animation effect reach more device models on mobile.
In the embodiments of the present application, the motion vector or pivot point (Pivot) information is stored in the vertex colors, model motion is driven by inverse restoration in the material (the vertex color information is decoded in a vertex shader), and the material parameters are modified through blueprint logic (the blueprint handles the logic of model interaction) while the model moves, so as to simulate the dynamic behavior of a skeleton. The motion performance of a multi-detail model can thus be improved, and the performance problems caused by animation effects and large bone counts are resolved.
The vertex animation scheme of the embodiments of the present application has wide application scenarios; for example, it applies to the interactive movement of 3D models such as commercial fashion items, firearms and pendants. The following describes both the project development stage and the project's live operation.
(1) During project development
During project development, the technical scheme of the embodiments of the present application has two application scenarios. In the first, an animation designer can directly verify in the engine whether the dynamic effect of the art resources, and the effect when matched with the skeleton during interaction, meet requirements, without performing skinning and skeleton binding. In the second, for low-end devices, where the model's vertex count is small and the bone count has been optimized down, the scheme can be used to improve on the effect of a pure skeleton approach.
(2) After the project goes live
The scheme of the embodiments of the present application supports various conditions. For middle- and high-end mobile devices, vertex colors can be combined with vertex-shader texture sampling (storing the animation intensity information in a texture) to achieve the effect; for low-end phones (e.g., models whose underlying feature level is OpenGL ES 2.0 and which do not support vertex texture sampling), a similar effect can be achieved by baking the animation intensity information into one additional vertex color channel.
The following describes the parameters of the tool and material panel used in the embodiments of the present application:
As shown in FIG. 6, the motion vector and pivot point (Pivot) information are baked (stored) into the vertex colors using three-dimensional computer graphics software (the Houdini tool): after the model is selected, the pivot point (Pivot) or the vertex index affecting the motion vector is input, and the necessary information is then baked into the vertex colors by computation. For example, vertex 6 is selected as pivot point 601 through the menu in FIG. 6, and the vertex computation is performed through the blast1 node.
FIG. 7 shows the material parameter panel in the Unreal Engine. Movable is the global animation strength switch; the Rotate parameter controls the animation strength around the x-axis (the model's rotation animation); the Shake parameter controls the animation strength around the y-axis (the model's flapping animation); the Diff parameters (RotateDiff and ShakeDiff) control the animation stagger between different models; the Freq parameters (RotateFreq and ShakeFreq) control the animation frequency; the Start parameters (RotateStart and ShakeStart) set the initial amplitude position; and VC_Scale corresponds to the range thresholds along the model's three xyz axes, i.e. the scale information of the model itself. The material parameters used to control the motion of the model thus include the Movable, Rotate, Shake, Freq, Start and VC_Scale parameters.
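For reference, the panel's parameters can be summarized in a small Python container; in the minimal sketch below, the class name MaterialParams and all default values are illustrative assumptions, not values taken from the engine.

from dataclasses import dataclass

@dataclass
class MaterialParams:
    # Material parameters controlling model motion, mirroring the FIG. 7 panel.
    movable: bool = True               # global animation strength switch
    rotate: float = 0.35               # animation strength around the x-axis (rotation)
    shake: float = 0.25                # animation strength around the y-axis (flapping)
    rotate_diff: float = 0.1           # rotation stagger between different models
    shake_diff: float = 0.1            # flapping stagger between different models
    rotate_freq: float = 1.0           # rotation animation frequency
    shake_freq: float = 1.0            # flapping animation frequency
    rotate_start: float = 0.0          # initial amplitude position of the rotation
    shake_start: float = 0.0           # initial amplitude position of the flapping
    vc_scale: tuple = (1.0, 1.0, 1.0)  # range thresholds along the model's xyz axes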
The embodiments of the present application have been verified on a project side: vertex animation for multi-detail models can be produced rapidly. In addition, to make it easy for artists to configure the animation amplitude of a multi-detail model during interaction between a player and the model, the embodiments of the present application expose the relevant material parameters of each item Identifier (ID) through an artist-configurable table, shown in Table 1, which reduces the difficulty of use and tuning for relevant personnel (a minimal reading sketch follows the table below).
TABLE 1
ID | Shake upper limit | Shake lower limit | Rotate upper limit | Rotate lower limit
1  | 0.25              | 0.000001          | 0.35               | 0.000001
2  | 0.25              | 0.000001          | 0.35               | 0.000001
3  | 0.35              | 0.1               | 0.000001           | 0.000001
4  | 0.35              | 0.1               | 0.000001           | 0.000001
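As a rough illustration of how the configuration in Table 1 might be consumed (the reading sketch promised above), the Python below hardcodes the table as a dictionary; the names PARAM_TABLE and motion_param are illustrative, and a real project would read the table through the logic interface described earlier.

# Table 1 keyed by item ID: parameter name -> (lower limit, upper limit)
PARAM_TABLE = {
    1: {"Shake": (0.000001, 0.25), "Rotate": (0.000001, 0.35)},
    2: {"Shake": (0.000001, 0.25), "Rotate": (0.000001, 0.35)},
    3: {"Shake": (0.1, 0.35), "Rotate": (0.000001, 0.000001)},
    4: {"Shake": (0.1, 0.35), "Rotate": (0.000001, 0.000001)},
}

def motion_param(item_id: int, name: str, alpha: float) -> float:
    # Look up the configured [a, b] range and interpolate with the Alpha parameter.
    a, b = PARAM_TABLE[item_id][name]
    return a + (b - a) * alpha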
As shown in FIG. 8, the overall workflow of the vertex animation scheme provided in the embodiments of the present application comprises three parts: baking the vertex color information, extracting and computing that information in the shader, and setting the interaction parameters through blueprint logic. The complete technical solution of the embodiments of the present application is described below with reference to the tools and the engine:
First, vertex color baking
First, model patches with the same animation mode (e.g., flapping in the same direction) are selected, and a group can be created for them automatically by pressing Enter. The grouping information is shown in FIG. 9: different colors represent different groups, and the newly selected model patch 901 is highlighted in yellow. The gray model patch 902, for which no grouping information has been designated, is also given its own group, and its four-channel vertex color information (the Red, Green, Blue and Alpha channels) is stored as 0.0f, meaning that it performs no vertex animation.
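A minimal sketch of this default handling, assuming each patch is a simple record with a group id and an RGBA vertex color (the names Patch and assign_default_group are illustrative):

from dataclasses import dataclass

@dataclass
class Patch:
    group: int | None = None                    # None: no grouping designated yet
    vertex_color: tuple = (0.0, 0.0, 0.0, 0.0)  # RGBA; all 0.0f means no vertex animation

def assign_default_group(patches: list, default_group: int) -> None:
    # Give each ungrouped (gray) patch its own group and zero out its vertex color.
    for p in patches:
        if p.group is None:
            p.group = default_group
            p.vertex_color = (0.0, 0.0, 0.0, 0.0)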
After the model patch grouping is determined, the pivot point (Pivot) of the animation needs to be selected for each group. As shown in FIG. 10 and FIG. 11, the tool then processes the vertices according to the authored logic: in FIG. 10, the logic selects model patch group60 through the blast121 node, selects pivot point Pivot60, and sets the vertex color information through pointvop60; clicking menu 1001 opens the vertex color logic interface shown in FIG. 11, where the animation vector (xyz direction information) of the selected pivot point (Pivot) is normalized from [-xm, xm] to [0, 1].
Since the vertex color range of each selected group of model patches is [0, 1], the animation vector (xyz direction information) of the selected pivot point (Pivot) must be normalized from [-xm, xm], [-ym, ym], [-zm, zm] into the [0, 1] range. The Alpha channel of the vertex colors stores the animation phase offset, so that the model patches do not all move with the same cadence and pattern.
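A minimal baking sketch under these conventions; the function names, and randomizing the phase when none is given, are illustrative assumptions (in practice this step runs inside the Houdini tool):

import random

def normalize_component(v: float, m: float) -> float:
    # Map a motion vector component from [-m, m] into the [0, 1] vertex color range.
    return (v + m) / (2.0 * m)

def bake_vertex_color(motion, ranges, phase=None):
    # motion: (x, y, z) with x in [-xm, xm], y in [-ym, ym], z in [-zm, zm]
    # ranges: (xm, ym, zm); phase: animation phase offset in [0, 1]
    r, g, b = (normalize_component(v, m) for v, m in zip(motion, ranges))
    a = random.random() if phase is None else phase  # Alpha stores the phase offset
    return (r, g, b, a)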
Secondly, the vertex shader restores the vertex color information and performs the animation calculation
After the vertex color information containing the motion vector is obtained, the Unreal Engine material editor first needs to parse the data: the [0, 1] values in the xyz directions are mapped back to [-xm, xm], [-ym, ym], [-zm, zm] respectively, and a left/right-handed coordinate system conversion is then performed to ensure that the motion vector's direction is correct in the Unreal Engine. As shown in FIG. 12, the data is decoded in the material graph, and the left/right-handed coordinate system conversion is performed with the BreakOutFloat3Components and MakeFloat3 functions.
After the directions are obtained through the left/right-handed coordinate system conversion, the RotateAboutAxis function is used to drive each vertex, performing the rotation and flapping animation calculations along the vector direction. As shown in FIG. 13, the Alpha channel of the vertex color is added inside the sine function (originally sin(x), now sin(x + Alpha)) so that the animation has a phase offset at its initial position, ensuring that the motions of the individual detail models do not visibly repeat. This part of the calculation is performed in model space (LocalSpace), and the results must be converted to world space (WorldSpace) to display correctly in the game scene.
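The sketch below is a CPU-side Python emulation of this decode-then-animate step, not the engine's actual material node graph; the handedness flip on the y-axis and the function names are assumptions for illustration.

import math

def denormalize(c: float, m: float) -> float:
    # Inverse of the bake step: map a [0, 1] color channel back to [-m, m].
    return c * 2.0 * m - m

def decode_motion(vertex_color, ranges):
    # Recover the motion vector from RGB and the phase offset from Alpha.
    r, g, b, a = vertex_color
    x, y, z = (denormalize(c, m) for c, m in zip((r, g, b), ranges))
    return (x, -y, z), a  # flip one axis for the left/right-handed conversion

def swing_angle(time_s: float, freq: float, strength: float, phase: float) -> float:
    # Per-vertex swing: sin(x + Alpha) gives each group its own phase offset.
    return strength * math.sin(time_s * freq + phase)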
Thirdly, blueprint logic processing to support model interaction
In the interaction logic, the rotation speed (interaction speed) of the model is sampled every logical frame (tick). When the speed is greater than 0, the logic for dynamically modifying the material parameters is entered: the material parameters are driven by the magnitude of the interaction speed so that, matched with the bone animation, a better simulation effect is achieved instead of the animation simply repeating forever.
As shown in FIG. 14, the interaction speed is first clamped to a threshold. Since the interpolation factor is a value between 0 and 1, the interaction speed is normalized into the [0, 1] range, and the normalized interaction speed is used as the Alpha parameter of the subsequent interpolation function, through which the model's animation during interaction is simulated. In FIG. 14, Rotate Speed represents the interaction speed and Mat Alpha represents the Alpha parameter.
To approximate a physically simulated skeleton as closely as possible and to prevent animation jitter caused by jumps in the interaction speed, when the speed drops suddenly the Alpha parameter is instead decreased by a small fixed amount, avoiding abrupt changes in the motion speed.
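A per-tick sketch of this smoothing; the names update_alpha, MAX_SPEED and DROP_STEP, and their values, are illustrative assumptions:

MAX_SPEED = 720.0   # assumed clamp threshold for the interaction speed
DROP_STEP = 0.05    # small fixed decrement applied when the speed drops suddenly

def update_alpha(prev_alpha: float, rotate_speed: float) -> float:
    # Normalize the clamped interaction speed into [0, 1] ...
    speed = min(abs(rotate_speed), MAX_SPEED)
    target = speed / MAX_SPEED
    if target < prev_alpha:
        # ... but on a sudden drop, step down by a fixed amount instead of jumping.
        return max(target, prev_alpha - DROP_STEP)
    return target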
As shown in FIG. 15, to make it convenient for relevant personnel to adjust the animation amplitude range, the material parameters may be configured through a table. The logic side reads the table (see Table 1) through a Weapon Mat Param ID function to extract, for the item ID in the configuration, the parameter name to be modified and its variation range [a, b]; it then interpolates with the previously obtained Alpha parameter to get the value to set, and writes that value to the corresponding parameter of the dynamic material instance. The calculation is shown in formula (1):
value of the material parameter = a + (b − a) × Alpha (1)
After the interaction speed returns to zero (the user is no longer interacting), a timer is triggered and the parameter value is eased back to its minimum within 1 s to simulate rotational inertia. The ID, parameter name and parameter range are exposed as configuration items throughout the interaction logic, keeping the logic convenient and easy to understand and improving the flexibility and usability of the whole scheme.
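A sketch of this ease-out, assuming a per-tick update driven by the frame's delta time; decay_to_min and DECAY_SECONDS are illustrative names:

DECAY_SECONDS = 1.0  # the parameter value returns to its minimum within 1 s

def decay_to_min(value: float, a: float, b: float, dt: float) -> float:
    # Move the material parameter back toward its minimum a at a rate that
    # would traverse the whole [a, b] range in DECAY_SECONDS.
    step = (b - a) * dt / DECAY_SECONDS
    return max(a, value - step)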
In summary, the vertex animation scheme provided by the embodiments of the present application, which simulates dynamic bone behavior, saves animation designers the time spent on separate skeleton binding and skinning, and is easy for character designers to operate: the vertex information can be baked automatically simply by designating the vertex groups, the animation can be debugged visually in the engine, and a better simulation effect can be achieved in cooperation with skeletal animation. The scheme's performance cost is low, allowing the dynamic effect to reach more low-end device models without tiering the model skeletons, and it improves the efficiency of the game development pipeline.
The animation model processing method provided in the embodiments of the present application has been described above with reference to an exemplary application and implementation on a terminal. The following continues by describing how the modules of the animation model processing apparatus 455 provided in the embodiments of the present application cooperate to implement the animation model processing.
An obtaining module 4551, configured to obtain a multi-detail model, where the multi-detail model includes a plurality of model patches; a grouping module 4552, configured to group the plurality of model patches in the multi-detail model to obtain the group of each model patch; a storage module 4553, configured to store the animation vector corresponding to each group into the vertex color of each group; and a simulation module 4554, configured to perform animation simulation based on the vertex color of each group and the material parameters corresponding to the multi-detail model, and generate an animation image of the multi-detail model.
In some embodiments, before the obtaining the multi-detail model, the apparatus further comprises: a display module 4555 configured to display the candidate model in the editing interface; in response to a first selection operation for the candidate model, the selected portion of the candidate model is treated as the multi-detail model.
In some embodiments, the candidate model comprises a plurality of types of model patches; the display module 4555 is further configured to determine, in response to a first selection operation for at least one model patch of the candidate model, a plurality of model patches of the candidate model that are of the same type as the at least one model patch; constructing the multi-detail model based on a plurality of model patches of the same type as the at least one model patch.
In some embodiments, the grouping module 4552 is further configured to, in response to a grouping operation for a plurality of model patches of the multi-detail model, divide the plurality of model patches into different groups according to the grouping operation.
In some embodiments, the grouping operation comprises a first triggering operation and a second selecting operation; the grouping module 4552 is further configured to display a plurality of candidate groups corresponding to any of the model patches in response to the first trigger operation for any of the model patches; in response to the second selection operation for the plurality of candidate groups, treating the selected candidate group as a group of any of the model patches.
In some embodiments, the grouping module 4552 is further configured to display grouping trigger entries for a plurality of model patches in the multi-detail model; and, in response to a trigger operation for the grouping trigger entry, cluster the plurality of model patches based on the similarity among the plurality of model patches to obtain the group of each model patch.
In some embodiments, the grouping module 4552 is further configured to construct a connection relationship diagram of a plurality of model patches in the multi-detail model based on connection relationships among the plurality of model patches; and clustering the plurality of model patches based on the connection relation graphs of the plurality of model patches to obtain the group of each model patch.
In some embodiments, the acquiring module 4551 is further configured to acquire a pivot point corresponding to each of the groups; the storage module 4553 is further configured to normalize the animation vector of the pivot point corresponding to each group to obtain the group's motion vector, and store each group's motion vector into the vertex colors of the corresponding group.
In some embodiments, the obtaining module 4551 is further configured to, in response to a selection operation for a candidate pivot point in each of the groups, treat the selected candidate pivot point as the group pivot point.
In some embodiments, the obtaining module 4551 is further configured to display an input interface for each of the group pivot points; and responding to the input operation of the input interface, and taking the input vertex data as the pivot point corresponding to each group.
In some embodiments, the simulation module 4554 is further configured to perform inverse normalization on the group motion vectors included in each of the group vertex colors through a vertex shader, so as to obtain inverse normalized motion vectors; carrying out coordinate system transformation on the inverse normalized motion vector to obtain a transformed motion vector; and applying the material parameters corresponding to the multi-detail model in the transformed motion vector to obtain an animation image of the multi-detail model.
In some embodiments, the simulation module 4554 is further configured to determine Alpha channels in the vertex colors of each of the groups; phase shifting the starting positions of the groups based on Alpha channels in the vertex colors of the groups to distinguish the starting motion state of each group.
In some embodiments, the apparatus further comprises: an interaction module 4556, configured to determine an interaction speed of the multi-detail model in response to an interaction operation for the multi-detail model; and carrying out animation simulation based on the interaction speed of the multi-detail model and the material parameters corresponding to the multi-detail model, and generating an animation image of the multi-detail model in the interaction.
In some embodiments, the simulation module 4554 is further configured to normalize the interaction speed of the multi-detail model, and use the normalized interaction speed as an Alpha parameter; determining a motion parameter value of the multi-detail model based on a material parameter and the Alpha parameter corresponding to the multi-detail model; and applying the motion parameter values in the multi-detail model to obtain an animation image of the multi-detail model in interaction.
In some embodiments, the simulation module 4554 is further configured to determine a value range of a material parameter corresponding to the multi-detail model, where the value range includes a maximum value and a minimum value of the material parameter; determining a difference between the maximum value and the minimum value; determining a product of the difference and the Alpha parameter; and taking the sum of the product and the minimum value as the motion parameter value of the multi-detail model.
In some embodiments, the obtaining module 4551 is further configured to obtain, through a logical interface, a table including the material parameter; and reading the table to obtain the material parameters corresponding to the multi-detail model.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device executes the animation model processing method according to the embodiment of the application.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions, which when executed by a processor, cause the processor to perform an animation model processing method provided by embodiments of the present application, for example, an animation model processing method as shown in fig. 4A-4B.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a Hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (19)

1. A method for processing an animation model, the method comprising:
obtaining a multi-detail model, wherein the multi-detail model comprises a plurality of model patches;
grouping a plurality of model patches in the multi-detail model to obtain a group of each model patch;
storing the animation vector corresponding to each group into the vertex color of each group;
and performing animation simulation based on the vertex color of each group and the material parameters corresponding to the multi-detail model to generate an animation image of the multi-detail model.
2. The method of claim 1, wherein prior to obtaining the multi-detail model, the method further comprises:
displaying the candidate model in an editing interface;
in response to a first selection operation for the candidate model, the selected portion of the candidate model is treated as the multi-detail model.
3. The method of claim 2,
the candidate model comprises a plurality of types of model patches;
the taking the selected portion of the candidate model as the multi-detail model in response to the first selection operation for the candidate model comprises:
determining a plurality of model patches of the candidate model that are of the same type as at least one model patch in response to a first selection operation for the at least one model patch in the candidate model;
constructing the multi-detail model based on a plurality of model patches of the same type as the at least one model patch.
4. The method of claim 1, wherein grouping a plurality of model patches in the multi-detail model to obtain a group for each model patch comprises:
in response to a grouping operation for a plurality of model patches in the multi-detail model, the plurality of model patches are divided into different groups according to the grouping operation.
5. The method of claim 4,
the grouping operation comprises a first triggering operation and a second selecting operation;
the partitioning, in response to a grouping operation for a plurality of model patches in the multi-detail model, the plurality of model patches into different groups according to the grouping operation includes:
responding to the first trigger operation aiming at any model patch, and displaying a plurality of candidate groups corresponding to any model patch;
in response to the second selection operation for the plurality of candidate groups, treating the selected candidate group as a group of any of the model patches.
6. The method of claim 1, wherein grouping a plurality of model patches in the multi-detail model to obtain a group for each model patch comprises:
displaying grouping trigger entries for a plurality of model patches in the multi-detail model;
and in response to a trigger operation for the grouping trigger entry, clustering the plurality of model patches based on the similarity among the plurality of model patches to obtain the group of each model patch.
7. The method of claim 1, wherein grouping a plurality of model patches in the multi-detail model to obtain a group for each model patch comprises:
constructing a connection relation graph of a plurality of model patches based on connection relations among the plurality of model patches in the multi-detail model;
and clustering the plurality of model patches based on the connection relation graphs of the plurality of model patches to obtain the group of each model patch.
8. The method of claim 1,
before storing the animation vector corresponding to each group into the vertex color of each group, the method further comprises:
acquiring a pivot point corresponding to each group;
storing the animation vector corresponding to each group into the vertex color of each group, including:
normalizing the animation vector of the pivot point corresponding to each group to obtain the motion vector of the group;
and storing the group motion vectors into the vertex colors of the corresponding group.
9. The method of claim 8, wherein said obtaining a pivot point corresponding to each of said groups comprises:
in response to a selection operation for a candidate pivot point in each of the groups, treating the selected candidate pivot point as the group's pivot point.
10. The method of claim 8, wherein said obtaining a pivot point corresponding to each of said groups comprises:
displaying an input interface for each of the groups of pivot points;
and responding to the input operation of the input interface, and taking the input vertex data as the pivot point corresponding to each group.
11. The method of claim 8, wherein performing an animation simulation based on the vertex color of each of the groups and the material parameter corresponding to the multi-detail model to generate an animated image of the multi-detail model comprises:
performing reverse normalization on the group motion vectors included in the vertex color of each group through a vertex shader to obtain reverse normalized motion vectors;
carrying out coordinate system transformation on the inverse normalized motion vector to obtain a transformed motion vector;
and applying the material parameters corresponding to the multi-detail model in the transformed motion vector to obtain an animation image of the multi-detail model.
12. The method according to claim 11, wherein before applying the material parameters corresponding to the multi-detail model to the transformed motion vectors to obtain the animated image of the multi-detail model, the method further comprises:
determining Alpha channels in the vertex color of each said group;
phase shifting the starting positions of the groups based on Alpha channels in the vertex colors of the groups to distinguish the starting motion state of each group.
13. The method of claim 1, wherein after performing the animation simulation based on the vertex color of each of the groups and the material parameter corresponding to the multi-detail model, and generating the animated image of the multi-detail model, the method further comprises:
determining an interaction speed of the multi-detail model in response to an interaction operation for the multi-detail model;
and carrying out animation simulation based on the interaction speed of the multi-detail model and the material parameters corresponding to the multi-detail model, and generating an animation image of the multi-detail model in the interaction.
14. The method according to claim 13, wherein performing animation simulation based on the interaction speed of the multi-detail model and the material parameters corresponding to the multi-detail model to generate an animated image of the multi-detail model during interaction comprises:
normalizing the interaction speed of the multi-detail model, and taking the normalized interaction speed as an Alpha parameter;
determining a motion parameter value of the multi-detail model based on a material parameter and the Alpha parameter corresponding to the multi-detail model;
and applying the motion parameter values in the multi-detail model to obtain an animation image of the multi-detail model in interaction.
15. The method according to claim 14, wherein the determining motion parameter values of the multi-detail model based on the texture parameters and the Alpha parameters corresponding to the multi-detail model comprises:
determining a value range of the material parameter corresponding to the multi-detail model, wherein the value range comprises a maximum value and a minimum value of the material parameter;
determining a difference between the maximum value and the minimum value;
determining a product of the difference and the Alpha parameter;
and taking the sum of the product and the minimum value as the motion parameter value of the multi-detail model.
16. The method of claim 1, wherein before performing the animation simulation based on the vertex color of each of the groups and the material parameter corresponding to the multi-detail model to generate the animated image of the multi-detail model, the method further comprises:
obtaining a table comprising the material parameters through a logic interface;
and reading the table to obtain the material parameters corresponding to the multi-detail model.
17. An animation model processing apparatus, characterized in that the apparatus comprises:
the system comprises an obtaining module, a judging module and a judging module, wherein the obtaining module is used for obtaining a multi-detail model, and the multi-detail model comprises a plurality of model patches;
the grouping module is used for grouping a plurality of model patches in the multi-detail model to obtain the group of each model patch;
the storage module is used for storing the animation vector corresponding to each group into the vertex color of each group;
and the simulation module is used for carrying out animation simulation based on the vertex color of each group and the material parameters corresponding to the multi-detail model to generate an animation image of the multi-detail model.
18. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor for implementing the animation model processing method of any one of claims 1 to 16 when executing the executable instructions stored in the memory.
19. A computer-readable storage medium storing executable instructions for implementing the animation model processing method according to any one of claims 1 to 16 when executed by a processor.
CN202110270424.XA 2021-03-12 2021-03-12 Animation model processing method, device, equipment and storage medium Active CN112843704B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110270424.XA CN112843704B (en) 2021-03-12 2021-03-12 Animation model processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112843704A true CN112843704A (en) 2021-05-28
CN112843704B CN112843704B (en) 2022-07-29

Family

ID=75994311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110270424.XA Active CN112843704B (en) 2021-03-12 2021-03-12 Animation model processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112843704B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107924579A (en) * 2015-08-14 2018-04-17 麦特尔有限公司 The method for generating personalization 3D head models or 3D body models
US20190122411A1 (en) * 2016-06-23 2019-04-25 LoomAi, Inc. Systems and Methods for Generating Computer Ready Animation Models of a Human Head from Captured Data Images
CN106529569A (en) * 2016-10-11 2017-03-22 北京航空航天大学 Three-dimensional model triangular facet feature learning classification method and device based on deep learning
CN112419430A (en) * 2020-05-28 2021-02-26 上海哔哩哔哩科技有限公司 Animation playing method and device and computer equipment
CN111760277A (en) * 2020-07-06 2020-10-13 网易(杭州)网络有限公司 Illumination rendering method and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113590334A (en) * 2021-08-06 2021-11-02 广州博冠信息科技有限公司 Role model processing method, role model processing device, role model processing medium and electronic equipment
CN114047998A (en) * 2021-11-30 2022-02-15 珠海金山数字网络科技有限公司 Object updating method and device
CN114047998B (en) * 2021-11-30 2024-04-19 珠海金山数字网络科技有限公司 Object updating method and device

Also Published As

Publication number Publication date
CN112843704B (en) 2022-07-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40043868
Country of ref document: HK
GR01 Patent grant