CN108010112B - Animation processing method, device and storage medium - Google Patents

Animation processing method, device and storage medium

Info

Publication number
CN108010112B
CN108010112B (application CN201711219344.1A)
Authority
CN
China
Prior art keywords
animation
file
memory
action
rendering
Prior art date
Legal status
Active
Application number
CN201711219344.1A
Other languages
Chinese (zh)
Other versions
CN108010112A (en)
Inventor
陈慧中
李振文
Current Assignee
Tencent Cyber Tianjin Co Ltd
Original Assignee
Tencent Cyber Tianjin Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Cyber Tianjin Co Ltd filed Critical Tencent Cyber Tianjin Co Ltd
Priority to CN201711219344.1A priority Critical patent/CN108010112B/en
Publication of CN108010112A publication Critical patent/CN108010112A/en
Application granted granted Critical
Publication of CN108010112B publication Critical patent/CN108010112B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites

Abstract

The invention discloses an animation processing method, which comprises the following steps: obtaining an animation file set in which animation information of an object is recorded; loading, in the memory of the client, the animation file in the animation file set that corresponds to the skeleton model of the object; sequentially loading, in the memory of the client and based on the sequence of time points, the animation files in the animation file set that correspond to the actions of the object at different time sequence points; parsing the animation files loaded in the memory of the client to generate rendering objects comprising the skeleton model and the actions of the object; and, on detecting that the rendering time of the rendering object corresponding to an action has arrived, performing animation rendering based on the generated rendering object to obtain an animation conforming to the action of the object at the target time sequence point. The invention also discloses an animation processing device and a storage medium.

Description

Animation processing method, device and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an animation processing method, an animation processing device, and a storage medium.
Background
Skeletal animation is produced by binding pictures of an object's body parts to a set of interconnected "bones" and controlling the positions, rotation directions, enlargement, and reduction of those bones; the parts are linked to one another through the bones. Skeletal animation is widely used because, compared with conventional frame-by-frame animation, it has clear advantages (fewer art resources, smaller volume, etc.) and is favored by users.
However, to optimize the performance of skeletal animation, the related art often reduces the number of bones, the number of vertices in the skin, and the like, which degrades the display effect of the animation to a greater or lesser extent.
Disclosure of Invention
The embodiment of the invention provides an animation processing method, an animation processing device and a storage medium, which can reduce memory occupation and enable animation playing to be smoother.
The technical scheme of the embodiment of the invention is realized as follows:
the embodiment of the invention provides an animation processing method, which comprises the following steps:
obtaining an animation file set recorded with animation information of an object;
loading an animation file in the animation file set in a memory of the client, wherein the loaded animation file corresponds to the skeleton model of the object;
based on the sequence of time points, sequentially loading the animation files in the animation file set in the memory of the client, wherein the loaded animation files correspond to the actions of the object at different time sequence points;
analyzing the animation file loaded by the memory of the client to generate a rendering object comprising a skeleton model and actions of the object;
and, on detecting that the rendering time of the rendering object corresponding to the action has arrived, performing animation rendering based on the generated rendering object to obtain an animation conforming to the action of the object at the target time sequence point.
The embodiment of the invention provides an animation processing device, which comprises:
the acquisition module is used for acquiring an animation file set recorded with animation information of the object;
the loading module is used for loading the animation files in the animation file set in the memory of the client, and the loaded animation files correspond to the skeleton model of the object;
based on the sequence of time points, sequentially loading the animation files in the animation file set in the memory of the client, wherein the loaded animation files correspond to actions corresponding to different time sequence points;
the analysis module is used for analyzing the animation file loaded in the memory of the client to generate a rendering object comprising a skeleton model and actions of the object;
and the rendering module is used for detecting that the rendering time of the rendering object corresponding to the action has arrived, and performing animation rendering based on the generated rendering object to obtain an animation conforming to the action of the object at the target time sequence point.
The embodiment of the invention provides an animation processing device, which comprises:
a memory for storing an executable program;
and the processor is used for realizing the animation processing method when executing the executable program stored in the memory.
An embodiment of the present invention provides a readable storage medium storing an executable program which when executed by a processor implements the above-described animation processing method.
The embodiment of the invention has the following beneficial effects:
by applying the animation processing method, the device and the storage medium provided by the embodiment of the invention, as only partial motion data of the animation is loaded in the memory and complete motion data of the whole animation is not loaded, the memory occupation in the animation generation process is greatly reduced, the animation is played more smoothly, and the risks of black screen, blocking and the like of pages caused by insufficient memory are avoided.
Drawings
FIG. 1 is a schematic diagram of an alternative hardware configuration of an animation processing device according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of an alternative hardware structure of a terminal according to an embodiment of the present invention;
FIG. 3 is a schematic view of an alternative application scenario of the animation processing method according to the embodiment of the present invention;
FIG. 4 is a schematic view of an alternative application scenario of the animation processing method according to the embodiment of the present invention;
FIG. 5 is a schematic diagram of the working principle of an animation editor according to an embodiment of the present invention;
FIG. 6-1 is a schematic diagram of an embodiment of the present invention before deformation of a picture using a skin;
FIG. 6-2 is a schematic diagram of a skin for image deformation according to an embodiment of the present invention;
FIG. 7 is a diagram of an object map set with blank maps according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an object map setting provided by an embodiment of the present invention;
Fig. 9 is a schematic flow chart of the dress-up drawing provided by the embodiment of the invention;
FIG. 10 is an exemplary diagram of an action occupancy memory provided by an embodiment of the present invention;
FIG. 11 is a schematic diagram of splitting bone model data provided by an embodiment of the present invention;
FIG. 12 is a schematic flow chart of an alternative method of animation processing according to an embodiment of the present invention;
FIG. 13 is a schematic flow chart of an alternative method of animation processing according to an embodiment of the present invention;
Fig. 14 is a schematic diagram of an animation processing device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples. It is to be understood that the examples provided herein are for the purpose of illustration only and are not intended to limit the invention. In addition, the embodiments provided below are some of the embodiments for carrying out the present invention, but not all of the embodiments for carrying out the present invention, and the technical solutions described in the embodiments of the present invention may be implemented in any combination without conflict.
It should be noted that, in the embodiments of the present invention, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a method or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such method or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other related elements in a method or apparatus comprising that element (e.g., a step in a method or a module in an apparatus, where a module may be part of a circuit, part of a processor, part of a program or software, etc.).
For example, the animation processing method provided in the embodiment of the present invention includes a series of steps, but the animation processing method provided in the embodiment of the present invention is not limited to the described steps, and similarly, the animation processing apparatus provided in the embodiment of the present invention includes a series of modules, but the animation processing apparatus provided in the embodiment of the present invention is not limited to including the explicitly described modules, and may include units required for acquiring related information or performing processing based on the information.
Fig. 1 is a breakdown of the memory of a browser according to an embodiment of the present invention. Referring to fig. 1, the memory occupied by a newly opened browser page includes: the resource files loaded by the page (css, js, pictures, and other resources), the code compiled for the browser to execute, the js heap (variables produced by executing js code), the dom structure, and the like. Moreover, on devices without dedicated graphics memory (such as cell phones), the GPU memory (the memory occupied by WebGL) is shared with the browser's memory. Therefore, reducing memory occupation allows the same number of model animations to be played more smoothly, and reduces the risk of the page going black or stalling when memory is insufficient.
In practical implementation of the present invention, the animation processing method provided by the embodiment of the present invention may be implemented by the terminal device alone or by the terminal and the server cooperatively. The terminal or the server comprises one or more processors and a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the programs can comprise one or more units corresponding to a group of instructions, and the one or more processors are configured to execute the instructions to implement the animation processing method according to the embodiment of the invention.
Various application clients (executable programs for the user terminal to run, and support the user to use corresponding services) can be installed on the terminal, for example: an animation class client, an animation performance optimization class client, a game class client, a social client, etc.
The terminal mentioned in the embodiment of the present invention may be understood as a device running a client on a user side, and may be various electronic devices having a display screen and supporting an interactive function, for example: the fixed terminal can be a desktop computer, a television set top box, an Xbox/PS3 game machine and the like; the mobile terminal can also be a mobile terminal such as a smart phone, a notebook computer, a tablet personal computer and the like.
Next, an alternative hardware configuration diagram of an animation processing device corresponding to an animation processing method according to an embodiment of the present invention will be described with reference to fig. 2, and the animation processing device may be implemented in various forms, such as: implemented by the terminal alone or implemented by the terminal and the server cooperatively. In the following, the hardware configuration of the animation processing device according to the embodiment of the present invention will be described in detail, and it will be understood that fig. 2 shows only an exemplary configuration of the animation processing device, not all the configurations, and that part or all of the configurations shown in fig. 2 may be implemented as needed.
Referring to fig. 2, the animation processing apparatus 100 shown in fig. 2 includes: at least one processor 101, a memory 102, a user interface 103, and at least one network interface 104. The various components in the animation processing device 100 are coupled together by a bus system 105. It is understood that the bus system 105 is used to enable connected communications between these components. The bus system 105 includes a power bus, a control bus, and a status signal bus in addition to a data bus. But for clarity of illustration the various buses are labeled as bus system 105 in fig. 2.
The user interface 103 may include, among other things, a display, keyboard, mouse, trackball, click wheel, keys, buttons, touch pad, or touch screen, etc.
It will be appreciated that the memory 102 may be either volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory.
The memory 102 in the embodiment of the present invention is used to store various types of data to support the operation of the animation processing device 100. Examples of such data include: any computer program for operating on the animation processing device 100, such as the executable program 1021, a program implementing the animation processing method of the embodiment of the present invention may be included in the executable program 1021.
The animation processing method disclosed in the embodiment of the invention can be applied to the processor 101 or implemented by the processor 101. The processor 101 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the animation processing method may be performed by integrated logic circuits of hardware in the processor 101 or instructions in the form of software. The processor 101 may be a general purpose processor, a digital signal processor (DSP, Digital Signal Processor), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. Processor 101 may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed in the embodiment of the invention can be directly embodied in the hardware of the decoding processor or can be implemented by combining hardware and software modules in the decoding processor. The software module may be located in a storage medium, where the storage medium is located in the memory 102, and the processor 101 reads information in the memory 102, and in combination with its hardware, performs the steps of the animation processing method provided by the embodiment of the invention.
In one embodiment of the present invention, the animation processing device is implemented as a terminal. Fig. 3 is a schematic diagram of an optional hardware structure of the terminal provided in the embodiment of the present invention, where the terminal includes:
a memory 102 configured to store an executable program;
the processor 101 is configured to implement the above-mentioned animation processing method provided by the embodiment of the present invention when executing the executable program stored in the memory.
The memory 102 also stores the operating system 1022 of the terminal.
The network interface 104 may include one or more communication modules, including a mobile communication module 1041 and a wireless internet module 1042.
The a/V (audio/video) input unit 120 is for receiving an audio or video signal. A camera 121 and a microphone 122 may be included.
The sensing unit 140 includes a sensor 141 for acquiring sensing data, such as a light sensor, a motion sensor, a pressure sensor, an iris sensor, etc.
The power supply unit 190 (e.g., a battery), preferably, the power supply unit 190 may be logically connected to the processor 101 through a power management system, so as to perform functions of managing charging, discharging, and power consumption management through the power management system.
The output unit 150 includes:
A display unit 151 that displays information input by a user or information provided to the user, and may include a display panel;
the audio output module 152 may convert audio data received or stored in the memory into an audio signal and output as sound when the terminal is in a call signal reception mode, a talk mode, a recording mode, a voice recognition mode, a broadcast reception mode, and the like. Also, the audio output module may also provide audio output (e.g., a call signal receiving sound, a message receiving sound, etc.) related to a specific function performed by the terminal, and may include a speaker, a buzzer, etc.
The alarm unit 153 may realize an alarm of a specific event of the terminal, such as a fault alarm, etc.
The animation processing apparatus according to the embodiment of the present invention has been described so far in terms of functions, and based on the optional hardware configuration diagram of the above-described animation processing apparatus, an application scenario for implementing the animation processing method according to the embodiment of the present invention will be described below.
Fig. 4 is a schematic diagram of an optional application scenario of the animation processing method provided by the embodiment of the present invention, in the application scenario, a terminal and a server cooperatively implement the animation processing method provided by the embodiment of the present invention, referring to fig. 4, a terminal device communicates with the server through a network, the network may include various connection types, for example, a wired communication link, a wireless communication link, or an optical fiber cable, etc., a bone animation editor is provided on the server side, the terminal sends an animation playing request to the server through the network, acquires an animation file set of an animation generated by the bone animation editor on the server side, and implements rendering of the animation based on the animation file set, so as to obtain and play the bone animation.
In another application scene, a skeleton animation editor is arranged at the terminal side, an animation file set of the animation is obtained through the skeleton animation editor arranged by the terminal side, rendering of the animation is achieved based on the animation file set, and skeleton animation playing is obtained and carried out.
Based on the application scenario of the animation processing method and the animation processing device, the implementation procedure of the animation processing method according to the embodiment of the invention will be described.
First, description will be made of an animation file mentioned in the embodiment of the present invention, in which an animation file is generated by an animation editor for realizing a specific type of file of an animation information record.
The animation file set acquired by the client at least comprises: an animation file of a skeletal model of an object, and an animation file (description file) of an action of the object;
the animation file of the skeleton model at least comprises a mapping file of a texture mapping of an object, a description file of skeleton information of the object and a description file of skin information of the object. In practical application, the description file mentioned in the embodiment of the present invention may be understood as a configuration file generated by an animation editor and describing animation information of an object.
Taking the Spine editor as an example (other animation editors, such as the open-source engine DragonBones, can also be used in practice), the animation files may include: an atlas file (description file), a json file (description file), and a png file (map file). The atlas file and the png file record the information of all the cut pictures before packing, while the json file records the skeleton configuration and the animation information, i.e., the most critical data of the Spine animation.
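For illustration only, the contents of such an export might be modelled as follows; the field names are a hypothetical abbreviation, not the exact Spine schema:

```javascript
// Hypothetical sketch of the three files in a Spine-style export.
// Field names are illustrative, not the exact Spine file format.
const skeletonJson = {
  skeleton: { width: 200, height: 300 },            // overall model size
  bones: [
    { name: "root" },
    { name: "head", parent: "root", x: 0, y: 120 }  // child bone, relative to parent
  ],
  slots: [{ name: "head", bone: "head", attachment: "head" }],
  skins: { default: { head: { head: { width: 64, height: 64 } } } },
  animations: {
    run:  { bones: { /* per-bone keyframe data */ } },
    idle: { bones: {} }
  }
};

// The .atlas file maps each cut picture to a region of the .png texture;
// here modelled as a plain lookup table.
const atlas = { head: { x: 0, y: 0, width: 64, height: 64, rotated: false } };

console.log(Object.keys(skeletonJson.animations));
```

Note that the json file carries both the skeleton configuration (bones, slots, skins) and all the animation data — which is exactly what the splitting described later takes apart.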
Next, the process of generating the animation files with the Spine editor — which here corresponds to generating an H5 skeletal animation — is described; the Spine editor can work with runtimes in multiple languages, optionally the PIXI open-source engine.
First, the working principle of the Spine editor in the embodiment of the invention will be described.
Fig. 5 is a schematic diagram of the working principle of an animation editor according to an embodiment of the present invention. Referring to fig. 5, in an embodiment of the present invention, an object of a skeletal animation is composed of skeletons, slots, and accessories; the skeleton and the slot are in a 1-to-N relationship (N being a positive integer greater than 1), and the slot and the accessory are likewise in a 1-to-N relationship. These are described below in turn.
The skeleton structure of the object is tree-shaped, including a root skeleton (trunk) and other skeletons (such as the head or eyeballs); the skeletons are connected through parent-child relationships, and the displacement (including rotation angle and position) of a skeleton is relative to its parent skeleton. An obvious advantage of the tree structure is that if a displacement needs to be set in the animation, only the displacement of the root node needs to be set.
A slot can hold an accessory (such as a skin); in addition to describing which map to paste (even a specific region on the map), the slot carries transformation data (which itself does not change).
The accessory is used to show the external outline (look) of an object and includes three types of information: pictures, skins, and weight skins. The skin indicates the deformation characteristics of the picture and comprises vertices, edges, and triangular areas, and can deform a certain area of the picture; see figs. 6-1 and 6-2, which are schematic diagrams of the picture before and after deformation by the skin according to the embodiment of the invention. In figs. 6-1 and 6-2, the nose of the object can be lengthened simply by moving the vertex A to the right. The weight skin indicates the weight of a bone on a vertex, so that the position of the vertex varies with the bone.
The actions of the object in the skeletal animation are based on a time axis: the visual state of the object is updated over time. A complete skeletal-animation action can be decomposed onto each bone, and each bone's action is defined by key frames; the key frame corresponding to the action at a certain time sequence point describes the skeleton characteristics of the object's action at that point. That is, an action key frame can indicate one or more actions of the object; for example, an action key frame for running can correspond to the following actions: the torso moving up and down, the legs moving up and down, the arms swinging back and forth, and the elbow bending changing. One skeletal animation corresponds to at least two action key frames (a start action key frame corresponding to the start of the action and an end action key frame corresponding to its end), and the state between the two action key frames corresponding to two adjacent time sequence points can be obtained by interpolation. The time sequence points described here are understood as a sequence ordered by time or by action playing order, and are not necessarily real time (such as a clock time).
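The interpolation between two adjacent action key frames can be sketched as follows; this is a linear-interpolation illustration with hypothetical names, not the editor's actual runtime code (real runtimes also support curved interpolation):

```javascript
// Sketch of interpolating a bone's pose between two action key frames.
// Names and structure are illustrative, not an actual runtime API.
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// A key frame stores the bone state at one time sequence point.
const startFrame = { time: 0.0, rotation: 0,  x: 0,  y: 0 };
const endFrame   = { time: 0.5, rotation: 90, x: 10, y: 0 };

// Pose at an intermediate time: interpolate every channel independently.
function poseAt(time, k0, k1) {
  const t = (time - k0.time) / (k1.time - k0.time);
  return {
    rotation: lerp(k0.rotation, k1.rotation, t),
    x: lerp(k0.x, k1.x, t),
    y: lerp(k0.y, k1.y, t),
  };
}

const mid = poseAt(0.25, startFrame, endFrame);
console.log(mid);
```

At time 0.25, half-way between the two key frames, each channel sits half-way between its start and end values.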
In the embodiment of the invention, the animation editor generates a description file comprising at least two actions of the object through at least two action key frames set by a user, and the description file corresponds to the Spine editor and is a json file comprising at least two actions of the object.
When the animation editor performs skeletal-animation editing, the generated original object is not dressed up, so blank (transparent) maps need to be used as placeholders and packed into the original atlas (sprite sheet). As shown in fig. 7, a schematic diagram of an object map set with blank maps (marked 1 and 2 in fig. 7) provided by the embodiment of the invention, the blank maps obviously occupy additional memory of the graphics processor (Graphics Processing Unit, GPU). Therefore, in the embodiment of the invention, the blank maps corresponding to the dress-up are removed through the model-deriving tool of Spine, and a canvas with the same size as the blank map is declared instead. Referring to fig. 8, a schematic diagram of an object map setting provided in an embodiment of the present invention: all blank areas in the dress-up can be removed by trimming the x/y-axis blank areas, each packed map can be smaller than its actual size (the blank areas on the four sides of the map are cut), and the size of the dress-up map is changed to 1×1 (the actual picture can be restored from the three values orig, size, and offset). The original size of each map changed to 1×1 is then written into the model data, so that the front end can generate a canvas with the same size as the dress-up picture according to the original size. One example of code for removing the dress-up information corresponding to the blank maps is as follows:
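The patent's own export-tool code is not reproduced in this text; a minimal sketch of the idea — shrinking each blank dress-up region to a 1×1 placeholder while recording orig/offset so the front end can restore the real size — might look like this (all names are illustrative):

```javascript
// Hypothetical sketch of stripping blank dress-up maps from exported model
// data. The real Spine export tool and data layout differ.
function stripBlankMaps(regions) {
  return regions.map(region => {
    if (!region.isBlank) return region;
    return {
      ...region,
      // Remember the original dimensions so the front end can declare a
      // canvas of the same size (orig/size/offset restore the real picture).
      orig:   { width: region.width, height: region.height },
      offset: { x: 0, y: 0 },
      // The packed texture region itself shrinks to a 1x1 placeholder.
      width: 1,
      height: 1,
    };
  });
}

const regions = [
  { name: "body",  width: 128, height: 256, isBlank: false },
  { name: "dress", width: 64,  height: 128, isBlank: true },
];
const stripped = stripBlankMaps(regions);
console.log(stripped[1].width, stripped[1].orig.width);
```

Only the blank region is shrunk; real picture regions pass through untouched.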
Fig. 9 is a schematic flow chart of the dress-up drawing provided by the embodiment of the present invention. Referring to fig. 9, in actual implementation, after the blank maps are removed (in an embodiment, only the maps of the object's skin type are kept), the size data of the removed blank maps is written into the animation file of the object, so that after the front end parses the animation file it declares a canvas of the same size and then performs the dress-up drawing of the object. When the object's dress-up is changed, only the previous decoration is erased and the latest decoration is drawn. Since there are no blank maps, the memory occupation of the GPU is greatly reduced.
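The erase-then-draw step described above can be sketched as follows; clearRect and drawImage are the standard Canvas 2D calls, while the wrapper function and the stub context are hypothetical:

```javascript
// Sketch of the erase-then-draw dress-up step. clearRect/drawImage are the
// standard Canvas 2D API calls; the surrounding wrapper is hypothetical.
function redrawDressing(ctx, canvas, image) {
  ctx.clearRect(0, 0, canvas.width, canvas.height); // erase previous decoration
  ctx.drawImage(image, 0, 0);                       // draw the latest one
}

// Outside the browser we can only stub the context to show the call order:
const calls = [];
const stubCtx = {
  clearRect: (...args) => calls.push("clearRect"),
  drawImage: (...args) => calls.push("drawImage"),
};
redrawDressing(stubCtx, { width: 64, height: 128 }, "newDress.png");
console.log(calls);
```

In a real page, `ctx` would be obtained from the declared canvas via `canvas.getContext("2d")` and `image` would be a loaded picture element.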
In the prior art, the Mipmap technique is enabled by default, i.e., texture sizes are by default restricted to powers of 2. Mipmap targets the texture map resources of an object: according to the distance of the camera, it generates eight corresponding maps of different pixel sizes, which obviously occupy a lot of extra memory space. The skeletal animation of the embodiment of the invention does not involve changes in the distance (size) of the object; therefore, the power-of-2 restriction is not needed (see fig. 8), so that when the front end plays the animation, only the map file of the texture map at one specific pixel size is loaded in memory, greatly reducing the memory occupation of the maps.
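The memory cost of a full mip chain versus a single texture level can be estimated as follows (assuming 4 bytes per RGBA pixel; this arithmetic is an illustration, not measured data from the embodiment):

```javascript
// Estimate texture memory for a full mip chain vs. a single level,
// assuming 4 bytes per pixel (RGBA8). Illustrative arithmetic only.
function textureBytes(width, height) {
  return width * height * 4;
}

function mipChainBytes(width, height) {
  let total = 0;
  while (width >= 1 && height >= 1) {
    total += textureBytes(width, height);
    if (width === 1 && height === 1) break;
    width = Math.max(1, width >> 1);   // each mip level halves each dimension
    height = Math.max(1, height >> 1);
  }
  return total;
}

const base  = textureBytes(1024, 1024);   // single level
const chain = mipChainBytes(1024, 1024);  // full mip chain
console.log(chain / base);                // chain costs roughly 1/3 more
```

Skipping the mip chain thus saves roughly a third of the texture memory per map, on top of removing the power-of-2 padding.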
In practical application, the description file generated by the animation editor often includes, besides the skeleton information and skin information of the object, a plurality of actions of the object at different time sequence points. In the related art, the front end loads the description file including the data of all these actions when playing the skeletal animation — that is, all the action data of the object is loaded at once — even though many actions of the object are only used in specific scenes. Referring to fig. 10, an exemplary diagram of the memory occupied by each action according to the embodiment of the present invention, it can be seen that loading all the action data of the object into memory at the same time greatly affects the playing smoothness of the animation.
In the embodiment of the present invention, the description file generated by the animation editor is split (by the terminal or the server). As shown in fig. 11, a schematic diagram of splitting the skeletal model data generated by the animation editor: after the skeletal model data of the object is split, an animation file corresponding to each action of the object (each action corresponds to a time sequence point, and the time sequence points of two actions may be the same or different) and an animation file of the skeletal model of the object are obtained, forming the animation file set corresponding to the skeletal model data. The animation file of the skeletal model of the object includes: the description files of the object's skin information and skeleton information, and the map file of the object's texture map. Because the action data of the object is split, during animation rendering the front end can initially load only the model data of the object and the two necessary pieces of action data, then load the data of each action on demand based on the time axis and re-integrate it into the model data. One example of a method for integrating the split action data into the model data is as follows:
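The patent's own merging code is not reproduced in this text; a minimal sketch of re-integrating an on-demand-loaded action into the model data might look like this (field names are hypothetical, not the actual model-data layout):

```javascript
// Hypothetical sketch of merging an on-demand-loaded action file back into
// the skeleton model data already held in memory. Real field names differ.
function mergeAction(modelData, actionName, actionData) {
  modelData.animations = modelData.animations || {};
  modelData.animations[actionName] = actionData; // re-integrate the action
  return modelData;
}

// Model data loaded first with only the necessary actions:
const modelData = {
  bones: [{ name: "root" }],
  animations: { idle: { duration: 1.0 }, walk: { duration: 0.8 } },
};

// Later, when the time axis reaches it, load "jump" and merge it in:
mergeAction(modelData, "jump", { duration: 0.6 });
console.log(Object.keys(modelData.animations));
```

Only the actions actually reached on the time axis ever occupy memory; the rest stay on disk or on the server.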
The generation and splitting of the skeletal-animation files by the animation editor have now been described; next, the playing process of the skeletal animation at the terminal side is explained.
As an alternative embodiment of the above-mentioned animation processing method, fig. 12 shows an optional flowchart of the animation processing method provided by the embodiment of the present invention, where the method is applied to a terminal on which an animation processing client is provided. Referring to fig. 12, steps 201 to 205 are described below.
Step 201: the client obtains a set of animation files in which animation information of the object is recorded.
Step 202: and loading the animation files in the animation file set in the memory of the client, wherein the loaded animation files correspond to the skeleton model of the object.
Here, the animation file corresponding to the skeletal model of the object is recorded with the information of the skeletal model.
Step 203: sequentially loading the animation files in the animation file set in the memory of the client based on the sequence of time points, wherein the loaded animation files correspond to the actions of the object at different time sequence points.
The animation files corresponding to the actions of the object at different time sequence points record the action information of the object at those time sequence points.
Step 204: and analyzing the animation file loaded by the memory of the client to generate a rendering object comprising a skeleton model and actions of the object.
Step 205: and detecting that the rendering time of the rendering object of the corresponding action arrives, and performing animation rendering based on the generated rendering object to obtain the animation conforming to the action of the object at the target time sequence point.
As an alternative embodiment of the above-mentioned animation processing method, fig. 13 shows an alternative flowchart of the animation processing method provided by the embodiment of the present invention, where the animation processing method of the embodiment of the present invention is applied to a terminal, and an animation processing client is disposed on the terminal, and referring to fig. 13, the animation processing method provided by the embodiment of the present invention includes:
step 301: the terminal sends an animation playing request to the server.
In actual implementation, the animation playing request may carry information indicating the animation requested to be played, such as an identification or a name of the animation.
Step 302: and receiving the animation file set sent by the server.
Here, the animation file set is a set composed of a plurality of animation files obtained by splitting a skeletal animation file generated by an animation editor, and it records the animation information of the object. In this embodiment, the splitting of the skeletal animation file generated by the animation editor may be implemented by the server; in other embodiments, the splitting may be implemented by the terminal after it obtains the skeletal animation file returned by the server. It should be noted that, in some embodiments, the action data in the skeletal animation file is split based on time sequence points, that is, the animation file of a corresponding action contains the one or more pieces of action data corresponding to one time sequence point (it may be understood that the actions of the object correspond to a plurality of time sequence points, each time sequence point corresponds to one piece of key frame data, and each time sequence point may correspond to a plurality of fragment actions). In other embodiments, the actions are split based on their different attribute features, and the animation file of a corresponding action contains the action data of one action of the object.
In one embodiment, the animation file collection may comprise: an animation file corresponding to a skeletal model of the object and an animation file corresponding to an action of the object;
wherein the animation file corresponding to the actions of the object includes a plurality of description files of the respective actions of the object at different time sequence points; in an embodiment, the animation files of the actions include description files of actions corresponding to at least two time sequence points. The action data of the object at the same time sequence point forms the description file of one action, that is, the action of the object at one time sequence point may be an action combination comprising a plurality of actions.
The animation file of the skeletal model of the corresponding object includes: a map file of a texture map of an object, a description file of skeleton information of the object, and a description file of skin information of the object.
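The split described above can be sketched as follows. The shape of the editor-exported file (`bones`, `skins`, `atlas`, `animations`) is a hypothetical stand-in, since the patent only states which kinds of information the file contains:

```javascript
// Hypothetical shape of an editor-exported skeletal animation file; the
// patent only states that it contains skin, skeleton, texture-map and
// action data, so these field names are assumptions.
function splitSkeletalFile(exported) {
  const modelFile = {
    bones: exported.bones,   // skeleton information
    skins: exported.skins,   // skin information
    atlas: exported.atlas,   // texture map description
  };
  // One animation file per action, i.e. per time sequence point.
  const actionFiles = Object.keys(exported.animations).map((name) => ({
    name,
    keyframes: exported.animations[name],
  }));
  return { modelFile, actionFiles };
}

const exported = {
  bones: ["root"],
  skins: { default: {} },
  atlas: "object.atlas",
  animations: { idle: [{ t: 0 }], walk: [{ t: 0 }, { t: 1 }] },
};
const { modelFile, actionFiles } = splitSkeletalFile(exported);
```

After the split, `modelFile` plus the first two entries of `actionFiles` are enough for the initial render, and the remaining action files can stay on the server until their time sequence points approach.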
Step 303: and loading the animation file of the skeleton model of the corresponding object in the animation file set in the memory of the client.
In practical application, after obtaining the animation file set of an object, the terminal determines the animation file of the skeletal model of the object and the animation files of the actions of the object in the set. The terminal side first loads the map file of the texture map of the object, the description file of the skeleton information, and the description file of the skin information into memory. In this way, when no description file of an action has been loaded, that is, when only the map file of the texture map, the description file of the skeleton information, and the description file of the skin information are loaded, parsing the loaded description files to generate rendering objects and then performing animation rendering yields a static model of the object.
In some embodiments, the map file loaded in memory is the map file of a texture map of one specific pixel size, replacing the map files of texture maps of multiple pixel sizes produced by the default mechanism. For example, the animation editor side does not constrain texture dimensions to powers of 2, i.e. the Mipmap technique is not enabled, so the memory does not need to additionally load 7 texture maps of different pixel sizes; only the texture map of the specific pixel size that meets the actual requirement is loaded, greatly reducing the memory occupation.
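A rough arithmetic sketch of the saving (assuming square power-of-two textures at 4 bytes per RGBA pixel): a 128-pixel base map carries 7 additional mipmap levels (64 down to 1 pixel), and the full chain costs about one third more memory than the single map the embodiment loads:

```javascript
// Assumes square textures with 4 bytes per RGBA pixel.
function textureBytes(size) {
  return size * size * 4;
}
// Total bytes for the base map plus every smaller mipmap level
// (halving the edge length down to 1 pixel).
function mipmapChainBytes(size) {
  let total = 0;
  for (let s = size; s >= 1; s = Math.floor(s / 2)) total += textureBytes(s);
  return total;
}

const single = textureBytes(128);      // the one map the embodiment loads
const chain = mipmapChainBytes(128);   // base map + 7 smaller levels (64..1)
```

Because each level has a quarter of the pixels of the one above it, the chain converges to 4/3 of the base map's size, so skipping it saves roughly 25% of the texture memory per map (on top of not generating the smaller levels at all).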
Step 304: based on the sequence of time points, the description file of the action of the first time sequence point and the description file of the action of the second time sequence point are loaded in the memory of the client.
In actual implementation, the description files of the plurality of actions of the object correspond to a plurality of different time sequence points (in some embodiments, each time sequence point corresponds to one description file of an action, and each description file of an action records a plurality of pieces of fragment action data of the object), and these different time sequence points form a time sequence. In an embodiment, each action of the object (e.g. the fragment action set corresponding to a certain time sequence point) may be played a limited number of times, i.e. M times, where M is a positive integer, e.g. 10.
It should be noted that, in an embodiment, step 303 and step 304 may be executed simultaneously, that is, the animation file of the skeletal model of the object and the description files of the actions at the first and second time sequence points are loaded at the same time, so that when performing animation rendering the terminal obtains an animation of the object having the features of both action description files.
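The two-step loading policy of steps 303–304 — fetch the model plus the first two action files immediately, defer the rest to the time axis — can be sketched as follows; `planLoads` and the file names are illustrative, not from the patent:

```javascript
// planLoads and the file names are illustrative; the actual network
// requests are left out so the scheduling logic stands alone.
function planLoads(actionPoints) {
  // actionPoints: ascending time sequence points, each naming one action file.
  const immediate = actionPoints.slice(0, 2); // first and second points (step 304)
  const deferred = actionPoints.slice(2);     // loaded later, in time order
  return { immediate, deferred };
}

const points = [
  { t: 0, file: "idle.json" },
  { t: 1, file: "walk.json" },
  { t: 2, file: "run.json" },
  { t: 3, file: "jump.json" },
];
const plan = planLoads(points);
```

Only the `immediate` entries block the first render; each `deferred` entry can be requested shortly before its time sequence point arrives, keeping peak memory proportional to the actions actually in play.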
Step 305: the loaded animation file is parsed to generate a rendered object that includes a skeletal model and actions of the object.
Here, in actual implementation, the generated rendering object including the skeletal model of the object includes: rendering objects corresponding to skeleton information, rendering objects corresponding to skin information and rendering objects corresponding to map information.
Further, the animation file loaded in the memory of the client is parsed to obtain the statement of a blank map for the decoration of the object; based on the statement of the blank map, a blank canvas of the size corresponding to the blank map is generated, and the decoration of the object is drawn on the blank canvas. Correspondingly, if a reloading (re-dressing) operation is performed on the object, the decoration of the object drawn on the blank canvas is cleared, and the updated decoration of the object is drawn on the canvas after the drawn decoration has been cleared. After the user clicks reloading, the reloading instruction of the user triggers loading of the description file of the decoration corresponding to the instruction, so that the decoration S of the object in the animation is replaced. It should be noted that, in some embodiments, the reloading of the object may be completed without a user instruction: if reloading is indicated in the animation files of the skeletal model of the object loaded at different time sequence points, then when the corresponding time arrives, the reloading operation is executed automatically, that is, the decoration S of the object on the canvas is cleared and the re-dressing is carried out.
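A DOM-free sketch of this canvas-based decoration flow; a real implementation would declare an HTMLCanvasElement sized from the blank-map statement, and the function names here are assumptions:

```javascript
// A plain object stands in for the 2D canvas so the sketch runs without a DOM.
function createDressingCanvas(blankMapStatement) {
  // The canvas takes the size declared for the blank map.
  return {
    width: blankMapStatement.width,
    height: blankMapStatement.height,
    layers: [],
  };
}
function drawDecoration(canvas, decoration) {
  canvas.layers.push(decoration);
}
// Reloading: clear the drawn decoration, then draw the updated one.
function redress(canvas, newDecorations) {
  canvas.layers.length = 0;
  newDecorations.forEach((d) => drawDecoration(canvas, d));
}

const canvas = createDressingCanvas({ width: 256, height: 256 });
drawDecoration(canvas, "hat");
drawDecoration(canvas, "scarf");
redress(canvas, ["crown"]); // a reloading instruction arrives
```

The key point matches the memory argument above: only the declared size of the blank map is needed, so no placeholder image ever has to be loaded into memory.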
In one embodiment, the loaded animation files comprise a json file and an atlas file; the loaded json file and atlas file are parsed to generate a spine object, and the spine object is added into a container to be rendered at a fixed interval, thereby playing the animation.
Step 306: and detecting that the rendering time of the rendering object corresponding to the action arrives, performing animation rendering based on the generated rendering object, obtaining and playing the animation conforming to the action of the object target time sequence point.
Here, the rendering of the rendering object may be based on the OpenGL image operation interface: according to the specific skeleton features indicated by the skeleton information of the object and the deformation features of the texture map indicated by the skin information, the obtained rendering objects are animation-rendered to obtain an animation object with the specific skeleton features and a specific external contour. When the rendering time arrives, it is aligned with the time sequence points in the time sequence, and interpolation is performed according to the time difference between two consecutive time sequence points (key frame times), thereby forming the animation of the animation object conforming to the action at the target time sequence point.
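The interpolation between two consecutive key-frame times can be sketched as simple linear interpolation; a single bone rotation angle is assumed here for illustration, since the patent does not specify which quantity is interpolated:

```javascript
// prevKey / nextKey: { t, angle } — a single bone rotation angle is assumed.
function interpolate(prevKey, nextKey, renderTime) {
  if (renderTime <= prevKey.t) return prevKey.angle;
  if (renderTime >= nextKey.t) return nextKey.angle;
  // Position of the rendering time between the two key frame times.
  const ratio = (renderTime - prevKey.t) / (nextKey.t - prevKey.t);
  return prevKey.angle + (nextKey.angle - prevKey.angle) * ratio;
}

const a = { t: 1.0, angle: 0 };
const b = { t: 2.0, angle: 90 };
const mid = interpolate(a, b, 1.5); // halfway between the two key frames
```

Clamping outside the `[prevKey.t, nextKey.t]` range keeps the pose stable when the rendering clock lands before the first or after the last key frame of the pair.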
In practical implementation, after the animation file of the skeletal model loaded in memory and the description files of the actions of the first two time sequence points are parsed and rendered, the animation file of each remaining time sequence point (other than the first and second) is sequentially loaded in the memory of the client based on the sequence of time points. That is, the action data is loaded into memory along the time axis: the description file of each action is loaded in order of its time point, and animation rendering is performed after each load.
The following effects can be achieved by applying the embodiment of the invention:
the action data of the object is not loaded into memory all at once, but loaded on demand based on the sequence of time points, which greatly reduces memory occupation;
the map of the object loaded in memory is a single map file of a specific pixel size, rather than eight map files of different pixel sizes, which reduces memory occupation;
the blank map that merely occupies space for the decoration of the object is not loaded; instead, a canvas of the same size is declared and the decoration is drawn on it, which removes the memory occupation of the blank map;
with memory occupation greatly reduced, page playing is smoother, and the risks of page blackout, stuttering and the like caused by insufficient memory are reduced.
The embodiment of the present invention further provides an animation processing device 200, and fig. 14 is a schematic diagram of the animation processing device provided by the embodiment of the present invention, as shown in fig. 14, where the device includes:
an acquisition module 11 for acquiring an animation file set in which animation information of an object is recorded;
the loading module 12 is configured to load, in a memory of the client, an animation file in the animation file set, where the loaded animation file corresponds to the skeletal model of the object;
Based on the sequence of time points, sequentially loading the animation files in the animation file set in the memory of the client, wherein the loaded animation files correspond to actions of different time sequence points;
the parsing module 13 is configured to parse the animation file loaded in the memory of the client, and generate a rendering object that includes a skeletal model and an action of the object;
and the rendering module 14 is configured to detect that a rendering time of a rendering object corresponding to the action arrives, and perform animation rendering based on the generated rendering object, so as to obtain an animation of the action conforming to the object target time sequence point.
In an embodiment, the obtaining module 11 is further configured to obtain an animation file set including animation files of a skeletal model of the object and animation files of an action;
the animation file of the action is split from the animation model data of the object and comprises description files of the action corresponding to at least two time sequence points.
In one embodiment, the loading module 12 is further configured to determine, based on the sequence of time points, a description file of the action at a first time-series point in the sequence and a description file of the action at a second time-series point in the sequence;
And loading the description file of the action of the first time sequence point and the description file of the action of the second time sequence point in a memory of the client.
In one embodiment, the loading module 12 is further configured to obtain an animation file of a time sequence point in the animation file set, where the time sequence point is other than the first time sequence point and the second time sequence point in the sequence;
and sequentially loading the obtained animation file of each time sequence point in the memory of the client based on the sequence of the time points.
In an embodiment, the loading module 12 is further configured to load, in a memory of the client, a map file of a texture map included in the skeletal model of the object, a description file of skeletal information included in the skeletal model of the object, and a description file of skin information included in the skeletal model of the object.
In an embodiment, the rendering module 14 is further configured to obtain a rendering object corresponding to skeleton information, a rendering object corresponding to skin information, and a map file of the loaded texture map in the generated rendering object;
and according to the specific skeleton characteristics indicated by the skeleton information and the deformation characteristics of the texture map indicated by the skin information, performing animation rendering on the obtained rendering object to obtain an animation object with the specific skeleton characteristics and the specific external contour.
In an embodiment, the loading module 12 is further configured to load, in the memory of the client, a mapping file of a texture mapping corresponding to a specific pixel size, where the mapping file of the texture mapping of the specific pixel size is used to replace mapping files of texture mapping of multiple pixel sizes of a default mechanism.
In an embodiment, the parsing module 13 is further configured to parse the animation file loaded in the memory of the client to obtain a statement of a blank map for the decoration of the object;
correspondingly, the device further comprises:
the drawing module 15 is used for generating a blank canvas with a size corresponding to the blank map based on the statement of the blank map;
and drawing the decoration corresponding to the object on the blank canvas.
In an embodiment, the drawing module 15 is further configured to clear the decoration corresponding to the object drawn on the blank canvas;
and draw the updated decoration of the object on the canvas after the drawn decoration has been cleared.
It should be noted that: in the animation processing device provided in the above embodiment, when performing animation processing, only the division of each program module is used for illustration, in practical application, the processing allocation may be performed by different program modules according to needs, that is, the internal structure of the device is divided into different program modules, so as to complete all or part of the processing described above. In addition, the animation processing device and the animation processing method provided in the above embodiments belong to the same concept, and detailed implementation procedures of the animation processing device are shown in the method embodiments, which are not repeated here.
The embodiment of the invention also provides a readable storage medium, which may include: a removable storage device, a random access memory (RAM, Random Access Memory), a read-only memory (ROM, Read-Only Memory), a magnetic disk, an optical disk, or other media capable of storing program code. The readable storage medium stores an executable program;
the executable program is configured to, when executed by a processor, implement:
obtaining an animation file set recorded with animation information of an object;
loading an animation file in the animation file set in a memory of the client, wherein the loaded animation file corresponds to the skeleton model of the object;
based on the sequence of time points, sequentially loading the animation files in the animation file set in the memory of the client, wherein the loaded animation files correspond to actions of different time sequence points;
analyzing the animation file loaded by the memory of the client to generate a rendering object comprising a skeleton model and actions of the object;
and detecting that the rendering time of the rendering object corresponding to the action arrives, and performing animation rendering based on the generated rendering object to obtain the animation of the action conforming to the object target time sequence point.
The executable program is further configured to, when executed by the processor, implement:
obtaining an animation file set comprising an animation file of the skeletal model of the object and animation files of the actions of the object;
the animation files of the actions are split from the animation model data of the object and comprise description files of actions corresponding to at least two time sequence points.
The executable program is further configured to, when executed by the processor, implement:
determining a description file of the action of a first time series point in the sequence and a description file of the action of a second time series point in the sequence based on the sequence of time points;
and loading the description file of the action of the first time sequence point and the description file of the action of the second time sequence point in a memory of the client.
The executable program is further configured to, when executed by the processor, implement:
acquiring animation files of time sequence points except a first time sequence point and a second time sequence point in the sequence in the animation file set;
and sequentially loading the obtained animation file of each time sequence point in the memory of the client based on the sequence of the time points.
The executable program is further configured to, when executed by the processor, implement:
and loading a mapping file of a texture mapping included in the skeleton model of the object, a description file of skeleton information included in the skeleton model of the object and a description file of skin information included in the skeleton model of the object in a memory of the client.
The executable program is further configured to, when executed by the processor, implement:
acquiring a rendering object corresponding to skeleton information in the generated rendering object, a rendering object corresponding to skin information and a mapping file of the loaded texture mapping;
and according to the specific skeleton characteristics indicated by the skeleton information and the deformation characteristics of the texture map indicated by the skin information, performing animation rendering on the obtained rendering object to obtain an animation object with the specific skeleton characteristics and the specific external contour.
The executable program is further configured to, when executed by the processor, implement:
and loading a mapping file of the texture mapping corresponding to a specific pixel size in a memory of the client, wherein the mapping file of the texture mapping with the specific pixel size is used for replacing mapping files of texture mapping with various pixel sizes of a default mechanism.
The executable program is further configured to, when executed by the processor, implement:
analyzing the animation file loaded by the memory of the client to obtain a statement of the blank map for the decoration of the object;
generating a blank canvas with a size corresponding to the blank map based on the statement of the blank map;
and drawing the decoration corresponding to the object on the blank canvas.
The executable program is further configured to, when executed by the processor, implement:
clearing the decoration corresponding to the object drawn on the blank canvas;
and drawing the updated decoration of the object on the canvas after the drawn decoration has been cleared.
Those skilled in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware associated with program instructions, where the foregoing program may be stored in a computer readable storage medium, and when executed, the program performs steps including the above method embodiments; and the aforementioned storage medium includes: a removable storage device, a random access Memory (RAM, random Access Memory), a Read-Only Memory (ROM), a magnetic disk or an optical disk, or other various media capable of storing program codes.
Alternatively, the above-described integrated units of the present invention may be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solution of the embodiments of the present invention may be embodied essentially or in a part contributing to the related art in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a removable memory device, a RAM, a ROM, a magnetic disk, or an optical disk.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

1. A method of animation processing, the method comprising:
obtaining an animation file set; the animation file set records animation information of the object;
loading an animation file in the animation file set in a memory of the client, wherein the loaded animation file corresponds to the skeleton model of the object;
sequentially loading the animation files in the animation file set in the memory of the client based on the sequence of time points, wherein the loaded animation files correspond to the actions of the object at different time sequence points;
analyzing the animation file loaded by the memory of the client to generate a rendering object comprising a skeleton model and actions of the object and a statement of a blank map for the decoration of the object;
generating a blank canvas with a size corresponding to the blank map based on the statement of the blank map;
drawing the decoration corresponding to the object on the blank canvas;
and when the rendering time of the rendering object corresponding to the action is detected, performing animation rendering based on the generated rendering object to obtain the animation of the action conforming to the object target time sequence point.
2. The method of claim 1, wherein the obtaining the set of animation files comprises:
Obtaining an animation file of a skeleton model of the object and an animation file of an action of the object;
forming an animation file set in which animation information of the object is recorded, based on the obtained animation file;
the animation file of the action is split from the animation model data of the object and comprises description files of the action corresponding to at least two time sequence points.
3. The method of claim 1, wherein the sequentially loading the animation files in the set of animation files in the memory of the client based on the sequence of time points, the loaded animation files corresponding to the actions of the object at different time sequence points, comprises:
determining a description file of the action of a first time series point in the sequence and a description file of the action of a second time series point in the sequence based on the sequence of time points;
and loading the description file of the action of the first time sequence point and the description file of the action of the second time sequence point in a memory of the client.
4. The method of claim 1, wherein the sequentially loading the animation files in the set of animation files in the memory of the client based on the sequence of time points, the loaded animation files corresponding to the actions of the object at different time sequence points, comprises:
Obtaining an animation file in the animation file set, wherein the time sequence points corresponding to the obtained animation file exclude a first time sequence point and a second time sequence point in the sequence;
and sequentially loading the obtained animation files of the actions of different time sequence points in the memory of the client based on the sequence of the time points.
5. The method of claim 1, wherein the loading the animation file in the set of animation files in the memory of the client, the loaded animation file corresponding to the skeletal model of the object, comprises:
the following files are loaded in the memory of the client: the method comprises the steps of mapping files of texture maps included in a skeleton model of an object, description files of skeleton information included in the skeleton model of the object and description files of skin information included in the skeleton model of the object.
6. The method of claim 5, wherein the animated rendering based on the generated rendering object comprises:
acquiring a rendering object corresponding to skeleton information in the generated rendering object, a rendering object corresponding to skin information and a mapping file of the loaded texture mapping;
And according to the specific skeleton characteristics indicated by the skeleton information and the deformation characteristics of the texture map indicated by the skin information, performing animation rendering on the obtained rendering object to obtain an animation object with the specific skeleton characteristics and the specific external contour.
7. The method of claim 5, wherein loading, in the memory of the client, a mapping file of a texture mapping included in a bone model of the object comprises:
loading a mapping file of a texture mapping corresponding to a specific pixel size in a memory of the client; wherein the particular pixel size is used to replace a plurality of pixel sizes included in a default mechanism for generating the set of animation files.
8. The method of claim 1, wherein the method further comprises:
clearing the decoration corresponding to the object drawn on the blank canvas;
and drawing the updated decoration of the object on the canvas after the drawn decoration has been cleared.
9. An animation processing device, characterized in that the device comprises:
the acquisition module is used for acquiring an animation file set; the animation file set records animation information of the object;
The loading module is used for loading the animation files in the animation file set in the memory of the client, and the loaded animation files correspond to the skeleton model of the object;
based on the sequence of time points, sequentially loading the animation files in the animation file set in the memory of the client, wherein the loaded animation files correspond to the actions of the object at different time sequence points;
the analysis module is used for analyzing the animation file loaded in the memory of the client to generate a rendering object comprising a skeleton model and actions of the object and a statement of a blank map for the decoration of the object;
the drawing module is used for generating a blank canvas with the size corresponding to the blank map based on the statement of the blank map; drawing the decoration corresponding to the object on the blank canvas;
and the rendering module is used for detecting that the rendering time of the rendering object corresponding to the action arrives, and performing animation rendering based on the generated rendering object to obtain the animation of the action conforming to the object target time sequence point.
10. The apparatus of claim 9, wherein
the acquisition module is also used for acquiring an animation file of the skeleton model of the object and an animation file of the action of the object;
Forming an animation file set in which animation information of the object is recorded, based on the obtained animation file;
the animation file of the action is split from the animation model data of the object and comprises description files of the action corresponding to at least two time sequence points.
11. The apparatus of claim 9, wherein
the loading module is further used for determining a description file of the action of a first time sequence point in the sequence and a description file of the action of a second time sequence point in the sequence based on the sequence of the time points;
and loading the description file of the action of the first time sequence point and the description file of the action of the second time sequence point in a memory of the client.
12. The apparatus of claim 9, wherein
the loading module is further configured to obtain an animation file in the animation file set, where a time sequence point corresponding to the obtained animation file excludes a first time sequence point and a second time sequence point in the sequence;
and sequentially loading the animation files of the actions corresponding to the acquired different time sequence points in the memory of the client based on the sequence of the time points.
13. A computer-readable storage medium, characterized in that a computer-executable program is stored, which when executed by a processor, implements the animation processing method according to any one of claims 1 to 8.
14. An animation processing device, characterized in that the device comprises:
a memory for storing a computer-executable program;
and a processor for implementing the animation processing method according to any one of claims 1 to 8 by executing the computer-executable program stored in the memory.
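The rendering step common to the claims — detecting that a rendering object's scheduled time has arrived and only then rendering it — can be sketched as a simple scheduling loop. This is an illustrative sketch, not the patented method; the function names and the busy-wait structure are assumptions for clarity:

```python
import time

def run_render_loop(rendering_objects, render, now=time.monotonic):
    """Render each object once its scheduled rendering time arrives.

    `rendering_objects` is a hypothetical list of (render_time, obj) pairs;
    `render` is the callback that performs the actual animation rendering.
    A real engine would sleep or vsync instead of busy-waiting.
    """
    pending = sorted(rendering_objects, key=lambda entry: entry[0])
    rendered = []
    while pending:
        render_time, obj = pending[0]
        # Detect that the rendering time of this rendering object has arrived.
        if now() >= render_time:
            render(obj)
            rendered.append(obj)
            pending.pop(0)
    return rendered
```

Injecting the clock (`now`) keeps the loop testable with a fake timer, which is also how one would drive it from a game engine's frame callback.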
CN201711219344.1A 2017-11-28 2017-11-28 Animation processing method, device and storage medium Active CN108010112B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711219344.1A CN108010112B (en) 2017-11-28 2017-11-28 Animation processing method, device and storage medium

Publications (2)

Publication Number Publication Date
CN108010112A CN108010112A (en) 2018-05-08
CN108010112B true CN108010112B (en) 2023-08-22

Family

ID=62054438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711219344.1A Active CN108010112B (en) 2017-11-28 2017-11-28 Animation processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN108010112B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109658484A (en) * 2018-12-21 2019-04-19 上海哔哩哔哩科技有限公司 A kind of Automatic Generation of Computer Animation method and Automatic Generation of Computer Animation system
CN110060320A (en) * 2019-04-18 2019-07-26 成都四方伟业软件股份有限公司 Animation producing method and device based on WEBGL
CN112396681A (en) * 2019-08-13 2021-02-23 上海哔哩哔哩科技有限公司 Animation generation method and device and storage medium
CN110507986B (en) * 2019-08-30 2023-08-22 网易(杭州)网络有限公司 Animation information processing method and device
CN110738716B (en) * 2019-10-12 2023-12-22 天津芸芸科技有限公司 PIXI-based spine animation multi-part local reloading method
CN111798545A (en) * 2019-11-05 2020-10-20 厦门雅基软件有限公司 Method and device for playing skeleton animation, electronic equipment and readable storage medium
CN110930492B (en) * 2019-11-20 2023-11-28 网易(杭州)网络有限公司 Model rendering method, device, computer readable medium and electronic equipment
CN110992448B (en) * 2019-11-28 2023-06-30 网易(杭州)网络有限公司 Animation processing method, device, electronic equipment and storage medium
CN111124579B (en) * 2019-12-24 2023-12-19 北京金山安全软件有限公司 Special effect rendering method and device, electronic equipment and storage medium
CN111862276B (en) * 2020-07-02 2023-12-05 南京师范大学 Automatic skeletal animation production method based on formalized action description text
CN111951356B (en) * 2020-08-11 2022-12-09 深圳市前海手绘科技文化有限公司 Animation rendering method based on JSON data format
CN112037311B (en) * 2020-09-08 2024-02-20 腾讯科技(深圳)有限公司 Animation generation method, animation playing method and related devices
CN112562049B (en) * 2021-02-26 2021-05-18 湖北亿咖通科技有限公司 Method for playing system image
CN113207037B (en) * 2021-03-31 2023-04-07 影石创新科技股份有限公司 Template clipping method, device, terminal, system and medium for panoramic video animation
CN113360823A (en) * 2021-06-03 2021-09-07 广州趣丸网络科技有限公司 Animation data transmission method, device, equipment and storage medium
CN113421321B (en) * 2021-07-09 2024-03-19 北京七维视觉传媒科技有限公司 Rendering method and device for animation, electronic equipment and medium
CN113516741A (en) * 2021-07-14 2021-10-19 山东齐鲁数通科技有限公司 Animation model importing method and device, terminal equipment and storage medium
CN113992977A (en) * 2021-10-21 2022-01-28 郑州阿帕斯数云信息科技有限公司 Video data processing method and device
CN115115316B (en) * 2022-07-25 2023-01-06 天津市普迅电力信息技术有限公司 Method for simulating storage material flowing-out and warehousing operation direction-guiding type animation based on Cesium

Citations (7)

Publication number Priority date Publication date Assignee Title
CN102385515A (en) * 2011-10-21 2012-03-21 广州市久邦数码科技有限公司 GO image animation engine based on Android system
CN102880616A (en) * 2011-07-15 2013-01-16 腾讯科技(深圳)有限公司 Browser page loading method and device
CN103413343A (en) * 2013-08-30 2013-11-27 广州市久邦数码科技有限公司 3D (Three-Dimensional) graphic animation engine
CN103606180A (en) * 2013-11-29 2014-02-26 广州菲动软件科技有限公司 Rendering method and device of 3D skeletal animation
CN105976417A (en) * 2016-05-27 2016-09-28 腾讯科技(深圳)有限公司 Animation generating method and apparatus
CN106528201A (en) * 2016-10-10 2017-03-22 网易(杭州)网络有限公司 Method and device for cartoon loading in game
CN106970800A (en) * 2017-04-01 2017-07-21 腾讯科技(深圳)有限公司 Costume changing method and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN103098466B (en) * 2010-09-13 2016-08-17 索尼电脑娱乐公司 Image processing apparatus and image processing method

Similar Documents

Publication Publication Date Title
CN108010112B (en) Animation processing method, device and storage medium
EP4198909A1 (en) Image rendering method and apparatus, and computer device and storage medium
CN110058685B (en) Virtual object display method and device, electronic equipment and computer-readable storage medium
CN109389661B (en) Animation file conversion method and device
CN110189246B (en) Image stylization generation method and device and electronic equipment
US20220080318A1 (en) Method and system of automatic animation generation
CN104850388B (en) web page rendering method and device
CN111950056B (en) BIM display method and related equipment for building informatization model
EP4182816A1 (en) Automatic website data migration
CN110322571B (en) Page processing method, device and medium
CN112581567A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113411664A (en) Video processing method and device based on sub-application and computer equipment
CN106447756B (en) Method and system for generating user-customized computer-generated animations
EP4315255A1 (en) Facial synthesis in augmented reality content for advertisements
WO2023197762A1 (en) Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product
US20190371039A1 (en) Method and smart terminal for switching expression of smart terminal
CN111589111B (en) Image processing method, device, equipment and storage medium
CN108062339B (en) Processing method and device of visual chart
CN110197459B (en) Image stylization generation method and device and electronic equipment
CN116152416A (en) Picture rendering method and device based on augmented reality and storage medium
CN115641397A (en) Method and system for synthesizing and displaying virtual image
WO2022179054A1 (en) Method for playing system image
CN114331808A (en) Action posture storage method, device, medium and electronic equipment
CN116115995A (en) Image rendering processing method and device and electronic equipment
CN110930499B (en) 3D data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant