CN112348928A - Animation synthesis method, animation synthesis device, electronic device, and medium - Google Patents

Animation synthesis method, animation synthesis device, electronic device, and medium

Info

Publication number
CN112348928A
CN112348928A
Authority
CN
China
Prior art keywords
information
initial
new
parameter information
target
Prior art date
Legal status
Pending
Application number
CN202011337695.4A
Other languages
Chinese (zh)
Inventor
赵姣姣
樊梦媛
杨兵
Current Assignee
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN202011337695.4A
Publication of CN112348928A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the disclosure provide an animation synthesis method, an animation synthesis device, an electronic device, and a storage medium. The method comprises the following steps: determining the playing time of each dynamic effect element in a plurality of dynamic effect elements, wherein each dynamic effect element has a corresponding initial configuration file, and the initial configuration file comprises initial external time parameter information, initial resource information and initial layer information; modifying, according to the playing time of each dynamic effect element, the initial external time parameter information in the initial configuration file corresponding to that dynamic effect element to obtain modified external time parameter information; generating new resource information and new layer information according to the modified external time parameter information, the initial resource information and the initial layer information; generating a new configuration file according to the modified external time parameter information, the new resource information and the new layer information; determining a target configuration file according to the plurality of new configuration files; and invoking the target configuration file to synthesize the animation.

Description

Animation synthesis method, animation synthesis device, electronic device, and medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an animation synthesis method, an animation synthesis apparatus, an electronic device, and a storage medium.
Background
With the continuous development of computer, network, and electronic technologies, watching animations has become a widespread form of entertainment. An animation comprises a plurality of animation units, and each animation unit has a corresponding initial configuration file. In other words, an animation is composed of a plurality of animation units, and synthesizing the animation from these animation units is a process of modifying the initial configuration file of each animation unit to obtain a new configuration file and generating the configuration file of the animation from the plurality of new configuration files.
In implementing the disclosed concept, the inventors found at least the following problem in the related art: the animation synthesis operation implemented with the related art is highly complex.
Disclosure of Invention
In view of this, the disclosed embodiments provide an animation synthesis method, an animation synthesis device, an electronic device, and a storage medium.
One aspect of the present disclosure provides an animation synthesis method, including: determining the playing time of each dynamic effect element in a plurality of dynamic effect elements, wherein each dynamic effect element has a corresponding initial configuration file, and the initial configuration file comprises initial external time parameter information, initial resource information and initial layer information; according to the playing time of each dynamic effect element, modifying the initial external time parameter information in the initial configuration file corresponding to each dynamic effect element to obtain modified external time parameter information; generating new resource information and new layer information according to the modified external time parameter information, the initial resource information and the initial layer information; generating a new configuration file according to the modified external time parameter information, the new resource information and the new layer information; determining a target configuration file according to a plurality of the new configuration files; and calling the target configuration file to synthesize the animation.
Another aspect of an embodiment of the present disclosure provides an animation synthesis apparatus, including: a first determining module, configured to determine a playing time of each dynamic effect element in a plurality of dynamic effect elements, where each dynamic effect element has a corresponding initial configuration file, and the initial configuration file includes initial external time parameter information, initial resource information, and initial layer information; the modification module is used for modifying the initial external time parameter information in the initial configuration file corresponding to each dynamic effect element according to the playing time of each dynamic effect element to obtain modified external time parameter information; a first generating module, configured to generate new resource information and new layer information according to the modified external time parameter information, the initial resource information, and the initial layer information; a second generating module, configured to generate a new configuration file according to the modified external time parameter information, the new resource information, and the new layer information; a second determining module, configured to determine a target configuration file according to the plurality of new configuration files; and the calling module is used for calling the target configuration file to synthesize the animation.
Another aspect of an embodiment of the present disclosure provides an electronic device including: one or more processors; memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as described above.
Another aspect of the embodiments of the present disclosure provides a computer-readable storage medium having stored thereon executable instructions, which when executed by a processor, cause the processor to implement the method as described above.
Another aspect of an embodiment of the present disclosure provides a computer program product comprising a computer program which, when executed by a processor, implements the method as described above.
According to the embodiments of the present disclosure, the playing time of each dynamic effect element in a plurality of dynamic effect elements is determined, where each dynamic effect element has a corresponding initial configuration file that includes initial external time parameter information, initial resource information, and initial layer information. According to the playing time of each dynamic effect element, the initial external time parameter information in the corresponding initial configuration file is modified to obtain modified external time parameter information; new resource information and new layer information are generated according to the modified external time parameter information, the initial resource information, and the initial layer information; a new configuration file is generated according to the modified external time parameter information, the new resource information, and the new layer information; a target configuration file is determined according to the plurality of new configuration files; and the target configuration file is called to synthesize the animation. Because the initial external time parameter information in the initial configuration file corresponding to each dynamic effect element is modified according to the playing time of that element, the time-related information in the initial layer information does not need to be modified recursively, which simplifies the operation of generating the configuration file of the animation. Since this operation is simplified, and the configuration file of the animation is generated based on the configuration files of the plurality of dynamic effect elements used to generate the animation, the complexity of the animation synthesis operation is reduced, so that the technical problem that the animation synthesis operation implemented with the related art is highly complex is at least partially overcome. Meanwhile, because the correlation between different dynamic effect elements is small, a high reuse rate of dynamic effect elements is achieved.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates time parameter information according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates an exemplary system architecture to which animation synthesis methods may be applied, according to an embodiment of the disclosure;
FIG. 3 schematically illustrates a flow diagram of a method of animation synthesis according to an embodiment of the disclosure;
FIG. 4 schematically illustrates a list of dynamic effect elements according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a playing time setting according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a flowchart of another method for processing configuration files of dynamic effect elements according to an embodiment of the present disclosure;
FIG. 7 schematically illustrates a block diagram of an animation synthesis apparatus according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a block diagram of an electronic device suitable for implementing an animation synthesis method according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
In order to better understand the technical solutions of the embodiments of the present disclosure, the basic concepts related to the embodiments of the present disclosure will be explained first.
The time axis (timeline) is used for organizing and controlling animation changes and actions, and its main components are layers and frames. The time axis identifies the number of animation frames and the position of each frame: each scale mark on the time axis represents a frame, and the number on the scale mark represents the serial number of that frame.
The layers in an animation are similar to a stack of transparent sheets: each sheet carries a different picture, and the sheets are stacked together to form a more complex picture. If content is added at a certain position in an upper layer, the content at the same position in the layer below is blocked; if that position in the upper layer is empty, the content at the same position in the layer below is not blocked and remains visible through the upper layer. Each layer in the animation is independent of the others.
The layer information (layers) may include the information of one or more layers. The information of each layer may include a resource reference identifier, built-in time parameter information, built-in size information, layer transformation attribute information, and the like. The built-in time parameter information may include a start key frame (ip), an end key frame (op), a start time (st), and the like. The built-in size information may include built-in width information and built-in height information. The layer transformation attribute information may include anchor point attribute information, position attribute information, transparency attribute information, rotation attribute information, and scaling attribute information.
The resource information (assets) may provide vector graphics resources for the layers. The resource information may include at least one resource item, each of which may be characterized by a resource item identification. The resource reference identifier in the layer information has a corresponding resource item identifier.
The configuration file may be invoked for playing the animation. The configuration file may include resource information, layer information, external time parameter information, external size parameter information, font information, and other information. Other information may be understood as information that does not affect animation synthesis, such as name information, among others. Each animation unit has a corresponding configuration file. The configuration file may be in the form of a JSON (JavaScript Object Notation) file. JSON is a lightweight data exchange format.
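To make the relationships among these fields easier to follow, the sketch below shows one possible shape of such a configuration file as TypeScript interfaces. The field names (ip, op, st, w, h, assets, layers, refId) follow the terms used in this disclosure and common Lottie-style conventions; the exact schema is an assumption, not the file published with this application.

```typescript
// Minimal, illustrative sketch of the configuration file structure described above.
// The exact schema is an assumption based on the field names mentioned in the text.
interface BuiltInTimeParams {
  ip: number; // start key frame of the layer
  op: number; // end key frame of the layer
  st: number; // start time of the layer
}

interface LayerInfo extends BuiltInTimeParams {
  refId?: string; // resource reference identifier, points to a resource item id
  w?: number;     // built-in width information
  h?: number;     // built-in height information
  ks?: unknown;   // layer transformation attributes (anchor point, position, transparency, rotation, scaling)
}

interface ResourceItem {
  id: string;           // resource item identifier
  layers?: LayerInfo[];  // a resource item may itself carry layer information
  [key: string]: unknown;
}

interface ConfigFile {
  ip: number;             // external start key frame
  op: number;             // external end key frame
  st?: number;            // external start time
  w: number;              // external width information
  h: number;              // external height information
  assets: ResourceItem[]; // resource information
  layers: LayerInfo[];    // layer information
  fonts?: unknown;        // font information
  nm?: string;            // name information (does not affect synthesis)
}
```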
According to an embodiment of the present disclosure, the external time parameter information represents the overall time parameter information of a composition unit on the time axis, while the built-in time parameter information represents the time parameter information of each layer on the time axis. Each composition unit has at least one corresponding layer. In the related art, the composition unit is an animation unit. If the time parameter information is viewed hierarchically, the external time parameter information can be regarded as the top-level time parameter information, and the built-in time parameter information as the lower-level time parameter information.
Fig. 1 schematically illustrates time parameter information according to an embodiment of the present disclosure. As shown in fig. 1, the animation unit has two corresponding layers, layer 1 and layer 2. The built-in time parameter information corresponding to layer 1 is built-in time parameter information 1, and the built-in time parameter information corresponding to layer 2 is built-in time parameter information 2. The external time parameter information corresponding to the animation unit is composed of built-in time parameter information 1 and built-in time parameter information 2.
Illustratively, part of the information of the configuration file of the animation unit corresponding to fig. 1 is as follows:
[Configuration file excerpt reproduced only as an image in the original publication.]
In this configuration file, the part above "assets" represents the external time parameter information, and the time parameter information inside "layers" is the built-in time parameter information, which includes the built-in time parameter information of the two layers. It can be seen from the configuration file that the external time parameter information sits at a higher level than the built-in time parameter information.
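Because the excerpt above survives only as an image, the following is a hypothetical reconstruction of its structure, showing that the external ip/op/st sit above "assets" while each entry in "layers" carries its own built-in ip/op/st; the numeric values are illustrative assumptions.

```typescript
// Hypothetical reconstruction of the structure of the configuration file for the
// animation unit of Fig. 1; the numeric values are illustrative assumptions.
const animationUnitConfig = {
  ip: 0,    // external start key frame (top level, above "assets")
  op: 150,  // external end key frame
  st: 0,    // external start time
  assets: [],
  layers: [
    { nm: "layer 1", ip: 0,  op: 60,  st: 0 },  // built-in time parameter information 1
    { nm: "layer 2", ip: 30, op: 150, st: 30 }, // built-in time parameter information 2
  ],
};
```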
In the related art, the configuration file of an animation is generated based on the configuration files of a plurality of animation units, and the configuration file of the animation is called to synthesize the animation. For convenience of explanation, the configuration file before modification is referred to as an initial configuration file, and the configuration file after modification is referred to as a new configuration file. The initial configuration file and the new configuration file contain the same information fields. Accordingly, the initial configuration file may include initial external time parameter information, initial external size parameter information, initial resource information, initial layer information, initial font information, and other information. The initial layer information may include initial built-in time parameter information, initial built-in size parameter information, initial layer transformation attribute information, and the like.
According to the related art, the initial external time parameter information and the initial built-in time parameter information in the initial layer information of each layer are modified by recursive traversal according to the set playing time of the animation unit to obtain a new configuration file of the animation unit, and the new configuration file of the animation unit is used as a configuration file for generating the animation. It should be noted that the layer information in the new configuration file of the animation unit is not reconstructed layer information, but is obtained by modifying the initial built-in time parameter information in the initial layer information. The initial resource information in the initial configuration file of the animation unit is not modified; that is, the resource information in the new configuration file of the animation unit is the initial resource information in the initial configuration file.
In the process of implementing the present disclosure, the inventors found that in the related art the initial external time parameter information and the initial built-in time parameter information in the initial layer information of each layer are modified by recursive traversal. Each animation unit includes a plurality of layers, each layer may itself contain nested layers, the layer nesting can be deep, and this modification approach requires a large number of fields to be changed, so generating the configuration file of the animation with the related art is complicated. Since the configuration file of the animation is generated from the configuration files of the animation units used to generate it, the complexity of the animation synthesis operation is correspondingly high. Further, when an animation is synthesized from a plurality of animation units, the reuse rate of animation units is low because the correlation between the animation units is large.
To solve the above problems, the inventors found that an animation can be understood as a combination of a pattern animation effect, a decoration animation effect, and a background animation effect. An animation unit is the basis for composing the animation, and an animation unit is likewise a combination of a pattern animation effect, a decoration animation effect, and a background animation effect, so the animation can be split at a finer granularity than the animation unit, which can also be understood as splitting by hierarchy. When animation synthesis is performed on the results of this finer-grained split, a higher reuse rate can be achieved because the correlation between the different finer-grained split results is smaller. This addresses the problem of the low reuse rate of animation units in the related art. The embodiments of the present disclosure refer to the result of splitting an animation at a finer granularity than the animation unit as a dynamic effect element; that is, a dynamic effect element may be a pattern animation effect, a decoration animation effect, or a background animation effect. An animation unit may include a plurality of dynamic effect elements. Similarly, each dynamic effect element has a corresponding initial configuration file, and the initial configuration file may include initial external time parameter information, initial external size parameter information, initial resource information, initial layer information, initial font information, and other information. The initial layer information may include initial built-in time parameter information, initial built-in size parameter information, initial layer transformation attribute information, and the like.
The embodiments of the present disclosure provide an animation synthesis method and apparatus, and an electronic device to which the method can be applied. The method generates the configuration file of the animation based on the configuration files of a plurality of dynamic effect elements. For the configuration files of the dynamic effect elements used to generate the animation, the method modifies the initial external time parameter information in the initial configuration file of each dynamic effect element according to the playing time of that element, without modifying the initial layer information (i.e., the initial built-in time parameter information) in the initial configuration file. On this basis, new resource information and new layer information are obtained based on the modified external time parameter information, the initial resource information, and the initial layer information; a new configuration file is obtained according to the modified external time parameter information, the new resource information, and the new layer information; a target configuration file is determined according to the plurality of new configuration files; and the target configuration file is called to synthesize the animation. The method is described below with reference to specific examples.
FIG. 2 schematically illustrates an exemplary system architecture 200 to which animation synthesis methods may be applied, according to an embodiment of the disclosure. It should be noted that fig. 2 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 2, the system architecture 200 according to this embodiment may include terminal devices 201, 202, 203, a network 204 and a server 205. The network 204 serves as a medium for providing communication links between the terminal devices 201, 202, 203 and the server 205. Network 204 may include various connection types, such as wired and/or wireless communication links, and so forth.
The user may use the terminal devices 201, 202, 203 to interact with the server 205 via the network 204 to receive or send messages or the like. The terminal devices 201, 202, 203 may have installed thereon various communication client applications, such as a shopping-like application, a web browser application, a search-like application, an instant messaging tool, a mailbox client, and/or social platform software, etc. (by way of example only).
The terminal devices 201, 202, 203 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 205 may be a server providing various services, such as a background management server (for example only) providing support for websites browsed by users using the terminal devices 201, 202, 203. The background management server may analyze and perform other processing on the received data such as the user request, and feed back a processing result (e.g., a webpage, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the animation synthesis method provided by the embodiments of the present disclosure may generally be executed by the terminal device 201, 202, or 203, or may also be executed by another terminal device different from the terminal device 201, 202, or 203. Accordingly, the animation synthesis apparatus provided by the embodiments of the present disclosure may also be disposed in the terminal device 201, 202, or 203, or in another terminal device different from the terminal device 201, 202, or 203.
It should be understood that the number of terminal devices, networks, and servers in fig. 2 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
FIG. 3 schematically shows a flow diagram of a method of animation synthesis according to an embodiment of the disclosure.
As shown in fig. 3, the method includes operations S310 to S360.
In operation S310, a playing time of each dynamic effect element in a plurality of dynamic effect elements is determined, where each dynamic effect element has a corresponding initial configuration file, and the initial configuration file includes initial external time parameter information, initial resource information, and initial layer information.
In an embodiment of the present disclosure, a plurality of dynamic effect elements may be obtained from a dynamic effect element list, and a corresponding playing time may be determined for each dynamic effect element. The playing time of each dynamic effect element can be used as a basis for determining the order of the dynamic effect elements. The playing time refers to the time interval between the first frame and the current frame.
The initial configuration file may include initial external time parameter information, initial resource information, and initial layer information. Since each dynamic effect element may have one or more layers, the initial layer information may include the initial layer information of each layer.
Fig. 4 schematically shows a list of dynamic effect elements according to an embodiment of the present disclosure. As shown in fig. 4, three dynamic effect elements, namely dynamic effect element A, dynamic effect element B, and dynamic effect element C, are selected from the dynamic effect element list.
Fig. 5 schematically shows a playing time setting according to an embodiment of the present disclosure. As shown in fig. 5, corresponding playing times are set on the time axis for dynamic effect element A, dynamic effect element B, and dynamic effect element C selected in fig. 4.
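As a purely illustrative sketch of this selection step, the structure below pairs each selected element with a playing time; the element names and values are assumptions, and the playing time is expressed here in frames for convenience.

```typescript
// Illustrative only: selecting dynamic effect elements and assigning each a playing
// time (the interval between the first frame and the frame at which the element is
// placed on the time axis). Names and values are assumptions.
const selectedElements = [
  { name: "dynamic effect element A", playTime: 0 },
  { name: "dynamic effect element B", playTime: 40 },
  { name: "dynamic effect element C", playTime: 90 },
];
```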
In operation S320, the initial external time parameter information in the initial configuration file corresponding to each dynamic effect element is modified according to the playing time of each dynamic effect element, so as to obtain modified external time parameter information.
In the embodiment of the present disclosure, after the playing time of each dynamic effect element is obtained, the initial external time parameter information in the initial configuration file corresponding to each dynamic effect element may be modified according to the playing time of that element; that is, the start key frame, the end key frame, and the start time in the initial external time parameter information are modified. The initial built-in time parameter information in the initial layer information of the initial configuration file is kept unchanged; in other words, there is no need to recursively modify the initial built-in time parameter information in the initial layer information.
For example, part of the information of the initial configuration file of dynamic effect element A is as follows:
[Initial configuration file excerpt reproduced only as an image in the original publication.]
Modifying the initial external time parameter information in the initial configuration file corresponding to dynamic effect element A yields a modified configuration file whose partial information is as follows:
[Modified configuration file excerpt reproduced only as an image in the original publication.]
Comparing the initial configuration file and the modified configuration file of dynamic effect element A, the difference between the two is that the "op" of the external time parameter information (i.e., the modified external time parameter information) in the modified configuration file is 190, whereas the "op" of the initial external time parameter information is 150; the rest of the modified external time parameter information is the same as the initial external time parameter information. That is, the initial external time parameter information is modified according to the playing time of the dynamic effect element, and the initial built-in time parameter information is not modified.
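The sketch below illustrates operation S320 under one possible interpretation: only the external time parameters are adjusted, and the built-in time parameters inside "layers" are left untouched. The text confirms only that ip, op, and st are candidates for modification and gives one data point (op changing from 150 to 190); the concrete adjustment rule used here (extending op by the playing time) is therefore an assumption, not the patented rule.

```typescript
// Sketch of operation S320 under an assumed adjustment rule: the external end key frame
// is extended by the element's playing time, while "layers" (and its built-in ip/op/st)
// and "assets" are carried over unchanged.
interface ElementConfig {
  ip: number;        // external start key frame
  op: number;        // external end key frame
  st: number;        // external start time
  assets: unknown[]; // initial resource information (untouched here)
  layers: unknown[]; // initial layer information, including built-in ip/op/st (untouched here)
}

function modifyExternalTime(initial: ElementConfig, playTimeInFrames: number): ElementConfig {
  return {
    ...initial,                        // assets and layers carried over unchanged
    op: initial.op + playTimeInFrames, // external end key frame extended by the playing time
  };
}
```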
In operation S330, new resource information and new layer information are generated according to the modified external time parameter information, the initial resource information, and the initial layer information.
In the embodiment of the disclosure, after the modified external time parameter information is obtained, the new resource information and a new layer may be constructed.
According to an embodiment of the present disclosure, the initial configuration file may further include initial external size parameter information. After the modified external time parameter information is obtained, the initial resource information can be modified according to the initial layer information to obtain the new resource information, and the new layer information can be obtained according to the modified external time parameter information and at least one piece of initial external size parameter information, so as to construct a new layer. The new layer information may include new built-in time parameter information, new built-in size parameter information, and new layer transformation attribute information. The new layer transformation attribute information is determined by a preset layer transformation attribute rule and at least one piece of initial external size parameter information; the preset layer transformation attribute rule serves as the basis for the new layer transformation attribute information. The new built-in time parameter information is the modified external time parameter information, and the new built-in size parameter information is the initial external size parameter information.
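The following sketch illustrates this construction: the new layer takes its built-in time parameters from the modified external time parameters, takes its built-in size from the initial external size, and references the wrapped resources via refId. The transform attributes are produced by the preset layer transformation attribute rule discussed later; the function and parameter names here are assumptions made for illustration.

```typescript
// Illustrative sketch of constructing the new layer information (operation S330):
// new built-in time = modified external time, new built-in size = initial external size.
// The transform attributes come from the preset layer transformation attribute rule
// (see the anchor point / position discussion below); names here are assumptions.
function buildNewLayerInfo(
  modifiedExternalTime: { ip: number; op: number; st: number },
  initialExternalSize: { w: number; h: number },
  newResourceItemId: string,     // e.g. "custom_comp_0"
  transformAttributes: unknown,  // produced by the preset layer transformation attribute rule
) {
  return {
    ip: modifiedExternalTime.ip, // new built-in start key frame
    op: modifiedExternalTime.op, // new built-in end key frame
    st: modifiedExternalTime.st, // new built-in start time
    w: initialExternalSize.w,    // new built-in width information
    h: initialExternalSize.h,    // new built-in height information
    refId: newResourceItemId,    // new resource reference identifier
    ks: transformAttributes,     // new layer transformation attribute information
  };
}
```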
In operation S340, a new configuration file is generated according to the modified external time parameter information, the new resource information, and the new layer information.
In the embodiment of the present disclosure, after obtaining the new resource information, the new layer information, and the modified external time parameter information corresponding to each dynamic effect element, a new configuration file corresponding to each dynamic effect element may be generated according to this information and the initial configuration file. In the new configuration file, the resource information, the layer information, and the external time parameter information are, respectively, the new resource information, the new layer information, and the modified external time parameter information; all other information in the new configuration file is consistent with the corresponding information in the initial configuration file.
In operation S350, a target profile is determined according to the plurality of new profiles.
In embodiments of the present disclosure, after obtaining a new configuration file for each dynamic effect element, the new configuration files of the plurality of dynamic effect elements may be composited into a target configuration file, which may be invoked when compositing the animation. According to an embodiment of the present disclosure, the configuration file may be a JSON file.
In operation S360, the target profile is called to synthesize the animation.
In an embodiment of the present disclosure, the target configuration file may be a JSON file. The target configuration file corresponds to the animation composed of the plurality of dynamic effect elements. The target configuration file can be called by Lottie to play the animation synthesized from the plurality of dynamic effect elements, and the animation synthesis effect can be checked through playback. Lottie is a library that supports Android, iOS, and React Native and realizes animation effects using JSON files.
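As an illustration of this playback step, the web flavor of Lottie (lottie-web) can load such a JSON configuration directly from memory. The snippet below is a generic usage sketch, not code from the disclosure; the container element id is an assumption.

```typescript
// Generic usage sketch: playing the synthesized target configuration with lottie-web.
import lottie from "lottie-web";

// `targetConfig` is the synthesized target configuration object (JSON) produced by
// merging the new configuration files of the dynamic effect elements.
declare const targetConfig: object;

const animation = lottie.loadAnimation({
  container: document.getElementById("animation-container")!, // host element (assumed to exist)
  renderer: "svg",
  loop: true,
  autoplay: true,
  animationData: targetConfig, // play directly from the in-memory JSON
});
```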
According to the technical solution of the embodiments of the present disclosure, the playing time of each dynamic effect element in a plurality of dynamic effect elements is determined, where each dynamic effect element has a corresponding initial configuration file that comprises initial external time parameter information, initial resource information, and initial layer information; according to the playing time of each dynamic effect element, the initial external time parameter information in the corresponding initial configuration file is modified to obtain modified external time parameter information; new resource information and new layer information are generated according to the modified external time parameter information, the initial resource information, and the initial layer information; a new configuration file is generated according to the modified external time parameter information, the new resource information, and the new layer information; a target configuration file is determined according to the plurality of new configuration files; and the target configuration file is called to synthesize the animation. Because the initial external time parameter information in the initial configuration file corresponding to each dynamic effect element is modified according to the playing time of that element, the time-related information in the initial layer information does not need to be modified recursively, which simplifies the operation of generating the configuration file of the animation. Since this operation is simplified, and the configuration file of the animation is generated based on the configuration files of the plurality of dynamic effect elements used to generate the animation, the complexity of the animation synthesis operation is reduced, so that the technical problem that the animation synthesis operation implemented with the related art is highly complex is at least partially overcome. Meanwhile, because the correlation between different dynamic effect elements is small, a high reuse rate of dynamic effect elements is achieved.
Optionally, on the basis of the above technical solution, the initial configuration file further includes initial external size parameter information and initial font information. Generating a new configuration file according to the modified external time parameter information, the new resource information, and the new layer information may include the following operations.
A new configuration file is generated according to the modified external time parameter information, the new resource information, the new layer information, the initial external size parameter information, and the initial font information.
In the embodiment of the present disclosure, the font information in the new configuration file corresponding to each dynamic effect element is the initial font information.
Optionally, on the basis of the above technical solution, generating new resource information and new layer information according to the modified external time parameter information, the initial resource information, and the initial layer information may include the following operations.
The initial resource information is modified according to the initial layer information to obtain the new resource information, and the new layer information is obtained according to the modified external time parameter information and at least one piece of initial external size parameter information.
In an embodiment of the present disclosure, the initial resource information may include at least one initial resource item. For each dynamic effect element, the initial layer information of the dynamic effect element may be set as a new resource item in the initial resource information; that is, the initial layer information is added to the initial resource information as a new resource item to obtain the new resource information. Each resource item can be characterized by a resource item identifier (id). Accordingly, the new resource item is characterized by a new resource item identifier, and the resource item identifiers of different new resource items are different. It should be noted that, since the initial layer information may include the information of multiple layers, adding the initial layer information to the initial resource information as a new resource item means adding the information of those multiple layers to the initial resource information as a single new resource item. Correspondingly, for each dynamic effect element, the information of these layers corresponds to the same new resource item, that is, to the same new resource item identifier.
According to an embodiment of the present disclosure, since the layer information may include built-in time parameter information, built-in size parameter information, and layer transformation attribute information, this information needs to be determined in order to construct a new layer. For each dynamic effect element, the new built-in time parameter information can be determined according to the modified external time parameter information of the dynamic effect element, the new built-in size parameter information can be determined according to the initial external size parameter information of the dynamic effect element, and the new layer transformation attribute information can be determined according to the initial external size parameter information of the plurality of dynamic effect elements. According to an embodiment of the present disclosure, the new layer information may further include a new resource reference identifier (refId), which serves as the identifier of the referenced resource item.
According to the embodiments of the present disclosure, the resource information provides resources for the layers: when the animation is played, a layer needs to call the corresponding resource item in the resource information, and each resource item has a corresponding resource item identifier. To ensure that a layer can accurately call the corresponding resource item in the corresponding resource information, the resource reference identifier in the layer information needs to be set according to the resource item identifier in the resource information, so that the resource item identifier of the resource item that the layer needs to call is consistent with the resource reference identifier in the layer information. On this basis, the new resource reference identifier is set according to the new resource item identifier, so that each new resource reference identifier has a corresponding new resource item identifier, and different new resource reference identifiers have different new resource item identifiers.
For example, the initial resource information and the initial layer information in the initial configuration file of dynamic effect element A are as follows, where the initial layer information includes the information of two layers, and the resource reference identifiers corresponding to the two layers are "refId": "comp_0" and "refId": "comp_1", respectively.
[Initial resource information and initial layer information reproduced only as an image in the original publication.]
The new resource information of the dynamic effect element A is as follows:
[New resource information reproduced only as an image in the original publication.]
Comparing the initial resource information of dynamic effect element A with its new resource information, it can be seen that, on the basis of the initial resource information, one resource item has been added to the new resource information. The resource item identifier of the added resource item is "id": "custom_comp_0", and this resource item corresponds to the initial layer information, that is, to the layer information whose resource reference identifiers are "refId": "comp_0" and "refId": "comp_1". That layer information now corresponds to the resource item whose resource item identifier is "id": "custom_comp_0".
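Because the excerpts above are available only as images, the following is a hypothetical sketch of the transformation they describe: the initial layers (referencing "comp_0" and "comp_1") are wrapped into one added resource item with id "custom_comp_0". Only those identifiers come from the disclosure; everything else is assumed for illustration.

```typescript
// Hypothetical sketch of the resource wrapping described above.
const initialAssets = [
  { id: "comp_0", layers: [/* ... */] },
  { id: "comp_1", layers: [/* ... */] },
];
const initialLayers = [
  { refId: "comp_0", ip: 0, op: 150, st: 0 },
  { refId: "comp_1", ip: 0, op: 150, st: 0 },
];

// New resource information: the initial layer information is added as one new resource item.
const newAssets = [
  ...initialAssets,
  { id: "custom_comp_0", layers: initialLayers },
];
```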
According to the embodiments of the present disclosure, the new resource information is obtained by modifying the initial resource information according to the initial layer information, and the new layer information is obtained according to the modified external time parameter information and at least one piece of initial external size parameter information, so that the new layer information is constructed at the layer level. In contrast, in the related art the initial layer information is modified rather than new layer information being reconstructed.
Optionally, on the basis of the above technical solution, obtaining new layer information according to the modified external time parameter information and the at least one initial external size parameter information may include the following operations.
Target time parameter information is determined according to the modified external time parameter information. Target layer transformation attribute information is determined according to a preset layer transformation attribute rule and at least one piece of initial external size parameter information. The new layer information is obtained according to the target time parameter information and the target layer transformation attribute information.
In the embodiment of the present disclosure, in order to construct a new layer for each dynamic effect element, the built-in time parameter information, the built-in size parameter information, and the layer transformation attribute information need to be determined. The modified external time parameter information corresponding to the dynamic effect element may be used as the new built-in time parameter information, referred to as the target time parameter information. The initial external size parameter information corresponding to the dynamic effect element may be used as the new built-in size parameter information, referred to as the target size parameter information. A layer transformation attribute rule can be preset to obtain the preset layer transformation attribute rule; the initial external size parameter information of the plurality of dynamic effect elements is processed according to the preset layer transformation attribute rule to determine the new layer transformation attribute information, referred to as the target layer transformation attribute information.
Optionally, on the basis of the above technical solution, the target layer transformation attribute information includes target anchor point attribute information and target position attribute information. Determining the target layer transformation attribute information according to the preset layer transformation attribute rule and the at least one piece of initial external size parameter information may include the following operations.
First center position information is determined according to the initial external size parameter information, and the first center position information is determined as the target anchor point attribute information. Second center position information is determined according to the plurality of pieces of initial external size parameter information, where the second center position information is the center position information of the animation synthesized from the plurality of dynamic effect elements, and the second center position information is determined as the target position attribute information.
In an embodiment of the present disclosure, the initial external size parameter information may include initial external width information and initial external height information. The target layer transformation attribute information may include target anchor point attribute information and target position attribute information, where the target anchor point attribute information refers to the position information corresponding to the center position of the dynamic effect element, that is, the center position information of the dynamic effect element, which is referred to as the first center position information.
According to an embodiment of the present disclosure, the target position attribute information indicates the position at which a dynamic effect element appears in the animation, where the animation is composed of a plurality of dynamic effect elements. The position information corresponding to the center position of the animation, that is, the center position information of the animation, is referred to as the second center position information, and the second center position information is determined as the target position attribute information. The reason for setting the target position attribute information to the second center position information (i.e., the center position information of the animation) is as follows: the initial external size parameter information of different dynamic effect elements may differ, the animation is synthesized from a plurality of dynamic effect elements, and when the animation is played, every dynamic effect element must appear within the display screen. This requires choosing a universal position that guarantees every dynamic effect element appears within the display screen, and the center position of the animation satisfies this condition. Therefore, the center position information of the animation (i.e., the second center position information) is determined as the target position attribute information.
According to the embodiments of the present disclosure, the first center position information (i.e., the target anchor point attribute information) includes first width information and first height information, where the first width information is one half of the initial external width information of the dynamic effect element, and the first height information is one half of its initial external height information. The second center position information (i.e., the target position attribute information) includes second width information and second height information, where the second width information is one half of the maximum initial external width information among all the initial external width information, and the second height information is one half of the maximum initial external height information among all the initial external height information.
According to the embodiments of the present disclosure, the target layer transformation attribute information may further include target transparency attribute information, target rotation attribute information, and target scaling attribute information, which may be set to default values.
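A sketch of this preset layer transformation attribute rule, as described above: the anchor point is the center of the element itself, the position is the center of the synthesized animation (half of the maximum width and height), and transparency, rotation, and scaling take defaults. The array layout and the concrete default values are assumptions made for illustration.

```typescript
// Sketch of the preset layer transformation attribute rule described above.
// Array layouts and default values are assumptions.
function buildTransformAttributes(
  element: { w: number; h: number },       // initial external size of this element
  allElements: { w: number; h: number }[], // initial external sizes of all elements
) {
  const maxW = Math.max(...allElements.map((e) => e.w));
  const maxH = Math.max(...allElements.map((e) => e.h));
  return {
    o: 100,                               // target transparency attribute (default: fully opaque)
    r: 0,                                 // target rotation attribute (default: no rotation)
    s: [100, 100, 100],                   // target scaling attribute (default: 100%)
    a: [element.w / 2, element.h / 2, 0], // target anchor point: center of the element
    p: [maxW / 2, maxH / 2, 0],           // target position: center of the synthesized animation
  };
}
```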
Illustratively, the new layer information of dynamic effect element A is as follows, where o represents the target transparency attribute information, r represents the target rotation attribute information, s represents the target scaling attribute information, a represents the target anchor point attribute information, p represents the target position attribute information, w represents the new built-in width information (i.e., the initial external width information), h represents the new built-in height information (i.e., the initial external height information), and refId represents the new resource reference identifier.
[New layer information reproduced only as an image in the original publication.]
According to the embodiments of the present disclosure, the new resource reference identifier included in the new layer information of dynamic effect element A is consistent with the new resource item identifier included in the new resource information of dynamic effect element A described above; that is, the new resource reference identifier is "refId": "custom_comp_0" and the new resource item identifier is "id": "custom_comp_0". When dynamic effect element A is played, the new resource item identifier that matches the new resource reference identifier is determined according to the new resource reference identifier of dynamic effect element A.
Optionally, on the basis of the above technical solution, determining the target configuration file according to a plurality of new configuration files may include the following operations.
Target external size parameter information is determined according to the initial external size parameter information in the plurality of new configuration files. Target external time parameter information is determined according to the modified external time parameter information in the plurality of new configuration files. Target resource information is determined according to the new resource information in the plurality of new configuration files. Target layer information is determined according to the new layer information in the plurality of new configuration files. Target font information is determined according to the initial font information in the plurality of new configuration files. The target configuration file is determined according to the target external size parameter information, the target external time parameter information, the target resource information, the target layer information, and the target font information.
In the embodiment of the present disclosure, for each dynamic effect element, the external size parameter information in the new configuration file of the dynamic effect element is the initial external size parameter information of the dynamic effect element, and the external time parameter information in the new configuration file of the dynamic effect element is the modified external time parameter information of the dynamic effect element.
Determining the target resource information from the new resource information in the plurality of new configuration files may include merging the new resource information of the plurality of dynamic effect elements into a first array. Determining the target layer information from the new layer information in the plurality of new configuration files may include merging the new layer information of the plurality of dynamic effect elements into a second array. It should be noted that if a layer is a background layer, the new layer information of that layer needs to be placed at the end of the second array.
Determining the target font information from the initial font information in the plurality of new configuration files may include merging the initial font information of the plurality of dynamic effect elements into a third array. Since the configuration file may also include other information that does not affect animation synthesis, such as name information, in addition to the external time parameter information, the external size parameter information, the resource information, the layer information, and the font information, default values may be adopted for the other information when synthesizing the target configuration file.
Optionally, on the basis of the foregoing technical solution, the initial external size parameter information in each new configuration file includes initial external width information and initial external height information. Determining the target external size parameter information according to the initial external size parameter information in the plurality of new configuration files may include the following operations.
Maximum width information is determined from the initial external width information in the plurality of new configuration files. Maximum height information is determined from the initial external height information in the plurality of new configuration files. The target external size parameter information is determined according to the maximum width information and the maximum height information.
In an embodiment of the disclosure, the target external size parameter information may be target external width information and target external height information, where the target external width information may be the largest width information of all the initial external width information, and the target external height information may be the largest height information of all the initial external height information.
According to the embodiment of the disclosure, the second width information is one half of the target external width information, and the second height information is one half of the target external height information.
Optionally, on the basis of the above technical solution, the modified external time parameter information in each new configuration file includes a start key frame and an end key frame. Determining the target external time parameter information according to the modified external time parameter information in the plurality of new configuration files may include the following operations.
A minimum start key frame is determined from the start key frames in the plurality of new configuration files. A maximum end key frame is determined from the end key frames in the plurality of new configuration files. The target external time parameter information is then determined according to the minimum start key frame and the maximum end key frame.
In an embodiment of the present disclosure, the target external time parameter information may include a target external start key frame and a target external end key frame. The target external start key frame may be the minimum start key frame of all the start key frames, and the target external end key frame may be the maximum end key frame of all the end key frames. Usually, the target external start key frame is 0, and the target external end key frame is the end key frame of the last dynamic effect element.
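A minimal sketch of this time derivation is shown below, assuming the modified start and end key frames are stored under the Lottie-style keys "ip" and "op"; these key names are assumptions rather than names taken from the disclosure.

```python
from typing import Any, Dict, List, Tuple


def target_external_time(new_configs: List[Dict[str, Any]]) -> Tuple[float, float]:
    """Minimum start key frame and maximum end key frame across all new
    configuration files."""
    target_start = min(cfg["ip"] for cfg in new_configs)  # usually 0
    target_end = max(cfg["op"] for cfg in new_configs)    # end key frame of the last element
    return target_start, target_end
```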
Optionally, on the basis of the foregoing technical solution, the new resource information includes at least one new resource item identifier, and each piece of new layer information includes a new resource reference identifier. Each new resource reference identifier has a corresponding new resource item identifier, and the new resource item identifiers corresponding to different new resource reference identifiers are different.
In the embodiment of the present disclosure, different new resource item identifiers are different from one another, and different new resource reference identifiers correspond to different new resource item identifiers. The reason for this setting is as follows: the resource information provides resources for the layers, so in order for a layer to accurately call the corresponding resource item in the corresponding resource information, the resource item identifier of the resource item that the layer needs to call must be consistent with the resource reference identifier in the layer information. On this basis, each new resource reference identifier is set according to the corresponding new resource item identifier, so that every new resource reference identifier has a corresponding new resource item identifier and different new resource reference identifiers have different new resource item identifiers.
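One way to keep this identifier constraint while merging is to rewrite each element's resource item identifiers so that they are globally unique and to update the resource reference identifiers in its layers accordingly. The sketch below assumes Lottie-style keys ("assets", "id", "layers", "refId") and a simple suffix scheme; both are illustrative assumptions, not part of the disclosure.

```python
from typing import Any, Dict


def rewrite_resource_identifiers(config: Dict[str, Any], element_index: int) -> Dict[str, Any]:
    """Make the new resource item identifiers of one dynamic effect element
    unique and keep its layers' resource reference identifiers consistent."""
    id_map: Dict[str, str] = {}
    for asset in config.get("assets", []):
        new_id = f"{asset['id']}_{element_index}"  # unique new resource item identifier
        id_map[asset["id"]] = new_id
        asset["id"] = new_id
    for layer in config.get("layers", []):
        ref = layer.get("refId")
        if ref in id_map:
            layer["refId"] = id_map[ref]           # matching new resource reference identifier
    return config
```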
Fig. 6 schematically shows a flowchart of another method for processing a configuration file of a dynamic effect element according to an embodiment of the present disclosure.
As shown in fig. 6, the method includes operations S601 to S621.
In operation S601, a play time of each of a plurality of dynamic effect elements is determined.
In the embodiment of the present disclosure, each dynamic effect element has a corresponding initial configuration file, and the initial configuration file includes initial external time parameter information, initial resource information, initial layer information, initial external size parameter information, and initial font information.
In operation S602, according to the playing time of each dynamic effect element, the initial external time parameter information in the initial configuration file corresponding to each dynamic effect element is modified, so as to obtain modified external time parameter information.
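One plausible reading of operation S602 is that each element's external start and end key frames are shifted by an offset derived from its playing time, so that the time-related values nested inside the layer information never need to be touched. The sketch below follows that reading and assumes Lottie-style keys "ip" and "op" plus a frame-rate key "fr"; both the reading and the key names are assumptions.

```python
from typing import Any, Dict


def shift_external_time(initial_config: Dict[str, Any], play_time_s: float) -> Dict[str, Any]:
    """Shift the external start/end key frames of one dynamic effect element
    by its playing time, leaving the layer-internal timing untouched."""
    new_config = dict(initial_config)
    offset = play_time_s * initial_config.get("fr", 30)  # "fr": assumed frame-rate key
    new_config["ip"] = initial_config["ip"] + offset     # modified start key frame
    new_config["op"] = initial_config["op"] + offset     # modified end key frame
    return new_config
```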
In operation S603, the initial resource information is modified according to the initial layer information, so as to obtain new resource information.
In an embodiment of the present disclosure, the new resource information comprises at least one new resource item identification.
In operation S604, target time parameter information is determined according to the modified external time parameter information.
In operation S605, first center position information is determined according to the initial external size parameter information.
In operation S606, the first center position information is determined as the target anchor point attribute information.
In operation S607, second center position information is determined according to the plurality of pieces of initial external size parameter information.
In an embodiment of the present disclosure, the second center position information is center position information of the animation synthesized by the plurality of dynamic effect elements.
In operation S608, the second center position information is determined as the target position attribute information.
In operation S609, new layer information is obtained according to the target time parameter information, the target anchor point attribute information, and the target position attribute information.
In an embodiment of the present disclosure, each new layer information includes a new resource reference identifier. The new resource reference mark has a corresponding new resource item mark, and the new resource item mark corresponding to different new resource reference marks is different.
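Putting operations S604 to S609 together, the sketch below builds one new layer entry per dynamic effect element: its anchor point is the element's own center (first center position information) and its position is the center of the composed animation (second center position information). The transform keys "ks", "a", "p", the time keys "ip"/"op", and the reference key "refId" are Lottie-style assumptions rather than names fixed by the disclosure.

```python
from typing import Any, Dict, List


def build_new_layer(modified_config: Dict[str, Any],
                    all_initial_sizes: List[Dict[str, int]],
                    ref_id: str) -> Dict[str, Any]:
    """One new layer entry: anchor at the element's own center, position at
    the center of the animation synthesized from all elements."""
    # First center position information -> target anchor point attribute information.
    anchor = [modified_config["w"] / 2, modified_config["h"] / 2, 0]

    # Second center position information -> target position attribute information.
    target_w = max(size["w"] for size in all_initial_sizes)
    target_h = max(size["h"] for size in all_initial_sizes)
    position = [target_w / 2, target_h / 2, 0]

    return {
        "refId": ref_id,                                # new resource reference identifier
        "ip": modified_config["ip"],                    # target time parameter information
        "op": modified_config["op"],
        "ks": {"a": {"k": anchor}, "p": {"k": position}},
    }
```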
In operation S610, a new configuration file is generated according to the modified external time parameter information, the new resource information, the new layer information, the initial external size parameter information, and the initial font information.
In operation S611, maximum width information is determined from the initial external width information in the plurality of new configuration files.
In operation S612, maximum height information is determined from the initial external height information in the plurality of new configuration files.
In operation S613, target external size parameter information is determined according to the maximum width information and the maximum height information.
In operation S614, a minimum start key frame is determined from the start key frames in the plurality of new configuration files.
In operation S615, a maximum end key frame is determined from the end key frames in the plurality of new configuration files.
In operation S616, target extrinsic time parameter information is determined according to the minimum start key frame and the maximum end key frame.
In operation S617, target resource information is determined according to new resource information in the plurality of new configuration files.
In operation S618, target layer information is determined according to new layer information in the plurality of new configuration files.
In operation S619, target font information is determined according to the initial font information in the plurality of new configuration files.
In operation S620, a target configuration file is determined according to the target external size parameter information, the target external time parameter information, the target resource information, the target layer information, and the target font information.
In an embodiment of the present disclosure, the target configuration file is intended to be called when the animation synthesized from the plurality of dynamic effect elements is played.
In operation S621, the target configuration file is called to synthesize the animation.
According to the technical scheme of the embodiment of the present disclosure, the initial external time parameter information in the initial configuration file corresponding to each dynamic effect element is modified according to the playing time of that dynamic effect element, and the time-related information in the initial layer information does not need to be modified recursively. The operation of generating the configuration file of the animation is therefore simplified, so that the technical problem that generating the configuration file of the animation with the related technology is complicated is at least partially solved, and the animation is synthesized at the time level. Meanwhile, because the relevance between different dynamic effect elements is small, a high reuse rate of the dynamic effect elements is achieved. In addition, new layer information is generated according to the modified external time parameter information, the preset layer transformation attribute rule, and the at least one piece of initial external size parameter information, so that the animation is also synthesized at the layer level. On this basis, since the operation of generating the configuration file of the animation is simplified and the configuration file of the animation is generated based on the configuration files of the plurality of dynamic effect elements, the complexity of the animation synthesis operation is reduced, and the technical problem that the animation synthesis operation implemented with the related technology is highly complex is at least partially overcome.
FIG. 7 schematically shows a block diagram of an animation synthesis apparatus according to an embodiment of the present disclosure.
As shown in FIG. 7, the animation synthesis apparatus 700 may include a first determining module 710, a modifying module 720, a first generating module 730, a second generating module 740, a second determining module 750, and a calling module 760.
The first determining module 710, the modifying module 720, the first generating module 730, the second generating module 740, the second determining module 750, and the calling module 760 are communicatively coupled.
The first determining module 710 is configured to determine a playing time of each dynamic effect element in the plurality of dynamic effect elements, where each dynamic effect element has a corresponding initial configuration file, and the initial configuration file includes initial external time parameter information, initial resource information, and initial layer information.
The modifying module 720 is configured to modify the initial external time parameter information in the initial configuration file corresponding to each dynamic effect element according to the playing time of each dynamic effect element, so as to obtain modified external time parameter information.
The first generating module 730 is configured to generate new resource information and new layer information according to the modified external time parameter information, the initial resource information, and the initial layer information.
The second generating module 740 is configured to generate a new configuration file according to the modified external time parameter information, the new resource information, and the new layer information.
The second determining module 750 is configured to determine the target configuration file according to the plurality of new configuration files.
The calling module 760 is configured to call the target configuration file to synthesize the animation.
According to the technical scheme of the embodiment of the present disclosure, the playing time of each dynamic effect element in a plurality of dynamic effect elements is determined, where each dynamic effect element has a corresponding initial configuration file that includes initial external time parameter information, initial resource information, and initial layer information. According to the playing time of each dynamic effect element, the initial external time parameter information in the corresponding initial configuration file is modified to obtain modified external time parameter information. New resource information and new layer information are generated according to the modified external time parameter information, the initial resource information, and the initial layer information, and a new configuration file is generated according to the modified external time parameter information, the new resource information, and the new layer information. A target configuration file is then determined according to the plurality of new configuration files and is called to synthesize the animation. Because the initial external time parameter information in the initial configuration file corresponding to each dynamic effect element is modified according to the playing time of that dynamic effect element, and the time-related information in the initial layer information does not need to be modified recursively, the operation of generating the configuration file of the animation is simplified. Since this operation is simplified and the configuration file of the animation is generated based on the configuration files of the plurality of dynamic effect elements, the complexity of the animation synthesis operation is reduced, so that the technical problem that the animation synthesis operation implemented with the related technology is highly complex is at least partially overcome. Meanwhile, because the relevance between different dynamic effect elements is small, a high reuse rate of the dynamic effect elements is achieved.
Optionally, on the basis of the above technical solution, the initial configuration file further includes initial external size parameter information and initial font information.
The second generating module 740 may include a generating submodule.
The generating submodule is configured to generate a new configuration file according to the modified external time parameter information, the new resource information, the new layer information, the initial external size parameter information, and the initial font information.
Optionally, on the basis of the above technical solution, the first generating module 730 may include a first obtaining submodule and a second obtaining submodule.
The first obtaining submodule is configured to modify the initial resource information according to the initial layer information to obtain new resource information.
The second obtaining submodule is configured to obtain new layer information according to the modified external time parameter information and at least one piece of initial external size parameter information.
Optionally, on the basis of the above technical solution, the second obtaining submodule may include a first determining unit, a second determining unit, and an obtaining unit.
The first determining unit is configured to determine target time parameter information according to the modified external time parameter information.
The second determining unit is configured to determine target layer transformation attribute information according to a preset layer transformation attribute rule and at least one piece of initial external size parameter information.
The obtaining unit is configured to obtain new layer information according to the target time parameter information and the target layer transformation attribute information.
Optionally, on the basis of the above technical solution, the target layer transformation attribute information includes target anchor point attribute information and target position attribute information.
The second determining unit may include a first determining subunit, a second determining subunit, a third determining subunit, and a fourth determining subunit.
The first determining subunit is configured to determine first center position information according to the initial external size parameter information.
The second determining subunit is configured to determine the first center position information as the target anchor point attribute information.
The third determining subunit is configured to determine second center position information according to the plurality of pieces of initial external size parameter information, where the second center position information is center position information of the animation synthesized by the plurality of dynamic effect elements.
The fourth determining subunit is configured to determine the second center position information as the target position attribute information.
Optionally, on the basis of the foregoing technical solution, the second determining module may include a first determining submodule, a second determining submodule, a third determining submodule, a fourth determining submodule, a fifth determining submodule, and a sixth determining submodule.
The first determining submodule is configured to determine target external size parameter information according to the initial external size parameter information in the plurality of new configuration files.
The second determining submodule is configured to determine target external time parameter information according to the modified external time parameter information in the plurality of new configuration files.
The third determining submodule is configured to determine target resource information according to the new resource information in the plurality of new configuration files.
The fourth determining submodule is configured to determine target layer information according to the new layer information in the plurality of new configuration files.
The fifth determining submodule is configured to determine target font information according to the initial font information in the plurality of new configuration files.
The sixth determining submodule is configured to determine a target configuration file according to the target external size parameter information, the target external time parameter information, the target resource information, the target layer information, and the target font information.
Optionally, on the basis of the foregoing technical solution, the initial external size parameter information in each new configuration file includes initial external width information and initial external height information.
The first determining submodule may include a third determining unit, a fourth determining unit, and a fifth determining unit.
The third determining unit is configured to determine maximum width information from the initial external width information in the plurality of new configuration files.
The fourth determining unit is configured to determine maximum height information from the initial external height information in the plurality of new configuration files.
The fifth determining unit is configured to determine target external size parameter information according to the maximum width information and the maximum height information.
Optionally, on the basis of the above technical solution, the modified external time parameter information in each new configuration file includes a start key frame and an end key frame.
The second determining submodule may include a sixth determining unit, a seventh determining unit, and an eighth determining unit.
The sixth determining unit is configured to determine a minimum start key frame from the start key frames in the plurality of new configuration files.
The seventh determining unit is configured to determine a maximum end key frame from the end key frames in the plurality of new configuration files.
The eighth determining unit is configured to determine target external time parameter information according to the minimum start key frame and the maximum end key frame.
Optionally, on the basis of the foregoing technical solution, the new resource information includes at least one new resource item identifier, and each piece of new layer information includes a new resource reference identifier. Each new resource reference identifier has a corresponding new resource item identifier, and the new resource item identifiers corresponding to different new resource reference identifiers are different.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented at least partially as a hardware Circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a Circuit, or implemented by any one of three implementations of software, hardware, and firmware, or any suitable combination of any of them. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any number of the first determining module 710, the modifying module 720, the first generating module 730, the second generating module 740, the second determining module 750, and the calling module 760 may be combined and implemented in one module/unit/sub-unit, or any one of these modules/units/sub-units may be split into a plurality of modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to an embodiment of the present disclosure, at least one of the first determining module 710, the modifying module 720, the first generating module 730, the second generating module 740, the second determining module 750, and the calling module 760 may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, or an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of the three implementations of software, hardware and firmware, or in any suitable combination of any of them. Alternatively, at least one of the first determining module 710, the modifying module 720, the first generating module 730, the second generating module 740, the second determining module 750, and the calling module 760 may be implemented at least in part as a computer program module that, when executed, may perform a corresponding function.
It should be noted that the animation synthesis apparatus part in the embodiment of the present disclosure corresponds to the animation synthesis method part in the embodiment of the present disclosure; for the description of the animation synthesis apparatus part, reference may be made to the animation synthesis method part, which is not described herein again.
Fig. 8 schematically shows a block diagram of an electronic device adapted to implement the above described method according to an embodiment of the present disclosure. The electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 8, an electronic device 800 according to an embodiment of the present disclosure includes a processor 801 that can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. The processor 801 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 801 may also include onboard memory for caching purposes. The processor 801 may include a single processing unit or multiple processing units for performing different actions of the method flows according to embodiments of the present disclosure.
In the RAM 803, various programs and data necessary for the operation of the electronic apparatus 800 are stored. The processor 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. The processor 801 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 802 and/or RAM 803. Note that the programs may also be stored in one or more memories other than the ROM 802 and RAM 803. The processor 801 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 800 may also include an input/output (I/O) interface 805, which is also connected to the bus 804. The electronic device 800 may also include one or more of the following components connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as necessary, so that a computer program read out therefrom is installed into the storage section 808 as necessary.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. The computer program, when executed by the processor 801, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 802 and/or RAM 803 described above and/or one or more memories other than the ROM 802 and RAM 803.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions.

Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or sub-combinations are not expressly recited in the present disclosure. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (13)

1. An animation composition method, comprising:
determining the playing time of each dynamic effect element in a plurality of dynamic effect elements, wherein each dynamic effect element has a corresponding initial configuration file, and the initial configuration file comprises initial external time parameter information, initial resource information and initial layer information;
according to the playing time of each dynamic effect element, modifying the initial external time parameter information in the initial configuration file corresponding to each dynamic effect element to obtain modified external time parameter information;
generating new resource information and new layer information according to the modified external time parameter information, the initial resource information and the initial layer information; and
generating a new configuration file according to the modified external time parameter information, the new resource information and the new layer information;
determining a target configuration file according to the plurality of new configuration files; and
calling the target configuration file to synthesize the animation.
2. The method of claim 1, wherein the initial configuration file further comprises initial external size parameter information and initial font information;
generating a new configuration file according to the modified external time parameter information, the new resource information and the new layer information, including:
generating the new configuration file according to the modified external time parameter information, the new resource information, the new layer information, the initial external size parameter information and the initial font information.
3. The method according to claim 2, wherein the generating new resource information and new layer information according to the modified external time parameter information, the initial resource information, and the initial layer information includes:
modifying the initial resource information according to the initial layer information to obtain new resource information; and
obtaining the new layer information according to the modified external time parameter information and at least one piece of the initial external size parameter information.
4. The method according to claim 3, wherein the obtaining the new layer information according to the modified external time parameter information and at least one of the initial external size parameter information includes:
determining target time parameter information according to the modified external time parameter information;
determining target layer transformation attribute information according to a preset layer transformation attribute rule and at least one piece of initial external size parameter information; and
obtaining the new layer information according to the target time parameter information and the target layer transformation attribute information.
5. The method according to claim 4, wherein the target layer transformation attribute information comprises target anchor point attribute information and target position attribute information;
the determining the target layer transformation attribute information according to the preset layer transformation attribute rule and the at least one piece of initial external size parameter information includes:
determining first center position information according to the initial external size parameter information;
determining the first center position information as the target anchor point attribute information;
determining second center position information according to the plurality of pieces of initial external size parameter information, wherein the second center position information is center position information of an animation synthesized by the plurality of dynamic effect elements; and
determining the second center position information as the target position attribute information.
6. The method according to any one of claims 2 to 5, wherein the determining a target configuration file according to the plurality of new configuration files comprises:
determining target external size parameter information according to the initial external size parameter information in the new configuration files;
determining target external time parameter information according to the modified external time parameter information in the new configuration files;
determining target resource information according to new resource information in the new configuration files;
determining target layer information according to the new layer information in the new configuration files;
determining target font information according to the initial font information in the new configuration files; and
determining the target configuration file according to the target external size parameter information, the target external time parameter information, the target resource information and the target font information.
7. The method of claim 6, wherein the initial external size parameter information in each of the new configuration files comprises initial external width information and initial external height information;
the determining of the target external size parameter information according to the initial external size parameter information in the new configuration files comprises:
determining maximum width information from the initial external width information in the plurality of new configuration files;
determining maximum height information from the initial external height information in the plurality of new configuration files; and
determining the target external size parameter information according to the maximum width information and the maximum height information.
8. The method of claim 6, wherein the modified external time parameter information in each of the new configuration files comprises a start key frame and an end key frame;
the determining target external time parameter information according to the modified external time parameter information in the new configuration files comprises:
determining a minimum start key frame from the start key frames in the plurality of new configuration files;
determining a maximum end key frame from the end key frames in the plurality of new configuration files; and
determining the target external time parameter information according to the minimum start key frame and the maximum end key frame.
9. The method according to any one of claims 1 to 5, wherein the new resource information includes at least one new resource item identifier, and each piece of new layer information includes a new resource reference identifier; each new resource reference identifier has a corresponding new resource item identifier, and the new resource item identifiers corresponding to different new resource reference identifiers are different.
10. An animation synthesis apparatus comprising:
a first determining module, configured to determine the playing time of each dynamic effect element in a plurality of dynamic effect elements, wherein each dynamic effect element has a corresponding initial configuration file, and the initial configuration file comprises initial external time parameter information, initial resource information and initial layer information;
a modifying module, configured to modify the initial external time parameter information in the initial configuration file corresponding to each dynamic effect element according to the playing time of each dynamic effect element, to obtain modified external time parameter information;
a first generating module, configured to generate new resource information and new layer information according to the modified external time parameter information, the initial resource information, and the initial layer information; and
a second generating module, configured to generate a new configuration file according to the modified external time parameter information, the new resource information and the new layer information;
a second determining module, configured to determine a target configuration file according to the plurality of new configuration files; and
a calling module, configured to call the target configuration file to synthesize the animation.
11. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-9.
12. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 9.
13. A computer program product comprising a computer program which, when executed by a processor, is adapted to carry out the method of any one of claims 1 to 9.
CN202011337695.4A 2020-11-25 2020-11-25 Animation synthesis method, animation synthesis device, electronic device, and medium Pending CN112348928A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011337695.4A CN112348928A (en) 2020-11-25 2020-11-25 Animation synthesis method, animation synthesis device, electronic device, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011337695.4A CN112348928A (en) 2020-11-25 2020-11-25 Animation synthesis method, animation synthesis device, electronic device, and medium

Publications (1)

Publication Number Publication Date
CN112348928A true CN112348928A (en) 2021-02-09

Family

ID=74365573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011337695.4A Pending CN112348928A (en) 2020-11-25 2020-11-25 Animation synthesis method, animation synthesis device, electronic device, and medium

Country Status (1)

Country Link
CN (1) CN112348928A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115272536A (en) * 2022-09-26 2022-11-01 深圳乐娱游网络科技有限公司 Animation playing method and device and electronic equipment

Similar Documents

Publication Publication Date Title
US8924839B2 (en) Electronic reader system with bookmarking and method of operation thereof
CN105320509B (en) Picture processing method and picture processing device
WO2019228013A1 (en) Method, apparatus and device for displaying rich text on 3d model
US20150143210A1 (en) Content Stitching Templates
US20130036196A1 (en) Method and system for publishing template-based content
US20140325349A1 (en) Real-time Representations of Edited Content
CN111970571B (en) Video production method, device, equipment and storage medium
US9906626B2 (en) Resource demand-based network page generation
US20170091152A1 (en) Generating grid layouts with mutable columns
WO2022033131A1 (en) Animation rendering method based on json data format
US10579229B2 (en) Customizable media player framework
EP4080507A1 (en) Method and apparatus for editing object, electronic device and storage medium
CN110008431B (en) Page component construction method and device, page generation equipment and readable storage medium
CN112348928A (en) Animation synthesis method, animation synthesis device, electronic device, and medium
CN114154000A (en) Multimedia resource publishing method and device
CN113419806B (en) Image processing method, device, computer equipment and storage medium
CN109600558B (en) Method and apparatus for generating information
US11797719B2 (en) Dynamic preview generation in a product lifecycle management environment
CN111242688A (en) Animation resource manufacturing method and device, mobile terminal and storage medium
WO2015074059A1 (en) Configurable media processing with meta effects
CN115065866B (en) Video generation method, device, equipment and storage medium
KR102385381B1 (en) Method and system for generating script forcamera effect
CN111107425B (en) Method, system, and storage medium for acquiring computing resources based on rendering component elements
US8340427B2 (en) Providing a symbol
CN115988255A (en) Special effect generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination