CN112070866B - Animation data encoding and decoding method and device, storage medium and computer equipment

Publication number: CN112070866B (application CN201910502818.6A)
Assignee: Tencent Technology (Shenzhen) Co., Ltd.
Inventors: 陈仁健, 龚海龙, 齐国鹏, 陈新星, 梁浩彬
Legal status: Active

Abstract

The present application relates to an animation data encoding and decoding method, apparatus, storage medium and computer device. The animation data encoding method comprises: obtaining animation vector data corresponding to each animation tag code from an animation engineering file; when an attribute type exists in the attribute structure table corresponding to the animation tag code, determining attribute identification information corresponding to each attribute; encoding the attribute value corresponding to each attribute in the animation vector data according to the attribute identification information to obtain the attribute content corresponding to each attribute; sequentially storing the attribute identification information and attribute content corresponding to each attribute according to the attribute order in the attribute structure table to obtain a dynamic attribute data block corresponding to the animation tag code; generating node element coding data according to the animation tag codes and the dynamic attribute data blocks; and obtaining a target animation file corresponding to the vector export mode according to the node element coding data. The solution provided by this application can significantly reduce the file size of an animation file.

Description

Animation data encoding and decoding method and device, storage medium and computer equipment
Technical Field
The present application relates to the field of computer technology, and in particular, to a method and apparatus for encoding and decoding animation data, a computer readable storage medium, and a computer device.
Background
In order to make video content or picture content more vivid and interesting, a user adds an animation effect when editing the video or picture content. In essence, the animation effect is rendered according to an animation file, which may also be called a sticker. The more complex the animation effect, the more animation attribute data the corresponding animation file includes, and the larger the file size of the animation file.
The traditional production flow of an animation file is as follows: an animation designer first designs an animation engineering file containing the animation effect data, and a development engineer then implements the various complex animation effects in native code.
However, the above approach requires a large number of additional identifier fields to identify the attribute state of each attribute during encoding, resulting in a large animation file and wasted storage space.
Disclosure of Invention
Based on this, it is necessary to provide an animation data encoding method, apparatus, computer-readable storage medium and computer device that solve the technical problem that the prior art requires a large number of additional identifier fields to identify each attribute in the process of encoding animation data, resulting in an oversized animation file.
An animation data encoding method, comprising:
obtaining animation data corresponding to each animation tag code from an animation engineering file;
when an attribute type exists in the attribute structure table corresponding to the animation tag code, determining attribute identification information corresponding to each attribute;
Coding attribute values corresponding to the attributes in the animation data according to the attribute identification information to obtain attribute contents corresponding to the attributes;
And sequentially storing attribute identification information corresponding to each attribute and the attribute content according to the attribute sequence corresponding to each attribute in the attribute structure table to obtain a dynamic attribute data block corresponding to the animation tag code.
An animation data encoding device, comprising:
The animation data acquisition module is used for acquiring animation data corresponding to each animation tag code from the animation engineering file;
the attribute identification information determining module is used for determining attribute identification information corresponding to each attribute when the attribute type exists in the attribute structure table corresponding to the animation tag code;
The attribute content coding module is used for coding attribute values corresponding to the attributes in the animation data according to the attribute identification information to obtain attribute contents corresponding to the attributes;
And the data block generation module is used for sequentially storing the attribute identification information corresponding to each attribute and the attribute content according to the attribute sequence corresponding to each attribute in the attribute structure table to obtain the dynamic attribute data block corresponding to the animation tag code.
A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the above-described animation data encoding method.
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the above-described animation data encoding method.
In the above animation data encoding method, apparatus, computer-readable storage medium and computer device, an animation tag code can be used to identify a group of attributes, and the attribute structure table describes the data structure of the group of attributes identified by the animation tag code. When the types or numbers of the attribute values of the group of attributes are uncertain, or when the attribute values contain a great deal of redundancy, the data structure of the dynamic attribute data block is introduced to avoid the oversized animation file that would result from additionally introducing a large number of identifier fields describing the types or numbers of the attributes; in this way the identifiers can be compressed maximally, greatly reducing the volume occupied by the target animation file. Specifically, after the animation engineering file is obtained, the group of attributes included in the attribute structure table corresponding to the animation tag code is obtained from it. The attribute identification information in a dynamic attribute data block describes the attribute state of each attribute. When an attribute type exists in the attribute structure table, the attribute identification information corresponding to each attribute can be determined according to the animation data; the attribute value corresponding to each attribute is then dynamically encoded according to the attribute identification information to obtain the corresponding attribute content, and the dynamic attribute data block corresponding to the animation tag code is obtained by combining the attribute identification information and attribute content of each attribute in the attribute structure table, so that the space occupied by the animation file can be remarkably reduced.
An animation data decoding method, comprising:
Acquiring an animation tag code;
when an attribute type exists in the attribute structure table corresponding to the animation tag code, parsing the attribute identification information corresponding to each attribute from the dynamic attribute data block corresponding to the animation tag code according to the attribute type corresponding to each attribute in the attribute structure table;
And analyzing attribute contents corresponding to the attributes from the dynamic attribute data block according to the attribute identification information corresponding to the attributes.
An animation data decoding device, the device comprising:
The acquisition module is used for acquiring the animation tag code;
the attribute identification information analysis module is used for analyzing the attribute identification information corresponding to each attribute from the dynamic attribute data block corresponding to the animation tag code according to the attribute type corresponding to each attribute in the attribute structure table when the attribute type exists in the attribute structure table corresponding to the animation tag code;
and the attribute content analysis module is used for analyzing the attribute content corresponding to each attribute from the dynamic attribute data block according to the attribute identification information corresponding to each attribute.
A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the above-described animation data decoding method.
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the above-described animation data decoding method.
In the above animation data decoding method, apparatus, computer-readable storage medium and computer device, an animation tag code can be used to identify a group of attributes, and the attribute structure table describes the data structure of the group of attributes identified by the animation tag code. When the types or numbers of the attribute values of the group of attributes are uncertain, or when the attribute values contain a great deal of redundancy, the data structure of the dynamic attribute data block is introduced to avoid the oversized animation file that would result from additionally introducing a large number of identifier fields describing the types or numbers of the attributes; in this way the identifiers can be compressed maximally, greatly reducing the volume occupied by the target animation file. Specifically, during decoding, when an attribute type exists in the attribute structure table corresponding to the animation tag code that has been read, the attribute values of the group of attributes identified by the animation tag code were encoded in the form of a dynamic attribute data block; the attribute identification information corresponding to each attribute can be parsed from the dynamic attribute data block in combination with the attribute type corresponding to each attribute, and the attribute content corresponding to each attribute can then be parsed from the dynamic attribute data block based on the attribute identification information, thereby realizing the decoding of the animation file.
Drawings
FIG. 1 is an application environment diagram of an animation data encoding method in one embodiment;
FIG. 2 is a schematic diagram of adding an animation sticker to video content as it is being captured, in one embodiment;
FIG. 3 is a schematic diagram of a 48-bit data stream;
FIG. 4 is a schematic diagram of a coding structure of continuous unsigned integer type data in one embodiment;
FIG. 5 is a schematic diagram of an encoding structure of path information in one embodiment;
FIG. 6 is a diagram of a file organization of a PAG file according to one embodiment;
FIG. 7 is a schematic diagram of the organization of node elements in one embodiment;
FIG. 8 is a flow chart of a method of encoding animation data in one embodiment;
FIG. 9 is a schematic diagram of an attribute structure table and a dynamic attribute data block corresponding to mask information in one embodiment;
FIG. 10 is a diagram illustrating the encoding structure of a time easing parameter array according to an embodiment;
FIG. 11 is a diagram illustrating the encoding structure of a spatial easing parameter array according to an embodiment;
FIG. 12 is a diagram showing the encoding structure of animation data corresponding to the bitmap sequence frame encoding scheme in one embodiment;
FIG. 13 is a diagram showing a structure of encoding video data corresponding to a video sequence frame encoding scheme in one embodiment;
FIG. 14 is a flow chart of a method of decoding animation data in one embodiment;
FIG. 15 is a file data structure corresponding to the vector export mode according to an embodiment;
FIG. 16 is a block diagram showing a configuration of an animation data encoding device in one embodiment;
FIG. 17 is a block diagram showing the structure of an animation data decoding device in one embodiment;
FIG. 18 is a block diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The following describes some terms related to the present application:
AE: adobe AFTER EFFECTS is abbreviated as Adobe AFTER EFFECTS is graphic video processing software which can be used for designing complex video special effects. The designer designs a complex animation effect by the software, and the obtained animation file is an AE engineering file (file suffix name aep), which is also called an animation engineering file hereinafter. The AE project file is used to store a Composition (Composition), which is a collection of layers (layers), and all source files and animation characteristic parameters used in the Composition.
PAG: the simple name of the Portable ANIMATED GRAPHICS is a self-grinding binary animation file format provided by the application. The PAG file may be attached to the video as an animation effect, known as an animation decal.
PAGExporter: the animation export plug-in for self-grinding PAG file can be used for reading animation special effect data in AE engineering file, and exporting the read animation special effect data in any one of vector export mode, bitmap sequence frame export mode or video sequence frame export mode to obtain PAG file. PAGExpoter can be used as plug-in of AE software to realize exporting AE engineering file to obtain PAG file after loading.
PAGVIEWER: an animation preview software for previewing PAG files can be used for previewing the PAG files and writing performance parameters into the PAG files, so that when the PAG files are uploaded to a server, the PAG files can be checked according to the performance parameters.
Fig. 1 is an application environment diagram of an animation data encoding method in one embodiment. Referring to fig. 1, the animation data encoding method is applied to a terminal 110.
The terminal 110 has installed and running thereon software supporting graphic video processing by which an animation engineering file is generated, which may be AE software, for example. The terminal 110 also has an animation export plug-in installed thereon, which may be PAGExporter, for example. When the terminal 110 runs the software to open the animation engineering file, the terminal may read the animation data of the animation engineering file through the animation export plug-in, and load the animation export plug-in to encode the animation data to obtain a target animation file, which may be a PAG file, for example. The terminal 110 may be a desktop terminal or a mobile terminal, and the mobile terminal may be at least one of a mobile phone, a tablet computer, a notebook computer, and the like.
In one embodiment, PAGExporter running on the terminal 110 processes the animation data in the animation engineering file and exports a PAG file; PAGViewer plays the PAG file and counts its performance parameters during playback, then writes the calculated performance parameters into the PAG file; finally, a verification server verifies the performance parameters carried by the PAG file, and a PAG file that passes verification can be published to the network and played after being downloaded by a user. The PAG file may present an animation effect that can be attached to video as an animation sticker, as shown in fig. 2, which is a schematic diagram of adding an animation sticker 202 to video content as it is being captured in one embodiment.
The following describes the binary animation file format, i.e., the basic data types used by the PAG file, which describe the formatting of the data stored in the file.
When encoding animation data to obtain a PAG file, the basic data types used include integer, boolean, floating point, array, bit, time, and character string types. The data lengths of these types are usually constant, so they are also called fixed-length data types.
The basic data types further include new data structures customized from combinations of these types, called combination types. The combination types used to describe animation data include the variable-length coded integer, coordinate, ratio, color, font, transparency change information, color change information, color gradient change information, text, path, byte stream, bitmap information, and video frame information types.
The following describes the basic data types in detail:
1. Integer type
All integer values are stored in binary form in the PAG file. Integer data can be stored in 8, 16, 32, or 64 bits, both signed and unsigned. The data in the PAG format may be stored in a "little endian" manner, i.e., the low-order byte is stored at the low address of the memory and the high-order byte at the high address. The integer type is a type of data that requires byte alignment, that is, the 1st bit of an integer value is stored at the 1st bit of a certain byte in the PAG file.
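For illustration (this example is not part of the patent text), the little-endian rule can be checked with Python's struct module:

import struct

# "Little endian": the low-order byte 0x78 is stored first, at the low address.
assert struct.pack("<I", 0x12345678) == b"\x78\x56\x34\x12"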
2. Boolean type
Data type | Remarks
Bool | 0 represents false; 1 represents true
Referring to the above table, in the PAG file, the boolean type data is represented by one bit. Storage space can be saved compared to a one byte representation. The boolean type is a data type that does not require byte alignment.
3. Floating point number type
Data type | Remarks
Float | Single-precision, 32 bits
In the PAG file, decimals are represented by the single-precision floating point type.
4. Array type
For consecutively stored data of the same data type, the data type may be followed by [n] to represent the set of data, where n represents the array length. For example, Uint8[10] represents a one-dimensional array of the Uint8 type with length 10. As another example, Int8[n][m] is a two-dimensional array containing m entries of type Int8[n], where m represents the outer array length.
5. Bit type
Unlike a data type that occupies a fixed number of bits, the bit type is a variable-length data type that can represent both signed and unsigned integers. For example, SB[5] represents a signed integer occupying 5 bits, and UB[2] represents an unsigned integer occupying 2 bits. SB is an abbreviation for Signed Bits, and UB for Unsigned Bits.
Unlike data types that require byte alignment (e.g., Int8, Uint16), bit-type data does not require byte alignment. In particular, when data of a type requiring byte alignment is encoded after bit-type data, the last byte of the bit-type data must be byte aligned, i.e., the bits in that last byte not occupied by bit-type data are zero padded.
Referring to fig. 3, a 48-bit (6-byte) binary data stream is illustrated, comprising 6 values of different bit lengths; the first 5 values are of the bit type and the last value is of the Uint16 type, and each cell represents a byte. The first value (010110) of the bit stream occupies 6 bits, and the second value (10100) occupies 5 bits, spanning the 1st and 2nd bytes. The third value (100100) occupies 6 bits and also spans two bytes; the fourth value (1011) occupies 4 bits, all belonging to the 3rd byte. The fifth value (1101001) occupies 7 bits, and the sixth value after it is a byte-aligned value (0100110010101101), so the remaining 4 bits in the byte occupied by the fifth value (the 4th byte) are zero padded.
In order to compress the space occupied by the PAG file as much as possible, a continuous data encoding method may be used for encoding a group of continuous data of the same type. Specifically, a header area is added before the set of continuous data of the same type, and a value in the header area is used to represent the number nBits of bits occupied by the set of continuous data of the same type, that is, the set of continuous data of the same type after the header area is stored according to the number of bits. The specific nBits value is determined according to the maximum value in the set of data, and the other smaller data in the set of data is encoded by adopting a high-order zero padding mode.
Referring to FIG. 4, which shows the encoding structure for consecutive unsigned integers, the header region stores the number of bits nBits occupied by each of the consecutive integer values, and the consecutive values are then encoded using the UB[nBits] type. For example, for storing consecutive 32-bit unsigned integers, the header region occupies at most 5 bits for storing nBits.
For example, consider a group of 16-bit unsigned integers: 50, 100, 200, 300, 350, 400. If encoded according to the Uint16 data type, each value occupies 16 bits, and the group needs 96 bits in total. If encoded according to the continuous data encoding method, each value is determined to occupy 10 bits according to the maximum value 400, so the values are encoded as UB[10], and a header region of type UB[4] is added in front of the group to store the value "10"; that is, the header region occupies 4 bits and the group of consecutive data immediately after it occupies 60 bits, so the group occupies only 64 bits in total, 32 bits (4 bytes) fewer than in the conventional manner.
Note that the coding of consecutive signed integers is identical to that of unsigned integers, except that for unsigned integers all nBits bits after the header region represent numeric content, whereas for signed integers the 1st bit after the header region is the sign bit (0 for positive, 1 for negative) and the following nBits-1 bits represent the numeric content. Consecutive floating point data may also be encoded by this continuous data encoding method.
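As an illustrative sketch (in Python, not part of the patent text), the continuous-integer coding described above can be modeled as follows. The helper names are hypothetical; the signed variant from the preceding paragraph is used, which reproduces the 64-bit total of the example above:

def encode_int_array(values, header_bits=4):
    # nBits = bits for the largest magnitude plus one sign bit.
    n_bits = max(abs(v).bit_length() for v in values) + 1
    bits = format(n_bits, "0%db" % header_bits)          # header region stores nBits
    for v in values:
        bits += "1" if v < 0 else "0"                    # sign bit: 0 positive, 1 negative
        bits += format(abs(v), "0%db" % (n_bits - 1))    # magnitude, high-order zero padded
    return bits

def decode_int_array(bits, count, header_bits=4):
    n_bits = int(bits[:header_bits], 2)
    out, pos = [], header_bits
    for _ in range(count):
        sign = -1 if bits[pos] == "1" else 1
        out.append(sign * int(bits[pos + 1:pos + n_bits], 2))
        pos += n_bits
    return out

values = [50, 100, 200, 300, 350, 400]
encoded = encode_int_array(values)
assert len(encoded) == 64                    # 4-bit header + six 10-bit values
assert decode_int_array(encoded, 6) == values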
Further, continuous floating point data for which precision loss is acceptable can be converted into integer data and then encoded in the continuous-integer manner. Specifically, for coordinate data representing points in space, such as spatial easing parameters of the floating point type, the floating point data may be divided by a spatial precision coefficient (SPATIAL_PRECISION = 0.05) and converted into an integer; for coordinate data representing bezier curve easing parameters, the floating point data may be divided by a bezier precision coefficient (BEZIER_PRECISION = 0.005), converted into an integer, and then encoded; for coordinate data representing gradient information, the floating point data may be divided by a gradient precision coefficient (GRADIENT_PRECISION = 0.00002), converted into an integer, and then encoded.
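A small Python sketch of the precision conversion just described; the constant names follow the coefficients given in the text, and the helper names are hypothetical:

SPATIAL_PRECISION = 0.05      # spatial easing parameters
BEZIER_PRECISION = 0.005      # bezier curve easing parameters
GRADIENT_PRECISION = 0.00002  # gradient information

def quantize(value, precision):
    # Float -> integer, after which the continuous-integer coding above applies.
    return int(round(value / precision))

def dequantize(stored, precision):
    # The decoder multiplies the stored integer back by the precision.
    return stored * precision

assert quantize(1.25, SPATIAL_PRECISION) == 25
assert abs(dequantize(25, SPATIAL_PRECISION) - 1.25) < 1e-9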
6. Time type
Data belonging to the Time type within the PAG file is uniformly described using Int64. The time type is represented by a frame number, and the frame number of each frame divided by the frame rate can be converted into an external time in seconds.
7. Character string type
In the PAG file, data belonging to the String type uses a null character to identify the end of the string.
8. Variable length coded integer type
Data type | Remarks
EncodedInt32 | Variable-length coded 32-bit integer
EncodedUint32 | Variable-length coded 32-bit unsigned integer
EncodedInt64 | Variable-length coded 64-bit integer
EncodedUint64 | Variable-length coded 64-bit unsigned integer
The variable-length coded integer type is an integer with the byte as the minimum storage unit; a value occupies at least one byte and at most 4 or 8 bytes. When encoding a value of the variable-length coded integer type, the value is represented by the first 7 bits of each byte, and the 8th bit identifies whether further content follows: an 8th bit of 0 indicates that the next byte does not belong to this value, and an 8th bit of 1 indicates that the next byte also belongs to this value. Thus, during decoding, if the 8th bit of a byte is 0 the value has been fully read, and if it is 1 the value has not been fully read and one more byte must be read, up to the maximum length (32 or 64 bits). A signed variable-length integer is decoded by reading the value as an unsigned variable-length integer and then interpreting the sign bit.
For example, if the data value 14 has type EncodedInt32, the corresponding binary is 00011100 (the first 7 bits, 0001110, carry the value and the 8th bit, 0, indicates that no further byte follows), occupying 24 bits less than the Int32 data type.
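A minimal sketch of this variable-length rule, following the bit layout of the example above (value in the first 7 bits, continuation flag in the 8th); the multi-byte ordering (low 7 bits of the value first) is an assumption:

def encode_uint(value):
    out = bytearray()
    while True:
        byte = (value & 0x7F) << 1       # 7 value bits; the 8th (lowest) bit is the flag
        value >>= 7
        out.append(byte | 1 if value else byte)
        if not value:
            return bytes(out)

def decode_uint(data):
    value, shift = 0, 0
    for byte in data:
        value |= (byte >> 1) << shift
        shift += 7
        if not byte & 1:                 # flag 0: the value has been fully read
            break
    return value

assert encode_uint(14) == b"\x1c"        # 00011100, matching the example above
assert decode_uint(encode_uint(300)) == 300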
9. Coordinate type
Field | Field type | Remarks
x | Float | X-axis coordinate
y | Float | Y-axis coordinate
The coordinate type (Point) is used to represent the coordinates of a location, i.e., the values of the x-axis and the y-axis. When encoding a value belonging to the coordinate type, the value of x is encoded with 32 bits first, and then the value of y is encoded with 32 bits.
10. Ratio type
The Ratio type (Ratio) is used to represent a ratio. When encoding a value of the ratio type, the numerator is encoded first as a variable-length integer, followed by the denominator as a variable-length integer.
11. Color type
Field | Field type | Remarks
red | Uint8 | Red value (0 to 255)
green | Uint8 | Green value (0 to 255)
blue | Uint8 | Blue value (0 to 255)
Color type (Color) is used to represent a Color value, which is typically composed of three colors of red, green and blue. When encoding values belonging to a color type, a red value is encoded with one byte, followed by a green value with the second byte, and a blue value with the third byte, each color value having a data type Uint8.
12. Font type
Field | Field type | Remarks
id | EncodedUint32 | Unique identifier
fontFamily | String | Font family name
fontStyle | String | Font style
The font type (FontData) is used to identify a font. When a value of the font type is encoded, the variable-length font identifier is encoded first, and the font family and font style are then each encoded using the String data type, with the number of bits each occupies determined by the character length. fontFamily is the font family name, which may be the name of a specific font or of a class of fonts. fontStyle defines the slant of the font: italic, oblique, or normal.
13. Transparency change information type
The transparency change information type (AlphaStop) is used to describe the gradient information of transparency.
Floating point values of this type are divided by a precision and stored as Uint16; during decoding, the stored Uint16 value is multiplied by the precision to recover the floating point number. The precision is, for example, 0.00002f.
14. Color change information type
Field | Field type | Remarks
position | Uint16 | Start point position
midpoint | Uint16 | Middle point position
color | Color | Color value
The color change information type (ColorStop) is used to describe the gradient information of a color.
15. Color gradient change information type
Field | Field type | Remarks
alphaCount | EncodedUint32 | Length of the transparency change information array
colorCount | EncodedUint32 | Length of the color change information array
alphaStopList | AlphaStop[alphaCount] | alphaCount consecutive AlphaStop entries
colorStopList | ColorStop[colorCount] | colorCount consecutive ColorStop entries
The color gradient change information type (GradientColor) is a data type combining the transparency change information type and the color change information type to describe color gradient information. An alphaStopList (transparency change information list) includes multiple transparency change information entries; one entry includes a start point position, a middle point position, and a transparency value. A colorStopList (color change information list) includes multiple color change information entries; one entry includes a start point position, a middle point position, and a color value.
When data of the color gradient change information type is encoded, the variable-length coded integer type EncodedUint32 is used to encode alphaCount and colorCount in sequence, then each entry in the transparency change information list is encoded in sequence according to the number of bits occupied by the transparency change information type, and then each entry in the color change information list is encoded in sequence according to the number of bits occupied by the color change information type.
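A hedged Python sketch of the GradientColor encoding order just described. The exact field widths of each stop are assumptions here (positions and values quantized to Uint16 by the 0.00002 precision from the transparency change information section); counts below 128 are written as single bytes per the variable-length integer rule:

import struct

PRECISION = 0.00002

def encode_alpha_stop(position, midpoint, opacity):
    # Each float is divided by the precision and stored as a Uint16.
    return struct.pack("<3H", *(int(round(v / PRECISION)) for v in (position, midpoint, opacity)))

def encode_color_stop(position, midpoint, rgb):
    quantized = (int(round(position / PRECISION)), int(round(midpoint / PRECISION)))
    return struct.pack("<2H", *quantized) + bytes(rgb)   # Color: one byte each for red, green, blue

def encode_gradient_color(alpha_stops, color_stops):
    data = bytes([len(alpha_stops), len(color_stops)])   # alphaCount, colorCount (single-byte varints here)
    data += b"".join(encode_alpha_stop(*s) for s in alpha_stops)
    data += b"".join(encode_color_stop(*s) for s in color_stops)
    return data

blob = encode_gradient_color([(0.0, 0.5, 1.0)], [(0.0, 0.5, (255, 0, 0))])
assert len(blob) == 2 + 6 + 7    # two counts, one AlphaStop, one ColorStop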
16. Text type
The text type (Text) is used to describe text information, including text, font, size, color, and so on. Since the types and numbers of text information in different layers are not fixed, identification bits are used during encoding to mark whether each item of data exists; each identification bit is represented by a bit type occupying only one bit, which saves storage space. The text type includes a total of 19 identification bits, occupying 3 bytes, with the last of the 3 bytes zero padded during encoding. When text-type data is encoded, if the value of an identification bit is 0, the text information corresponding to that identification bit does not exist, and the text information corresponding to the next identification bit whose value is 1 is encoded directly; during decoding, the data of the next byte read is then the text information corresponding to the next identification bit whose value is 1.
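For illustration, a small sketch of the presence-bit scheme above; the particular 19 fields are not enumerated here, only the packing of 19 one-bit flags into 3 zero-padded bytes:

def pack_flags(present):
    # present: 19 booleans, one per optional item of text information.
    bits = "".join("1" if p else "0" for p in present).ljust(24, "0")  # zero pad the last byte
    return bytes(int(bits[i:i + 8], 2) for i in range(0, 24, 8))

flags = pack_flags([True, False] * 9 + [True])
assert len(flags) == 3           # 19 identification bits occupy 3 bytes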
17. Path type
The path type (Path) is used to represent drawn path information describing a shape; the path information is determined by a set of actions. A PAG file typically defines 8 actions, each identified by an action value (for example, Close = 0, Move = 1, HLine = 3, and VLine = 4, as used in the example below).
When path information is encoded, numVerbs is first determined from the number of actions in the path drawing information, and the corresponding action value is determined for each action. The floatNum values of type Float are then determined from the coordinate data of each action; before encoding, each of these floating point values is divided by SPATIAL_PRECISION to convert it to an integer. The maximum of the floatNum converted values is then calculated, and numBits is obtained from the number of bits occupied by that maximum. Since no value in the float list occupies more than 31 bits, i.e., numBits is less than or equal to 31, numBits is encoded as UB[5], after which each converted value is encoded as SB[numBits].
For example, drawing a rectangle of (x: 5, y: 0, r: 15, b: 20), the data structure of the path type (Path) is described as follows: a Move action to point (5, 0) is executed, requiring action "1" and two Float values (5, 0) to be recorded; then HLine to point (15, 0), requiring action "3" and one Float value (15); then VLine to point (15, 20), requiring action "4" and one Float value (20); then HLine to point (5, 20), requiring action "3" and one Float value (5); and then Close, returning to the starting point (5, 0), requiring action "0" to be recorded.
Referring to fig. 5, which shows the encoding structure of the path information in the above example: when encoding the path information, numVerbs, i.e., the number of actions "5", is first encoded according to EncodedUint32, and the five actions "1, 3, 4, 3, 0" are then encoded in sequence according to the UB[3] type. The five actions include 5 Float values, namely 5, 0, 15, 20, and 5, which become 100, 0, 300, 400, and 100 after conversion to integers according to the precision parameter; the maximum value is 400, which needs 10 bits, so numBits = 10. Encoding numBits according to UB[5] yields the header region of this group of consecutive floating point data. Finally, 100, 0, 300, 400, and 100 are encoded in sequence according to SB[10], yielding the encoded data of the whole path information.
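The following sketch (hypothetical helper, Python) reproduces the bit budget of this example; the action codes are taken from the text, and numVerbs is omitted since it is byte-based EncodedUint32 rather than bits:

SPATIAL_PRECISION = 0.05

def encode_path_bits(verbs, floats):
    bits = "".join(format(v, "03b") for v in verbs)            # each action as UB[3]
    ints = [int(round(f / SPATIAL_PRECISION)) for f in floats]
    num_bits = max(abs(i).bit_length() for i in ints) + 1      # sign bit included, as for SB values
    bits += format(num_bits, "05b")                            # numBits as UB[5]
    for i in ints:
        bits += ("1" if i < 0 else "0") + format(abs(i), "0%db" % (num_bits - 1))
    return bits

verbs = [1, 3, 4, 3, 0]                 # Move, HLine, VLine, HLine, Close
floats = [5.0, 0.0, 15.0, 20.0, 5.0]    # converts to 100, 0, 300, 400, 100
encoded = encode_path_bits(verbs, floats)
assert len(encoded) == 5 * 3 + 5 + 5 * 10   # five UB[3] actions, UB[5] numBits, five SB[10] values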
18. Byte stream type
Field | Data type | Remarks
data | Byte address | Start address of the bytes
length | Uint64 | Byte length
The byte stream type (ByteData) is used to represent the start address of a byte stream and its length. When a byte stream is encoded, the start address of its bytes is encoded first, followed by the length of the whole byte stream, so that during decoding the data can be read from the start address in memory according to the byte stream length to obtain the whole byte stream. The byte address may also be an offset address; after the computer allocates memory for the PAG file, the start address of the byte stream can be determined from the start address of the allocated memory and the offset address.
19. Bitmap information type
Field | Field type | Remarks
x | Int32 | X coordinate of the start position of the difference pixel region
y | Int32 | Y coordinate of the start position of the difference pixel region
fileBytes | ByteData | Picture data
The bitmap information type (BitmapRect) is used to represent the binary picture data obtained by compression-encoding the difference pixel region corresponding to each bitmap image in a bitmap image sequence according to a picture encoding mode. Here x and y are the X and Y coordinates of the start position of the difference pixel region. When bitmap information is encoded, the start position of the difference pixel region is found first, and then the start position and the picture data corresponding to the difference pixel region are encoded in sequence (the picture data according to the byte stream type) to obtain the encoded data of each bitmap image.
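A sketch of the BitmapRect encoding order under an assumed serialization of ByteData (a length-prefixed payload standing in for the start address plus length):

import struct

def encode_bitmap_rect(x, y, picture_bytes):
    header = struct.pack("<ii", x, y)    # Int32 x, Int32 y: start of the difference pixel region
    return header + struct.pack("<Q", len(picture_bytes)) + picture_bytes

blob = encode_bitmap_rect(16, 32, b"\x89PNG")
assert len(blob) == 8 + 8 + 4            # two Int32 coordinates, Uint64 length, payload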
20. Video frame information type
Field | Field type | Remarks
frame | Time | Frame number
fileBytes | ByteData | Video frame data
The video frame information type (VideoFrame) is used to represent the binary picture data obtained by compressing the synthesized bitmap corresponding to each bitmap image in the bitmap image sequence according to a video encoding mode. Here frame is the frame number of the current frame; the frame number divided by the frame rate can be converted into time in seconds. The Time type is encoded using Int64. When video frame information is encoded, the frame number is encoded according to Int64, and the video frame data corresponding to the frame is then encoded according to the byte stream type, yielding the encoded data of each video frame.
The file organization structure of the PAG file is described below.
As shown in fig. 6, the file organization structure of a PAG file consists of a file header (FileHeader) and node elements (Tag). The file header is the data structure at the beginning of the PAG file that describes the header information, which at least comprises the file version number, the file length, and the file compression mode. The header information may be organized according to the header organization structure in the following table:
Field | Data type | Meaning
signature | Uint8 | Signature byte storing "P"
signature | Uint8 | Signature byte storing "A"
signature | Uint8 | Signature byte storing "G"
file version number | Uint32 | The file version number; e.g., 0x04 indicates version 4
file length | Uint32 | The length of the entire file, including the header
file compression mode | Int8 | Compression mode of the file
All node elements have the same data structure; as shown in fig. 7, each node element comprises a node element header (TagHeader) and node element content (TagBody), so that a node element that cannot be parsed can be skipped directly during decoding. The node end symbol (End) is a special node element identifying that the node elements of the current hierarchy have all been read and there are no more node elements to read. Node elements may also be nested: a node element may contain one or more child node elements, and End identifies that the child node elements have all been read and there are no more child node elements to read.
Referring to fig. 7, the node element header (TagHeader) records the animation tag code (TagCode) and the byte stream length (Length) of the node element content. The animation tag code indicates which type of animation information is recorded in the node element; different animation tag codes indicate different animation information. The byte stream length represents the length of the node element content in the node element.
Since the data recorded in TagBody may be very small or very large (some TagBody may occupy only one byte, with a Length value of 1, while some may occupy 100 bytes, with a Length value of 100), TagHeader is classified into a short-type and a long-type structure for storage, in order to reduce the storage space occupied by the file as much as possible.
A TagHeader of the short-type structure uses 16 bits to record the animation tag code and the length of the node element content. In the short-type TagHeader, the length of the node element content can be at most 62 (when the last 6 bits of TagCodeAndLength are 111110), meaning that TagBody can store at most 62 bytes of data. In the long-type TagHeader, the data type of Length is Uint32, so the maximum length of the node element content is 4G, meaning that TagBody can store at most 4G of data.
That is, when the length of TagBody is 62 bytes or less, the TagHeader is a Uint16 whose first 10 bits store the animation tag code and whose last 6 bits store the Length. When TagBody is 63 bytes or longer, the TagHeader contains a TagCodeAndLength field of type Uint16 and a Length field of type Uint32. In the Uint16 TagCodeAndLength, the first 10 bits store the animation tag code and the last 6 bits are fixed to 0x3f; i.e., when the last 6 bits are "111111", the current TagHeader uses the long-type structure, in which case the TagCodeAndLength field is followed by a 32-bit unsigned integer indicating the length of TagBody. During decoding, the first 10 bits of the TagCodeAndLength field are read first to obtain the animation tag code, and the last 6 bits are then read; if their value is not "111111", the 6 bits read are the length of TagBody, and if their value is "111111", the next 32 bits (4 bytes) are read, and those 4 bytes are the length of TagBody.
Similarly, if TagBody holds more data and needs to occupy more bits, the Length value can be recorded as an unsigned 64-bit integer in the long-type TagHeader.
Further, the value of Length read from the TagCodeAndLength field may also be 0, meaning that the length of TagBody is 0, in which case TagBody does not exist. For example, the aforementioned node end symbol (End) is a special node element whose Length recorded in TagHeader is 0.
Of course, in the data structure of the node element header provided above, the data type of each field may be adjusted according to the actual situation, so that each field may occupy more or fewer bits as needed.
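A minimal sketch of the short/long TagHeader rule above; the exact bit order inside the little-endian Uint16 is an assumption:

import struct

def encode_tag_header(tag_code, body_length):
    if body_length < 63:
        # Short type: first 10 bits tag code, last 6 bits length.
        return struct.pack("<H", (tag_code << 6) | body_length)
    # Long type: the 6 length bits fixed to 0x3f, then a Uint32 length.
    return struct.pack("<H", (tag_code << 6) | 0x3F) + struct.pack("<I", body_length)

def decode_tag_header(data):
    (code_and_length,) = struct.unpack_from("<H", data)
    tag_code, length = code_and_length >> 6, code_and_length & 0x3F
    if length == 0x3F:                   # "111111": read a 32-bit length instead
        (length,) = struct.unpack_from("<I", data, 2)
        return tag_code, length, 6       # header occupies 6 bytes
    return tag_code, length, 2           # header occupies 2 bytes

assert decode_tag_header(encode_tag_header(14, 10)) == (14, 10, 2)
assert decode_tag_header(encode_tag_header(14, 100)) == (14, 100, 6)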
The PAG file provides a large number of animation tag codes for representing rich animation information. The data structure of the animation information corresponding to each animation tag code is represented by an attribute structure table. When the PAG file is encoded, the file header information is encoded according to the file header organization structure, and then, for each node element, the animation tag code (TagCode), the byte stream length (Length) of the node element content, and the node element content (TagBody) are encoded in sequence from the animation data according to the data structure of the node elements.
When the node element content (TagBody) is encoded, the encoding is required according to the attribute structure table corresponding to the animation tag code. The attribute structure table defines a data structure of animation information corresponding to the animation tag code.
If the attributes included in the animation information represented by the animation tag code are fixed in both kind and number, the data type of each attribute in the corresponding attribute structure table is a basic type; the animation data of the animation tag code can then be encoded directly according to these basic data types to obtain the node element content, and the data structure used for encoding such a node element may be called a basic data structure.
If the types and numbers of the attributes included in the animation information recorded in TagBody are uncertain (for example, the drawing path information of masks, where the types and numbers of actions may differ between masks), a large number of additional identifier fields would be required, and defining the data structure in the above manner would waste a great deal of file space. To compress the file space as much as possible, the attribute structure table corresponding to such an animation tag code is redefined, and corresponding encoding rules are provided to encode the node element content. The data structure represented by this newly defined attribute structure table may be called a dynamic attribute data structure (AttributeBlock).
The specific method for encoding the animation data corresponding to each animation tag code according to the above-mentioned basic data structure or dynamic attribute data structure to obtain the node element content (TagBody) is described below.
As shown in fig. 8, in one embodiment, an animation data encoding method is provided. The present embodiment is mainly exemplified by the application of the method to the terminal 110 in fig. 1. Referring to fig. 8, the animation data encoding method specifically includes the steps of:
s802, obtaining animation data corresponding to each animation tag code from the animation engineering file.
The animation engineering file may be, for example, the AE engineering file mentioned above, with the extension .aep. The animation tag code indicates which type of animation information is represented by the node element content following the node element header corresponding to the animation tag code. Before encoding, a one-to-one correspondence between animation tag codes and the corresponding animation information is preset, so that the animation tag codes and the corresponding animation information are encoded according to this preset correspondence. The animation data is the data describing the animation information corresponding to the animation tag code. For example, the animation tag code 14 represents MaskBlock, i.e., certain mask information in the animation engineering file; the mask information includes the following attributes: the mask identifier, whether the mask is inverted, the blend mode of the mask, the vector path of the mask, the transparency of the mask, and the edge extension parameter of the mask, and the animation data corresponding to this animation tag code consists of the attribute values of these attributes. As another example, the animation tag code 13 represents Transform2D, i.e., certain 2D transformation information in the animation engineering file, which includes the following attributes: anchor point coordinates, position information, X-axis offset, Y-axis offset, scaling information, rotation information, and transparency information.
The animation engineering file may be a file created by a designer through AE software, and stores all parameters used for the entire animation synthesis. Animation Composition (Composition) is a collection of layers. The terminal can extract the animation data from the animation engineering file after acquiring the animation engineering file, and correspond the animation data to each animation tag code one by one according to the predefined animation tag code to obtain the animation data corresponding to each animation tag code. For example, a set of 2D conversion information is acquired, and the animation tag code representing the 2D conversion information is defined as "13", and the animation data corresponding to the animation tag code "13" is the set of 2D conversion information. Of course, in the animation engineering file, there may be multiple sets of different 2D transformation information, and each set of 2D transformation information needs to be identified by the animation tag code "13".
As can be appreciated from the foregoing description of the node element header, the PAG file uses 10 bits to store animation tag codes and can therefore store up to 1024 different animation tag codes. Since the animation data is encoded based on the animation tag code and the corresponding data block (a basic data block or a dynamic attribute data block), backward compatibility with older file formats can be maintained while the PAG file continues to support new animation characteristics. At present, only 37 animation tag codes are used in the PAG file: 0-29 are the animation tag codes supported by the vector export mode, and 45-51 are 7 animation tag codes extended for the bitmap sequence frame export mode and the video sequence frame export mode. If the PAG file is to support more animation characteristics later, new animation tag codes can be further extended to express the animation information corresponding to the newly added characteristics.
Each animation tag code represents a particular type of animation information, and the attribute structure table corresponding to each type of animation information is listed later.
S804, when the attribute type exists in the attribute structure table corresponding to the animation tag code, determining the attribute identification information corresponding to each attribute.
Wherein, the attribute structure table defines the data structure of the animation information corresponding to the animation tag code. When the node element content is encoded, the encoding is required according to the attribute structure table corresponding to the animation tag code. The attribute structure table of each animation tag code is predefined, and the attribute structure table corresponding to each animation tag code is provided later. In order to clarify the encoding mode and decoding rule, when more animation characteristics need to be supported, new animation tag codes need to be expanded, and an attribute structure table corresponding to the newly added animation tag codes needs to be defined.
When the attributes included in the animation information represented by an animation tag code are uncertain in both kind and number, an attribute type is defined for each attribute in the attribute structure table corresponding to that animation tag code, and the data structure presented by the attribute structure table can be called a dynamic attribute data structure. For example, for the animation tag code "14" representing mask information, attribute types exist in the corresponding attribute structure table. Further, if the attribute value of an attribute in the attribute structure table is usually equal to its default value, then in order to reduce the file size the attribute value need not be stored; only one bit is needed to indicate that the attribute value equals the default value. Therefore, the attribute structure table also includes the default value corresponding to each attribute. The default values and corresponding attribute types can be hard-coded in the decoding code of the animation file, so that a default value is obtained directly when the identification bits indicate the default value during parsing.
The attribute type (AttributeType) is necessary for encoding the animation data in a dynamic attribute data structure and for parsing the dynamic attribute data block (AttributeBlock). When an attribute type exists in the attribute structure table corresponding to a certain animation tag code, the node element content following the node element header corresponding to that animation tag code is encoded according to the dynamic attribute data structure. In the encoding phase, the attribute type is used to determine the number of bits occupied by the attribute identification information, and in the decoding phase, the attribute type is used to determine the number of bits occupied by the attribute identification information and the corresponding reading rule.
A dynamic attribute data block is mainly composed of two parts: an attribute identification area composed of the attribute identification information (AttributeFlag) of each attribute, and an attribute content area composed of the attribute content (AttributeContent) of each attribute. Taking the attribute structure table of the mask information as an example, referring to fig. 9, the attribute structure table corresponding to the mask information, comprising 6 attributes, is shown on the left of fig. 9, and the specific structure of the corresponding dynamic attribute data block is shown on the right. Each attribute has corresponding attribute identification information (except that no attribute identification information exists when the attribute type of an attribute is the fixed attribute) and attribute content, and the order of the attribute identification information corresponds one-to-one with the order of the attribute contents. Given an attribute structure table, the attribute identification information and attribute content of each attribute are determined according to the table and then arranged and stored in sequence; for example, AttributeFlag 0 and AttributeContent 0 here correspond to the first attribute, the mask identifier, of the attribute structure table for the mask information. As can be seen from fig. 9, this data structure stores all attribute identification information first and then all attribute contents. Since each item of attribute identification information is usually represented by bits, storing them together avoids the frequent byte alignment, and the resulting waste of space, that would otherwise occur. After the attribute identification area has been read, unified byte alignment is performed; since the extra bits produced by byte alignment are all padded with 0, the attribute content is stored starting from a whole-byte position.
Because the number of the attributes defined in the attribute structure table corresponding to each animation tag code is fixed and does not change, in order to compress the size of the animation file, the number of the attributes included in the attribute structure table corresponding to each animation tag code is not written into the animation file, but is hard-coded in the analysis code, so that the number of the attributes included in the animation information identified by the animation tag code can be directly obtained during analysis.
As mentioned above, when the attributes included in the animation information represented by an animation tag code are uncertain in kind and number, an attribute type is defined for each attribute in the corresponding attribute structure table. The cases in which the kind or number of attributes is uncertain include, for example, the following:
1. The data type of some attributes is the boolean type, which occupies one byte, but in practice the value of such an attribute can be identified by only 1 bit, with 0 representing false and 1 representing true. The attribute type can therefore mark the attribute value as a boolean value occupying 1 bit, so that only 1 bit is used during encoding, and the attribute value is parsed according to the attribute type during decoding.
2. The attribute value of some attributes is usually equal to the default value. To compress the file size, the actual value need not be stored; during encoding, 1 bit directly identifies that the attribute value equals the default value, and the default value is embedded in the parsing code.
3. The attribute value of some attributes may be animation interval characteristic data that does not include spatial easing parameters, in which case the data related to the spatial easing parameters need not be stored. The attribute value of some attributes may contain no animation interval characteristic data at all and be a fixed value, in which case only one value needs to be stored; if that value also equals the default value, it can be identified by 1 bit.
Given these characteristics of the attributes in the dynamic attribute data structure, each attribute may be in one of several different states, and exploiting these states fully can significantly reduce the file size in most cases. Accordingly, based on the above characteristics, the data structure of the attribute identification information is defined as follows:
The terminal can determine the attribute identification information of each attribute according to the attribute type of the attribute in the attribute structure table corresponding to the animation tag code and the animation data itself. During decoding, the attribute content of an attribute can then be read from the attribute content area according to its attribute identification information. The number of bits occupied by the attribute identification information of each attribute is dynamic, ranging from 0 to 3 bits; how many bits a given attribute occupies is determined by its attribute type, that is, the attribute type determines the number of bits (0-3) that the attribute identification information may occupy. The PAG file divides the attribute types into the following 8 types, as shown in the following table:
As can be seen from the above table, the attribute identification information of a common attribute occupies at most 1 bit, and only the content identification bit needs to be read during decoding, whereas the attribute identification information of a spatial easing animation attribute occupies at most 3 bits. For example, when a spatial easing animation attribute contains no animation interval characteristic data and its attribute value equals the default value, only 1 bit is needed to represent its attribute identification information: the information consists solely of the content identification bit, whose value 0 indicates that the corresponding attribute content is the default value. During decoding, once the content identification bit is read as 0, the attribute identification information of that attribute has been read completely, and the next bit is not consumed. In the limiting case where every attribute in the attribute structure table contains no animation interval characteristic data and equals its default value, only as many bits as there are attributes need to be stored: each item of attribute identification information consists of a single content identification bit with value 0, and the entire attribute content area is empty, which significantly reduces the size of the animation file. In addition, the attribute identification information of a fixed attribute is empty, that of a Boolean attribute occupies exactly 1 bit, and that of a simple animation attribute, discrete animation attribute or multi-dimensional time easing animation attribute occupies 1 or 2 bits.
From the above, the attribute identification information represents the state of each attribute in the attribute structure table corresponding to the animation tag code, and the attribute type determines the number of bits the attribute identification information may occupy. During encoding, the attribute identification information of each attribute is determined from its attribute type and the animation data. During decoding, the number of bits possibly occupied by the attribute identification information of each attribute is first determined from its attribute type, the attribute identification information is then parsed according to its parsing rules, and from it the way to read the attribute content of each attribute is determined.
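A minimal decoding sketch of this rule follows, assuming a hypothetical BitReader with a readBit() method; the enumerator labels mirror the attribute type names used in this text, and the flag field names are our own.

```cpp
enum class AttributeType {
    FixedValue, Value, BitFlag, SimpleProperty,
    DiscreteProperty, MultiDimensionProperty, SpatialProperty
};

struct AttributeFlag {
    bool exists = false;      // content identification bit
    bool animatable = false;  // animation interval identification bit
    bool hasSpatial = false;  // spatial easing parameter identification bit
};

template <typename BitReader>
AttributeFlag readAttributeFlag(BitReader& reader, AttributeType type) {
    AttributeFlag flag;
    if (type == AttributeType::FixedValue) {
        flag.exists = true;  // no flag bits stored; content always present
        return flag;
    }
    flag.exists = reader.readBit();
    // Common and Boolean attributes never carry more than the content bit
    // (for BitFlag, this bit is itself the attribute value).
    if (!flag.exists || type == AttributeType::Value ||
        type == AttributeType::BitFlag)
        return flag;
    flag.animatable = reader.readBit();
    if (flag.animatable && type == AttributeType::SpatialProperty)
        flag.hasSpatial = reader.readBit();
    return flag;
}
```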
It should be noted that the number of attributes in the attribute structure table corresponding to each animation tag code, as well as the data type, attribute type, ordering and default value of each attribute, are all built into the parsing code, that is, hard-coded in the parsing code, and need not be written into the animation file during encoding, which reduces the size of the animation file. In other words, the attribute structure table corresponding to an animation tag code is hard-coded in the parsing code. For example, the terminal may parse an animation file through the PAG SDK and then render and play it, and the information of the attribute structure table corresponding to each animation tag code may be hard-coded in the PAG SDK.
Specifically, when the animation data corresponding to a certain animation tag code needs to be encoded, the terminal first queries the attribute structure table corresponding to that animation tag code. When attribute types exist in the table, the attribute identification information of each attribute must be determined; the animation data of each attribute is then encoded according to the attribute identification information and the dynamic attribute data structure presented by the table to obtain the attribute contents, and the dynamic attribute data block formed by the pairs of attribute identification information and attribute contents is used as the node element content corresponding to the animation tag code. The attribute identification information of each attribute in this dynamic attribute data structure is determined by the corresponding attribute type together with the animation data itself.
In one embodiment, the above-mentioned animation data encoding method further includes:
When no attribute type exists in the attribute structure table corresponding to the animation tag code, the attribute values corresponding to the attributes in the animation data are encoded in sequence according to the data types and attribute order in the attribute structure table, to obtain a basic attribute data block corresponding to the animation tag code.
Specifically, when both the kind and the number of the attributes included in the animation information represented by an animation tag code are fixed, no attribute type is defined for the attributes in the corresponding attribute structure table; each attribute has a basic data type, and the data structure presented by such a table is also called the basic data structure. For example, for the animation tag code "7", which represents the size and color information of the frame, no attribute type exists in the corresponding attribute structure table, and no default values exist for its attributes. During encoding, the node element content following the node element header corresponding to the animation tag code only needs to be encoded according to the basic data structure presented by the attribute structure table, which is determined by the data type and ordering of each attribute in the table.
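The following sketch illustrates this case. The actual attributes of animation tag code "7" are defined by its attribute structure table; the fields and writer helpers here are hypothetical stand-ins.

```cpp
#include <cstdint>

template <typename Writer>
void writeBasicAttributeBlock(Writer& w, float frameWidth, uint8_t red,
                              uint8_t green, uint8_t blue) {
    // No attribute types, so no flag bits and no defaults: every value is
    // written in the fixed order and data type given by the table.
    w.writeFloat(frameWidth);
    w.writeUint8(red);
    w.writeUint8(green);
    w.writeUint8(blue);
}
```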
S806, coding attribute values corresponding to the attributes in the animation data according to the attribute identification information to obtain attribute contents corresponding to the attributes.
As stated above, the attribute identification information represents the state of each attribute in the attribute structure table corresponding to the animation tag code. After the attribute identification information of each attribute has been determined, it is therefore known whether attribute content exists for the attribute, that is, whether its value is the default value. If it is the default value, the corresponding attribute content is empty; if not, the attribute value in the animation data needs to be encoded according to the attribute type and data structure of the attribute to obtain its attribute content. Whether the attribute value includes animation interval characteristic data is likewise determined from the attribute identification information; if it does, the attribute value is encoded according to the data structure corresponding to animation interval characteristic data to obtain the corresponding attribute content.
In one embodiment, the attribute type corresponding to the attribute is a common attribute or a Boolean attribute, and the attribute identification information corresponding to the attribute includes only a content identification bit. Determining the attribute identification information corresponding to each attribute includes: if the attribute value corresponding to the attribute in the animation data is not the default value, encoding the content identification bit into a value indicating that attribute content corresponding to the attribute exists in the dynamic attribute data block; if the attribute value corresponding to the attribute in the animation data is the default value, encoding the content identification bit into a value indicating that no attribute content corresponding to the attribute exists in the dynamic attribute data block.
Specifically, if the attribute type of an attribute is the common attribute (Value), its attribute identification information includes only the content identification bit, which identifies whether attribute content exists for the attribute. If it exists, that is, when the attribute value is not the default value, the content identification bit is encoded into a value indicating that attribute content corresponding to the attribute exists in the dynamic attribute data block, for example 1; when the attribute value is the default value, the content identification bit is encoded into a value indicating that no attribute content corresponding to the attribute exists in the dynamic attribute data block, for example 0. That is, for a common attribute, the attribute identification information is either 1 or 0. In effect, the content identification bit records whether the attribute value equals the default value; whether 1 or 0 is used is determined by the chosen identification bit encoding rule. If the attribute value equals the default value, a single identification bit (for example a content identification bit of 0) directly indicates this, and the actual value need not be stored in the attribute content area, reducing the size of the animation file; if it does not equal the default value, the attribute value must be encoded into the content area corresponding to the attribute within the attribute content area, yielding the attribute content of the attribute.
In the decoding stage, when the attribute value of such an attribute is parsed, the attribute type is determined from the attribute structure table to be the common attribute, so its attribute identification information in the dynamic attribute data block occupies exactly 1 bit. If the attribute identification information read from the dynamic attribute data block is 0, the attribute value is the default value; if 1 is read, the attribute content corresponding to the attribute is then read from the dynamic attribute data block to obtain the attribute value.
If the attribute type of an attribute is the Boolean type (BitFlag), its attribute value is true or false, and, as mentioned above, one bit suffices to represent it. During encoding, the content identification bit directly represents the attribute value: when the value is true, the content identification bit is encoded into a value representing true, for example 1; when the value is false, into a value representing false, for example 0. Correspondingly, when the attribute value is parsed in the decoding stage, the attribute type is determined from the attribute structure table to be the Boolean attribute, so the attribute identification information in the dynamic attribute data block occupies exactly 1 bit, and the bit value is the attribute value itself. That is, for a Boolean attribute, the value of the content identification bit read is used directly as the attribute value, and the attribute content area of the dynamic attribute data block need not be parsed for it.
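A sketch of the two-pass read for a block holding one common attribute and one Boolean attribute follows; the default value (1.0f), the attribute names and the reader helpers are assumptions for illustration.

```cpp
template <typename BitReader>
void decodeTwoAttributes(BitReader& reader) {
    // Pass 1: the attribute identification area, bit by bit.
    bool opacityExists = reader.readBit(); // common attribute: content bit
    bool maskInverted  = reader.readBit(); // Boolean attribute: the bit IS the value
    reader.alignToByte();                  // zero padding ends the flag area
    // Pass 2: the attribute content area, from a whole-byte position.
    float opacity = opacityExists ? reader.readFloat()
                                  : 1.0f;  // default hard-coded in the parser
    (void)opacity; (void)maskInverted;
}
```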
In one embodiment, the attribute type corresponding to the attribute is a fixed attribute, and the attribute identification information is null; encoding attribute values corresponding to all the attributes in the animation data according to the attribute identification information to obtain attribute contents corresponding to all the attributes, wherein the method comprises the following steps: and directly encoding the attribute value corresponding to the attribute in the animation data according to the data type corresponding to the attribute to obtain the attribute content corresponding to the attribute.
Specifically, when the attribute type of an attribute is the fixed attribute (FixedValue), its attribute value always exists, so no attribute identification information is needed to record its state; during encoding, the attribute value is encoded according to the data type of the attribute to obtain its attribute content. For example, in the attribute structure table representing the mask information, the first attribute is the mask identifier (id), whose attribute type is the fixed attribute, meaning the mask identifier is always encoded according to its data type to obtain the corresponding attribute content.
In one embodiment, the attribute type corresponding to the attribute is a simple animation attribute, a discrete animation attribute, a multi-dimensional time easing animation attribute or a spatial easing animation attribute, and the attribute identification information includes at least a content identification bit. Determining the attribute identification information corresponding to each attribute includes: if the attribute value corresponding to the attribute in the animation data is not the default value, encoding the content identification bit into a value indicating that attribute content corresponding to the attribute exists in the dynamic attribute data block; if the attribute value corresponding to the attribute in the animation data is the default value, encoding the content identification bit into a value indicating that no attribute content corresponding to the attribute exists in the dynamic attribute data block.
Specifically, when the attribute type of an attribute is the simple animation attribute (SimpleProperty), discrete animation attribute (DiscreteProperty), multi-dimensional time easing animation attribute (MultiDimensionProperty) or spatial easing animation attribute (SpatialProperty), its attribute identification information includes at least the content identification bit. As before, when the attribute value in the animation data equals the default value, the actual value need not be stored in the dynamic attribute data block; the content identification bit is simply encoded into a value, for example 0, indicating that the attribute value is the default value. When this value is read in the decoding stage, it indicates that no attribute content corresponding to the attribute exists in the dynamic attribute data block, and the default value of the attribute is used as its attribute content.
When the attribute value corresponding to the attribute in the animation data is not the default value, the actual value must be stored in the dynamic attribute data block, and the content identification bit is encoded into a value, for example 1, indicating that attribute content corresponding to the attribute exists in the dynamic attribute data block. When the content identification bit is 1, the attribute identification information further includes at least an animation interval identification bit, that is, it occupies at least 2 bits. The animation interval identification bit indicates whether the attribute value corresponding to the attribute includes animation interval characteristic data. In the decoding stage, when the value of the content identification bit indicates that attribute content corresponding to the attribute exists in the dynamic attribute data block, the next bit after the content identification bit is read as the value of the animation interval identification bit.
In one embodiment, the attribute identification information further includes an animation interval identification bit; when the value of the content identification bit indicates that attribute content corresponding to the attribute exists in the dynamic attribute data block, the method further comprises: if the attribute value comprises the animation interval characteristic data, encoding the animation interval identification bit into a value which indicates that the attribute content corresponding to the attribute stored in the dynamic attribute data block comprises the animation interval characteristic data; if the attribute value does not include the animation interval characteristic data, the animation interval identification bit is encoded into a value indicating that the attribute content corresponding to the attribute stored in the dynamic attribute data block does not include the animation interval characteristic data.
Animation interval characteristic data is the basic unit of animation data, and the attribute values of most attributes include it. When the attribute value of an attribute includes animation interval characteristic data, that data describes the animation data of the attribute over a plurality of animation intervals. Each animation interval corresponds to a segment of the time axis, and the characteristic data of each interval describes how the attribute changes over that segment, in effect describing the attribute value of each animation frame within it. The change relationship may be linear, may follow a Bezier curve, or may be static (that is, the attribute value remains unchanged over the segment).
In one embodiment, encoding attribute values corresponding to each attribute in the animation data according to the attribute identification information to obtain attribute contents corresponding to each attribute, including: if the value of the animation interval identification bit indicates that the attribute content corresponding to the attribute stored in the dynamic attribute data block does not comprise the animation interval characteristic data, encoding the attribute value corresponding to the attribute in the animation data directly according to the data type corresponding to the attribute to obtain the attribute content corresponding to the attribute.
Specifically, the number of animation intervals in the attribute value of an attribute may range from 0 upwards. When it is 0, the attribute includes only a single effective value, meaning the attribute value includes no animation interval characteristic data, and the animation interval identification bit is encoded into a value, for example 0, indicating that the attribute content stored in the dynamic attribute data block includes no animation interval characteristic data. In this case, the attribute identification information of the attribute is determined to be "10", and when the attribute value is encoded, it is encoded directly according to the data type of the attribute to obtain its attribute content.
In one embodiment, encoding attribute values corresponding to each attribute in the animation data according to the attribute identification information to obtain attribute contents corresponding to each attribute, including: and if the value of the animation interval identification bit indicates that the attribute content corresponding to the attribute stored in the dynamic attribute data block comprises the animation interval characteristic data, encoding the animation interval characteristic data corresponding to the attribute according to a data storage structure corresponding to the animation interval characteristic data to obtain the attribute content corresponding to the attribute.
Specifically, when the number of animation intervals in the attribute value is 1 or more, the animation interval identification bit is encoded into a value, for example 1, indicating that the attribute content stored in the dynamic attribute data block includes animation interval characteristic data. In this case, the characteristic data of each animation interval must be recorded according to the data structure of an animation interval, and the characteristic data of all the intervals encoded according to the data storage structure (Property storage structure) for multiple animation intervals to obtain the attribute content of the attribute. The following table gives the information to be recorded for an animation interval:
An animation interval records the start time and end time of the interval, the attribute values corresponding to the start time and the end time, the interpolator type identifying how intermediate attribute values are calculated, time easing parameters, and so on. For an attribute whose attribute type is the multi-dimensional time easing animation attribute, an animation interval may include multi-dimensional time easing parameters; for an attribute whose attribute type is the spatial easing animation attribute, an animation interval may also include spatial easing parameters. startValue and endValue are the start and end values of the animation interval, and startTime and endTime are its start and end times: when the time value equals startTime the attribute value is startValue, when it equals endTime the attribute value is endValue, and at times between startTime and endTime the attribute value is determined by the interpolator type. Based on the characteristics of the animation characteristic data in animation files, the defined interpolator types are shown in the following table:
Combining this with the data structure of an animation interval, it can be seen that when the value of the interpolator type is not 2 (Bezier), the time easing parameters bezierOut and bezierIn of the interval need not be stored. When the attribute type is the discrete animation attribute, the interpolator type must be 3 (Hold), so it is treated as 3 by default and its value need not be stored; this case typically occurs when the underlying data type of the attribute is a Boolean or an enumeration, whose data is discrete (true or false) with no possibility of interpolating between the two. When the attribute type is the multi-dimensional time easing animation attribute, the time easing parameters consist of several Bezier curves, each of which independently controls the easing of one component of the data value. dimensionality indicates how many dimensions the time easing parameter array of each animation interval has, that is, how many Bezier curves there are, which is confirmed according to the data types of startValue and endValue. For example, if the time easing parameters form a 2-dimensional array, dimensionality is 2, and two pairs of Bezier curves respectively control the independent easing of the x axis and the y axis: in the Point data type, the value of the x coordinate is controlled by two Bezier curves, and the value of the y coordinate is likewise controlled by two Bezier curves. In most cases, where the attribute is not a multi-dimensional time easing animation attribute, the dimension need not be judged from the underlying data type, and only one-dimensional time easing parameters are stored by default, that is, dimensionality is always 1. When the attribute type is the spatial easing animation attribute, spatial easing parameters may exist in an animation interval; whether they actually exist in the current interval is judged from the spatial easing parameter identification bit.
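An illustrative per-interval record gathering the fields discussed above is sketched below; the numeric values other than Bezier = 2 and Hold = 3 (which are stated in the text) are assumptions, as is the struct layout itself.

```cpp
#include <cstdint>
#include <vector>

struct Point { float x, y; };

enum class Interpolator : uint8_t {
    Linear = 0,  // value assumed for illustration
    Bezier = 2,  // per the text: easing parameters stored only for Bezier
    Hold   = 3   // per the text: forced/default for discrete attributes
};

template <typename T>
struct AnimationInterval {
    T startValue;                  // attribute value at startTime
    T endValue;                    // attribute value at endTime
    uint64_t startTime = 0;        // equals the previous interval's endTime
    uint64_t endTime = 0;
    Interpolator interpolator = Interpolator::Linear;
    std::vector<Point> bezierOut;  // time easing: dimensionality control points
    std::vector<Point> bezierIn;
    Point spatialIn{0, 0};         // spatial easing (SpatialProperty only)
    Point spatialOut{0, 0};
};
```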
When the animation interval identification bit of an attribute is 1 and the attribute type is the spatial easing animation attribute, the attribute identification information further includes a spatial easing parameter identification bit, that is, the attribute identification information occupies 3 bits. The spatial easing parameter identification bit indicates whether the attribute value corresponding to the attribute includes spatial easing parameters. In the decoding stage, when the value read for the animation interval identification bit indicates that the attribute content stored in the dynamic attribute data block includes animation interval characteristic data, the next bit after the animation interval identification bit is read as the value of the spatial easing parameter identification bit.
In one embodiment, the attribute type corresponding to the attribute is a spatial easing animation attribute, and the attribute identification information further includes a spatial easing parameter identification bit. When the value of the animation interval identification bit indicates that the attribute content corresponding to the attribute stored in the dynamic attribute data block includes animation interval characteristic data, the method further includes: if the animation interval characteristic data includes spatial easing parameters, encoding the spatial easing parameter identification bit into a value indicating that the attribute content corresponding to the attribute stored in the dynamic attribute data block includes spatial easing parameters; if the animation interval characteristic data does not include spatial easing parameters, encoding the spatial easing parameter identification bit into a value indicating that the attribute content corresponding to the attribute stored in the dynamic attribute data block does not include spatial easing parameters.
Spatial easing parameters are parameters for describing complex dynamic effects. Only when the attribute type of an attribute is the spatial easing animation attribute may its attribute identification information occupy three bits, and the spatial easing parameter identification bit is present only when both the content identification bit and the animation interval identification bit are 1. For other attribute types, there is no need to judge whether a third identification bit exists, since no spatial easing parameters are stored.
In one embodiment, encoding the attribute values corresponding to each attribute in the animation data according to the attribute identification information to obtain the attribute contents corresponding to each attribute includes: when the value of the spatial easing parameter identification bit indicates that the attribute content corresponding to the attribute stored in the dynamic attribute data block includes spatial easing parameters, encoding the animation interval characteristic data corresponding to the attribute according to the data storage structure corresponding to animation interval characteristic data, to obtain the attribute content, including spatial easing parameters, corresponding to the attribute.
Specifically, if the animation interval characteristic data includes spatial easing parameters, the spatial easing parameter identification bit is encoded into a value, for example 1, indicating that the attribute content stored in the dynamic attribute data block includes spatial easing parameters. In this case, the attribute identification information of the attribute is determined to be "111", and when the attribute value is encoded, the animation interval characteristic data corresponding to the attribute is encoded according to the data storage structure corresponding to animation interval characteristic data to obtain the attribute content.
In one embodiment, encoding the attribute values corresponding to each attribute in the animation data according to the attribute identification information to obtain the attribute contents corresponding to each attribute includes: when the value of the spatial easing parameter identification bit indicates that the attribute content corresponding to the attribute stored in the dynamic attribute data block does not include spatial easing parameters, encoding the animation interval characteristic data corresponding to the attribute according to the data storage structure corresponding to animation interval characteristic data, to obtain the attribute content, not including spatial easing parameters, corresponding to the attribute.
Specifically, if the animation interval characteristic data does not include spatial easing parameters, the spatial easing parameter identification bit is encoded into a value, for example 0, indicating that the attribute content stored in the dynamic attribute data block includes no spatial easing parameters. In this case, the attribute identification information of the attribute is determined to be "110", and when the attribute value is encoded, the animation interval characteristic data is encoded according to the data storage structure corresponding to animation interval characteristic data, the resulting attribute content including no spatial easing parameters.
In one embodiment, the fields included in the data storage structure corresponding to animation interval characteristic data include the number of animation intervals, the interpolator type of each animation interval, and the start and end times of each animation interval, and further include the start value, end value, time easing parameters and spatial easing parameters of the attribute for each animation interval.
The data storage structure corresponding to the animation interval characteristic data is shown in the following table:
As can be seen from the data storage structure (also called the Property storage structure) of animation interval characteristic data given in the table above, when the animation characteristic data of multiple animation intervals is actually stored, one kind of data is stored for every animation interval before the next kind is stored, rather than storing all the data of one interval before all the data of the next. The advantage is that similar data is stored together and can therefore be compressed together, reducing the storage space. For example, an interpolator type value typically occupies only 2 bits, and storing the interpolator types together reduces the extra space wasted on byte alignment. For another example, the startValue and startTime of the next animation interval always equal the endValue and endTime of the previous interval, so storing these fields together allows the repeated boundary data between intervals to be skipped rather than stored, compressing the storage further.
It should be noted that when decoding data stored in the data storage structure corresponding to animation characteristic data, a byte alignment is performed after the data values of each field have been read: the remaining unread bits are skipped, and the data values of the next field are read from the next whole-byte position.
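The following sketch contrasts this column-wise layout with the naive interval-by-interval one: each field is written for all intervals before the next field, with a byte alignment after each field, and shared boundary values written only once. The writer helpers are hypothetical.

```cpp
#include <vector>

template <typename Writer, typename Interval>
void storeIntervalsColumnWise(Writer& w, const std::vector<Interval>& list) {
    for (const auto& k : list)
        w.writeInterpolator(k.interpolator);  // 2-bit values packed together
    w.alignToByte();                          // decode skips the padding bits
    // n intervals share n + 1 boundary times, so duplicates are skipped:
    w.writeTime(list.front().startTime);
    for (const auto& k : list) w.writeTime(k.endTime);
    w.alignToByte();
    // likewise n + 1 boundary values instead of 2 * n:
    w.writeValue(list.front().startValue);
    for (const auto& k : list) w.writeValue(k.endValue);
    w.alignToByte();
    // timeEaseList and spatialEaseList would follow, also field by field.
}
```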
The above table indicates that the start and end values (PropertyValue) of an animation interval may have different data types, and different data types correspond to different storage modes. In the PAG file there are 15 types of animation interval start and end values in total, and the encoding of each type is shown in the following table:
As can be seen from the above table, the data type of the start and end values of an animation interval may be any one of the 15 types above. In the storage structure for animation interval characteristic data given earlier, valueList stores the start and end values of multiple animation intervals, and its data type is PropertyValueList. A PropertyValueList is essentially converted into individual PropertyValue items for storage. For example, the storage structure of Value<Float>[] is a sequence of consecutive Value<Float> items, and since the data type of Value<Float> is in turn the Float type, Value<Float>[] is essentially a set of consecutive Float values. Other types are likewise converted for storage; for example, the data type Value<Uint8>[] is converted into the consecutive unsigned integer encoding (UB[nBits]) for storage. PropertyValueList is stored as shown in the following table:
In the storage structure for animation interval characteristic data given earlier, timeEaseList stores the time easing parameters of multiple animation intervals, and its data type is TimeEaseValueList. TimeEaseValueList is stored as shown in the following table:
Since the time easing parameters of each animation interval are a pair of control points, and the data type of each control point is the Point type, the time easing parameters of multiple animation intervals can be encoded as shown in fig. 10. Referring to fig. 10, in the storage of TimeEaseValueList, the number of bits timeEaseNumBits occupied by each element of the time easing parameter array is stored first, and then the time easing parameters of each animation interval are encoded in turn; dimensionality indicates how many dimensions the bezierIn and bezierOut arrays of each interval have. When dimensionality equals 1, the time easing parameters of each animation interval are 1 pair, bezierOut(x, y) and bezierIn(x, y), which is two Point-type values (4 Float values), each Float value being encoded as SB[timeEaseNumBits]. When dimensionality equals 2, the parameters include 2 pairs of bezierOut and bezierIn values, 8 Float values in total. Apart from the timeEaseNumBits value, the whole TimeEaseValueList stores numFrames × dimensionality × 4 data values.
It should be noted that during decoding, if the interpolator type of a key frame is not Bezier, the reading of the TimeEase-related data of that frame is skipped directly. The bezierOut and bezierIn coordinates are stored consecutively in pairs: the bezierOut coordinates are encoded first, followed by the bezierIn coordinates.
When decoding a timeEaseList, timeEaseNumBits is obtained first, and then numFrames × dimensionality × 4 reads are performed to obtain the time easing parameter array of the attribute, each value read from timeEaseNumBits bits being one coordinate of the bezierOut and bezierIn points of an animation interval.
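A decoding sketch for timeEaseList following this order is given below. readUB and readSB are hypothetical unsigned/signed bit-field reads, and the 5-bit width of timeEaseNumBits is an assumption made by analogy with spatialEaseNumBits described later.

```cpp
#include <vector>

struct EasePoint { float x, y; };

template <typename BitReader>
std::vector<EasePoint> readTimeEaseList(BitReader& reader, int numFrames,
                                        int dimensionality) {
    int numBits = static_cast<int>(reader.readUB(5));  // timeEaseNumBits
    std::vector<EasePoint> points;
    points.reserve(static_cast<size_t>(numFrames) * dimensionality * 2);
    // numFrames * dimensionality * 4 float reads form half as many points;
    // within each interval the bezierOut pair precedes the bezierIn pair.
    for (int i = 0; i < numFrames * dimensionality * 2; i++) {
        EasePoint p;
        p.x = static_cast<float>(reader.readSB(numBits));
        p.y = static_cast<float>(reader.readSB(numBits));
        points.push_back(p);
    }
    return points;
}
```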
Similarly, in the storage structure for animation interval characteristic data given earlier, spatialEaseList stores the spatial easing parameters of multiple animation intervals, and its data type is SpatialEaseValueList. SpatialEaseValueList is stored as shown in the following table:
Since the spatial easing parameters of each animation interval are a pair of control points, and the data type of each control point is the Point type, the spatial easing parameters of multiple animation intervals can be encoded as shown in fig. 11. Referring to fig. 11, in the storage of SpatialEaseValueList, consecutive bits first identify whether each animation interval contains spatial easing parameters, then the number of bits spatialEaseNumBits occupied by each element of the spatial easing parameter array is encoded, and then the spatial easing parameters of each animation interval are encoded in turn. In the decoding stage, reading the spatial easing parameters depends on the spatialFlagList read out and on spatialEaseNumBits. spatialFlagList is twice as long as the number of animation intervals, because the spatial easing parameters of each interval contain two Point-type data. The values in spatialFlagList indicate in turn whether spatialIn and spatialOut exist for each animation interval; if not, the coordinates of spatialIn or spatialOut take the default value (0, 0). The data read is integer and must be multiplied by the predefined spatial precision factor and converted to Float to obtain the x and y values of the points.
Note that spatialInFlag and spatialOutFlag are stored sequentially in pairs, spatialInFlag first and spatialOutFlag second. The value of floatNum, that is, the length of the spatial easing parameter array, depends on the spatialInFlag and spatialOutFlag identifications. If spatialInFlag is 1, the x and y coordinates of spatialInPoint are read next; if it is 0, the x and y coordinates of that point take their default values and are not read. spatialIn and spatialOut are stored sequentially, the spatialIn coordinates first and the spatialOut coordinates second.
When decoding a spatialEaseList, numFrames × 2 bits are read first to obtain spatialFlagList, then 5 bits are read to obtain spatialEaseNumBits, and then, according to the values in spatialFlagList, data is read spatialEaseNumBits bits at a time to obtain the spatial easing parameters of the animation intervals that have them.
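A decoding sketch of this procedure follows; the precision factor is passed in as a parameter because its exact value is defined elsewhere in the format, and the reader helpers are hypothetical.

```cpp
#include <vector>

struct SpatialPoint { float x = 0, y = 0; };

template <typename BitReader>
std::vector<SpatialPoint> readSpatialEaseList(BitReader& reader, int numFrames,
                                              float spatialPrecision) {
    std::vector<bool> flags(static_cast<size_t>(numFrames) * 2);
    for (size_t i = 0; i < flags.size(); i++)
        flags[i] = reader.readBit();  // spatialInFlag, spatialOutFlag pairs
    int numBits = static_cast<int>(reader.readUB(5));  // spatialEaseNumBits
    std::vector<SpatialPoint> points(flags.size());
    for (size_t i = 0; i < flags.size(); i++) {
        if (!flags[i]) continue;      // keep default (0, 0), read nothing
        points[i].x = reader.readSB(numBits) * spatialPrecision;
        points[i].y = reader.readSB(numBits) * spatialPrecision;
    }
    return points;
}
```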
S808, according to the attribute sequence corresponding to each attribute in the attribute structure table, the attribute identification information and the attribute content corresponding to each attribute are sequentially stored, and the dynamic attribute data block corresponding to the animation tag code is obtained.
From the previous analysis, the following can be concluded:
When the attribute type is the common attribute: if the attribute identification information of the attribute is "1", the attribute value is encoded directly according to the data type of the attribute to obtain the attribute content; if it is "0", the attribute value is the default value, no encoding is needed, and no corresponding attribute content exists in the dynamic attribute data block.
When the attribute type is the Boolean attribute: if the attribute identification information is "1", the attribute value is true; if it is "0", the attribute value is false. In neither case does the attribute value need to be encoded, and no corresponding attribute content exists in the dynamic attribute data block.
When the attribute type is the fixed attribute: there is no attribute identification information, and the attribute value is encoded directly according to its data type to obtain the attribute content.
When the attribute type is the simple animation attribute, discrete animation attribute or multi-dimensional time easing animation attribute: if the attribute identification information is "0", the attribute value is the default value, no encoding is needed, and no corresponding attribute content exists in the dynamic attribute data block; if it is "10", the attribute value is not the default value and includes no animation interval characteristic data, and it is encoded normally according to the data type of the attribute to obtain the attribute content; if it is "11", the attribute value is not the default value and includes animation interval characteristic data, and it is encoded according to the data storage structure of animation interval characteristic data to obtain the attribute content.
When the attribute type is the spatial easing animation attribute: if the attribute identification information is "0", the attribute value is the default value, no encoding is needed, and no corresponding attribute content exists in the dynamic attribute data block; if it is "10", the attribute value is not the default value and includes no animation interval characteristic data, and it is encoded normally according to the data type of the attribute; if it is "110", the attribute value is not the default value, includes animation interval characteristic data and includes no spatial easing parameters, and it is encoded according to the data storage structure of animation interval characteristic data to obtain attribute content without spatial easing parameters; if it is "111", the attribute value is not the default value, includes animation interval characteristic data and includes spatial easing parameters, and it is encoded according to the data storage structure of animation interval characteristic data to obtain attribute content including spatial easing parameters.
The attribute structure table corresponding to an animation tag code includes multiple attributes. For each attribute, the attribute value in the animation data is encoded according to its attribute identification information and attribute type to obtain the corresponding attribute content. Finally, the attribute identification information of all the attributes is stored together in attribute order to form the attribute identification area, the attribute contents of all the attributes are stored together in attribute order to form the attribute content area, and the attribute identification area and attribute content area together constitute the dynamic attribute data block of the animation tag code. In essence, the dynamic attribute data block is the value of the node element content (TagBody) of the node element (Tag) in which the animation tag code (TagCode) resides.
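An encoding-side sketch of the ladder summarized above is given below: the flag bits written for one attribute, given its type and state. The enum and writer mirror the hypothetical ones in the earlier decoding sketch.

```cpp
enum class AttrType { FixedValue, Value, BitFlag, SimpleProperty,
                      DiscreteProperty, MultiDimensionProperty, SpatialProperty };

template <typename BitWriter>
void writeAttributeFlag(BitWriter& w, AttrType type, bool contentExists,
                        bool hasIntervals, bool hasSpatialEasing) {
    if (type == AttrType::FixedValue) return;  // "": no flag bits at all
    w.writeBit(contentExists);                 // "0" / "1" (for BitFlag, the value itself)
    if (!contentExists || type == AttrType::Value || type == AttrType::BitFlag)
        return;                                // at most one bit in these cases
    w.writeBit(hasIntervals);                  // "10" / "11"
    if (hasIntervals && type == AttrType::SpatialProperty)
        w.writeBit(hasSpatialEasing);          // "110" / "111"
}
```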
According to the above animation data encoding method, an animation tag code identifies a group of attributes, and the attribute structure table describes the data structure of the group of attributes identified by the animation tag code. When the kind or number of the attribute values of the group is uncertain, or when the attribute values contain a large amount of redundancy, the data structure of the dynamic attribute data block is introduced to avoid inflating the animation file with a large number of identifier fields describing the kind or number of the attributes; the identification information is compressed maximally, greatly reducing the volume occupied by the target animation file. Specifically, after the animation engineering file is obtained, the group of attributes included in the attribute structure table corresponding to an animation tag code is obtained from it, and the attribute identification information in the dynamic attribute data block describes the states of those attributes. When attribute types exist in the attribute structure table, the attribute identification information of each attribute is determined according to the animation data, the attribute value of each attribute is then dynamically encoded according to the attribute identification information to obtain the corresponding attribute content, and the attribute identification information and attribute contents of all the attributes in the table are combined to obtain the dynamic attribute data block of the animation tag code, which significantly reduces the space occupied by the animation file.
In one embodiment, the animation tag code is the bitmap composition tag code, and obtaining the animation data corresponding to each animation tag code from the animation engineering file includes: playing the animation engineering file; sequentially capturing the playing pictures corresponding to the animation engineering file to obtain the bitmap image sequence corresponding to the animation engineering file; and processing the bitmap image sequence according to the bitmap sequence frame export mode to obtain the picture binary data corresponding to the bitmap composition tag code.
Specifically, when the animation engineering file is exported in the bitmap sequence frame mode to obtain the animation file, the terminal captures each frame of the playing picture of the animation engineering file as a bitmap image during playback, obtaining the corresponding bitmap image sequence. In one embodiment, each frame of the animation engineering file can be captured through the screenshot function of the AE SDK to obtain the bitmap image of each frame, and thereby the bitmap image sequence of the whole animation engineering file. Processing the bitmap image sequence according to the bitmap sequence frame export mode to obtain the picture binary data corresponding to the bitmap composition tag code includes: comparing each bitmap image in the bitmap image sequence with the corresponding key bitmap image to obtain the difference pixel area of the bitmap image; and, when the bitmap image is a non-key bitmap image, encoding the difference pixel area in the picture encoding mode to obtain the encoded picture corresponding to the bitmap image. The encoded pictures are the picture binary data corresponding to each bitmap image.
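A sketch of the difference-pixel-area computation described above is given below, under the assumption that frames are width × height RGBA pixel buffers; the Rect type and the buffer layout are our own, not the SDK's.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Rect { int x = 0, y = 0, width = 0, height = 0; };

// Bounding box of all pixels that differ between a frame and its key frame.
Rect diffRegion(const std::vector<uint32_t>& frame,
                const std::vector<uint32_t>& keyFrame, int width, int height) {
    int minX = width, minY = height, maxX = -1, maxY = -1;
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            if (frame[y * width + x] != keyFrame[y * width + x]) {
                minX = std::min(minX, x); maxX = std::max(maxX, x);
                minY = std::min(minY, y); maxY = std::max(maxY, y);
            }
    if (maxX < 0) return {};  // identical frames: empty region
    return {minX, minY, maxX - minX + 1, maxY - minY + 1};
}
```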
In essence, when the animation engineering file is exported in the bitmap sequence frame export mode to obtain the animation file, the animation data corresponding to the bitmap composition tag code (BitmapCompositionBlock) needs to be obtained. Besides the above picture binary data, this animation data includes some basic attribute data of the animation; the attribute structure table defined for the bitmap composition tag code is as follows:
Fields | Data type | Remarks
CompositionID | EncodedUint32 | Identification
CompositionAttributes | CompositionAttributes Tag | Composition basic attributes
bitmapsequences | BitmapSequence Tag[] | Bitmap image sequence
In the above table, CompositionAttributes is itself an animation tag code, and its corresponding attribute structure table is shown in the following table:
bitmapSequence is likewise an animation tag code, and its corresponding attribute structure table is shown in the following table:
As can be seen from the above three tables, the node element in which the bitmap composition tag code resides is a nested node element, in which the node element of CompositionAttributes and the node element of bitmapSequence are nested. Besides the picture binary data obtained in the picture encoding mode, the animation data corresponding to the bitmap composition tag code includes the composition identifier (CompositionID), the composition basic attributes (CompositionAttributes) and the bitmap image sequences (bitmapSequences). The composition basic attributes include the playing duration, frame rate and background color of the composition; a bitmap image sequence includes the width, height and number of the bitmap images, the key frame identifiers, and the picture binary data sequence (bitmapRect[frameCount]); and the picture binary data sequence includes, for each bitmap image, the width and height of its difference pixel area, the coordinates (x, y) of the starting pixel of that area, and the corresponding picture binary data stream (fileBytes).
Fig. 12 shows the encoding structure of the animation data corresponding to the bitmap sequence frame encoding mode. Referring to fig. 12, during encoding, the value of the composition identifier CompositionID is first encoded according to its data type in the attribute structure table corresponding to the bitmap composition tag code; the basic attribute data is then encoded in turn according to the attribute structure table of CompositionAttributes; next, the width, height and frame rate framerate of the whole animation are encoded; then nBits bits record whether each following frame is a key frame, and the total number bitmapCount of bitmap images is encoded; finally, the picture binary data of each bitmap image in the bitmap image sequence (bitmapSequences) is encoded in turn, that is, the coordinates (x, y) of the starting pixel of each difference pixel area and the corresponding picture binary data fileBytes.
In one embodiment, the animation tag code is the video composition tag code, and obtaining the animation data corresponding to each animation tag code from the animation engineering file includes: playing the animation engineering file; sequentially capturing the playing pictures corresponding to the animation engineering file to obtain the bitmap image sequence corresponding to the animation engineering file; and processing the bitmap image sequence according to the video sequence frame export mode to obtain the picture binary data corresponding to the video composition tag code.
Processing the bitmap image sequence according to the video sequence frame export mode to obtain the picture binary data corresponding to the video composition tag code includes: dividing each bitmap image into a color channel bitmap and a transparency channel bitmap; compositing the color channel bitmap and the transparency channel bitmap to obtain a composited bitmap; and encoding the composited bitmap in the video encoding mode to obtain the encoded picture corresponding to the bitmap image. The encoded pictures are the picture binary data corresponding to each bitmap image.
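The sketch below illustrates one way to composite the two channels into a single opaque bitmap so that a standard video codec (which carries no alpha) can encode it. The side-by-side layout is our assumption for illustration; in the actual file the relative placement of the transparency channel is described by the alphaStartX and alphaStartY fields mentioned later.

```cpp
#include <cstdint>
#include <vector>

std::vector<uint32_t> composeAlphaSideBySide(const std::vector<uint32_t>& argb,
                                             int width, int height) {
    std::vector<uint32_t> out(static_cast<size_t>(width) * 2 * height);
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            uint32_t px = argb[y * width + x];
            uint32_t a = px >> 24;                      // assuming ARGB pixel order
            out[y * width * 2 + x] = px | 0xFF000000u;  // left: color, forced opaque
            out[y * width * 2 + width + x] =            // right: alpha as grayscale
                0xFF000000u | (a << 16) | (a << 8) | a;
        }
    return out;
}
```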
When the animation engineering file is exported in the video sequence frame export mode to obtain the animation file, the animation data corresponding to the video composition tag code (VideoCompositionBlock) needs to be obtained. The attribute structure table defined for the video composition tag code is as follows:
Fields | Field type | Remarks
CompositionID | EncodedUint32 | Unique identifier
hasAlpha | Bool | Whether or not there is an Alpha channel
CompositionAttributes | CompositionAttributes Tag |
videosequences | VideoSequence Tag[] |
VideoSequence is likewise an animation tag code, and its corresponding attribute structure table is shown in the following table:
From the attribute structure table of the video composition tag code given above, it can be seen that the node element in which the video composition tag code resides is a nested node element, in which the node element of CompositionAttributes and the node element of videoSequence are nested. Besides the picture binary data obtained in the video encoding mode, the animation data corresponding to the video composition tag code includes the composition identifier (CompositionID), whether a transparency channel exists (hasAlpha), the composition basic attributes (CompositionAttributes) and the video frame sequences (videoSequence). The composition basic attributes include the playing duration, frame rate and background color of the composition; a video frame sequence includes the width and height of the bitmap images, the position information of the transparency channel, the parameters of the video encoding mode, the key frame identifiers and the picture binary data sequence (videoFrames); and the picture binary data sequence includes the time stamp and picture binary data stream (fileBytes) of each video frame.
Fig. 13 shows the encoding structure of the animation data corresponding to the video sequence frame encoding mode. Referring to fig. 13, during encoding, the value of the composition identifier CompositionID is first encoded according to its data type in the attribute structure table corresponding to the video composition tag code, and whether a transparency channel exists (hasAlpha) is encoded as a Boolean; the basic attribute data is then encoded in turn according to the attribute structure table of CompositionAttributes; next, the width, height and frame rate framerate are encoded, followed by the transparency channel start position information alphaStartX and alphaStartY, then the video encoding parameters sps and pps, and then the total number count of bitmap images; finally, the frame number frame and the picture binary data fileBytes of each composited bitmap in the video frame sequence (videoSequence) are encoded in turn.
In one embodiment, the above animation data encoding method further includes: encoding the file header information in sequence according to the file header organization structure to obtain the file header encoded information; encoding the animation tag code, the data block length and the data block in sequence according to the node element organization structure to obtain the node element encoded data, where the data blocks include dynamic attribute data blocks and basic attribute data blocks; and organizing the file header encoded information and the node element encoded data according to the target file structure to obtain the target animation file.
The target animation file here is a PAG file, and the file organization structure of the PAG file includes a file header (FileHeader) and node elements (Tag). When the data of the whole animation file is encoded, the file header information is therefore encoded according to the organization structure of the file header to obtain the file header encoded information. The node element organization structure includes the animation tag code, the data block length and the data block, so for the animation data obtained from the animation engineering file, the node element encoded data is obtained after the animation data corresponding to each animation tag code is encoded in sequence according to the node element organization structure. The data blocks here include dynamic attribute data blocks and basic attribute data blocks. After the file header encoded information and the node element encoded data are obtained, they are organized according to the file structure of the target animation file to obtain the target animation file.
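A minimal sketch of writing one node element follows. It assumes the 16-bit header packs the 10-bit animation tag code in the upper bits and the 6-bit length in the lower bits, stored little-endian; the short/long header layout itself is described in the decoding section later in this document, but the exact bit packing and endianness are assumptions here.

```python
import struct

def write_tag(buf: bytearray, tag_code: int, body: bytes) -> None:
    if len(body) < 0x3F:                              # short header: length fits in 6 bits
        buf += struct.pack('<H', (tag_code << 6) | len(body))
    else:                                             # long header: 0x3F sentinel + Uint32 length
        buf += struct.pack('<H', (tag_code << 6) | 0x3F)
        buf += struct.pack('<I', len(body))
    buf += body                                       # node element content (TagBody)

def write_end(buf: bytearray) -> None:
    write_tag(buf, 0, b'')                            # End: tag code 0, empty content
```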
It should be noted that the dynamic attribute data block occurs only when the animation engineering file is exported in the vector export mode, whereas the basic attribute data block can occur in the vector export mode, the bitmap sequence frame export mode and the video sequence frame export mode. Different animation tag codes have different attribute structure tables, and the attribute types of the attributes in those tables differ, which determines whether the animation data corresponding to an animation tag code is encoded as a basic attribute data block or as a dynamic attribute data block.
The attribute structure tables of the 37 animation tag codes listed above are described below.
End
Fields Data type Remarks
End Uint16 Tag end mark
End is a special animation tag code used to mark the end of a node element; when this tag is read during decoding, the content of the current node element has been read completely. If the node element is nested, i.e. contains child node elements, reading jumps out of the current node element and continues with the content of the outer node element.
FontTables
Fields Data type Remarks
count EncodedUint32 Font number
fontData FontData[] Font array
FontTables is a collection of font information.
VectorCompositionBlock
VectorCompositionBlock is a collection of animation data exported in the vector export mode; it may contain simple vector graphics data, or further contain one or more VectorComposition.
CompositionAttributes
Fields Data type Remarks
width EncodedInt32 Width of layer
height EncodedInt32 High of layer
duration EncodedUint64 Duration of time
frameRate Float Frame rate
backgroundColor Color Background color
CompositionAttributes is the basic attribute information of a composition.
ImageTables
Fields Data type Remarks
count EncodedInt32 Number of pictures
images ImageBytesTag[] Picture array
ImageTables is a collection of picture information.
LayerBlock
LayerBlock is a collection of layer information.
LayerAttributes
LayerAttributes denotes information related to layer properties.
SolidColor
Fields Data type Remarks
solidColor Color Color value
width EncodedInt32 Width
height EncodedInt32 Height
SolidColor denotes the color and size information of a solid color layer.
TextSource
TextSource denotes text information, including: text, font, size, color, etc.
TextPathOption tag
Fields Field type Attribute type Default value Remarks
path Mask Tag Value null Path information
reversedPath Bool DiscreteProperty false Whether or not to reverse the path
perpendicularToPath Bool DiscreteProperty false Vertical path
forceAlignment Bool DiscreteProperty false Forced alignment
firstMargin Float SimpleProperty 0.0f First margin
lastMargin Float SimpleProperty 0.0f Last margin
TextPathOption denotes text drawing information, including the drawing path and the margins on each side.
TextMoreOption tag
TextMoreOption denotes other information of the text.
ImageReference tag
Fields Data type Remarks
id EncodedUint32 Picture identification
ImageReference denotes a picture index; it stores the ID of a picture, and the real picture information is indexed by this ID.
CompositionReference tag
Fields Data type Remarks
id EncodedUint32 Identification mark
compositionStartTime Int64 Start time
CompositionReference denotes a layer composition index; it stores the ID of a layer composition, and the real layer composition is indexed by this ID.
Transform2D tag
Fields Data type Attribute type Default value Remarks
anchorPoint Point Value (0,0) Anchor point
position Point Value (0,0) Position information
xPosition Float Value 0.0 X-axis offset
yPosition Float Value 0.0 Y-axis offset
scale Point Value (0,0) Scaling
rotation Float Value 0.0 Rotation
opacity Uint8 Value 255 Transparency (0-255)
Transform2D represents 2D transformation information, including: anchor point, scaling, rotation, x-axis offset, y-axis offset, etc.
Mask tag
Mask represents Mask information.
ShapeGroup tag
Fields Data type Attribute type Default value Remarks
blendMode Enumeration (Uint8) Value BlendMode::Normal Image blending mode
transform ShapeTransform Value null Transformation
elements Shape[] Value null Elements
ShapeGroup denotes shape group information.
Rectangle tag
Fields Data type Attribute type Default value Remarks
reversed Bool BitFlag false Whether the rectangle is reversed
size Point MultiDimensionProperty (100,100) Rectangle width and height
position Point SpatialProperty (0,0) Rectangle position
roundness Float SimpleProperty 0.0f Corner radius
Rectangle represents rectangle information.
Ellipse tag
Fields Data type Attribute type Default value Remarks
reversed Bool BitFlag false Whether the ellipse is reversed
size Point MultiDimensionProperty (100,100) Width and height
position Point SpatialProperty (0,0) Position
Ellipse denotes ellipse information.
PolyStar tag
PolyStar denotes information of a polygon star.
ShapePath tag
Fields Data type Attribute type Default value Remarks
shapePath Path SimpleProperty null Path shape
ShapePath denotes the path information of a shape.
Fill tag
Fill represents information related to filling.
Stroke tag
Stroke represents information related to the stroking.
GradientFill tag
GradientFill denotes color gradient fill information.
GradientStroke tag
GradientStroke represents gradient information of the stroke.
MergePaths tag
Fields Data type Attribute type Default value Remarks
mode Enumeration (Uint8) Value MergePathsMode::Add Merge mode
MergePaths denotes information related to merging paths.
TrimPaths tag
TrimPaths denotes information related to the path generation effect.
Repeater tag
Repeater represents Repeater related information.
RoundCorners tag
Fields Data type Attribute type Default value Remarks
radius Float SimpleProperty 10.0f Radius
RoundCorners denotes information about the rounded corners.
Performance tag
Fields Data type Attribute type Default value Remarks
renderingTime Int64 Value 0 Rendering time
imageDecodingTime Int64 Value 0 Picture decoding time
presentingTime Int64 Value 0 Presentation time
graphicsMemory Int64 Value 0 Rendering memory
Performance represents the Performance parameters of the animation.
DropShadowStyle tag
Fields Data type Attribute type Default value Remarks
blendMode Enumeration (Uint8) DiscreteProperty BlendMode::Normal Image blending mode
color Color SimpleProperty Black Color
opacity Uint8 SimpleProperty 191 Transparency (0-255)
angle Float SimpleProperty 120.0f Angle
distance Float SimpleProperty 5.0f Distance
size Float DiscreteProperty 5.0f Size
DropShadowStyle denotes information related to adding shadows.
StrokeStyle tag
StrokeStyle is information related to a stroked pattern.
TintEffect tag
TintEffect is information related to the tint effect.
FillEffect tag
Fields Data type Attribute type Default value Remarks
allMasks Bool BitFlag false All masks
fillMask Mask Tag Value null Mask to be filled
color Color SimpleProperty Red Color of
invert Bool DiscreteProperty false Whether to invert the mask
horizontalFeather Float SimpleProperty 0.0f Horizontal edge feathering
verticalFeather Float SimpleProperty 0.0f Vertical edge feathering
opacity Uint8 SimpleProperty 255 Transparency (0-255)
FillEffect is information related to the fill effect.
StrokeEffect tag
StrokeEffect is information related to the stroke effect.
TritoneEffect tag
TritoneEffect represents the highlight, midtone and shadow effect.
DropShadowEffect tag
DropShadowEffect denotes drop shadow effect information.
FillRadialWipeEffect tag
FillRadialWipeEffect denotes radial wipe fill effect information.
DisplacementMapEffect tag
DisplacementMapEffect denotes displacement map layer information.
BitmapCompositionBlock
BitmapCompositionBlock denotes a composition exported in the bitmap sequence frame mode.
CompositionAttributes
Fields Data type Remarks
duration EncodedUint64 Duration of play
frameRate Float Frame rate
backgroundColor Color Background color
CompositionAttributes here denotes the basic attribute information of a composition exported in the bitmap sequence frame mode.
BitmapSequence
BitmapSequence denotes a bitmap image sequence.
ImageBytes
Fields Field type Remarks
id EncodedUint32 Identification mark
width EncodedInt32 Picture width
height EncodedInt32 Picture height
anchorX EncodedInt32 X-axis coordinate
anchorY EncodedInt32 Y-axis coordinate
fileBytes ByteData Picture byte stream
ImageBytes denotes the encoded picture attribute information.
ImageBytes2
Fields Field type Remarks
id EncodedUint32 Identification mark
width EncodedInt32 Picture width
height EncodedInt32 Picture height
anchorX EncodedInt32 X-axis coordinate
anchorY EncodedInt32 Y-axis coordinate
fileBytes ByteData Picture byte stream
scaleFactor Float Scaling (0-1.0)
ImageBytes2 stores picture-related attribute information and has one more attribute than ImageBytes: scaleFactor.
ImageBytes3
Compared with ImageBytes, ImageBytes3 simply strips the transparent border; the attributes are unchanged.
VideoCompositionBlock
Fields Field type Remarks
CompositionID EncodedUint32 Unique identifier
hasAlpha Bool Whether or not there is Alpha channel
CompositionAttributes CompositionAttributes Tag Synthesizing base attributes
videosequences VideoSequenceTag[] Video frame sequence
VideoCompositionBlock denotes a composition exported in the video sequence frame mode.
VideoSequence
VideoSequence denotes a sequence of video frames.
As shown in fig. 14, in one embodiment, an animation data decoding method is provided. The present embodiment is mainly exemplified by the application of the method to the terminal 110 in fig. 1. Referring to fig. 14, the animation data decoding method specifically includes the steps of:
S1402, obtaining the animation tag code.
When the PAG file is parsed, the file header information needs to be decoded according to the file header organization structure, and the remaining node elements need to be decoded according to the node element organization structure. The node end symbol (End) is a special node element used to indicate that all node elements of the current level have been read and no more node elements follow. Therefore, when the End symbol (value 0) is read, the content of the current node element has been read completely, and the data read next belongs to the next node element. If the node element is a nested node element containing several child node elements, the data read after the node end symbol belongs to the outer node element, until the node end symbol corresponding to the outermost node element is reached, or reading has covered the byte stream length of the outermost node element's content, which indicates that no more content of the outermost node element needs to be read.
According to the node element organization structure described above, each node element includes a node element header (TagHeader) and node element content (TagBody), and the node element header includes the animation tag code (TagCode) and the byte stream length (Length) of the node element content. The node element header has a short type and a long type; for both, the first 10 bits of the first 2 bytes represent the animation tag code. Thus, after the animation tag code (0) corresponding to End is read, another 2 bytes are read, and the first 10 bits of those 2 bytes represent the animation tag code of the next node element. If the last 6 bits of the first 2 bytes are not "0x3f", the node element header is of the short type, and according to the short-type header organization structure the value of those 6 bits is the byte stream length (Length) of the node element content (TagBody). If the last 6 bits are "0x3f", the node element header is of the long type, and according to the long-type header organization structure the 4 bytes following the first 2 bytes are read, and the resulting value is the byte stream length (Length) of the node element content (TagBody). The byte stream length is the number of bytes occupied by the node element content; once it has been parsed, it is known how many bytes after the node element header belong to the current node element's content.
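A sketch of this header parsing follows, with the same little-endian, code-in-the-upper-10-bits packing assumed as in the encoding sketch earlier; helper names are illustrative.

```python
import struct

def read_tag_header(data: bytes, pos: int):
    """Returns (tag_code, body_length, offset_of_body)."""
    (head,) = struct.unpack_from('<H', data, pos)
    tag_code = head >> 6                 # first 10 bits: animation tag code
    length = head & 0x3F                 # last 6 bits: short-type length
    pos += 2
    if length == 0x3F:                   # long type: real length in the next 4 bytes
        (length,) = struct.unpack_from('<I', data, pos)
        pos += 4
    return tag_code, length, pos

def iter_tags(data: bytes, pos: int = 0):
    # Read node elements of one level until the End tag (code 0) is reached.
    while True:
        tag_code, length, body_pos = read_tag_header(data, pos)
        if tag_code == 0:
            return
        yield tag_code, data[body_pos:body_pos + length]
        pos = body_pos + length
```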
The animation tag code indicates which kind of animation information is recorded in the node element; different animation tag codes indicate that different animation information is recorded. When an animation tag code is parsed, it can be determined what animation information the content of the current node element represents. Since the node element contents of different kinds of animation information have different coding structures, obtaining the animation tag code in each node element is the prerequisite for accurately parsing each node element, in particular its content.
As mentioned earlier, PAG files employ 10 bits to store animation tag codes, so at most 1024 different animation tag codes can be represented. At present, only 37 animation tag codes are used in the PAG file: 0-29 are the animation tag codes supported by the vector export mode, and 45-51 are the 7 animation tag codes extended for the bitmap sequence frame export mode and the video sequence frame export mode. If later versions of the PAG file support more animation features, new animation tag codes can be extended to express the animation information corresponding to the newly added features.
Further, after an animation tag code has been parsed, the content of the current node element needs to be parsed according to the attribute structure table corresponding to the animation information represented by that animation tag code. There are two main cases: when no attribute type exists in the attribute structure table corresponding to the obtained animation tag code, the data in the node element content is stored in the form of a basic attribute data block; when an attribute type exists in that attribute structure table, the data in the node element content is stored in the form of a dynamic attribute data block.
S1404, when an attribute type exists in the attribute structure table corresponding to the animation tag code, parsing the attribute identification information corresponding to each attribute from the dynamic attribute data block corresponding to the animation tag code according to the attribute type corresponding to each attribute in the attribute structure table.
The attribute structure table defines the data structure of the animation information corresponding to the animation tag code. When an attribute type exists in the attribute structure table corresponding to a certain animation tag code, the node element content following the node element header corresponding to that animation tag code needs to be decoded according to the dynamic attribute data structure. The dynamic attribute data block consists mainly of two parts: an attribute identification area composed of the attribute identification information (AttributeFlag) corresponding to each attribute, and an attribute content area composed of the attribute content (AttributeContent) corresponding to each attribute. The attribute type (AttributeType) determines the number of bits occupied by the attribute identification information and the rule for reading the attribute content. The attribute types include the common attribute, the boolean attribute, the fixed attribute, the simple animation attribute, the discrete animation attribute, the multi-dimensional time slow motion animation attribute and the spatial slow motion animation attribute.
Taking mask information as an example, when the animation tag code "14" representing mask information is read, the content of the current node element needs to be parsed according to the dynamic attribute data structure, because an attribute type exists in the attribute structure table corresponding to mask information. In general, the attribute identification area of the dynamic attribute data block is parsed first; then, after a single byte alignment, the attribute content area is read starting from the next whole-byte position, and the attribute content corresponding to each attribute in the attribute structure table of the mask information is obtained in sequence.
The attribute identification information may be empty, or may include at most a content identification bit, an animation interval identification bit and a spatial slow motion parameter identification bit. The content identification bit (exist) indicates whether the attribute content corresponding to the attribute exists in the dynamic attribute data block or equals the default value; the animation interval identification bit (animatable) indicates whether the attribute value corresponding to the attribute includes animation interval characteristic data; and the spatial slow motion parameter identification bit (hasSpatial) indicates whether the attribute value corresponding to the attribute includes spatial slow motion parameters. The number of bits occupied by the attribute identification information is dynamic for different attribute types, ranging from 0 to 3 bits; how many bits a particular attribute occupies is determined by its attribute type. A common attribute occupies at most 1 bit, and only the content identification bit needs to be read during decoding, while a spatial slow motion animation attribute occupies at most 3 bits to store its attribute identification information. For example, when a spatial slow motion animation attribute contains no animation interval characteristic data and its attribute value equals the default value, only 1 bit is needed to represent the attribute identification information; the attribute identification information then includes only the content identification bit, whose value 0 indicates that the corresponding attribute content is the default value. During decoding, once the content identification bit is read as 0, the attribute identification information of this attribute has been read completely, and the next bit is not read as part of it. In the extreme case where every attribute in the attribute structure table contains no animation interval characteristic data and equals its default value, only as many bits as there are attributes are needed to store all the attribute identification information, each consisting of a single content identification bit with value 0.
The parsing rules of the attribute identification information corresponding to different attribute types are given below:
The attribute identification information of the common attribute only occupies one bit, and only one bit value is required to be read when the attribute identification information corresponding to the attribute is read.
The attribute identification information of the boolean attribute only occupies one bit, and only one bit is needed to be read when the attribute identification information corresponding to the attribute is read, and the value of the bit is the attribute value corresponding to the attribute.
The attribute identification information of the fixed attribute does not occupy any bits, so no attribute identification information needs to be read from the dynamic attribute data block for such an attribute.
The attribute identification information of the simple animation attribute, the discrete animation attribute and the multi-dimensional time slow motion animation attribute occupies 1-2 bits. If the 1st bit reads 1, the attribute value corresponding to the attribute is not the default value, and the 2nd bit must be read as well, giving attribute identification information "11" or "10". If the 1st bit reads 0, the attribute value is the default value, and the attribute identification information occupies only 1 bit, whose value is 0.
The attribute identification information of the spatial slow motion animation attribute occupies 1-3 bits. If the 1st bit reads 1, the attribute value corresponding to the attribute is not the default value, and the 2nd bit must be read as well. If the 2nd bit is 0, the attribute content corresponding to the attribute does not include animation interval characteristic data, the attribute identification information is "10", and no further data is read as attribute identification information for this attribute. If the 2nd bit is 1, the attribute content includes animation interval characteristic data, and the 3rd bit must be read: if the 3rd bit is 0, the attribute content does not include spatial slow motion parameters, and the attribute identification information is "110"; if the 3rd bit is 1, the attribute content includes spatial slow motion parameters, and the attribute identification information is "111". If the 1st bit reads 0, the attribute value is the default value, the attribute identification information occupies only 1 bit, whose value is 0, and no further data is read as attribute identification information for this attribute. A sketch of these rules is given below.
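The following sketch implements the 0-3 bit rules above. The bit reader consumes bits MSB-first within each byte, which is an assumption; the attribute type names mirror the attribute structure tables in this document, and 'FixedProperty' is an illustrative name for the fixed attribute, which these tables do not spell out.

```python
class BitReader:
    def __init__(self, data: bytes):
        self.data, self.pos = data, 0

    def read_bit(self) -> int:
        byte = self.data[self.pos >> 3]
        bit = (byte >> (7 - (self.pos & 7))) & 1   # MSB-first (assumed)
        self.pos += 1
        return bit

def read_attribute_flags(r: BitReader, attr_type: str) -> dict:
    if attr_type == 'FixedProperty':               # fixed attribute: no flag bits at all
        return {'exists': True, 'animatable': False, 'has_spatial': False}
    exists = bool(r.read_bit())                    # content identification bit
    animatable = has_spatial = False
    animated_types = ('SimpleProperty', 'DiscreteProperty',
                      'MultiDimensionProperty', 'SpatialProperty')
    if exists and attr_type in animated_types:
        animatable = bool(r.read_bit())            # animation interval identification bit
        if animatable and attr_type == 'SpatialProperty':
            has_spatial = bool(r.read_bit())       # spatial slow motion parameter bit
    # For Value (common attribute) the single bit is just the flag; for BitFlag
    # (boolean attribute) the single bit is itself the attribute value.
    return {'exists': exists, 'animatable': animatable, 'has_spatial': has_spatial}
```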
In one embodiment, parsing the attribute identification information corresponding to each attribute from the dynamic attribute data block corresponding to the animation tag code according to the attribute type corresponding to each attribute in the attribute structure table includes: querying the attribute structure table to obtain the attribute ordering corresponding to each attribute; determining the number of bits occupied by the attribute identification information corresponding to each attribute according to the attribute type corresponding to each attribute; reading data starting from the first bit of the dynamic attribute data block corresponding to the animation tag code; and determining the attribute identification information corresponding to each attribute from the sequentially read data according to the attribute ordering, based on the number of bits occupied by the attribute identification information corresponding to each attribute.
In practice, the number of attributes, the attribute ordering, the attribute type of each attribute and the default value of each attribute are all hard-coded in the parsing code. After an animation tag code has been parsed, the information of the attribute structure table corresponding to that animation tag code can be determined directly from the parsing code, including the attribute ordering and the attribute type, data type, number and default value of each attribute.
Specifically, when the attribute identification information corresponding to each attribute is parsed from the dynamic attribute data block, the number of bits the attribute identification information of each attribute may occupy is determined according to its attribute type; data is then read sequentially from the attribute identification area of the dynamic attribute data block according to the attribute ordering, and the read data is interpreted according to the coding rules of the attribute identification information for the different attribute types, so that the attribute identification information corresponding to each attribute can be determined accurately.
In one embodiment, when the attribute type does not exist in the attribute structure table corresponding to the animation tag code, the attribute values corresponding to the attributes are sequentially read from the basic attribute data block corresponding to the animation tag code according to the data type and the attribute sequence corresponding to the attributes in the attribute structure table.
Specifically, when no attribute type exists in the attribute structure table corresponding to the animation tag code, the node element content following the node element header corresponding to the animation tag code needs to be decoded according to the basic attribute data structure; that is, the node element content is parsed according to the data type and attribute ordering corresponding to each attribute in the attribute structure table, and the attribute value corresponding to each attribute is obtained in sequence.
For example, when the value of the animation tag code is "7", the current node element represents the size and color information of a solid color layer. No attribute type exists in the corresponding attribute structure table, which contains only three fields: solidColor, width and height. Here solidColor is of the Color data type, which occupies three bytes representing the red, green and blue values in sequence, while the data type of width and height is EncodedInt32. When the node element content of the node element containing this animation tag code is decoded, 24 bits are read first to obtain the value of solidColor, and then the values of width and height are read in sequence according to the coding rule of EncodedInt32, giving each attribute value of the animation information represented by the current node element. A sketch follows.
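The sketch below decodes this basic attribute data block. It assumes the same LEB128-style variable-length encoding for EncodedInt32 as the earlier sketches, and omits sign handling for brevity.

```python
import io

def read_encoded_u32(stream) -> int:
    # 7 payload bits per byte; the high bit flags a continuation byte (assumed).
    result = shift = 0
    while True:
        b = stream.read(1)[0]
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result
        shift += 7

def read_solid_color(stream):
    red, green, blue = stream.read(3)     # Color: 3 bytes, red/green/blue in sequence
    width = read_encoded_u32(stream)      # EncodedInt32 (sign handling omitted)
    height = read_encoded_u32(stream)     # EncodedInt32
    return (red, green, blue), width, height

# Example: a red 100x50 solid color layer.
print(read_solid_color(io.BytesIO(bytes([255, 0, 0, 100, 50]))))
# -> ((255, 0, 0), 100, 50)
```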
S1406, parsing the attribute content corresponding to each attribute from the dynamic attribute data block according to the attribute identification information corresponding to each attribute.
The attribute identification information identifies the state of each attribute: for example, whether the attribute value is the default value or exists in the attribute content area of the dynamic attribute data block; if it exists there, whether the attribute content includes animation interval characteristic data; and whether the attribute content includes spatial slow motion parameters. Because attribute values in different states are encoded differently, the state of each attribute determined from its attribute identification information dictates what attribute content is parsed from the dynamic attribute data block and how the corresponding attribute value is parsed.
Specifically, after the attribute identification information corresponding to each attribute is obtained, the attribute content corresponding to each attribute can be parsed from the dynamic attribute data block using the parsing mode corresponding to the state identified by the attribute identification information. The parsing modes for each attribute type, and for the different values of its attribute identification information, are described in detail below.
In one embodiment, the attribute type is a common attribute, and the attribute identification information includes only content identification bits; analyzing attribute contents corresponding to each attribute from dynamic attribute data blocks according to attribute identification information corresponding to each attribute, including: if the value of the content identification bit indicates that the attribute content corresponding to the attribute exists in the dynamic attribute data block, the attribute content corresponding to the attribute is read from the dynamic attribute data block according to the data type corresponding to the attribute; and if the value of the content identification bit indicates that the attribute content corresponding to the attribute does not exist in the dynamic attribute data block, inquiring a default value corresponding to the attribute from the attribute structure table.
When the attribute type is a common attribute, the attribute identification information includes only the content identification bit, that is, it occupies only one bit. If the value of the content identification bit indicates that the attribute content corresponding to the attribute exists in the dynamic attribute data block (for example, the content identification bit is 1), i.e. the attribute content is not the default value, the attribute content needs to be parsed from the dynamic attribute data block according to the data type corresponding to the attribute. If the value of the content identification bit indicates that the attribute content corresponding to the attribute does not exist in the dynamic attribute data block (for example, the content identification bit is 0), i.e. the attribute content is the default value, the default value corresponding to the attribute is used directly, and nothing needs to be parsed from the dynamic attribute data block.
In one embodiment, the attribute type is a fixed attribute and the attribute identification information is null; analyzing attribute contents corresponding to each attribute from dynamic attribute data blocks according to attribute identification information corresponding to each attribute, including: and directly reading attribute contents corresponding to the attributes from the dynamic attribute data block according to the data types corresponding to the attributes.
For the fixed attribute type, the attribute identification information is empty and occupies no bits. Since the attribute value of a fixed attribute is always stored in the attribute content area of the dynamic attribute data block, the attribute content corresponding to the attribute can be read from the dynamic attribute data block directly according to the data type corresponding to the attribute.
In one embodiment, the attribute type is a boolean attribute, and the attribute identification information includes only content identification bits; analyzing attribute contents corresponding to each attribute from dynamic attribute data blocks according to attribute identification information corresponding to each attribute, including: and directly taking the value of the content identification bit as attribute content corresponding to the attribute.
For the boolean attribute type, the attribute occupies only one bit, and the value of that bit is the attribute value itself, so the value of the content identification bit can be used directly as the attribute value corresponding to the attribute. When the content identification bit is 1, the attribute value is true; when it is 0, the attribute value is false.
In one embodiment, the attribute type is a simple animation attribute, a discrete animation attribute, a multi-dimensional time-delayed animation attribute, or a space-delayed animation attribute, and the attribute identification information at least comprises a content identification bit; analyzing attribute contents corresponding to each attribute from dynamic attribute data blocks according to attribute identification information corresponding to each attribute, including: if the value of the content identification bit indicates that attribute content corresponding to the attribute exists in the dynamic attribute data block, the attribute content corresponding to the attribute is read from the dynamic attribute data block; and if the value of the content identification bit indicates that the attribute content corresponding to the attribute does not exist in the dynamic attribute data block, inquiring a default value corresponding to the attribute according to the attribute structure table.
When the attribute type corresponding to the attribute is a simple animation attribute (SimpleProperty), a discrete animation attribute (DiscreteProperty), a multi-dimensional time slow motion animation attribute (MultiDimensionProperty) or a space slow motion animation attribute (SpatialProperty), the attribute identification information corresponding to the attribute at least comprises a content identification bit. Similarly, when the value of the content identification bit is parsed to indicate that the attribute value is not a default value, that is, indicates that the attribute content corresponding to the attribute exists in the dynamic attribute data block, for example, the value of the content identification bit may be 1, then the attribute content corresponding to the attribute needs to be read from the dynamic attribute data block. When the value of the content identification bit is analyzed to indicate that the attribute value is a default value, that is, indicates that the attribute content corresponding to the attribute does not exist in the dynamic attribute data block, for example, the value of the content identification bit may be 0, then only the default value corresponding to the attribute is required to be used as the corresponding attribute content.
In one embodiment, if the value of the content identification bit indicates that the attribute content corresponding to the attribute exists in the dynamic attribute data block, reading the attribute content corresponding to the attribute from the dynamic attribute data block includes: when the value of the content identification bit represents that attribute content corresponding to the attribute exists in the dynamic attribute data block, reading the value of the next bit of the content identification bit as the value of the animation interval identification bit; if the value of the animation interval identification bit indicates that the attribute content comprises animation interval characteristic data, analyzing the animation interval characteristic data corresponding to the attribute from the dynamic attribute data block according to a data storage structure corresponding to the animation interval characteristic data; if the value of the animation interval identification bit indicates that the attribute content does not comprise the animation interval characteristic data, the attribute content corresponding to the attribute is directly analyzed from the dynamic attribute data block according to the data type of the attribute.
Specifically, when the value of the content identification bit indicates that the attribute content corresponding to the attribute exists in the dynamic attribute data block (for example, the content identification bit is 1), the attribute identification information of the attribute further includes at least an animation interval identification bit, that is, it occupies at least 2 bits. If the bit following the content identification bit reads 1, i.e. the 2nd bit of the attribute identification information is 1, the attribute content includes animation interval characteristic data, and the attribute content in the dynamic attribute data block is parsed according to the data storage structure corresponding to the animation interval characteristic data. If that bit reads 0, i.e. the attribute identification information is "10", the attribute content does not include animation interval characteristic data, and the attribute content in the dynamic attribute data block is parsed directly according to the data type corresponding to the attribute.
In one embodiment, the fields included in the data storage structure corresponding to the animation interval characteristic data include the number of animation intervals, the interpolator type of each animation interval, and the start and end times of each animation interval, and further include the start value, end value, time slow motion parameters and spatial slow motion parameters of the attribute for each animation interval.
The data storage structure corresponding to the animation interval characteristic data is as explained in the animation data encoding method. It stores the animation data of one or more animation intervals. When the animation characteristic data of multiple animation intervals is stored, one kind of data of all the animation intervals is stored first, followed by the next kind, rather than storing all the data of one animation interval before all the data of the next. The advantage of storing similar data together is that it can be compressed together, reducing the storage space.
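The following sketch contrasts the two layouts: storing each kind of field contiguously (structure-of-arrays) groups similar values so they compress better. The field names follow the interval fields listed above; the concrete types are illustrative.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Interval:
    interpolator: int
    start_time: int
    end_time: int
    start_value: float
    end_value: float

def column_major(intervals: List[Interval]) -> list:
    # All interpolator types first, then all start/end times, then all start
    # values, then all end values, instead of interval-by-interval storage.
    out = [iv.interpolator for iv in intervals]
    out += [t for iv in intervals for t in (iv.start_time, iv.end_time)]
    out += [iv.start_value for iv in intervals]
    out += [iv.end_value for iv in intervals]
    return out
```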
In one embodiment, the attribute type is a spatial slow motion animation attribute, and if the value of the animation interval identification bit indicates that the attribute content includes animation interval characteristic data, parsing the animation interval characteristic data corresponding to the attribute from the dynamic attribute data block according to the data storage structure corresponding to the animation interval characteristic data includes: when the value of the animation interval identification bit indicates that the attribute content includes animation interval characteristic data, reading the value of the bit following the animation interval identification bit as the value of the spatial slow motion parameter identification bit; if the value of the spatial slow motion parameter identification bit indicates that the animation interval characteristic data includes spatial slow motion parameters, parsing the animation interval characteristic data including the spatial slow motion parameters corresponding to the attribute from the dynamic attribute data block according to the data storage structure corresponding to the animation interval characteristic data; and if the value of the spatial slow motion parameter identification bit indicates that the animation interval characteristic data does not include spatial slow motion parameters, parsing the animation interval characteristic data without spatial slow motion parameters corresponding to the attribute from the dynamic attribute data block according to that data storage structure.
The spatial slow motion parameter is a parameter used to describe complex dynamic effects. Only when the attribute type corresponding to an attribute is the spatial slow motion animation attribute may the attribute identification information occupy 3 bits, and the spatial slow motion parameter identification bit is present only if the content identification bit and the animation interval identification bit are both 1. For the other attribute types there is no need to check for a third identification bit, since they never store spatial slow motion parameters.
Specifically, when the value of the animation interval identification bit indicates that the attribute content includes animation interval characteristic data (for example, the animation interval identification bit is 1), the attribute identification information of the attribute further includes the spatial slow motion parameter identification bit, that is, it occupies 3 bits. If the bit following the animation interval identification bit reads 1, i.e. the attribute identification information is "111", the animation interval characteristic data including the spatial slow motion parameters is parsed from the dynamic attribute data block for this attribute; the spatial slow motion parameters have their own coding structure and must be parsed accordingly. If that bit reads 0, i.e. the attribute identification information is "110", the animation interval characteristic data parsed from the dynamic attribute data block for this attribute does not include spatial slow motion parameters.
It should be noted that, when parsing the attribute content according to the data storage structure of the animation interval characteristic data, it is necessary to determine, for each animation interval, whether the interpolator type field exists according to the attribute type (for the discrete animation attribute the interpolator type field does not exist, because the interpolator type defaults to Hold), to determine whether the animation interval characteristic data includes time slow motion parameters according to the parsed interpolator type (time slow motion parameters exist only when the interpolator type value equals Bezier), and to determine whether spatial slow motion parameters exist according to the attribute identification information, and so on. That is, some fields in the data storage structure of the animation interval characteristic data do not necessarily have values; their presence must be determined step by step in combination with the attribute type, the attribute identification information, the interpolator type, the data type of the attribute value for the animation interval, and so on. Only then can the storage structure of the whole attribute content be clarified step by step and the attribute content corresponding to the attribute be parsed from the attribute content area. The attribute content corresponding to each attribute in the attribute structure table of the animation tag code is parsed following this approach.
In the above animation data decoding method, an animation tag code identifies a group of attributes, and the attribute structure table describes the data structure of that group of attributes. When the types or number of the attribute values of the group of attributes are uncertain, or when the attribute values contain a large amount of redundancy, the data structure of the dynamic attribute data block avoids the file bloat that would result from introducing a large number of identifier fields describing attribute types or numbers: the identification information is compressed as much as possible, greatly reducing the size of the target animation file. Specifically, during decoding, when an animation tag code is read and an attribute type exists in the attribute structure table corresponding to that animation tag code, the attribute values of the group of attributes identified by the animation tag code were encoded in the form of a dynamic attribute data block; the attribute identification information corresponding to each attribute can be parsed from the dynamic attribute data block in combination with the attribute type of each attribute, and the attribute content corresponding to each attribute can then be parsed from the dynamic attribute data block based on the attribute identification information, thereby decoding the animation file.
In one embodiment, obtaining an animated tag code includes: analyzing the target animation file to obtain a binary sequence; according to the target file structure of the target animation file, sequentially reading file header coding information and node element coding data of the target animation file from the binary sequence; decoding the file header coding information according to the sequence of the fields included in the file header organization structure of the target animation file and the data types of the fields to obtain file header information; decoding the node element coded data according to the node element organization structure of the target animation file to sequentially obtain animation tag codes, data block lengths and data blocks corresponding to the animation tag codes; the data blocks include dynamic attribute data blocks and basic attribute data blocks.
The target animation file is a PAG file, and the file organization structure of the PAG file includes the file header (FileHeader) and node elements (Tag). When the data of the whole animation file was encoded, the file header information was encoded according to the file header organization structure to obtain the file header encoded information, and the animation data corresponding to each animation tag code was encoded in sequence according to the node element organization structure, which includes the animation tag code, the data block length and the data block, to obtain the node element encoded data; the data blocks include dynamic attribute data blocks and basic attribute data blocks. Therefore, after the binary sequence of the target animation file is obtained by parsing, the file header encoded information in the binary sequence is decoded according to the file header organization structure to obtain the file header information, and the node element encoded data in the binary sequence is decoded according to the node element organization structure to obtain the content of each node element, namely the animation tag code, the data block length and the data block corresponding to each animation tag code.
In one embodiment, the target animation file is obtained by exporting the animation engineering file according to a vector export mode; analyzing the target animation file to obtain a binary sequence, wherein the method comprises the following steps: acquiring a file data structure corresponding to a vector derivation mode; and analyzing the target animation file according to the file data structure to obtain a binary sequence representing the animation vector data.
Fig. 15 shows the file data structure corresponding to the vector export mode in one embodiment. Referring to fig. 15, the entire target animation file is described by VectorComposition, which includes CompositionAttributes and Layers. CompositionAttributes denotes the animation's basic attribute data, including the animation's width and height, the duration of the whole animation, the frame rate (frameRate), the background color (backgroundColor), and so on. Layer denotes the animation layer data, including the data corresponding to each layer; an animation engineering file may use different types of layers, including but not limited to a virtual object layer (NullObjectLayer), a solid color layer (SolidLayer), a text layer (TextLayer), a shape layer (ShapeLayer), an image layer (ImageLayer), a pre-compose layer (PreComposeLayer), and so on. The binary sequence obtained by parsing the target animation file is therefore animation vector data describing the layer information according to this layer structure. Taking the solid color layer as an example, its description information includes the width, height and color of the layer, and the layer attributes (LayerAttributes). The layer attributes include basic attributes and animation attribute groups: the basic attributes include the layer duration, the layer start time (startTime, the frame at which the layer starts being used during playback), the stretch parameter (stretch), and so on; the animation attribute groups include transform, mask, trackMatter, layerStyle, effect and content. A layer is derived from any one or a combination of these 6 animation attribute groups. A condensed sketch of this nesting follows.
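The sketch below models the nesting in fig. 15; the type and field names are illustrative, not the exact file schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CompositionAttributes:
    width: int = 0
    height: int = 0
    duration: int = 0
    frame_rate: float = 24.0
    background_color: tuple = (255, 255, 255)

@dataclass
class Layer:
    layer_type: str = 'SolidLayer'     # NullObjectLayer, TextLayer, ShapeLayer, ...
    duration: int = 0
    start_time: int = 0                # frame at which the layer starts being used
    stretch: float = 1.0
    # the 6 animation attribute groups; any subset may be present
    transform: Optional[object] = None
    masks: list = field(default_factory=list)
    track_mattes: list = field(default_factory=list)
    layer_styles: list = field(default_factory=list)
    effects: list = field(default_factory=list)
    contents: list = field(default_factory=list)

@dataclass
class VectorComposition:
    attributes: CompositionAttributes = field(default_factory=CompositionAttributes)
    layers: List[Layer] = field(default_factory=list)
```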
In one embodiment, the target animation file is obtained by deriving the animation engineering file according to a bitmap sequence frame derivation mode, and analyzing the target animation file to obtain a binary sequence, which comprises: decompressing the target animation file according to a picture decoding mode to obtain picture binary data corresponding to the target animation file; the picture binary data includes pixel data of a key bitmap image and pixel data of a difference pixel region of a non-key bitmap image in the animation engineering file.
Specifically, the target animation file exported from the animation engineering file in the bitmap sequence frame export mode is in fact a series of encoded pictures obtained by compression-encoding, in a picture encoding mode, the key bitmap images and the difference pixel areas of the non-key bitmap images. The specific steps are: acquiring the bitmap image sequence corresponding to the animation engineering file; comparing each bitmap image in the bitmap image sequence with the corresponding key bitmap image to obtain the difference pixel area of the bitmap image; when the bitmap image is a non-key bitmap image, encoding the difference pixel area in the picture encoding mode to obtain the encoded picture corresponding to the bitmap image; when the bitmap image is a key bitmap image, encoding the whole bitmap image directly in the picture encoding mode to obtain the encoded picture corresponding to the bitmap image; and generating the target animation file corresponding to the animation engineering file from the encoded pictures corresponding to the bitmap images in the bitmap image sequence.
Therefore, when the target animation file is parsed, it needs to be decoded in the corresponding picture decoding mode to obtain the picture binary sequence corresponding to the target animation file, where the picture binary data includes the pixel data of the key bitmap images and the pixel data of the difference pixel areas of the non-key bitmap images in the animation engineering file. A sketch of the difference-area idea follows.
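The following numpy sketch finds the region to re-encode for a non-key frame. Using a single bounding box around all changed pixels is an illustrative simplification of the difference pixel area.

```python
import numpy as np

def difference_region(key: np.ndarray, frame: np.ndarray):
    """Returns (x, y, region) for the smallest box covering the pixels that
    differ from the key bitmap, or None if the frame equals the key bitmap."""
    changed = np.any(frame != key, axis=2)          # (H, W) mask of differing pixels
    if not changed.any():
        return None
    ys, xs = np.nonzero(changed)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return int(x0), int(y0), frame[y0:y1, x0:x1]    # position + difference pixels

# Key bitmaps are picture-encoded whole; for other frames only the returned
# region (plus its position) needs to be picture-encoded.
```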
In one embodiment, the target animation file is obtained by deriving the animation engineering file according to a video sequence frame derivation mode, and analyzing the target animation file to obtain a binary sequence, which includes: decompressing the target animation file according to a video decoding mode to obtain picture binary data corresponding to the target animation file; the picture binary data is pixel data of a synthesized bitmap obtained by synthesizing a color channel bitmap and a transparency channel bitmap of a bitmap image included in the animation engineering file.
Specifically, the processing procedure of the target animation file obtained by exporting the animation engineering file according to the video sequence frame export mode is specifically as follows: acquiring a bitmap image sequence corresponding to the animation engineering file; dividing each bitmap image in the bitmap image sequence into a color channel bitmap and a transparency channel bitmap; synthesizing a color channel bitmap and a transparency channel bitmap to obtain a synthesized bitmap; coding the synthesized bitmap according to a video coding mode to obtain a coded picture corresponding to the bitmap image; and generating a target animation file corresponding to the animation engineering file according to the coding picture corresponding to each bitmap image in the bitmap image sequence.
Therefore, when the target animation file is parsed, it needs to be decoded in the corresponding video decoding mode to obtain the picture binary data of the composite bitmaps of the target animation file.
In one embodiment, as shown in fig. 16, there is provided an animation data encoding apparatus 1600 comprising an animation data acquisition module 1602, an attribute identification information determination module 1604, an attribute content encoding module 1606, and a data block generation module 1608, wherein:
an animation data acquisition module 1602, configured to obtain animation data corresponding to each animation tag code from an animation engineering file;
an attribute identification information determination module 1604, configured to determine attribute identification information corresponding to each attribute when an attribute type exists in the attribute structure table corresponding to the animation tag code;
an attribute content encoding module 1606, configured to encode the attribute value corresponding to each attribute in the animation data according to the attribute identification information to obtain the attribute content corresponding to each attribute;
and a data block generation module 1608, configured to sequentially store the attribute identification information and the attribute content corresponding to each attribute according to the attribute ordering corresponding to each attribute in the attribute structure table to obtain the dynamic attribute data block corresponding to the animation tag code.
In one embodiment, the animation data encoding device 1600 further includes a basic attribute data block generating module, configured to, when no attribute type exists in the attribute structure table corresponding to the animation tag code, sequentially encode the attribute values corresponding to each attribute in the animation data according to the data type and the attribute sequence corresponding to each attribute in the attribute structure table, so as to obtain the basic attribute data block corresponding to the animation tag code.
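As a rough illustration of the basic attribute data block, the sketch below writes attribute values back to back in table order, typed only by the attribute structure table itself, so no per-attribute identifiers appear in the stream. The attribute names, data types, and struct formats are assumptions, not the patent's layout.

```python
# Sketch: fixed-order encoding of a basic attribute data block.

import struct

FORMATS = {"uint32": "<I", "float": "<f", "bool": "<?"}  # assumed type encodings

def encode_basic_block(structure_table, values):
    """structure_table: ordered list of (name, data_type); values: name -> value."""
    out = bytearray()
    for name, data_type in structure_table:
        out += struct.pack(FORMATS[data_type], values[name])
    return bytes(out)

table = [("width", "uint32"), ("height", "uint32"), ("frameRate", "float")]
block = encode_basic_block(table, {"width": 720, "height": 1280, "frameRate": 24.0})
print(len(block))  # 12 bytes, with no identifier overhead
```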
In one embodiment, the attribute type corresponding to the attribute is a common attribute or a boolean attribute, and the attribute identification information corresponding to the attribute only includes a content identification bit; the attribute identification information determining module 1604 is further configured to encode the content identification bit into a value indicating that the attribute content corresponding to the attribute exists in the dynamic attribute data block if the attribute value corresponding to the attribute in the animation data is not the default value; if the attribute value corresponding to the attribute in the animation data is a default value, the content identification bit is encoded into a value indicating that the attribute content corresponding to the attribute does not exist in the dynamic attribute data block.
In one embodiment, the attribute type corresponding to the attribute is a fixed attribute, and the attribute identification information is null; the attribute content encoding module 1606 is further configured to encode an attribute value corresponding to the attribute in the animation data directly according to the data type corresponding to the attribute, so as to obtain attribute content corresponding to the attribute.
In one embodiment, the attribute type corresponding to the attribute is a simple animation attribute, a discrete animation attribute, a multi-dimensional time easing animation attribute, or a spatial easing animation attribute, and the attribute identification information at least includes a content identification bit; the attribute identification information determination module 1604 is further configured to encode the content identification bit into a value indicating that the attribute content corresponding to the attribute exists in the dynamic attribute data block if the attribute value corresponding to the attribute in the animation data is not the default value; and if the attribute value corresponding to the attribute in the animation data is the default value, encode the content identification bit into a value indicating that the attribute content corresponding to the attribute does not exist in the dynamic attribute data block.
In one embodiment, the attribute identification information further includes an animation interval identification bit; when the value of the content identification bit indicates that the attribute content corresponding to the attribute exists in the dynamic attribute data block, the attribute identification information determining module 1604 is further configured to encode the animation interval identification bit into a value indicating that the attribute content corresponding to the attribute stored in the dynamic attribute data block includes animation interval characteristic data if the attribute value includes animation interval characteristic data; if the attribute value does not include the animation interval characteristic data, the animation interval identification bit is encoded into a value indicating that the attribute content corresponding to the attribute stored in the dynamic attribute data block does not include the animation interval characteristic data.
In one embodiment, the attribute content encoding module 1606 is further configured to: if the value of the animation interval identification bit indicates that the attribute content corresponding to the attribute stored in the dynamic attribute data block includes the animation interval characteristic data, encode the animation interval characteristic data corresponding to the attribute according to the data storage structure corresponding to the animation interval characteristic data to obtain the attribute content corresponding to the attribute; if the value of the animation interval identification bit indicates that the attribute content corresponding to the attribute stored in the dynamic attribute data block does not include the animation interval characteristic data, directly encode the attribute value corresponding to the attribute in the animation data according to the data type corresponding to the attribute to obtain the attribute content corresponding to the attribute.
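The flag cascade on the encoding side can be sketched as follows: one content identification bit per attribute and, only when content exists, one animation interval identification bit. The most-significant-bit-first packing and the helper names are illustrative assumptions.

```python
# Sketch: writing the content and animation interval identification bits.

class BitWriter:
    def __init__(self):
        self.bits = []

    def write_bit(self, bit):
        self.bits.append(1 if bit else 0)

    def to_bytes(self):
        # Pad the final byte with zeros, most significant bit first.
        padded = self.bits + [0] * (-len(self.bits) % 8)
        return bytes(
            sum(b << (7 - i) for i, b in enumerate(padded[j:j + 8]))
            for j in range(0, len(padded), 8)
        )

def encode_flags(writer, value, default, has_intervals):
    content_exists = value != default
    writer.write_bit(content_exists)        # content identification bit
    if content_exists:
        writer.write_bit(has_intervals)     # animation interval identification bit
    return content_exists

w = BitWriter()
encode_flags(w, value=0.5, default=1.0, has_intervals=True)   # writes bits 1, 1
encode_flags(w, value=1.0, default=1.0, has_intervals=False)  # writes bit 0 only
print(w.to_bytes())  # b'\xc0'
```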
In one embodiment, the attribute type corresponding to the attribute is a spatial easing animation attribute, and the attribute identification information further includes a spatial easing parameter identification bit; when the value of the animation interval identification bit indicates that the attribute content corresponding to the attribute stored in the dynamic attribute data block includes animation interval characteristic data, the attribute identification information determination module 1604 is further configured to: if the animation interval characteristic data includes the spatial easing parameter, encode the spatial easing parameter identification bit into a value indicating that the attribute content corresponding to the attribute stored in the dynamic attribute data block includes the spatial easing parameter; and if the animation interval characteristic data does not include the spatial easing parameter, encode the spatial easing parameter identification bit into a value indicating that the attribute content corresponding to the attribute stored in the dynamic attribute data block does not include the spatial easing parameter.
In one embodiment, the attribute content encoding module 1606 is further configured to: when the value of the spatial easing parameter identification bit indicates that the attribute content corresponding to the attribute stored in the dynamic attribute data block includes the spatial easing parameter, encode the animation interval characteristic data corresponding to the attribute according to the data storage structure corresponding to the animation interval characteristic data to obtain attribute content that corresponds to the attribute and includes the spatial easing parameter; otherwise, obtain attribute content that corresponds to the attribute and does not include the spatial easing parameter.
In one embodiment, the fields included in the data storage structure corresponding to the animation interval characteristic data include the number of animation intervals, the interpolator type of each animation interval, and the start time and end time of each animation interval, and further include a start value, an end value, a time easing parameter, and a spatial easing parameter of the attribute for each animation interval.
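As a rough model of the storage structure just listed, the sketch below groups the per-interval fields into a record. Only the field set comes from the description above; the field names, types, and interpolator coding are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional, Tuple

@dataclass
class AnimationInterval:
    interpolator_type: int                  # e.g. 0 = hold, 1 = linear, 2 = bezier (assumed coding)
    start_time: int                         # interval start, in frames
    end_time: int                           # interval end, in frames
    start_value: Any                        # value of the attribute at start_time
    end_value: Any                          # value of the attribute at end_time
    time_easing: Optional[Tuple[float, ...]] = None     # time easing parameters, if any
    spatial_easing: Optional[Tuple[float, ...]] = None  # spatial easing parameters, if any

@dataclass
class AnimationIntervalBlock:
    intervals: List[AnimationInterval] = field(default_factory=list)

    @property
    def interval_count(self) -> int:
        # Written first in the stream so a decoder knows how many intervals follow.
        return len(self.intervals)
```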
In one embodiment, the animation tag code is a bitmap composition tag code; the animation data acquisition module 1602 is further configured to play the animation engineering file; sequentially capture screenshots of the playing pictures corresponding to the animation engineering file to obtain the bitmap image sequence corresponding to the animation engineering file; and process the bitmap image sequence in the bitmap sequence frame export mode to obtain the picture binary data corresponding to the bitmap composition tag code.
In one embodiment, the animation tag code is a video composition tag code; the animation data acquisition module 1602 is further configured to play the animation engineering file; sequentially capture screenshots of the playing pictures corresponding to the animation engineering file to obtain the bitmap image sequence corresponding to the animation engineering file; and process the bitmap image sequence in the video sequence frame export mode to obtain the picture binary data corresponding to the video composition tag code.
In one embodiment, the animation data encoding device 1600 further includes a file header information encoding module, a node element encoding module, and a target animation file generation module. The file header information encoding module is configured to sequentially encode the file header information according to the file header organization structure to obtain file header coding information; the node element encoding module is configured to sequentially encode the animation tag code, the data block length, and the data block according to the node element organization structure to obtain node element coding data, where the data blocks include dynamic attribute data blocks and basic attribute data blocks; and the target animation file generation module is configured to organize the file header coding information and the node element coding data according to the target file structure to obtain the target animation file.
In the above animation data encoding device, an animation tag code identifies a group of attributes, and the attribute structure table describes the data structure of the group of attributes identified by that tag code. When the kind or number of the attribute values in the group cannot be determined in advance, or when the attribute values contain heavy redundancy, a large number of identifier fields describing the kind or number of the attributes would otherwise have to be introduced, inflating the animation file; the dynamic attribute data block is introduced precisely to compress these identifiers as much as possible and thereby greatly reduce the size of the target animation file. Specifically, after the animation engineering file is obtained, the group of attributes included in the attribute structure table corresponding to the animation tag code is obtained from it, and the attribute identification information in the dynamic attribute data block describes the attribute state of each attribute. When attribute types exist in the attribute structure table, the attribute identification information corresponding to each attribute can be determined from the animation data, the attribute value corresponding to each attribute is then dynamically encoded according to that identification information to obtain the corresponding attribute content, and the attribute identification information and attribute content of each attribute in the attribute structure table are combined to obtain the dynamic attribute data block corresponding to the animation tag code, which significantly reduces the space occupied by the animation file.
In one embodiment, as shown in fig. 17, there is provided an animation data decoding apparatus 1700, which includes an acquisition module 1702, an attribute identification information parsing module 1704, and an attribute content parsing module 1706, wherein:
an obtaining module 1702 configured to obtain an animation tag code;
The attribute identification information analysis module 1704 is configured to analyze, when an attribute type exists in the attribute structure table corresponding to the animation tag code, attribute identification information corresponding to each attribute in the dynamic attribute data block corresponding to the animation tag code according to the attribute type corresponding to each attribute in the attribute structure table;
and the attribute content analysis module 1706 is configured to analyze attribute contents corresponding to each attribute from the dynamic attribute data block according to the attribute identification information corresponding to each attribute.
In one embodiment, the animation data decoding device 1700 further includes a basic attribute data block parsing module, configured to, when no attribute type exists in the attribute structure table corresponding to the animation tag code, sequentially read the attribute values corresponding to the attributes from the basic attribute data block corresponding to the animation tag code according to the data type and the attribute sequence corresponding to the attributes in the attribute structure table.
In one embodiment, the attribute identification information parsing module 1704 is further configured to query the attribute structure table to obtain the attribute ordering corresponding to each attribute; determine the number of bits occupied by the attribute identification information corresponding to each attribute according to the attribute type corresponding to each attribute; read data starting from the first bit in the dynamic attribute data block corresponding to the animation tag code; and determine, based on the number of bits occupied by the attribute identification information corresponding to each attribute, the attribute identification information corresponding to each attribute from the sequentially read data according to the attribute ordering.
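Mirroring the encoder sketch earlier, the decoding pass described here can be sketched as follows: the number of identification bits consumed per attribute follows from its attribute type, and bits are read in table order starting from the first bit of the dynamic attribute data block. The attribute-type names are assumptions; the rule that the interval bit is only present when the content bit is set follows the embodiments below.

```python
# Sketch: reading attribute identification bits in attribute order.

class BitReader:
    def __init__(self, data):
        self.data, self.pos = data, 0

    def read_bit(self):
        byte, offset = divmod(self.pos, 8)
        self.pos += 1
        return (self.data[byte] >> (7 - offset)) & 1

def read_flags(reader, structure_table):
    flags = {}
    for name, attr_type in structure_table:
        if attr_type == "fixed":
            flags[name] = []                   # fixed attributes store no flag bits
        elif attr_type in ("common", "bool"):
            flags[name] = [reader.read_bit()]  # content identification bit only
        else:                                  # animatable attribute types
            content = reader.read_bit()
            flags[name] = [content] + ([reader.read_bit()] if content else [])
    return flags

r = BitReader(b"\xc0")  # the byte produced by the encoder sketch above
print(read_flags(r, [("opacity", "simple_animation"), ("visible", "bool")]))
# {'opacity': [1, 1], 'visible': [0]}
```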
In one embodiment, the attribute type is a common attribute, and the attribute identification information includes only content identification bits; the attribute content analysis module 1706 is further configured to, if the value of the content identifier bit indicates that the attribute content corresponding to the attribute exists in the dynamic attribute data block, read the attribute content corresponding to the attribute from the dynamic attribute data block according to the data type corresponding to the attribute; and if the value of the content identification bit indicates that the attribute content corresponding to the attribute does not exist in the dynamic attribute data block, inquiring a default value corresponding to the attribute from the attribute structure table.
In one embodiment, the attribute type is a fixed attribute and the attribute identification information is null; the attribute content parsing module 1706 is further configured to directly read attribute content corresponding to the attribute from the dynamic attribute data block according to the data type corresponding to the attribute.
In one embodiment, the attribute type is a boolean attribute, and the attribute identification information includes only content identification bits; the attribute content parsing module 1706 is further configured to directly use the value of the content identification bit as attribute content corresponding to the attribute.
In one embodiment, the attribute type is a simple animation attribute, a discrete animation attribute, a multi-dimensional time easing animation attribute, or a spatial easing animation attribute, and the attribute identification information at least includes a content identification bit; the attribute content analysis module 1706 is further configured to read the attribute content corresponding to the attribute from the dynamic attribute data block if the value of the content identification bit indicates that the attribute content corresponding to the attribute exists in the dynamic attribute data block; and if the value of the content identification bit indicates that the attribute content corresponding to the attribute does not exist in the dynamic attribute data block, query the default value corresponding to the attribute according to the attribute structure table.
In one embodiment, the attribute content parsing module 1706 is further configured to, when the value of the content identification bit indicates that the attribute content corresponding to the attribute exists in the dynamic attribute data block, read the value of the next bit of the content identification bit as the value of the animation interval identification bit; if the value of the animation interval identification bit indicates that the attribute content comprises animation interval characteristic data, analyzing the animation interval characteristic data corresponding to the attribute from the dynamic attribute data block according to a data storage structure corresponding to the animation interval characteristic data; if the value of the animation interval identification bit indicates that the attribute content does not comprise the animation interval characteristic data, the attribute content corresponding to the attribute is directly analyzed from the dynamic attribute data block according to the data type of the attribute.
In one embodiment, the attribute type is a spatial easing animation attribute, and the attribute content analysis module 1706 is further configured to: when the value of the animation interval identification bit indicates that the attribute content includes animation interval characteristic data, read the value of the next bit after the animation interval identification bit as the value of the spatial easing parameter identification bit; if the value of the spatial easing parameter identification bit indicates that the animation interval characteristic data includes the spatial easing parameter, parse the animation interval characteristic data that corresponds to the attribute and includes the spatial easing parameter from the dynamic attribute data block according to the data storage structure corresponding to the animation interval characteristic data; and if the value of the spatial easing parameter identification bit indicates that the animation interval characteristic data does not include the spatial easing parameter, parse the animation interval characteristic data that corresponds to the attribute and does not include the spatial easing parameter from the dynamic attribute data block according to the data storage structure corresponding to the animation interval characteristic data.
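Putting the preceding embodiments together, the decode decision tree for a single animatable attribute runs: content identification bit, then animation interval identification bit, then, for spatial easing animation attributes only, the spatial easing parameter identification bit. In this sketch the reader interface matches the BitReader above, and read_value and read_interval_data stand in for type-specific readers; all of these names are assumptions.

```python
# Sketch: flag-driven decoding of one animatable attribute.

class _Bits:
    """Tiny stand-in bit source for the demo."""
    def __init__(self, bits):
        self.bits = list(bits)

    def read_bit(self):
        return self.bits.pop(0)

def decode_attribute(reader, attr_type, default, read_value, read_interval_data):
    if not reader.read_bit():                   # content identification bit
        return default                          # nothing stored: use the table default
    if not reader.read_bit():                   # animation interval identification bit
        return read_value(reader)               # plain value, typed by the structure table
    with_spatial = False
    if attr_type == "spatial_easing_animation":
        with_spatial = bool(reader.read_bit())  # spatial easing parameter identification bit
    return read_interval_data(reader, with_spatial)

# Content bit 0: nothing is stored for the attribute, so the default is returned.
print(decode_attribute(_Bits([0]), "simple_animation", 1.0, None, None))  # 1.0
```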
In one embodiment, the fields included in the data storage structure corresponding to the animation interval characteristic data include the number of animation intervals, the interpolator type of each animation interval, and the start time and end time of each animation interval, and further include a start value, an end value, a time easing parameter, and a spatial easing parameter of the attribute for each animation interval.
In one embodiment, the obtaining module 1702 further includes a target animation file parsing module, a file header information parsing module, and a node element parsing module, where the target animation file parsing module is configured to parse a target animation file to obtain a binary sequence; according to the target file structure of the target animation file, sequentially reading file header coding information and node element coding data of the target animation file from the binary sequence; the file header information analysis module is used for decoding the file header coding information according to the sequence of the fields included in the file header organization structure of the target animation file and the data types of the fields to obtain file header information; the node element analysis module is used for decoding the node element coded data according to the node element organization structure of the target animation file, and sequentially obtaining animation tag codes, data block lengths and data blocks corresponding to the animation tag codes; the data blocks include dynamic attribute data blocks and basic attribute data blocks.
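A sketch of reading node elements laid out as just described (tag code, data block length, then the data block bytes) follows. The concrete field widths, a 16-bit tag code and a 32-bit little-endian length, are assumptions for illustration; the patent fixes the organization, not these widths.

```python
# Sketch: walking a buffer of node elements (tag code, length, data block).

import struct

def read_node_elements(buf):
    elements, pos = [], 0
    while pos + 6 <= len(buf):
        tag_code, length = struct.unpack_from("<HI", buf, pos)
        pos += 6
        elements.append((tag_code, buf[pos:pos + length]))  # raw data block bytes
        pos += length
    return elements

blob = struct.pack("<HI", 13, 3) + b"\x01\x02\x03"
print(read_node_elements(blob))  # [(13, b'\x01\x02\x03')]
```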
In one embodiment, the target animation file is obtained by exporting the animation engineering file according to a vector export mode; the target animation file analysis module is also used for acquiring a file data structure corresponding to the vector derivation mode; and analyzing the target animation file according to the file data structure to obtain a binary sequence representing the animation vector data.
In one embodiment, the target animation file is obtained by deriving the animation engineering file according to a bitmap sequence frame derivation mode, and the target animation file analysis module is further configured to decompress the target animation file according to a picture decoding mode to obtain picture binary data corresponding to the target animation file; the picture binary data includes pixel data of a key bitmap image and pixel data of a difference pixel region of a non-key bitmap image in the animation engineering file.
In one embodiment, the target animation file is obtained by exporting the animation engineering file according to a video sequence frame export mode, and the target animation file analysis module is further used for decompressing the target animation file according to a video decoding mode to obtain picture binary data corresponding to the target animation file; the picture binary data is pixel data of a synthesized bitmap obtained by synthesizing a color channel bitmap and a transparency channel bitmap of a bitmap image included in the animation engineering file.
In the above animation data decoding apparatus 1700, an animation tag code identifies a group of attributes, and the attribute structure table describes the data structure of the group of attributes identified by that tag code. When the kind or number of the attribute values cannot be determined in advance, or when the attribute values contain heavy redundancy, the dynamic attribute data block avoids the file bloat that a large number of identifier fields describing attribute kind or number would cause, compressing the identifiers as much as possible and greatly reducing the size of the target animation file. Specifically, during decoding, after an animation tag code is read, if attribute types exist in the attribute structure table corresponding to that tag code, the attribute values of the group of attributes it identifies were encoded in the form of a dynamic attribute data block; the attribute identification information corresponding to each attribute can be parsed from the dynamic attribute data block in combination with the attribute type of each attribute, and the attribute content corresponding to each attribute can then be parsed from the block based on that identification information, thereby decoding the animation file.
FIG. 18 illustrates an internal block diagram of a computer device in one embodiment. The computer device may be specifically the terminal 110 of fig. 1. As shown in fig. 18, the computer device includes a processor and a memory connected by a system bus. The memory includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system, and may also store a computer program that, when executed by a processor, causes the processor to implement an animation data encoding method or an animation data decoding method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform an animation data encoding method or an animation data decoding method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 18 is merely a block diagram of a partial structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, the animation data encoding apparatus 1600 provided by the present application may be implemented in the form of a computer program that can run on a computer device as shown in fig. 18. The memory of the computer device may store therein various program modules constituting the animation data encoding apparatus 1600, such as an animation data acquisition module 1602, an attribute identification information determination module 1604, an attribute content encoding module 1606, and a data block generation module 1608 shown in fig. 16. The computer program constituted by the respective program modules causes the processor to execute the steps in the animation data encoding method of the respective embodiments of the present application described in the present specification.
For example, the computer device shown in fig. 18 may perform step S802 through the animation data acquisition module 1602 in the animation data encoding apparatus 1600 shown in fig. 16. The computer device may perform step S804 through the attribute identification information determination module 1604, step S806 through the attribute content encoding module 1606, and step S808 through the data block generation module 1608.
In one embodiment, the animation data decoding apparatus 1700 provided by the present application may be implemented in the form of a computer program that can run on a computer device as shown in fig. 18. The memory of the computer device may store the various program modules constituting the animation data decoding apparatus 1700, for example, the acquisition module 1702, the attribute identification information parsing module 1704, and the attribute content parsing module 1706 shown in fig. 17. The computer program constituted by these program modules causes the processor to execute the steps in the animation data decoding method of the respective embodiments of the present application described in this specification.
For example, the computer device shown in fig. 18 may execute step S1402 by the acquisition module 1702 in the animation data decoding apparatus 1700 as shown in fig. 17. The computer device may perform step S1404 through the attribute identification information parsing module 1704. The computer device may perform step S1406 through the attribute content parsing module 1706.
In one embodiment, a computer device is provided that includes a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the animation data encoding method described above. The steps of the animation data encoding method herein may be the steps in the animation data encoding method of each of the above embodiments.
In one embodiment, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, causes the processor to perform the steps of the above-described animation data encoding method. The steps of the animation data encoding method here may be the steps in the animation data encoding method of each of the above embodiments.
In one embodiment, a computer device is provided that includes a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the above-described animation data decoding method. The steps of the animation data decoding method herein may be the steps in the animation data decoding method of each of the above embodiments.
In one embodiment, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, causes the processor to perform the steps of the above-described animation data decoding method. The steps of the animation data decoding method herein may be the steps in the animation data decoding method of each of the above embodiments.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of this specification.
The foregoing embodiments represent only a few implementations of the present application, and their descriptions, while specific and detailed, are not to be construed as limiting the scope of the application. It should be noted that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the application, and these all fall within the protection scope of the application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (26)

1. An animation data encoding method, comprising:
Obtaining animation vector data corresponding to each animation tag code from an animation engineering file;
when an attribute type exists in the attribute structure table corresponding to the animation tag code, determining attribute identification information corresponding to each attribute; encoding the attribute value corresponding to each attribute in the animation vector data according to the attribute identification information to obtain the attribute content corresponding to each attribute; and sequentially storing, according to the attribute ordering corresponding to each attribute in the attribute structure table, the attribute identification information and the attribute content corresponding to each attribute to obtain a dynamic attribute data block corresponding to the animation tag code;
Generating node element coding data according to the animation tag code and the dynamic attribute data block;
and obtaining a target animation file corresponding to the vector derivation mode according to the node element coding data.
2. The method according to claim 1, wherein the method further comprises:
when no attribute type exists in the attribute structure table corresponding to the animation tag code, sequentially encoding the attribute values corresponding to each attribute in the animation vector data according to the data type and the attribute ordering corresponding to each attribute in the attribute structure table to obtain a basic attribute data block corresponding to the animation tag code;
and generating node element coding data according to the animation tag code and the basic attribute data block.
3. The method according to claim 1, wherein the attribute type corresponding to the attribute is a general attribute or a boolean attribute, and the attribute identification information corresponding to the attribute only includes a content identification bit; the determining the attribute identification information corresponding to each attribute includes:
When the attribute value corresponding to the attribute in the animation vector data is not a default value, the content identification bit is encoded into a value which indicates that the attribute content corresponding to the attribute exists in the dynamic attribute data block;
And when the attribute value corresponding to the attribute in the animation vector data is a default value, the content identification bit is encoded into a value which indicates that the attribute content corresponding to the attribute does not exist in the dynamic attribute data block.
4. The method of claim 1, wherein the attribute type corresponding to the attribute is a fixed attribute, and the attribute identification information is null; the step of encoding the attribute value corresponding to each attribute in the animation vector data according to the attribute identification information to obtain the attribute content corresponding to each attribute, including:
And directly encoding the attribute value corresponding to the attribute in the animation vector data according to the data type corresponding to the attribute to obtain the attribute content corresponding to the attribute.
5. The method according to claim 1, wherein the attribute type corresponding to the attribute is a simple animation attribute, a discrete animation attribute, a multi-dimensional time easing animation attribute, or a spatial easing animation attribute, and the attribute identification information at least includes a content identification bit; the determining the attribute identification information corresponding to each attribute includes:
When the attribute value corresponding to the attribute in the animation vector data is not a default value, the content identification bit is encoded into a value which indicates that the attribute content corresponding to the attribute exists in the dynamic attribute data block;
And when the attribute value corresponding to the attribute in the animation vector data is a default value, the content identification bit is encoded into a value which indicates that the attribute content corresponding to the attribute does not exist in the dynamic attribute data block.
6. The method of claim 5, wherein the attribute identification information further comprises an animation interval identification bit; when the value of the content identification bit indicates that attribute content corresponding to the attribute exists in the dynamic attribute data block, the method further comprises:
when the attribute value comprises animation interval characteristic data, encoding the animation interval identification bit into a value which indicates that attribute content corresponding to the attribute stored in the dynamic attribute data block comprises animation interval characteristic data;
And when the attribute value does not comprise the animation interval characteristic data, encoding the animation interval identification bit into a value which indicates that the attribute content corresponding to the attribute stored in the dynamic attribute data block does not comprise the animation interval characteristic data.
7. The method according to claim 6, wherein the encoding the attribute value corresponding to each attribute in the animation vector data according to the attribute identification information to obtain the attribute content corresponding to each attribute includes:
when the value of the animation interval identification bit indicates that the attribute content corresponding to the attribute stored in the dynamic attribute data block includes animation interval characteristic data, encoding the animation interval characteristic data corresponding to the attribute according to the data storage structure corresponding to the animation interval characteristic data to obtain the attribute content corresponding to the attribute;
and when the value of the animation interval identification bit indicates that the attribute content corresponding to the attribute stored in the dynamic attribute data block does not include the animation interval characteristic data, directly encoding the attribute value corresponding to the attribute in the animation vector data according to the data type corresponding to the attribute to obtain the attribute content corresponding to the attribute.
8. The method of claim 6, wherein the attribute type corresponding to the attribute is a spatial easing animation attribute, and the attribute identification information further comprises a spatial easing parameter identification bit; when the value of the animation interval identification bit indicates that the attribute content corresponding to the attribute stored in the dynamic attribute data block includes animation interval characteristic data, the method further includes:
when the animation interval characteristic data includes a spatial easing parameter, encoding the spatial easing parameter identification bit into a value indicating that the attribute content corresponding to the attribute stored in the dynamic attribute data block includes the spatial easing parameter;
and when the animation interval characteristic data does not include the spatial easing parameter, encoding the spatial easing parameter identification bit into a value indicating that the attribute content corresponding to the attribute stored in the dynamic attribute data block does not include the spatial easing parameter.
9. The method according to claim 8, wherein the encoding the attribute value corresponding to each attribute in the animation vector data according to the attribute identification information to obtain the attribute content corresponding to each attribute includes:
when the value of the spatial easing parameter identification bit indicates that the attribute content corresponding to the attribute stored in the dynamic attribute data block includes the spatial easing parameter, encoding the animation interval characteristic data corresponding to the attribute according to the data storage structure corresponding to the animation interval characteristic data to obtain attribute content that corresponds to the attribute and includes the spatial easing parameter; otherwise, obtaining attribute content that corresponds to the attribute and does not include the spatial easing parameter.
10. The method of claim 9, wherein the fields included in the data storage structure corresponding to the animation interval characteristic data include a number of animation intervals, an interpolator type of each animation interval, and a start time and an end time of each animation interval, and further include a start value, an end value, a time easing parameter, and a spatial easing parameter of the attribute corresponding to each animation interval.
11. The method according to any one of claims 1 to 10, wherein the obtaining the target animation file corresponding to the vector derivation mode according to the node element coding data includes:
sequentially encoding the file header information according to the file header organization structure to obtain file header encoding information;
And organizing the file header coding information and the node element coding data according to a target file structure to obtain a target animation file.
12. An animation data decoding method, comprising:
analyzing the target animation file to obtain a binary sequence representing animation vector data;
reading node element coding data in the target animation file from the binary sequence;
Decoding the node element coded data according to the node element organization structure of the target animation file to sequentially obtain animation tag codes and dynamic attribute data blocks corresponding to the animation tag codes;
parsing attribute identification information corresponding to each attribute from the dynamic attribute data block according to the attribute type corresponding to each attribute in the attribute structure table corresponding to the animation tag code;
and parsing attribute content corresponding to each attribute from the dynamic attribute data block according to the attribute identification information corresponding to each attribute.
13. The method according to claim 12, wherein the method further comprises:
decoding the node element coded data according to the node element organization structure of the target animation file to sequentially obtain the animation tag codes and basic attribute data blocks corresponding to the animation tag codes;
and sequentially reading the attribute values corresponding to each attribute from the basic attribute data block according to the data type and the attribute ordering corresponding to each attribute in the attribute structure table corresponding to the animation tag code.
14. The method according to claim 12, wherein parsing the attribute identification information corresponding to each attribute from the dynamic attribute data block according to the attribute type corresponding to each attribute in the attribute structure table corresponding to the animation tag code, comprises:
Querying the attribute structure table corresponding to the animation tag code to obtain attribute ordering and attribute types corresponding to the attributes;
determining the bit number occupied by the attribute identification information corresponding to each attribute according to the attribute type corresponding to each attribute;
reading data from the first bit in the dynamic attribute data block;
and determining the attribute identification information corresponding to each attribute from the sequentially read data according to the attribute sequencing based on the bit number occupied by the attribute identification information corresponding to each attribute.
15. The method of claim 12, wherein the attribute type is a common attribute, and the attribute identification information includes only a content identification bit; the parsing the attribute content corresponding to each attribute from the dynamic attribute data block according to the attribute identification information corresponding to each attribute includes:
if the value of the content identification bit indicates that the attribute content corresponding to the attribute exists in the dynamic attribute data block, reading the attribute content corresponding to the attribute from the dynamic attribute data block according to the data type corresponding to the attribute;
and if the value of the content identification bit indicates that the attribute content corresponding to the attribute does not exist in the dynamic attribute data block, inquiring a default value corresponding to the attribute from the attribute structure table.
16. The method of claim 12, wherein the attribute type is a fixed attribute and the attribute identification information is null; the parsing the attribute content corresponding to each attribute from the dynamic attribute data block according to the attribute identification information corresponding to each attribute includes:
and directly reading attribute contents corresponding to the attributes from the dynamic attribute data block according to the data types corresponding to the attributes.
17. The method of claim 12, wherein the attribute type is a boolean attribute, and the attribute identification information includes only a content identification bit; the parsing the attribute content corresponding to each attribute from the dynamic attribute data block according to the attribute identification information corresponding to each attribute includes:
And directly taking the value of the content identification bit as attribute content corresponding to the attribute.
18. The method of claim 12, wherein the attribute type is a simple animation attribute, a discrete animation attribute, a multi-dimensional time easing animation attribute, or a spatial easing animation attribute, and the attribute identification information includes at least a content identification bit; the parsing the attribute content corresponding to each attribute from the dynamic attribute data block according to the attribute identification information corresponding to each attribute includes:
reading attribute contents corresponding to the attributes from the dynamic attribute data block if the value of the content identification bit indicates that the attribute contents corresponding to the attributes exist in the dynamic attribute data block;
and when the value of the content identification bit indicates that the attribute content corresponding to the attribute does not exist in the dynamic attribute data block, inquiring a default value corresponding to the attribute according to the attribute structure table.
19. The method according to claim 18, wherein if the value of the content identification bit indicates that the attribute content corresponding to the attribute exists in the dynamic attribute data block, reading the attribute content corresponding to the attribute from the dynamic attribute data block includes:
when the value of the content identification bit indicates that the attribute content corresponding to the attribute exists in the dynamic attribute data block, reading the value of the next bit of the content identification bit as the value of the animation interval identification bit;
When the value of the animation interval identification bit indicates that the attribute content comprises animation interval characteristic data, analyzing the animation interval characteristic data corresponding to the attribute from the dynamic attribute data block according to a data storage structure corresponding to the animation interval characteristic data;
When the value of the animation interval identification bit indicates that the attribute content does not comprise the animation interval characteristic data, analyzing the attribute content corresponding to the attribute from the dynamic attribute data block directly according to the data type of the attribute.
20. The method according to claim 19, wherein the attribute type is a spatial easing animation attribute, and wherein the parsing the animation interval characteristic data corresponding to the attribute from the dynamic attribute data block according to the data storage structure corresponding to the animation interval characteristic data when the value of the animation interval identification bit indicates that the attribute content includes animation interval characteristic data comprises:
when the value of the animation interval identification bit indicates that the attribute content includes animation interval characteristic data, reading the value of the next bit after the animation interval identification bit as the value of a spatial easing parameter identification bit;
when the value of the spatial easing parameter identification bit indicates that the animation interval characteristic data includes the spatial easing parameter, parsing the animation interval characteristic data that corresponds to the attribute and includes the spatial easing parameter from the dynamic attribute data block according to the data storage structure corresponding to the animation interval characteristic data;
and when the value of the spatial easing parameter identification bit indicates that the animation interval characteristic data does not include the spatial easing parameter, parsing the animation interval characteristic data that corresponds to the attribute and does not include the spatial easing parameter from the dynamic attribute data block according to the data storage structure corresponding to the animation interval characteristic data.
21. The method of claim 20, wherein the fields included in the data storage structure corresponding to the animation interval characteristic data comprise a number of animation intervals, an interpolator type for each animation interval, and a start time and an end time for each animation interval, and further comprise a start value, an end value, a time easing parameter, and a spatial easing parameter of the attribute corresponding to each animation interval.
22. The method according to any one of claims 12 to 21, further comprising:
Reading file header coding information of the target animation file from the binary sequence;
And decoding the file header coding information according to the sequence of the fields included in the file header organization structure of the target animation file and the data types of the fields to obtain file header information.
23. An animation data encoding device, the device comprising:
The animation data acquisition module is used for acquiring animation vector data corresponding to each animation tag code from the animation engineering file;
The dynamic attribute data block generation module is used for determining attribute identification information corresponding to each attribute when an attribute type exists in the attribute structure table corresponding to the animation tag code; encoding the attribute value corresponding to each attribute in the animation vector data according to the attribute identification information to obtain the attribute content corresponding to each attribute; and sequentially storing, according to the attribute ordering corresponding to each attribute in the attribute structure table, the attribute identification information and the attribute content corresponding to each attribute to obtain a dynamic attribute data block corresponding to the animation tag code;
the node element coding module is used for generating node element coding data according to the animation tag code and the dynamic attribute data block;
and the target animation file generation module is used for obtaining a target animation file corresponding to the vector derivation mode according to the node element coding data.
24. An animation data decoding device, the device comprising:
the target animation file analysis module is used for analyzing the target animation file to obtain a binary sequence representing the animation vector data;
The node element coding data analysis module is used for reading node element coding data in the target animation file from the binary sequence; decoding the node element coded data according to the node element organization structure of the target animation file to sequentially obtain animation tag codes and dynamic attribute data blocks corresponding to the animation tag codes;
The attribute identification information analysis module is used for analyzing the attribute identification information corresponding to each attribute from the dynamic attribute data block according to the attribute type corresponding to each attribute in the attribute structure table corresponding to the animation tag code;
and the attribute content analysis module is used for analyzing the attribute content corresponding to each attribute from the dynamic attribute data block according to the attribute identification information corresponding to each attribute.
25. A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method of any one of claims 1 to 22.
26. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1 to 22.
CN201910502818.6A 2019-06-11 Animation data encoding and decoding method and device, storage medium and computer equipment Active CN112070866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910502818.6A CN112070866B (en) 2019-06-11 Animation data encoding and decoding method and device, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN112070866A (en) 2020-12-11
CN112070866B (en) 2024-07-02

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102238390A (en) * 2011-08-05 2011-11-09 中国科学院深圳先进技术研究院 Image-library-based video and image coding and decoding method and system
CN109104405A (en) * 2018-06-28 2018-12-28 武汉船舶通信研究所(中国船舶重工集团公司第七二二研究所) Binary protocol coding, coding/decoding method and device


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant