WO2021073293A1 - Animation file generating method and device, and storage medium - Google Patents

Animation file generating method and device, and storage medium

Info

Publication number
WO2021073293A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
animation
playback
format
file
Prior art date
Application number
PCT/CN2020/112752
Other languages
French (fr)
Chinese (zh)
Inventor
李俊
Original Assignee
广州华多网络科技有限公司 (Guangzhou Huaduo Network Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.) 2019-10-16
Filing date
Publication date
Application filed by 广州华多网络科技有限公司 (Guangzhou Huaduo Network Technology Co., Ltd.)
Publication of WO2021073293A1 publication Critical patent/WO2021073293A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/04: Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/07: User-to-user messaging characterised by the inclusion of specific contents
    • H04L 51/10: Multimedia information
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312: Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440218: Processing of video elementary streams involving reformatting operations by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47: End-user applications
    • H04N 21/485: End-user interface for client configuration

Definitions

  • the present disclosure relates to the field of computer technology, and in particular to a method, device and storage medium for generating animation files.
  • in the related art, designers can use animation design software (such as After Effects or Animate CC) to produce animation files in a target format in advance, and developers then store the animation files produced by the designers directly in the terminal, for terminal users to download and use.
  • the animation file generation method in the related art has poor flexibility.
  • the embodiments of the present disclosure provide a method, a device and a storage medium for generating an animation file, which can solve the problem of poor flexibility of the method for generating an animation file in the related art.
  • the technical solution is as follows:
  • in one aspect, a method for generating an animation file is provided, and the method includes:
  • obtaining target parameters and a target picture used to generate an animation file;
  • generating animation data in a target format based on the target parameters, where the animation data is used to indicate the playback form of the target picture in multi-frame pictures;
  • generating, according to the animation data and the target picture, an animation file in the target format, where the animation file includes the multi-frame pictures.
  • in another aspect, a device for generating animation files is provided, and the device includes:
  • an acquisition module, used to acquire target parameters and a target picture used to generate an animation file;
  • an animation data generating module, used to generate animation data in a target format based on the target parameters, where the animation data is used to indicate the playback form of the target picture in multi-frame pictures;
  • an animation file generating module, used to generate, according to the animation data and the target picture, an animation file in the target format, where the animation file includes the multi-frame pictures.
  • in yet another aspect, a device for generating animation files is provided, and the device includes:
  • a processor;
  • a memory for storing executable instructions of the processor;
  • wherein the processor is configured to execute the method for generating an animation file described in the above aspect.
  • in still another aspect, a computer-readable storage medium is provided, and instructions are stored in the computer-readable storage medium; when the instructions run on a computer, the computer is caused to execute the method for generating an animation file described in the above aspects.
  • the embodiments of the present disclosure provide a method, device and storage medium for generating an animation file.
  • the terminal can obtain the target parameters and target picture used to generate the animation file, can generate animation data in the target format based on the target parameters, and can generate the animation file according to the animation data and the target picture. Since the terminal can automatically generate an animation file based on the acquired target parameters, this generation method is more flexible than the related art, in which the terminal can only receive animation files pre-designed by a designer. Correspondingly, the content and style of the generated animation files are richer.
  • FIG. 1 is a schematic structural diagram of an implementation environment involved in various embodiments of the present disclosure
  • FIG. 2 is a flowchart of a method for generating an animation file provided by an embodiment of the present disclosure
  • FIG. 3 is a flowchart of another method for generating an animation file provided by an embodiment of the present disclosure
  • FIG. 4 is a flowchart of a method for generating animation data provided by an embodiment of the present disclosure
  • FIG. 5 is a flowchart of another method for generating animation data provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of an acquired playback track provided by an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram of an application scenario of an animation file provided by an embodiment of the present disclosure.
  • FIG. 8 is a block diagram of a device for generating an animation file provided by an embodiment of the present disclosure.
  • FIG. 9 is a block diagram of an animation data generation module provided by an embodiment of the present disclosure.
  • FIG. 10 is a block diagram of another animation data generation module provided by an embodiment of the present disclosure.
  • FIG. 11 is a block diagram of another animation data generation module provided by an embodiment of the present disclosure.
  • FIG. 12 is a schematic structural diagram of a terminal provided by an embodiment of the present disclosure.
  • FIG. 1 is a schematic diagram of an implementation environment involved in a method for generating an animation file provided by an embodiment of the present disclosure.
  • the implementation environment may include a terminal 110, which may be a computer, a notebook computer, or a smart phone, etc., and FIG. 1 takes the terminal 110 as a computer as an example for illustration.
  • an animation file generating device may be installed in the terminal 110, and the animation file generating device may include an animation data processing module and an encoding export file module.
  • the animation data processing module can obtain the target parameters and target picture (also called animation material) input by the developer for generating the animation file, can process the target parameters in a manner corresponding to the type of the input target parameters to obtain animation data in the target format, and can send the generated animation data and the target picture to the encoding export file module.
  • the encoding export file module can convert the received animation data into a file in the target format, and compress and pack the file in the target format together with the target picture to generate an animation file in the target format, which can then be played by a player that supports playing animation files in the target format.
  • the target format refers to the format in which the animation file is played.
  • optionally, the embodiments of the present disclosure are described by taking as an example a target format that is the Scalable Vector Graphics Animation (SVGA) format, which is compatible with different system platforms.
  • FIG. 2 is a flowchart of a method for generating an animation file provided by an embodiment of the present disclosure, and the method may be applied to the terminal 110 shown in FIG. 1. As shown in Figure 2, the method may include:
  • Step 201 Obtain target parameters and target pictures used to generate an animation file.
  • in the embodiments of the present disclosure, when an animation file needs to be developed, the developer can input the target parameters and target picture for generating the animation file into the terminal; accordingly, the terminal can obtain the target parameters and target picture.
  • the target parameter refers to a parameter related to the animation file to be generated (such as playing duration and playing effect), and the target picture may include one or more pictures.
  • Step 202 Generate animation data in the target format based on the target parameters.
  • the animation file generally includes multiple frames of pictures, and the animation data can be used to indicate the playback form of the target picture in the multiple frames of pictures.
  • the terminal may encapsulate the acquired target parameters according to the encapsulation standard of the target format to generate animation data in the target format.
  • the target format may be the SVGA format.
  • Step 203 Generate an animation file in the target format according to the animation data and the target picture.
  • after the terminal generates the animation data in the target format, it can write the animation data and the target picture into the same file according to the file standard of the target format, and compress and pack them to generate the animation file in the target format.
  • the embodiments of the present disclosure provide a method for generating an animation file.
  • the terminal can obtain the target parameters and target picture used to generate the animation file, can generate animation data in the target format based on the target parameters, and can generate the animation file according to the animation data and the target picture. Since the terminal can automatically generate an animation file based on the acquired target parameters and target picture, this generation method is more flexible than the related art, in which the terminal can only receive animation files pre-designed by a designer. Correspondingly, the content and style of the generated animation files are richer.
  • FIG. 3 is a flowchart of another method for generating an animation file provided by an embodiment of the present disclosure, which can be applied to the terminal 110 shown in FIG. 1. As shown in Figure 3, the method may include:
  • Step 301 Obtain target parameters and target pictures for generating an animation file.
  • in the embodiments of the present disclosure, when an animation file needs to be developed, the developer can input the target parameters and target picture for generating the animation file into the terminal; accordingly, the terminal can obtain the target parameters and target picture.
  • the target parameter refers to a parameter related to the animation file to be generated
  • the target picture may include one or more pictures.
  • the animation file refers to a file in which the target picture can be played dynamically.
  • the animation file generally includes multiple frames, and the playback states of the target picture in the individual frames are spliced together to form the animation file.
  • the target parameter used to generate the animation file may include multiple element state values, and each element state value may be used to indicate the playback state of the target picture in one frame (that is, the target parameter may include complete animation data).
  • the element state value may include at least one of the position of the target picture in a frame, its transparency, its rotation angle, and its zoom level.
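  • For illustration only (this structure is not defined in the disclosure), such an element state value could be modeled as follows; all names are hypothetical:

```typescript
// Hypothetical model of one element state value: the playback state of the
// target picture in a single frame of the animation.
interface ElementState {
  x: number;        // horizontal position of the target picture in the frame
  y: number;        // vertical position of the target picture in the frame
  alpha: number;    // transparency, from 0 (invisible) to 1 (fully opaque)
  rotation: number; // rotation angle, in degrees
  scale: number;    // zoom level, where 1 means the original size
}

// The element state values of a whole animation: one entry per frame.
type AnimationTrack = ElementState[];
```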
  • the target parameter may not include multiple element state values, but only include related parameters that can obtain multiple element state values.
  • the target parameter may include playback duration, playback effect, and frame rate, or the target parameter may include playback duration, playback effect, frame rate, and playback track.
  • the playback duration refers to the duration of the playback of the animation file
  • the playback effect refers to the playback form of the animation file.
  • the playback effect can be jitter (that is, the target picture shakes continuously during the playback time), translation (that is, the target picture moves continuously during the playback time), zoom (that is, the target picture is continuously zoomed during the playback time), a spring effect (that is, the target picture is continuously stretched and compressed during the playback time), or gradient (that is, the transparency or color of the target picture changes during the playback time), etc.
  • the frame rate refers to the frequency at which frames of the target picture appear continuously within the playback duration, that is, the number of frames played per second.
  • the playback track refers to the playback position of the target picture within the playback duration.
  • the target parameter may also include the picture size (size) of the target picture.
  • the target parameter obtained by the terminal may be a parameter input by the user through a selection operation among multiple optional items.
  • the target parameter acquired by the terminal may be a parameter input by the user in real time.
  • the terminal may be pre-configured with a variety of candidate playback effects, and the playback effects obtained by the terminal may be determined by the terminal according to a developer's selection operation for the pre-configured multiple candidate playback effects.
  • the playback effect may be determined by the terminal according to the playback effect input by the developer.
  • the playing track may be determined according to a touch operation performed by the user on the drawing interface; alternatively, a variety of candidate tracks may be pre-configured in the terminal, and the playing track may be determined according to the user's selection operation for at least one candidate playing track.
  • the playing track may be the contour of the target object obtained by recognizing the target object. Assuming that the target object is a person, the contour may be a face contour obtained by recognizing the face.
  • the embodiments of the present disclosure are not limited to the recognition of human face contours.
  • Step 302 Generate animation data in the target format based on the target parameters.
  • the animation data can be used to indicate the playback form of the target picture in the multi-frame picture, that is, it can be used to indicate the playback effect of the target picture during the playback time.
  • the operation of generating animation data may be different due to different target parameters.
  • if the target parameters include the multiple element state values, the terminal may directly encapsulate the multiple element state values and other parameters based on the encapsulation standard of the target format to generate animation data in the target format.
  • if the target parameters do not include the multiple element state values, the terminal needs to first determine the multiple element state values based on the target parameters, and then encapsulate the multiple element state values and other parameters to generate animation data in the target format. Therefore, after acquiring the target parameters, the terminal can select a processing method corresponding to the type of the target parameters to process them, so as to generate animation data in the target format.
  • the embodiment of the present disclosure introduces the method for generating animation data in the following ways:
  • the target parameter includes the playback duration and multiple element state values. That is, what the developer inputs to the terminal can be the animation data of the entire animation interval.
  • in this case, the terminal can directly encapsulate the playback duration and the multiple element state values according to the encapsulation standard of the target format, thereby generating animation data in the target format. It should be noted that, before encapsulation, the terminal may first analyze the playback state of each frame, frame by frame, according to the multiple element state values.
  • in this way, the generated animation file is not limited to existing playback effects; that is, the method of generating the animation file is more flexible, and the content and style of the generated animation file are more abundant.
  • FIG. 4 is a flowchart of a method for generating animation data provided by an embodiment of the present disclosure. As shown in Figure 4, the method may include:
  • Step 3021A Determine multiple element state values according to the playback duration, frame rate and playback effect.
  • the terminal can determine the multiple element state values based on the total number of frames and the playback effect. For example, for each target picture, the terminal may first fill in the specific data of the target picture in all frames according to the playback effect, where the specific data refers to the playback state of the target picture. Then, the terminal can use a pre-configured animation interface to create a corresponding animation object according to the playback effect, and play the created animation object in a loop within the playback duration according to the total number of frames and the frame rate. For example, assuming that the total number of frames is n and the frame rate is f, the terminal can execute n cycles, each cycle incrementing the timeline by 1/f second (s). In addition, the terminal can use a pre-configured interpolation algorithm to determine the multiple element state values during each loop playback.
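  • A minimal sketch of the loop described above, in TypeScript and using the ElementState type sketched earlier: n cycles, each advancing the timeline by 1/f seconds, with a simple linear interpolation standing in for the pre-configured interpolation algorithm. The effect handling is an assumption for illustration, not the disclosed implementation:

```typescript
// Playback effects named in the disclosure; the easing below is illustrative.
type Effect = "jitter" | "translation" | "zoom" | "spring" | "gradient";

// Simple linear interpolation, standing in for the interpolation algorithm.
function lerp(from: number, to: number, t: number): number {
  return from + (to - from) * t;
}

function buildTrack(durationSec: number, fps: number, effect: Effect): ElementState[] {
  const totalFrames = Math.round(durationSec * fps); // playback duration * frame rate
  const track: ElementState[] = [];
  for (let i = 0; i < totalFrames; i++) {            // n cycles...
    const t = (i / fps) / durationSec;               // ...each advancing by 1/f s
    const state: ElementState = { x: 0, y: 0, alpha: 1, rotation: 0, scale: 1 };
    switch (effect) {
      case "translation": state.x = lerp(0, 100, t); break;       // moves continuously
      case "zoom":        state.scale = lerp(0.5, 1.5, t); break; // zooms continuously
      case "gradient":    state.alpha = lerp(1, 0, t); break;     // fades out
      case "jitter":      state.x = 3 * Math.sin(10 * i); break;  // constant small shake
      case "spring":      state.scale = 1 + 0.2 * Math.sin(8 * Math.PI * t); break;
    }
    track.push(state);
  }
  return track;
}
```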
  • the animation interface may be a built-in animation interface of the terminal system, or may be an interface that the terminal receives when developing an animation file by a developer.
  • the interpolation algorithm can be an algorithm that comes with the terminal system, or it can be an algorithm written by a developer when developing an animation file.
  • Step 3022A According to the encapsulation standard of the target format, encapsulate the playback duration and multiple element state values to generate animation data in the target format.
  • after the terminal determines the multiple element state values based on the target parameters, it can encapsulate (also called combine) the playback duration and the multiple element state values according to the encapsulation standard of the SVGA format, to obtain animation data in the SVGA format.
  • since the terminal can automatically determine the multiple element state values based on target parameters such as the playback effect, playback duration, and frame rate, the developer only needs to input parameters such as the playback effect, duration, and frame rate. This can improve the efficiency of generating animation files, reduce the developers' workload, and ensure the reliability of the generated animation files.
  • FIG. 5 is a flowchart of another method for generating animation data provided by an embodiment of the present disclosure. As shown in Figure 5, the method may include:
  • Step 3021B Determine at least one playback position of the target picture according to the playback track.
  • the terminal may first determine a plurality of sampling points according to the acquired playback trajectory, and may then determine the positions of at least one target sampling point selected from the plurality of sampling points as at least one playback position of the target picture, where the distance between every two adjacent target sampling points is greater than the spacing threshold. That is, the terminal may first tile the target picture along the determined playback track, such that every two adjacent tiled copies of the target picture do not overlap.
  • the spacing threshold may be a fixed value pre-configured in the terminal, or the spacing threshold may be a parameter input by the developer and received by the terminal, or the spacing threshold may be the size of the target picture.
  • the playback trajectory acquired by the terminal is a star shown in Figure 6, and the star trajectory occupies a rectangle P of M*N.
  • the spacing threshold is m/2-1.
  • the terminal may first determine multiple sampling points a0 to an according to the star-shaped trajectory, and determine the positions of multiple target sampling points, each spaced more than m/2-1 from its neighbors, as multiple playback positions of the target picture. For example, for each sampling point, the terminal can take the sampling point as the midpoint of a rectangle Qi of size m*n, and determine the midpoint of an adjacent rectangle that has no intersection with Qi at all as a target sampling point.
  • the terminal may determine the determined positions of the multiple target sampling points as the playback position of the target picture.
  • the terminal can delete the part where the rectangle Qi and the rectangle P intersect, so that multiple playback positions of the target picture covering the entire star track can be obtained.
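  • As an illustrative sketch of this tiling step (the names and the greedy strategy are assumptions, not the disclosed algorithm): given the sampling points of the playback track in order, keep a point as a target sampling point only when it is farther than the spacing threshold from the last kept point, so that adjacent tiled copies of the target picture do not overlap:

```typescript
interface Point { x: number; y: number; }

// Greedy selection of target sampling points along the playback track:
// every two adjacent kept points are farther apart than minSpacing.
function pickPlaybackPositions(trackSamples: Point[], minSpacing: number): Point[] {
  const positions: Point[] = [];
  for (const p of trackSamples) {
    const last = positions[positions.length - 1];
    if (!last || Math.hypot(p.x - last.x, p.y - last.y) > minSpacing) {
      positions.push(p); // far enough from the previous target sampling point
    }
  }
  return positions;
}
```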
  • Step 3022B Multiply the playback duration and the frame rate to obtain the total number of frames included in the animation file.
  • the method for implementing this step can refer to the method of calculating the total number of frames in step 3021A above.
  • Step 3023B Determine multiple element state values based on the playback effect, at least one playback position of the target picture, and the total number of frames.
  • the implementation of this step can also refer to the method of determining multiple element state values in step 3021A.
  • the difference is that this method also needs to determine the multiple element state values according to the at least one playback position determined from the playback track; that is, after this step is completed, the obtained multiple element state values are related to the playback track.
  • Step 3024B According to the encapsulation standard of the target format, encapsulate the playback duration and multiple element state values to generate animation data in the target format.
  • Step 303 Encode the animation data according to the file standard of the SVGA format.
  • in the embodiments of the present disclosure, the target format may be the SVGA format. Accordingly, after the terminal obtains the animation data, it can encode the animation data according to the file standard of the SVGA format, that is, encode the animation data into the form of an SVGA-format file.
  • the file standard of the SVGA format may include version 1.0 and version 2.0: the version 1.0 file standard is a JSON-format file, and the version 2.0 file standard is a Google protobuf-format file.
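  • For orientation, a version 1.0 description file is roughly organized as follows. The snippet is an abbreviated, hand-written illustration following the publicly documented SVGA format, not an excerpt from the disclosure, and the concrete values are invented:

```typescript
// Abbreviated sketch of a version-1.0 SVGA description (JSON), written as a
// TypeScript literal for readability. Field names follow the public format.
const movieSpec = {
  ver: "1.1.0",
  movie: {
    viewBox: { width: 750, height: 750 },
    fps: 20,      // frame rate
    frames: 60,   // total number of frames = playback duration * frame rate
  },
  images: { lollipop: "<base64-encoded PNG>" }, // the target picture(s)
  sprites: [
    {
      imageKey: "lollipop",
      frames: [   // one entry per frame: the element state values
        { alpha: 1, transform: { a: 1, b: 0, c: 0, d: 1, tx: 10, ty: 20 } },
        // ...one state object for each remaining frame
      ],
    },
  ],
};
```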
  • Step 304 Compress the encoded animation data and the target picture to obtain an animation file in SVGA format.
  • the terminal may continue to write the encoded animation data and the target picture into the same file, and compress and pack the file, thereby generating an animation file in the SVGA format.
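  • A minimal sketch of this compress-and-pack step for a version-1.0-style file, assuming a Node.js environment and the third-party jszip library (the archive layout is an assumption for illustration, not the disclosed packing procedure):

```typescript
import JSZip from "jszip";

// Write the encoded animation data and the target pictures into one archive,
// then compress it, yielding the bytes of the .svga file.
async function packSvga(
  movieSpecJson: string,       // the JSON-encoded animation data
  images: Map<string, Buffer>, // target pictures, keyed by image name
): Promise<Buffer> {
  const zip = new JSZip();
  zip.file("movie.spec", movieSpecJson);
  for (const [name, png] of images) {
    zip.file(`${name}.png`, png); // each target picture alongside the data
  }
  return zip.generateAsync({ type: "nodebuffer", compression: "DEFLATE" });
}
```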
  • the following takes the scene of giving an animated gift in a live-broadcast client as an example to introduce the application of an animation file generated by the animation file generation method provided by the embodiments of the present disclosure.
  • the “personality” option is an animation gift generation option.
  • when the first client detects that the user clicks the "personality" option, the first client can detect the animation gift generation instruction.
  • the display interface of the first client can display a painting mask H1 and a gift bar G1 containing multiple candidate gift pictures.
  • the gift bar G1 includes lollipop pictures, rose pictures, love-heart pictures, and love-letter pictures, and the identifier and unit price of each candidate gift picture can also be displayed below it, such as "lollipop 0.1Y", where Y can be the unit in which the first client prices gift pictures, and the identifier of a gift picture may be text used to describe the gift picture.
  • assume the user selects the lollipop icon and draws a heart-shaped pattern on the painting mask H1. After the user clicks the send button, the animated gift drawn by the user can be sent to the second client, and the animated gift can be played on the interfaces of the clients of the users watching the live broadcast. In addition, referring to FIG. 7,
  • the display interface may also display a text prompt "10 lollipops are drawn, the total price is 1Y, and the balance is 0.7Y".
  • when the first client receives an instruction to send the animation file, it can send the animation file to the server, and the server will then send it to the other clients watching the live broadcast, so that the other clients play the animation file on their display interfaces.
  • the embodiments of the present disclosure provide a method for generating an animation file.
  • the terminal can obtain the target parameters and target picture used to generate the animation file, can generate animation data in the target format based on the target parameters, and can generate the animation file according to the animation data and the target picture. Since the terminal can automatically generate an animation file based on the acquired target parameters and target picture, this generation method is more flexible than the related art, in which the terminal can only receive animation files pre-designed by a designer. Correspondingly, the content and style of the generated animation files are richer.
  • FIG. 8 is a block diagram of a device for generating an animation file provided by an embodiment of the present disclosure, which can be applied to the terminal 110 shown in FIG. 1.
  • the device 80 may include:
  • the obtaining module 801 is used to obtain target parameters and target pictures used to generate an animation file.
  • the animation data generating module 802 is used to generate animation data in a target format based on the target parameters, and the animation data is used to indicate the playback form of the target picture in multi-frame pictures.
  • the animation file generating module 803 is used to generate an animation file in the target format according to the animation data and the target picture, and the animation file includes multiple frames.
  • the target parameter may include the playback duration and multiple element state values, and each element state value is used to indicate the playback state of the target picture in one frame.
  • FIG. 9 is a block diagram of an animation data generating module 802 provided by an embodiment of the present disclosure. As shown in FIG. 9, the animation data generating module 802 may include:
  • the encapsulation sub-module 8021A is used to encapsulate the playback duration and multiple element state values according to the encapsulation standard of the target format to generate animation data in the target format.
  • FIG. 10 is a block diagram of another animation data generating module 802 provided by an embodiment of the present disclosure.
  • the animation data generating module 802 may include:
  • the determining sub-module 8021B is used to determine multiple element status values according to the playback duration, frame rate and playback effect, and each element status value can be used to indicate the playback status of the target picture in one frame.
  • the encapsulation sub-module 8022B is used to encapsulate the playback duration and multiple element state values according to the encapsulation standard of the target format to generate animation data in the target format.
  • the determining sub-module 8021B can be used to multiply the playback duration and the frame rate to obtain the total number of frames included in the animation file, and determine multiple element state values based on the playback effect and the total number of frames.
  • FIG. 11 is a block diagram of another animation data generating module 802 provided by an embodiment of the present disclosure. As shown in FIG. 11, the animation data generating module 802 may include:
  • the first determining sub-module 8021C is configured to determine at least one playback position of the target picture according to the playback track.
  • the multiplying sub-module 8022C is used to multiply the playing duration and the frame rate to obtain the total number of frames included in the animation file.
  • the second determining sub-module 8023C is configured to determine multiple element state values based on the playback effect, at least one playback position of the target picture, and the total number of frames.
  • the encapsulation sub-module 8024C is used to encapsulate the playback duration and multiple element state values according to the encapsulation standard of the target format to generate animation data in the target format.
  • the first determining submodule 8021C may be used to determine multiple sampling points included in the playback track, and determine the position of at least one target sampling point selected from the multiple sampling points as at least one playback position of the target picture, where the distance between every two adjacent target sampling points is greater than the spacing threshold.
  • the playing track may be determined according to a touch operation performed by the user on the drawing interface, or the playing track may be determined according to the user's selection operation for at least one candidate playing track, or the playing track may be a target The contour of the target object obtained by object recognition.
  • the target format may be the Scalable Vector Graphics Animation (SVGA) format.
  • the animation file generation module 803 can be used to encode the animation data according to the file standard of the SVGA format, and compress the encoded animation data and the target picture to obtain an animation file in the SVGA format.
  • the embodiment of the present disclosure provides a device for generating animation files.
  • the device can obtain target parameters and a target picture used to generate an animation file, can generate animation data in a target format based on the target parameters, and can generate an animation file based on the animation data and the target picture. Since the device can automatically generate animation files based on the acquired target parameters and target picture, compared with the related art, in which only animation files pre-designed by a designer can be received, the device generates animation files with higher flexibility. Correspondingly, the content and style of the generated animation files are richer.
  • FIG. 12 is a structural block diagram of a mobile terminal 1200 provided by an exemplary embodiment of the present disclosure.
  • the terminal 1200 may be a portable mobile terminal, such as a smart phone, a tablet computer, a notebook computer, or a desktop computer.
  • the terminal 1200 may also be called user equipment, portable terminal, laptop terminal, desktop terminal and other names.
  • the terminal 1200 includes a processor 1201 and a memory 1202.
  • the processor 1201 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • the processor 1201 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • the processor 1201 may also include a main processor and a coprocessor.
  • the main processor is a processor used to process data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor used to process data in the standby state.
  • the processor 1201 may be integrated with a GPU (Graphics Processing Unit), which is used for rendering and drawing the content that needs to be displayed on the display screen.
  • the processor 1201 may also include an AI (Artificial Intelligence) processor, which is used to process computing operations related to machine learning.
  • the memory 1202 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 1202 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 1202 is used to store at least one instruction, and the at least one instruction is used to be executed by the processor 1201 to implement the animation file generation method provided in the method embodiments of the present application.
  • the terminal 1200 may optionally further include: a peripheral device interface 1203 and at least one peripheral device.
  • the processor 1201, the memory 1202, and the peripheral device interface 1203 may be connected by a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 1203 through a bus, a signal line, or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 1204, a display screen 1205, a camera 1206, an audio circuit 1207, a positioning component 1208, and a power supply 1209.
  • the peripheral device interface 1203 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1201 and the memory 1202.
  • in some embodiments, the processor 1201, the memory 1202, and the peripheral device interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202, and the peripheral device interface 1203 can be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 1204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 1204 communicates with a communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 1204 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the radio frequency circuit 1204 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on.
  • the radio frequency circuit 1204 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocol includes but is not limited to: the World Wide Web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks and/or WiFi (Wireless Fidelity) networks.
  • the radio frequency circuit 1204 may also include a circuit related to NFC (Near Field Communication), which is not limited in this application.
  • the display screen 1205 is used to display a UI (User Interface).
  • the UI can include graphics, text, icons, videos, and any combination thereof.
  • the display screen 1205 also has the ability to collect touch signals on or above the surface of the display screen 1205.
  • the touch signal can be input to the processor 1201 as a control signal for processing.
  • the display screen 1205 may also be used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
  • in some embodiments, there may be one display screen 1205, provided on the front panel of the terminal 1200; in other embodiments, there may be at least two display screens 1205, respectively arranged on different surfaces of the terminal 1200 or in a folded design; in still other embodiments, the display screen 1205 may be a flexible display screen disposed on a curved or folding surface of the terminal 1200. Furthermore, the display screen 1205 can also be set in a non-rectangular irregular pattern, that is, a special-shaped screen.
  • the display screen 1205 may be an LCD (Liquid Crystal Display) display screen or an OLED (Organic Light-Emitting Diode) display screen.
  • the camera assembly 1206 is used to capture images or videos.
  • the camera assembly 1206 includes a front camera and a rear camera.
  • the front camera is set on the front panel of the terminal, and the rear camera is set on the back of the terminal.
  • the camera assembly 1206 may also include a flash.
  • the flash can be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
  • the audio circuit 1207 may include a microphone and a speaker.
  • the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 1201 for processing, or input to the radio frequency circuit 1204 to implement voice communication. For the purpose of stereo collection or noise reduction, there may be multiple microphones, which are respectively set in different parts of the terminal 1200.
  • the microphone can also be an array microphone or an omnidirectional collection microphone.
  • the speaker is used to convert the electrical signal from the processor 1201 or the radio frequency circuit 1204 into sound waves.
  • the speaker can be a traditional thin-film speaker or a piezoelectric ceramic speaker.
  • when the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement.
  • the audio circuit 1207 may also include a headphone jack.
  • the positioning component 1208 is used to locate the current geographic location of the terminal 1200 to implement navigation or LBS (Location Based Service).
  • the positioning component 1208 may be a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, or the Galileo system of the European Union.
  • the power supply 1209 is used to supply power to various components in the terminal 1200.
  • the power source 1209 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
  • the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery.
  • a wired rechargeable battery is a battery charged through a wired line
  • a wireless rechargeable battery is a battery charged through a wireless coil.
  • the rechargeable battery can also be used to support fast charging technology.
  • the terminal 1200 further includes one or more sensors 1210.
  • the one or more sensors 1210 include, but are not limited to: an acceleration sensor 1211, a gyroscope sensor 1212, a pressure sensor 1213, a fingerprint sensor 1214, an optical sensor 1215, and a proximity sensor 1216.
  • the acceleration sensor 1211 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 1200.
  • the acceleration sensor 1211 can be used to detect the components of gravitational acceleration on three coordinate axes.
  • the processor 1201 may control the touch display screen 1205 to display the user interface in a horizontal view or a vertical view according to the gravity acceleration signal collected by the acceleration sensor 1211.
  • the acceleration sensor 1211 may also be used for the collection of game or user motion data.
  • the gyroscope sensor 1212 can detect the body direction and rotation angle of the terminal 1200, and the gyroscope sensor 1212 can cooperate with the acceleration sensor 1211 to collect the user's 3D actions on the terminal 1200.
  • the processor 1201 can implement the following functions according to the data collected by the gyroscope sensor 1212: motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 1213 may be disposed on the side frame of the terminal 1200 and/or the lower layer of the touch display screen 1205.
  • when the pressure sensor 1213 is disposed on the side frame of the terminal 1200, the user's holding signal for the terminal 1200 can be detected, and the processor 1201 performs left-right hand recognition or quick operations according to the holding signal collected by the pressure sensor 1213.
  • when the pressure sensor 1213 is disposed at the lower layer of the touch display screen 1205, the processor 1201 controls the operability controls on the UI according to the user's pressure operation on the touch display screen 1205.
  • the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 1214 is used to collect the user's fingerprint.
  • the processor 1201 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 1214, or the fingerprint sensor 1214 identifies the user's identity according to the collected fingerprint.
  • when the user's identity is recognized as a trusted identity, the processor 1201 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and so on.
  • the fingerprint sensor 1214 may be provided on the front, back or side of the terminal 1200. When a physical button or a manufacturer logo is provided on the terminal 1200, the fingerprint sensor 1214 can be integrated with the physical button or the manufacturer logo.
  • the optical sensor 1215 is used to collect the ambient light intensity.
  • the processor 1201 may control the display brightness of the touch screen 1205 according to the intensity of the ambient light collected by the optical sensor 1215. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1205 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1205 is decreased.
  • the processor 1201 may also dynamically adjust the shooting parameters of the camera assembly 1206 according to the ambient light intensity collected by the optical sensor 1215.
  • the proximity sensor 1216, also called a distance sensor, is usually arranged on the front panel of the terminal 1200.
  • the proximity sensor 1216 is used to collect the distance between the user and the front of the terminal 1200.
  • when the proximity sensor 1216 detects that the distance between the user and the front of the terminal 1200 gradually decreases, the processor 1201 controls the touch display screen 1205 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1216 detects that the distance between the user and the front of the terminal 1200 gradually increases, the processor 1201 controls the touch display screen 1205 to switch from the off-screen state to the bright-screen state.
  • those skilled in the art can understand that the structure shown in FIG. 12 does not constitute a limitation on the terminal 1200, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
  • the embodiments of the present disclosure also provide a computer-readable storage medium that stores instructions. When the instructions run on a computer, the computer can execute the methods of generating animation files shown in FIG. 2 and FIG. 3.

Abstract

An animation file generation method and device, and a storage medium, relating to the technical field of computers. A terminal can obtain target parameters and target pictures for generating an animation file, generate animation data in a target format on the basis of the target parameters, and generate the animation file according to the animation data and the target pictures. Because a terminal can automatically generate an animation file on the basis of the obtained target parameters, the animation file generation method is more flexible than the prior art, in which a terminal can only receive an animation file pre-designed by a designer. Correspondingly, the contents and styles of generated animation files are relatively rich.

Description

动画文件的生成方法、装置及存储介质Method, device and storage medium for generating animation file
本申请要求于2019年10月16日提交至中国专利局、申请号为201910983828.6、发明名称为“动画文件的生成方法、装置及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims the priority of the Chinese patent application filed to the Chinese Patent Office on October 16, 2019, with the application number 201910983828.6, and the invention title of "Animation file generation method, device and storage medium", the entire content of which is incorporated by reference In this application.
技术领域Technical field
本公开涉及计算机技术领域,特别涉及一种动画文件的生成方法、装置及存储介质。The present disclosure relates to the field of computer technology, and in particular to a method, device and storage medium for generating animation files.
背景技术Background technique
随着计算机技术的发展,用户之间的通信不再局限于发送或接收静态图片,一系列可供播放的动态图片(也可称为动画)应运而生。With the development of computer technology, communication between users is no longer limited to sending or receiving static pictures. A series of dynamic pictures (also known as animations) for playback have emerged.
相关技术中,设计人员可以采用动画设计软件(如After Effects或Animate CC)预先制作好目标格式的动画文件,然后,再由开发人员将设计人员制作好的动画文件直接存储至终端中,以供使用终端的用户下载使用。In related technologies, designers can use animation design software (such as After Effects or Animate CC) to make animation files in the target format in advance, and then the developers can directly store the animation files made by the designer in the terminal for use. Download and use by terminal users.
相关技术中的动画文件的生成方法的灵活性较差。The animation file generation method in the related art has poor flexibility.
发明内容Summary of the invention
本公开实施例提供了一种动画文件的生成方法、装置及存储介质,可以解决相关技术中动画文件的生成方法的灵活性较差的问题。所述技术方案如下:The embodiments of the present disclosure provide a method, a device and a storage medium for generating an animation file, which can solve the problem of poor flexibility of the method for generating an animation file in the related art. The technical solution is as follows:
一方面,提供了一种动画文件的生成方法,所述方法包括:In one aspect, a method for generating an animation file is provided, and the method includes:
获取用于生成动画文件的目标参数和目标图片;Obtain target parameters and target pictures used to generate animation files;
基于所述目标参数生成目标格式的动画数据,所述动画数据用于指示所述目标图片在多帧画面中的播放形式;Generating animation data in a target format based on the target parameters, where the animation data is used to indicate a playback form of the target picture in a multi-frame screen;
根据所述动画数据和所述目标图片,生成所述目标格式的动画文件,所述动画文件包括所述多帧画面。According to the animation data and the target picture, an animation file in the target format is generated, and the animation file includes the multi-frame pictures.
另一方面,提供了一种动画文件的生成装置,所述装置包括:In another aspect, a device for generating animation files is provided, and the device includes:
获取模块,用于获取用于生成动画文件的目标参数和目标图片;The acquisition module is used to acquire the target parameters and target pictures used to generate the animation file;
基于所述目标参数生成目标格式的动画数据,所述动画数据用于指示所述目标图片在多帧画面中的播放形式;Generating animation data in a target format based on the target parameters, where the animation data is used to indicate a playback form of the target picture in a multi-frame screen;
根据所述动画数据和所述目标图片,生成所述目标格式的动画文件,所述动画文件包括所述多帧画面。According to the animation data and the target picture, an animation file in the target format is generated, and the animation file includes the multi-frame pictures.
又一方面,提供了一种动画文件的生成装置,所述装置包括:In yet another aspect, a device for generating animation files is provided, the device comprising:
处理器;processor;
用于存储所述处理器的可执行指令的存储器;A memory for storing executable instructions of the processor;
其中,所述处理器被配置为:Wherein, the processor is configured to:
执行如上述方面所述的动画文件的生成方法。Perform the animation file generation method as described in the above aspect.
再一方面,提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当所述计算机可读存储介质在计算机上运行时,使得计算机执行如上述方面所述的动画文件的生成方法。In another aspect, a computer-readable storage medium is provided, and instructions are stored in the computer-readable storage medium. When the computer-readable storage medium runs on a computer, the computer executes the animation as described in the above-mentioned aspect. The method of file generation.
本公开实施例提供的技术方案带来的有益效果至少可以包括:The beneficial effects brought about by the technical solutions provided by the embodiments of the present disclosure may at least include:
综上所述,本公开实施例提供了一种动画文件的生成方法、装置及存储介质。其中,终端可以获取用于生成动画文件的目标参数和目标图片,可以基于该目标参数生成目标格式的动画数据,且可以根据该动画数据和目标图片生成动画文件。由于终端可以基于获取到的目标参数自动生成动画文件,因此相对于相关技术中终端仅能接收设计师预先设计好的动画文件,该动画文件生成方法的灵活性较高。相应的,生成的动画文件的内容和样式即会较为丰富。In summary, the embodiments of the present disclosure provide a method, device and storage medium for generating an animation file. Wherein, the terminal can obtain the target parameter and target picture used to generate the animation file, can generate the animation data in the target format based on the target parameter, and can generate the animation file according to the animation data and the target picture. Since the terminal can automatically generate an animation file based on the acquired target parameters, compared with the related art that the terminal can only receive the animation file pre-designed by the designer, the animation file generation method is more flexible. Correspondingly, the content and style of the generated animation file will be richer.
附图说明Description of the drawings
为了更清楚地说明本公开实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本公开的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。In order to more clearly describe the technical solutions in the embodiments of the present disclosure, the following will briefly introduce the accompanying drawings used in the description of the embodiments. Obviously, the accompanying drawings in the following description are only some embodiments of the present disclosure. For those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
图1是本公开各个实施例所涉及的一种实施环境的结构示意图;FIG. 1 is a schematic structural diagram of an implementation environment involved in various embodiments of the present disclosure;
图2是本公开实施例提供的一种动画文件的生成方法流程图;Fig. 2 is a flowchart of a method for generating an animation file provided by an embodiment of the present disclosure;
图3是本公开实施例提供的另一种动画文件的生成方法流程图;FIG. 3 is a flowchart of another method for generating an animation file provided by an embodiment of the present disclosure;
图4是本公开实施例提供的一种生成动画数据的生成方法流程图;4 is a flowchart of a method for generating animation data provided by an embodiment of the present disclosure;
图5是本公开实施例提供的另一种生成动画数据的生成方法流程图;FIG. 5 is a flowchart of another method for generating animation data provided by an embodiment of the present disclosure;
图6是本公开实施例提供的一种获取的播放轨迹的示意图;FIG. 6 is a schematic diagram of an acquired playback track provided by an embodiment of the present disclosure;
图7是本公开实施例提供的一种动画文件的应用场景示意图;FIG. 7 is a schematic diagram of an application scenario of an animation file provided by an embodiment of the present disclosure;
图8是本公开实施例提供的一种动画文件的生成装置的框图;FIG. 8 is a block diagram of a device for generating an animation file provided by an embodiment of the present disclosure;
图9是本公开实施例提供的一种动画数据生成模块的框图;FIG. 9 is a block diagram of an animation data generation module provided by an embodiment of the present disclosure;
图10是本公开实施例提供的另一种动画数据生成模块的框图;FIG. 10 is a block diagram of another animation data generation module provided by an embodiment of the present disclosure;
图11是本公开实施例提供的又一种动画数据生成模块的框图;FIG. 11 is a block diagram of another animation data generation module provided by an embodiment of the present disclosure;
图12是本公开实施例提供的一种终端的结构示意图。FIG. 12 is a schematic structural diagram of a terminal provided by an embodiment of the present disclosure.
具体实施方式Detailed ways
To make the objectives, technical solutions, and advantages of the present disclosure clearer, the following further describes the embodiments of the present disclosure in detail with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of an implementation environment involved in an animation file generating method provided by an embodiment of the present disclosure. As shown in FIG. 1, the implementation environment may include a terminal 110. The terminal 110 may be a computer, a notebook computer, a smart phone, or the like; FIG. 1 uses a computer as an example of the terminal 110 for illustration.
For example, an animation file generating device may be installed in the terminal 110, and the device may include an animation data processing module and an encoding and file export module. The animation data processing module may obtain the target parameters and target pictures (which may also be called animation materials) input by a developer for generating an animation file, process the target parameters in a manner corresponding to their type to obtain animation data in a target format, and send the generated animation data and the target pictures to the encoding and file export module. The encoding and file export module may convert the received animation data into a file in the target format, and compress and package that file together with the target pictures to generate an animation file in the target format, for playback by a player that supports animation files in the target format.
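For illustration only, the following is a minimal Python sketch of this two-module pipeline. It is not the disclosed implementation: the names (AnimationDataProcessor, FileExportModule, process, export) and the json encoding are assumptions made purely for readability.

    import json
    from typing import Dict

    class AnimationDataProcessor:
        # Hypothetical module 1: turns developer-supplied target parameters
        # into animation data tagged with the target format.
        def process(self, target_params: Dict) -> Dict:
            return {"format": "svga", "data": target_params}

    class FileExportModule:
        # Hypothetical module 2: converts the animation data into a file in
        # the target format (packaging with the pictures is sketched later,
        # under steps 303 and 304).
        def export(self, animation_data: Dict) -> bytes:
            return json.dumps(animation_data).encode("utf-8")

    processor, exporter = AnimationDataProcessor(), FileExportModule()
    payload = exporter.export(processor.process({"duration": 3, "frameRate": 15}))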
The target format refers to the format in which the animation file is played. Optionally, the embodiments of the present disclosure are all described by using an example in which the target format is the Scalable Vector Graphics Animation (SVGA) format, a format that is compatible with different system platforms at the same time.
FIG. 2 is a flowchart of an animation file generating method provided by an embodiment of the present disclosure. The method may be applied to the terminal 110 shown in FIG. 1. As shown in FIG. 2, the method may include:
Step 201: Obtain target parameters and a target picture for generating an animation file.
In the embodiment of the present disclosure, when an animation file needs to be developed, a developer may input, into the terminal, the target parameters and target picture for generating the animation file; correspondingly, the terminal obtains the target parameters and target picture. The target parameters are parameters related to the animation file to be generated (such as a playback duration and a playback effect), and there may be one or more target pictures.
Step 202: Generate animation data in a target format based on the target parameters.
An animation file generally includes multiple frames, and the animation data may be used to indicate the playback form of the target picture in those frames. In the embodiment of the present disclosure, the terminal may encapsulate the acquired target parameters according to the encapsulation standard of the target format to generate the animation data in the target format. Optionally, the target format may be the SVGA format.
Step 203: Generate an animation file in the target format according to the animation data and the target picture.
In the embodiment of the present disclosure, after generating the animation data in the target format, the terminal may write the animation data and the target picture into the same file according to the file standard of the target format, and compress and package the file, thereby generating the animation file in the target format.
In summary, the embodiments of the present disclosure provide an animation file generating method. A terminal can obtain target parameters and a target picture for generating an animation file, generate animation data in a target format based on the target parameters, and generate the animation file from the animation data and the target picture. Because the terminal can automatically generate an animation file based on the acquired target parameters and target picture, this generating method is more flexible than the related art, in which the terminal can only receive animation files pre-designed by a designer. Correspondingly, the content and style of the generated animation files are richer.
The following uses the SVGA format as an example of the target format to introduce the animation file generating method provided by the embodiments of the present disclosure. FIG. 3 is a flowchart of another animation file generating method provided by an embodiment of the present disclosure, which may be applied to the terminal 110 shown in FIG. 1. As shown in FIG. 3, the method may include:
Step 301: Obtain target parameters and a target picture for generating an animation file.
In the embodiment of the present disclosure, when an animation file needs to be developed, a developer may input, into the terminal, the target parameters and target picture for generating the animation file; correspondingly, the terminal obtains the target parameters and target picture. The target parameters are parameters related to the animation file to be generated, and there may be one or more target pictures.
Because an animation file is a file in which the target picture is played dynamically, the animation file generally includes multiple frames, and splicing together the playback states of the target picture in the individual frames yields the animation file. Correspondingly, the target parameters for generating the animation file may include multiple element state values, where each element state value is used to indicate the playback state of the target picture in one frame (that is, the target parameters may carry the complete animation data). For example, an element state value may include at least one of the position of the target picture within a frame, its transparency, its rotation angle, and its zoom level. Alternatively, the target parameters may not include the multiple element state values themselves, but only related parameters from which the multiple element state values can be derived. For example, the target parameters may include a playback duration, a playback effect, and a frame rate; or the target parameters may include a playback duration, a playback effect, a frame rate, and a playback track.
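As a hedged illustration, an element state value of the kind described above might be modeled as follows in Python; the field names and defaults are assumptions made for readability, not the disclosure's actual schema.

    from dataclasses import dataclass

    @dataclass
    class ElementStateValue:
        # Playback state of the target picture in one frame.
        x: float = 0.0         # position of the picture within the frame
        y: float = 0.0
        alpha: float = 1.0     # transparency, 1.0 = fully opaque
        rotation: float = 0.0  # rotation angle in degrees
        scale: float = 1.0     # zoom level

    # One state value per frame: 45 frames of a picture panning to the right.
    states = [ElementStateValue(x=10.0 * i) for i in range(45)]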
The playback duration refers to how long the animation file plays, and the playback effect refers to the playback form of the animation file. For example, the playback effect may be jitter (the target picture shakes continuously during the playback duration), pan (the target picture moves continuously during the playback duration), zoom (the target picture is continuously scaled during the playback duration), a spring effect (the target picture is continuously stretched and compressed during the playback duration), or a gradient (the transparency or color of the target picture changes during the playback duration). The frame rate refers to the frequency at which frames of the target picture successively appear during the playback duration. The playback track refers to the playback positions of the target picture within the playback duration. Optionally, the target parameters may also include the picture size of the target picture.
Optionally, for each kind of parameter, multiple candidate options may be pre-configured in the terminal; correspondingly, the target parameter obtained by the terminal may be a parameter input through the user's selection operation among those options. Alternatively, the target parameter obtained by the terminal may be a parameter input by the user in real time.
For example, multiple candidate playback effects may be pre-configured in the terminal, and the playback effect obtained by the terminal may be determined according to the developer's selection operation among those pre-configured candidate playback effects. Alternatively, the playback effect may be determined by the terminal according to a playback effect input by the developer.
For another example, the playback track may be determined according to a touch operation performed by the user on a drawing interface; or multiple candidate tracks may be pre-configured in the terminal, and the playback track may be determined according to the user's selection of at least one candidate playback track; or the playback track may be the contour of a target object obtained by recognizing the target object. Assuming the target object is a person, the contour may be a face contour obtained by face recognition. Of course, the embodiments of the present disclosure are not limited to the recognition of face contours.
Step 302: Generate animation data in the target format based on the target parameters.
In the embodiment of the present disclosure, the animation data may be used to indicate the playback form of the target picture in the multiple frames, that is, the playback effect of the target picture during the playback duration.
It should be noted that the operation of generating the animation data may differ depending on the target parameters. For example, when the target parameters include multiple element state values, the terminal may directly encapsulate the element state values and the other parameters based on the encapsulation standard of the target format to generate the animation data in the target format. When the target parameters do not include multiple element state values, the terminal first determines the multiple element state values based on the target parameters, and then encapsulates them together with the other parameters to generate the animation data in the target format. Therefore, after acquiring the target parameters, the terminal can select the processing method corresponding to those parameters to generate the animation data in the target format.
Optionally, the embodiments of the present disclosure introduce the generation of the animation data in the following ways:
An optional implementation: the target parameters include a playback duration and multiple element state values. That is, what the developer inputs into the terminal may be the animation data of the entire animation interval. Correspondingly, the terminal may directly encapsulate the playback duration and the multiple element state values according to the encapsulation standard of the target format, thereby generating the animation data in the target format. It should be noted that, before encapsulation, the terminal may first parse out the playback state of each frame from the multiple element state values, frame by frame.
By directly obtaining the multiple element state values, on the one hand, the terminal no longer needs to compute and determine them from other target parameters, which effectively reduces the terminal's processing load and saves power. On the other hand, because the multiple element state values can be specified by the developer, the generated animation file is not limited to existing playback effects; that is, the generating method is more flexible, and the content and style of the generated animation files are richer.
Another optional implementation: the target parameters may include a playback duration, a frame rate, and a playback effect. Correspondingly, FIG. 4 is a flowchart of a method for generating animation data provided by an embodiment of the present disclosure. As shown in FIG. 4, the method may include:
Step 3021A: Determine multiple element state values according to the playback duration, the frame rate, and the playback effect.
In the embodiment of the present disclosure, the terminal may first multiply the playback duration by the frame rate to obtain the total number of frames (frame count) of the pictures included in the animation file. That is, the total number of frames satisfies: frame count = duration * frame rate, formula (1).
For example, assuming the playback duration acquired by the terminal is 3 seconds (s) and the frame rate is 15 hertz (Hz), the total frame count calculated by the terminal according to formula (1) is: frame count = duration * frame rate = 3 * 15 = 45.
Then, the terminal may determine the multiple element state values based on the total number of frames and the playback effect. For example, for each target picture, the terminal may first fill in, according to the playback effect, the picture's specific data in all frames, where the specific data is the playback state of the target picture. Then, according to the playback effect, the terminal may use a pre-configured animation interface to create a corresponding animation object, and loop-play the created animation object within the playback duration according to the total number of frames and the frame rate. For example, if the total number of frames is n and the frame rate is f, the terminal can execute n loop iterations, each advancing by 1/f second (s). Moreover, in each iteration, the terminal can use a pre-configured interpolation algorithm to determine the element state values.
Optionally, the animation interface may be one built into the terminal's system, or an interface written by the developer when developing the animation file and received by the terminal. Likewise, the interpolation algorithm may be an algorithm built into the terminal's system, or an algorithm written by the developer when developing the animation file.
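A minimal sketch of step 3021A (and the encapsulation of step 3022A below) follows, assuming linear interpolation and a simple pan effect; the helper name and the final dictionary layout are illustrative assumptions, not the SVGA encapsulation standard itself.

    def build_animation_data(duration_s: float, frame_rate_hz: float) -> dict:
        # Formula (1): frame count = duration * frame rate.
        frame_count = int(duration_s * frame_rate_hz)
        # n loop iterations, each advancing by 1/f seconds; plain linear
        # interpolation stands in for the pre-configured interpolation algorithm.
        frames = []
        for i in range(frame_count):
            t = i / frame_rate_hz       # elapsed time at this frame
            progress = t / duration_s   # 0.0 .. <1.0 across the playback duration
            # Hypothetical pan effect: the picture moves 100 units in total.
            frames.append({"x": 100.0 * progress, "y": 0.0,
                           "alpha": 1.0, "rotation": 0.0, "scale": 1.0})
        # Encapsulate the playback duration together with the state values.
        return {"duration": duration_s, "frames": frames}

    animation_data = build_animation_data(3.0, 15.0)  # 45 frames, as in the example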
Step 3022A: Encapsulate the playback duration and the multiple element state values according to the encapsulation standard of the target format to generate the animation data in the target format.
For example, assuming the target format is the SVGA format, after determining the multiple element state values based on the target parameters, the terminal can encapsulate (which may also be called combine) the playback duration and the element state values according to the encapsulation standard of the SVGA format, thereby obtaining animation data in the SVGA format.
Because the terminal can automatically determine the multiple element state values based on target parameters such as the playback effect, duration, and frame rate, inputting only such parameters improves the efficiency of generating animation files, reduces the developer's workload, and ensures the reliability of the generated animation files.
Yet another optional implementation: the target parameters may include a playback duration, a frame rate, a playback effect, and a playback track. Correspondingly, FIG. 5 is a flowchart of another method for generating animation data provided by an embodiment of the present disclosure. As shown in FIG. 5, the method may include:
Step 3021B: Determine at least one playback position of the target picture according to the playback track.
In the embodiment of the present disclosure, the terminal may first determine multiple sampling points according to the acquired playback track, and then determine the position of at least one target sampling point selected from those sampling points as the at least one playback position of the target picture, where the spacing between every two adjacent target sampling points is greater than a spacing threshold. That is, the terminal may first tile the target picture along the determined playback track such that no two adjacent tiled copies of the target picture overlap.
Optionally, the spacing threshold may be a fixed value pre-configured in the terminal, or a parameter input by the developer and received by the terminal, or the size of the target picture.
For example, referring to FIG. 6, assume the target picture is a rectangle Q of size m*n, the playback track acquired by the terminal is the star shape shown in FIG. 6, the star track occupies a rectangle P of size M*N, and the spacing threshold is m/2-1. The terminal may first determine multiple sampling points a0 to an along the star track, and determine the positions of multiple adjacent target sampling points whose spacing is greater than m/2-1 as the multiple playback positions of the target picture. For instance, for each sampling point, the terminal may take that sampling point as the center to determine a rectangle Qi of size m*n, and take as a target sampling point the center of a rectangle that is adjacent to, but entirely disjoint from, the rectangle corresponding to that sampling point. Correspondingly, the spacing between every two adjacent target sampling points is m/2, which is greater than the spacing threshold. The terminal may then determine the positions of the selected target sampling points as the playback positions of the target picture. Furthermore, after determining the rectangles Qi from the target sampling points, the terminal may delete the parts where a rectangle Qi intersects the rectangle P, thereby obtaining the multiple playback positions of the target picture that cover the entire star track.
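A hedged sketch of this selection step: the track is reduced to sampling points, and target sampling points are kept greedily whenever they lie farther than the spacing threshold from the last kept point. The greedy rule and the Euclidean distance are assumptions; the disclosure only requires adjacent target sampling points to exceed the threshold.

    import math
    from typing import List, Tuple

    Point = Tuple[float, float]

    def select_playback_positions(samples: List[Point], spacing: float) -> List[Point]:
        # Keep a sampling point only when it lies farther than `spacing`
        # from the previously kept target sampling point.
        positions: List[Point] = []
        for p in samples:
            if not positions or math.dist(positions[-1], p) > spacing:
                positions.append(p)
        return positions

    # Usage: sampling points along a circular track, picture width m = 40.
    m = 40.0
    track = [(100 + 80 * math.cos(i / 20), 100 + 80 * math.sin(i / 20))
             for i in range(126)]
    print(select_playback_positions(track, m / 2 - 1))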
Step 3022B: Multiply the playback duration by the frame rate to obtain the total number of frames of the pictures included in the animation file.
For the implementation of this step, refer to the method of calculating the total number of frames in step 3021A above.
Step 3023B: Determine multiple element state values based on the playback effect, the at least one playback position of the target picture, and the total number of frames.
For the implementation of this step, refer to the method of determining the multiple element state values in step 3021A. The difference is that this step additionally determines the multiple element state values according to the at least one playback position derived from the playback track; that is, the multiple element state values obtained from this step are related to the playback track.
Step 3024B: Encapsulate the playback duration and the multiple element state values according to the encapsulation standard of the target format to generate the animation data in the target format.
For the implementation of this step, refer to the method of step 3022A above; details are not repeated here.
By further determining the multiple element state values based on a playback track, developers can design the playback style of the animation file by themselves; that is, on the premise of improving the efficiency of generating animation files, the flexibility of generation is further improved, enhancing the product's appeal and the user experience.
Step 303: Encode the animation data according to the file standard of the SVGA format.
In the embodiment of the present disclosure, the target format may be the SVGA format. Correspondingly, after obtaining the animation data, the terminal may continue to encode the animation data according to the file standard of the SVGA format, that is, encode the animation data into the SVGA file form.
Optionally, the SVGA file standard may include version 1.0 and version 2.0: the version 1.0 SVGA file standard is a json format file, and the version 2.0 SVGA file standard is a google protobuf format file.
Step 304: Compress the encoded animation data and the target picture to obtain an animation file in the SVGA format.
In the embodiment of the present disclosure, the terminal may continue to write the encoded animation data and the target picture into the same file, and compress and package that file, thereby generating the animation file in the SVGA format.
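A hedged sketch of steps 303 and 304, taking the version 1.0 (json) path; the archive entry names and overall layout are assumptions for illustration, not the normative SVGA file standard, and the version 2.0 path would serialize the same data with google protobuf instead.

    import json
    import zipfile

    def write_svga_like_file(animation_data: dict, images: dict, path: str) -> None:
        # Step 303: encode the animation data (a 1.0-style json encoding
        # is assumed here).
        encoded = json.dumps(animation_data).encode("utf-8")
        # Step 304: write the encoded animation data and the target pictures
        # into the same file, then compress and package it.
        with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as zf:
            zf.writestr("movie.spec", encoded)
            for name, raw in images.items():
                zf.writestr(name, raw)

    write_svga_like_file({"duration": 3, "frames": []},
                         {"lollipop.png": b"..."}, "gift.svga")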
By way of example, the following uses the scenario of giving an animated gift in a live-streaming client to introduce an application of animation files generated by the animation file generating method provided in the embodiments of the present disclosure.
For example, referring to FIG. 7, assume that a first client is playing a live video recorded by user XX of a second client, and that a "gift" option, a "personalize" option, and a "package" option are displayed at the bottom of the first client's display interface, where the "personalize" option is the animated gift generation option. When the first client detects that the user clicks the "personalize" option, the first client detects an animated gift generation instruction. At this point, as shown in FIG. 7, the first client's display interface may display a drawing mask H1 and a gift bar G1 containing multiple candidate gift pictures; the gift bar G1 includes a lollipop picture, a rose picture, a heart picture, and a love letter picture, and the identifier and unit price of each candidate gift picture may be displayed below it, such as "lollipop 0.1Y", where Y is the unit the first client sets for the price of the gift picture, and the identifier may be text describing the gift picture. Referring to FIG. 7, the user selects the lollipop icon and draws a heart-shaped pattern on the drawing mask H1; after the user clicks the send button, the drawn animated gift is successfully sent to the second client, and the animated gift can be played on the client interface of every user watching the live broadcast. Moreover, referring to FIG. 7, after the user has drawn the animation file, the display interface may also show the text prompt "Drew 10 lollipops, total price 1Y, balance 0.7Y". On the terminal side, when the first client receives an instruction to send the animation file, it may send the animation file to a server, which then delivers it to the other clients watching the live broadcast, so that the animation file is played on those clients' display interfaces.
It should be noted that the order of the steps of the animation file generating method provided in the embodiments of the present disclosure may be appropriately adjusted, and steps may be added or removed according to the situation. For example, the above steps 303 and 304 may be executed simultaneously. Any variation readily conceivable by a person skilled in the art within the technical scope disclosed in the present disclosure shall fall within the protection scope of the invention, and is therefore not described again.
In summary, the embodiments of the present disclosure provide an animation file generating method. A terminal can obtain target parameters and a target picture for generating an animation file, generate animation data in a target format based on the target parameters, and generate the animation file from the animation data and the target picture. Because the terminal can automatically generate an animation file based on the acquired target parameters and target picture, this generating method is more flexible than the related art, in which the terminal can only receive animation files pre-designed by a designer. Correspondingly, the content and style of the generated animation files are richer.
FIG. 8 is a block diagram of an animation file generating device provided by an embodiment of the present disclosure, which may be applied to the terminal 110 shown in FIG. 1. As shown in FIG. 8, the device 80 may include:
an obtaining module 801, configured to obtain target parameters and a target picture for generating an animation file;
an animation data generation module 802, configured to generate animation data in a target format based on the target parameters, where the animation data is used to indicate the playback form of the target picture in multiple frames; and
an animation file generation module 803, configured to generate an animation file in the target format according to the animation data and the target picture, where the animation file includes the multiple frames.
An optional implementation: the target parameters may include a playback duration and multiple element state values, where each element state value is used to indicate the playback state of the target picture in one frame. Correspondingly, FIG. 9 is a block diagram of an animation data generation module 802 provided by an embodiment of the present disclosure. As shown in FIG. 9, the animation data generation module 802 may include:
an encapsulation submodule 8021A, configured to encapsulate the playback duration and the multiple element state values according to the encapsulation standard of the target format to generate the animation data in the target format.
Another optional implementation: the target parameters may include a playback duration, a frame rate, and a playback effect. Correspondingly, FIG. 10 is a block diagram of another animation data generation module 802 provided by an embodiment of the present disclosure. As shown in FIG. 10, the animation data generation module 802 may include:
a determining submodule 8021B, configured to determine multiple element state values according to the playback duration, the frame rate, and the playback effect, where each element state value may be used to indicate the playback state of the target picture in one frame; and
an encapsulation submodule 8022B, configured to encapsulate the playback duration and the multiple element state values according to the encapsulation standard of the target format to generate the animation data in the target format.
Optionally, the determining submodule 8021B may be configured to multiply the playback duration by the frame rate to obtain the total number of frames of the pictures included in the animation file, and to determine the multiple element state values based on the playback effect and the total number of frames.
Yet another optional implementation: the target parameters may include a playback duration, a frame rate, a playback effect, and a playback track. Correspondingly, FIG. 11 is a block diagram of yet another animation data generation module 802 provided by an embodiment of the present disclosure. As shown in FIG. 11, the animation data generation module 802 may include:
a first determining submodule 8021C, configured to determine at least one playback position of the target picture according to the playback track;
a multiplication submodule 8022C, configured to multiply the playback duration by the frame rate to obtain the total number of frames of the pictures included in the animation file;
a second determining submodule 8023C, configured to determine multiple element state values based on the playback effect, the at least one playback position of the target picture, and the total number of frames; and
an encapsulation submodule 8024C, configured to encapsulate the playback duration and the multiple element state values according to the encapsulation standard of the target format to generate the animation data in the target format.
Optionally, the first determining submodule 8021C may be configured to determine multiple sampling points included in the playback track, and to determine the position of at least one target sampling point selected from the multiple sampling points as the at least one playback position of the target picture, where the spacing between every two adjacent target sampling points is greater than a spacing threshold.
Optionally, the playback track may be determined according to a touch operation performed by the user on a drawing interface; or the playback track may be determined according to the user's selection of at least one candidate playback track; or the playback track may be the contour of a target object obtained by recognizing the target object.
Optionally, the target format may be the Scalable Vector Graphics Animation (SVGA) format. Correspondingly, the animation file generation module 803 may be configured to encode the animation data according to the file standard of the SVGA format, and to compress the encoded animation data and the target picture to obtain an animation file in the SVGA format.
In summary, the embodiments of the present disclosure provide an animation file generating device. The device can obtain target parameters and a target picture for generating an animation file, generate animation data in a target format based on the target parameters, and generate an animation file from the animation data and the target picture. Because the device can automatically generate animation files based on the acquired target parameters and target picture, it generates animation files more flexibly than the related art, which can only receive animation files pre-designed by a designer. Correspondingly, the content and style of the generated animation files are richer.
With respect to the device in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and is not elaborated here.
FIG. 12 is a structural block diagram of a mobile terminal 1200 provided by an exemplary embodiment of the present disclosure. The terminal 1200 may be a portable mobile terminal, such as a smart phone, a tablet computer, a notebook computer, or a desktop computer. The terminal 1200 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names. Generally, the terminal 1200 includes a processor 1201 and a memory 1202.
The processor 1201 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1201 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 1201 may also include a main processor and a coprocessor. The main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1201 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1201 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1202 may include one or more computer-readable storage media, which may be non-transitory. The memory 1202 may also include a high-speed random access memory and a non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1202 stores at least one instruction, and the at least one instruction is executed by the processor 1201 to implement the animation file generating method provided by the method embodiments of the present application.
In some embodiments, the terminal 1200 may optionally further include a peripheral device interface 1203 and at least one peripheral device. The processor 1201, the memory 1202, and the peripheral device interface 1203 may be connected by a bus or signal lines. Each peripheral device may be connected to the peripheral device interface 1203 through a bus, a signal line, or a circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 1204, a display screen 1205, a camera 1206, an audio circuit 1207, a positioning component 1208, and a power supply 1209.
The peripheral device interface 1203 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, the memory 1202, and the peripheral device interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202, and the peripheral device interface 1203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1204 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with communication networks and other communication devices through electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1204 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1204 can communicate with other terminals through at least one wireless communication protocol, including but not limited to the World Wide Web, metropolitan area networks, intranets, the various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1204 may also include NFC (Near Field Communication)-related circuits, which is not limited in this application.
The display screen 1205 is used to display a UI (User Interface). The UI may include graphics, text, icons, videos, and any combination thereof. When the display screen 1205 is a touch display screen, it also has the ability to collect touch signals on or above its surface. A touch signal may be input to the processor 1201 as a control signal for processing. In this case, the display screen 1205 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1205, arranged on the front panel of the terminal 1200; in other embodiments, there may be at least two display screens 1205, respectively arranged on different surfaces of the terminal 1200 or in a folded design; in still other embodiments, the display screen 1205 may be a flexible display screen arranged on a curved or folding surface of the terminal 1200. The display screen 1205 may even be set in a non-rectangular irregular shape, that is, a special-shaped screen. The display screen 1205 may be an LCD (Liquid Crystal Display) display screen or an OLED (Organic Light-Emitting Diode) display screen.
The camera assembly 1206 is used to capture images or videos. Optionally, the camera assembly 1206 includes a front camera and a rear camera. Generally, the front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize background blurring through fusion of the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting through fusion of the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 1206 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and may be used for light compensation under different color temperatures.
The audio circuit 1207 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment and convert them into electrical signals, which are input to the processor 1201 for processing or to the radio frequency circuit 1204 for voice communication. For stereo collection or noise reduction, there may be multiple microphones, respectively arranged at different parts of the terminal 1200. The microphone may also be an array microphone or an omnidirectional microphone. The speaker is used to convert electrical signals from the processor 1201 or the radio frequency circuit 1204 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert electrical signals not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1207 may also include a headphone jack.
The positioning component 1208 is used to locate the current geographic position of the terminal 1200 to implement navigation or LBS (Location Based Service). The positioning component 1208 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 1209 is used to supply power to the components in the terminal 1200. The power supply 1209 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1209 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also be used to support fast charging technology.
In some embodiments, the terminal 1200 further includes one or more sensors 1210, including but not limited to an acceleration sensor 1211, a gyroscope sensor 1212, a pressure sensor 1213, a fingerprint sensor 1214, an optical sensor 1215, and a proximity sensor 1216.
The acceleration sensor 1211 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 1200. For example, the acceleration sensor 1211 can be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1201 may control the touch display screen 1205 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1211. The acceleration sensor 1211 may also be used for collecting game or user motion data.
The gyroscope sensor 1212 can detect the body orientation and rotation angle of the terminal 1200, and can cooperate with the acceleration sensor 1211 to collect the user's 3D actions on the terminal 1200. Based on the data collected by the gyroscope sensor 1212, the processor 1201 can implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1213 may be arranged on the side frame of the terminal 1200 and/or the lower layer of the touch display screen 1205. When the pressure sensor 1213 is arranged on the side frame of the terminal 1200, it can detect the user's grip signal on the terminal 1200, and the processor 1201 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1213. When the pressure sensor 1213 is arranged at the lower layer of the touch display screen 1205, the processor 1201 controls the operable controls on the UI according to the user's pressure operation on the touch display screen 1205. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1214 is used to collect the user's fingerprint. The processor 1201 identifies the user according to the fingerprint collected by the fingerprint sensor 1214, or the fingerprint sensor 1214 identifies the user according to the collected fingerprint. When the user's identity is recognized as trusted, the processor 1201 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings. The fingerprint sensor 1214 may be arranged on the front, back, or side of the terminal 1200. When a physical button or a manufacturer's logo is provided on the terminal 1200, the fingerprint sensor 1214 may be integrated with the physical button or the manufacturer's logo.
The optical sensor 1215 is used to collect the ambient light intensity. In an embodiment, the processor 1201 may control the display brightness of the touch display screen 1205 according to the ambient light intensity collected by the optical sensor 1215. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1205 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1205 is decreased. In another embodiment, the processor 1201 may also dynamically adjust the shooting parameters of the camera assembly 1206 according to the ambient light intensity collected by the optical sensor 1215.
The proximity sensor 1216, also called a distance sensor, is usually arranged on the front panel of the terminal 1200 and is used to collect the distance between the user and the front of the terminal 1200. In an embodiment, when the proximity sensor 1216 detects that the distance between the user and the front of the terminal 1200 gradually decreases, the processor 1201 controls the touch display screen 1205 to switch from the screen-on state to the screen-off state; when the proximity sensor 1216 detects that the distance between the user and the front of the terminal 1200 gradually increases, the processor 1201 controls the touch display screen 1205 to switch from the screen-off state to the screen-on state.
A person skilled in the art can understand that the structure shown in FIG. 12 does not constitute a limitation on the terminal 1200, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
An embodiment of the present disclosure further provides a computer-readable storage medium storing instructions. When the instructions are run on a computer, the computer is caused to execute the animation file generating method shown in FIG. 2 or FIG. 3.
A person of ordinary skill in the art can understand that all or some of the steps of the above embodiments may be implemented by hardware, or by a program instructing related hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. The above are merely optional embodiments of the present disclosure and are not intended to limit the present disclosure. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present disclosure shall fall within the protection scope of the present disclosure.

Claims (11)

  1. An animation file generating method, characterized in that the method comprises:
    obtaining target parameters and a target picture for generating an animation file;
    generating animation data in a target format based on the target parameters, wherein the animation data is used to indicate a playback form of the target picture in multiple frames; and
    generating an animation file in the target format according to the animation data and the target picture, wherein the animation file comprises the multiple frames.
  2. The method according to claim 1, characterized in that the target parameters comprise a playback duration and a plurality of element state values, each element state value being used to indicate a playback state of the target picture in one frame; and the generating animation data in a target format based on the target parameters comprises:
    encapsulating the playback duration and the plurality of element state values according to an encapsulation standard of the target format to generate the animation data in the target format.
  3. The method according to claim 1, characterized in that the target parameters comprise a playback duration, a frame rate, and a playback effect, and the generating animation data in a target format based on the target parameters comprises:
    determining a plurality of element state values according to the playback duration, the frame rate, and the playback effect, each element state value being used to indicate a playback state of the target picture in one frame;
    encapsulating the playback duration and the plurality of element state values according to an encapsulation standard of the target format, to generate the animation data in the target format.
  4. The method according to claim 3, characterized in that the determining a plurality of element state values according to the playback duration, the frame rate, and the playback effect comprises:
    multiplying the playback duration by the frame rate to obtain a total number of frames included in the animation file;
    determining the plurality of element state values based on the playback effect and the total number of frames.
  5. The method according to claim 1, characterized in that the target parameters comprise a playback duration, a frame rate, a playback effect, and a playback trajectory, and the generating animation data in a target format based on the target parameters comprises:
    determining at least one playback position of the target picture according to the playback trajectory;
    multiplying the playback duration by the frame rate to obtain a total number of frames included in the animation file;
    determining a plurality of element state values based on the playback effect, the at least one playback position of the target picture, and the total number of frames, each element state value being used to indicate a playback state of the target picture in one frame;
    encapsulating the playback duration and the plurality of element state values according to an encapsulation standard of the target format, to generate the animation data in the target format.
  6. The method according to claim 5, characterized in that the determining at least one playback position of the target picture according to the playback trajectory comprises:
    determining a plurality of sampling points included in the playback trajectory;
    determining the position of at least one target sampling point selected from the plurality of sampling points as the at least one playback position of the target picture, wherein the spacing between every two adjacent target sampling points is greater than a spacing threshold.
  7. The method according to claim 5 or 6, characterized in that the playback trajectory is determined according to a touch operation performed by a user on a drawing interface; or the playback trajectory is determined according to a user's selection operation on at least one candidate playback trajectory; or the playback trajectory is a contour of a target object obtained by recognizing the target object.
  8. The method according to any one of claims 1 to 6, characterized in that the target format is the scalable vector graphics SVGA format, and the generating an animation file in the target format according to the animation data and the target picture comprises:
    encoding the animation data according to a file standard of the SVGA format;
    compressing the encoded animation data and the target picture to obtain the animation file in the SVGA format.
  9. An apparatus for generating an animation file, characterized in that the apparatus comprises:
    an acquisition module, configured to acquire target parameters and a target picture for generating an animation file;
    a module configured to generate animation data in a target format based on the target parameters, the animation data being used to indicate a playback form of the target picture in multiple frames; and
    a module configured to generate an animation file in the target format according to the animation data and the target picture, the animation file comprising the multiple frames.
  10. An apparatus for generating an animation file, characterized in that the apparatus comprises:
    a processor; and
    a memory for storing executable instructions of the processor,
    wherein the processor is configured to:
    perform the method for generating an animation file according to any one of claims 1 to 8.
  11. A computer-readable storage medium, characterized in that the computer-readable storage medium stores instructions that, when run on a computer, cause the computer to perform the method for generating an animation file according to any one of claims 1 to 8.
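The sketches below are illustrative readings of the claimed steps and are not part of the claims. First, the three steps of claim 1 as a pipeline; the function names and the dictionary form of the target parameters are assumptions:

    import json
    from dataclasses import dataclass

    @dataclass
    class AnimationFile:
        animation_data: bytes  # indicates how the target picture plays across the frames
        target_picture: bytes  # the image element being animated

    def encode_animation_data(target_params: dict) -> bytes:
        # Placeholder encoding; a real target format defines its own encapsulation standard.
        return json.dumps(target_params).encode("utf-8")

    def generate_animation_file(target_params: dict, target_picture: bytes) -> AnimationFile:
        animation_data = encode_animation_data(target_params)  # claim 1, step 2
        return AnimationFile(animation_data, target_picture)   # claim 1, step 3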
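A numeric sketch of claim 4: the total frame count is the playback duration multiplied by the frame rate, and one state value is derived per frame from the playback effect. The linear fade-in is an assumed example effect:

    def element_state_values(duration_s: float, frame_rate: int, effect: str) -> list:
        total_frames = int(duration_s * frame_rate)  # claim 4: duration x frame rate
        if effect == "fade_in":  # assumed effect: opacity ramps linearly from 0 to 1
            return [frame / (total_frames - 1) for frame in range(total_frames)]
        raise ValueError("unsupported effect: " + effect)

    # For example, 2 s at 30 fps yields 60 per-frame state values.
    assert len(element_state_values(2.0, 30, "fade_in")) == 60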
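The selection of target sampling points in claim 6 can be sketched as a greedy filter that keeps a point only when it lies farther than the spacing threshold from the last kept point; two-dimensional points and Euclidean spacing are assumptions:

    import math

    def select_playback_positions(samples, spacing_threshold: float):
        # samples: list of (x, y) sampling points along the playback trajectory
        positions = []
        for point in samples:
            if not positions or math.dist(positions[-1], point) > spacing_threshold:
                positions.append(point)  # spacing to the previous kept point exceeds the threshold
        return positions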
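For claim 8, one plausible packaging, loosely modeled on an SVGA-style zip container with a JSON spec, first encodes the animation data and then compresses it together with the target picture; the entry names and the JSON encoding are assumptions, since the actual layout is fixed by the SVGA file standard:

    import io
    import json
    import zipfile

    def build_svga_like_file(animation_data: dict, images: dict) -> bytes:
        spec = json.dumps(animation_data).encode("utf-8")  # encoding step of claim 8
        buffer = io.BytesIO()
        # Compression step: the encoded data and the picture(s) go into one archive.
        with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
            archive.writestr("movie.spec", spec)
            for name, data in images.items():
                archive.writestr(name, data)
        return buffer.getvalue()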
PCT/CN2020/112752 2019-10-16 2020-09-01 Animation file generating method and device, and storage medium WO2021073293A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910983828.6 2019-10-16
CN201910983828.6A CN110662105A (en) 2019-10-16 2019-10-16 Animation file generation method and device and storage medium

Publications (1)

Publication Number Publication Date
WO2021073293A1 (en) 2021-04-22

Family

ID=69041240

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/112752 WO2021073293A1 (en) 2019-10-16 2020-09-01 Animation file generating method and device, and storage medium

Country Status (2)

Country Link
CN (1) CN110662105A (en)
WO (1) WO2021073293A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110662105A (en) * 2019-10-16 2020-01-07 广州华多网络科技有限公司 Animation file generation method and device and storage medium
CN111596983A (en) * 2020-04-23 2020-08-28 西安震有信通科技有限公司 Animation display method, device and medium based on animation component
CN111627090B (en) * 2020-06-04 2023-10-03 珠海西山居数字科技有限公司 Animation resource manufacturing method and device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100474342C (en) * 2006-12-21 2009-04-01 珠海金山软件股份有限公司 Apparatus and method for transferring mutually cartoon track and optional pattern
CN104463932B (en) * 2013-09-22 2018-06-08 北大方正集团有限公司 The method and apparatus for realizing animation effect
CN103559353B (en) * 2013-11-07 2016-11-09 南京国电南自轨道交通工程有限公司 A kind of method for designing based on dynamic behaviour form in the monitoring system picture configuration of SVG
CN105657574B (en) * 2014-11-12 2019-01-22 阿里巴巴集团控股有限公司 A kind of video file production method and device
US9786032B2 (en) * 2015-07-28 2017-10-10 Google Inc. System for parametric generation of custom scalable animated characters on the web
CN105427353B (en) * 2015-11-12 2019-05-21 小米科技有限责任公司 Compression, method for drafting and the device of scalable vector graphics
CN107765976B (en) * 2016-08-16 2021-12-14 腾讯科技(深圳)有限公司 Message pushing method, terminal and system
CN106651995B (en) * 2016-10-10 2019-11-19 腾讯科技(深圳)有限公司 A kind of configuration method of animation resource, playing method and device
CN106649541A (en) * 2016-10-26 2017-05-10 广东小天才科技有限公司 Cartoon playing and generating method and device
CN107592565A (en) * 2017-09-29 2018-01-16 深圳市前海手绘科技文化有限公司 A kind of method that Freehandhand-drawing video elementary is quickly combined in video
CN109885301B (en) * 2019-01-21 2022-04-08 新奥特(北京)视频技术有限公司 Method, device, storage medium and equipment for generating scalable vector graphics

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120223952A1 (en) * 2011-03-01 2012-09-06 Sony Computer Entertainment Inc. Information Processing Device Capable of Displaying A Character Representing A User, and Information Processing Method Thereof.
US20190268249A1 (en) * 2016-11-04 2019-08-29 Google Llc Systems and methods for measuring media performance on end-user devices
CN106611435A (en) * 2016-12-22 2017-05-03 广州华多网络科技有限公司 Animation processing method and device
CN108093307A (en) * 2017-12-29 2018-05-29 广州酷狗计算机科技有限公司 Obtain the method and system of played file
CN110111401A (en) * 2018-01-31 2019-08-09 北京新唐思创教育科技有限公司 Animation playing method and device for online class
CN108769562A (en) * 2018-06-29 2018-11-06 广州酷狗计算机科技有限公司 The method and apparatus for generating special efficacy video
CN109658485A (en) * 2018-11-21 2019-04-19 平安科技(深圳)有限公司 Web animation method for drafting, device, computer equipment and storage medium
CN109885795A (en) * 2019-01-25 2019-06-14 平安科技(深圳)有限公司 A kind of end Web animation configuration method and device
CN110662105A (en) * 2019-10-16 2020-01-07 广州华多网络科技有限公司 Animation file generation method and device and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113742000A (en) * 2021-08-25 2021-12-03 深圳Tcl新技术有限公司 Data processing method, data processing device, computer readable storage medium and computer equipment

Also Published As

Publication number Publication date
CN110662105A (en) 2020-01-07

Similar Documents

Publication Publication Date Title
US11640235B2 (en) Additional object display method and apparatus, computer device, and storage medium
US11517099B2 (en) Method for processing images, electronic device, and storage medium
WO2020253096A1 (en) Method and apparatus for video synthesis, terminal and storage medium
CN108401124B (en) Video recording method and device
WO2020186988A1 (en) Information display method and device, terminal, and storage medium
WO2021073293A1 (en) Animation file generating method and device, and storage medium
CN110213638B (en) Animation display method, device, terminal and storage medium
CN111031393A (en) Video playing method, device, terminal and storage medium
CN112533017B (en) Live broadcast method, device, terminal and storage medium
CN110163066B (en) Multimedia data recommendation method, device and storage medium
CN110324689B (en) Audio and video synchronous playing method, device, terminal and storage medium
CN109275013B (en) Method, device and equipment for displaying virtual article and storage medium
WO2021043121A1 (en) Image face changing method, apparatus, system, and device, and storage medium
CN114116053B (en) Resource display method, device, computer equipment and medium
WO2023000677A1 (en) Content item display method and apparatus
CN110139143B (en) Virtual article display method, device, computer equipment and storage medium
WO2022095465A1 (en) Information display method and apparatus
CN108289237B (en) Method, device and terminal for playing dynamic picture and computer readable storage medium
CN111083526B (en) Video transition method and device, computer equipment and storage medium
WO2022252563A1 (en) Information display method and electronic device
CN111437600A (en) Plot showing method, plot showing device, plot showing equipment and storage medium
CN112822544B (en) Video material file generation method, video synthesis method, device and medium
CN107888975B (en) Video playing method, device and storage medium
CN111368114A (en) Information display method, device, equipment and storage medium
CN110868642B (en) Video playing method, device and storage medium

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application
     Ref document number: 20877326
     Country of ref document: EP
     Kind code of ref document: A1
NENP Non-entry into the national phase
     Ref country code: DE
122  Ep: pct application non-entry in european phase
     Ref document number: 20877326
     Country of ref document: EP
     Kind code of ref document: A1