WO2022001508A1 - Video special effect processing method and apparatus, and electronic device

Video special effect processing method and apparatus, and electronic device

Info

Publication number
WO2022001508A1
Authority
WO
WIPO (PCT)
Prior art keywords
special effect
duration
target
timestamp
original
Application number
PCT/CN2021/095994
Other languages
English (en)
French (fr)
Inventor
齐国鹏
陈仁健
Original Assignee
腾讯科技(深圳)有限公司
Application filed by 腾讯科技(深圳)有限公司
Priority to EP21834318.4A priority Critical patent/EP4044604A4/en
Priority to JP2022555878A priority patent/JP7446468B2/ja
Publication of WO2022001508A1 publication Critical patent/WO2022001508A1/zh
Priority to US17/730,050 priority patent/US12041372B2/en
Priority to US18/735,059 priority patent/US20240323308A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/854 Content authoring
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/239 Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393 Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4781 Games

Definitions

  • the present application relates to digital multimedia technology, and in particular, to a video special effect processing method, apparatus, electronic device, and computer-readable storage medium.
  • Special effect animations can be designed in professional video editing and design software such as Adobe After Effects (AE); animation library solutions for AE include Airbnb's popular open-source Lottie solution and the Portable Animated Graphics (PAG) solution.
  • AE: Adobe After Effects
  • PAG: Portable Animated Graphics
  • However, the duration of a video special effect is fixed once its design is completed, which makes it difficult to apply the same video special effect to playback scenarios with diverse duration requirements; generating video special effects of different playback durations in advance for all possible scenarios would not only waste computing resources but also harm the real-time performance of video presentation.
  • An embodiment of the present application provides a method for processing video special effects, the method comprising:
  • the length of the target time axis is consistent with the target playback duration
  • Rendering is performed according to the special effect frame corresponding to the target time axis, to obtain a target video special effect conforming to the target playback duration.
  • Embodiments of the present application provide an apparatus, an electronic device, and a computer-readable storage medium related to the processing method of the video special effect.
  • FIG. 1 is a schematic structural diagram of a processing system for video special effects provided by an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIGS. 3A-3E are schematic flowcharts of a method for processing video special effects provided by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of the application effect of the processing method of the video special effect provided by the embodiment of the present application in the scene of the game weekly battle report video;
  • FIG. 5 is a flowchart of a system for processing video special effects provided by an embodiment of the present application.
  • FIGS. 6A-6C are schematic diagrams of annotation of a method for processing video special effects provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of a time axis of a method for processing video special effects provided by an embodiment of the present application.
  • AE is the abbreviation of Adobe After Effects, a graphics and video processing software product launched by Adobe. It is suitable for organizations engaged in design and video special effects, including TV stations, animation production companies, personal post-production studios, and multimedia studios, and is layer-based post-production software.
  • Video special effect file: a binary file carrying special effect content, such as a PAG file, i.e., a sticker animation stored in binary file format.
  • Original time axis: the time axis corresponding to playback of the entire video special effect file, or the time axis corresponding to playback of the special effect part corresponding to a video special effect sub-file.
  • Target time axis: the time axis corresponding to playback of the complete special effect object in the video special effect file after stretching, or the time axis corresponding to playback of a partial special effect object corresponding to a video special effect sub-file after stretching.
  • Mobile Internet client animation solutions based on AE (Adobe After Effects) include Airbnb's open-source Lottie solution and the PAG solution, both of which open up the workflow from AE animation design to mobile presentation.
  • The animation designed by the designer in AE is exported as an animation file through an export plug-in, and then loaded and rendered on the mobile terminal through an SDK, which greatly reduces development cost.
  • The animation files of both schemes are designed in AE.
  • However, the duration of such an animation is fixed. In the process of implementing the embodiments of the present application, the applicant found that in some user interface animation and video editing scenarios, the duration of the animation file needs to be controllable from outside, for example keeping some intervals of the animation file fixed while linearly stretching or cyclically processing other intervals.
  • For example, the sticker animation length is 2 seconds but the actually required animation length is 4 seconds, so the sticker animation needs to be externally stretched to 4 seconds, or drawn repeatedly, that is, the sticker animation is played twice.
  • In view of this, the embodiments of the present application provide a video special effect processing method that supports time scaling of a fixed animation file: the external application platform only needs to set the target playback duration of the animation file, and the animation file is time-scaled according to the scaling policy configured by the user.
  • The playback duration scaling of the video special effect file is controlled by the duration scaling policy carried in the video special effect file itself: after decoding the video special effect file, it can be processed and rendered according to the duration scaling policy to achieve a target video special effect of the target playback duration. This can be applied directly to each video special effect file, supports various applications and platforms, is not limited by the platform operating system, and is extremely simple to implement.
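  • By way of illustration only, the following self-contained sketch shows the external contract this enables: the caller supplies nothing but a target playback duration, while the scaling policy travels inside the effect file. All names are hypothetical, and the whole file is linearly stretched here; the per-interval, per-type mapping is detailed with formulas (1) to (3) below.

```python
# Deliberately simplified sketch (hypothetical names, whole-file linear
# stretch only); the real per-interval mapping follows formulas (1)-(3) below.

def frame_schedule(original_frames: int, original_duration: float,
                   target_duration: float, fps: int = 24) -> list[int]:
    """For each target-timeline timestamp, the original frame index to draw."""
    target_count = round(target_duration * fps)
    original_count = round(original_duration * fps)
    # Integer arithmetic keeps the frame rate fixed and avoids float rounding.
    return [min(original_frames - 1, i * original_count // target_count)
            for i in range(target_count)]

# Stretching a 1-second, 24-frame effect to 2 seconds shows each frame twice:
assert frame_schedule(24, 1.0, 2.0)[:4] == [0, 0, 1, 1]
```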
  • The electronic devices provided by the embodiments of the present application may be implemented as various types of terminal devices such as notebook computers, tablet computers, desktop computers, set-top boxes, and mobile devices (for example, mobile phones, portable music players, personal digital assistants, dedicated messaging devices, and portable game devices), and may also be implemented as servers.
  • FIG. 1 is a schematic structural diagram of a video special effect processing system provided by an embodiment of the present application.
  • the terminal 400 is connected to the server 200 through a network 300, and the network 300 may be a wide area network or a local area network, or a combination of the two.
  • the server 200 may be an independent physical server, a server cluster or a distributed system composed of multiple physical servers, or a cloud server that provides cloud computing services.
  • the terminal may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc., but is not limited thereto.
  • the terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in this application.
  • Cloud computing is a computing mode that distributes computing tasks on a resource pool composed of a large number of computers, enabling various application systems to obtain computing power, storage space and information services as needed.
  • the network that provides resources is called “cloud”, and the resources in the “cloud” are infinitely expandable in the eyes of users, and can be obtained at any time, used on demand, expanded at any time, and paid for according to usage.
  • In an example scenario, the designer uses the terminal 500, or calls a cloud service of the server, to design a video special effect file and sends it to the server 200 of the client (i.e., the background server); the server 200 stores the received video special effect file, or stores it in the database 600 or a file system.
  • The user uses the client running in the terminal 400; the client may be any of various types of applications such as game applications, social network applications, short video applications, and online shopping applications.
  • During use, the business logic of the server 200 for delivering videos is triggered; for example, the server 200 periodically delivers business usage reports, such as the business logic of delivering a weekly battle report video in a game or a monthly consumption video report. The specific content of the business usage report is related to the business of the client, and it triggers the server 200 to deliver a video special effect file that enhances the expressiveness of the video; the video special effect file may also be pre-stored by the client.
  • Based on the duration scaling policy in the video special effect file, the client running in the terminal 400 takes the playback duration of the video delivered by the server 200 as the target playback duration and, while rendering the video, renders the special effect object (video special effect) carried in the video special effect file to obtain a target video special effect conforming to the target playback duration, thereby achieving synchronous display of the video special effect and the video.
  • For example, the terminal 400, in response to receiving the native video (the weekly battle report video) and the video special effect file, obtains the native video duration (the duration of the weekly battle report video), takes the native video duration as the target playback duration, and decodes the video special effect file and performs the corresponding duration scaling processing, so that the duration-scaled video special effect is adapted to the playback duration of the native video. Finally, the video special effect is rendered and displayed synchronously with the native video, so as to display the weekly battle report video with the video special effect.
  • In another example, the server delivers multiple videos, and the video special effect serves as a transition animation that connects them in series. The duration of the transition animation can be specified when the server delivers the videos; for example, it can be determined according to the user account level (the higher the level, the shorter the transition). When the client running in the terminal 400 finishes playing one video, it renders the video special effect carried in the video special effect file to obtain a target video special effect conforming to the target playback duration; the target video special effect acts as the transition animation, making the connection between the videos more natural.
  • Specifically, the terminal 400 obtains from the server 200 the video special effect file of the corresponding video special effect, decodes it, performs the corresponding duration scaling processing based on the specified target playback duration (the duration of the transition animation), and renders the duration-scaled video special effect between the native videos.
  • The playback durations of the videos delivered by the server vary across users, and across different videos delivered to the same user. Reusing the same video special effect file in the playback of many videos at the same time reduces the computing resource consumption of repetitive video generation on the server and reduces the waiting delay on the user side.
  • In another example scenario, the client running in the terminal 400 is a social network client or a video sharing client, with functions of video capture, editing, and sharing.
  • The client captures video and synthesizes it with video special effect files downloaded from the server, for example through image splicing (both displayed at the same time) or timeline splicing (that is, using the video special effect file to connect multiple captured videos).
  • In the image splicing case, the native video duration serves as the target playback duration. The specific process is as follows: the terminal 400, in response to receiving the native video shot by the user, obtains the native video duration; the terminal 400 obtains the video special effect file of a certain video special effect from the server 200 and decodes it, takes the native video duration as the target playback duration, and performs the corresponding duration scaling processing on the video special effect so that it is adapted to the native video.
  • Image splicing processing of the video special effect and the native video is then performed with real-time rendering to preview the final editing effect. After previewing, the image splicing result can also be encoded to obtain a new video file and shared with other users.
  • In the timeline splicing case, the target playback duration is the duration set by the user or the client's default transition animation duration.
  • After one video is played, the video special effect carried in the video special effect file is rendered to obtain a target video special effect that matches the target playback duration; the target video special effect acts as a transition animation, making the connection between videos more natural. The specific process is as follows: the terminal 400 obtains the video special effect file of a certain video special effect from the server 200 and decodes it, performs the corresponding duration scaling processing based on the specified target playback duration (the duration of the transition animation), renders the duration-scaled video special effect between the native videos, performs timeline splicing processing on the duration-scaled video special effect and the native videos, and then performs encoding processing to obtain a new video file to share with other users.
  • During previewing, the user can continue to adjust the target playback duration and re-render the native video and video special effect until the preview result meets the requirements.
  • The client here is, for example, a video editing client.
  • When the computing resources (processor and memory) consumed by rendering exceed the terminal's capability, the client can request the server to perform the rendering and present the target video special effect according to the rendering data returned by the server.
  • FIG. 2 is a schematic structural diagram of the electronic device provided by the embodiment of the present application.
  • the terminal 400 shown in FIG. 2 includes: At least one processor 410 , memory 450 , at least one network interface 420 and user interface 430 .
  • The various components in the terminal 400 are coupled together by a bus system 440, which implements connection and communication between these components.
  • In addition to a data bus, the bus system 440 also includes a power bus, a control bus, and a status signal bus; for clarity, however, the various buses are all labeled as the bus system 440 in FIG. 2.
  • the processor 410 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc., where a general-purpose processor may be a microprocessor or any conventional processor or the like.
  • DSP: Digital Signal Processor
  • User interface 430 includes one or more output devices 431 that enable presentation of media content, including one or more speakers and/or one or more visual display screens.
  • User interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, and other input buttons and controls.
  • Memory 450 may be removable, non-removable, or a combination thereof.
  • Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like.
  • Memory 450 optionally includes one or more storage devices that are physically remote from processor 410 .
  • Memory 450 includes volatile memory or non-volatile memory, and may also include both volatile and non-volatile memory.
  • the non-volatile memory may be a read-only memory (ROM, Read Only Memory), and the volatile memory may be a random access memory (RAM, Random Access Memory).
  • ROM: Read Only Memory
  • RAM: Random Access Memory
  • the memory 450 described in the embodiments of the present application is intended to include any suitable type of memory.
  • memory 450 is capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
  • The operating system 451 includes system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, and a driver layer, used to implement various basic services and process hardware-based tasks;
  • a presentation module 453 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 431 (e.g., a display screen, speakers) associated with the user interface 430;
  • An input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
  • the apparatus for processing video special effects may be implemented in software.
  • FIG. 2 shows a processing apparatus 455 for video special effects stored in the memory 450, which may be in the form of programs and plug-ins.
  • the software includes the following software modules: a file acquisition module 4551, a duration acquisition module 4552, a special effect frame determination module 4553 and a rendering module 4554. These modules are logical, and therefore can be arbitrarily combined or further split according to the implemented functions.
  • The executing entity may be the terminal, and specifically a client running in the terminal; examples of the client have been described above and are not repeated here.
  • FIG. 3A is a schematic flowchart of a video special effect processing method provided by an embodiment of the present application, which will be described with reference to the steps shown in FIG. 3A .
  • In step 101, the terminal acquires a video special effect file and extracts a duration scaling policy from the video special effect file.
  • The main way to obtain the video special effect file is to export it through a plug-in.
  • the video special effect file can be a sticker animation file in PAG format.
  • According to specific requirements, one of the vector export method, the bitmap sequence frame export method, or the video sequence frame export method can be selected to export the PAG binary file; the client or the server then decodes the exported PAG binary file, and the decoded result is rendered by the rendering module.
  • the decoding and rendering process of the terminal can be realized by calling the rendering SDK.
  • The function of decoding is to deserialize the PAG binary file into a data object that the client can operate on.
  • the data structure can refer to the PAG data structure.
  • Obtaining a video special effect file in step 101 can be achieved by the following technical solution. Perform one of the following processes: perform encoding processing on multiple layer structures of the special effect object to obtain an encoded export file corresponding to the special effect object; perform encoding processing on multiple special effect frames of the special effect object to obtain an encoded export file corresponding to the special effect object; or perform video format compression processing on multiple special effect frames of the special effect object and perform encoding processing on the compression result to obtain an encoded export file corresponding to the special effect object. Then encapsulate the duration scaling type and the duration scaling interval in the encoded export file to obtain the video special effect file corresponding to the special effect object.
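  • As a sketch of what such encapsulation might look like, the field names below are illustrative assumptions; the actual PAG data structure is not reproduced in this document.

```python
from dataclasses import dataclass
from enum import Enum

class ScalingType(Enum):
    LINEAR = 1          # time linear scaling type
    REPEAT = 2          # time repeat type
    REVERSE_REPEAT = 3  # time reverse repeat type

@dataclass
class ScalingInterval:
    start: float        # interval start on the original timeline, in seconds
    end: float          # interval end on the original timeline, in seconds
    type: ScalingType   # how this interval is stretched

@dataclass
class EffectFile:
    encoded_frames: bytes            # vector / bitmap / video-format export result
    policy: list[ScalingInterval]    # duration scaling policy at the file root path
```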
  • Vector export can support most AE features, and the export file is extremely small; it is usually applied to user interfaces or scenes with editable content.
  • Vector export is a restoration of the AE animation layer structure, performed through the SDK provided by AE; dynamic bit storage technology is adopted in the export process, which greatly reduces the file size. The bitmap sequence frame and video sequence frame export methods can support all AE features, but the export files are large; they are usually used in video synthesis or for special animation effects.
  • What the bitmap sequence frame export method exports is bitmap data: each frame of the complex animation designed by the designer is converted into a picture format for storage. More specifically, since most AE animations are coherent with small differences between frames, one frame is selected as a key frame and the data of each subsequent frame is compared with it to obtain the position, width, and height of the difference bitmap; only the difference bitmap information is intercepted for storage, thereby reducing the file size.
  • Bitmap sequence frames support exporting multiple versions (different scaling factors, frame rates, and sizes) to meet the needs of different scenarios.
  • The advantage of this processing method is that it can support all AE features; the disadvantages are that the exported file is large and that image replacement and text editing operations in the AE animation cannot be realized. It is suitable for processing complex special effects such as masks and shadows, and is mainly used on web pages.
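  • The key-frame differencing described above can be sketched as follows, assuming frames are color images held as numpy arrays of identical shape; the function name and storage layout are illustrative only.

```python
import numpy as np

def diff_bitmap(key_frame: np.ndarray, frame: np.ndarray):
    """Return (x, y, width, height, cropped_pixels) of the changed region, or None."""
    changed = np.any(frame != key_frame, axis=-1)   # per-pixel change mask (H x W)
    ys, xs = np.nonzero(changed)
    if xs.size == 0:
        return None                                 # frame identical to key frame
    x0, x1 = xs.min(), xs.max() + 1                 # bounding box of the difference
    y0, y1 = ys.min(), ys.max() + 1
    return int(x0), int(y0), int(x1 - x0), int(y1 - y0), frame[y0:y1, x0:x1]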
  • The video sequence frame export method adopts the H.264 compression format from the video field; compared with bitmap sequence frames, decoding is faster, and this method is mainly applied on mobile terminals.
  • The video sequence frame export method compresses the captured pictures in a video format and, compared with the bitmap sequence frame export method, is more optimized in terms of picture volume and decoding efficiency. From a performance perspective, the vector export method can reach a state in which both file size and performance are highly optimized; for a PAG video special effect file generated by a sequence frame export method, the overall time consumption is related only to the size of the sequence frame images.
  • Encapsulating the user-input duration scaling type and duration scaling interval in the encoded export file actually modifies the data structure of the PAG sticker animation file: the duration scaling type and the duration scaling interval can be added at the level of the file root path.
  • The execution order of the encapsulation step and the encoding step is not limited; one of the encoding export processes can be performed first, and the duration scaling type and duration scaling interval added at the root path level afterwards.
  • In this way, the process in which the designer designs an animation in AE and then hands the animation feature data to a terminal development engineer to implement the animation function is reduced to the designer designing the animation in AE as a PAG sticker animation file, which the terminal directly loads and displays; this greatly reduces the workload of terminal development and is compatible with the development requirements of various platforms.
  • In some embodiments, the duration scaling policy includes a duration scaling interval and a corresponding duration scaling type. Extracting the duration scaling policy from the video special effect file in step 101 can be implemented by the following technical solution: decode the video special effect file to obtain at least one duration scaling interval of the video special effect file and the corresponding duration scaling type, where the duration scaling type is any one of the following: time linear scaling type, time repeat type, or time reverse repeat type.
  • the video special effect file may be the above-mentioned PAG sticker animation file, which is a description file for special effect objects.
  • From the video special effect file, the user-configured duration scaling intervals and the corresponding duration scaling types can be extracted. The number of duration scaling intervals is usually one, but in more complex application scenarios it is multiple; each duration scaling interval has a corresponding user-configured duration scaling type. A configuration entry is provided to receive the duration scaling interval and duration scaling type input by the user; see FIG. 6A, a schematic diagram of annotation of the video special effect processing method provided by the embodiment of the present application, in which the configuration entries 601 and 602 provided by the client in each application scenario receive the duration scaling interval and the duration scaling type input by the user.
  • Afterwards, a page jump can occur to continue receiving the input target playback duration corresponding to each duration scaling interval, or a configuration entry 603 for the target playback duration is provided. The received configuration information is sent to the AE client through inter-process communication, and the special effect data obtained in the AE client is encoded to obtain the final video special effect file, from which the policy is later extracted.
  • This implementation helps the client of any application scenario flexibly set the scaling policy of the animation special effect file; the duration scaling interval and the duration scaling type can also be configured through the AE client.
  • FIGS. 6B-6C are schematic diagrams of annotation of the video special effect processing method provided by the embodiment of the present application: the duration scaling interval and duration scaling type input by the user are received directly through the AE client, and the special effect data obtained in the AE client is encoded to obtain the final video special effect file.
  • This implementation can ease the development and optimization burden of the client in each application scenario; each client side can directly call the rendering SDK to obtain the video special effect file and execute the subsequent logic.
  • In step 102, a target playback duration that needs to be achieved when the video special effect file is applied to the design scene is obtained, where the target playback duration is different from the original playback duration of the video special effect file.
  • FIG. 3B is an optional schematic flowchart of the video special effect processing method provided by the embodiment of the present application. Obtaining, in step 102, the target playback duration that needs to be achieved when applying the video special effect file to the design scene can be implemented through steps 1021 to 1022, described below.
  • In step 1021, when the number of duration scaling intervals is multiple, the video special effect file is split into the same number of video special effect sub-files, and the target playback duration for each video special effect sub-file is obtained respectively.
  • The method of splitting the video special effect file is not limited; it is only necessary to ensure that each video special effect sub-file obtained after splitting contains one and only one duration scaling interval.
  • The target playback duration of each duration scaling interval can be obtained by allocating the target playback duration of the video special effect file. For example, the target playback duration of the video special effect file is 10 seconds, and the file has two duration scaling intervals: 1 second-2 seconds for the first and 3 seconds-4 seconds for the second. For the first target playback duration of the first duration scaling interval, it is only necessary to ensure that, combined with the second target playback duration of the second duration scaling interval, the target playback duration of the video special effect file is satisfied; that is, if the first duration scaling type is the repeat type, the first target playback duration must be greater than 2 seconds (for example, 1 second-3 seconds), and the sum of the first target playback duration and the second target playback duration must be 10 seconds. This implementation imposes few restrictions: only the relation between the target playback durations of the sub-files and the target playback duration of the video special effect file is constrained, and the target playback duration of each sub-file must satisfy its duration scaling interval. This avoids the trouble of manual configuration by the user and provides diversified, randomized rendering effects. A sketch of these constraints follows this item.
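  • A minimal sketch of the allocation constraints just described, under the stated assumptions that sub-file targets must sum to the file target and that a repeat-type sub-file can only be lengthened; the numbers in the usage line are hypothetical.

```python
def allocation_ok(file_target: float, subfiles, targets) -> bool:
    """subfiles: (original_duration, scaling_type) per sub-file;
    targets: target playback duration allocated to each sub-file."""
    if abs(sum(targets) - file_target) > 1e-9:
        return False                     # allocations must add up to the file target
    for (original, kind), target in zip(subfiles, targets):
        if kind == "repeat" and target <= original:
            return False                 # repetition can only lengthen playback
    return True

# Hypothetical split of the 10-second example: two 5-second sub-files,
# the first containing a repeat-type interval.
assert allocation_ok(10.0, [(5.0, "repeat"), (5.0, "linear")], [6.0, 4.0])
```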
  • Determining the target playback duration of each video special effect sub-file actually involves allocating the target playback duration of the video special effect file. An allocation configuration function can be provided for the user: a related entry can be provided among the user configuration entries in FIG. 6C so that the user can input the target playback duration configured for each duration scaling interval, that is, the user sets the target playback duration of the video special effect sub-file corresponding to each duration scaling interval. This implementation lets the user flexibly control the rendering effect of each sub-file in a fine-grained manner, and thereby control the rendering effect of the entire file.
  • In the scenario of automatically delivering videos with special effects, the duration of the native video can be directly used as the target playback duration of the video special effect file, and the native video duration is allocated across the scaling intervals so that the target playback duration of each scaling interval satisfies the above constraints. If an allocated target playback duration cannot fit the duration scaling type in the duration scaling policy of the corresponding video special effect sub-file (for example, the duration scaling type is the time repeat type but the target playback duration is shorter than the original playback duration), another video special effect file can be selected; the material library can hold video special effect files with the same special effect object but different duration scaling policies.
  • In step 1022, when the number of duration scaling intervals is one, the overall target playback duration for the video special effect file is acquired.
  • When the number of duration scaling intervals is one, there is no need to allocate the target playback duration of the video special effect file; the target playback duration configured by the user is directly used as the overall target playback duration of the video special effect file. If the target playback duration entered by the user does not conform to the duration scaling type of the duration scaling interval, an error message is returned to the user, and the entry for receiving the user's target playback duration is opened again.
  • In the scenario of automatically delivering videos with special effects, the duration of the native video can be directly used as the target playback duration of the special effect. If the duration of the native video does not satisfy the duration scaling type in the duration scaling policy of the corresponding special effect file, another video special effect file is selected; the material library can hold video special effect files with the same special effect object but different duration scaling policies.
  • In this way, the same video special effect file can be reused in the playback and editing processes of many videos at the same time, which reduces the computing resource consumption of the server repeatedly generating videos and reduces the waiting delay on the user side.
  • In some embodiments, when the number of duration scaling intervals is multiple, the following processing is performed for each video special effect sub-file: obtain the original time axis of the corresponding special effect object from the time axis of the video special effect sub-file, that is, the part of the sub-file's time axis in which the special effect object appears; keep the frame rate of the original time axis unchanged and perform duration scaling processing on the original time axis to obtain a target time axis conforming to the target playback duration. When the number of duration scaling intervals is one, the following processing is performed on the video special effect file: obtain the original time axis corresponding to the special effect object from the video special effect file; keep the frame rate of the original time axis unchanged and perform duration scaling processing on the original time axis to obtain a target time axis conforming to the target playback duration.
  • That is, for each sub-file the part of its time axis in which the special effect object appears is used as the original time axis, the frame rate of the original time axis is kept unchanged, and duration scaling processing is performed to obtain a target time axis conforming to the target playback duration; when the number of duration scaling intervals is one, the original time axis corresponding to the special effect object in the video special effect file is directly subjected to duration scaling processing to obtain the target time axis conforming to the target playback duration.
  • Keeping the frame rate unchanged means keeping the minimum time unit unchanged, so that duration scaling changes the playback progress of the special effect object, not its playback frame rate.
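  • In other words, stretching changes the number of timestamps on the timeline, never their spacing. A one-line sketch:

```python
def timeline(duration_s: float, fps: int = 24) -> list[float]:
    return [i / fps for i in range(round(duration_s * fps))]

assert len(timeline(1.0)) == 24   # original: 24 timestamps spaced 1/24 s apart
assert len(timeline(2.0)) == 48   # stretched: 48 timestamps, same 1/24 s spacing
```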
  • In step 103, the terminal determines the special effect frames corresponding to the target time axis in the video special effect file according to the duration scaling policy, where the length of the target time axis is consistent with the target playback duration.
  • FIG. 3C is a schematic flowchart of the video special effect processing method provided by an embodiment of the present application. When the number of duration scaling intervals is multiple, determining in step 103 the special effect frames corresponding to the target time axis in the video special effect file can be implemented by performing steps 1031A to 1032A for each duration scaling interval, described below.
  • In step 1031A, multiple special effect frames including the special effect object, and the timestamp of each special effect frame on the original time axis, are obtained from the video special effect sub-file; the timestamps serve as the original special effect frame timestamps of the special effect frames.
  • The rendering logic depends on the timestamp of each special effect frame of the video special effect sub-file on the original time axis.
  • For example, there are special effect frames 1 to 24 and the frame rate is 24 frames per second, that is, every 1/24 second is a timestamp: 0, 1/24, 2/24, ..., 23/24. Special effect frames 1 to 24 are presented at these 24 timestamps, and the timestamps 0, 1/24, 2/24, ..., 23/24 are the original special effect frame timestamps of special effect frames 1 to 24, respectively.
  • In step 1032A, the special effect frame corresponding to each timestamp on the target time axis is determined from among the multiple special effect frames based on the duration scaling interval and the original special effect frame timestamp of each special effect frame.
  • The process of duration scaling is actually to determine the special effect frame corresponding to each timestamp on the target time axis. Assume the original time axis is 1 second at 24 frames per second (every 1/24 second is a timestamp) and the target time axis is 2 seconds, still at 24 frames per second; then the timestamps on the target time axis are 0, 1/24, 2/24, ..., 23/24, 24/24, ..., 47/24. When performing scaling processing, the mapping relationship and mapping range between timestamps on the target time axis and timestamps on the original time axis can be determined through the duration scaling interval and the corresponding duration scaling type, so that the timestamp on the original time axis corresponding to each timestamp on the target time axis is determined.
  • Some of the resulting timestamps on the original time axis are original special effect frame timestamps, while others present no special effect frame. For example, when a timestamp on the target time axis maps to 1/48 on the original time axis, no special effect frame is presented at that original timestamp; for such timestamps, the nearest-neighbor principle is adopted to determine the special effect frame to be presented.
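  • The nearest-neighbor lookup can be sketched as a binary search over the sorted original special effect frame timestamps; this is an illustrative helper, not SDK code.

```python
import bisect

def nearest_frame_index(original_ts: float, frame_timestamps: list[float]) -> int:
    """Index of the special effect frame whose timestamp is closest to original_ts."""
    i = bisect.bisect_left(frame_timestamps, original_ts)
    if i == 0:
        return 0
    if i == len(frame_timestamps):
        return len(frame_timestamps) - 1
    before, after = frame_timestamps[i - 1], frame_timestamps[i]
    # On an exact tie (e.g. 1/48 between 0 and 1/24), either neighbor is valid;
    # this sketch keeps the earlier frame.
    return i if after - original_ts < original_ts - before else i - 1
```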
  • FIG. 3D is a schematic flowchart of the video special effect processing method provided by an embodiment of the present application. When the number of duration scaling intervals is one, determining in step 103 the special effect frames corresponding to the target time axis in the video special effect file according to the duration scaling policy can be implemented through steps 1031B to 1032B, described below.
  • In step 1031B, multiple special effect frames including the special effect object, and the timestamp of each special effect frame on the original time axis, are obtained from the video special effect file; the timestamps serve as the original special effect frame timestamps of the special effect frames.
  • The rendering logic depends on the timestamp of each special effect frame of the video special effect file on the original time axis.
  • For example, the frame rate is 24 frames per second, that is, every 1/24 second is a timestamp: 0, 1/24, 2/24, ..., 23/24. Special effect frames 1 to 24 are presented at these 24 timestamps, and the timestamps 0, 1/24, 2/24, ..., 23/24 are the original special effect frame timestamps of special effect frames 1 to 24, respectively.
  • In step 1032B, the special effect frame corresponding to each timestamp on the target time axis is determined from among the multiple special effect frames based on the duration scaling interval and the original special effect frame timestamp of each special effect frame.
  • As in step 1032A, the duration scaling process determines the special effect frame for each timestamp on the target time axis: the mapping relationship and mapping range between target-axis timestamps and original-axis timestamps are determined through the duration scaling interval and the corresponding duration scaling type, and the original timestamp corresponding to each target timestamp is thereby obtained.
  • Some of the resulting timestamps on the original time axis are original special effect frame timestamps, while others present no special effect frame; for the latter, the nearest-neighbor principle is adopted to determine the special effect frame to be presented.
  • In some embodiments, determining, from among the multiple special effect frames, the special effect frame corresponding to each timestamp on the target time axis based on the duration scaling interval and the original special effect frame timestamp of each special effect frame is achieved through the following technical solution. Take each timestamp on the target time axis in turn as the target timestamp and perform the following processing: determine the original timestamp corresponding to the target timestamp on the original time axis based on the duration scaling interval; when the original timestamp corresponding to the target timestamp overlaps any original special effect frame timestamp, determine the special effect frame of the overlapping original special effect frame timestamp as the special effect frame corresponding to the target timestamp; when the original timestamp corresponding to the target timestamp does not overlap any original special effect frame timestamp, determine the original special effect frame timestamp with the smallest distance from the original timestamp, and determine its special effect frame as the special effect frame corresponding to the target timestamp.
  • For example, there are special effect frames 1 to 24 and the frame rate is 24 frames per second, that is, every 1/24 second is a timestamp: 0, 1/24, 2/24, ..., 23/24. Special effect frames 1 to 24 are presented at these 24 timestamps, which are their original special effect frame timestamps, respectively.
  • The target time axis is 2 seconds and the frame rate is still 24 frames per second, so the timestamps on the target time axis are 0, 1/24, 2/24, ..., 23/24, 24/24, ..., 47/24. Take each timestamp on the target time axis as the target timestamp and determine the original timestamp corresponding to it. Ideally, the corresponding original special effect frame timestamps are 0, 0, 1/24, 1/24, 2/24, 2/24, ..., 23/24, 23/24; since these original timestamps overlap original special effect frame timestamps, the special effect frames at those timestamps are determined as the special effect frames of the target timestamps, so the target timestamps 0, 1/24, 2/24, ..., 23/24, 24/24, ..., 47/24 correspond to special effect frame 1, special effect frame 1, ..., special effect frame 24, special effect frame 24.
  • However, this is an idealized transformation: the original timestamp corresponding to a target timestamp is not necessarily an original special effect frame timestamp. For example, target timestamp 1/24 may correspond to original timestamp 1/48, but at a frame rate of 24 frames per second no special effect frame corresponds to original timestamp 1/48; therefore the nearest-neighbor principle is adopted, and the special effect frame A closest in time is taken as the special effect frame of original timestamp 1/48 and thus determined as the special effect frame corresponding to target timestamp 1/24.
  • Here special effect frame A can be the special effect frame at original timestamp 0 or the one at original timestamp 1/24 (both are equally close); if target timestamp 1/24 instead corresponds to original timestamp 1/36, the closest original special effect frame timestamp is 1/24, and special effect frame A is the special effect frame at timestamp 1/24.
  • In some embodiments, determining the original timestamp corresponding to the target timestamp on the original time axis based on the duration scaling interval can be implemented through the following technical solution, performed for each duration scaling interval: when the target timestamp is not greater than the start timestamp of the duration scaling interval, the target timestamp itself is determined as the corresponding original timestamp; when the target timestamp is greater than the start timestamp and smaller than the end timestamp of the duration scaling interval, mapping processing based on the duration scaling type is performed on the target timestamp to obtain the corresponding original timestamp; when the target timestamp is greater than or equal to the end timestamp and less than the target playback duration, the first difference between the original playback duration and the target playback duration is determined, the first difference and the target timestamp are summed, and the summation result is determined as the original timestamp corresponding to the target timestamp on the original time axis.
  • The duration scaling interval in the duration scaling policy already defines the interval that needs to be scaled. Referring to FIG. 7, the length of the original time axis is m, the length of the target time axis is n, the duration scaling interval runs from a to b, and its length is b-a. On the target time axis, the scaled interval starts at timestamp a and ends at timestamp n-(m-b). If the target timestamp t is between 0 and a, the corresponding timestamp on the original time axis is also t, because this period belongs to the fixed head; if t is between n-(m-b) and n, the corresponding timestamp on the original time axis is m-n+t; if t is between a and n-(m-b), mapping processing is performed according to the duration scaling type.
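  • A sketch of this piecewise mapping, with the type-specific middle passed in as a function; formulas (1) to (3) below supply the three variants.

```python
from typing import Callable

def map_to_original(t: float, a: float, b: float, m: float, n: float,
                    interval_map: Callable[[float], float]) -> float:
    """t: target timestamp; [a, b]: scaling interval on the original timeline;
    m, n: original and target playback durations; interval_map: type-specific
    mapping applied inside the scaled interval."""
    if t <= a:                 # fixed head: plays unchanged
        return t
    if t >= n - (m - b):       # fixed tail: shifted by the stretch amount n - m
        return t - (n - m)
    return interval_map(t)     # scaled middle: depends on the duration scaling type
```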
  • In some embodiments, the above mapping processing based on the duration scaling type can be implemented through the following technical solution. When the duration scaling type is the time linear scaling type: determine the second difference between the target playback duration and the original playback duration as the stretch length, and sum the stretch length with the length of the duration scaling interval; compute the ratio of the length of the duration scaling interval to the summation result to obtain the scaling coefficient; determine the third difference between the target timestamp and the start timestamp, and multiply the third difference by the scaling coefficient; sum the multiplication result with the start timestamp to obtain the corresponding original timestamp.
  • In terms of the variables above: the second difference n-m between the target playback duration n and the original playback duration m is the stretch length and is summed with the length b-a of the duration scaling interval; the scaling coefficient is the ratio k = (b-a)/((b-a)+(n-m)); the third difference t-a between the target timestamp t and the start timestamp a is multiplied by k; the product is summed with the start timestamp a to obtain the corresponding original timestamp.
  • The specific calculation principle can refer to the following formula (1):
  • f(t) = a + k(t - a), where k = (b - a) / ((b - a) + (n - m))   (1)
  • where a is the start timestamp, t is the target timestamp, k is the scaling coefficient, f(t) is the original timestamp, n is the target playback duration, and m is the original playback duration.
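  • Formula (1) translates directly into code; a sketch under the same variable names:

```python
def linear_map(t: float, a: float, b: float, m: float, n: float) -> float:
    """Time linear scaling: interval [a, b] stretches to length (b - a) + (n - m)."""
    k = (b - a) / ((b - a) + (n - m))   # scaling coefficient
    return a + k * (t - a)

# Stretching a 1 s file with interval [0.25, 0.75] to 2 s: the end of the
# scaled middle region maps back to b on the original timeline.
assert abs(linear_map(1.75, 0.25, 0.75, 1.0, 2.0) - 0.75) < 1e-9
```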
  • In some embodiments, the mapping processing based on the duration scaling type can alternatively be implemented as follows. When the duration scaling type is the time repeat type: determine the fourth difference between the target timestamp and the start timestamp, and take the remainder of the fourth difference with respect to the length of the duration scaling interval; sum the remainder result with the start timestamp to obtain the corresponding original timestamp.
  • In terms of the variables above: the fourth difference t-a between the target timestamp t and the start timestamp a is reduced modulo the length b-a of the duration scaling interval, and the remainder is summed with the start timestamp a to obtain the corresponding original timestamp. The specific calculation principle can refer to the following formula (2):
  • a is the start timestamp
  • t is the target timestamp
  • k is the scaling coefficient
  • f(t) is the original timestamp
  • n is the target playback duration
  • m is the original playback duration.
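Formula (2) reduces to a single modulo operation; a minimal Python sketch under the same assumptions (names ours; Python's `%` operator already implements the real-valued remainder used here):

```python
def map_repeat(t: float, a: float, b: float) -> float:
    """Map a target timestamp t in (a, n-m+b) back to the original timeline
    under the time repetition type, per formula (2): the interval (a, b)
    simply loops until the stretched interval is filled."""
    # Fourth difference t-a, reduced modulo the interval length b-a,
    # then shifted back by the interval start a
    return a + (t - a) % (b - a)

# Example: interval (a, b) = (0.5, 1.5); t = 2.7 maps to 0.5 + (2.2 % 1.0) ≈ 0.7
print(map_repeat(2.7, 0.5, 1.5))
```

Each full interval length b-a consumed by t-a wraps playback back to the interval start a, which produces the looping effect of the time repetition type.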
  • In some embodiments, the above mapping of the target timestamp based on the duration scaling type to obtain the corresponding original timestamp may be implemented as follows: when the duration scaling type is the reverse-chronological repetition type, the fifth difference between the target timestamp and the starting timestamp is determined; the remainder of the fifth difference modulo the length of the duration scaling interval is computed to obtain the remainder result, and the ratio of the fifth difference to the length of the duration scaling interval is computed to obtain the ratio result; the ratio result is rounded down to obtain the rounding result. When the rounding result is even, the remainder result is summed with the starting timestamp to obtain the corresponding original timestamp; when the rounding result is odd, the sixth difference between the length of the duration scaling interval and the remainder result is determined and summed with the starting timestamp to obtain the corresponding original timestamp.
  • As an example, when the target timestamp t is between a and n-m+b, the mapping is performed according to the duration scaling type. If the duration scaling type is the reverse-chronological repetition type, the fifth difference between the target timestamp t and the starting timestamp a is determined, and its remainder modulo the length (b-a) of the duration scaling interval is computed; the ratio of the fifth difference to (b-a) is rounded down to obtain the rounding result. For example, taking 8 modulo 3 yields a remainder of 2; the corresponding ratio is 8/3, and rounding it down yields 2. When the rounding result is even, the remainder result is summed with the starting timestamp a to obtain the corresponding original timestamp; when the rounding result is odd, the sixth difference between the length (b-a) and the remainder result is determined and summed with the starting timestamp a to obtain the corresponding original timestamp.
  • The specific calculation principle can be found in the following formula (3):
  • f(t) = a + (t-a) % (b-a) when ⌊(t-a)/(b-a)⌋ is even, and f(t) = b - (t-a) % (b-a) when ⌊(t-a)/(b-a)⌋ is odd, for a < t < n-m+b      (3);
  • where a is the start timestamp, t is the target timestamp, k is the scaling coefficient, f(t) is the original timestamp, n is the target playback duration, and m is the original playback duration.
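The reverse-chronological (ping-pong) case of formula (3) can be sketched as follows (Python; names ours; `math.floor` plays the role of the rounding-down step described above):

```python
import math

def map_ping_pong(t: float, a: float, b: float) -> float:
    """Map a target timestamp t in (a, n-m+b) back to the original timeline
    under the reverse-chronological repetition type, per formula (3):
    the interval plays forward, then backward, alternating."""
    length = b - a
    remainder = (t - a) % length          # remainder of the fifth difference
    cycle = math.floor((t - a) / length)  # rounded-down ratio result
    if cycle % 2 == 0:
        return a + remainder              # even cycle: forward playback
    return a + (length - remainder)       # odd cycle: backward playback (= b - remainder)

# Example matching the text: (t-a)=8, (b-a)=3 -> remainder 2, floor(8/3)=2 (even),
# so the mapped timestamp is a + 2
print(map_ping_pong(8.5, 0.5, 3.5))  # 2.5
```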
  • In step 104, the terminal performs rendering according to the special effect frames corresponding to the target timeline to obtain the target video special effect that meets the target playback duration.
  • As an example, for each timestamp on the target timeline, the terminal renders the corresponding special effect frame at that timestamp, thereby obtaining the target video special effect conforming to the target playback duration.
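Putting the pieces together, the per-timestamp rendering of steps 103-104 can be sketched as below (an illustrative Python sketch, not the PAG SDK: it assumes a single scaling interval of the time repetition type, a fixed frame rate of 24 frames per second, and a caller-supplied `render` callback; the nearest-frame rule follows the description above):

```python
def target_to_original(t, a, b, m, n):
    """Piecewise mapping of a target timestamp t onto the original timeline:
    fixed head, scaled middle (time repetition type here), fixed tail."""
    if t <= a:
        return t                      # fixed opening segment
    if t < n - (m - b):
        return a + (t - a) % (b - a)  # scaled interval (repetition type)
    return m - n + t                  # fixed ending segment

def render_target_timeline(frames, fps, a, b, m, n, render):
    """frames: list of special effect frames; frames[i] sits at i/fps on the
    original timeline. Renders one frame per target-timeline timestamp."""
    num_target_frames = round(n * fps)     # the frame rate is kept unchanged
    for i in range(num_target_frames):
        t = i / fps                        # target timestamp
        orig = target_to_original(t, a, b, m, n)
        # Nearest-timestamp rule: pick the original frame closest to orig
        index = min(round(orig * fps), len(frames) - 1)
        render(frames[index])

# Example: 2s of frames stretched to 3s with interval (0.5, 1.5) repeating
frames = [f"frame{i}" for i in range(48)]  # 2s at 24 fps
render_target_timeline(frames, 24, 0.5, 1.5, 2.0, 3.0, render=print)
```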
  • As an example, in a short-video editing scenario, in response to receiving a native video shot by the user or returned from the server, the terminal obtains the native video duration of the native video; the terminal decodes the video special effect file corresponding to a special effect object in the material library and performs the corresponding duration scaling, taking the native video duration as the target playback duration so that the special effect fits the native video. After the special effect is spliced with the native video, real-time rendering can be performed as a preview of the final effect; after previewing, the splicing result is encoded to obtain a new video file to share with other users.
  • As an example, in a short-video editing scenario, the special effect in the video special effect file can also be used as a transition animation between several native videos. The terminal decodes the video special effect file corresponding to a special effect object in the material library, receives the setting operation for the target playback duration, and performs the corresponding duration scaling, so that the scaled special effect object sits between the native videos. After timeline splicing of the special effect object with the native videos, real-time rendering can be performed as a preview of the final effect; after previewing, the splicing result is encoded to obtain a new video file to share with other users.
  • As an example, in scenarios that do not involve file sharing, such as a game battle report scenario, in response to receiving the native video and the special effect video file returned from the server, the terminal obtains the native video duration of the native video, decodes the video special effect file, and performs the corresponding duration scaling, taking the native video duration as the target playback duration so that the special effect object fits the playback duration of the native video; the special effect object and the native video are then rendered and displayed simultaneously.
  • FIG. 3E is a schematic flowchart of a method for processing video special effects provided by an embodiment of the present application.
  • In the above embodiments, the specific processing flow is performed only by the terminal; alternatively, the terminal and a server can cooperate to implement the above processing flow.
  • In step 201, the terminal sends a rendering request to the server. In step 202, the server obtains the video special effect file and extracts the duration scaling strategy from it. In step 203, the server returns the extracted duration scaling strategy to the terminal.
  • In step 204, the terminal receives the input target playback duration.
  • In step 205, the terminal sends the target playback duration to the server.
  • In step 206, the server determines, according to the duration scaling strategy, the special effect frames in the video special effect file corresponding to the target timeline.
  • In step 207, the server performs rendering according to the special effect frames corresponding to the target timeline to obtain the target video special effect that meets the target playback duration. In step 208, the server returns the target video special effect to the terminal. In step 209, the terminal presents the target video special effect. The above process involves interaction between the terminal and the server: the rendering, which requires substantial computing resources, is allocated to the server, while the terminal is only responsible for receiving the user's configuration requirements and presenting the rendered video special effect.
  • In other implementations, the logic completed by the above server can also be completed by calling a rendering SDK, or the rendering SDK can remotely call cloud server resources to complete it.
  • The embodiment of the present application provides a video special effect processing method that supports time scaling of a fixed animation file: the external application platform only needs to set the target playback duration of the animation file, and the animation file is time-scaled according to the scaling strategy configured by the user.
  • The playback duration scaling of the video special effect file is controlled by the duration scaling strategy in the video special effect file. After the video special effect file is decoded, processing and rendering according to the duration scaling strategy yields the target video special effect of the target playback duration. The method can be applied directly to various applications and platforms, is not limited by the platform's operating system, and its implementation flow is extremely simple.
  • FIG. 4 is a schematic diagram of the application effect of the video special effect processing method provided by the embodiment of the present application in the game weekly battle report video scenario. The terminal needs to present the horizontal video (native video) in the middle area and the sticker animations in the upper and lower areas (corresponding to the special effect objects of the video special effect file).
  • The sticker animations in the upper and lower areas are implemented with a PAG sticker animation. This is a portrait-orientation animation presented on the upper and lower edge areas of the video, accompanied by periodic animation effects. In other application scenarios, the opening and ending of the sticker animation are fixed while the middle part is time-stretched on demand; the requirement can be the target playback duration, and the target playback duration can be the duration of the horizontal video (the duration of the native video).
  • The client triggers the logic requesting the server to deliver the weekly battle report video, so that the server sends the weekly battle report video to the client. In response to receiving the weekly battle report video (native video) and the corresponding video special effect file returned from the server, the client obtains the duration of the weekly battle report video as the target playback duration, decodes the video special effect file based on the target playback duration, and performs the corresponding duration scaling, so that the scaled special effect object fits the duration of the weekly battle report video. Finally, the special effect object is rendered and displayed simultaneously with the weekly battle report video; the displayed result is the weekly battle report video with special effect objects.
  • FIG. 5 is a flowchart of a video special effect processing system provided by an embodiment of the present application.
  • First, the duration scaling interval and duration scaling type of the sticker animation are set through a plug-in in AE.
  • FIG. 6B is a schematic diagram of the annotation of the video special effect processing method provided by an embodiment of the present application: an annotation is added in the total composition of the sticker animation, and the plug-in in AE supports adding annotations to layers. (1) Clicking a blank area deselects all layers; (2) in response to clicking the layer control, the layer menu is presented; (3) in response to the add-marker operation, the marker is added. In this way, for a specific usage scenario, relevant identifiers such as the duration scaling interval and the duration scaling type can be added, so that the rendering SDK can later render according to them.
  • FIG. 6C is a schematic diagram of the annotation of the video special effect processing method provided by the embodiment of the present application: the duration scaling interval and four duration scaling types can be set. The specific setting process is as follows: (1) in response to double-clicking the annotation, the setting page is presented; (2) the duration scaling type (that is, the filled-in content) is received in the setting page; (3) the modified start time is received in the setting page; (4) a confirm-save operation is received in the setting page.
  • The duration scaling types include the following: 1. No-scaling type: indicates that no duration scaling is needed. 2. Linear scaling type: when the target playback duration set for the entire PAG sticker animation is longer than the original playback duration of the original PAG sticker animation, linear stretching is performed in the duration scaling interval; when the target playback duration is set shorter than the original playback duration, linear compression is performed in the duration scaling interval. 3. Repetition type: when the target playback duration set for the entire PAG sticker animation is longer than the original playback duration, periodic stretching is performed in the duration scaling interval. 4. Repeat-and-reverse type: when the target playback duration set for the entire PAG sticker animation is longer than the original playback duration, reverse-order periodic stretching is performed in the duration scaling interval, that is, forward playback first, then backward playback, then forward again, and so on.
  • After the duration scaling interval and type are set successfully, the special effect object is encoded to obtain the PAG sticker animation file, and the PAG sticker animation file is exported. By modifying the data structure of the PAG sticker animation file, the duration scaling type and the duration scaling interval are added at the level of the file root path, which is convenient for encoding.
  • When used on the platform side, the decoding module of the rendering SDK decodes and reads the corresponding data to obtain the duration scaling interval, and obtains the PAG rendering timeline based on it (covering no scaling, linear scaling, repeat scaling, and reverse-order repeat scaling).
  • The rendering SDK can be a client-side SDK or a server-side SDK: the client-side SDK completes rendering on the client (PAG rendering and drawing), and the server-side SDK completes rendering on the server (PAG rendering and drawing).
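For illustration, the scaling metadata added at the file root might be modeled as below (a hypothetical Python structure; the real PAG binary layout is not specified here, so all field names and types are assumptions):

```python
from dataclasses import dataclass
from enum import Enum

class ScaleType(Enum):
    NONE = 0            # no scaling
    LINEAR = 1          # linear stretch/compress
    REPEAT = 2          # periodic repetition
    REPEAT_REVERSE = 3  # forward/backward periodic repetition

@dataclass
class ScaledEffectFile:
    """Hypothetical root-level view of an exported effect file: the encoded
    export payload plus the duration scaling metadata added at the root."""
    encoded_payload: bytes   # vector / bitmap-sequence / video-sequence export
    original_duration: float # m, in seconds
    scale_interval: tuple    # (a, b), in seconds
    scale_type: ScaleType

    def decode_scaling_strategy(self):
        """What a rendering SDK's decode step would read back out."""
        return self.scale_interval, self.scale_type

f = ScaledEffectFile(b"...", 2.0, (0.5, 1.5), ScaleType.REPEAT)
print(f.decode_scaling_strategy())  # ((0.5, 1.5), ScaleType.REPEAT)
```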
  • FIG. 7 is a schematic diagram of a time axis of a method for processing video special effects provided by an embodiment of the present application.
  • The minimum time unit of both the original timeline and the target timeline is one frame; if the frame rate is 24 frames per second, 24 frames are presented per second and the minimum time unit is 1/24 second.
  • The original playback duration of the PAG sticker animation file is m, and it contains one duration scaling interval (a, b). If the duration scaling type is the no-scaling type, the rendering logic is the same as before and no duration scaling is performed. If the duration scaling type is one of the other types and the scaled target playback duration is n, the rendering progress is computed as follows: first compute the time scaling coefficient k = (b-a)/(n-m+b-a); t is the render time point after scaling, that is, the target timestamp on the target timeline, and f(t) is the original effect frame timestamp actually rendered on the original timeline of the PAG sticker animation.
  • When the time scaling type is the linear scaling type, the original effect frame timestamp actually rendered on the original timeline of the PAG sticker animation is calculated according to the following formula (4): f(t) = t for 0 ≤ t ≤ a; f(t) = a + k(t-a) for a < t < n-m+b; and f(t) = m-n+t for n-m+b ≤ t ≤ n. When the time scaling type is the repetition type, the calculation follows formula (5), which differs only in the middle segment: f(t) = a + (t-a) % (b-a) for a < t < n-m+b.
  • When the time scaling type is the repeat-and-reverse type, for a < t < n-m+b the original effect frame timestamp actually rendered on the original timeline of the PAG sticker animation is calculated in two cases: when ⌊(t-a)/(b-a)⌋ is even, f(t) = a + (t-a) % (b-a); when ⌊(t-a)/(b-a)⌋ is odd, f(t) = b - (t-a) % (b-a).
  • When t takes values in the other ranges, the calculation is the same as above. When the PAG sticker animation contains multiple duration scaling intervals, the calculation method is similar and is performed separately for each duration scaling interval.
  • After f(t) is computed with the above formulas, the rendering module in the rendering SDK can render the animation picture of the finally desired special effect according to the corresponding original effect frame timestamp. Finally, the animation picture of the special effect object is displayed simultaneously with the weekly battle report video; the displayed result is the weekly battle report video with special effect objects.
  • The embodiment of the present application provides a video special effect processing method that resolves the contradiction between the non-fixed duration requirements for special effect animations in user interface animation (for example, video editing) and server-side special effect video rendering scenarios, and the fixed duration of the sticker animation files designed by designers. When designing the sticker animation effect, once the designer has set the duration scaling interval and the duration scaling type, any platform only needs to set the target playback duration of the sticker animation during use to achieve the time-stretching effect of the animation.
  • The software modules of the video special effect processing apparatus 455 stored in the memory 450 may include: a file acquisition module 4551, configured to acquire a video special effect file and extract a duration scaling strategy from it; a duration acquisition module 4552, configured to acquire the target playback duration to be achieved when applying the video special effect file to a design scene, where the target playback duration is different from the original playback duration of the video special effect file;
  • a special effect frame determination module 4553, configured to determine, according to the duration scaling strategy, the special effect frames in the video special effect file corresponding to the target timeline, where the length of the target timeline is consistent with the target playback duration;
  • a rendering module 4554, configured to perform rendering according to the special effect frames corresponding to the target timeline to obtain the target video special effect conforming to the target playback duration.
  • In some embodiments, the file acquisition module 4551 is further configured to perform one of the following: encode multiple layer structures of the special effect object to obtain an encoded export file corresponding to the special effect object; encode multiple special effect frames of the special effect object to obtain an encoded export file corresponding to the special effect object; or perform video-format compression on multiple special effect frames of the special effect object and encode the compression result to obtain an encoded export file corresponding to the special effect object; and to encapsulate the duration scaling type and the duration scaling interval in the encoded export file to obtain the video special effect file corresponding to the special effect object.
  • In some embodiments, the duration scaling strategy includes duration scaling intervals and corresponding duration scaling types. The file acquisition module 4551 is further configured to decode the video special effect file to obtain at least one duration scaling interval of the video special effect file and the corresponding duration scaling type, where the duration scaling type is any one of the following: the time-linear scaling type, the time repetition type, and the reverse-chronological time repetition type.
  • In some embodiments, the duration acquisition module 4552 is further configured to: when the number of duration scaling intervals is multiple, split the video special effect file into the same number of video special effect sub-files and obtain the target playback duration for each video special effect sub-file; and when the number of duration scaling intervals is one, obtain the overall target playback duration for the video special effect file.
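The patent leaves the per-sub-file allocation open as long as the per-interval durations sum to the overall target and respect each interval's scaling type; a proportional split is one admissible choice, sketched below (Python; names ours):

```python
def allocate_target_durations(total_target, sub_durations):
    """Split the overall target playback duration across sub-files in
    proportion to their original durations (one admissible allocation)."""
    total_original = sum(sub_durations)
    return [total_target * d / total_original for d in sub_durations]

# Example: a 10s target split across sub-files originally 2s and 3s long
print(allocate_target_durations(10.0, [2.0, 3.0]))  # [4.0, 6.0]
```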
  • In some embodiments, the duration acquisition module 4552 is further configured to, after obtaining the target playback duration to be achieved by the video special effect file: when the number of duration scaling intervals is multiple, perform the following for each video special effect sub-file: obtain the original timeline of the corresponding special effect object from the video special effect sub-file; keep the frame rate of the original timeline unchanged and perform duration scaling on the original timeline to obtain the target timeline corresponding to the target playback duration;
  • and when the number of duration scaling intervals is one, perform the following for the video special effect file: obtain the original timeline of the corresponding special effect object from the video special effect file; keep the frame rate of the original timeline unchanged and perform duration scaling on the original timeline to obtain the target timeline corresponding to the target playback duration.
  • In some embodiments, when the number of duration scaling intervals is multiple, the special effect frame determination module 4553 is further configured to perform the following for each duration scaling interval: obtain, from the video special effect sub-file, multiple special effect frames including the special effect object and the timestamp of each special effect frame on the original timeline, which serves as the original effect frame timestamp of that frame; and, based on the duration scaling interval and the original effect frame timestamps, determine among the multiple special effect frames the one corresponding to each timestamp on the target timeline.
  • In some embodiments, when the number of duration scaling intervals is one, the special effect frame determination module 4553 is further configured to: obtain, from the video special effect file, multiple special effect frames including the special effect object and the timestamp of each special effect frame on the original timeline, which serves as the original effect frame timestamp of that frame; and, based on the duration scaling interval and the original effect frame timestamps, determine among the multiple special effect frames the one corresponding to each timestamp on the target timeline.
  • In some embodiments, the special effect frame determination module 4553 is further configured to take each timestamp on the target timeline in turn as the target timestamp and perform the following: based on the duration scaling interval, determine the original timestamp on the original timeline corresponding to the target timestamp; when that original timestamp overlaps any original effect frame timestamp, determine the special effect frame of the overlapping original effect frame timestamp as the special effect frame corresponding to the target timestamp; and when that original timestamp does not overlap any original effect frame timestamp, determine the original effect frame timestamp with the smallest distance from the original timestamp and determine its special effect frame as the special effect frame corresponding to the target timestamp.
  • In some embodiments, the special effect frame determination module 4553 is further configured to perform the following for each duration scaling interval: when the target timestamp is not greater than the start timestamp of the duration scaling interval, determine the target timestamp as the corresponding original timestamp on the original timeline; when the target timestamp is greater than the start timestamp and smaller than the end timestamp of the duration scaling interval, perform the mapping based on the duration scaling type on the target timestamp to obtain the corresponding original timestamp; and when the target timestamp is greater than or equal to the end timestamp and smaller than the target playback duration, determine the first difference between the original playback duration and the target playback duration, sum the first difference with the target timestamp, and determine the summation result as the original timestamp on the original timeline corresponding to the target timestamp.
  • In some embodiments, the special effect frame determination module 4553 is further configured to: when the duration scaling type is the time-linear scaling type, determine the second difference between the target playback duration and the original playback duration as the scaling length and sum the scaling length with the length of the duration scaling interval; compute the ratio of the length of the duration scaling interval to the summation result to obtain the scaling coefficient; determine the third difference between the target timestamp and the start timestamp and multiply it by the scaling coefficient; and sum the multiplication result with the start timestamp to obtain the corresponding original effect frame timestamp.
  • In some embodiments, the special effect frame determination module 4553 is further configured to: when the duration scaling type is the time repetition type, determine the fourth difference between the target timestamp and the start timestamp and compute its remainder modulo the length of the duration scaling interval; and sum the remainder result with the start timestamp to obtain the corresponding original timestamp.
  • In some embodiments, the special effect frame determination module 4553 is further configured to: when the duration scaling type is the reverse-chronological repetition type, determine the fifth difference between the target timestamp and the start timestamp; compute the remainder of the fifth difference modulo the length of the duration scaling interval to obtain the remainder result, and compute the ratio of the fifth difference to the length of the duration scaling interval to obtain the ratio result; round the ratio result down to obtain the rounding result; when the rounding result is even, sum the remainder result with the start timestamp to obtain the corresponding original timestamp; and when the rounding result is odd, determine the sixth difference between the length of the duration scaling interval and the remainder result and sum it with the start timestamp to obtain the corresponding original timestamp.
  • Embodiments of the present application provide a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to execute the method provided by the embodiments of the present application, for example, the video special effect processing method shown in FIGS. 3A-3E.
  • Embodiments of the present application provide a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • The processor of the electronic device reads the computer instructions from the computer-readable storage medium and executes them, so that the electronic device executes the video special effect processing method of the embodiments of the present application.
  • In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM; it may also be any device including one or any combination of the foregoing memories.
  • In some embodiments, executable instructions may take the form of programs, software, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • As an example, executable instructions may, but need not, correspond to files in a file system; they may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple cooperating files (for example, files that store one or more modules, subroutines, or code sections).
  • As an example, executable instructions may be deployed to execute on one electronic device, on multiple electronic devices located at one site, or on multiple electronic devices distributed across multiple sites and interconnected by a communication network.
  • In summary, by encapsulating the duration scaling strategy in the video special effect file, a single video special effect file can be freely scaled to the playback duration required by different application scenarios, which gives the method universal applicability; rendering on this basis to obtain the target video special effect avoids the huge consumption of computing and time resources that producing a large number of video special effect files with different playback durations would entail.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application provides a video special effect processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: acquiring a video special effect file and extracting a duration scaling strategy from it; acquiring the target playback duration to be achieved when applying the video special effect file to a design scene, where the target playback duration is different from the original playback duration of the video special effect file; determining, according to the duration scaling strategy, the special effect frames in the video special effect file corresponding to a target timeline, where the length of the target timeline is consistent with the target playback duration; and rendering according to the special effect frames corresponding to the target timeline to obtain a target video special effect conforming to the target playback duration.

Description

视频特效的处理方法、装置以及电子设备
相关申请的交叉引用
本申请基于申请号为202010599847.1、申请日为2020年06月28日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本申请涉及数字多媒体技术,尤其涉及一种视频特效的处理方法、装置、电子设备及计算机可读存储介质。
背景技术
互联网特别是移动互联网的发展,使得视频作为信息的传播媒介得到前所未有的运用。为了增强视频所承载信息的表现力,并提升关注度,相关技术通常会在拍摄完成中的视频中额外增加视频特效。
例如,相关技术中,可以基于专业的视频编辑设计软件例如AE(Adobe After Effect)来设计特效动画,AE中可以使用Airbnb开源的火热动画库方案以及便携式动画图形(PAG,Portable Animated Graphics)方案。
但是,通过视频编辑设计软件设计出的视频特效,在设计完成之后,视频特效的时长均是固定的,这就使得同一个视频特效难以适用于需求多样化的播放视频的应用场景;如果针对各种可能的场景来事先生成各种不同播放时长的视频特效,则不仅会造成计算资源的浪费,而且影响视频呈现的实时性。
发明内容
本申请实施例提供一种视频特效的处理方法,所述方法包括:
获取视频特效文件,并从所述视频特效文件中提取时长伸缩策略;
获取将所述视频特效文件应用于设计场景时需要实现的目标播放时长,其中,所述目标播放时长区别于所述视频特效文件的原始播放时长;
根据所述时长伸缩策略,确定所述视频特效文件中与目标时间轴对应的特效帧;
其中,所述目标时间轴的长度与所述目标播放时长一致;
根据与所述目标时间轴对应的特效帧进行渲染,得到符合所述目标播放时长的目标视频特效。
本申请实施例提供一种与所述视频特效的处理方法相关的装置、电子设备及计算机可读存储介质。
附图说明
图1是本申请实施例提供的视频特效的处理系统的结构示意图;
图2是本申请实施例提供的电子设备的结构示意图;
图3A-3E是本申请实施例提供的视频特效的处理方法的流程示意图;
图4是本申请实施例提供的视频特效的处理方法在游戏周战报视频的场景中的应用效果示意图;
图5是本申请实施例提供的视频特效的处理系统的流程图;
图6A-6C是本申请实施例提供的视频特效的处理方法的标注示意图;
图7是本申请实施例提供的视频特效的处理方法的时间轴示意图。
具体实施方式
对本申请实施例进行进一步详细说明之前,对本申请实施例中涉及的名词和术语进行说明,本申请实施例中涉及的名词和术语适用于如下的解释。
1)AE:是Adobe After Effect的简称,是Adobe公司推出的一款图形视频处理软件,适用于从事设计和视频特技的机构,包括电视台、动画制作公司、个人后期制作工作室以及多媒体工作室,属于层类型后期软件。
2)视频特效文件,承载特效内容的二进制文件,例如PAG文件,是一种以二进制文件格式进行储存的贴纸动画。
3)原始时间轴:视频特效文件整体所对应的时间轴,或者视频特效子文件所对应的特效部分进行播放时所对应的时间轴。
4)目标时间轴:视频特效文件中完整的特效对象进行伸缩处理后进行播放时所对应的时间轴,或者视频特效子文件所对应的部分特效对象进行伸缩处理后进行播放时所对应的时间轴。
相关技术中移动互联网客户端基于AE(Adobe After Effect)实现动画的方案有Airbnb开源的Lottie方案和PAG方案,它们都打通了AE动画设计到移动端呈现的工作流,设计师在AE上设计的动画通过导出插件导出动画文件,进而在移动端通过SDK进行加载渲染,从而大大降低了开发的成本,但是,两个方案通过AE设计出的动画文件中动画的时长都是固定的,申请人在实施本申请实施例的过程中发现,在部分用户界面动画及视频编辑的场景下,需要外部能够控制动画文件的时长,如对部分区间动画文件固定,对部分区间动画进行线性拉伸或循环处理,如贴纸动画长度为2秒,而是实际需要动画长度为4秒,外部需要将贴纸动画拉伸为4秒,或者对贴纸动画进行重复绘制,即将贴纸动画播放2遍。
针对相关技术的固定的视频特效文件与任意目标播放时长需求之间存在矛盾的技术问题,本申请实施例提供一种视频特效的处理方法,可以支持固定动画文件的时间伸缩,并且外部应用平台只需要设置动画文件的目标播放时间,动画文件就可以按照用户配置的伸缩策略进行时间伸缩。视频特效文件的播放时长伸缩处理是由视频特效文件中的时长伸缩策略控制,对视频特效文件解码 后,根据时长伸缩策略处理并渲染即可实现目标播放时长的目标视频特效;能够直接应用于各种应用和各种平台,且不受平台的操作系统的限制,实现流程极其简洁。
下面说明本申请实施例提供的电子设备的示例性应用,本申请实施例提供的电子设备可以实施为笔记本电脑,平板电脑,台式计算机,机顶盒,移动设备(例如,移动电话,便携式音乐播放器,个人数字助理,专用消息设备,便携式游戏设备)等各种类型的终端设备,也可以实施为服务器。
参见图1,图1是本申请实施例提供的视频特效的处理系统的结构示意图,终端400通过网络300连接服务器200,网络300可以是广域网或者局域网,又或者是二者的组合。
服务器200可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云计算服务的云服务器。终端可以是智能手机、平板电脑、笔记本电脑、台式计算机、智能音箱、智能手表等,但并不局限于此。终端以及服务器可以通过有线或无线通信方式进行直接或间接地连接,本申请在此不做限制。
云计算(cloud computing)是一种计算模式,它将计算任务分布在大量计算机构成的资源池上,使各种应用系统能够根据需要获取计算力、存储空间和信息服务。提供资源的网络被称为“云”,“云”中的资源在使用者看来是可以无限扩展的,并且可以随时获取,按需使用,随时扩展,按使用付费。
下面结合不同的应用场景说明本申请实施例提供的视频特效的处理方法的应用。
在一个应用场景中,设计者利用终端500或者通过调用服务器的云服务设计视频特效文件,发送视频特效文件到客户端的服务器200(即后台服务器),由服务器200存储接收到的视频特效文件,或由服务器200将视频特效文件存储到数据库600或文件系统;用户使用终端400中运行的客户端,客户端可以是游戏应用、社交网络应用、短视频应用、网上购物应用等的各种类型的应用,用户使用的过程中,触发服务器200的下发视频的业务逻辑,例如,服务器200定期下发业务使用报告,例如,下发游戏中的每周战报视频以及每月消费视频报告的业务逻辑,业务使用报告的具体内容与客户端的业务有关,并触发服务器200下发增强视频表现力的视频特效文件,视频特效文件可以是客户端预先存储的。
在一些应用场景中,终端400中运行的客户端基于视频特效文件中的时长伸缩策略,以服务器200下发的视频的播放时长为目标播放时长,在渲染视频的同时,还渲染视频特效文件中所承载的特效对象(视频特效),以得到符合目标播放时长的目标视频特效,从而实现视频特效与视频同步显示的效果,具体过程如下:终端400响应接收到从服务器200返回的原生视频(每周战报视频)以及视频特效文件,获取原生视频的原生视频时长(每周战报视频的时长),将原生视频时长作为目标播放时长,对视频特效文件进行解码并执行相应的时长伸缩处理,使得经过时长伸缩处理的视频特效适配于原生视频的播放时长,最后将该视频特效进行渲染并与原生视频进行同时显示,以显示出带有视频特效 的每周战报视频。
在一些应用场景中,服务器下发多个视频,视频特效作为多个视频之间的转场动画,以将多个视频进行串联播放;转场动画的时长可以是服务器下发视频时指定的,例如可以是根据用户账号等级(等级越高,则转场时间越短)确定转场动画的时长;当终端400中运行的客户端播放完一个视频时,渲染视频特效文件中所承载的视频特效,得到符合目标播放时长的目标视频特效,目标视频特效实际上是实现了转场动画的作用,以使多个视频之间的衔接更加自然,具体过程如下:终端400从服务器200中获取对应某一视频特效的视频特效文件进行解码,且基于指定的目标播放时长(转场动画的时长)执行相应的时长伸缩处理,并将经过时长伸缩处理之后的视频特效在若干个原生视频之间进行渲染。
服务器针对不同用户下发的视频,或者针对同一用户下发的不同视频的播放时长是变化的,利用同一个视频特效文件可以同时复用于众多视频的播放过程中,减少服务器重复生成视频的计算资源消耗,减少了用户侧的等待延迟。
在另一个应用场景中,终端400中运行的客户端是社交网络客户端,或者视频分享客户端,具有视频采集、编辑和分享的功能。客户端采集了视频,利用从服务器下载的视频特效文件进行合成,例如图像拼接(二者同时显示)或时间轴拼接(即使用视频特效文件衔接采集的多个视频),对于前者,以视频的播放时长为目标播放时长,具体过程如下:终端400响应接收到用户拍摄的原生视频,获取原生视频的原生视频时长,终端400从服务器200中获取对应某一视频特效的视频特效文件进行解码以将原生视频时长作为目标播放时长,针对视频特效执行相应的时长伸缩处理,使得该视频特效适配于原生视频,将该视频特效与原生视频进行图像拼接处理后,进行实时渲染以预览最终编辑效果,进行预览后还可以对图像拼接处理结果进行编码处理,得到一个新视频文件分享给其他用户,对于后者,以用户设定的时长或客户端默认的转场动画的时长为目标播放时长,在播放完一个视频后,渲染视频特效文件中所承载的视频特效,以得到符合目标播放时长的目标视频特效,目标视频特效实际上是实现了转场动画的作用,以使多个视频之间的衔接更加自然,具体过程如下:终端400从服务器200中获取对应某一视频特效的视频特效文件进行解码,且基于指定的目标播放时长(转场动画的时长)执行相应的时长伸缩处理,并将经过时长伸缩处理之后的视频特效在若干个原生视频之间进行渲染,并将经过时长伸缩处理之后的视频特效与若干个原生视频进行时间轴拼接处理,再进行编码处理,得到一个新的视频文件,以向其他用户进行分享。
在上述视频编辑场景中,用户可以继续调整目标播放时长并对原声视频以及视频特效进行重新渲染,直至确定得到的最终预览结果符合要求,客户端(视频编辑客户端)结合原生视频和视频特效文件编码得到完整的视频文件,并可以进行分享。
需要指出的,上文所述的客户端中用以实现特效处理的功能可以是客户端中原生的,也可以是客户端通过植入相应的插件例如软件开发工具包(SDK, Software Development Kit)来实现的,对于客户端中实现视频特效处理的具体形式不做限定。
此外,作为客户端渲染的替代方案,当渲染耗费的计算资源(处理器和内存)超出终端的承受能力时,客户端可以请求服务器进行渲染,并根据服务器返回的渲染数据呈现目标视频特效。
下面继续以本申请实施例提供的电子设备为上文所述的终端为例说明,参见图2,图2是本申请实施例提供的电子设备的结构示意图,图2所示的终端400包括:至少一个处理器410、存储器450、至少一个网络接口420和用户接口430。终端400中的各个组件通过总线系统440耦合在一起。可理解,总线系统440用于实现这些组件之间的连接通信。总线系统440除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图2中将各种总线都标为总线系统440。
处理器410可以是一种集成电路芯片,具有信号的处理能力,例如通用处理器、数字信号处理器(DSP,Digital Signal Processor),或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等,其中,通用处理器可以是微处理器或者任何常规的处理器等。
用户接口430包括使得能够呈现媒体内容的一个或多个输出装置431,包括一个或多个扬声器和/或一个或多个视觉显示屏。用户接口430还包括一个或多个输入装置432,包括有助于用户输入的用户接口部件,比如键盘、鼠标、麦克风、触屏显示屏、摄像头、其他输入按钮和控件。
存储器450可以是可移除的,不可移除的或其组合。示例性的硬件设备包括固态存储器,硬盘驱动器,光盘驱动器等。存储器450可选地包括在物理位置上远离处理器410的一个或多个存储设备。
存储器450包括易失性存储器或非易失性存储器,也可包括易失性和非易失性存储器两者。非易失性存储器可以是只读存储器(ROM,Read Only Memory),易失性存储器可以是随机存取存储器(RAM,Random Access Memory)。本申请实施例描述的存储器450旨在包括任意适合类型的存储器。
在一些实施例中,存储器450能够存储数据以支持各种操作,这些数据的示例包括程序、模块和数据结构或者其子集或超集,下面示例性说明。
操作系统451,包括用于处理各种基本系统服务和执行硬件相关任务的系统程序,例如框架层、核心库层、驱动层等,用于实现各种基础业务以及处理基于硬件的任务;
网络通信模块452,用于经由一个或多个(有线或无线)网络接口420到达其他电子设备,示例性的网络接口420包括:蓝牙、无线相容性认证(WiFi)、和通用串行总线(USB,Universal Serial Bus)等;
呈现模块453,用于经由一个或多个与用户接口430相关联的输出装置431(例如,显示屏、扬声器等)使得能够呈现信息(例如,用于操作外围设备和显示内容和信息的用户接口);
输入处理模块454,用于对一个或多个来自一个或多个输入装置432之一的一个或多个用户输入或互动进行检测以及翻译所检测的输入或互动。
在一些实施例中,本申请实施例提供的视频特效的处理装置可以采用软件方式实现,图2示出了存储在存储器450中的视频特效的处理装置455,其可以是程序和插件等形式的软件,包括以下软件模块:文件获取模块4551、时长获取模块4552、特效帧确定模块4553和渲染模块4554,这些模块是逻辑上的,因此根据所实现的功能可以进行任意的组合或进一步拆分。
将说明本申请实施例提供的视频特效的处理方法在上文所述的终端的示例性应用和实施。需要指出,下文是以终端的角度进行说明,但是可以理解的,根据上文的具体应用场景的介绍,下文中视频特效的相关处理可以是由终端中运行的客户端来完成的,因此下文中终端具体可以是终端中运行的客户端,关于客户端的示例上文已有说明,不再重复。
参见图3A,图3A是本申请实施例提供的视频特效的处理方法的流程示意图,将结合图3A示出的步骤进行说明。
在步骤101中,终端获取视频特效文件,并从视频特效文件中提取时长伸缩策略。
作为示例,获取视频特效文件的方式主要是通过插件来导出视频特效文件,视频特效文件可以是PAG格式的贴纸动画文件,为了读取工程文件中的动画特效数据,可以根据具体需求选择矢量导出方式、位图序列帧导出方式或者视频序列帧导出方式中的一种导出PAG二进制文件,客户端或者服务器端对导出的PAG二进制文件进行解码,这里以终端为例进行说明,即终端对导出的PAG二进制文件进行解码,进而通过渲染模块进行渲染后呈现,终端进行解码以及呈现的过程可以通过调用渲染SDK实现,解码的作用是将PAG二进制文件反序列化为客户端可以操作的数据对象,解码出的数据结构可以参照PAG数据结构。
在一些实施例中,步骤101中获取视频特效文件,可以通过以下技术方案实现:执行以下处理之一:对特效对象的多个图层结构进行编码处理,得到对应特效对象的编码导出文件;对特效对象的多个特效帧进行编码处理,得到对应特效对象的编码导出文件;对特效对象的多个特效帧进行视频格式压缩处理,并对得到的视频格式压缩处理结果进行编码处理,得到对应特效对象的编码导出文件;将时长伸缩类型以及时长伸缩区间封装在编码导出文件中,得到对应特效对象的视频特效文件。
作为示例,首先通过以下三种方式之一得到特效对象的编码导出文件:矢量导出、位图序列帧导出和视频序列帧导出,矢量导出方式可以支持大部分的AE特性,导出文件极小,通常应用于用户界面上或内容可编辑的场景,矢量导出是对AE动画图层结构的还原,矢量导出方式是通过AE提供的SDK,对AE动画图层结构进行了还原,并在导出的过程中采用了动态比特位存储技术,大幅降低了文件大小;位图序列帧导出和视频序列帧导出方式能够支持所有的AE特性,但是导出文件较大,通常应用在视频合成中或对动画特效有特别要求的场景,位图序列帧导出的是位图数据,位图序列帧导出是将设计师设计的复杂动效中的每一帧转换为图片格式进行存储,更具体的,针对AE大部分动画具有连贯性、帧间差异小的特点,选取某一帧作为关键帧,后面的每帧数据与之进行对比,获取到差异位图的位置信息、宽高数据,并截取差异位图信息进行 存储,从而降低文件大小。同时位图序列帧支持导出多个版本(缩放系数、帧率、尺寸不同),来满足不同场景下的需求,这种处理方式的优点是可以支持所有AE特性,缺点是导出的文件会偏大,且无法实现对AE动画中图片替换、文本编辑操作,适用于处理遮罩、阴影等复杂特效,主要应用于网页端;视频序列帧导出方式采用的是视频领域的H.264压缩格式,相对于位图序列帧,解码速度会更快,侧重应用于移动端,视频序列帧导出方式将截取的图片进行视频格式压缩,视频序列帧导出方式相对于位图序列帧导出方式具有更加优化的图片格式体积和解码效率,从性能方面看,矢量导出方式可以做到文件大小和性能都非常优化的极限状态,针对于序列帧导出方式产生的PAG视频特效文件,整体耗时只跟序列帧图片的尺寸有关系。
作为示例,将用户输入的时长伸缩类型以及时长伸缩区间封装在编码导出文件中,实际上是修改PAG贴纸动画文件的数据结构,可以在文件根路径的层级增加时长伸缩类型和时长伸缩区间,最终得到对应特效对象的视频特效文件,在具体实施的过程中,可以不限定封装步骤与进行编码步骤的执行顺序,即可以先在根路径的层级增加时长伸缩类型和时长伸缩区间,再进行上述三种方式之一的编码导出处理,也可以先进行编码导出处理,再在根路径的层级增加时长伸缩类型和时长伸缩区间。
通过上述实施例,将设计师通过AE设计动画、再提供动画特性数据到终端开发工程师以实现动画功能的开发流程,缩减为设计师通过AE设计动画的导出PAG贴纸动画文件,终端对PAG贴纸动画文件直接加载显示,即大幅减少了终端开发的工作量,也兼容了各个平台的开发需求。
在一些实施例中,时长伸缩策略包括时长伸缩区间以及对应的时长伸缩类型;步骤101中从视频特效文件中提取时长伸缩策略,可以通过以下技术方案实现:对视频特效文件进行解码处理,得到对应视频特效文件的至少一个时长伸缩区间以及对应的时长伸缩类型;其中,时长伸缩类型包括以下类型中的任意一种:时间线性伸缩类型;时间重复类型;时间倒序重复类型。
作为示例,视频特效文件可以为上述PAG贴纸动画文件,是一种针对特效对象的描述文件,在进行解码后可以提取到基于用户配置的时长伸缩区间与对应的时长伸缩类型,时长伸缩区间的数目通常为1个,但是在较为复杂的应用场景下,时长伸缩区间的数目为多个,针对每个时长伸缩区间,分别对应有用户配置的时长伸缩类型,对于用户配置功能,可以通过终端为用户提供配置入口,接收用户输入的时长伸缩区间与时长伸缩类型,参见图6A,图6A是本申请实施例提供的视频特效的处理方法的标注示意图,即通过各个应用场景下的客户端提供的配置入口601和602来接收用户输入的时长伸缩区间与时长伸缩类型,在接收到时长伸缩策略之后,还可以发生页面跳转,以继续接收输入的对应各个时长伸缩区间的目标播放时长,或者在设置完时长伸长策略之后,将所接收的配置信息通过进程间通信发送到AE客户端,以结合AE客户端中获取的特效数据进行编码处理,获取最终的视频特效文件,从视频特效文件中提取伸缩策略之后,再提供目标播放时长的配置入口603,这种实施方式能够有利于任意应用场景的客户端灵活设置动画特效文件的伸缩策略,还可以通过AE 客户端进行时长伸缩区间与时长伸缩类型的标注,参见图6B-6C,图6B-6C是本申请实施例提供的视频特效的处理方法的标注示意图,即直接通过AE客户端接收用户输入的时长伸缩区间与时长伸缩类型,以结合AE客户端中获取的特效数据进行编码处理,获取最终的视频特效文件,通过这种实施方式可以缓解各个应用场景的客户端的开发优化任务,各个客户端侧可以直接调用渲染SDK获取视频特效文件并执行后续的逻辑。
在步骤102中,获取将视频特效文件应用于设计场景时需要实现的目标播放时长,其中,目标播放时长区别于视频特效文件的原始播放时长。
基于图3A,参见图3B,图3B是本申请实施例提供的视频特效的处理方法的一个可选的流程示意图,步骤102中获取将视频特效文件应用于设计场景时需要实现的目标播放时长可以通过步骤1021至步骤1022实现,将结合各步骤进行说明。
在步骤1021中,当时长伸缩区间的数目为多个时,将视频特效文件拆分为与数目一致的多个视频特效子文件,并分别获取针对每个视频特效子文件的目标播放时长间。
作为示例,在实施过程中并不限定视频特效文件拆分的方式,只需要保证拆分后得到的每个视频特效子文件中均包含有且仅包含有一个时长伸缩区间,而针对于每个时长伸缩区间的目标播放时长可以是对视频特效文件的目标播放时长进行分配得到的,例如,视频特效文件的目标播放时长为10秒,视频特效文件中存在两个时长伸缩区间,分别是1秒-2秒的第一时长伸缩区间、以及3秒-4秒的第二时长伸缩区间这两个区间,那么针对于第一时长伸缩区间的第一目标播放时长仅需要保证结合对应第二时长伸缩区间的第二目标播放时长满足视频特效文件的目标播放时长,即若第一时长伸缩类型是重复类型,那么第一目标播放时长至少大于2秒(例如,1秒-3秒),且第一目标播放时长和第二目标播放时长的总和为10秒,这种实施方式的限制条件较少,即仅限定了对应各子文件的目标播放时长的和为视频特效文件的目标播放时长,以及各个子文件的目标播放时长满足对应时长伸缩区间,在不需要用户人为干预的情形下,避免了用户进行设置的麻烦,从而提供了多样化且随机的渲染效果。
作为示例,每个视频特效子文件的目标播放时长间实际涉及到视频特效文件的目标播放时长的分配问题,在进行分配的过程中,除了可以按照上述限制条件的约束进行任意分配之外,还可以为用户提供分配方案配置功能,在图6C的用户配置入口中还可以提供相关入口以方便用户输入为每个时长伸缩区间所配置的目标播放时长,例如,由用户设定每个时长伸缩区间对应的视频特效子文件的目标播放时长,通过这种实施方式有利于用户以更细粒度的方式灵活控制各个子文件的渲染效果,进而控制整个文件的渲染效果。
作为示例,当时长伸缩区间的数目为多个时,且存在需要与特效对象进行适配呈现的原生视频时,可以直接将原生视频的时长作为视频特效文件对应的目标播放时长,针对不同伸缩区间对原生视频的时长进行分配,使得每个伸缩区间的目标播放时长满足上述限制条件,若分配的目标播放时长无法适配对应视频特效子文件的时长伸缩策略中的时长伸缩类型,例如,时长伸缩类型为时 间重复类型,但是目标播放时长小于原始播放时长,则选择其他的视频特效文件,在素材库中可以存在相同特效对象对应不同时长伸缩策略的视频特效文件。
在步骤1022中,当时长伸缩区间的数目为一个时,获取针对视频特效文件的整体的目标播放时长。
作为示例,当时长伸缩区间的数目为一个时,并不存在需要对视频特效文件的目标播放时长进行分配的问题,直接将用户配置的目标播放时长作为视频特效文件的整体的目标播放时长,当用户输入的目标播放时长不符合时长伸缩区间的时长伸缩类型时,会返回用户提示错误信息,并再次开放入口供接收用户输入目标播放时长。
作为示例,当时长伸缩区间的数目为一个时,且存在需要与特效对象进行适配呈现的原生视频时,可以直接将原生视频的时长作为特效的目标播放时长,若原生视频的时长不满足对应特效文件的时长伸缩策略中的时长伸缩类型,则选择其他的视频特效文件,在素材库中可以存在相同特效对象对应不同时长伸缩策略的视频特效文件。
由于不同的应用场景下对于目标播放时长的需求是多种多样的,同一个视频特效文件可以同时复用于众多视频的播放过程以及编辑过程中,可以减少服务器重复生成视频的计算资源消耗,减少了用户侧的等待延迟。
在一些实施例中,在执行步骤102获取将视频特效文件应用于设计场景时需要实现的目标播放时长之后,还可以执行以下技术方案:当时长伸缩区间的数目为多个时,针对每个视频特效子文件执行以下处理:从视频特效子文件的时间轴中获取对应特效对象的原始时间轴,即子文件的时间轴中出现特效对象的部分时间轴;保持原始时间轴的帧率不变,对原始时间轴进行时长伸缩处理,得到对应目标播放时长的目标时间轴;当时长伸缩区间的数目为一个时,针对视频特效文件执行以下处理:从视频特效文件中获取对应特效对象的原始时间轴;保持原始时间轴的帧率不变,对原始时间轴进行时长伸缩处理,得到对应目标播放时长的目标时间轴。
作为示例,当时长伸缩区间的数目为多个时,从视频特效子文件的时间轴中出现特效对象的部分时间轴作为原始时间轴,保持原始时间轴的帧率不变,对原始时间轴进行时长伸缩处理,得到对应目标播放时长的目标时间轴。当时长伸缩区间的数目为一个时,直接将视频特效文件中对应特效对象的原始时间轴进行时长伸缩处理,得到对应目标播放时长的目标时间轴,上述这两种情形中所进行的伸缩处理时,均保持帧率不变,即保证最小时间单位不变,从而使得特效对象的时长伸缩处理的效果是播放进度发生改变,而不是播放帧率发生改变。
在步骤103中,终端根据时长伸缩策略,确定视频特效文件中与目标时间轴对应的特效帧;其中,目标时间轴的长度与目标播放时长一致。
基于图3A,参见图3C,图3C是本申请实施例提供的视频特效的处理方法的流程示意图,当时长伸缩区间的数目为多个时,步骤103中根据时长伸缩策略,确定视频特效文件中与目标时间轴对应的特效帧可以通过针对每个时长伸缩区间执行步骤1031A至步骤1032A实现,将结合各步骤进行说明。
在步骤1031A中,从视频特效子文件中获取包括特效对象的多个特效帧、以及每个特效帧在原始时间轴上对应的时间戳,并作为每个特效帧的原始特效帧时间戳。
作为示例,在不需要进行时长伸缩处理时,所依赖的渲染逻辑即是视频特效子文件中各个特效帧在原始时间轴上对应的时间戳,例如,存在特效帧1至特效帧24,帧率是24帧每秒,即每1/24秒为一个时间戳,例如0,1/24,2/24,…,23/24,在这24个时间戳上分别呈现特效帧1至24,上述时间戳0,1/24,2/24,…,23/24分别为特效帧1至特效帧24的原始特效帧时间戳。
在步骤1032A中,基于时长伸缩区间、以及每个特效帧的原始特效帧时间戳,在多个特效帧中确定与目标时间轴上每个时间戳对应的特效帧。
作为示例,进行时长伸缩处理的过程实际上是确定目标时间轴上每个时间戳对应的特效帧,假设原始时间轴是1秒,帧率是24帧每秒,即每1/24秒为一个时间戳,目标时间轴是2秒,帧率仍然是24帧每秒,则目标时间轴上每个时间戳为0,1/24,2/24,…,23/24,24/24,…,47/24,在进行伸缩处理时,可以通过时长伸缩区间以及对应的时长伸缩类型来确定目标时间轴上的时间戳与原始时间轴上时间戳的映射关系以及映射范围,从而基于映射关系以及映射范围确定目标时间轴上的每个时间戳所对应的原始时间轴上时间戳,所对应的原始时间轴上时间戳中有的时间戳是原始特效帧时间戳,有的时间戳没有呈现特效帧,例如目标时间轴上的时间戳为1/48时,所对应的原始时间轴上时间戳没有呈现特效帧,对于没有呈现特效帧的时间戳采取就近原则,来确定需要呈现的特效帧。
基于图3A,参见图3D,图3D是本申请实施例提供的视频特效的处理方法的流程示意图,当时长伸缩区间的数目为一个时,步骤103中根据时长伸缩策略,确定视频特效文件中与目标时间轴对应的特效帧可以通过步骤1031B至步骤1032B实现,将结合各步骤进行说明。
在步骤1031B中,从视频特效文件中获取包括特效对象的多个特效帧、以及每个特效帧在原始时间轴上对应的时间戳,并作为每个特效帧的原始特效帧时间戳。
作为示例,在不需要进行时长伸缩处理时,所依赖的渲染逻辑即是视频特效文件中各个特效帧在原始时间轴上对应的时间戳,例如,存在特效帧1至特效帧24,帧率是24帧每秒,即每1/24秒为一个时间戳,例如0,1/24,2/24,…,23/24,在这24个时间戳上分别呈现特效帧1至24,上述时间戳0,1/24,2/24,…,23/24分别为特效帧1至特效帧24的原始特效帧时间戳。
在步骤1032B中,基于时长伸缩区间、以及每个特效帧的原始特效帧时间戳,在多个特效帧中确定与目标时间轴上每个时间戳对应的特效帧。
作为示例,进行时长伸缩处理的过程实际上是确定目标时间轴上每个时间戳对应的特效帧,假设原始时间轴是1秒,帧率是24帧每秒,即每1/24秒为一个时间戳,目标时间轴是2秒,帧率仍然是24帧每秒,则目标时间轴上每个时间戳为0,1/24,2/24,…,23/24,24/24,…,47/24,在进行伸缩处理时,可以通过时长伸缩区间以及对应的时长伸缩类型来确定目标时间轴上的时间戳 与原始时间轴上时间戳的映射关系以及映射范围,从而基于映射关系以及映射范围确定目标时间轴上的每个时间戳所对应的原始时间轴上时间戳,所对应的原始时间轴上时间戳中有的时间戳是原始特效帧时间戳,有的时间戳没有呈现特效帧,例如目标时间轴上的时间戳为1/48时,所对应的原始时间轴上时间戳没有呈现特效帧,对于没有呈现特效帧的时间戳采取就近原则,来确定需要呈现的特效帧。
在一些实施例中,步骤1032A或者1032B中基于时长伸缩区间、以及每个特效帧的原始特效帧时间戳,在多个特效帧中确定与目标时间轴上每个时间戳对应的特效帧,可以通过以下技术方案实现:依次将目标时间轴上每个时间戳作为目标时间戳,并执行以下处理:基于时长伸缩区间,确定目标时间戳在原始时间轴上对应的原始时间戳;当目标时间戳在原始时间轴上对应的原始时间戳与任一原始特效帧时间戳重叠时,将重叠的原始特效帧时间戳对应的特效帧确定为目标时间戳对应的特效帧;当目标时间戳在原始时间轴上对应的原始时间戳未与任一原始特效帧时间戳重叠时,确定与原始时间戳距离最小的原始特效帧时间戳,并将原始特效帧时间戳对应的特效帧确定为目标时间戳对应的特效帧。
作为示例,假设原始时间轴是1秒,帧率是24帧每秒,即每1/24秒为一个时间戳,存在特效帧1至特效帧24,帧率是24帧每秒,即每1/24秒为一个时间戳,例如0,1/24,2/24,…,23/24,在这24个时间戳上分别呈现特效帧1至24,上述时间戳0,1/24,2/24,…,23/24分别为特效帧1至特效帧24的原始特效帧时间戳,目标时间轴是2秒,帧率仍然是24帧每秒,则目标时间轴上每个时间戳为0,1/24,2/24,…,23/24,24/24,…,47/24,将目标时间轴上每个时间戳作为目标时间戳,并分别确定各个目标时间戳对应的原始特效帧时间戳,分别对应的原始特效帧时间戳是0,0,1/24,1/24,2/24,2/24,…,23/24,23/24,由于目标时间戳在原始时间轴上对应的原始时间戳与原始特效帧时间戳重叠,则将这些原始特效帧时间戳所对应的特效帧分别确定为各个目标时间戳上的特效帧,则目标时间轴上每个时间戳为0,1/24,2/24,…,23/24,24/24,…,47/24分别对应的特效帧为特效帧1,特效帧1,…,特效帧24,特效帧24,但是这种变换情况是理想状态下的变换,在一些情形下,目标时间轴上每个目标时间戳所对应的原始时间轴上的原始时间戳并不一定都是原始特效帧时间戳,例如,假设目标时间戳1/24在原始时间轴上对应的原始时间戳为1/48,但是在帧率为24帧每秒的情况下原始时间轴上的原始时间戳1/48并没有对应的特效帧,因此采取就近原则,将时间距离最近的特效帧A作为原始时间戳1/48的特效帧,进而确定为目标时间戳1/24对应的特效帧,出于存在两个距离最近时间戳的原因,这里特效帧A可以是原始时间戳0上的特效帧,也可以是原始时间戳1/24的特效帧,若是目标时间戳1/24在原始时间轴上对应的原始时间戳为1/36,则距离其(原始时间戳)最近的时间戳是原始时间戳1/24,这里的特效帧A为时间戳1/24的特效帧。
在一些实施例中,上述基于时长伸缩区间,确定目标时间戳在原始时间轴上对应的原始时间戳,可以通过以下技术方案实现:针对每个时长伸缩区间执 行以下处理:当目标时间戳不大于时长伸缩区间的起始时间戳时,将目标时间戳确定为在原始时间轴上对应的原始时间戳;当目标时间戳大于时长伸缩区间的起始时间戳,且小于时长伸缩区间的终止时间戳时,对目标时间戳进行基于时长伸缩类型的映射处理,得到对应的原始时间戳;当目标时间戳大于或者等于终止时间戳,且小于目标播放时长时,确定原始播放时长与目标播放时长之间的第一差值,将第一差值与目标时间戳进行求和处理,并将求和处理结果确定为目标时间戳在原始时间轴上对应的原始时间戳。
作为示例,由于时长伸缩策略中的时长伸缩区间已经限定了需要进行伸缩的区间,因此针对于目标时间轴上不同的时间戳,将根据不同的映射关系,映射得到在原始时间轴上对应的原始时间戳,参见图7,原始时间轴的长度是m,目标时间轴的长度是n,时长伸缩区间是从a至b,时长伸缩区间的长度是b-a,经过时长拉伸处理之后,时长伸缩区间的起始时间戳是a,终止时间戳是n-(m-b),若目标时间戳t在0至a之间,则对应在原始时间轴上的时间戳也是t,因为这个时间段属于片头固定的区间,若目标时间戳t在n-(m-b)至n之间,则对应在原始时间轴上的时间戳是m-n+t,若目标时间戳t在a至n-m+b之间,则需要根据不同的时长伸缩类型进行映射处理。
在一些实施例中,上述对目标时间戳进行基于时长伸缩类型的映射处理,得到对应的原始时间戳,可以通过以下技术方案实现:当时长伸缩类型为时间线性伸缩类型时,将目标播放时长与原始播放时长的第二差值确定为伸缩长度,并将伸缩长度与时长伸缩区间的长度进行求和处理;对时长伸缩区间的长度与求和处理结果进行求比值处理,得到伸缩系数;确定目标时间戳与起始时间戳之间的第三差值,将第三差值与伸缩系数进行相乘处理;对相乘处理结果与起始时间戳进行求和处理,得到对应的原始时间戳。
作为示例,当目标时间戳t在a至n-m+b之间,则需要根据不同的时长伸缩类型进行映射处理,若时长伸缩类型为时间线性伸缩时,将目标播放时长n与原始播放时长m的第二差值确定为伸缩长度,并将伸缩长度与时长伸缩区间的长度b-a进行求和处理;对时长伸缩区间的长度b-a与求和处理结果进行求比值处理,得到伸缩系数k;确定目标时间戳t与起始时间戳a之间的第三差值,将第三差值与伸缩系数k进行相乘处理;对相乘处理结果与起始时间戳a进行求和处理,得到对应的原始时间戳,具体计算原理可以参见以下公式(1):
f(t)=a+k(t-a),a<t<n-m+b           (1);
其中,a为起始时间戳,t为目标时间戳,k为伸缩系数,f(t)为原始时间戳,n为目标播放时长,m为原始播放时长。
在一些实施例中,上述对目标时间戳进行基于时长伸缩类型的映射,得到对应的原始时间戳,可以通过以下技术方案实现:当时长伸缩类型为时间重复类型时,确定目标时间戳与起始时间戳之间的第四差值,对第四差值与时长伸缩区间的长度进行求余数处理;将求余数处理结果与起始时间戳进行求和处理,得到对应的原始时间戳。
作为示例,当目标时间戳t在a至n-m+b之间,则需要根据不同的时长伸 缩类型进行映射处理,若时长伸缩类型为时间重复类型时,确定目标时间戳t与起始时间戳a之间的第四差值,对第四差值与时长伸缩区间的长度(b-a)进行求余数处理;将求余数处理结果与起始时间戳a进行求和处理,得到对应的原始时间戳,具体计算原理可以参见以下公式(2):
f(t)=a+(t-a)%(b-a),a<t<n-m+b       (2);
其中,a为起始时间戳,t为目标时间戳,k为伸缩系数,f(t)为原始时间戳,n为目标播放时长,m为原始播放时长。
在一些实施例中,上述对目标时间戳进行基于时长伸缩类型的映射,得到对应的原始时间戳,可以通过以下技术方案实现:当时长伸缩类型为时间倒序重复类型时,确定目标时间戳与起始时间戳之间的第五差值;对第五差值与时长伸缩区间的长度进行求余数处理,得到求余数处理结果,并对所述第五差值与所述时长伸缩区间的长度进行求比值处理,得到比值处理;对比值结果进行取整处理,得到取整结果;当取整结果为偶数时,将求余数处理结果与起始时间戳进行求和处理,得到对应的原始时间戳;当取整结果为奇数时,确定时长伸缩区间的长度与求余数处理结果之间的第六差值,对第六差值与起始时间戳进行求和处理,得到对应的原始时间戳。
作为示例,当目标时间戳t在a至n-m+b之间,则需要根据不同的时长伸缩类型进行映射处理,若时长伸缩类型为时间倒序重复类型时,确定目标时间戳t与起始时间戳a之间的第五差值,对第五差值与时长伸缩区间的长度(b-a)进行求余数处理;例如,对8与3进行求余数处理,得到的求余数处理结果为2,对应的比值结果为8/3,取整处理之后得到的取整结果为2,针对对应求余数处理结果的比值结果进行取整处理,得到取整结果;当取整结果为偶数时,将求余数处理结果与起始时间戳a进行求和处理,得到对应的原始时间戳;当取整结果为奇数时,确定时长伸缩区间的长度(b-a)与求余数处理结果之间的第六差值,对第六差值与起始时间戳a进行求和处理,得到对应的原始时间戳,具体计算原理可以参见以下公式(3):
f(t)=a+(t-a)%(b-a),a<t<n-m+b       (3);
其中,a为起始时间戳,t为目标时间戳,k为伸缩系数,f(t)为原始时间戳,n为目标播放时长,m为原始播放时长。
在步骤104中,终端根据与目标时间轴对应的特效帧进行渲染,得到符合目标播放时长的目标视频特效。
作为示例,终端根据目标时间轴上每个时间戳的特效帧,在对应时间戳上进行渲染处理,从而得到符合目标播放时长的目标视频特效。
作为示例,在进行短视频编辑的场景下,终端响应接收到用户拍摄的原生视频或者从服务器返回的原生视频,获取原生视频的原生视频时长,终端对素材库中的对应某一特效对象的视频特效文件进行解码并执行相应的时长伸缩处理,将原生视频时长作为目标播放时长,使得该特效适配于原生视频,将该特效与原生视频进行拼接处理后可以进行实时渲染,作为最终效果的预览,进行预览后对拼接处理结果进行编码处理,得到一个新视频文件分享给其他用户。
作为示例,在进行短视频编辑的场景下,视频特效文件中的特效还可以作为若干个原生视频之间的衔接动画,终端对素材库中的对应某一特效对象的视频特效文件进行解码,且接收针对目标播放时长的设置操作并执行相应的时长伸缩处理,使得经过时长伸缩处理之后的特效对象处在若干个原生视频之间,将该特效对象与原生视频进行时间轴拼接处理后可以进行实时渲染,作为最终效果的预览,进行预览后对拼接处理结果进行编码处理,得到一个新视频文件分享给其他用户。
作为示例,在不涉及到文件分享的场景中,例如,游戏战报场景,终端响应接收到从服务器返回的原生视频以及特效视频文件,获取原生视频的原生视频时长,并对视频特效文件进行解码并执行相应的时长伸缩处理,将原生视频时长作为目标播放时长,使得该特效对象适配于原生视频的播放时长,将该特效对象与原生视频进行渲染并同时显示。
参见图3E,图3E是本申请实施例提供的视频特效的处理方法的流程示意图,上述实施例在实施过程中仅通过终端执行具体处理流程,除此之外,还可以结合终端和服务器来实现上述处理流程,在步骤201中,终端向服务器发出渲染请求,在步骤202中,服务器获取视频特效文件,并从视频特效文件中提取时长伸缩策略,在步骤203中,服务器将提取的时长伸缩策略返回至终端,在步骤204中,终端接收输入的目标播放时长,在步骤205中,终端将目标播放时长发送至服务器,在步骤206中,服务器根据时长伸缩策略,确定视频特效文件中与目标时间轴对应的特效帧;在步骤207中,服务器根据与目标时间轴对应的特效帧进行渲染,以得到符合目标播放时长的目标视频特效;在步骤208中,服务器将目标视频特效返回至终端,在步骤209中,终端呈现目标视频特效,上述过程涉及到终端与服务器之间的交互过程,将需要大量计算资源的渲染处理分配给服务器来完成,终端仅负责接收用户的配置需求,以及呈现渲染得到的视频特效,在其他实施方式中,还可以将上述服务器所完成的逻辑通过调用渲染SDK来完成,或者通过渲染SDK远程调用云服务器资源完成。
本申请实施例提供一种视频特效的处理方法,可以支持固定动画文件的时间伸缩,并且外部应用平台只需要设置动画文件的目标播放时间,动画文件就可以按照用户配置的伸缩策略进行时间伸缩。视频特效文件的播放时长伸缩处理是由视频特效文件中的时长伸缩策略控制,对视频特效文件解码后,根据时长伸缩策略处理并渲染即可实现目标播放时长的目标视频特效;能够直接应用于各种应用和各种平台,且不受平台的操作系统的限制,实现流程极其简洁。
下面,将说明本申请实施例提供的视频特效的处理方法在一个实际的应用场景中的示例性应用。
本申请实施例提供的视频特效的处理方法在游戏周战报视频中有着广泛的应用,参见图4,图4是本申请实施例提供的视频特效的处理方法在游戏周战报视频的场景中的应用效果示意图,终端需要进行呈现的是中间区域的横向视频(原生视频)及上下区域的贴纸动画(对应视频特效文件的特效对象),通过PAG贴纸动画来实现上下区域的贴纸动画,这是一个竖屏动画,呈现在视频的上下边缘区域,并伴随有周期性的动画效果,在其他的应用场景中,还存在贴 纸动画的片头以及片尾固定,中间部分的内容根据需求进行时间拉伸的情况,需求可以是目标播放时长,目标播放时长可以为横向视频的时长(原生视频的时长),图4中的应用效果示意图可以通过如下步骤实现,客户端触发了请求服务器下发每周战报视频的下发逻辑,使得服务器向客户端下发每周战报视频,客户端响应接收到从服务器返回的每周战报视频(原生视频)以及对应的视频特效文件,获取每周战报视频的时长作为目标播放时长,并基于目标播放时长对视频特效文件进行解码并执行相应的时长伸缩处理,使得经过时长伸缩处理的特效对象适配于每周战报视频的时长,最后将该特效对象进行渲染并与每周战报视频进行同时显示,显示出的效果即为带有特效对象的每周战报视频。
参见图5,图5是本申请实施例提供的视频特效的处理系统的流程图,首先在AE中通过插件设置贴纸动画的伸缩区间及伸缩类型,参见图6B,图6B是本申请实施例提供的视频特效的处理方法的标注示意图,在贴纸动画的总合成中添加标注,AE中的插件可以支持对图层添加标注,(1)通过点击空白区域的操作,使得没有图层被选择,(2)响应于点击图层控件的操作,呈现图层菜单,(3)响应于添加标记的操作实现标记添加过程,从而结合具体的使用场景,可以添加相关标识,例如时长伸缩区间和时长伸缩类型,便于后续通过渲染SDK按照时长伸缩区间和时长伸缩类型进行渲染处理,参见图6C,图6C是本申请实施例提供的视频特效的处理方法的标注示意图,具体可以设置时长伸缩区间以及四种时长伸缩类型,具体设置过程如下:(1)响应于双击标注的操作,呈现设置页面;(2)在设置页面中接收时长伸缩类型(即所填写的内容),(3)在设置页面中接收修改的起始时间,(4)在设置页面接收到确认保存操作,时长伸缩类型包括以下几种:1、无伸缩类型:表示无需进行时长伸缩;2、线性伸缩类型:当设置整个PAG贴纸动画的目标播放时长长于原始PAG贴纸动画的原始播放时长时,在时长伸缩区间进行线性拉伸,当设置整个PAG贴纸动画的目标播放时长短于原始PAG贴纸动画的原始播放时长时,在时长伸缩区间进行线性伸缩;3、重复类型:时间伸缩类型为重复类型,当设置整个PAG贴纸动画的目标播放时长长于原始PAG贴纸动画的原始播放时长时,在时长伸缩区间进行周期性拉伸;4、重复倒转类型:当设置整个PAG贴纸动画的目标播放时长长于原始PAG贴纸动画的原始播放时长时,在时长伸缩区间进行倒序周期性拉伸,即先进行正序播放,再进行倒序播放,然后再进行正序播放,接着再进行倒序播放,依此类推。
时长伸缩区间及类型设置成功后,对特效对象进行编码处理得到了PAG贴纸动画文件,并导出PAG贴纸动画文件,通过修改PAG贴纸动画文件的数据结构,在文件根路径的层级增加时长伸缩类型和时长伸缩区间,便于进行编码处理,具体到平台端使用时,渲染SDK的解码模块需要进行相应数据的解码读取,从而获取时间伸缩区间,基于时间伸缩区间获取PAG渲染时间轴(包括:无伸缩、线性伸缩、重复伸缩以及倒序重复伸缩),渲染SDK可以是客户端SDK或者是服务器SDK,客户端SDK在客户端完成渲染(PAG渲染绘制),服务器端SDK在服务器完成渲染(PAG渲染绘制)。
为了在PAG贴纸动画上增加支持时长伸缩的功能,并且因为基于AE设计 的动画的渲染逻辑比较复杂,例如带有轨迹计算、时间缓动等相关效果,如果通过修改PAG贴纸动画的具体图层的动画特性来实现动画时间拉伸功能,其实现复杂度会相当高,因此不宜修改PAG贴纸动画的图层的具体动画特性,于是渲染侧中可以在原始渲染时间计算逻辑上进行封装,通过改变原始动画文件的渲染进度来实现时长伸缩的功能,计算时长伸缩区间的具体渲染进度。
参见图7,图7是本申请实施例提供的视频特效的处理方法的时间轴示意图,原始时间轴和目标时间轴的最小时间单位均为帧,若帧率为24帧每秒,则表示一秒时间内呈现24帧,最小时间单位为1/24秒,PAG贴纸动画文件的原始播放时长为m,其中包含一个时长伸缩区间(a,b),如果时长伸缩类型为无伸缩类型,则渲染逻辑和之前的逻辑保持一致,不会进行时长伸缩处理,如果时长伸缩类型为其他几种类型,且经过伸缩后的目标播放时长为n,则具体渲染进度的计算过程如下:首先计算时间伸缩系数k,k=(b-a)/(n–m+b-a);t为伸缩后的渲染时间点,即目标时间轴上的目标时间戳,f(t)为PAG贴纸动画的原始时间轴上进行实际渲染的原始特效帧时间戳,当时间伸缩类型为线性伸缩类型时,按照如下公式(4)计算PAG贴纸动画的原始时间轴上进行实际渲染的原始特效帧时间戳:
Figure PCTCN2021095994-appb-000001
当时间伸缩类型为重复类型时,按照如下公式(5)计算PAG贴纸动画的原始时间轴上进行实际渲染的原始特效帧时间戳:
Figure PCTCN2021095994-appb-000002
当时间伸缩类型为重复倒转类型时,当a<t<n-m+b时,分两种情况计算PAG贴纸动画的原始时间轴上进行实际渲染的原始特效帧时间戳:当(t-a)/(b-a)计算结果取整为偶数时,f(t)=a+(t-a)%(b-a),当(t-a)/(b-a)计算结果取整为奇数时,f(t)=b-(t-a)%(b-a),当t为其它范围的取值时,与上面的计算相同,当PAG贴纸动画中有多个时长伸缩区间时,计算方法类似,需要针对多个时长伸缩区间进行计算,当通过以上公式计算出f(t)后,渲染SDK中的渲染模块便可以根据对应的原始特效帧时间戳渲染出最终需要的特效对应的动画画面,最后将该特效对象的动画画面与每周战报视频进行同时显示,显示出的效果即为带有特效对象的每周战报视频。
本申请实施例提供一种视频特效的处理方法,能够很好地解决了用户界面动画(例如,视频编辑)、服务器端特效视频渲染场景下对于特效动画的时长需求不固定与设计师设计的贴纸动画文件时长固定之间的矛盾,在设计贴纸动画效果时,当设计师设定好时长伸缩区间及时长伸缩类型后,任意平台端使用过程中只需要设置贴纸动画的目标播放时长,便可实现动画的时间拉伸效果。
下面继续说明本申请实施例提供的视频特效的处理装置455的实施为软件模块的示例性结构,在一些实施例中,如图2所示,存储在存储器450的视频特效的处理装置455中的软件模块可以包括:文件获取模块4551,配置为获取视频特效文件,并从视频特效文件中提取时长伸缩策略;时长获取模块4552,配置为获取将所述视频特效文件应用于设计场景时需要实现的目标播放时长,其中,目标播放时长区别于视频特效文件的原始播放时长;特效帧确定模块4553,配置为根据时长伸缩策略,确定视频特效文件中与目标时间轴对应的特效帧;其中,目标时间轴的长度与目标播放时长一致;渲染模块4554,配置为根据与目标时间轴对应的特效帧进行渲染,得到符合目标播放时长的目标视频特效。
在一些实施例中,文件获取模块4551,还配置为:执行以下处理之一:对特效对象的多个图层结构进行编码处理,得到对应特效对象的编码导出文件;对特效对象的多个特效帧进行编码处理,得到对应特效对象的编码导出文件;对特效对象的多个特效帧进行视频格式压缩处理,并对得到的视频格式压缩处理结果进行编码处理,得到对应特效对象的编码导出文件;将时长伸缩类型以及时长伸缩区间封装在编码导出文件中,得到对应特效对象的视频特效文件。
在一些实施例中,所述时长伸缩策略包括时长伸缩区间以及对应的时长伸缩类型;文件获取模块4551,还配置为:对视频特效文件进行解码处理,得到对应视频特效文件的至少一个时长伸缩区间以及对应的时长伸缩类型;其中,时长伸缩类型包括以下类型中的任意一种:时间线性伸缩类型;时间重复类型;时间倒序重复类型。
在一些实施例中,时长获取模块4552,还配置为:当时长伸缩区间的数目为多个时,将视频特效文件拆分为与数目一致的多个视频特效子文件,并分别获取针对每个视频特效子文件的目标播放时长;当时长伸缩区间的数目为一个时,获取针对视频特效文件的整体的目标播放时长。
在一些实施例中,时长获取模块4552,还配置为:在获取需要所述视频特效文件所实现的目标播放时长之后,当时长伸缩区间的数目为多个时,针对每个视频特效子文件执行以下处理:从视频特效子文件中获取对应特效对象的原始时间轴;保持原始时间轴的帧率不变,对原始时间轴进行时长伸缩处理,得到对应目标播放时长的目标时间轴;当时长伸缩区间的数目为一个时,针对视频特效文件执行以下处理:从视频特效文件中获取对应特效对象的原始时间轴;保持原始时间轴的帧率不变,对原始时间轴进行时长伸缩处理,得到对应目标播放时长的目标时间轴。
在一些实施例中,当时长伸缩区间的数目为多个时,特效帧确定模块4553,还配置为:针对每个时长伸缩区间执行以下处理:从视频特效子文件中获取包括特效对象的多个特效帧、以及每个特效帧在原始时间轴上对应的时间戳,并作为每个特效帧的原始特效帧时间戳;基于时长伸缩区间、以及每个特效帧的原始特效帧时间戳,在多个特效帧中确定与目标时间轴上每个时间戳对应的特效帧。
在一些实施例中,当时长伸缩区间的数目为一个时,特效帧确定模块4553, 还配置为:从视频特效文件中获取包括特效对象的多个特效帧、以及每个特效帧在原始时间轴上对应的时间戳,并作为每个特效帧的原始特效帧时间戳;基于时长伸缩区间、以及每个特效帧的原始特效帧时间戳,在多个特效帧中确定与目标时间轴上每个时间戳对应的特效帧。
在一些实施例中,特效帧确定模块4553,还配置为:依次将目标时间轴上每个时间戳作为目标时间戳,并执行以下处理:基于时长伸缩区间,确定目标时间戳在原始时间轴上对应的原始时间戳;当目标时间戳在原始时间轴上对应的原始时间戳与任一原始特效帧时间戳重叠时,将重叠的原始特效帧时间戳对应的特效帧确定为目标时间戳对应的特效帧;当目标时间戳在原始时间轴上对应的原始时间戳未与任一原始特效帧时间戳重叠时,确定与原始时间戳距离最小的原始特效帧时间戳,并将原始特效帧时间戳对应的特效帧确定为目标时间戳对应的特效帧。
在一些实施例中,特效帧确定模块4553,还配置为:针对每个时长伸缩区间执行以下处理:当目标时间戳不大于时长伸缩区间的起始时间戳时,将目标时间戳确定为在原始时间轴上对应的原始时间戳;当目标时间戳大于时长伸缩区间的起始时间戳,且小于时长伸缩区间的终止时间戳时,对目标时间戳进行基于时长伸缩类型的映射处理,得到对应的原始时间戳;当目标时间戳大于或者等于终止时间戳,且小于目标播放时长时,确定原始播放时长与目标播放时长之间的第一差值,将第一差值与目标时间戳进行求和处理,并将求和处理结果确定为目标时间戳在原始时间轴上对应的原始时间戳。
在一些实施例中,特效帧确定模块4553,还配置为:当时长伸缩类型为时间线性伸缩类型时,将目标播放时长与原始播放时长的第二差值确定为伸缩长度,并将伸缩长度与时长伸缩区间的长度进行求和处理;对时长伸缩区间的长度与求和处理结果进行求比值处理,得到伸缩系数;确定目标时间戳与起始时间戳之间的第三差值,将第三差值与伸缩系数进行相乘处理;对相乘处理结果与起始时间戳进行求和处理,得到对应的原始特效帧时间戳。
在一些实施例中,特效帧确定模块4553,还配置为:当时长伸缩类型为时间重复类型时,确定目标时间戳与起始时间戳之间的第四差值,对第四差值与时长伸缩区间的长度进行求余数处理;将求余数处理结果与起始时间戳进行求和处理,得到对应的原始时间戳。
在一些实施例中,特效帧确定模块4553,还配置为:当时长伸缩类型为时间倒序重复类型时,确定目标时间戳与起始时间戳之间的第五差值;对第五差值与时长伸缩区间的长度进行求余数处理,得到求余数处理结果,并对所述第五差值与所述时长伸缩区间的长度进行求比值处理,得到比值结果;对比值结果进行取整处理,得到取整结果;当取整结果为偶数时,将求余数处理结果与起始时间戳进行求和处理,得到对应的原始时间戳;当取整结果为奇数时,确定时长伸缩区间的长度与求余数处理结果之间的第六差值,对第六差值与起始时间戳进行求和处理,得到对应的原始时间戳。
本申请实施例提供一种存储有可执行指令的计算机可读存储介质,其中存储有可执行指令,当可执行指令被处理器执行时,将引起处理器执行本申请实 施例提供的电子红包的发送方法,例如,如图3A-3E示出的视频特效的处理方法。
本申请实施例提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。电子设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该电子设备执行本申请实施例的视频特效的处理方法。
在一些实施例中,计算机可读存储介质可以是FRAM、ROM、PROM、EPROM、EEPROM、闪存、磁表面存储器、光盘、或CD-ROM等存储器;也可以是包括上述存储器之一或任意组合的各种设备。
在一些实施例中,可执行指令可以采用程序、软件、软件模块、脚本或代码的形式,按任意形式的编程语言(包括编译或解释语言,或者声明性或过程性语言)来编写,并且其可按任意形式部署,包括被部署为独立的程序或者被部署为模块、组件、子例程或者适合在计算环境中使用的其它单元。
作为示例,可执行指令可以但不一定对应于文件系统中的文件,可以可被存储在保存其它程序或数据的文件的一部分,例如,存储在超文本标记语言(HTML,Hyper Text Markup Language)文档中的一个或多个脚本中,存储在专用于所讨论的程序的单个文件中,或者,存储在多个协同文件(例如,存储一个或多个模块、子程序或代码部分的文件)中。
作为示例,可执行指令可被部署为在一个电子设备上执行,或者在位于一个地点的多个电子设备上执行,又或者,在分布在多个地点且通过通信网络互连的多个电子设备上执行。
综上,通过本申请实施例,通过视频特效文件中封装时长伸缩策略,使得一个视频特效文件自由伸缩为不同应用场景所需求的播放时长,具有普遍的适用性,再次基础上进行渲染得到目标视频特效,节约了制作大量不同播放时长的视频特效文件所带来的计算资源和时间资源的巨大消耗。
以上,仅为本申请的实施例而已,并非用于限定本申请的保护范围。凡在本申请的精神和范围之内所作的任何修改、等同替换和改进等,均包含在本申请的保护范围之内。

Claims (15)

  1. 一种视频特效的处理方法,所述方法由电子设备执行,所述方法包括:
    获取视频特效文件,并从所述视频特效文件中提取时长伸缩策略;
    获取将所述视频特效文件应用于设计场景时需要实现的目标播放时长,其中,所述目标播放时长区别于所述视频特效文件的原始播放时长;
    根据所述时长伸缩策略,确定所述视频特效文件中与目标时间轴对应的特效帧;
    其中,所述目标时间轴的长度与所述目标播放时长一致;
    根据与所述目标时间轴对应的特效帧进行渲染,得到符合所述目标播放时长的目标视频特效。
  2. 根据权利要求1所述的方法,其中,所述获取视频特效文件,包括:
    执行以下处理之一:
    对所述特效对象的多个图层结构进行编码处理,得到对应所述特效对象的编码导出文件;
    对所述特效对象的多个特效帧进行编码处理,得到对应所述特效对象的编码导出文件;
    对所述特效对象的多个特效帧进行视频格式压缩处理,并对得到的视频格式压缩处理结果进行编码处理,得到对应所述特效对象的编码导出文件;
    将时长伸缩类型以及时长伸缩区间封装在所述编码导出文件中,得到对应所述特效对象的视频特效文件。
  3. 根据权利要求1所述的方法,其中,
    所述时长伸缩策略包括时长伸缩区间以及对应的时长伸缩类型;
    所述从所述视频特效文件中提取时长伸缩策略,包括:
    对所述视频特效文件进行解码处理,得到对应所述视频特效文件的至少一个时长伸缩区间以及对应的时长伸缩类型;
    其中,所述时长伸缩类型包括以下类型中的任意一种:时间线性伸缩类型;时间重复类型;时间倒序重复类型。
  4. 根据权利要求3所述的方法,其中,所述获取将所述视频特效文件应用于设计场景时需要实现的目标播放时长,包括:
    当所述时长伸缩区间的数目为多个时,将所述视频特效文件拆分为与所述数目一致的多个视频特效子文件,并分别获取针对每个所述视频特效子文件的目标播放时长;
    当所述时长伸缩区间的数目为一个时,获取针对所述视频特效文件的整体的目标播放时长。
  5. 根据权利要求4所述的方法,其中,在获取将所述视频特效文件应用于设计场景时需要实现的目标播放时长之后,所述方法还包括:
    当所述时长伸缩区间的数目为多个时,针对每个所述视频特效子文件执行以下处理:
    从所述视频特效子文件中获取对应特效对象的原始时间轴;
    保持所述原始时间轴的帧率不变,对所述原始时间轴进行时长伸缩处理,得到对应所述目标播放时长的目标时间轴;
    当所述时长伸缩区间的数目为一个时,针对所述视频特效文件执行以下处理:
    从所述视频特效文件中获取对应特效对象的原始时间轴;
    保持所述原始时间轴的帧率不变,对所述原始时间轴进行时长伸缩处理,得到对应所述目标播放时长的目标时间轴。
  6. 根据权利要求5所述的方法,其中,
    当所述时长伸缩区间的数目为多个时,所述根据所述时长伸缩策略,确定所述视频特效文件中与目标时间轴对应的特效帧,包括:
    针对每个时长伸缩区间执行以下处理:
    从所述视频特效子文件中获取包括所述特效对象的多个特效帧、以及每个所述特效帧在所述原始时间轴上对应的时间戳,并作为每个所述特效帧的原始特效帧时间戳;
    基于时长伸缩区间、以及每个所述特效帧的原始特效帧时间戳,在所述多个特效帧中确定与所述目标时间轴上每个时间戳对应的特效帧。
  7. 根据权利要求5所述的方法,其中,
    当所述时长伸缩区间的数目为一个时,所述根据所述时长伸缩策略,确定所述视频特效文件中与目标时间轴对应的特效帧,包括:
    从所述视频特效文件中获取包括所述特效对象的多个特效帧、以及每个所述特效帧在所述原始时间轴上对应的时间戳,并作为每个所述特效帧的原始特效帧时间戳;
    基于时长伸缩区间、以及每个所述特效帧的原始特效帧时间戳,在所述多个特效帧中确定与所述目标时间轴上每个时间戳对应的特效帧。
  8. 根据权利要求6或7所述的方法,其中,所述基于时长伸缩区间、以及每个所述特效帧的原始特效帧时间戳,在所述多个特效帧中确定与所述目标时间轴上每个时间戳对应的特效帧,包括:
    依次将所述目标时间轴上每个时间戳作为目标时间戳,并执行以下处理:
    基于时长伸缩区间,确定所述目标时间戳在所述原始时间轴上对应的原始时间戳;
    当所述目标时间戳在所述原始时间轴上对应的原始时间戳与任一所述原始特效帧时间戳重叠时,将重叠的所述原始特效帧时间戳对应的特效帧确定为所述目标时间戳对应的特效帧;
    当所述目标时间戳在所述原始时间轴上对应的原始时间戳未与任一所述原始特效帧时间戳重叠时,确定与所述原始时间戳距离最小的所述原始特效帧时间戳,并将所述原始特效帧时间戳对应的特效帧确定为所述目标时间戳对应的特效帧。
  9. 根据权利要求8所述的方法,其中,所述基于时长伸缩区间,确定所述目标时间戳在所述原始时间轴上对应的原始时间戳,包括:
    针对每个所述时长伸缩区间执行以下处理:
    当所述目标时间戳不大于所述时长伸缩区间的起始时间戳时,将所述目标时间戳确定为在所述原始时间轴上对应的原始时间戳;
    当所述目标时间戳大于所述时长伸缩区间的起始时间戳,且小于所述时长伸缩区间的终止时间戳时,对所述目标时间戳进行基于所述时长伸缩类型的映射处理,得到对应的原始时间戳;
    当所述目标时间戳大于或者等于所述终止时间戳,且小于所述目标播放时长时,确定所述原始播放时长与所述目标播放时长之间的第一差值,将所述第一差值与所述目标时间戳进行求和处理,并将求和处理结果确定为所述目标时间戳在所述原始时间轴上对应的原始时间戳。
  10. 根据权利要求9所述的方法,其中,所述对所述目标时间戳进行基于所述时长伸缩类型的映射处理,得到对应的原始时间戳,包括:
    当所述时长伸缩类型为所述时间线性伸缩类型时,将所述目标播放时长与所述原始播放时长的第二差值确定为伸缩长度,并将所述伸缩长度与所述时长伸缩区间的长度进行求和处理;
    对所述时长伸缩区间的长度与求和处理结果进行求比值处理,得到伸缩系数;
    确定所述目标时间戳与所述起始时间戳之间的第三差值,将所述第三差值与所述伸缩系数进行相乘处理;
    对相乘处理结果与所述起始时间戳进行求和处理,得到对应的原始时间戳。
  11. 根据权利要求9所述的方法,其中,所述对所述目标时间戳进行基于所述时长伸缩类型的映射,得到对应的原始时间戳,包括:
    当所述时长伸缩类型为所述时间重复类型时,确定所述目标时间戳与所述起始时间戳之间的第四差值,对所述第四差值与所述时长伸缩区间的长度进行求余数处理;
    将求余数处理结果与所述起始时间戳进行求和处理,得到对应的原始时间戳。
  12. 根据权利要求9所述的方法,其中,所述对所述目标时间戳进行基于所述时长伸缩类型的映射,得到对应的原始时间戳,包括:
    当所述时长伸缩类型为所述时间倒序重复类型时,确定所述目标时间戳与所述起始时间戳之间的第五差值;
    对所述第五差值与所述时长伸缩区间的长度进行求余数处理,得到求余数处理结果,并对所述第五差值与所述时长伸缩区间的长度进行求比值处理,得到比值结果;
    对所述比值结果进行取整处理,得到取整结果;
    当所述取整结果为偶数时,将所述求余数处理结果与所述起始时间戳进行求和处理,得到对应的原始时间戳;
    当所述取整结果为奇数时,确定所述时长伸缩区间的长度与所述求余数处理结果之间的第六差值,对所述第六差值与所述起始时间戳进行求和处理,得到对应的原始时间戳。
  13. 一种视频特效的处理装置,所述装置包括:
    文件获取模块,配置为获取视频特效文件,并从所述视频特效文件中提取时长伸缩策略;
    时长获取模块,配置为获取将所述视频特效文件应用于设计场景时需要实现的目标播放时长,其中,所述目标播放时长区别于所述视频特效文件的原始播放时长;
    特效帧确定模块,配置为根据所述时长伸缩策略,确定所述视频特效文件中与目标时间轴对应的特效帧;
    其中,所述目标时间轴的长度与所述目标播放时长一致;
    渲染模块,配置为根据与所述目标时间轴对应的特效帧进行渲染,得到符合所述目标播放时长的目标视频特效。
  14. 一种电子设备,所述电子设备包括:
    存储器,用于存储可执行指令;
    处理器,用于执行所述存储器中存储的可执行指令时,实现权利要求1至12任一项所述的视频特效的处理方法。
  15. 一种计算机可读存储介质,存储有可执行指令,用于被处理器执行时,实现权利要求1至12任一项所述的视频特效的处理方法。
PCT/CN2021/095994 2020-06-28 2021-05-26 视频特效的处理方法、装置以及电子设备 WO2022001508A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP21834318.4A EP4044604A4 (en) 2020-06-28 2021-05-26 VIDEO SPECIAL EFFECTS PROCESSING METHOD AND APPARATUS AND ELECTRONIC DEVICE
JP2022555878A JP7446468B2 (ja) 2020-06-28 2021-05-26 ビデオ特殊効果の処理方法、装置、電子機器及びコンピュータプログラム
US17/730,050 US12041372B2 (en) 2020-06-28 2022-04-26 Video special effect processing method and apparatus, and electronic device
US18/735,059 US20240323308A1 (en) 2020-06-28 2024-06-05 Video special effect processing method and apparatus, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010599847.1 2020-06-28
CN202010599847.1A CN111669623B (zh) 2020-06-28 2020-06-28 视频特效的处理方法、装置以及电子设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/730,050 Continuation US12041372B2 (en) 2020-06-28 2022-04-26 Video special effect processing method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2022001508A1 true WO2022001508A1 (zh) 2022-01-06

Family

ID=72390053

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/095994 WO2022001508A1 (zh) 2020-06-28 2021-05-26 视频特效的处理方法、装置以及电子设备

Country Status (5)

Country Link
US (2) US12041372B2 (zh)
EP (1) EP4044604A4 (zh)
JP (1) JP7446468B2 (zh)
CN (1) CN111669623B (zh)
WO (1) WO2022001508A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111669623B (zh) * 2020-06-28 2023-10-13 腾讯科技(深圳)有限公司 视频特效的处理方法、装置以及电子设备
CN112202751B (zh) * 2020-09-25 2022-06-07 腾讯科技(深圳)有限公司 动画的处理方法、装置、电子设备及存储介质
CN112702656A (zh) * 2020-12-21 2021-04-23 北京达佳互联信息技术有限公司 视频编辑方法和视频编辑装置
CN115209215B (zh) * 2021-04-09 2024-07-12 北京字跳网络技术有限公司 视频处理方法、装置及设备
CN113556576B (zh) * 2021-07-21 2024-03-19 北京达佳互联信息技术有限公司 视频生成方法及设备
CN115842815A (zh) * 2021-09-02 2023-03-24 微软技术许可有限责任公司 基于Web的视频效果添加
CN114419198A (zh) * 2021-12-21 2022-04-29 北京达佳互联信息技术有限公司 一种帧序列处理方法、装置、电子设备及存储介质
CN114630181B (zh) * 2022-02-24 2023-03-24 深圳亿幕信息科技有限公司 一种视频处理方法、系统、电子设备及介质
CN114664331B (zh) * 2022-03-29 2023-08-11 深圳万兴软件有限公司 一种周期可调的变速特效渲染方法、系统及其相关组件
CN117714767A (zh) * 2022-12-08 2024-03-15 北京冰封互娱科技有限公司 动画播放方法及装置
CN117372583B (zh) * 2023-12-08 2024-04-09 广东咏声动漫股份有限公司 一种动画文件处理方法、系统、电子设备及存储介质
CN117714774B (zh) * 2024-02-06 2024-04-19 北京美摄网络科技有限公司 视频特效封面的制作方法、装置、电子设备及存储介质

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0899666B1 (en) * 1997-08-25 2003-07-09 Sharp Kabushiki Kaisha Image processing apparatus displaying a catalog of different types of data in different manner
JP2004152132A (ja) * 2002-10-31 2004-05-27 Sharp Corp Image output method, image output apparatus, image output program, and computer-readable recording medium
JP2007065928A (ja) * 2005-08-30 2007-03-15 Toshiba Corp Information storage medium, information processing method, information transfer method, information reproduction method and apparatus, information recording method and apparatus, and program
US8170396B2 (en) * 2007-04-16 2012-05-01 Adobe Systems Incorporated Changing video playback rate
JP2012054619A (ja) * 2009-03-19 2012-03-15 Grass Valley Co Ltd Editing apparatus, editing method, editing program, and data structure
AU2011202182B1 (en) * 2011-05-11 2011-10-13 Frequency Ip Holdings, Llc Creation and presentation of selective digital content feeds
WO2014089345A1 (en) * 2012-12-05 2014-06-12 Frequency Ip Holdings, Llc Automatic selection of digital service feed
EP2763401A1 (de) * 2013-02-02 2014-08-06 Novomatic AG Embedded system for video processing with hardware means
US9170707B1 (en) * 2014-09-30 2015-10-27 Google Inc. Method and system for generating a smart time-lapse video clip
JP6623977B2 (ja) * 2016-08-26 2019-12-25 富士電機株式会社 Time-axis scaling apparatus, batch process monitoring apparatus, time-axis scaling system, and program
JP6218296B1 (ja) * 2016-12-26 2017-10-25 株式会社ユニコーン Video playback apparatus, video playback method, program therefor, and recording medium
CN107124624B (zh) * 2017-04-21 2022-09-23 腾讯科技(深圳)有限公司 Video data generation method and apparatus
CN110213504B (zh) * 2018-04-12 2021-10-08 腾讯科技(深圳)有限公司 Video processing method, information sending method, and related device
CN112150587A (zh) * 2019-06-11 2020-12-29 腾讯科技(深圳)有限公司 Animation data encoding and decoding method and apparatus, storage medium, and computer device
CN110445992A (zh) * 2019-08-16 2019-11-12 深圳特蓝图科技有限公司 XML-based video clip composition method
CN110708596A (zh) * 2019-09-29 2020-01-17 北京达佳互联信息技术有限公司 Video generation method and apparatus, electronic device, and readable storage medium
CN110677713B (zh) * 2019-10-15 2022-02-22 广州酷狗计算机科技有限公司 Video image processing method and apparatus, and storage medium
CN110769313B (zh) * 2019-11-19 2022-02-22 广州酷狗计算机科技有限公司 Video processing method and apparatus, and storage medium
CN111050203B (zh) * 2019-12-06 2022-06-14 腾讯科技(深圳)有限公司 Video processing method and apparatus, video processing device, and storage medium
CN113038149A (zh) * 2019-12-09 2021-06-25 上海幻电信息科技有限公司 Live video interaction method and apparatus, and computer device
CN111193876B (zh) * 2020-01-08 2021-09-07 腾讯科技(深圳)有限公司 Method and apparatus for adding special effects to video
CN111258526B (zh) * 2020-05-06 2020-08-28 上海幻电信息科技有限公司 Screen projection method and system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7903927B2 (en) * 2004-07-08 2011-03-08 Sony Corporation Editing apparatus and control method thereof, and program and recording medium
CN106060581A (zh) * 2016-07-05 2016-10-26 广州华多网络科技有限公司 Real-time video transmission data processing method, apparatus, and system
CN108632540A (zh) * 2017-03-23 2018-10-09 北京小唱科技有限公司 Video processing method and apparatus
KR20190075672A (ko) * 2017-12-21 2019-07-01 박행운 Method for providing additional information synchronized with video, and system for executing the same
CN110674341A (zh) * 2019-09-11 2020-01-10 广州华多网络科技有限公司 Special effect processing method and apparatus, electronic device, and storage medium
CN111031393A (zh) * 2019-12-26 2020-04-17 广州酷狗计算机科技有限公司 Video playback method and apparatus, terminal, and storage medium
CN111669623A (zh) * 2020-06-28 2020-09-15 腾讯科技(深圳)有限公司 Video special effect processing method and apparatus, and electronic device

Also Published As

Publication number Publication date
US20240323308A1 (en) 2024-09-26
CN111669623A (zh) 2020-09-15
CN111669623B (zh) 2023-10-13
EP4044604A1 (en) 2022-08-17
JP7446468B2 (ja) 2024-03-08
JP2023518388A (ja) 2023-05-01
EP4044604A4 (en) 2023-01-18
US20220264029A1 (en) 2022-08-18
US12041372B2 (en) 2024-07-16

Similar Documents

Publication Publication Date Title
WO2022001508A1 (zh) Video special effect processing method and apparatus, and electronic device
CN107770626B (zh) Video material processing method, video composition method and apparatus, and storage medium
US12017145B2 (en) Method and system of automatic animation generation
CN112184856B (zh) Multimedia processing apparatus supporting multi-layer special effects and animation blending
JP4937256B2 (ja) Smooth transitions between animations
CN107393013B (zh) Virtual roaming file generation and display method, apparatus, medium, device, and system
US8265457B2 (en) Proxy editing and rendering for various delivery outlets
CN112235604B (zh) Rendering method and apparatus, computer-readable storage medium, and electronic device
CN105630459A (zh) Method for converting a PPT into an HTML page
WO2020220773A1 (zh) Method and apparatus for displaying picture preview information, electronic device, and computer-readable storage medium
JP2023095832A (ja) Video processing method and apparatus, electronic device, and computer storage medium
WO2024198989A1 (zh) Video generation method and apparatus
CN113905254A (zh) Video composition method, apparatus, and system, and readable storage medium
KR20120000595A (ko) Method and system for providing an online multimedia content authoring tool running on multiple platforms
CN114866801B (zh) Video data processing method, apparatus, and device, and computer-readable storage medium
KR20050029266A (ko) File format, playback device, and method for converting presentation files created on a personal computer for use on network terminals, portable storage devices, and portable multimedia players
CN111813969A (zh) Multimedia data processing method and apparatus, electronic device, and computer storage medium
CN115065866B (zh) Video generation method and apparatus, device, and storage medium
US20240155175A1 (en) Method and apparatus for generating interactive video
WO2024046029A9 (zh) Media content creation method and apparatus, device, and storage medium
CN115134658B (zh) Video processing method and apparatus, device, and storage medium
CN112565268B (zh) Transmission control method, apparatus, and device for multimedia information, and computer storage medium
US20140013229A1 (en) Export of playback logic to multiple playback formats
JP2024534743A (ja) Video generation method, apparatus, device, storage medium, and program product
CN117406891A (zh) Method and apparatus for generating time-sequential open universal interaction standard files

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21834318; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2021834318; Country of ref document: EP; Effective date: 20220505)
ENP Entry into the national phase (Ref document number: 2022555878; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)