CN114173192A - Method and system for adding dynamic special effect based on exported video - Google Patents

Method and system for adding dynamic special effect based on exported video

Info

Publication number
CN114173192A
Authority
CN
China
Prior art keywords
animation
file
video
view
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111499682.1A
Other languages
Chinese (zh)
Inventor
张征
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Avanti Electronic Technology Co ltd
Original Assignee
Guangzhou Avanti Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Avanti Electronic Technology Co ltd
Priority to CN202111499682.1A
Publication of CN114173192A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/435: Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/835: Generation of protective data, e.g. certificates
    • H04N 21/8358: Generation of protective data, e.g. certificates involving watermark

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and a system for adding dynamic special effects based on an exported video, belonging to the technical field of mobile internet development. The method for adding dynamic special effects based on an exported video comprises the following steps: step 1, obtaining the video file to which a special effect is to be added, obtaining pictures in PNG and Bitmap formats, generating a screenshot using a view control, and making a view animation file describing the animation actions; step 2, storing the pictures, screenshots and produced files obtained in step 1 into system memory, and parsing the corresponding view animation file through a renderer to obtain the data to be processed.

Description

Method and system for adding dynamic special effect based on exported video
Technical Field
The invention belongs to the technical field of mobile internet development, and particularly relates to a method and a system for adding a dynamic special effect based on an exported video.
Background
In today's society, the traditional forms of introduction based on text and pictures can no longer satisfy the presentation of products or the way viewers receive information, so dynamic video is needed to present such content. When publishing a video, adding a watermark has gradually become a popular practice, either to protect the copyright of the video or to facilitate advertising and promotion. In the related art, a client used to publish videos may provide watermarks: if a user wants to add a watermark when publishing a video, the user selects the watermark to trigger a watermark adding instruction, and after receiving the instruction the client adds the selected watermark to the video to be published;
in other existing technologies, a video is edited and synthesized into a required video, when a dynamic sticker or watermark is added, a watermark or a sticker object to be added is decomposed into multiple frames of pictures, then the multiple frames of pictures are compressed into a file and placed into an apk, the object is used for real-time preview and edition during video editing, when the operation is performed, the watermark or the sticker object is not flexible enough when needing to be modified, a special art designer is needed to modify and adjust the watermark or the sticker object, too many pictures are needed for the watermark and the sticker object, and the watermark and the sticker object occupy a large space even if being compressed, the adjustment is not good, time and labor are wasted, and the cost is also consumed.
Disclosure of Invention
The main object of the present invention is to provide a method and a system for adding a dynamic special effect based on an exported video, aiming at solving the problems in the prior art that the watermark on a video is not flexible enough and that watermark and sticker objects themselves require too many pictures and occupy a large amount of space even when compressed, thereby improving the flexibility of modifying the watermark and reducing the number of pictures needed for watermark and sticker objects.
In order to achieve the above object, the present invention provides a method and a system for adding a dynamic special effect based on an exported video. The method for adding a dynamic special effect based on an exported video comprises the following steps:
step 1, obtaining the video file to which the special effect is to be added, obtaining pictures in PNG and Bitmap formats, generating a screenshot using a view control, and making a view animation file describing the animation actions;
step 2, storing the pictures, screenshots and produced files obtained in step 1 into system memory, and parsing the corresponding view animation file through a renderer to obtain the data to be processed;
step 3, passing the video file as texture data to the corresponding view animation file, and using a video renderer to obtain a new animation video file with the rendered watermark and sticker;
step 4, previewing the animation video file generated in step 3; after the preview passes, performing software encoding and decoding of the view animation file, pictures and screenshots from step 1 using a code generator and an encoding tool to obtain the data to be synthesized, running the video renderer, synthesizing the animation video file with the pictures and screenshots, and obtaining the required video.
Further, the control used to generate the screenshot in step 1 is an Android view control.
Further, the animation actions in step 1 are described by an XML file of the Android view animation, and the video animation action types include fade animation, scale animation and rotation animation.
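As an illustration only (the patent does not include its animation files), a view animation XML of the kind described here could combine the three action types in a single set and be attached to the watermark view for preview. The resource name watermark_effect and all attribute values below are assumptions, shown as a minimal Kotlin sketch:

import android.view.View
import android.view.animation.AnimationUtils

// Hypothetical res/anim/watermark_effect.xml combining the three action types
// named above (fade, scale, rotation):
//
// <set xmlns:android="http://schemas.android.com/apk/res/android">
//     <alpha android:fromAlpha="0.0" android:toAlpha="1.0" android:duration="1000"/>
//     <scale android:fromXScale="0.5" android:toXScale="1.0"
//            android:fromYScale="0.5" android:toYScale="1.0"
//            android:pivotX="50%" android:pivotY="50%" android:duration="1000"/>
//     <rotate android:fromDegrees="0" android:toDegrees="360"
//             android:pivotX="50%" android:pivotY="50%" android:duration="1000"/>
// </set>

fun playWatermarkAnimation(watermarkView: View) {
    // Load the XML description and run it on the watermark view for preview.
    // R.anim.watermark_effect is the assumed resource shown in the comment above.
    val animation = AnimationUtils.loadAnimation(watermarkView.context, R.anim.watermark_effect)
    watermarkView.startAnimation(animation)
}

Editing the XML file alone is enough to change the animation, which is the flexibility gain the invention relies on.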
Further, obtaining a new animation video file with the rendered watermark and sticker in step 3 means that the corresponding OpenGL API calls or GLSL shader code are executed inside a renderer code file to parse the corresponding view animation file, the video file is passed in as texture data, the render code is executed, and a video carrying the watermark and sticker is rendered.
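The patent does not publish its render code, so the fragment shader below is only a hedged sketch of what such GLSL might look like: the decoded video frame is bound as an external OES texture, the watermark or sticker picture as an ordinary 2D texture, and uWatermarkAlpha is assumed to be driven by the parsed view animation. All identifiers are illustrative. It is written as a Kotlin string constant, the way Android render code usually embeds shaders, and would be paired with a vertex shader that supplies vTexCoord:

// Illustrative fragment shader (assumed, not taken from the patent).
const val WATERMARK_FRAGMENT_SHADER = """
    #extension GL_OES_EGL_image_external : require
    precision mediump float;
    varying vec2 vTexCoord;                     // passed in by the vertex shader
    uniform samplerExternalOES uVideoTexture;   // decoded video frame
    uniform sampler2D uWatermarkTexture;        // watermark / sticker picture or screenshot
    uniform float uWatermarkAlpha;              // driven by the fade action of the view animation
    void main() {
        vec4 video = texture2D(uVideoTexture, vTexCoord);
        vec4 mark  = texture2D(uWatermarkTexture, vTexCoord);
        // Alpha-blend the watermark over the video frame.
        gl_FragColor = mix(video, mark, mark.a * uWatermarkAlpha);
    }
"""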
Further, synthesizing the animation video file with the pictures and screenshots in step 4 means that the corresponding OpenGL API calls or GLSL shader code are executed inside the renderer code file to parse the corresponding view animation file, pictures and screenshots, the video file is passed in as texture data, soft decoding is performed using EGL + ffmpeg, the render code is run, the animation video file is synthesized with the pictures and screenshots, and the required video file is obtained.
Further, in step 4 the view animation file, pictures and screenshots from step 1 may also be decoded, and the data to be synthesized into the video obtained, by performing hardware encoding and decoding with the Android MediaCodec.
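As context for this hardware path, the sketch below shows how MediaCodec could decode the source video onto a Surface that backs the OpenGL texture used by the render code. It is an assumption-based illustration rather than code from the patent; the caller supplies the file path and output Surface:

import android.media.MediaCodec
import android.media.MediaExtractor
import android.media.MediaFormat
import android.view.Surface

// Hardware decode loop: MediaCodec decodes the source video and releases each
// frame onto the given Surface, which can back a samplerExternalOES texture.
fun hardwareDecodeToSurface(videoPath: String, outputSurface: Surface) {
    val extractor = MediaExtractor().apply { setDataSource(videoPath) }
    val trackIndex = (0 until extractor.trackCount).first {
        extractor.getTrackFormat(it).getString(MediaFormat.KEY_MIME)!!.startsWith("video/")
    }
    extractor.selectTrack(trackIndex)
    val format = extractor.getTrackFormat(trackIndex)
    val codec = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME)!!)
    codec.configure(format, outputSurface, null, 0)
    codec.start()

    val info = MediaCodec.BufferInfo()
    var inputDone = false
    var outputDone = false
    while (!outputDone) {
        if (!inputDone) {
            val inIndex = codec.dequeueInputBuffer(10_000)
            if (inIndex >= 0) {
                val buffer = codec.getInputBuffer(inIndex)!!
                val size = extractor.readSampleData(buffer, 0)
                if (size < 0) {
                    codec.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM)
                    inputDone = true
                } else {
                    codec.queueInputBuffer(inIndex, 0, size, extractor.sampleTime, 0)
                    extractor.advance()
                }
            }
        }
        val outIndex = codec.dequeueOutputBuffer(info, 10_000)
        if (outIndex >= 0) {
            // render = true sends the decoded frame to the Surface / GL texture.
            codec.releaseOutputBuffer(outIndex, true)
            if (info.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM != 0) outputDone = true
        }
    }
    codec.stop(); codec.release(); extractor.release()
}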
The invention also provides a system for adding a dynamic special effect based on an exported video, comprising:
a storage module, an analysis module, a preview module, a processing module and a synthesis module;
the storage module stores pictures and screenshots in PNG and Bitmap formats, view animation files describing animation actions, and the video files to be synthesized;
the analysis module extracts a view animation file describing animation actions from the storage module, parses it, obtains the data to be previewed, and sends the data to be previewed to the preview module;
the preview module receives the data to be previewed sent by the analysis module, extracts the video file from the storage module, processes it into texture data, combines the texture data with the data to be previewed to obtain a new animation video file with the rendered watermark and sticker, and sends the texture data to the synthesis module;
the processing module extracts pictures and screenshots in PNG and Bitmap formats from the storage module, makes view animation files describing animation actions, performs software encoding and decoding of the extracted files using a code generator and an encoding tool to generate the animation data to be synthesized, and sends the data to be synthesized to the synthesis module;
the synthesis module receives the video texture data sent by the preview module and the animation data to be synthesized sent by the processing module, and synthesizes the animation video file with the pictures and screenshots.
Further, the storage module further comprises a drawing module, and the drawing module is used to make the Android view animation XML file so as to obtain a view animation file containing the animation actions.
Further, the animation action types of the view animation file containing animation actions include fade animation, scale animation and rotation animation.
Further, the processing modes used by the synthesis module to synthesize the animation video file with the pictures and screenshots include two modes, EGL + ffmpeg software encoding/decoding and Android MediaCodec hardware encoding/decoding; after data is received and decoded by either mode, the render code is used to synthesize the new video.
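A minimal sketch of how the two decode modes of the synthesis module could be dispatched; the enum, the function names and the stub bodies are assumptions introduced for illustration, not part of the patent:

import android.view.Surface

// Assumed names; the two paths correspond to the modes named above.
enum class DecodeMode { SOFT_EGL_FFMPEG, HARD_MEDIACODEC }

// Placeholder signatures; real implementations would wrap ffmpeg/EGL and MediaCodec respectively.
fun softwareDecodeWithFfmpeg(videoPath: String, surface: Surface) { /* ffmpeg + EGL off-screen path */ }
fun hardwareDecodeToSurface(videoPath: String, surface: Surface) { /* MediaCodec path, see the earlier sketch */ }

fun decodeForSynthesis(videoPath: String, mode: DecodeMode, outputSurface: Surface) =
    when (mode) {
        // Software path: ffmpeg decodes frames, EGL hosts the render code off-screen.
        DecodeMode.SOFT_EGL_FFMPEG -> softwareDecodeWithFfmpeg(videoPath, outputSurface)
        // Hardware path: MediaCodec decodes directly onto the Surface backing the GL texture.
        DecodeMode.HARD_MEDIACODEC -> hardwareDecodeToSurface(videoPath, outputSurface)
    }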
Compared with the prior art, the invention has the following beneficial effects:
(1) The animation actions of the video to be synthesized are obtained from an Android view animation XML file. When the style or content of the animation needs to be modified, the animation actions can be changed directly by editing the view animation XML file as required, which saves a great deal of working time; the operation is easy because Android view animation XML files are familiar to developers, the video preview effect is achieved with an Android view, which is convenient to operate, and the dynamic watermark or dynamic sticker used occupies little space, saving a large amount of memory and making adjustment more convenient;
(2) The invention solves the problems in the prior art that the watermark on a video is not flexible enough and that watermark and sticker objects themselves require too many pictures and therefore occupy a large amount of space even when compressed; it improves the flexibility of modifying the watermark, reduces the number of watermark and sticker objects and the number of pictures, and saves a large amount of memory space.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application and should not be regarded as limiting its scope; those skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a schematic diagram of the system of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are only some, not all, embodiments of the present application. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented with the accompanying drawings, is not intended to limit the scope of the claimed application, but merely represents selected embodiments of the application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application. The present invention is described in detail below with reference to the figures and specific embodiments.
Example 1
As shown in FIG. 1, the method for adding a dynamic special effect based on an exported video comprises the following steps:
step 1, obtaining the video file to which the special effect is to be added, obtaining pictures in PNG and Bitmap formats, generating a screenshot using a view control, and making a view animation file describing the animation actions;
step 2, storing the pictures, screenshots and produced files obtained in step 1 into system memory, and parsing the corresponding view animation file through a renderer to obtain the data to be processed;
step 3, passing the video file as texture data to the corresponding view animation file, and using a video renderer to obtain a new animation video file with the rendered watermark and sticker;
step 4, previewing the animation video file generated in step 3; after the preview passes, performing software encoding and decoding of the view animation file, pictures and screenshots from step 1 using a code generator and an encoding tool to obtain the data to be synthesized, running the video renderer, synthesizing the animation video file with the pictures and screenshots, and obtaining the required video.
the method is explained below with reference to specific embodiments:
First, the video file to which the special effect is to be added is obtained, pictures in PNG and Bitmap formats are obtained, a screenshot is generated using a view control, and a view animation file describing the animation actions is made. Several basic pictures in PNG and Bitmap formats can be found through a web search (for example on Baidu), the screenshot is then generated with the Android view control tool, and the animation is described with an Android view animation XML file; the video animation action types include fade animation, scale animation and rotation animation.
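For illustration, the screenshot "generated by a control" could be produced by drawing an already laid-out Android view into a Bitmap; this is a common technique and an assumption here, not code taken from the patent:

import android.graphics.Bitmap
import android.graphics.Canvas
import android.view.View

// Renders an already laid-out View into a Bitmap, giving the "screenshot"
// that is later stored alongside the PNG/Bitmap pictures.
fun captureViewScreenshot(view: View): Bitmap {
    val bitmap = Bitmap.createBitmap(view.width, view.height, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(bitmap)
    view.draw(canvas)   // let the view draw itself onto the bitmap-backed canvas
    return bitmap
}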
The obtained pictures and screenshots and the produced files are stored in system memory, and the corresponding view animation file is parsed through the renderer to obtain the data to be processed. The corresponding picture files and XML files are placed in a local folder on the phone and loaded into memory at run time; the corresponding XML file is parsed through a renderer code file containing the corresponding OpenGL API calls or GLSL shader code, the video file is passed in as texture data, and the render code is executed on the video renderer to draw and render a video carrying the watermark or sticker, so that a new animation video file with the watermark and sticker is obtained.
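One plausible way (again an assumption, since the patent does not show its render code) to make such a picture or screenshot available to the GLSL render code is to upload the Bitmap as an OpenGL ES 2D texture:

import android.graphics.Bitmap
import android.opengl.GLES20
import android.opengl.GLUtils

// Uploads a watermark/sticker Bitmap as a 2D texture so the fragment shader can
// sample it (for example through a uniform such as uWatermarkTexture in the sketch above).
fun uploadBitmapAsTexture(bitmap: Bitmap): Int {
    val ids = IntArray(1)
    GLES20.glGenTextures(1, ids, 0)
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ids[0])
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR)
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR)
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0)
    return ids[0]   // texture id to bind before drawing the composited frame
}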
The generated animation video file is previewed. After the preview passes, software encoding and decoding of the view animation file, pictures and screenshots is performed with a code generator and an encoding tool to obtain the data to be synthesized, and the video renderer is run to synthesize the animation video file with the pictures and screenshots and obtain the required video: the corresponding OpenGL API calls or GLSL shader code inside the renderer code file are executed to parse the corresponding view animation file, pictures and screenshots, the video file is passed in as texture data, soft decoding is performed with EGL + ffmpeg, the render code is run, and the animation video file is synthesized with the pictures and screenshots to obtain the required video file.
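The EGL part of the "EGL + ffmpeg" soft path presumably supplies an off-screen GL context in which the render code composites frames before they are encoded; the setup below is a minimal sketch under that assumption, using standard EGL14 calls with illustrative configuration values:

import android.opengl.EGL14
import android.opengl.EGLContext
import android.opengl.EGLDisplay

// Creates a small off-screen (pbuffer) EGL context in which the render code can
// composite watermark/sticker textures over decoded frames before encoding.
// The pbuffer surface is left current on the calling thread.
fun createOffscreenEglContext(width: Int, height: Int): Pair<EGLDisplay, EGLContext> {
    val display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY)
    val version = IntArray(2)
    EGL14.eglInitialize(display, version, 0, version, 1)

    val configAttribs = intArrayOf(
        EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8, EGL14.EGL_BLUE_SIZE, 8,
        EGL14.EGL_ALPHA_SIZE, 8,
        EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
        EGL14.EGL_SURFACE_TYPE, EGL14.EGL_PBUFFER_BIT,
        EGL14.EGL_NONE
    )
    val configs = arrayOfNulls<android.opengl.EGLConfig>(1)
    val numConfigs = IntArray(1)
    EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0)

    val contextAttribs = intArrayOf(EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE)
    val context = EGL14.eglCreateContext(display, configs[0], EGL14.EGL_NO_CONTEXT, contextAttribs, 0)

    val surfaceAttribs = intArrayOf(EGL14.EGL_WIDTH, width, EGL14.EGL_HEIGHT, height, EGL14.EGL_NONE)
    val surface = EGL14.eglCreatePbufferSurface(display, configs[0], surfaceAttribs, 0)
    EGL14.eglMakeCurrent(display, surface, surface, context)
    return display to context
}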
Because the animation actions of the video to be synthesized are obtained from the Android view animation XML file, the animation can be changed directly by editing the view animation XML file when its style or content needs to be modified, which saves a great deal of working time; the operation is easy because Android view animation XML files are familiar, the video preview effect is achieved with an Android view, which is convenient to operate, and the dynamic watermark or dynamic sticker used occupies little space, saving a large amount of memory and making adjustment convenient.
Example 2
The implementation steps of this method for adding a dynamic special effect based on an exported video are basically the same as those of Embodiment 1. Further, in step 4 the view animation file, pictures and screenshots from step 1 may also be decoded, and the data to be synthesized into the video obtained, by using the Android MediaCodec;
in a specific implementation, special-effect synthesis of the video is performed with different decoding modes, which guarantees the diversity of decoding and makes decoding feasible under different conditions.
Example 3
A system for adding a dynamic special effect based on an exported video comprises:
a storage module, an analysis module, a preview module, a processing module and a synthesis module;
the storage module stores pictures and screenshots in PNG and Bitmap formats, view animation files describing animation actions, and the video files to be synthesized;
the analysis module extracts a view animation file describing animation actions from the storage module, parses it, obtains the data to be previewed, and sends the data to be previewed to the preview module;
the preview module receives the data to be previewed sent by the analysis module, extracts the video file from the storage module, processes it into texture data, combines the texture data with the data to be previewed to obtain a new animation video file with the rendered watermark and sticker, and sends the texture data to the synthesis module;
the processing module extracts pictures and screenshots in PNG and Bitmap formats from the storage module, makes view animation files describing animation actions, performs software encoding and decoding of the extracted files using a code generator and an encoding tool to generate the animation data to be synthesized, and sends the data to be synthesized to the synthesis module;
the synthesis module receives the video texture data sent by the preview module and the animation data to be synthesized sent by the processing module, and synthesizes the animation video file with the pictures and screenshots;
in a specific implementation, the video file to which the special effect is to be added is first obtained, pictures in PNG and Bitmap formats are obtained, a screenshot is generated using a view control, and a view animation file describing the animation actions is made; several basic pictures in PNG and Bitmap formats can be found through a web search (for example on Baidu), the screenshot is then generated with the Android view control tool, the animation is described with an Android view animation XML file, the video animation action types include fade animation, scale animation and rotation animation, and the data are stored in the memory space of the phone;
the obtained pictures and screenshots and the produced files are stored in system memory, and the corresponding view animation file is parsed through the renderer to obtain the data to be processed; the corresponding picture files and XML files are placed in a local folder on the phone and loaded into memory at run time, the corresponding XML file is parsed through a renderer code file containing the corresponding OpenGL API calls or GLSL shader code, the video file is passed in as texture data, and the render code is executed on the video renderer to draw and render a video carrying the watermark or sticker, so that a new animation video file with the watermark and sticker is obtained;
the generated animation video file is previewed; after the preview passes, software encoding and decoding of the view animation file, pictures and screenshots is performed with a code generator and an encoding tool to obtain the data to be synthesized, and the video renderer is run to synthesize the animation video file with the pictures and screenshots and obtain the required video: the corresponding OpenGL API calls or GLSL shader code inside the renderer code file are executed to parse the corresponding view animation file, pictures and screenshots, the video file is passed in as texture data, soft decoding is performed with EGL + ffmpeg, the render code is run, and the animation video file is synthesized with the pictures and screenshots to obtain the required video file;
the animation actions of the video to be synthesized are obtained from the Android view animation XML file, so that when the style or content of the animation needs to be modified, the animation actions can be changed directly by editing the view animation XML file, which saves a great deal of working time; the operation is easy because Android view animation XML files are familiar, the video preview effect is achieved with an Android view, which is convenient to operate, and the dynamic watermark or dynamic sticker used occupies little space, saving a large amount of memory and making adjustment more convenient;
the problems in the prior art that the watermark on a video is not flexible enough to adjust and that watermark and sticker objects themselves require too many pictures and occupy a large amount of space even when compressed are thereby solved; the flexibility of modifying the watermark is improved, the number of watermark and sticker objects and the number of pictures are reduced, and a large amount of memory space is saved.
The above examples merely represent preferred embodiments of the present invention, and although their description is relatively specific and detailed, it should not be construed as limiting the scope of the present invention. It should be noted that various changes, modifications and substitutions can be made by those skilled in the art without departing from the spirit of the present invention, and all of these fall within the scope of the present invention.

Claims (10)

1. A method for adding a dynamic special effect based on an exported video, characterized by comprising the following steps:
step 1, obtaining the video file to which the special effect is to be added, obtaining pictures in PNG and Bitmap formats, generating a screenshot using a view control, and making a view animation file describing the animation actions;
step 2, storing the pictures, screenshots and produced files obtained in step 1 into system memory, and parsing the corresponding view animation file through a renderer to obtain the data to be processed;
step 3, passing the video file as texture data to the corresponding view animation file, and using a video renderer to obtain a new animation video file with the rendered watermark and sticker;
step 4, previewing the animation video file generated in step 3; after the preview passes, performing software encoding and decoding of the view animation file, pictures and screenshots from step 1 using a code generator and an encoding tool to obtain the data to be synthesized, running the video renderer, synthesizing the animation video file with the pictures and screenshots, and obtaining the required video.
2. The method for adding a dynamic special effect based on an exported video according to claim 1, characterized in that: the control used to generate the screenshot in step 1 is an Android view control.
3. The method for adding a dynamic special effect based on an exported video according to claim 1, characterized in that: the animation actions in step 1 are described by an XML file of the Android view animation, and the video animation action types include fade animation, scale animation and rotation animation.
4. The method for adding a dynamic special effect based on an exported video according to claim 1, characterized in that: obtaining a new animation video file with the rendered watermark and sticker in step 3 means that the corresponding OpenGL API calls or GLSL shader code inside a renderer code file are executed to parse the corresponding view animation file, the video file is passed in as texture data, the render code is executed, and the video carrying the watermark and sticker is rendered.
5. The method for adding a dynamic special effect based on an exported video according to claim 1, characterized in that: synthesizing the animation video file with the pictures and screenshots in step 4 means that the corresponding OpenGL API calls or GLSL shader code inside the renderer code file are executed to parse the corresponding view animation file, pictures and screenshots, the video file is passed in as texture data, soft decoding is performed with EGL + ffmpeg, the render code is run, the animation video file is synthesized with the pictures and screenshots, and the required video file is obtained.
6. The method for adding a dynamic special effect based on an exported video according to claim 1, characterized in that: in step 4, the view animation file, pictures and screenshots from step 1 may also be decoded, and the data to be synthesized into the video obtained, by performing hardware encoding and decoding with the Android MediaCodec.
7. A system for adding a dynamic special effect based on an exported video, characterized by comprising:
a storage module, an analysis module, a preview module, a processing module and a synthesis module; wherein
the storage module stores pictures and screenshots in PNG and Bitmap formats, view animation files describing animation actions, and the video files to be synthesized;
the analysis module extracts a view animation file describing animation actions from the storage module, parses it, obtains the data to be previewed, and sends the data to be previewed to the preview module;
the preview module receives the data to be previewed sent by the analysis module, extracts the video file from the storage module, processes it into texture data, combines the texture data with the data to be previewed to obtain a new animation video file with the rendered watermark and sticker, and sends the texture data to the synthesis module;
the processing module extracts pictures and screenshots in PNG and Bitmap formats from the storage module, makes view animation files describing animation actions, performs software encoding and decoding of the extracted files using a code generator and an encoding tool to generate the animation data to be synthesized, and sends the data to be synthesized to the synthesis module;
the synthesis module receives the video texture data sent by the preview module and the animation data to be synthesized sent by the processing module, and synthesizes the animation video file with the pictures and screenshots.
8. The system for adding a dynamic special effect based on an exported video according to claim 7, characterized in that: the storage module further comprises a drawing module, and the drawing module is used to make the Android view animation XML file so as to obtain a view animation file containing the animation actions.
9. The system for adding a dynamic special effect based on an exported video according to claim 8, characterized in that: the animation action types of the view animation file containing animation actions include fade animation, scale animation and rotation animation.
10. The system for adding a dynamic special effect based on an exported video according to claim 7, characterized in that: the processing modes used by the synthesis module to synthesize the animation video file with the pictures and screenshots include two modes, EGL + ffmpeg software encoding/decoding and Android MediaCodec hardware encoding/decoding, and after data is received and decoded by either mode, the render code is used to synthesize the new video.
CN202111499682.1A 2021-12-09 2021-12-09 Method and system for adding dynamic special effect based on exported video Pending CN114173192A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111499682.1A CN114173192A (en) 2021-12-09 2021-12-09 Method and system for adding dynamic special effect based on exported video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111499682.1A CN114173192A (en) 2021-12-09 2021-12-09 Method and system for adding dynamic special effect based on exported video

Publications (1)

Publication Number Publication Date
CN114173192A (en) 2022-03-11

Family

ID=80484897

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111499682.1A Pending CN114173192A (en) 2021-12-09 2021-12-09 Method and system for adding dynamic special effect based on exported video

Country Status (1)

Country Link
CN (1) CN114173192A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106804003A (en) * 2017-03-09 2017-06-06 广州四三九九信息科技有限公司 Video editing method and device based on ffmpeg
JP2017118559A (en) * 2017-02-06 2017-06-29 株式会社クリエ・ジャパン Moving image generation server, moving image generation program, moving image generation method, and moving image generation system
CN107124624A (en) * 2017-04-21 2017-09-01 腾讯科技(深圳)有限公司 The method and apparatus of video data generation
CN109583158A (en) * 2018-11-15 2019-04-05 福建南威软件有限公司 A kind of electronics license copy generation method based on dynamic watermark
CN111193876A (en) * 2020-01-08 2020-05-22 腾讯科技(深圳)有限公司 Method and device for adding special effect in video
CN111355960A (en) * 2018-12-21 2020-06-30 北京字节跳动网络技术有限公司 Method and device for synthesizing video file, mobile terminal and storage medium
CN111899155A (en) * 2020-06-29 2020-11-06 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and storage medium
CN113420268A (en) * 2021-07-15 2021-09-21 南京中孚信息技术有限公司 Watermark adding method, device, equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20220311)