CN110784739A - Video synthesis method and device based on AE - Google Patents
- Publication number
- CN110784739A CN110784739A CN201911027453.2A CN201911027453A CN110784739A CN 110784739 A CN110784739 A CN 110784739A CN 201911027453 A CN201911027453 A CN 201911027453A CN 110784739 A CN110784739 A CN 110784739A
- Authority
- CN
- China
- Prior art keywords
- video
- replacement data
- template
- video template
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
- H04N21/23424—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
Abstract
The application discloses an AE-based video synthesis method and device. In the method, a mobile terminal acquires a video template designed in AE; receives replacement data, where the replacement data is used to replace original data in a preset layer of the video template; synthesizes the replacement data with the video template to obtain a target video; and displays the target video. The method and device solve the problem that the existing way of using AE-designed animations on a mobile terminal cannot meet users' personalization needs.
Description
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a method and an apparatus for video synthesis based on AE.
Background
Adobe After Effects, abbreviated "AE", is graphics and video processing software released by Adobe, and is a tool commonly used by individual designers and studios engaged in designing animations, videos, and the like. When an animation designed in AE is to be used on a mobile terminal, the file must first be converted by corresponding software, and the converted animation is then displayed on the mobile terminal.
Currently, AE-designed animations are used on mobile terminals as follows: the animation file is exported from AE through an open-source code library, imported using code libraries on the different platforms, and then displayed on the mobile terminal. As the above description shows, this approach can only display the animation exactly as designed in AE. A user who does not use the AE software and wants to modify a designer's AE animation according to personal preference cannot modify or adjust the animation autonomously, so the user's personalization needs cannot be met.
Disclosure of Invention
The present application mainly aims to provide an AE-based video synthesis method and apparatus, so as to solve the problem that the existing way of using AE-designed animations on a mobile terminal cannot meet users' personalization needs.
To achieve the above object, according to a first aspect of the present application, there is provided a method of AE-based video composition.
The method for video synthesis based on AE according to the application comprises the following steps:
the mobile terminal acquires a video template designed in AE;
receiving replacement data, wherein the replacement data is used for replacing original data in a preset layer of a video template;
synthesizing the replacement data and the video template to obtain a target video;
and displaying the target video.
Further, before receiving the replacement data, the method further comprises:
and modifying the setting of a preset layer in the video template to obtain an editable video template.
Further, the synthesizing the replacement data and the video template to obtain the target video includes:
and synthesizing the replacement data and the editable video template to obtain the target video.
Further, the method is designed and developed using a universal underlying language so as to be applicable to mobile terminals of different platform systems.
Further, the synthesizing of the replacement data with the editable video template includes:
and calling an open graphics library OpenGL interface to synthesize the replacement data and the editable video template.
Further, the synthesizing the replacement data with the editable video template further comprises:
and superposing the special effects in the editable video template according to the filter chains of different filter combinations.
Further, the synthesizing the replacement data with the editable video template further comprises:
performing data analysis on the editable video template and the replacement data;
and importing the analyzed data into a GPU (graphics processing Unit) for multi-texture synthesis to obtain a target video.
In order to achieve the above object, according to a second aspect of the present application, there is provided an apparatus for AE-based video composition.
The device for AE-based video synthesis according to the application comprises:
the acquisition unit is used for acquiring the video template designed in the AE by the mobile terminal;
the receiving unit is used for receiving replacement data, where the replacement data is used for replacing original data in a preset layer of the video template;
the synthesizing unit is used for synthesizing the replacement data and the video template to obtain a target video;
and the display unit is used for displaying the target video.
Further, the apparatus further comprises:
and the modifying unit is used for modifying the setting of a preset layer in the video template before receiving the replacement data to obtain an editable video template.
Further, the synthesis unit is configured to:
and synthesizing the replacement data and the editable video template to obtain the target video.
Further, the device is designed and developed by using a universal underlying language so as to be suitable for mobile terminals of different platform systems.
Further, the synthesis unit includes:
and the calling module is used for calling an open graphics library OpenGL interface to synthesize the replacement data and the editable video template.
Further, the synthesis unit further includes:
and the superposition module is used for superposing the special effects in the editable video template according to the filter chains of different filter combinations.
Further, the synthesis unit further includes:
the analysis module is used for carrying out data analysis on the editable video template and the replacement data;
and the texture synthesis module is used for importing the analyzed data into a GPU (graphics processing Unit) for multi-texture synthesis to obtain a target video.
To achieve the above object, according to a third aspect of the present application, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the method of AE-based video composition according to any one of the first aspect.
In the embodiments of the present application, in the AE-based video synthesis method and device, the mobile terminal first acquires a video template designed in AE; receives replacement data, where the replacement data is used to replace original data in a preset layer of the video template; synthesizes the replacement data with the video template to obtain a target video; and displays the target video. Thus, in the present application, after the mobile terminal obtains the video template designed in AE, the data in the preset layer of the video template can be replaced, so that the user can adjust the video template according to personal preference, and the user's personalization needs are met.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, serve to provide a further understanding of the application and to enable other features, objects, and advantages of the application to be more apparent. The drawings and their description illustrate the embodiments of the invention and do not limit it. In the drawings:
fig. 1 is a flowchart of a method for AE-based video composition according to an embodiment of the present application;
FIG. 2 is a flow chart of another method for AE-based video compositing provided in accordance with an embodiment of the present application;
FIG. 3 is a schematic diagram of a process for compositing replacement data with an editable video template according to an embodiment of the application;
fig. 4 is a schematic flowchart of a method for AE-based video composition applied to a mobile terminal of different platforms according to an embodiment of the present application;
fig. 5 is a block diagram illustrating an AE-based video composition apparatus according to an embodiment of the present disclosure;
fig. 6 is a block diagram of another AE-based video compositing apparatus provided in an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description, claims, and drawings of this application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be understood that data so labeled may be interchanged under appropriate circumstances, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
According to an embodiment of the present application, there is provided a method for AE-based video composition, as shown in fig. 1, the method including the steps of:
and S101, the mobile terminal acquires the video template designed in the AE.
After a designer finishes a video template in the AE software, two items can be exported: data and images, where data is a data file and images is a folder containing materials such as pictures. The files contained in data and images — pictures, text, audio, video, and the like — can be brought onto the mobile terminal through a third-party plug-in.

The mobile terminal acquiring the video template designed in AE means acquiring the pictures, text, audio, video, and other files corresponding to the video template.
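Purely as an illustrative sketch (the structure names and file paths below are assumptions, not part of the disclosure), the exported template could be represented on the mobile terminal roughly as follows:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical in-memory form of an exported AE template: the "data" file
// describes the layers, and the "images" folder holds the referenced assets.
struct TemplateLayer {
    std::string name;      // layer name as designed in AE
    std::string assetPath; // picture/video/text asset the layer uses
};

struct VideoTemplate {
    std::vector<TemplateLayer> layers;         // parsed from the data file
    std::map<std::string, std::string> assets; // layer name -> asset file
};

// Build a template from a parsed layer list; the paths are illustrative only.
VideoTemplate loadTemplate() {
    VideoTemplate t;
    t.layers = {{"background", "images/bg.png"}, {"title", "images/title.txt"}};
    for (const auto& l : t.layers) t.assets[l.name] = l.assetPath;
    return t;
}
```

In a real implementation the data file would be parsed by the third-party plug-in rather than hard-coded as above.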
S102, receiving replacement data.
The replacement data is the data the user selects to replace the original data in a preset layer of the video template. The replacement data may be any combination of images, video, audio, and text.
Before the user selects the replacement data, the mobile terminal needs to display the pictures, text, audio, video, and the like corresponding to the video template acquired in step S101, so that the user can identify, against the video template, the data to be replaced, and then select replacement data for it (from local storage, by online download, or the like).
And S103, synthesizing the replacement data and the video template to obtain the target video.
The process of synthesizing the replacement data with the video template to obtain the target video comprises: first parsing the files such as pictures, text, audio, and video corresponding to the video template acquired in step S101 to obtain parsed data, and then obtaining the target video by applying decoding, encoding, texture synthesis, and the like to the parsed data and the replacement data received in step S102.
And S104, displaying the target video.
And loading and displaying the obtained target video in a display interface of the mobile terminal.
From the above description, it can be seen that in the AE-based video synthesis method of the embodiment of the present application, the mobile terminal first acquires a video template designed in AE; receives replacement data, where the replacement data is used to replace original data in a preset layer of the video template; synthesizes the replacement data with the video template to obtain a target video; and displays the target video. Thus, in the present application, after the mobile terminal obtains the video template designed in AE, the data in the preset layer of the video template can be replaced, so that the user can adjust the video template according to personal preference, and the user's personalization needs are met.
As a further supplement and refinement to the above-described embodiments, according to an embodiment of the present application, another method for AE-based video composition is provided, as shown in fig. 2, the method comprising the steps of:
and S201, the mobile terminal acquires the video template designed in the AE.
The implementation of this step is the same as that of step S101 in fig. 1, and is not described here again.
S202, the setting of a preset layer in the video template is modified, and an editable video template is obtained.
The preset layer can be chosen freely according to requirements. Its setting is changed to editable, so that the user can replace the data in the preset layer of the video template.
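As a non-limiting sketch of the editable-layer idea above (all names are illustrative assumptions): each layer carries an editable flag, and only layers marked editable — the preset layers — accept replacement data.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative layer model: "editable" marks a preset layer (step S202).
struct Layer {
    std::string name;
    std::string data;     // original data designed in AE
    bool editable = false;
};

// Mark the preset layers of a template as editable.
void makeEditable(std::vector<Layer>& layers,
                  const std::vector<std::string>& presetNames) {
    for (auto& l : layers)
        for (const auto& n : presetNames)
            if (l.name == n) l.editable = true;
}

// Replace a layer's data only if the layer was made editable; returns success.
bool replaceLayerData(std::vector<Layer>& layers,
                      const std::string& name, const std::string& newData) {
    for (auto& l : layers)
        if (l.name == name && l.editable) { l.data = newData; return true; }
    return false;
}
```

The guard in `replaceLayerData` reflects that non-preset layers of the template remain exactly as the designer produced them.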
S203, receiving the replacement data.
The implementation of this step is the same as that of step S102 in fig. 1, and is not described here again.
And S204, synthesizing the replacement data and the editable video template to obtain the target video.
The specific process of combining the replacement data with the editable video template is shown in fig. 3:
The video data (mp4, flv, and the like) represents data obtained by parsing the editable video template and the replacement data. Demultiplexing (decapsulation) and audio/video decoding then yield audio sample data, video pixel data, and pictures; the video pixel data and pictures are imported into the GPU for multi-texture synthesis, after which video encoding and packaging produce the target video.
During synthesis, the mobile terminal calls the Open Graphics Library (OpenGL) interface to synthesize the replacement data with the editable video template. Other platform graphics APIs, such as Metal and Vulkan, may readily be adopted in the future.
During synthesis, the special effects in the editable video template can be superimposed according to filter chains of different filter combinations, so that different special effects are stacked efficiently, closely approximating the effect of the video template as designed in AE.
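The filter-chain idea can be sketched minimally as follows (a filter is modeled here as a per-pixel transform purely for illustration; real filters operate on whole textures in the GPU):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// A filter transforms a pixel value; a chain applies its filters in order,
// so different filter combinations stack different special effects.
using Filter = std::function<int(int)>;

int applyChain(int pixel, const std::vector<Filter>& chain) {
    for (const auto& f : chain) pixel = f(pixel);
    return pixel;
}
```

Because a chain is just an ordered list, reordering or swapping filters changes the combined effect without changing any individual filter.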
During synthesis, FFmpeg is used to batch-process the audio and video simultaneously, and hardware acceleration is used to speed up the synthesis. FFmpeg is a suite of open-source programs for recording and converting digital audio and video, and for turning them into streams.
And S205, displaying the target video.
And loading and displaying the obtained target video in a display interface of the mobile terminal.
In addition, the AE-based video synthesis method of the embodiments of fig. 1 and fig. 2 is designed and developed in a universal underlying language (C++ or the like) and can be applied to mobile terminals of different platform systems (iOS, Android, Linux, and web). This avoids implementing a separate set of synthesis-processing logic on each platform, and thereby avoids divergent implementations, inconsistent effects, non-uniform and non-standard workflows, and mismatched versions. New features can be developed efficiently, and product requirements can be met efficiently. Moreover, mobile terminals on different platforms all keep the same interfaces, callbacks, and so on. Where interaction with the underlying layer is required, a uniform interface or protocol is agreed upon, and the mobile terminals of the different platforms perform their own platform-specific processing against that interface — for example, unified gesture-processing data, unified synthesis interfaces, and unified template data.
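The "one shared core, thin platform adapters" arrangement described above can be sketched as follows (the class and method names are illustrative assumptions, not the disclosed interface):

```cpp
#include <cassert>
#include <string>

// Agreed uniform interface: each platform supplies only a thin adapter.
class PlatformBridge {
public:
    virtual ~PlatformBridge() = default;
    virtual std::string name() const = 0; // e.g. "android", "ios"
};

class AndroidBridge : public PlatformBridge {
public:
    std::string name() const override { return "android"; }
};

class IosBridge : public PlatformBridge {
public:
    std::string name() const override { return "ios"; }
};

// The shared C++ core calls the same interface regardless of platform,
// so the synthesis logic is written and maintained exactly once.
std::string composeOn(const PlatformBridge& bridge) {
    return "composited on " + bridge.name();
}
```

Only the adapters differ per platform; `composeOn` (standing in for the whole synthesis core) never changes.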
Finally, a schematic diagram of applying the AE-based video synthesis method to the mobile terminals of different platforms is given, as shown in fig. 4.
The AE video template is a video template designed in the AE software and is applicable to Android, iOS, and Linux systems. The files corresponding to the video template are processed to obtain the related configuration files and template data (the pictures, text, audio, video, and other files corresponding to the template), and a new video (the target video) is then re-synthesized after the user's modifications. In fig. 4, the edited text, pictures, videos, and the like, the background video, and the mask video belong to the related configuration files and template data; the data parsing, audio/video decoding, and filter chain in fig. 4 belong to the target-video synthesis process.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
According to an embodiment of the present application, there is also provided an apparatus for AE-based video composition for implementing the method described in fig. 1 to 2, as shown in fig. 5, the apparatus includes:
an obtaining unit 31, configured to obtain, by a mobile terminal, a video template designed in AE;
a receiving unit 32, configured to receive replacement data, where the replacement data is used to replace original data in a preset layer of a video template;
a synthesizing unit 33, configured to synthesize the replacement data and the video template to obtain a target video;
and a display unit 34 for displaying the target video.
From the above description, it can be seen that in the AE-based video synthesis device of the embodiment of the present application, the mobile terminal first acquires a video template designed in AE; receives replacement data, where the replacement data is used to replace original data in a preset layer of the video template; synthesizes the replacement data with the video template to obtain a target video; and displays the target video. Thus, in the present application, after the mobile terminal obtains the video template designed in AE, the data in the preset layer of the video template can be replaced, so that the user can adjust the video template according to personal preference, and the user's personalization needs are met.
Further, as shown in fig. 6, the apparatus further includes:
the modifying unit 35 is configured to modify the setting of the preset layer in the video template before receiving the replacement data, so as to obtain an editable video template.
Further, the synthesis unit 33 is configured to:
and synthesizing the replacement data and the editable video template to obtain the target video.
Further, the device is designed and developed by using a universal underlying language so as to be suitable for mobile terminals of different platform systems.
Further, as shown in fig. 6, the synthesis unit 33 includes:
the invoking module 331 is configured to invoke an OpenGL interface of an open graphics library to perform synthesis of the replacement data and the editable video template.
Further, as shown in fig. 6, the synthesis unit 33 further includes:
and the superposition module 332 is configured to superpose the special effect in the editable video template according to the filter chains of different filter combinations.
Further, as shown in fig. 6, the synthesis unit 33 further includes:
the analysis module 333 is used for carrying out data analysis on the editable video template and the replacement data;
and a texture synthesis module 334, configured to import the analyzed data into a GPU for multi-texture synthesis, so as to obtain a target video.
Specifically, the specific process of implementing the functions of each unit and module in the device in the embodiment of the present application may refer to the related description in the method embodiment, and is not described herein again.
There is also provided, in accordance with an embodiment of the present application, a non-transitory computer-readable storage medium storing computer instructions that cause the computer to perform the method for AE-based video compositing described in any of fig. 1-2.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and they may alternatively be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, or fabricated separately as individual integrated circuit modules, or fabricated as a single integrated circuit module from multiple modules or steps. Thus, the present application is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (10)
1. A method of AE-based video compositing, the method comprising:
the mobile terminal acquires a video template designed in AE;
receiving replacement data, wherein the replacement data is used for replacing original data in a preset layer of a video template;
synthesizing the replacement data and the video template to obtain a target video;
and displaying the target video.
2. The method of AE-based video compositing of claim 1, wherein prior to receiving replacement data, the method further comprises:
and modifying the setting of a preset layer in the video template to obtain an editable video template.
3. The AE-based video composition method of claim 1, wherein the compositing the replacement data with the video template to obtain a target video comprises:
and synthesizing the replacement data and the editable video template to obtain the target video.
4. The AE-based video synthesis method of any one of claims 1-3, wherein the method is designed and developed using a universal underlying language so as to be applicable to mobile terminals of different platform systems.
5. The method of AE-based video composition of claim 4, wherein said compositing replacement data with said editable video template comprises:
and calling an open graphics library OpenGL interface to synthesize the replacement data and the editable video template.
6. The AE-based video composition method of claim 4, wherein the compositing replacement data with the editable video template further comprises:
and superposing the special effects in the editable video template according to the filter chains of different filter combinations.
7. The AE-based video composition method of claim 4, wherein the compositing replacement data with the editable video template further comprises:
performing data analysis on the editable video template and the replacement data;
and importing the analyzed data into a GPU (graphics processing Unit) for multi-texture synthesis to obtain a target video.
8. An apparatus for AE-based video compositing, the apparatus comprising:
the acquisition unit is used for acquiring the video template designed in the AE by the mobile terminal;
the receiving unit is used for receiving replacement data, where the replacement data is used for replacing original data in a preset layer of the video template;
the synthesizing unit is used for synthesizing the replacement data and the video template to obtain a target video;
and the display unit is used for displaying the target video.
9. The AE-based video compositing device of claim 8, characterized in that the device further comprises:
and the modifying unit is used for modifying the setting of a preset layer in the video template before receiving the replacement data to obtain an editable video template.
10. A non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method of AE-based video compositing of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911027453.2A CN110784739A (en) | 2019-10-25 | 2019-10-25 | Video synthesis method and device based on AE |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911027453.2A CN110784739A (en) | 2019-10-25 | 2019-10-25 | Video synthesis method and device based on AE |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110784739A (en) | 2020-02-11
Family
ID=69386959
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911027453.2A Pending CN110784739A (en) | 2019-10-25 | 2019-10-25 | Video synthesis method and device based on AE |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110784739A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111899155A (en) * | 2020-06-29 | 2020-11-06 | 腾讯科技(深圳)有限公司 | Video processing method, video processing device, computer equipment and storage medium |
CN112770110A (en) * | 2020-12-29 | 2021-05-07 | 北京奇艺世纪科技有限公司 | Video quality detection method, device and system |
CN114390354A (en) * | 2020-10-21 | 2022-04-22 | 西安诺瓦星云科技股份有限公司 | Program production method, device and system and computer readable storage medium |
CN111899155B (en) * | 2020-06-29 | 2024-04-26 | 腾讯科技(深圳)有限公司 | Video processing method, device, computer equipment and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105657254A (en) * | 2015-12-28 | 2016-06-08 | 努比亚技术有限公司 | Image synthesizing method and device |
US9578202B2 (en) * | 2012-12-28 | 2017-02-21 | Canon Kabushiki Kaisha | Communication apparatus and method of controlling communication apparatus |
CN106611435A (en) * | 2016-12-22 | 2017-05-03 | 广州华多网络科技有限公司 | Animation processing method and device |
CN107220063A (en) * | 2017-06-27 | 2017-09-29 | 北京金山安全软件有限公司 | Dynamic wallpaper generation method and device |
CN108174266A (en) * | 2017-12-20 | 2018-06-15 | 五八有限公司 | Method, apparatus, terminal and the server that animation plays |
CN109168028A (en) * | 2018-11-06 | 2019-01-08 | 北京达佳互联信息技术有限公司 | Video generation method, device, server and storage medium |
CN109769141A (en) * | 2019-01-31 | 2019-05-17 | 北京字节跳动网络技术有限公司 | A kind of video generation method, device, electronic equipment and storage medium |
CN110072120A (en) * | 2019-04-23 | 2019-07-30 | 上海偶视信息科技有限公司 | A kind of video generation method, device, computer equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106611435B (en) | Animation processing method and device | |
CN111669623B (en) | Video special effect processing method and device and electronic equipment | |
CN110853121B (en) | Cross-platform data processing method and device based on AE | |
CN104765614B (en) | Color in processing method and processing device | |
CN111193876B (en) | Method and device for adding special effect in video | |
CN111899322B (en) | Video processing method, animation rendering SDK, equipment and computer storage medium | |
CN110908762B (en) | Dynamic wallpaper implementation method and device | |
CN109242934B (en) | Animation code generation method and equipment | |
CN106886353B (en) | Display processing method and device of user interface | |
CN106843639A (en) | The display methods of icon and the display device of icon | |
CN110784739A (en) | Video synthesis method and device based on AE | |
CN112689168A (en) | Dynamic effect processing method, dynamic effect display method and dynamic effect processing device | |
CN112950757A (en) | Image rendering method and device | |
CN112651475A (en) | Two-dimensional code display method, device, equipment and medium | |
CN110647273B (en) | Method, device, equipment and medium for self-defined typesetting and synthesizing long chart in application | |
CN111068314B (en) | NGUI resource rendering processing method and device based on Unity | |
CN117376660A (en) | Subtitle element rendering method, device, equipment, medium and program product | |
CN110992438B (en) | Picture editing method and device | |
CN114390307A (en) | Image quality enhancement method, device, terminal and readable storage medium | |
CN117065357A (en) | Media data processing method, device, computer equipment and storage medium | |
CN111199519B (en) | Method and device for generating special effect package | |
CN106331834B (en) | Multimedia data processing method and equipment thereof | |
CN111242688A (en) | Animation resource manufacturing method and device, mobile terminal and storage medium | |
CN113538302A (en) | Virtual article display method and device and computer readable storage medium | |
CN112017261A (en) | Sticker generation method and device, electronic equipment and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200211 |