CN116896672B - Video special effect processing method and device, electronic equipment and storage medium - Google Patents

Video special effect processing method and device, electronic equipment and storage medium

Info

Publication number
CN116896672B
CN116896672B (application CN202311165899.8A)
Authority
CN
China
Prior art keywords
video
special effect
xml file
layer
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311165899.8A
Other languages
Chinese (zh)
Other versions
CN116896672A (en)
Inventor
闫亚军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Meishe Network Technology Co ltd
Original Assignee
Beijing Meishe Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Meishe Network Technology Co ltd filed Critical Beijing Meishe Network Technology Co ltd
Priority to CN202311165899.8A priority Critical patent/CN116896672B/en
Publication of CN116896672A publication Critical patent/CN116896672A/en
Application granted granted Critical
Publication of CN116896672B publication Critical patent/CN116896672B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8166Monomedia components thereof involving executable data, e.g. software
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Studio Circuits (AREA)

Abstract

The application provides a video special effect processing method and device, an electronic device, and a readable storage medium. The method comprises: acquiring material information and special effect information of a video, where the video is an AE project file and contains an adjustment layer; constructing an xml file from the material information and the special effect information of the video; and adding special effects to the video through an SDK according to the xml file to obtain a video special effect processing result. With this method, an xml file can be generated from a video in AE project-file format, and the xml file carries the special effect information to be processed, so the special effects can be added through the SDK. Although the SDK has no interface for the adjustment layer of an AE project file, video special effect processing can still be carried out according to the xml file; the adjustment layer can thus be fully utilized without changing the structure of the SDK, and video special effect processing efficiency is improved.

Description

Video special effect processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a video special effect processing method, a device, an electronic apparatus, and a readable storage medium.
Background
As network technology evolves, many users have developed the habit of shooting videos and posting them to social platforms. However, videos without special effects attract fewer viewers on social platforms and can hardly satisfy users' social needs, while professional video processing and special effect techniques are too complex for users to learn and master quickly.
To address the above problems, the prior art provides various video special effect processing software, such as Adobe After Effects (AE) for adding video special effects, and various AE-based software development kits (Software Development Kit, SDK) have been developed to reduce the difficulty of use. However, to simplify the user experience, such SDKs provide only video tracks and audio tracks; special effect addition cannot make use of the adjustment layer of AE software, and video special effect processing efficiency is therefore low.
Disclosure of Invention
The application provides a video special effect processing method and device, an electronic device, and a readable storage medium. With the video special effect processing method provided by the application, an xml file can be generated from a video in AE project-file format; the xml file carries the special effect information to be processed, and the special effects can then be added through an SDK. Although the SDK has no interface for the adjustment layer of an AE project file, video special effect processing can still be carried out according to the xml file, the adjustment layer can be fully utilized without changing the structure of the SDK, and video special effect processing efficiency is improved.
In a first aspect, the present application provides a video special effect processing method, including:
acquiring material information and special effect information of a video, wherein the video is an AE project file and comprises an adjustment layer;
constructing an xml file according to the material information of the video and the special effect information of the video;
and adding special effects to the video through the SDK according to the xml file to obtain a video special effect processing result.
Optionally, the video special effect processing method provided by the application further includes:
constructing a timeline according to the adjustment layer of the video;
and constructing a first video track on the timeline according to the adjustment layer of the video to obtain the material information and special effect information of the adjustment layer of the video.
Optionally, the video special effect processing method provided by the application further includes:
and constructing a blank third video track on the timeline, wherein the third video track is used for masking the video.
Optionally, the video special effect processing method provided by the application further includes:
and constructing a second video track on the timeline according to the second layer of the video to obtain material information and special effect information of the second layer of the video.
Optionally, the video special effect processing method provided by the application further includes:
constructing a first xml file according to the special effect information of the adjustment layer, wherein the first xml file is used for marking the special effect information of the adjustment layer to be added to the first video track;
constructing a second xml file according to the special effect information of the second layer, wherein the second xml file is used for marking the special effect information of the second layer to be added to the second video track;
and constructing a third xml file, wherein the third xml file is used for marking special effect information to be added to the third video track.
Optionally, the video special effect processing method provided by the application further includes:
adding a graph effect on the timeline through the SDK to obtain a graph effect object;
setting xml content for the graph effect object according to the first xml file, the second xml file and the third xml file;
and adding special effects to the video according to the xml content set for the graph effect object to obtain the video special effect processing result.
Optionally, the video special effect processing method provided by the application further includes:
inputting the first video track and the second video track into a composition node;
performing special effect addition on the composition node according to the special effect information of the adjustment layer and the special effect information of the second layer to obtain an effect-added node;
and obtaining the xml content according to the composition node, the effect-added node and the third video track.
In a second aspect, the present application further provides a video special effect processing apparatus, including:
an acquisition module, configured to acquire material information and special effect information of a video, wherein the video is an AE project file and comprises an adjustment layer;
the xml file generation module is used for constructing an xml file according to the material information of the video and the special effect information of the video;
and the special effect adding module is used for adding special effects to the video through the SDK according to the xml file to obtain a video special effect processing result.
In a third aspect, the present application further provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the video special effect processing method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the video special effect processing method according to the first aspect.
With the video special effect processing method, an xml file can be generated from a video in AE project-file format; the xml file carries the special effect information to be processed, and the special effects can then be added through the SDK. Although the SDK has no interface for the adjustment layer of an AE project file, video special effect processing can still be carried out according to the xml file, the adjustment layer can be fully utilized without changing the structure of the SDK, and video special effect processing efficiency is improved.
The foregoing is merely an overview of the technical solutions of the present application. To make the technical means of the present application clearer and implementable according to the specification, and to make the above and other objects, features and advantages of the present application more comprehensible, detailed embodiments of the present application are given below.
Drawings
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which the figures are not drawn to scale unless otherwise indicated.
Fig. 1 is a schematic diagram of a video special effect processing method according to an embodiment of the present application;
FIG. 2 is a second schematic diagram of a video special effect processing method according to an embodiment of the present disclosure;
FIG. 3 is a third schematic diagram of a video special effect processing method according to an embodiment of the present disclosure;
FIG. 4 is a fourth schematic diagram of a video special effect processing method according to an embodiment of the present disclosure;
FIG. 5 is a fifth schematic diagram of a video special effect processing method according to an embodiment of the present disclosure;
FIG. 6 is a sixth schematic diagram of a video special effect processing method according to an embodiment of the present disclosure;
FIG. 7 is a seventh schematic diagram of a video special effect processing method according to an embodiment of the present disclosure;
FIG. 8 is a first schematic diagram of a video special effect processing flow provided in an embodiment of the present application;
FIG. 9 is a second schematic illustration of a video special effect processing flow provided in an embodiment of the present application;
fig. 10 is a schematic diagram of a video special effect processing apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terms "first", "second" and the like in the description and claims are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that embodiments of the present application can be implemented in sequences other than those illustrated or described herein. The objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The AE adjustment layer is a layer that can apply special effects to multiple layers of the video at once. In AE, the smaller a layer's number, the higher it sits in the stacking order; layer 1 can therefore be set as the adjustment layer and placed first in the layer order, with layer 2 and layer 3 below it. When a video special effect, such as a blur, is added to layer 1, the effect is applied to the parts of layer 2 and layer 3 covered by layer 1, so that special effects can be added to many layers quickly.
However, the existing SDK has only video tracks and audio tracks, connects poorly with the AE adjustment layer, and can hardly use the adjustment layer for fast video special effect processing. To address this problem, the technical solution provided by the application targets more usage scenarios: by rendering a graph from a special effect xml description file, more kinds of special effects can be combined, not limited to converting certain effects from AE, so extensibility is high. For example, the effect of the adjustment layer in AE can be reproduced by constructing an xml structure, improving video special effect processing efficiency and user experience.
The video special effect processing method, the device, the electronic equipment and the nonvolatile readable storage medium provided by the application are described in detail below through specific embodiments and application scenes with reference to the accompanying drawings.
A first embodiment of the present application relates to a video special effect processing method, as shown in fig. 1, including:
step 101, acquiring material information and special effect information of a video, wherein the video is an AE project file and comprises an adjustment layer;
102, constructing an xml file according to the material information of the video and the special effect information of the video;
and step 103, adding special effects to the video through the SDK according to the xml file to obtain a video special effect processing result.
Specifically, the processing object of the video special effect processing method provided by the application is a video in AE project-file format that contains an adjustment layer. First, the material information and special effect information of the AE project-file video are acquired, where the material information of the video includes the material information of the adjustment layer and the special effect information of the video includes the special effect information of the adjustment layer. Then an xml file is constructed from the material information and special effect information, describing the content and number of the special effects to be added to each layer of the video. Finally, the SDK adds the special effects to the video according to the xml file to obtain the video special effect processing result.
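The three steps above can be sketched in Python. All function and element names here (parse_ae_project, build_effect_xml, the "timeline"/"track"/"effect" schema) are illustrative assumptions, not the actual API of any SDK; a plain dict stands in for parsed AE project data.

```python
import xml.etree.ElementTree as ET

def parse_ae_project(project):
    # Hypothetical step 101: extract material and special effect info
    # from AE project data (here `project` is a plain dict stand-in).
    materials = project["layers"]
    effects = {layer["name"]: layer["effects"] for layer in project["layers"]}
    return materials, effects

def build_effect_xml(materials, effects):
    # Hypothetical step 102: describe, per layer, which effects to add.
    root = ET.Element("timeline")
    for layer in materials:
        track = ET.SubElement(root, "track", name=layer["name"])
        for fx in effects[layer["name"]]:
            ET.SubElement(track, "effect", type=fx)
    return ET.tostring(root, encoding="unicode")

def apply_effects_via_sdk(xml_doc):
    # Hypothetical step 103: a real SDK would consume the xml and render;
    # here we just parse it back to show the hand-off.
    return ET.fromstring(xml_doc)

project = {"layers": [
    {"name": "adjustment", "effects": ["blur", "glow"]},
    {"name": "layer2", "effects": ["blur", "glow", "tint"]},
]}
materials, effects = parse_ae_project(project)
xml_doc = build_effect_xml(materials, effects)
result = apply_effects_via_sdk(xml_doc)
```

The second layer's effect list deliberately repeats the adjustment layer's effects, mirroring the later embodiment in which the second layer carries all effects of the adjustment layer.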
With the video special effect processing method, an xml file can be generated from a video in AE project-file format; the xml file carries the special effect information to be processed, and the special effects can then be added through the SDK. Although the SDK has no interface for the adjustment layer of an AE project file, video special effect processing can still be carried out according to the xml file, the adjustment layer can be fully utilized without changing the structure of the SDK, and video special effect processing efficiency is improved.
On the basis of the above embodiment, as shown in fig. 2, in the video special effect processing method provided in the present application, step 101 includes:
step 111, constructing a timeline according to the adjustment layer of the video;
and step 112, constructing a first video track on the timeline according to the adjustment layer of the video to obtain the material information and special effect information of the adjustment layer of the video.
Specifically, in the video special effect processing method provided by the application, the material information and special effect information of the video can be extracted by creating a timeline according to the adjustment layer of the video and the layers above it, and constructing a first video track on the timeline according to the adjustment layer to obtain the material information and special effect information of the adjustment layer. When the xml file is later generated, it describes the number and content of the adjustment layer's special effects to be added on the first video track, so that even without an interface corresponding to the adjustment layer, the SDK can apply special effects to the first video track according to the xml file and obtain the video special effect processing result.
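One way to model the timeline-and-track construction of steps 111 and 112 is sketched below; the class and field names (Timeline, VideoTrack, add_track_from_layer) are assumptions made for illustration, not the SDK's real interface.

```python
from dataclasses import dataclass, field

@dataclass
class VideoTrack:
    name: str
    material: dict = field(default_factory=dict)   # material info of the layer
    effects: list = field(default_factory=list)    # effect info to apply

@dataclass
class Timeline:
    duration_ms: int
    tracks: list = field(default_factory=list)

    def add_track_from_layer(self, layer):
        # Step 112: build a track on the timeline from an AE layer,
        # capturing its material and special effect information.
        track = VideoTrack(name=layer["name"],
                           material=layer["material"],
                           effects=list(layer["effects"]))
        self.tracks.append(track)
        return track

# Step 111: a timeline created for the adjustment layer of the video.
adjustment_layer = {"name": "adjustment",
                    "material": {"src": "none"},
                    "effects": ["blur"]}
tl = Timeline(duration_ms=10_000)
first_track = tl.add_track_from_layer(adjustment_layer)
```

The same `add_track_from_layer` call would later serve the second and third video tracks of the following embodiments.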
On the basis of the above embodiment, since the video special effect processing method provided by the application constructs a timeline and creates on it a first video track corresponding to the adjustment layer, the SDK can add the adjustment layer's special effects according to the first video track. Although the SDK has no interface for the adjustment layer of an AE project file, it can still process video special effects according to the xml file; the adjustment layer is fully utilized without changing the structure of the SDK, and video special effect processing efficiency is improved.
On the basis of the foregoing embodiment, as shown in fig. 3, in the video special effect processing method provided in the present application, after step 111, the method further includes:
and step 113, constructing a blank third video track on the timeline, wherein the third video track is used for masking the video.
Specifically, after the timeline is constructed, a blank third video track is created; for example, a white picture is added to the third video track and used as a mask during subsequent special effect processing. The timeline now carries both the first video track built from the adjustment layer and the third video track built from the blank picture. When the xml file is generated, it describes the special effect content and number to be added to the first and third video tracks, so that even without an interface corresponding to the adjustment layer, the SDK can apply special effects to the first video track (the adjustment layer) and the third video track (the blank layer) according to the xml file and obtain the video special effect processing result.
On the basis of the above embodiment, the timeline in the SDK can also carry a blank third video track used as a mask image, ensuring that mask processing can be applied to each layer during video special effect processing and widening the applicable range of video special effect processing.
On the basis of the foregoing embodiment, as shown in fig. 4, the video further includes a second layer; the material information of the second layer is the same as that of the adjustment layer, and the special effect information of the second layer includes the special effect information of the adjustment layer.
And step 114, constructing a second video track on the timeline according to the second layer of the video to obtain material information and special effect information of the second layer of the video.
Specifically, in the video special effect processing method provided by the application, the video includes not only the adjustment layer but also a second layer above it, and the second layer carries all the effects of the adjustment layer plus the effects to be added in the second layer itself. After the timeline is constructed, a second video track is built from the second layer of the video, yielding the material information and special effect information of the second layer. When the xml file is generated, it describes the special effect content and number to be added to the first, second and third video tracks, so that even without an interface corresponding to the adjustment layer, the SDK can apply special effects to the first video track (the adjustment layer), the second video track (the second layer) and the third video track (the blank layer) according to the xml file and obtain the video special effect processing result.
On the basis of the above embodiment, because the timeline in the SDK can also carry a second video track corresponding to the second layer above the adjustment layer, and the special effect content of the second video track includes all the special effect content of the adjustment layer, the adjustment layer is fully utilized and the efficiency of adding layer special effects is improved.
On the basis of the foregoing embodiment, as shown in fig. 5, the xml file includes a first xml file, a second xml file, and a third xml file, and step 102 in the video special effect processing method provided in the present application includes:
step 121, constructing a first xml file according to the special effect information of the adjustment layer, wherein the first xml file is used for marking the special effect information of the adjustment layer to be added to the first video track;
step 122, constructing a second xml file according to the special effect information of the second layer, wherein the second xml file is used for marking the special effect information of the second layer to be added in the second video track;
and 123, constructing a third xml file, wherein the third xml file is used for marking special effect information to be added to the third video track.
Specifically, after the first, second and third video tracks have all been created on the timeline, the special effect content to be added to the first video track is generated from the material information and special effect information of the adjustment layer, yielding the first xml file; the special effect content to be added to the second video track is generated from the material information and special effect information of the second layer, yielding the second xml file; and the special effect content to be added to the third video track is generated from the blank mask picture, yielding the third xml file.
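The per-track xml descriptions of steps 121-123 could be generated along these lines; the element and attribute names ("track", "effect", the "blend_with_mask" type) are assumptions for illustration, as a real SDK would define its own schema.

```python
import xml.etree.ElementTree as ET

def track_xml(track_id, effect_list):
    # One xml description per track, listing the effects to add.
    root = ET.Element("track", id=track_id)
    for fx in effect_list:
        ET.SubElement(root, "effect", type=fx)
    return ET.tostring(root, encoding="unicode")

adjustment_effects = ["blur", "glow"]                 # effect info of the adjustment layer
second_layer_effects = adjustment_effects + ["tint"]  # second layer inherits them

first_xml = track_xml("track1", adjustment_effects)       # step 121: first video track
second_xml = track_xml("track2", second_layer_effects)    # step 122: second video track
third_xml = track_xml("track3", ["blend_with_mask"])      # step 123: mask track (blank white picture)
```

Each file is kept separate here, matching the description: one xml per track, later combined into the graph effect's xml content.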
On the basis of the above embodiment, the video special effect processing method can construct an xml file for each track, obtaining xml files that describe the special effect content to be added to each track; through these xml files the SDK can quickly process each track on the timeline, improving video special effect processing efficiency.
On the basis of the above embodiment, as shown in fig. 6, in the video special effect processing method provided in the present application, step 103 includes:
step 131, adding a graph effect on the timeline through the SDK to obtain a graph effect object;
step 132, setting xml content for the graph effect object according to the first xml file, the second xml file and the third xml file;
and step 133, adding special effects to the video according to the xml content set for the graph effect object to obtain the video special effect processing result.
Specifically, a graph effect is added on the timeline through an interface of the SDK, such as a Graph effect addition interface, to obtain a graph effect object, and the corresponding xml content is set for the graph effect object according to the first, second and third xml files. Finally, a video special effect processing result consistent with the AE project file is obtained in the SDK from the graph effect object and its xml content.
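Steps 131-133 can be sketched with stub classes; GraphEffect, SdkTimeline and their method names are placeholders standing in for whatever interface the real SDK exposes, and the inline xml strings are the kind of per-track descriptions built in the previous embodiment.

```python
class GraphEffect:
    # Stub for the SDK's graph effect object (step 131); the real SDK's
    # class and method names may differ.
    def __init__(self):
        self.xml_content = None

    def set_xml(self, xml_content):
        # Step 132: the combined xml of the three files becomes the graph
        # effect's content; step 133 would then render according to it.
        self.xml_content = xml_content

class SdkTimeline:
    def add_graph_effect(self):
        # Step 131: add a graph effect on the timeline via an SDK interface.
        return GraphEffect()

first_xml = '<track id="track1"><effect type="blur"/></track>'
second_xml = '<track id="track2"><effect type="blur"/><effect type="tint"/></track>'
third_xml = '<track id="track3"><effect type="blend_with_mask"/></track>'

timeline = SdkTimeline()
graph_fx = timeline.add_graph_effect()
graph_fx.set_xml("<graph>" + first_xml + second_xml + third_xml + "</graph>")
```

The point of the stub is the data flow: the SDK never sees an "adjustment layer", only a graph effect object whose xml content encodes the three tracks.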
Based on the foregoing embodiment, as shown in fig. 7, in the video special effect processing method provided in the present application, step 132 includes:
step 134, inputting the first video track and the second video track into a composition node;
step 135, adding special effects to the composition node according to the special effect information of the adjustment layer and the special effect information of the second layer to obtain an effect-added node;
and step 136, obtaining the xml content according to the composition node, the effect-added node and the third video track.
Specifically, the xml content of the Graph effect can be constructed as follows:
the xml file comprises a first xml file, a second xml file and a third xml file, and is used for describing special effect contents which need to be added for adjusting layers, the second layers and layers for masks. The xml file is preset with a plurality of material nodes and a plurality of special effect nodes, for example, the material nodes track1, track2 and track3 respectively store the material content of each layer, and the special effect nodes effect0, effect1 and effect2 store the special effect content to be added in each layer. For example, when special effects of a combination of multiple layer corresponding tracks are required to be added, the tracks required to be combined, such as the first video track and the second video track, are placed in a preset synthesis node, such as a child com1 synthesis node, in an xml file, and adding video special effects to the child com1 synthesis node according to special effects content of the special effects node is equivalent to adding special effects to the first video track and the second video track. For example, based on xml writing rules, special effects of special effect nodes such as effect0 and effect1 can be added to the child com1 synthetic node, so that a special effect added node, such as child com1-effect0 with the effect0 special effect, is obtained. For example, effect nodes effect0 and effect1 carry effect information of the adjustment layer, and after effect addition, the first video track and the second video track in the nodes both carry all effects of the adjustment layer, that is, the second layer after video effect processing carries all effect information of the adjustment layer. And finally, processing blank pictures used for making a mask picture in the node child 1-effect0, child 1 synthetic node and a third video track after the special effect is added to obtain xml content for special effect addition of the video after the SDK is acquired. 
For example, a special effect node, such as a special effect (blank with mask) for mixing a foreground and a background through a mask, in the special effect added node child com1-effect0, child com1 synthesis node and a blank picture for making a mask in a third video track is input into an SDK, and is added to a default output node pin, such as a final synthesis pin (total Comp), of the SDK, so that a special effect of adjusting layers in the SDK and an AE engineering file is realized, that is, a plurality of layers in a video all carry special effects of adjusting layers. And then adding the special effect nodes corresponding to the respective layers to the corresponding layers to obtain the video special effect processing result.
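The node graph described above can be sketched programmatically. The following is an illustrative Python sketch (not the patent's SDK) that builds the same graph xml with the standard library, using the node ids from the example (track1 to track3, childComp1, childComp1-effect0, blendWithMask, totalComp):

```python
# Illustrative sketch: building the graph xml described above with
# Python's ElementTree. The node ids and connections mirror the
# example in the text; this is not the patent's SDK itself.
import xml.etree.ElementTree as ET

def build_graph_xml():
    graph = ET.Element("graph", defaultOutputNodePin="totalComp")
    nodes = ET.SubElement(graph, "nodeTable")
    # Material (source) nodes: ":N" refers to the Nth image input.
    for i in (1, 2, 3):
        ET.SubElement(nodes, "sourceNode", id=f"track{i}", source=f":{i}")
    # Composition node combining the first and second video tracks.
    comp = ET.SubElement(nodes, "effectNode", id="childComp1",
                         effectName="compositor", repeat="true")
    ET.SubElement(comp, "param", name="blendMode", value="normal,normal")
    # Special-effect-added node carrying the adjustment layer's effect.
    ET.SubElement(nodes, "effectNode", id="childComp1-effect0",
                  effectName="boxBlur", effectStart="0",
                  effectDuration="30000")
    # Mask-mixing node and final composition node.
    ET.SubElement(nodes, "effectNode", id="blendWithMask",
                  effectName="blendWithMask")
    ET.SubElement(nodes, "effectNode", id="totalComp",
                  effectName="compositor")
    conns = ET.SubElement(graph, "connectionTable")
    for src, dst in [
        ("track2", "childComp1:INPUT1"),
        ("track3", "childComp1:INPUT2"),
        ("childComp1", "childComp1-effect0"),
        ("childComp1", "blendWithMask:INPUT1"),
        ("childComp1-effect0", "blendWithMask:INPUT2"),
        ("track1", "blendWithMask:INPUT3"),
        ("blendWithMask", "totalComp"),
    ]:
        ET.SubElement(conns, "connection", srcNodePin=src, dstNodePin=dst)
    return ET.tostring(graph, encoding="unicode")
```

In this sketch the three source tracks, the composition node, the special-effect-added node and the mask-mixing node are emitted in one pass, so the output matches the structure of the description file given below.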
On the basis of the above embodiment, the video special effect processing method provided by the present application can combine and composite the effects of each layer through the graph special effect of the SDK, so that the application range of video special effect processing is expanded and the user experience is improved.
Based on the above embodiments, the present application provides an example of video special effects processing in conjunction with fig. 8-9:
the video special effect processing framework is shown in fig. 8, and the video special effect processing object provided by the application is an AE engineering file or AE engineering data. The processing object includes an adjustment layer, a second layer, and a third layer in AE software (Adobe After Effects). The adjustment layers carry a plurality of special effects such as special effects 1 and 2, and the second layer and the third layer not only carry all special effects in the adjustment layers, but also carry special effects of the respective layers. When the processing object is output in the form of an AE engineering file or AE engineering data, the AE engineering file or the AE engineering data comprises an adjustment layer, a video layer, a picture layer, a text layer and a special effect list. The SDK is provided with a time line, and can be converted into a plurality of tracks in the SDK time line, such as a track2, a track3 and a track1 or a track1 white map according to the information in the AE engineering data, and finally a graphic special effect processing module in the SDK constructs track nodes, special effect nodes or AE special effect nodes, mask mixing (blend with mask) nodes and synthesis nodes of each track.
The workflow is shown in fig. 9. First, an AE project is opened and loaded through the AE SDK plug-in; then the AE project requiring special effect processing is loaded, the SDK applying the video special effect processing method is loaded, a timeline is created in the SDK, and the materials of the AE project are added. Next, graph special effect conversion is performed on the adjustment layer according to the materials of the AE project and the special effect categories; at this point, all layers in the video carry the special effect content of the adjustment layer, and the video special effect processing result is obtained. In addition, during use, a user can also preview the conversion effect of the adjustment layer through the SDK while performing video special effect processing, and obtain video special effect processing progress information in real time.
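The workflow above can be outlined as a short sketch. Every class and method name below (Timeline, add_material, convert_adjustment_layer, process_ae_project, the progress callback) is a hypothetical stand-in, not the real AE SDK or video SDK API:

```python
# Hypothetical sketch of the Fig. 9 workflow. All names here are
# illustrative assumptions, not the actual SDK interface.
from typing import Callable, List

class Timeline:
    """Minimal stand-in for the SDK timeline described in the text."""
    def __init__(self):
        self.tracks: List[str] = []
        self.effects: List[str] = []

    def add_material(self, name: str) -> None:
        self.tracks.append(name)

    def convert_adjustment_layer(self, effects: List[str]) -> None:
        # After conversion, every layer (track) carries the adjustment
        # layer's effects -- the key property described in the text.
        self.effects.extend(effects)

def process_ae_project(materials: List[str], adjustment_effects: List[str],
                       on_progress: Callable[[int], None]) -> Timeline:
    """Load materials, convert the adjustment layer, report progress."""
    timeline = Timeline()
    for i, material in enumerate(materials, 1):
        timeline.add_material(material)
        # Real-time progress information, as mentioned in the text.
        on_progress(100 * i // (len(materials) + 1))
    timeline.convert_adjustment_layer(adjustment_effects)
    on_progress(100)
    return timeline
```

A caller might pass a UI callback as on_progress to surface the real-time progress information and preview mentioned above.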
On the basis of the above embodiment, the present application further provides a code example capable of implementing video special effect processing:
<?xml version="1.0" encoding="UTF-8"?>
<graph defaultOutputNodePin="totalComp">
<nodeTable>
<sourceNode id="track1" source=":1"/>
<sourceNode id="track2" source=":2"/>
<sourceNode id="track3" source=":3"/>
<effectNode id="childComp1" repeat="true" effectName="compositor">
<param value="normal,normal" name="blendMode"/>
<param value="1,1" name="opacity"/>
</effectNode>
<effectNode id="totalComp" repeat="true" effectName="compositor">
<param value="normal" name="blendMode"/>
<param value="1" name="opacity"/>
</effectNode>
<effectNode effectDuration="30000" allowStretch="true" id="childComp1-effect0" repeat="true" effectName="boxBlur" effectStart="0">
<param value="32.4" name="radius"/>
<param value="default" name="orientation"/>
</effectNode>
<effectNode effectDuration="30000" allowStretch="true" id="blendWithMask" repeat="true" effectName="blendWithMask" effectStart="0"/>
</nodeTable>
<connectionTable>
<connection dstNodePin="childComp1:INPUT1" srcNodePin="track2"/>
<connection dstNodePin="childComp1:INPUT2" srcNodePin="track3"/>
<connection dstNodePin="childComp1-effect0" srcNodePin="childComp1"/>
<connection dstNodePin="blendWithMask:INPUT1" srcNodePin="childComp1"/>
<connection dstNodePin="blendWithMask:INPUT2" srcNodePin="childComp1-effect0"/>
<connection dstNodePin="blendWithMask:INPUT3" srcNodePin="track1"/>
<connection dstNodePin="totalComp" srcNodePin="blendWithMask"/>
</connectionTable>
</graph>
In the above code, because the description file is an XML file, an XML file header is set first and the file is encoded as UTF-8. The root element of the description file is graph, and the parameter defaultOutputNodePin specifies the default output node pin, totalComp.
The nodeTable in the code is used to store all nodes in the graph. Nodes are divided into two types, sourceNode and effectNode. A sourceNode represents an input source node; its id parameter is the ID of the source node, and source is the file path of the source, with the PNG and CAF formats supported. When source begins with ":", the value ":N" represents the Nth image input. The xml of the present application has three input source nodes: <sourceNode id="track1" source=":1"/>, <sourceNode id="track2" source=":2"/>, and <sourceNode id="track3" source=":3"/>. An effectNode represents a special effect node; its id parameter is the ID of the special effect node. effectName is the embedded special effect of the node, and each effect node can only correspond to one special effect. effectStart is the special effect start time, effectDuration is the special effect duration, and allowStretch indicates whether the embedded special effect supports stretching to the graph duration. The parameters of a special effect can be assigned through param. Each special effect node may have N input pins and one output pin, the number of input pins being determined by the corresponding special effect. The xml of the present application has four special effect nodes, whose ids are childComp1, totalComp, childComp1-effect0 and blendWithMask respectively, and the embedded special effect names used by these four ids are compositor, compositor, boxBlur and blendWithMask. Parameters may be written according to the default parameters, such as <param name="blendMode" value="normal"/>.
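As a minimal illustration of the nodeTable structure (not part of the patent's SDK), the two node types can be read back with Python's standard XML parser; GRAPH_XML below is an abbreviated copy of the example file:

```python
# Sketch: parsing the nodeTable described above and classifying nodes.
# GRAPH_XML is an abbreviated copy of the example description file.
import xml.etree.ElementTree as ET

GRAPH_XML = """<graph defaultOutputNodePin="totalComp">
  <nodeTable>
    <sourceNode id="track1" source=":1"/>
    <sourceNode id="track2" source=":2"/>
    <sourceNode id="track3" source=":3"/>
    <effectNode id="childComp1" effectName="compositor"/>
    <effectNode id="totalComp" effectName="compositor"/>
    <effectNode id="childComp1-effect0" effectName="boxBlur"/>
    <effectNode id="blendWithMask" effectName="blendWithMask"/>
  </nodeTable>
</graph>"""

def load_nodes(xml_text):
    root = ET.fromstring(xml_text)
    sources, effects = {}, {}
    for node in root.find("nodeTable"):
        if node.tag == "sourceNode":
            # ":N" means the Nth image input on the timeline.
            sources[node.get("id")] = node.get("source")
        elif node.tag == "effectNode":
            # Each effect node embeds exactly one special effect.
            effects[node.get("id")] = node.get("effectName")
    return sources, effects
```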
The connectionTable is used to store the pin-to-pin connections, i.e., the data stream connections. The srcNodePin parameter represents the source pin and dstNodePin represents the destination pin. In the present application, 7 connections complete the data conversion relation of the AE adjustment layer. The srcNodePin of the first two connections are the input source nodes track2 and track3 mentioned above, and their dstNodePin is the special effect node childComp1, meaning that track2 and track3 are composited by childComp1. The srcNodePin of the next connection is childComp1 and its dstNodePin is childComp1-effect0, meaning that childComp1 undergoes the childComp1-effect0 special effect processing. For the 3 connections of blendWithMask, the srcNodePin values are childComp1, childComp1-effect0 and track1, and the dstNodePin values are blendWithMask:INPUT1, blendWithMask:INPUT2 and blendWithMask:INPUT3 respectively. The srcNodePin of the last connection is blendWithMask, and its dstNodePin is totalComp, the default output node specified by defaultOutputNodePin.
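The connection semantics can likewise be checked with a short sketch (an illustration under stated assumptions, not the SDK itself): the connections are read as a directed graph, pin suffixes such as ":INPUT1" are stripped to obtain node ids, and each input track is verified to reach the default output node totalComp:

```python
# Sketch: reading the connectionTable described above and checking
# that every source track's data flow reaches totalComp.
import xml.etree.ElementTree as ET

CONNECTIONS_XML = """<connectionTable>
  <connection dstNodePin="childComp1:INPUT1" srcNodePin="track2"/>
  <connection dstNodePin="childComp1:INPUT2" srcNodePin="track3"/>
  <connection dstNodePin="childComp1-effect0" srcNodePin="childComp1"/>
  <connection dstNodePin="blendWithMask:INPUT1" srcNodePin="childComp1"/>
  <connection dstNodePin="blendWithMask:INPUT2" srcNodePin="childComp1-effect0"/>
  <connection dstNodePin="blendWithMask:INPUT3" srcNodePin="track1"/>
  <connection dstNodePin="totalComp" srcNodePin="blendWithMask"/>
</connectionTable>"""

def reaches_output(xml_text, start, output="totalComp"):
    edges = {}
    for conn in ET.fromstring(xml_text):
        # Strip pin suffixes like ":INPUT1" to get the node id.
        src = conn.get("srcNodePin").split(":")[0]
        dst = conn.get("dstNodePin").split(":")[0]
        edges.setdefault(src, set()).add(dst)
    # Graph search from the start node toward the output node.
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        if node == output:
            return True
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False
```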
A second embodiment of the present application relates to a video special effect processing apparatus, as shown in fig. 10, including:
the adjustment layer obtaining module 201 is configured to acquire material information and special effect information of a video, where the video is an AE engineering file, and the video includes an adjustment layer;
an Xml file generation module 202, configured to construct an Xml file according to the material information of the video and the special effect information of the video;
and the special effect adding module 203 is configured to add special effects to the video through the SDK according to the xml file, so as to obtain a video special effect processing result.
On the basis of the foregoing embodiment, in the video special effect processing apparatus provided in the present application, the adjustment layer obtaining module 201 includes:
a timeline construction unit 211, configured to perform timeline construction according to the adjustment layer of the video;
the first track construction unit 212 is configured to construct a first video track on the timeline according to the adjustment layer of the video, so as to obtain material information and special effect information of the adjustment layer of the video.
On the basis of the foregoing embodiment, in the video special effect processing apparatus provided in the present application, the adjustment layer obtaining module 201 further includes:
a third track construction unit 213, configured to construct a blank third video track on the timeline, where the third video track is used for masking the video.
On the basis of the foregoing embodiment, in the video special effect processing apparatus provided in the present application, the adjustment layer obtaining module 201 further includes:
and a second track construction unit 214, configured to construct a second video track on the timeline according to the second layer of the video, so as to obtain material information and special effect information of the second layer of the video.
Based on the above embodiment, in the video special effect processing apparatus provided in the present application, the Xml file generation module 202 includes:
a first file construction unit 221, configured to construct a first xml file according to the special effect information of the adjustment layer, where the first xml file is used to mark the special effect information of the adjustment layer to be added in the first video track;
a second file construction unit 222, configured to construct a second xml file according to the special effect information of the second layer, where the second xml file is used to mark the special effect information of the second layer to be added in the second video track;
and a third file construction unit 223, configured to construct a third xml file according to the third layer.
On the basis of the foregoing embodiment, in the video special effect processing apparatus provided in the present application, the special effect adding module 203 includes:
a graph special effect adding unit 231, configured to perform graph special effect addition on the timeline through the SDK to obtain a graph special effect object;
an Xml content setting unit 232, configured to set xml content for the graph special effect object according to the first xml file, the second xml file, and the third xml file;
and a special effect adding unit 233, configured to add special effects to the video according to the xml content set for the graph special effect object, so as to obtain the video special effect processing result.
On the basis of the above embodiment, in the video special effect processing apparatus provided in the present application, the Xml content setting unit 232 includes:
a composite node input unit 234 for inputting the first video track and the second video track into a composite node;
the node special effect adding unit 235 is configured to perform special effect adding on the synthesized node according to the special effect information of the adjustment layer and the special effect information of the second layer to obtain a special effect added node;
the Xml content generating unit 236 obtains the Xml content from the composition node, the special effect post-addition node, and the third video track.
A third embodiment of the present application relates to an electronic device, as shown in fig. 11, including:
at least one processor 301; the method comprises the steps of,
a memory 302 communicatively coupled to the at least one processor 301; wherein,
the memory 302 stores instructions executable by the at least one processor 301 to enable the at least one processor 301 to implement the video special effect processing method according to the first embodiment of the present application.
Where the memory and the processor are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the buses connecting the various circuits of the one or more processors and the memory together. The bus may also connect various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or may be a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor is transmitted over the wireless medium via the antenna, which further receives the data and transmits the data to the processor.
The processor is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And memory may be used to store data used by the processor in performing operations.
A fourth embodiment of the present application relates to a non-transitory computer readable storage medium storing a computer program. The computer program, when executed by a processor, implements the video special effect processing method described in the first embodiment of the present application.
That is, it will be understood by those skilled in the art that all or part of the steps in implementing the methods of the embodiments described above may be implemented by a program stored in a storage medium, where the program includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps in the methods of the embodiments described herein. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (6)

1. A method for video special effect processing, the method comprising:
performing time line construction according to an adjustment layer of a video, wherein the video is an AE engineering file, the video comprises an adjustment layer and a second layer, material information of the second layer is the same as material information of the adjustment layer, and special effect information of the second layer comprises special effect information of the adjustment layer;
constructing a first video track on the time line according to the adjustment layer of the video to obtain material information and special effect information of the adjustment layer of the video;
constructing a blank third video track on the timeline, wherein the third video track is used for carrying out mask processing on the video;
constructing a second video track on the time line according to a second layer of the video to obtain material information and special effect information of the second layer of the video;
constructing a first xml file according to the special effect information of the adjustment layer, wherein the first xml file is used for marking the special effect information of the adjustment layer to be added to the first video track;
constructing a second xml file according to the special effect information of the second layer, wherein the second xml file is used for marking the special effect information of the second layer to be added to the second video track;
constructing a third xml file, wherein the third xml file is used for marking special effect information to be added to the third video track;
and adding special effects to the video through the SDK according to the first xml file, the second xml file and the third xml file to obtain a video special effect processing result.
2. The method of claim 1, wherein the adding the special effects to the video through the SDK according to the first xml file, the second xml file, and the third xml file, to obtain a video special effect processing result includes:
performing graph special effect addition on the time line through an SDK to obtain a graph special effect object;
setting xml content for the graph special effect object according to the first xml file, the second xml file and the third xml file;
and adding special effects to the video according to the xml content set for the graph special effect object to obtain the video special effect processing result.
3. The method according to claim 2, wherein the setting xml content for the graph special effect object according to the first xml file, the second xml file, and the third xml file comprises:
inputting the first video track and the second video track into a composition node;
performing special effect addition on the synthesized node according to the special effect information of the adjustment layer and the special effect information of the second layer to obtain a special effect added node;
and obtaining the xml content according to the synthesized node, the special effect added node and the third video track.
4. A video special effects processing apparatus, the apparatus comprising:
the time line construction module is used for constructing a time line according to an adjustment layer of a video, wherein the video is an AE engineering file, the video comprises an adjustment layer and a second layer, the material information of the second layer is the same as the material information of the adjustment layer, and the special effect information of the second layer comprises the special effect information of the adjustment layer;
the first track construction module is used for constructing a first video track on the time line according to the adjustment layer of the video to obtain material information and special effect information of the adjustment layer of the video;
the second track construction module is used for constructing a second video track on the time line according to a second layer of the video to obtain material information and special effect information of the second layer of the video;
a third track construction module, configured to construct a blank third video track on the timeline, where the third video track is used to mask the video;
the first Xml file generation module is used for constructing a first Xml file according to the special effect information of the adjustment layer, wherein the first Xml file is used for marking the special effect information of the adjustment layer to be added to the first video track;
the second Xml file generation module is used for constructing a second Xml file according to the special effect information of the second image layer, wherein the second Xml file is used for marking the special effect information of the second image layer to be added to the second video track;
the third Xml file generation module is used for constructing a third Xml file, wherein the third Xml file is used for marking special effect information to be added to the third video track;
and the special effect adding module is used for adding special effects to the video through the SDK according to the first xml file, the second xml file and the third xml file to obtain a video special effect processing result.
5. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the video special effects processing method of any one of claims 1-3.
6. A readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the video special effect processing method of any one of claims 1-3.
CN202311165899.8A 2023-09-11 2023-09-11 Video special effect processing method and device, electronic equipment and storage medium Active CN116896672B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311165899.8A CN116896672B (en) 2023-09-11 2023-09-11 Video special effect processing method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN116896672A CN116896672A (en) 2023-10-17
CN116896672B true CN116896672B (en) 2023-12-29

Family

ID=88311137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311165899.8A Active CN116896672B (en) 2023-09-11 2023-09-11 Video special effect processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116896672B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110582018A (en) * 2019-09-16 2019-12-17 腾讯科技(深圳)有限公司 Video file processing method, related device and equipment
WO2020187086A1 (en) * 2019-03-21 2020-09-24 腾讯科技(深圳)有限公司 Video editing method and apparatus, device, and storage medium
CN112116690A (en) * 2019-06-19 2020-12-22 腾讯科技(深圳)有限公司 Video special effect generation method and device and terminal
CN116095413A (en) * 2022-05-30 2023-05-09 荣耀终端有限公司 Video processing method and electronic equipment
CN116456165A (en) * 2023-06-20 2023-07-18 北京美摄网络科技有限公司 Scheduling method and device for AE engineering, electronic equipment and readable storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant