CN111083396A - Video synthesis method and device, electronic equipment and computer-readable storage medium - Google Patents
- Publication number: CN111083396A
- Application number: CN201911371693.4A
- Authority
- CN
- China
- Prior art keywords
- video
- materials
- audio
- track
- synthesized
- Prior art date
- Legal status
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/40—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Television Signal Processing For Recording (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The embodiment of the invention provides a video synthesis method, a video synthesis apparatus, an electronic device, and a computer-readable storage medium. The method comprises the following steps: acquiring a plurality of video materials and a plurality of audio materials corresponding to a video to be synthesized; transcoding the video materials and the audio materials respectively based on preset coding parameters to obtain a plurality of encoded video materials and a plurality of encoded audio materials; splicing the encoded video materials to obtain a video track corresponding to the video to be synthesized; splicing the encoded audio materials to obtain an audio track corresponding to the video to be synthesized; and synthesizing the video track and the audio track to generate the video to be synthesized. The embodiment of the invention realizes automatic generation of video content for a set theme, enables the materials to be reused, and improves their utilization rate.
Description
Technical Field
The present invention relates to the field of video processing technologies, and in particular to a video synthesis method and apparatus, an electronic device, and a computer-readable storage medium.
Background
In this era of short video, people often use mobile terminals such as mobile phones and tablet computers to shoot videos and pictures or record audio so as to capture the small moments of work and life. People can also install software with a media-material editing function on the mobile terminal, combine the shot videos and pictures with the recorded audio into a single video with sound, and add various special effects to the synthesized video.
A composite video is generally produced by shooting video content with a camera or a mobile phone and then editing and synthesizing it in post-production. Shooting and editing require certain professional knowledge and operating skills, the video synthesis cycle is long, and the synthesis efficiency is low. Moreover, during synthesis, the user has to screen the short videos shot in advance, which involves a heavy workload.
Disclosure of Invention
Embodiments of the present invention provide a video synthesis method and apparatus, an electronic device, and a computer-readable storage medium, so as to automatically generate video content for a set theme, enable materials to be reused, and improve the utilization rate of the materials.
The specific technical scheme is as follows:
in a first aspect of the embodiments of the present invention, a video synthesis method is provided, including:
acquiring a plurality of video materials and a plurality of audio materials corresponding to a video to be synthesized;
transcoding the plurality of video materials and the plurality of audio materials respectively based on preset coding parameters, and determining a plurality of coded video materials and a plurality of coded audio materials;
splicing the plurality of coded video materials to obtain a video track corresponding to the video to be synthesized;
splicing the plurality of encoded audio materials to obtain an audio track corresponding to the video to be synthesized;
and synthesizing the video track and the audio track to generate the video to be synthesized.
Optionally, the obtaining of multiple video materials and multiple audio materials corresponding to a video to be synthesized includes:
acquiring preset video description information corresponding to the video to be synthesized;
based on the video description information, a plurality of candidate video materials and a plurality of candidate audio materials are obtained by searching from a multimedia material library;
and acquiring the plurality of video materials selected from the plurality of candidate video materials by the user and the plurality of audio materials selected from the plurality of candidate audio materials.
Optionally, the video description information includes a video topic, a keyword, and a video type, and the retrieving a plurality of candidate video-class materials and a plurality of candidate audio-class materials from a multimedia material library based on the video description information includes:
searching the candidate video materials matched with the video theme, the keywords and the video type from a video material library;
and searching the candidate audio materials matched with the video theme and the keywords from an audio material library.
Optionally, the splicing the multiple encoded video materials to obtain the video track corresponding to the video to be synthesized includes:
acquiring a video playing sequence corresponding to the plurality of coded video materials set by a user;
and splicing the plurality of encoded video materials based on the video playing sequence to generate the video track.
Optionally, the splicing the plurality of encoded video-like materials based on the video playing order to generate the video track includes:
acquiring video transition characteristics set by the user between two adjacent coded video materials in the plurality of coded video materials;
and splicing the plurality of encoded video materials based on the video playing sequence and the video transition characteristics to generate the video track.
Optionally, the splicing the multiple encoded audio materials to obtain the audio track corresponding to the video to be synthesized includes:
acquiring an audio playing sequence corresponding to the plurality of coded audio materials set by a user;
and splicing the plurality of encoded audio materials based on the audio playing sequence to generate the audio track with the playing time length same as that of the video track.
In a second aspect of embodiments of the present invention, there is provided a video compositing apparatus, including:
the audio and video material acquisition module is used for acquiring a plurality of video materials and a plurality of audio materials corresponding to the video to be synthesized;
the audio and video material determining module is used for respectively transcoding the plurality of video materials and the plurality of audio materials based on preset coding parameters and determining a plurality of coded video materials and a plurality of coded audio materials;
the video track acquisition module is used for splicing the plurality of coded video materials to acquire a video track corresponding to the video to be synthesized;
the audio track acquisition module is used for splicing the plurality of encoded audio materials to acquire an audio track corresponding to the video to be synthesized;
and the video to be synthesized generating module is used for synthesizing the video track and the audio track to generate the video to be synthesized.
Optionally, the audio/video material obtaining module includes:
the description information acquisition unit is used for acquiring preset video description information corresponding to the video to be synthesized;
the candidate material retrieval unit is used for retrieving a plurality of candidate video materials and a plurality of candidate audio materials from a multimedia material library based on the video description information;
and the audio and video material acquisition unit is used for acquiring the plurality of video materials selected from the plurality of candidate video materials by the user and the plurality of audio materials selected from the plurality of candidate audio materials.
Optionally, the video description information includes a video topic, a keyword, and a video type, and the candidate material retrieval unit includes:
the candidate video material searching subunit is used for searching the candidate video materials matched with the video theme, the keywords and the video type from a video material library;
and the candidate audio material searching subunit is used for searching the candidate audio materials matched with the video theme and the keywords from an audio material library.
Optionally, the video track acquisition module includes:
the video playing sequence acquiring unit is used for acquiring video playing sequences corresponding to the plurality of coded video materials set by a user;
and the video track generating unit is used for splicing the plurality of encoded video materials based on the video playing sequence to generate the video track.
Optionally, the video track generation unit includes:
a video transition characteristic obtaining subunit, configured to obtain a video transition characteristic set by the user between two adjacent encoded video materials in the plurality of encoded video materials;
and the video track generation subunit is configured to splice the plurality of encoded video materials based on the video playing sequence and the video transition characteristics, so as to generate the video track.
Optionally, the audio track acquisition module includes:
the audio playing sequence acquisition unit is used for acquiring the audio playing sequence corresponding to the plurality of coded audio materials set by the user;
and the audio track generating unit is used for splicing the plurality of encoded audio materials based on the audio playing sequence and generating the audio track with the same playing time length as the video track.
In another aspect of the embodiments of the present invention, there is also provided an electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the video synthesis method when executing the program stored in the memory.
In still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the program is configured to implement the above-mentioned video composition method when executed by a processor.
According to the video synthesis scheme provided by the embodiment of the invention, a plurality of video materials and a plurality of audio materials corresponding to a video to be synthesized are obtained; the video materials and the audio materials are respectively transcoded based on preset coding parameters to obtain a plurality of encoded video materials and a plurality of encoded audio materials; the encoded video materials are spliced to obtain a video track corresponding to the video to be synthesized; the encoded audio materials are spliced to obtain an audio track corresponding to the video to be synthesized; and the video track and the audio track are synthesized to generate the video to be synthesized. By combining a plurality of video materials and audio materials into one video, the embodiment of the invention can automatically generate video content for a set theme, enable the materials to be reused, and improve their utilization rate.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a flowchart illustrating steps of a video synthesis method according to an embodiment of the present invention;
Fig. 2 is a flowchart illustrating steps of a video synthesis method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a video compositing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a video compositing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
Example one
Referring to fig. 1, a flowchart illustrating steps of a video composition method according to an embodiment of the present invention is shown, and as shown in fig. 1, the video composition method may include the following steps:
step 101: and acquiring a plurality of video materials and a plurality of audio materials corresponding to the video to be synthesized.
In the embodiment of the present invention, the video to be synthesized refers to a video that the user needs to synthesize. The video to be synthesized may be a news video, an entertainment video, and the like, and specifically, may be determined according to an actual need of a user, which is not limited in the embodiment of the present invention.
The video material refers to a video type material required when synthesizing a video to be synthesized, and the video material may include a video, an image and the like, and specifically, may be determined according to an actual situation.
The audio material refers to a material of an audio type required when synthesizing a video to be synthesized, and the audio material may include materials of audio, text, and the like, and specifically, may be determined according to actual situations.
When a user needs to synthesize a video, the video materials and audio materials to be obtained can be preset. Specifically, the user can set video description information of the video to be synthesized (such as video keywords and video types), and the plurality of video materials and the plurality of audio materials can then be retrieved according to this video description information.
After obtaining a plurality of video-class materials and a plurality of audio-class materials corresponding to the video to be synthesized, step 102 is executed.
Step 102: and respectively transcoding the plurality of video materials and the plurality of audio materials based on preset coding parameters, and determining a plurality of coded video materials and a plurality of coded audio materials.
The encoding parameters refer to the parameters used to transcode the plurality of video materials and the plurality of audio materials. They may include at least one of resolution, code rate, encoding format, and the like, and may be determined according to service requirements.
Transcoding the plurality of video materials and the plurality of audio materials with the encoding parameters ensures that the resulting materials share a uniform format, for example a uniform resolution, a uniform code rate, and a uniform encoding format.
In some examples, the encoding parameters may be input by the user. For example, when setting the video description information of the video to be synthesized, the user may also input the encoding parameters corresponding to that video, so that the selected video materials and audio materials, which may differ in resolution, code rate, encoding format, and the like, are transcoded into a uniform encoding format.
In some examples, the encoding parameter may be a parameter set according to a screen resolution of the terminal used by the user.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
An encoded video material refers to a video material obtained after transcoding the original video material with the encoding parameters.
An encoded audio material refers to an audio material obtained after transcoding the original audio material with the encoding parameters.
After the plurality of video materials and the plurality of audio materials corresponding to the video to be synthesized are obtained, they can be transcoded respectively according to the preset encoding parameters to obtain a plurality of encoded video materials and a plurality of encoded audio materials. For example, suppose the video materials comprise video material 1, video material 2, and video material 3, and the audio materials comprise audio material 1, audio material 2, and audio material 3. After video materials 1, 2, and 3 are respectively transcoded according to the encoding parameters, the corresponding encoded video materials 1, 2, and 3 are obtained; after audio materials 1, 2, and 3 are respectively transcoded according to the encoding parameters, the corresponding encoded audio materials 1, 2, and 3 are obtained. The resulting encoded video materials 1 to 3 and encoded audio materials 1 to 3 all share a uniform encoding format.
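The transcoding step above can be modelled in a short Python sketch. The `Material` type, its field names, and the preset values are illustrative assumptions, not taken from the patent; the point is only that every material comes out carrying the same preset coding parameters.

```python
from dataclasses import dataclass, replace

# Hypothetical model of a media material; the field names and preset
# values are invented for illustration.
@dataclass(frozen=True)
class Material:
    name: str
    kind: str            # "video" or "audio"
    codec: str
    bitrate_kbps: int

# Preset coding parameters of step 102: every material is transcoded to
# the same codec and code rate so later splicing works on a uniform format.
PRESET = {"codec": "h264", "bitrate_kbps": 2000}

def transcode(materials):
    """Return encoded materials that all share the preset parameters."""
    return [replace(m, **PRESET) for m in materials]

videos = [Material("video material 1", "video", "h265", 4000),
          Material("video material 2", "video", "mpeg4", 1500)]
encoded = transcode(videos)
assert all(m.codec == "h264" and m.bitrate_kbps == 2000 for m in encoded)
```

A real implementation would invoke an encoder here; the sketch only captures the invariant that transcoding establishes: uniform parameters across all materials.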
After transcoding the plurality of video-class materials and the plurality of audio-class materials respectively based on the preset coding parameters and determining the plurality of coded video-class materials and the plurality of coded audio-class materials, step 103 and step 104 are executed.
Step 103: and splicing the plurality of coded video materials to obtain a video track corresponding to the video to be synthesized.
The video track is a video playing track formed by splicing a plurality of encoded video materials in sequence. For example, if the plurality of encoded video materials includes video material 1, video material 2, and video material 3, they may be spliced into one complete video in which, say, video material 2, video material 1, and video material 3 are played in sequence.
After the plurality of encoded video materials are acquired, a user can splice the encoded video materials according to own requirements and personal preferences, and the splicing sequence can be set by the user.
In some examples, a user may add corresponding sequence marks to the plurality of encoded video materials, for example mark serial numbers 1, 2, 3, and so on. The splicing sequence of the encoded video materials is determined according to the mark serial numbers, and the encoded video materials are then spliced according to this sequence to obtain the video track.
In some examples, a preset page may be generated in advance, and a user sequentially arranges a plurality of encoded video materials according to their own needs and preferences to determine a splicing sequence of the plurality of encoded video materials, and then splices the encoded video materials according to the splicing sequence to obtain a video track.
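The sequence-mark splicing described above can be sketched as follows. The function name and the list-of-names representation of the encoded materials are hypothetical; the sketch only shows how user-assigned marks determine the playing order.

```python
def splice_by_marks(encoded_materials, marks):
    """Splice encoded materials into one track using user-assigned
    sequence marks: the material marked 1 plays first, then 2, and so on."""
    if sorted(marks) != list(range(1, len(encoded_materials) + 1)):
        raise ValueError("marks must be a permutation of 1..n")
    # Pair each material with its mark, sort by mark, keep the materials.
    return [m for _, m in sorted(zip(marks, encoded_materials))]

# The user marks video material 2 to play first, then 1, then 3:
track = splice_by_marks(
    ["video material 1", "video material 2", "video material 3"],
    marks=[2, 1, 3])
assert track == ["video material 2", "video material 1", "video material 3"]
```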
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
After the plurality of encoded video materials are spliced to obtain the video track corresponding to the video to be synthesized, step 105 is executed.
Step 104: and splicing the plurality of encoded audio materials to obtain an audio track corresponding to the video to be synthesized.
The audio track is an audio playing track formed by splicing a plurality of encoded audio materials in sequence. For example, if the plurality of encoded audio materials includes audio material 1, audio material 2, and audio material 3, they may be spliced into one complete audio stream in which, say, audio material 2, audio material 1, and audio material 3 are played in sequence.
After the multiple encoded audio materials are obtained, the user can splice the encoded audio materials according to the own requirements and personal preferences, and the splicing sequence can be set by the user.
In some examples, a user may add corresponding sequence marks to the plurality of encoded audio materials, for example mark serial numbers 1, 2, 3, and so on. The splicing sequence of the encoded audio materials is determined according to the mark serial numbers, and the encoded audio materials are then spliced according to this sequence to obtain the audio track.
In some examples, a preset page may be generated in advance, and a user sequentially arranges a plurality of encoded audio materials according to their own needs and preferences to determine a splicing sequence of the plurality of encoded audio materials, and then splices the encoded audio materials according to the splicing sequence to obtain an audio track.
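The claims additionally require the audio track's play length to equal the video track's. A minimal sketch of one way to achieve this is below; the trim-the-last-material policy is an assumption (the patent does not specify how the lengths are equalised), and the case where the audio runs shorter than the video is left unhandled.

```python
def build_audio_track(audio_durations, video_track_duration):
    """Concatenate audio materials in the user-set order, trimming the
    last one so the audio track's play length equals the video track's.
    Durations are in seconds."""
    track, total = [], 0.0
    for duration in audio_durations:
        if total >= video_track_duration:
            break                                  # remaining materials are dropped
        take = min(duration, video_track_duration - total)
        track.append(take)
        total += take
    return track, total

track, total = build_audio_track([20.0, 35.0, 15.0], video_track_duration=50.0)
assert track == [20.0, 30.0] and total == 50.0    # second material trimmed, third dropped
```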
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
After splicing the multiple encoded audio materials to obtain the audio track corresponding to the video to be synthesized, step 105 is executed.
Step 105: and synthesizing the video track and the audio track to generate the video to be synthesized.
After the audio track and the video track are obtained, they can be synthesized to obtain the video to be synthesized. Specifically, a synthesizer can be called to mix the video track and the audio track, thereby obtaining the video to be synthesized.
Of course, in a specific implementation, other video synthesis manners may also be adopted, and in particular, may be determined according to actual situations.
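One concrete way to perform this final mixing is to hand both finished tracks to a muxer such as ffmpeg. The sketch below only assembles the command line; the file names are placeholders, and stream copy is possible here precisely because both tracks were already transcoded to the preset parameters in step 102.

```python
def build_mux_command(video_track, audio_track, output):
    """Build an ffmpeg command that muxes a finished video track and a
    finished audio track into one output file without re-encoding."""
    return ["ffmpeg",
            "-i", video_track,          # input 0: the spliced video track
            "-i", audio_track,          # input 1: the spliced audio track
            "-map", "0:v:0",            # take the video stream from input 0
            "-map", "1:a:0",            # take the audio stream from input 1
            "-c", "copy",               # mux only, no re-encoding
            output]

cmd = build_mux_command("video_track.mp4", "audio_track.aac", "synthesized.mp4")
```

The command would then be run with `subprocess.run(cmd, check=True)` on a machine with ffmpeg installed.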
The scheme provided by the embodiment of the invention can automatically generate video content for a set theme from existing multimedia materials, enables the materials to be reused, and improves their utilization rate.
The video synthesis method provided by the embodiment of the invention obtains a plurality of video materials and a plurality of audio materials corresponding to a video to be synthesized; transcodes the video materials and the audio materials respectively based on preset coding parameters to obtain a plurality of encoded video materials and a plurality of encoded audio materials; splices the encoded video materials to obtain a video track corresponding to the video to be synthesized; splices the encoded audio materials to obtain an audio track corresponding to the video to be synthesized; and synthesizes the video track and the audio track to generate the video to be synthesized. By combining a plurality of video materials and audio materials into one video, the embodiment of the invention can automatically generate video content for a set theme, enable the materials to be reused, and improve their utilization rate.
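The five steps of Example one can be chained into one miniature end-to-end function. Everything here is a toy model (transcoding is reduced to tagging a material with a preset format string); it is meant only to show how steps 101 to 105 compose.

```python
def synthesize(video_materials, audio_materials, video_order, audio_order):
    """Steps 101-105 in miniature: materials in, one synthesized video
    (modelled as a dict of two tracks) out."""
    preset = "h264/2000kbps"                                # step 102: preset coding parameters
    enc_v = [f"{m}@{preset}" for m in video_materials]      # transcode video materials
    enc_a = [f"{m}@{preset}" for m in audio_materials]      # transcode audio materials
    video_track = [enc_v[i] for i in video_order]           # step 103: splice video track
    audio_track = [enc_a[i] for i in audio_order]           # step 104: splice audio track
    return {"video": video_track, "audio": audio_track}     # step 105: combine both tracks

out = synthesize(["video material 1", "video material 2"], ["audio material 1"],
                 video_order=[1, 0], audio_order=[0])
assert out["video"] == ["video material 2@h264/2000kbps", "video material 1@h264/2000kbps"]
```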
Example two
Referring to fig. 2, a flowchart illustrating steps of a video composition method according to an embodiment of the present invention is shown, and as shown in fig. 2, the video composition method may include the following steps:
step 201: and acquiring preset video description information corresponding to the video to be synthesized.
In the embodiment of the present invention, the video to be synthesized refers to a video that the user needs to synthesize. The video to be synthesized may be a news video, an entertainment video, and the like, and specifically, may be determined according to an actual need of a user, which is not limited in the embodiment of the present invention.
The video description information refers to the description information used to retrieve the video materials and audio materials required for the video to be synthesized, and may include the video topic, keywords, video type, and the like.
When a user needs to synthesize a video from a plurality of video materials and a plurality of audio materials, the user can input the video description information.
After the video description information corresponding to the preset video to be synthesized is obtained, step 202 is executed.
Step 202: and based on the video description information, a plurality of candidate video materials and a plurality of candidate audio materials are obtained by searching from a multimedia material library.
The candidate video material refers to a video material obtained by searching from a multimedia material library by using video description information.
The candidate audio material refers to the audio material obtained by searching from the multimedia material library by using the video description information.
After the preset video description information is obtained, a plurality of candidate video-class materials and a plurality of candidate audio-class materials can be retrieved from the multimedia material library by using the video description information.
Of course, in the present invention, the multimedia material library may be divided into a video material library and an audio material library, when candidate video materials are obtained, candidate video materials may be retrieved from the video material library, and when candidate audio materials are retrieved, candidate audio materials may be retrieved from the audio material library, which may be described in detail with reference to the following specific implementation.
In a specific implementation manner of the present invention, the video description information includes a video topic, a keyword, and a video type, and the step 202 may include:
substep S1: and searching the candidate video materials matched with the video theme, the keywords and the video type from a video material library.
In the embodiment of the present invention, the video material library refers to a database containing video materials, and the video materials may include video materials, image materials, and the like.
The video theme refers to a theme of a video to be generated, which is customized by a user, and the video theme can express a central idea of the video to be generated, such as comedy, horror and the like, and specifically, the theme can be determined according to business requirements, and the embodiment of the present invention does not limit this.
The keywords may be video keywords set by a user, such as specific characters, scenes, and the like, and specifically, may be determined according to actual situations, which is not limited in this embodiment of the present invention.
The video type refers to the type of the video to be generated, such as a news type, an entertainment type and the like, set by the user.
In some examples, a video setting interface may be preset, in which an input box of video topics, keywords, and video types is provided, and a user may input the video topics, the keywords, and the video types in the input box.
In some examples, a voice receiving device may be preset in the system to receive speech in which the user states the video topic, keywords, and video type; the set values are then obtained by recognizing the speech.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
After the video topic, keywords, and video type preset by the user are obtained, the candidate video materials matching them can be searched from the video material library. Specifically, video materials matching the topic can first be searched from the video material library according to the video topic; video materials matching the keywords can then be searched from the topic-matched materials according to the keywords; finally, video materials matching the video type can be searched from the keyword-matched materials, and the resulting type-matched video materials are used as the candidate video materials.
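The progressive topic, then keyword, then type filtering of sub-step S1 can be sketched as below. The dictionary field names and the sample library entries are invented for illustration.

```python
def retrieve_candidates(library, topic, keywords, video_type):
    """Filter a material library in three successive passes, as in
    sub-step S1: topic match, then keyword match, then type match."""
    hits = [m for m in library if m["topic"] == topic]
    hits = [m for m in hits if any(k in m["keywords"] for k in keywords)]
    hits = [m for m in hits if m["type"] == video_type]
    return hits

library = [
    {"name": "clip A", "topic": "comedy", "keywords": {"city", "street"}, "type": "news"},
    {"name": "clip B", "topic": "comedy", "keywords": {"beach"},          "type": "entertainment"},
    {"name": "clip C", "topic": "horror", "keywords": {"city"},          "type": "news"},
]
candidates = retrieve_candidates(library, "comedy", ["city"], "news")
assert [m["name"] for m in candidates] == ["clip A"]
```

Each pass narrows the previous result, so only materials matching all three criteria survive as candidates.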
Substep S2: and searching the candidate audio materials matched with the video theme and the keywords from an audio material library.
The audio material library refers to a database containing audio material, and the audio material may include audio material, text material, and the like.
After the video topic and keywords preset by the user are obtained, the candidate audio materials matching them can be searched from the audio material library. Specifically, audio materials matching the topic can first be searched from the audio material library according to the video topic; audio materials matching the keywords can then be searched from the topic-matched materials according to the keywords; the resulting keyword-matched audio materials are used as the candidate audio materials.
The video topic and keywords provided in this embodiment are used for retrieval and screening from the material library; the encoding type of a material, by contrast, is unrelated to its content. For example, a piece of news with the same content can be encoded with H.264 or H.265 without affecting the final playback. Even if the video segments in the material library have different encoding types, the spliced video content can be transcoded to produce consistently encoded content. This reduces the workload of personnel and improves the efficiency of video production.
After retrieving the candidate video class materials and the candidate audio class materials from the multimedia material library based on the video description information, step 203 is performed.
Step 203: and acquiring the plurality of video materials selected from the plurality of candidate video materials by the user and the plurality of audio materials selected from the plurality of candidate audio materials.
A video material refers to a material screened by the user from the plurality of candidate video materials for synthesizing the video to be synthesized; it may include materials such as videos and images, and may be determined according to the actual situation.
An audio material refers to a material screened by the user from the plurality of candidate audio materials for synthesizing the video to be synthesized; it may include materials such as audio and text, and may be determined according to the actual situation.
After the candidate video materials and the candidate audio materials are obtained from the multimedia material library, the user may select the video materials from the candidate video materials and the audio materials from the candidate audio materials. For example, the candidate video materials may include material 1, material 2, …, material 10; when the user selects material 1, material 3 and material 10, these are regarded as the video materials selected by the user. The candidate audio-class materials may include material a, material b, material e and material z; when the user selects material a, material e and material z, these are regarded as the audio-class materials selected by the user.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
After acquiring the plurality of video materials selected by the user from the plurality of candidate video materials and the plurality of audio materials selected from the plurality of candidate audio materials, step 204 is executed.
Step 204: and respectively transcoding the plurality of video materials and the plurality of audio materials based on preset coding parameters, and determining a plurality of coded video materials and a plurality of coded audio materials.
The encoding parameters are the parameters used to transcode the plurality of video-class materials and the plurality of audio-class materials; they may include at least one of resolution, code rate, encoding format and the like, and may be determined according to service requirements.
Transcoding the plurality of video materials and the plurality of audio materials with the encoding parameters yields materials in a uniform encoding format, for example a uniform resolution, a uniform code rate and a uniform encoding format.
In some examples, the encoding parameters may be input by the user. For example, when setting the video description information of the video to be synthesized, the user may also input the encoding parameters corresponding to that video; the selected video materials and audio materials, which may differ in resolution, code rate, encoding format and the like, are then transcoded according to those parameters, giving all materials a unified encoding format.
In some examples, the encoding parameter may be a parameter set according to a screen resolution of the terminal used by the user.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
An encoded video material refers to a video material obtained after transcoding the video material with the encoding parameters.
An encoded audio material refers to an audio material obtained after transcoding the audio material with the encoding parameters.
After the plurality of video materials and the plurality of audio materials corresponding to the video to be synthesized are obtained, they can be transcoded respectively according to the preset encoding parameters to obtain a plurality of encoded video materials and a plurality of encoded audio materials. For example, suppose the plurality of video materials comprises video material 1, video material 2 and video material 3, and the plurality of audio materials comprises audio material 1, audio material 2 and audio material 3. After video materials 1, 2 and 3 are transcoded according to the encoding parameters, the corresponding encoded video materials 1, 2 and 3 are obtained; after audio materials 1, 2 and 3 are transcoded according to the encoding parameters, the corresponding encoded audio materials 1, 2 and 3 are obtained. The resulting encoded video materials 1 to 3 and encoded audio materials 1 to 3 are materials with a uniform encoding format.
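As a hedged illustration of step 204, the following Python sketch models transcoding as rewriting each material's encoding metadata to a single target; a real system would re-encode the media itself, for example with an ffmpeg command such as `ffmpeg -i in.mp4 -c:v libx264 -b:v 2000k -s 1280x720 out.mp4`. All names and values here are illustrative assumptions.

```python
# Target encoding parameters that every material is unified to (illustrative).
TARGET = {"resolution": "1280x720", "bitrate_kbps": 2000, "codec": "h264"}

def transcode(material, params):
    """Return a copy of the material with its encoding fields unified.

    Only metadata is rewritten here; a real implementation would re-encode
    the underlying stream.
    """
    encoded = dict(material)
    encoded.update(params)
    return encoded

videos = [
    {"name": "video 1", "codec": "h265", "resolution": "1920x1080", "bitrate_kbps": 4000},
    {"name": "video 2", "codec": "h264", "resolution": "640x480", "bitrate_kbps": 800},
]
encoded_videos = [transcode(m, TARGET) for m in videos]
```

Audio materials would be handled analogously with audio-specific parameters such as sample rate and audio codec.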
Step 205: and acquiring a video playing sequence corresponding to the plurality of coded video materials set by the user.
The video playing sequence refers to the playing order of the plurality of encoded video-class materials, and may be set by the user according to personal preference. For example, suppose the plurality of encoded video-class materials includes video 1, video 2, video 3, image 1 and image 2. The user may arrange them as video 1, image 1, video 2, image 2, video 3, in which case that is the video playing sequence of the encoded video materials; the user may equally arrange them as video 3, video 1, image 1, video 2, image 2, in which case that arrangement is the video playing sequence.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
After the video playing sequence corresponding to the plurality of encoded video-like materials set by the user is obtained, step 206 is executed.
Step 206: and splicing the plurality of encoded video materials based on the video playing sequence to generate the video track.
The video track is a video playing track formed by splicing the plurality of encoded video materials in sequence. For example, if the plurality of encoded video-class materials includes video material 1, video material 2 and video material 3, they may be spliced into one complete video in which, say, video material 2, video material 1 and video material 3 play in that order.
After the multiple encoded video materials set by the user are obtained, they may be spliced according to the video playing sequence to generate a video track, as described in detail in the following specific implementation.
In a specific implementation manner of the present invention, the step 206 may include:
sub-step M1: and acquiring video transition characteristics set by the user between two adjacent coded video materials in the plurality of coded video materials.
In the embodiment of the present invention, the video transition characteristics refer to transition effects between two adjacent video materials when the video materials are spliced, such as fade-in and fade-out, rotation, and shutter effects.
After the video playing sequence of the multiple encoded video materials is obtained, the user may set a video transition characteristic between any two adjacent encoded video materials as required. For example, sorting the multiple encoded video materials by the video playing sequence may give: video 1, video 2, video 3 and video 4; the user may then set a fade-in/fade-out transition effect between video 2 and video 3, a shutter transition effect between video 3 and video 4, and so on.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
After obtaining the video transition characteristics set by the user between two adjacent ones of the plurality of encoded video-like material, sub-step M2 is performed.
Sub-step M2: and splicing the plurality of encoded video materials and the video transition characteristics based on the video playing sequence and the video transition characteristics to generate the video track.
After the video playing sequence corresponding to the plurality of encoded video materials and the video transition characteristics set by the user between adjacent encoded video materials are obtained, the encoded video materials and the transition characteristics can be spliced based on the video playing sequence, so that the video track is obtained.
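Sub-steps M1 and M2 can be sketched as follows. The data model (plain strings for clips, a dict mapping each adjacent pair to an effect name) is an illustrative assumption; a real implementation would render the transition effects into the spliced stream.

```python
# Hypothetical sketch: order the clips by the user's play sequence, then
# interleave the transition effect chosen for each adjacent pair.

def build_video_track(materials, play_order, transitions):
    """Splice materials in play order, inserting transitions between pairs.

    transitions maps an adjacent pair (a, b) to an effect name such as
    "fade" or "shutter"; pairs without an entry get no transition.
    """
    ordered = sorted(materials, key=lambda m: play_order.index(m))
    track = []
    for i, clip in enumerate(ordered):
        track.append(clip)
        if i + 1 < len(ordered):
            effect = transitions.get((clip, ordered[i + 1]))
            if effect:
                track.append(f"transition:{effect}")
    return track

order = ["video 1", "video 2", "video 3", "video 4"]
track = build_video_track(
    ["video 3", "video 1", "video 4", "video 2"], order,
    {("video 2", "video 3"): "fade", ("video 3", "video 4"): "shutter"},
)
```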
After the plurality of encoded video-like materials are processed by splicing based on the video playing order to generate the video track, step 209 is performed.
Step 207: and acquiring the audio playing sequence corresponding to the plurality of coded audio materials set by the user.
The audio playing sequence refers to the playing order of the plurality of encoded audio-class materials, and may be set by the user according to personal preference. For example, suppose the plurality of encoded audio-class materials includes audio 1, audio 2, audio 3, text 1 and text 2. The user may arrange them as audio 1, text 1, audio 2, text 2, audio 3, in which case that is the audio playing sequence of the encoded audio materials; the user may equally arrange them as audio 3, audio 1, text 1, audio 2, text 2, in which case that arrangement is the audio playing sequence.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present invention, and are not to be taken as the only limitation of the embodiments of the present invention.
After the audio playing sequence corresponding to the plurality of encoded audio-like materials set by the user is obtained, step 208 is executed.
Step 208: and splicing the plurality of encoded audio materials based on the audio playing sequence to generate the audio track with the playing time length same as that of the video track.
The audio track is an audio playing track formed by splicing the plurality of encoded audio materials in sequence, with the same playing time length as the video track. For example, if the plurality of encoded audio-class materials includes audio material 1, audio material 2 and audio material 3, they may be spliced into one complete audio in which, say, audio material 2, audio material 1 and audio material 3 play in that order.
After the multiple encoded audio materials set by the user are obtained, the multiple encoded audio materials can be spliced according to the audio playing sequence, and an audio track can be generated.
It can be understood that in this embodiment the playing time lengths of the audio track and the video track are the same: for example, when the playing time of the spliced video track is 10 min, the playing time of the spliced audio track is also 10 min; when the playing time of the spliced video track is 8 min, the playing time of the spliced audio track is 8 min.
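The same-length constraint of step 208 can be illustrated with a minimal Python sketch; clip names and durations are assumptions, and a real implementation would operate on the actual audio streams (trimming or padding as needed) rather than rejecting mismatched input.

```python
# Hypothetical sketch: splice audio clips in order, enforcing that their total
# duration equals the video track's duration, as required by step 208.

def splice_audio_track(audio_clips, video_track_duration_s):
    """audio_clips is a list of (name, duration_s) pairs in play order."""
    total = sum(duration for _, duration in audio_clips)
    if total != video_track_duration_s:
        raise ValueError(
            f"audio track is {total}s but video track is "
            f"{video_track_duration_s}s; the lengths must match"
        )
    return [name for name, _ in audio_clips]

clips = [("audio 1", 200), ("text 1", 100), ("audio 2", 300)]
audio_track = splice_audio_track(clips, 600)  # matches a 10 min video track
```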
After the splicing process is performed on the multiple encoded audio-like materials based on the audio playing order, and an audio track with the same playing time length as the video track is generated, step 209 is performed.
Step 209: and synthesizing the video track and the audio track to generate the video to be synthesized.
After the video track and the audio track are obtained, they can be synthesized to obtain the video to be synthesized; specifically, a synthesizer can be called to mix the video track and the audio track, thereby obtaining the video to be synthesized.
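Step 209 can be sketched as a synthesizer that checks the two tracks agree in length and pairs them into one container; in practice the streams would be muxed, for example with a command such as `ffmpeg -i video.mp4 -i audio.aac -c copy output.mp4`. The data model here is an illustrative assumption.

```python
# Hypothetical sketch of the synthesizer call in step 209: combine the video
# track and the audio track into one output after verifying equal durations.

def synthesize(video_track, audio_track):
    """Pair a video track and an audio track into one synthesized video."""
    if video_track["duration_s"] != audio_track["duration_s"]:
        raise ValueError("video track and audio track durations differ")
    return {
        "video": video_track["name"],
        "audio": audio_track["name"],
        "duration_s": video_track["duration_s"],
    }

result = synthesize(
    {"name": "vtrack", "duration_s": 600},
    {"name": "atrack", "duration_s": 600},
)
```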
Of course, in a specific implementation, other video synthesis manners may also be adopted, and in particular, may be determined according to actual situations.
The scheme provided by the embodiment of the invention can automatically generate video content for a set theme from existing multimedia materials, enabling the materials to be reused and improving their utilization rate. By classifying and managing materials of different types, the invention makes it easy to retrieve multimedia materials conforming to the theme, and the provided video transition characteristics are spliced in automatically during generation, ensuring the continuity of the video content.
The video synthesis method provided by the embodiment of the invention comprises the steps of obtaining a plurality of video materials and a plurality of audio materials corresponding to a video to be synthesized, respectively transcoding the plurality of video materials and the plurality of audio materials based on preset coding parameters, determining the plurality of coded video materials and the plurality of coded audio materials, splicing the plurality of coded video materials to obtain a video track corresponding to the video to be synthesized, splicing the plurality of coded audio materials to obtain an audio track corresponding to the video to be synthesized, and synthesizing the video track and the audio track to generate the video to be synthesized. According to the embodiment of the invention, a plurality of video materials and audio materials are combined into one video, so that the automatic generation of the set theme video content can be realized, the reutilization of the materials is improved, and the utilization rate of the materials is improved.
EXAMPLE III
Referring to fig. 3, which shows a schematic structural diagram of a video compositing apparatus according to an embodiment of the present invention, as shown in fig. 3, the video compositing apparatus 300 may include the following modules:
the audio and video material obtaining module 310 is configured to obtain a plurality of video materials and a plurality of audio materials corresponding to a video to be synthesized;
the audio and video material determining module 320 is configured to transcode the plurality of video materials and the plurality of audio materials respectively based on preset coding parameters, and determine a plurality of coded video materials and a plurality of coded audio materials;
a video track obtaining module 330, configured to perform splicing processing on the multiple encoded video materials, so as to obtain a video track corresponding to the video to be synthesized;
an audio track obtaining module 340, configured to splice the multiple encoded audio materials to obtain an audio track corresponding to the video to be synthesized;
and a to-be-synthesized video generating module 350, configured to perform synthesis processing on the video track and the audio track to generate the to-be-synthesized video.
The video synthesis device provided by the embodiment of the invention obtains a plurality of video materials and a plurality of audio materials corresponding to a video to be synthesized, respectively transcodes the plurality of video materials and the plurality of audio materials based on preset coding parameters, determines the plurality of coded video materials and the plurality of coded audio materials, splices the plurality of coded video materials to obtain a video track corresponding to the video to be synthesized, splices the plurality of coded audio materials to obtain an audio track corresponding to the video to be synthesized, and synthesizes the video track and the audio track to generate the video to be synthesized. According to the embodiment of the invention, a plurality of video materials and audio materials are combined into one video, so that the automatic generation of the set theme video content can be realized, the reutilization of the materials is improved, and the utilization rate of the materials is improved.
Example four
Referring to fig. 4, a schematic structural diagram of a video compositing apparatus according to an embodiment of the present invention is shown, and as shown in fig. 4, the video compositing apparatus 400 may include the following modules:
an audio/video material obtaining module 410, configured to obtain multiple video materials and multiple audio materials corresponding to a video to be synthesized;
the audio and video material determining module 420 is configured to transcode the plurality of video materials and the plurality of audio materials respectively based on preset coding parameters, and determine a plurality of coded video materials and a plurality of coded audio materials;
a video track obtaining module 430, configured to perform splicing processing on the multiple encoded video materials, and obtain a video track corresponding to the video to be synthesized;
an audio track obtaining module 440, configured to splice the multiple encoded audio materials to obtain an audio track corresponding to the video to be synthesized;
and a to-be-synthesized video generating module 450, configured to perform synthesis processing on the video track and the audio track to generate the to-be-synthesized video.
Optionally, the audio/video material obtaining module 410 includes:
a description information obtaining unit 411, configured to obtain video description information corresponding to the preset video to be synthesized;
a candidate material retrieving unit 412, configured to retrieve a plurality of candidate video-class materials and a plurality of candidate audio-class materials from a multimedia material library based on the video description information;
the audio/video material obtaining unit 413 is configured to obtain the plurality of video materials selected by the user from the plurality of candidate video materials and the plurality of audio materials selected from the plurality of candidate audio materials.
Optionally, the video description information includes a video topic, a keyword, and a video type, and the candidate material retrieving unit 412 includes:
the candidate video material searching subunit is used for searching the candidate video materials matched with the video theme, the keywords and the video type from a video material library;
and the candidate audio material searching subunit is used for searching the candidate audio materials matched with the video theme and the keywords from an audio material library.
Optionally, the video track acquiring module 430 includes:
a video playing sequence acquiring unit 431, configured to acquire a video playing sequence corresponding to the plurality of encoded video-like materials set by the user;
a video track generating unit 432, configured to perform splicing processing on the multiple encoded video-like materials based on the video playing order, and generate the video track.
Optionally, the video track generating unit 432 includes:
a video transition characteristic obtaining subunit, configured to obtain a video transition characteristic set by the user between two adjacent encoded video materials in the plurality of encoded video materials;
and the video track generation subunit is configured to perform splicing processing on the plurality of encoded video materials and the video transition characteristics based on the video playing sequence and the video transition characteristics, so as to generate the video track.
Optionally, the audio track acquisition module 440 includes:
an audio playing sequence acquiring unit 441, configured to acquire an audio playing sequence corresponding to the multiple encoded audio materials set by the user;
an audio track generating unit 442, configured to perform splicing processing on the multiple encoded audio-like materials based on the audio playing sequence, and generate the audio track with the same playing time length as the video track.
The video synthesis device provided by the embodiment of the invention obtains a plurality of video materials and a plurality of audio materials corresponding to a video to be synthesized, respectively transcodes the plurality of video materials and the plurality of audio materials based on preset coding parameters, determines the plurality of coded video materials and the plurality of coded audio materials, splices the plurality of coded video materials to obtain a video track corresponding to the video to be synthesized, splices the plurality of coded audio materials to obtain an audio track corresponding to the video to be synthesized, and synthesizes the video track and the audio track to generate the video to be synthesized. According to the embodiment of the invention, a plurality of video materials and audio materials are combined into one video, so that the automatic generation of the set theme video content can be realized, the reutilization of the materials is improved, and the utilization rate of the materials is improved.
An embodiment of the present invention further provides an electronic device, as shown in fig. 5, which includes a processor 501, a communication interface 502, a memory 503 and a communication bus 504, where the processor 501, the communication interface 502 and the memory 503 complete mutual communication through the communication bus 504,
a memory 503 for storing a computer program;
the processor 501, when executing the program stored in the memory 503, implements the following steps:
acquiring a plurality of video materials and a plurality of audio materials corresponding to a video to be synthesized;
transcoding the plurality of video materials and the plurality of audio materials respectively based on preset coding parameters, and determining a plurality of coded video materials and a plurality of coded audio materials;
splicing the plurality of coded video materials to obtain a video track corresponding to the video to be synthesized;
splicing the plurality of encoded audio materials to obtain an audio track corresponding to the video to be synthesized;
and synthesizing the video track and the audio track to generate the video to be synthesized.
Optionally, the obtaining of multiple video materials and multiple audio materials corresponding to a video to be synthesized includes:
acquiring preset video description information corresponding to the video to be synthesized;
based on the video description information, a plurality of candidate video materials and a plurality of candidate audio materials are obtained by searching from a multimedia material library;
and acquiring the plurality of video materials selected from the plurality of candidate video materials by the user and the plurality of audio materials selected from the plurality of candidate audio materials.
Optionally, the video description information includes a video topic, a keyword, and a video type, and the retrieving a plurality of candidate video-class materials and a plurality of candidate audio-class materials from a multimedia material library based on the video description information includes:
searching the candidate video materials matched with the video theme, the keywords and the video type from a video material library;
and searching the candidate audio materials matched with the video theme and the keywords from an audio material library.
Optionally, the splicing the multiple encoded video materials to obtain the video track corresponding to the video to be synthesized includes:
acquiring a video playing sequence corresponding to the plurality of coded video materials set by a user;
and splicing the plurality of encoded video materials based on the video playing sequence to generate the video track.
Optionally, the splicing the plurality of encoded video-like materials based on the video playing order to generate the video track includes:
acquiring video transition characteristics set by the user between two adjacent coded video materials in the plurality of coded video materials;
and splicing the plurality of encoded video materials and the video transition characteristics based on the video playing sequence and the video transition characteristics to generate the video track.
Optionally, the splicing the multiple encoded audio materials to obtain the audio track corresponding to the video to be synthesized includes:
acquiring an audio playing sequence corresponding to the plurality of coded audio materials set by a user;
and splicing the plurality of encoded audio materials based on the audio playing sequence to generate the audio track with the playing time length same as that of the video track.
The communication bus mentioned in the above terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the terminal and other equipment.
The Memory may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, which stores instructions that, when executed on a computer, cause the computer to perform the video composition method according to any one of the above embodiments.
In a further embodiment provided by the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the video composition method of any of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. A method for video compositing, comprising:
acquiring a plurality of video materials and a plurality of audio materials corresponding to a video to be synthesized;
transcoding the plurality of video materials and the plurality of audio materials respectively based on preset coding parameters, and determining a plurality of coded video materials and a plurality of coded audio materials;
splicing the plurality of coded video materials to obtain a video track corresponding to the video to be synthesized;
splicing the plurality of encoded audio materials to obtain an audio track corresponding to the video to be synthesized;
and synthesizing the video track and the audio track to generate the video to be synthesized.
2. The method according to claim 1, wherein acquiring the plurality of video materials and the plurality of audio materials corresponding to the video to be synthesized comprises:
acquiring preset video description information corresponding to the video to be synthesized;
retrieving a plurality of candidate video materials and a plurality of candidate audio materials from a multimedia material library based on the video description information;
and acquiring the plurality of video materials selected by a user from the plurality of candidate video materials and the plurality of audio materials selected by the user from the plurality of candidate audio materials.
3. The method according to claim 2, wherein the video description information comprises a video theme, keywords and a video type, and retrieving the plurality of candidate video materials and the plurality of candidate audio materials from the multimedia material library based on the video description information comprises:
retrieving, from a video material library, the candidate video materials that match the video theme, the keywords and the video type;
and retrieving, from an audio material library, the candidate audio materials that match the video theme and the keywords.
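Claim 3 splits retrieval by modality: video candidates must match theme, keywords and video type, while audio candidates match only theme and keywords. The minimal sketch below shows such matching over an invented in-memory library; the `themes`/`keywords`/`type` record fields and the `matches` helper are assumptions for illustration, not from the patent.

```python
from typing import Dict, List, Optional

def matches(entry: Dict, theme: str, keywords: List[str],
            video_type: Optional[str] = None) -> bool:
    """An entry matches if it covers the theme, shares at least one
    keyword, and (for video material) has the requested type."""
    if theme not in entry["themes"]:
        return False
    if not set(keywords) & set(entry["keywords"]):
        return False
    if video_type is not None and entry.get("type") != video_type:
        return False
    return True

video_library = [
    {"name": "v1", "themes": ["travel"], "keywords": ["beach", "sunset"], "type": "landscape"},
    {"name": "v2", "themes": ["travel"], "keywords": ["city"], "type": "portrait"},
]
audio_library = [
    {"name": "a1", "themes": ["travel"], "keywords": ["beach"]},
    {"name": "a2", "themes": ["news"], "keywords": ["city"]},
]

# Video candidates are filtered on theme + keywords + type,
# audio candidates on theme + keywords only, mirroring the claim.
candidate_videos = [e for e in video_library if matches(e, "travel", ["beach"], "landscape")]
candidate_audios = [e for e in audio_library if matches(e, "travel", ["beach"])]
```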
4. The method according to claim 1, wherein splicing the plurality of encoded video materials to obtain the video track corresponding to the video to be synthesized comprises:
acquiring a video playing order set by a user for the plurality of encoded video materials;
and splicing the plurality of encoded video materials based on the video playing order to generate the video track.
5. The method according to claim 4, wherein splicing the plurality of encoded video materials based on the video playing order to generate the video track comprises:
acquiring a video transition effect set by the user between each two adjacent encoded video materials among the plurality of encoded video materials;
and splicing the plurality of encoded video materials based on the video playing order and the video transition effects to generate the video track.
6. The method according to claim 1, wherein splicing the plurality of encoded audio materials to obtain the audio track corresponding to the video to be synthesized comprises:
acquiring an audio playing order set by a user for the plurality of encoded audio materials;
and splicing the plurality of encoded audio materials based on the audio playing order to generate the audio track, whose playing duration is the same as that of the video track.
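Claim 6 requires the spliced audio track to have the same playing duration as the video track. One common way to satisfy that constraint (an assumption here — the claim does not prescribe how) is to repeat the ordered audio segments and trim the last one. The sketch below uses a hypothetical `fit_audio` helper over `(name, duration)` pairs.

```python
from typing import List, Tuple

def fit_audio(segments: List[Tuple[str, float]], target: float) -> List[Tuple[str, float]]:
    """Cycle through the spliced segments in playing order, trimming the
    final segment so the total duration equals `target` seconds."""
    if not segments:
        raise ValueError("at least one audio segment is required")
    track: List[Tuple[str, float]] = []
    total, i = 0.0, 0
    while total < target:
        name, dur = segments[i % len(segments)]
        dur = min(dur, target - total)  # trim the last repetition
        track.append((name, dur))
        total += dur
        i += 1
    return track

# 3s + 4s segments fitted to a 10s video track: intro, loop, then a
# trimmed repeat of intro.
audio_track = fit_audio([("intro", 3.0), ("loop", 4.0)], target=10.0)
```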
7. A video synthesis apparatus, comprising:
an audio and video material acquisition module, configured to acquire a plurality of video materials and a plurality of audio materials corresponding to a video to be synthesized;
an audio and video material determining module, configured to transcode the plurality of video materials and the plurality of audio materials respectively based on preset coding parameters, to obtain a plurality of encoded video materials and a plurality of encoded audio materials;
a video track acquisition module, configured to splice the plurality of encoded video materials to obtain a video track corresponding to the video to be synthesized;
an audio track acquisition module, configured to splice the plurality of encoded audio materials to obtain an audio track corresponding to the video to be synthesized;
and a video generation module, configured to synthesize the video track and the audio track to generate the video to be synthesized.
8. The apparatus according to claim 7, wherein the audio and video material acquisition module comprises:
a description information acquisition unit, configured to acquire preset video description information corresponding to the video to be synthesized;
a candidate material retrieval unit, configured to retrieve a plurality of candidate video materials and a plurality of candidate audio materials from a multimedia material library based on the video description information;
and an audio and video material acquisition unit, configured to acquire the plurality of video materials selected by a user from the plurality of candidate video materials and the plurality of audio materials selected by the user from the plurality of candidate audio materials.
9. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
and the processor is configured to implement the video synthesis method according to any one of claims 1 to 6 when executing the program stored in the memory.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the video synthesis method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911371693.4A CN111083396B (en) | 2019-12-26 | 2019-12-26 | Video synthesis method and device, electronic equipment and computer-readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111083396A (en) | 2020-04-28 |
CN111083396B CN111083396B (en) | 2022-08-02 |
Family
ID=70318712
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911371693.4A Active CN111083396B (en) | 2019-12-26 | 2019-12-26 | Video synthesis method and device, electronic equipment and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111083396B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110691276A (en) * | 2019-11-06 | 2020-01-14 | 北京字节跳动网络技术有限公司 | Method and device for splicing multimedia segments, mobile terminal and storage medium |
CN111683209A (en) * | 2020-06-10 | 2020-09-18 | 北京奇艺世纪科技有限公司 | Mixed-cut video generation method and device, electronic equipment and computer-readable storage medium |
CN112040271A (en) * | 2020-09-04 | 2020-12-04 | 杭州七依久科技有限公司 | Cloud intelligent editing system and method for visual programming |
CN112153463A (en) * | 2020-09-04 | 2020-12-29 | 上海七牛信息技术有限公司 | Multi-material video synthesis method and device, electronic equipment and storage medium |
CN112528049A (en) * | 2020-12-17 | 2021-03-19 | 北京达佳互联信息技术有限公司 | Video synthesis method and device, electronic equipment and computer-readable storage medium |
CN112954391A (en) * | 2021-02-05 | 2021-06-11 | 北京百度网讯科技有限公司 | Video editing method and device and electronic equipment |
CN113343827A (en) * | 2021-05-31 | 2021-09-03 | 北京达佳互联信息技术有限公司 | Video processing method and device, electronic equipment and computer readable storage medium |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010043785A1 (en) * | 2000-03-30 | 2001-11-22 | Hidehiko Teshirogi | Magnetic-tape recording apparatus and method, magnetic-tape reproduction apparatus and method, and recording medium |
CN1510501A (en) * | 2002-12-11 | 2004-07-07 | Eastman Kodak Company | System and method for synthesizing filmslide |
KR20090062562A (en) * | 2007-12-13 | 2009-06-17 | 삼성전자주식회사 | Apparatus and method for generating multimedia email |
CN103928039A (en) * | 2014-04-15 | 2014-07-16 | 北京奇艺世纪科技有限公司 | Video compositing method and device |
CN103971713A (en) * | 2014-05-07 | 2014-08-06 | 厦门美图之家科技有限公司 | Video file filter processing method |
CN104244086A (en) * | 2014-09-03 | 2014-12-24 | 陈飞 | Video real-time splicing device and method based on real-time conversation semantic analysis |
CN104376049A (en) * | 2014-10-29 | 2015-02-25 | 中国电子科技集团公司第二十八研究所 | Virtual news generation method based on crisis situations |
US20170026719A1 (en) * | 2015-06-17 | 2017-01-26 | Lomotif Private Limited | Method for generating a composition of audible and visual media |
CN108495141A (en) * | 2018-03-05 | 2018-09-04 | 网宿科技股份有限公司 | Audio and video synthesis method and system |
CN108881957A (en) * | 2017-11-02 | 2018-11-23 | 北京视联动力国际信息技术有限公司 | Multimedia file mixing method and device |
CN109618222A (en) * | 2018-12-27 | 2019-04-12 | 北京字节跳动网络技术有限公司 | Spliced video generation method and device, terminal device and storage medium |
CN110519638A (en) * | 2019-09-06 | 2019-11-29 | Oppo广东移动通信有限公司 | Processing method, processing unit, electronic device and storage medium |
CN110545476A (en) * | 2019-09-23 | 2019-12-06 | 广州酷狗计算机科技有限公司 | Video synthesis method and device, computer equipment and storage medium |
CN110572722A (en) * | 2019-09-26 | 2019-12-13 | 腾讯科技(深圳)有限公司 | Video clipping method, device, equipment and readable storage medium |
Non-Patent Citations (1)
Title |
---|
袁峰 et al.: "Software Selection for Media Material Collection/Production and Multimedia Courseware Synthesis", 《云南农业大学学报》 (Journal of Yunnan Agricultural University) * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110691276A (en) * | 2019-11-06 | 2020-01-14 | 北京字节跳动网络技术有限公司 | Method and device for splicing multimedia segments, mobile terminal and storage medium |
CN111683209A (en) * | 2020-06-10 | 2020-09-18 | 北京奇艺世纪科技有限公司 | Mixed-cut video generation method and device, electronic equipment and computer-readable storage medium |
CN112040271A (en) * | 2020-09-04 | 2020-12-04 | 杭州七依久科技有限公司 | Cloud intelligent editing system and method for visual programming |
CN112153463A (en) * | 2020-09-04 | 2020-12-29 | 上海七牛信息技术有限公司 | Multi-material video synthesis method and device, electronic equipment and storage medium |
CN112528049A (en) * | 2020-12-17 | 2021-03-19 | 北京达佳互联信息技术有限公司 | Video synthesis method and device, electronic equipment and computer-readable storage medium |
CN112528049B (en) * | 2020-12-17 | 2023-08-08 | 北京达佳互联信息技术有限公司 | Video synthesis method, device, electronic equipment and computer readable storage medium |
CN112954391A (en) * | 2021-02-05 | 2021-06-11 | 北京百度网讯科技有限公司 | Video editing method and device and electronic equipment |
CN112954391B (en) * | 2021-02-05 | 2022-12-06 | 北京百度网讯科技有限公司 | Video editing method and device and electronic equipment |
CN113343827A (en) * | 2021-05-31 | 2021-09-03 | 北京达佳互联信息技术有限公司 | Video processing method and device, electronic equipment and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111083396B (en) | 2022-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111083396B (en) | Video synthesis method and device, electronic equipment and computer-readable storage medium | |
US11743514B2 (en) | Apparatus, systems and methods for a content commentary community | |
Thorson et al. | YouTube, Twitter and the Occupy movement: Connecting content and circulation practices | |
US8655146B2 (en) | Collection and concurrent integration of supplemental information related to currently playing media | |
US9304657B2 (en) | Audio tagging | |
KR101814154B1 (en) | Information processing system, and multimedia information processing method and system | |
US20090034933A1 (en) | Method and System for Remote Digital Editing Using Narrow Band Channels | |
US20180068188A1 (en) | Video analyzing method and video processing apparatus thereof | |
US20150100582A1 (en) | Association of topic labels with digital content | |
CN110248116B (en) | Picture processing method and device, computer equipment and storage medium | |
CN112929730A (en) | Bullet screen processing method and device, electronic equipment, storage medium and system | |
KR20080030490A (en) | Recording-and-reproducing apparatus and recording-and-reproducing method | |
CN106790558B (en) | Film multi-version integration storage and extraction system | |
US20090234886A1 (en) | Apparatus and Method for Arranging Metadata | |
KR102308508B1 (en) | Review making system | |
US9524752B2 (en) | Method and system for automatic B-roll video production | |
JP2014130536A (en) | Information management device, server, and control method | |
KR101295377B1 (en) | Method for constructing of file format and apparatus and method for processing broadcast signal with file which has file format | |
US7720798B2 (en) | Transmitter-receiver system, transmitting apparatus, transmitting method, receiving apparatus, receiving method, and program | |
US20150026147A1 (en) | Method and system for searches of digital content | |
RU2690163C2 (en) | Information processing device and information processing method | |
KR20150106472A (en) | Method and apparatus for providing contents | |
KR102384263B1 (en) | Method and system for remote medical service using artificial intelligence | |
JP2008252529A (en) | Terminal | |
KR101465258B1 (en) | method for displaying photo and termianl using the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||