CN115278306B - Video editing method and device - Google Patents

Info

Publication number: CN115278306B
Application number: CN202210700104.8A
Authority: CN (China)
Prior art keywords: video, clipping, information, video material, frame
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN115278306A (en)
Inventor: 周桂鑫
Current Assignee: Alibaba China Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Alibaba China Co Ltd
Application filed by Alibaba China Co Ltd
Priority to CN202210700104.8A
Publication of CN115278306A
Application granted
Publication of CN115278306B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234345 Processing of video elementary streams involving reformatting operations performed only on part of the stream, e.g. a region of the image or a time segment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/49 Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440245 Processing of video elementary streams involving reformatting operations performed only on part of the stream, e.g. a region of the image or a time segment
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The embodiments of the present application disclose a video editing method and apparatus. The main technical scheme comprises the following steps: providing a timeline configuration interface to a user; acquiring timeline configuration information input by the user through the timeline configuration interface; generating timeline scheduling information according to the timeline configuration information, the timeline scheduling information comprising the to-be-clipped video material information and clip type information corresponding to each time unit; and decoding at least one video material, sequentially determining, in each time unit, the video frames to be clipped corresponding to that time unit, clipping those frames using the clip type information corresponding to the time unit, and encoding the clipped video frames to obtain a target video. The application enables video editing to be carried out conveniently and efficiently.

Description

Video editing method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a video editing method and apparatus.
Background
With the continuing spread of computer technology and the vigorous growth of internet industries such as short video, self-media, and social networks, the demand for video editing is becoming ever stronger. Video editing is the use of software to perform nonlinear editing on video material, such as cutting, merging, converting, and adding special effects, and may even involve artificial-intelligence processing such as speech recognition, speech synthesis, and image matting, so as to generate new videos with different expressive power.
With the growing demand for video presentation, more and more enterprises, studios, and individuals need to clip large amounts of video for brand promotion, product display, social-network content publishing, and the like, so clipping video conveniently and efficiently is a goal pursued by every major service provider (e.g., video cloud service providers).
Disclosure of Invention
In view of the above, the present application provides a video editing method and apparatus, so as to conveniently and efficiently implement video editing.
The application provides the following scheme:
In a first aspect, a video editing method is provided, the method comprising:
generating timeline scheduling information according to timeline configuration information input by a user, the timeline scheduling information comprising the to-be-clipped video material information and clip type information corresponding to each time unit, the clip type information comprising two or more clip types; and
decoding the at least one video material, sequentially determining, in each time unit, the video frames to be clipped corresponding to that time unit, clipping those frames using the clip type information corresponding to the time unit, and encoding the clipped video frames to obtain a target video.
According to an implementation manner of the embodiments of the present application, before the decoding of the at least one video material, the method further comprises:
determining the video material to be preprocessed according to the clip type information of the video material contained in the timeline configuration information;
submitting the video material to be preprocessed to the clipping module corresponding to the clip type, and acquiring the video frame sequence obtained after the clipping module clips the received video material; and
updating the corresponding to-be-clipped video material information in the timeline scheduling information with the video frame sequence.
According to an implementation manner of the embodiments of the present application, the video material to be preprocessed comprises video material whose clip type requires intelligent prediction using timing information.
According to an implementation manner of the embodiments of the present application, clipping the video frames to be clipped using the clip type information corresponding to their time unit comprises:
if the time unit corresponds to one video frame to be clipped and a plurality of pieces of clip type information, performing the clipping corresponding to each piece of clip type information on that video frame one by one according to a preset priority; or
if the time unit corresponds to a plurality of video frames to be clipped, performing the clipping corresponding to the respective clip types on the plurality of video frames separately, and compositing the plurality of clipped video frames.
According to an implementation manner of the embodiments of the present application, the decoding, clipping, and encoding performed in each time unit may be carried out as a streaming process.
According to a second aspect, there is provided a video editing method, the method comprising:
Providing a timeline configuration interface to a user;
obtaining timeline configuration information input by the user through the timeline configuration interface for video editing, the timeline configuration information comprising: at least one video material, together with clip type information and time information on the timeline for the clipping of the at least one video material, the clip type information comprising two or more clip types.
According to one implementation manner of the embodiments of the present application, the timeline configuration interface comprises: a timeline, one or more tracks for acquiring video material, a component for configuring clip type information, and a component for launching the clip;
wherein video material imported into a track takes the timeline as the reference for the time information of its clipping;
and the clipping is started in response to an event in which the component for launching the clip is triggered.
According to one implementation manner of the embodiment of the application, the method further comprises:
determining, in response to a user operation adjusting the position of a video material in a track, the time information on the timeline corresponding to the clipping of that video material; or
determining, in response to a start time set by the user for the video material in the selected track, the time information on the timeline corresponding to the clipping of that video material, and adjusting the position of the video material in the selected track accordingly; or
acquiring and recording, in response to an event in which the component for configuring clip type information is triggered, the clip type information input by the user for the video material of the currently selected track.
According to an implementation manner of the embodiments of the present application, the video editing comprises:
generating timeline scheduling information according to the timeline configuration information, the timeline scheduling information comprising the to-be-clipped video material information and clip type information corresponding to each time unit; and
decoding the at least one video material, sequentially determining, in each time unit, the video frames to be clipped corresponding to that time unit, clipping those frames using the clip type information corresponding to the time unit, and encoding the clipped video frames to obtain a target video.
In a third aspect, a video editing apparatus is provided, the apparatus comprising:
an interface providing unit configured to provide a timeline configuration interface to a user;
a configuration acquiring unit configured to acquire timeline configuration information input by the user through the timeline configuration interface, the timeline configuration information comprising: at least one video material, together with clip type information and time information on the timeline for the clipping of the at least one video material, the clip type information comprising two or more clip types;
a scheduling processing unit configured to generate timeline scheduling information according to the timeline configuration information, the timeline scheduling information comprising the to-be-clipped video material information and clip type information corresponding to each time unit; and
a clipping processing unit configured to decode the at least one video material, sequentially determine, in each time unit, the video frames to be clipped corresponding to that time unit, clip those frames using the clip type information corresponding to the time unit, and encode the clipped video frames to obtain a target video.
According to a fourth aspect, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method of any of the first aspects described above.
According to a fifth aspect, there is provided an electronic device characterized by comprising:
One or more processors; and
a memory associated with the one or more processors, the memory storing program instructions that, when read and executed by the one or more processors, perform the steps of the method of any of the first aspects above.
According to the specific embodiment provided by the application, the application has the following technical effects:
1) The method and apparatus generate timeline scheduling information from the timeline configuration information input by the user and, following that scheduling information, clip the video frames to be clipped in each time unit with the corresponding clip types and then encode them to obtain the target video. Compared with a mode in which the user triggers one clipping of the video material to generate intermediate material and then triggers the next clipping, this scheduling mode, which performs the various clipping processes time unit by time unit with time as the main line, is more efficient.
2) The application provides a timeline configuration interface for the user to input timeline configuration information. Throughout the editing process the user only needs to enter this configuration information, without triggering each type of clipping step by step and without managing the imported materials and the generated intermediate materials at every editing step, which greatly reduces the complexity of user operation.
3) Compared with the traditional video editing mode that takes the video material as the main line, the application does not encode and decode at every video editing link; instead, the video material is decoded once and then, with time units as the main line, encoded once after clipping. This greatly reduces the number of encoding and decoding passes, shortens the time consumed by video editing, and reduces the quality loss that repeated encoding and decoding inflicts on the video material.
4) When a clip type that requires intelligent prediction using timing information is involved, the corresponding clipping module can be scheduled first, and the resulting video frame sequence is then used to update the corresponding video material information in the timeline scheduling information. Intelligent capabilities are thus fused into the timeline scheduling, enriching the video clipping capability while preserving usability.
5) Compared with decoding all the video material first and encoding all the video frames only after they have been clipped, the method can greatly improve clipping efficiency.
Of course, any product implementing the present application need not achieve all of the advantages described above at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that a person skilled in the art could obtain other drawings from them without inventive effort.
FIG. 1 is an exemplary system architecture provided by an embodiment of the present application;
FIG. 2 is a main flow chart of a video editing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a timeline configuration interface according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a clipping process using a time unit as a main line according to an embodiment of the present application;
FIG. 5 is a block diagram of a video editing apparatus according to an embodiment of the present application;
Fig. 6 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments derived by a person skilled in the art from the embodiments of the application without inventive effort fall within the scope of protection of the application.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates that the objects before and after it are in an "or" relationship.
Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)", depending on the context.
Existing video editing requires the user to trigger each required video editing step one by one, with the editing of every link applied step by step to the materials involved according to the user's operations. For example, suppose the user needs to matte the person out of video 1 and then composite that image onto the background image of video 2. After importing video 1, the user triggers the video editing tool to decode video 1, perform the person matting, and then encode the result to obtain the matted video 1. On the basis of the matted video 1, the user then triggers the tool to decode it together with video 2, composite them frame by frame, and encode the result to obtain the final clipped output video. The existing video clipping mode therefore mainly has the following two disadvantages:
1) Every editing step requires the user to manage the video material to be processed, the generated intermediate material, and so on, and to trigger each editing process step by step; that is, after triggering one editing process on the video material, the user selects the intermediate material to trigger the next. When many materials or many editing links are involved, this causes great inconvenience to the user.
2) Every user-triggered clipping step involves encoding and decoding. The more editing links the overall flow involves, the more encoding and decoding passes there are, which makes the video editing process time consuming. Moreover, each encoding and decoding pass brings a certain quality loss to the video material and can distort it.
In view of this, the present application proposes a new idea for implementing video editing more conveniently and efficiently. To facilitate understanding of the present application, the system architecture on which it relies is briefly described first. Fig. 1 illustrates an exemplary system architecture to which embodiments of the present application may be applied; as shown in fig. 1, the system mainly comprises a client and a server.
The client provides a UI interface through which the user can input video materials and related information, and supplies the video materials and the user-entered information to the server.
The server is provided with a video editing apparatus that edits the video material according to the information input by the user; the target video obtained after editing is stored and can be returned to the client for display to the user.
The above-mentioned client is typically provided in a computer terminal, which may be, for example, a mobile phone, a tablet computer, a notebook computer, a PDA (personal digital assistant), etc., and may even be an in-vehicle terminal, a wearable device, a smart television, a virtual reality device, an augmented reality device, etc.
The server side may be deployed on a server, which may be a single server, a server cluster formed by multiple servers, or a cloud server. A cloud server, also called a cloud computing server or cloud host, is a host product in a cloud computing service system that remedies the defects of high management difficulty and weak service scalability found in traditional physical hosts and virtual private server (VPS) services.
The server may be provided in a computer terminal having a relatively high computing power, may be provided in the same computer terminal as the client, or may be provided in different computer terminals, respectively.
If the server is set in the server, or the client and the server are set in different computer terminals, the client and the server can communicate through a network. The network may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
It should be understood that the number of clients and servers in fig. 1 is merely illustrative. There may be any number of clients and servers, as desired for implementation.
Fig. 2 is the main flowchart of a video editing method according to an embodiment of the present application. The method may be performed by the video editing apparatus at the server side of the system shown in fig. 1; the apparatus may be implemented as an application, or as a functional unit such as a plug-in or a software development kit (SDK) within an application. As shown in fig. 2, the method may comprise the following steps:
Step 202: a timeline configuration interface is provided to a user.
Step 204: acquiring timeline configuration information input by the user through the timeline configuration interface, the timeline configuration information comprising: at least one video material, together with clip type information and time information on the timeline for the clipping of the at least one video material, the clip type information comprising two or more clip types.
Steps 202-204 are the user-facing configuration process, and the timeline configuration information is used to make the video clip. The following steps 206-208 perform the specific clipping of the video according to the timeline configuration information:
Step 206: generating timeline scheduling information according to the timeline configuration information, the timeline scheduling information comprising the to-be-clipped video material information and clip type information corresponding to each time unit.
Step 208: decoding the at least one video material, sequentially determining, in each time unit, the video frames to be clipped corresponding to that time unit, clipping those frames using the clip type information corresponding to the time unit, and encoding the clipped video frames to obtain a target video.
As can be seen from the above, the present application provides the user with a timeline configuration interface for entering timeline configuration information. Throughout the whole process the user only needs to enter the timeline configuration information in this interface, without triggering each type of editing step by step and without managing the imported materials and the generated intermediate materials at every editing step, so the complexity of user operation is greatly reduced.
The method generates the timeline scheduling information from the timeline configuration information input by the user and, following that information, clips the video frames to be clipped in each time unit and encodes them to obtain the target video. Compared with a scheduling mode in which the user triggers one clipping of the video material to generate intermediate material and then triggers the next, clipping time unit by time unit with time as the main line is more efficient.
Each step in the above-described flow is described in detail below. The above step 202, i.e. "provide timeline configuration interface to user", will first be described in detail in connection with an embodiment.
When a user opens a video clip application, the video editing apparatus may provide the timeline configuration interface to the user; alternatively, the interface may be provided when the user triggers a timeline configuration component presented by the video clip application.
As one implementation, the timeline configuration interface includes a plurality of display components (also referred to as interface elements), for example a timeline, one or more tracks, and a component for configuring clip type information; it may further include text, input boxes, drop-down boxes, components for other functions, and so on.
A track is used to acquire video material: the user may import the video material used by the clip, and the imported material is displayed in the track. Typically one track accommodates one video material. Video material in this application refers to any multimedia material that can be used to make a video clip, and may include, but is not limited to, video clips, pictures, audio, text, special-effects material, and the like.
The user may set clip types for the imported video material through the components on the timeline configuration interface that configure clip information. The available clip types may be presented on the interface, for example through menus and submenus, for the user to select; multiple clip types may also be provided through a drop-down box; or an input box may be provided on the interface for the user to enter a clip type.
For example, after selecting the video material of a certain track, the user assigns a clip type to it by clicking the component for configuring clip type information. In response to the event in which that component is triggered, the video editing apparatus obtains and records the clip type information entered by the user for the video material of the currently selected track.
Clip types referred to in the embodiments of the present application may include, but are not limited to: adding filters, matting, adding special effects, adding subtitles, adding voice, generating video highlights, compositing, removing watermarks, and so on. Finer-grained clip types may also be set, for example adding a specific type of filter, merging two videos, merging a video with audio, or merging a video with a picture. Some clip types need only be set for a single video material, such as adding a filter or matting; others require an associated setting across two or more video materials, such as merging two videos or merging a video with audio.
The timeline may be arranged in parallel with the tracks, and video material imported into a track takes the timeline as the reference for the time information of its clipping. The user can adjust when a video material is clipped by setting its start time or adjusting its position in the track, which makes it convenient to set up combined clipping of multiple video materials.
For example, the time information on the timeline corresponding to the clipping of a video material may be determined in response to a user operation that adjusts the position of the material in the track. The user can adjust this position by dragging or moving: after selecting the video material in a track, the user moves it left or right to adjust the time at which it is edited. Since the duration of the video material is fixed, the time information adjusted in this way may be the start time or the end time.
For another example, in response to a start time set by the user for the video material in the selected track, the time information corresponding to its clipping on the timeline may be determined and its position in the selected track adjusted accordingly. In effect, the user sets a start time for the video material and its position in the track is adapted to that start time. The user may set the start time by entering time information into an input box or by selecting one of the provided time options.
The interface may further include a start-clip component, which the user can click after finishing importing all the video material, configuring the clip types, and configuring the time information on the timeline. In response to the event in which this component is triggered, step 206 or step 208 is performed.
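To make the structure of the timeline configuration information concrete, the following is a minimal sketch in Python of the kind of data such an interface might collect. It is an illustration only; the class and field names are assumptions, not the patent's actual data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Material:
    """One item imported into a track: video, picture, audio, or text."""
    name: str                     # identifier of the imported material
    track: int                    # track the material was imported into
    start: float                  # start time on the timeline, in seconds
    duration: float               # duration on the timeline, in seconds
    clip_types: List[str] = field(default_factory=list)  # e.g. ["matting"]

@dataclass
class TimelineConfig:
    """Everything entered through the timeline configuration interface."""
    materials: List[Material]
    time_unit: float = 1.0        # smallest schedulable unit, in seconds
```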
Fig. 3 is a schematic diagram of a timeline configuration interface according to an embodiment of the present application; it illustrates by example how a user may enter timeline configuration information through the interface. Suppose the user wants to matte the person out of each video frame in video b, generate subtitles for video c, and merge video c, as the background, with each matted frame of video b. Text a needs to be added at the beginning of the combined video, and the whole video needs to be combined with audio d.
The user may import text a into track 1, video b into track 2, video c into track 3, and audio d into track 4.
When the user imports text a into track 1 and clicks the compositing component among the clip-type-configuration components, the corresponding submenu is presented, from which the user can select the option of adding text a. When the user imports video b into track 2 and clicks the matting component, its submenu is presented, from which the user can select the option of matting out a person. When the user imports video c into track 3, the user clicks the subtitle component and then the merging component, whose submenu offers the option of compositing video c as the background. When the user imports audio d into track 4 and clicks the compositing component, its submenu offers the option of merging the audio into the video.
The user can then select video material in a track and adjust its position according to the timeline. In response to such a user operation, the point-in-time information on the timeline corresponding to the clipping of the material is determined. For example, text a is to be merged into the video from the beginning to the 10th minute, so it can be placed at the position corresponding to timeline 00:00-00:10. Likewise, video b needs to be merged with the background of video c from the 10th to the 40th minute, so it can be placed at the position corresponding to timeline 00:10-00:40. The other video materials are similar and are not listed here.
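Expressed with the Material and TimelineConfig sketch above, the fig. 3 scenario might look roughly like this (times converted to seconds; the clip-type strings are illustrative, not a defined vocabulary):

```python
config = TimelineConfig(materials=[
    # text a: composited over the opening, 00:00-00:10
    Material("text_a", track=1, start=0, duration=600,
             clip_types=["compose"]),
    # video b: person matted out, then merged onto video c, 00:10-00:40
    Material("video_b", track=2, start=600, duration=1800,
             clip_types=["matting:person", "merge"]),
    # video c: subtitled, used as the background for the whole video
    Material("video_c", track=3, start=0, duration=2400,
             clip_types=["subtitle", "merge:background"]),
    # audio d: mixed into the whole combined video
    Material("audio_d", track=4, start=0, duration=2400,
             clip_types=["mix"]),
])
```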
The preview area in the interface can preview the video material of the track selected by the user, as well as the target video obtained after the editing processing.
It should be noted that fig. 3 is only a schematic illustration; the style, number, and layout of the components are not meant to limit the present application, and any style, number, or layout may be used as long as it falls within the spirit of the present application.
In step 204, after the timeline configuration information input by the user through the timeline configuration interface is obtained, it is stored. Although the advantages of the present application are most evident when two or more clip types must be processed, it is equally applicable when the user requires processing of only one clip type.
The above step 206, i.e. "generating timeline scheduling information from timeline configuration information" is described in detail below in connection with an embodiment.
The original timeline configuration information is organized with the video material as the main line; the embodiment of the present application uses it to generate timeline scheduling information with time as the main line. That is, taking the timeline as the main line, each video material and its clip types are mapped onto the individual time units; specifically, the timeline scheduling information may include the to-be-clipped video material information and clip type information corresponding to each time unit.
The time unit may be determined as the smallest unit of time the video clip handles: for example, if the clipping tool allows a minimum of 1 second of video material to be clipped, the time unit is 1 second. The time unit may also be determined from the frame rate of the video material, for example taking one or several inter-frame intervals of the video material as the time unit.
That is, for each time unit it can be determined from the timeline scheduling information whether there is video material to be clipped in that unit and, if so, what types of clipping need to be performed on it there.
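As a sketch of step 206, the material-centric configuration could be pivoted into a time-unit-keyed table along the following lines. The bucketed-dictionary representation is an assumption for illustration; the patent does not prescribe a concrete data structure.

```python
import math

def build_schedule(materials, time_unit=1.0):
    """Map each material and its clip types onto the time units it covers.

    materials: iterable of (name, start_sec, duration_sec, clip_types).
    time_unit: e.g. 1.0 second, or 1.0 / fps to use one frame interval.
    Returns {unit_index: [(name, clip_types), ...]}: for each time unit,
    which material must be clipped there and with which clip types.
    """
    schedule = {}
    for name, start, duration, clip_types in materials:
        first = int(start // time_unit)
        last = math.ceil((start + duration) / time_unit)
        for unit in range(first, last):
            schedule.setdefault(unit, []).append((name, clip_types))
    return schedule

# A unit with no entry simply has nothing to clip in that time unit.
schedule = build_schedule([("text_a", 0, 600, ["compose"]),
                           ("video_b", 600, 1800, ["matting"])])
print(schedule[599])  # [('text_a', ['compose'])]
print(schedule[600])  # [('video_b', ['matting'])]
```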
Step 208, namely "decoding at least one video material, determining in each time unit, in sequence, the video frames to be clipped corresponding to that unit, clipping them using the corresponding clip type information, and encoding the clipped frames to obtain the target video", is described in detail below.
This step, or step 206, may begin to be performed in response to the user triggering the start-clip component on the timeline configuration interface.
For example, after the user finishes importing all the video material, configuring the clip types, and configuring the corresponding time information on the timeline configuration interface, clicking the start-clip component generates the timeline configuration information in one pass and starts the specific clipping processing.
For another example, the timeline configuration information may be generated and updated as the user enters information on the timeline configuration interface. After the user completes the import of all video material, the configuration of the clip types, and the configuration of the corresponding time information, clicking the start-clip component starts the specific clipping processing based on the current timeline configuration information.
In this step, the time unit is the basic granularity: after the video frames to be clipped in one time unit have been processed according to their corresponding clip types, the video frames to be clipped in the next time unit are processed according to theirs. That is, the approach breaks with the idea of taking the video material as the main line, in which the material is clipped to produce intermediate material that then undergoes the next clipping or is combined with other video material. Instead, the timeline becomes the main line: the clipping of the video materials is decomposed into individual "atomic clips" based on time units, and all the atomic clips corresponding to one time unit are performed before those of the next time unit.
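The "atomic clip" loop described above could be sketched as follows; `decode_frame`, `apply_clip`, `composite`, and `encode_frame` stand in for whatever decoder, clipping modules, compositor, and encoder are actually used (all names are assumptions):

```python
def run_timeline(schedule, total_units,
                 decode_frame, apply_clip, composite, encode_frame):
    """Perform every atomic clip of one time unit, then move to the next."""
    for unit in range(total_units):
        clipped = []
        for name, clip_types in schedule.get(unit, []):
            frame = decode_frame(name, unit)   # this material's frame here
            for clip_type in clip_types:       # all clips for this frame
                frame = apply_clip(frame, clip_type)
            clipped.append(frame)
        if clipped:                            # composite the unit's frames
            encode_frame(composite(clipped))   # and encode the result
```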
In some cases, however, a clip type requires timing information for intelligent prediction and cannot be clipped accurately from the video content of a single time unit alone. For example, adding subtitles requires subtitle recognition, an intelligent prediction over long stretches of speech that uses information such as the context of a sentence. For another example, generating a video highlight requires analyzing the content of the video material as a whole and the associations between video frames. Thus, as a preferred embodiment, the video material to be preprocessed may be determined from the clip type information of the at least one video material; the material to be preprocessed is submitted to the clipping module corresponding to the clip type, and the video frame sequence obtained after the clipping module clips the received material is acquired; the resulting video frame sequence is then used to update the corresponding to-be-clipped video material information in the timeline scheduling information.
The video material to be preprocessed may include material whose clip type requires intelligent prediction using timing information, such as subtitle recognition, generating video highlights, text-to-speech, music cartoons, and the like. Other strategies may also be used to determine the video material that needs preprocessing, for example identifying the clip types that require greater computing power and treating material with those clip types as the material to be preprocessed.
The clipping module above refers to a functional module that performs a specific type of clipping; it may be a functional module of the video editing apparatus itself, or of another apparatus or application. In the embodiment of the present application, the video material to be processed can be transmitted to the clipping module by calling the module's API; the module clips the material and returns the result to the video editing apparatus.
After all the video material that needs preprocessing has been processed by the corresponding clipping modules and the clipped video frame sequences have been returned, the video editing apparatus updates the corresponding to-be-clipped video material information in the timeline scheduling information with those frame sequences.
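A sketch of this preprocessing pass is below. The set of timing-dependent clip types, the `modules` mapping, and the `process` call are all illustrative assumptions; a real clipping module would be invoked through whatever API it actually exposes.

```python
TIMING_DEPENDENT = {"subtitle", "highlight", "text_to_speech"}

def preprocess(materials, modules):
    """Route timing-dependent materials through their clipping module, then
    swap the returned frame sequence back into the scheduling information.

    materials: list of dicts with "frames" and "clip_types" keys.
    modules:   mapping from clip type to a module with a process() method.
    """
    for material in materials:
        for clip_type in [t for t in material["clip_types"]
                          if t in TIMING_DEPENDENT]:
            frames = modules[clip_type].process(material["frames"])
            material["frames"] = frames               # update scheduling info
            material["clip_types"].remove(clip_type)  # already handled
    return materials
```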
As an example, continue with the user of fig. 3, who wants to matte the person out of each video frame in video b, generate subtitles for video c, and merge video c, as the background, with each matted frame of video b; text a is added at the beginning of the combined video, and the whole video is combined with audio d. After the timeline scheduling information is generated, video c is sent to the subtitle-generating clipping module, which recognizes video c, adds subtitles to each of its video frames, and generates a new frame sequence for video c; that frame sequence replaces the original video c in the timeline scheduling information.
After all preprocessing has been executed, the video editing apparatus decodes the video material to be clipped. Note that if a video material was already decoded and clipped by a clipping module during the preprocessing, the video editing apparatus need not decode it again; the decoding here is performed on the video material that has not yet been decoded.
Then, the time units are executed one by one in their order on the timeline: the video frames to be clipped corresponding to the time unit are determined, and they are clipped using the clip type information corresponding to that unit.
If one time unit corresponds to a single video frame to be clipped and several pieces of clip type information, the clipping corresponding to each piece of clip type information can be applied to that frame one by one according to a preset priority. The priority among clip types may be set by the user or follow a default setting.
If one time unit corresponds to several video frames to be clipped, the clipping corresponding to the respective clip types can be applied to each of them separately, and the clipped frames are then composited.
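The two branches just described might be implemented along these lines; the priority table and the `composite` callable are assumptions for illustration:

```python
CLIP_PRIORITY = {"matting": 0, "filter": 1, "compose": 2}  # lower runs first

def clip_one_frame(frame, clip_types, apply_clip):
    """One frame, several clip types: apply them in priority order."""
    for clip_type in sorted(clip_types,
                            key=lambda t: CLIP_PRIORITY.get(t, 99)):
        frame = apply_clip(frame, clip_type)
    return frame

def clip_many_frames(frames_with_types, apply_clip, composite):
    """Several frames in one time unit: clip each, then composite them."""
    clipped = [clip_one_frame(frame, types, apply_clip)
               for frame, types in frames_with_types]
    return composite(clipped)
```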
Continuing the example of fig. 3: after decoding video b and audio d, the following is performed for each time unit between 00:00 and 00:10. For one time unit, text a may first be composited into the (preprocessed) video frame of video c corresponding to that unit; that frame is then composited with the segment of audio d corresponding to the unit; and the same process is performed for the next time unit. If one time unit corresponds to one video frame of video c then, as shown in fig. 4, this is equivalent to performing all the clipping of one frame and then all the clipping of the next frame.
For each time unit between 00:10 and 00:40: the person is matted out of the video frame of video b corresponding to the time unit, the matted person is composited with the (preprocessed) video frame of video c corresponding to the unit, the composited frame is combined with the segment of audio d corresponding to the unit, and the same process is performed for the next time unit.
Processing each time unit in turn in this way actually performs the clipping of all clip types frame by frame, followed by encoding to obtain the target video. Once all time units have been executed, the final target video is obtained.
In addition, as a preferred embodiment, the decoding, clipping, and encoding described above may be carried out in a streaming manner: while decoding continues, already-decoded video frames are clipped frame by frame, and frames whose clipping is complete are encoded frame by frame. Compared with a non-streaming mode, in which all the video material is decoded first and all the video frames are encoded only after they have been clipped, this can greatly improve clipping efficiency.
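The streaming variant can be pictured as three generator stages that overlap, rather than as whole-video passes. This is a schematic only, with placeholder decode and encode callables:

```python
def decode_stream(packets, decode):
    """Yield decoded frames one at a time while decoding continues."""
    for packet in packets:
        yield decode(packet)

def clip_stream(frames, apply_clips):
    """Clip each frame as soon as it has been decoded."""
    for frame in frames:
        yield apply_clips(frame)

def encode_stream(frames, encode):
    """Encode each frame as soon as all of its clipping has finished."""
    for frame in frames:
        encode(frame)

# Pipeline: no stage waits for the whole video to pass the previous stage.
# encode_stream(clip_stream(decode_stream(packets, decode), clips), encode)
```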
Compared with the traditional video editing mode that takes the video material as the main line, the application does not encode and decode at every video editing link; the video material is decoded once and, with time units as the main line, encoded once after all the clipping. This greatly reduces the number of encoding and decoding passes, shortens the time consumed by video editing, and reduces the quality loss that repeated encoding and decoding inflicts on the video material.
The generated target video may be returned to the client for display to the user, or stored for subsequent query or display. For example, it may be returned to the client and previewed on the interface shown in fig. 3; if after previewing the user decides that the target video meets the requirement, the user may trigger a save component to request that the target video be stored, or trigger a download component to request that it be downloaded to the local terminal device.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
According to an embodiment of another aspect, a video editing apparatus is provided. Fig. 5 shows a schematic block diagram of a video editing apparatus according to one embodiment. As shown in fig. 5, the apparatus 500 comprises an interface providing unit 501, a configuration acquiring unit 502, a scheduling processing unit 503, and a clipping processing unit 504. The main functions of each unit are as follows:
the interface providing unit 501 is configured to provide a timeline configuration interface to a user.
As one implementation, the timeline configuration interface includes: a timeline, one or more tracks for acquiring video material, and a component for configuring clip type information;
wherein video material imported into a track takes the timeline as the reference for the time information of its clipping.
The configuration acquiring unit 502 is configured to acquire the timeline configuration information input by the user through the timeline configuration interface, the timeline configuration information comprising: at least one video material, together with clip type information and time information on the timeline for the clipping of the at least one video material, the clip type information comprising two or more clip types.
As one implementation, in response to a user operation adjusting the position of a video material in a track, the time information on the timeline corresponding to the clipping of that material is determined.
As another implementation, in response to a start time set by the user for the video material in the selected track, the time information on the timeline corresponding to the clipping of that material is determined, and the position of the material in the selected track is adjusted accordingly.
The scheduling processing unit 503 is configured to generate timeline scheduling information according to the timeline configuration information, the timeline scheduling information comprising the to-be-clipped video material information and clip type information corresponding to each time unit.
The clipping processing unit 504 is configured to decode the at least one video material, sequentially determine, in each time unit, the video frames to be clipped corresponding to that unit, clip those frames using the clip type information corresponding to the time unit, and encode the clipped frames to obtain the target video.
As one implementation, the scheduling processing unit 503 may further be configured to: determine the video material to be preprocessed according to the clip type information of the at least one video material; submit the material to be preprocessed to the clipping module corresponding to its clip type, and acquire the video frame sequence obtained after the module clips the received material; and update the corresponding to-be-clipped video material information in the timeline scheduling information with the video frame sequence.
The video material to be preprocessed may include material whose clip type requires intelligent prediction using timing information.
As one implementation, when the video frames to be clipped are clipped using the clip type information corresponding to their time unit: if the time unit corresponds to one video frame to be clipped and several pieces of clip type information, the clipping corresponding to each piece is applied to the frame one by one according to a preset priority; or, if the time unit corresponds to several video frames to be clipped, the clipping corresponding to the respective clip types is applied to each of them, and the clipped frames are composited.
As one implementation, the clipping processing unit 504 may carry out the decoding, clipping, and encoding performed in each time unit in a streaming manner.
It should be noted that the embodiments of the present application may involve the use of user data. In practical applications, user-specific personal data may be used in the solutions described herein within the scope permitted by applicable laws and regulations (for example, with the user's explicit consent and with practical notification to the user).
In addition, the embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored which, when executed by a processor, implements the steps of the method of any of the preceding method embodiments.
And an electronic device comprising:
One or more processors; and
a memory associated with the one or more processors, the memory storing program instructions that, when read and executed by the one or more processors, perform the steps of the method of any of the preceding method embodiments.
Fig. 6 illustrates the architecture of an electronic device, which may include a processor 610, a video display adapter 611, a disk drive 612, an input/output interface 613, a network interface 614, and a memory 620. The processor 610, video display adapter 611, disk drive 612, input/output interface 613, network interface 614, and memory 620 may be communicatively coupled via a communication bus 630.
The processor 610 may be implemented as a general-purpose CPU, a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, for executing related programs to implement the technical solution provided by the present application.
The memory 620 may be implemented as ROM (read-only memory), RAM (random-access memory), static storage, dynamic storage, or the like. The memory 620 may store an operating system 621 for controlling the operation of the electronic device 600 and a basic input output system (BIOS) 622 for controlling its low-level operation. A web browser 623, a data storage management system 624, a video clipping apparatus 625, and so on may also be stored. The video clipping apparatus 625 may be an application program that embodies the operations of the foregoing steps of the embodiments of the present application. In general, when the technical solution provided by the present application is implemented in software or firmware, the relevant program code is stored in the memory 620 and invoked for execution by the processor 610.
The input/output interface 613 is used to connect with an input/output module to realize information input and output. The input/output module may be configured as a component in a device (not shown) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
The network interface 614 is used to connect a communication module (not shown) to enable the device to interact with other devices. The communication module may communicate in a wired manner (e.g., USB or network cable) or wirelessly (e.g., mobile network, Wi-Fi, or Bluetooth).
Bus 630 includes a path to transfer information between components of the device (e.g., processor 610, video display adapter 611, disk drive 612, input/output interface 613, network interface 614, and memory 620).
It should be noted that although the above device shows only the processor 610, video display adapter 611, disk drive 612, input/output interface 613, network interface 614, memory 620, and bus 630, in a specific implementation the device may include other components necessary for proper operation. Furthermore, those skilled in the art will appreciate that the device may include only the components necessary to implement the present application, and not all of the components shown in the figure.
From the above description of the embodiments, those skilled in the art will clearly understand that the present application may be implemented by software plus a necessary general-purpose hardware platform. Based on such an understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product that may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disc and that includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in certain parts of the embodiments, of the present application.
The embodiments in this specification are described in a progressive manner; for identical or similar parts the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief, and reference may be made to the description of the method embodiments for the relevant parts. The systems and system embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of a given embodiment. Those of ordinary skill in the art can understand and implement the solution without creative effort.
The video editing method and apparatus provided by the present application have been described in detail above. Specific examples are used herein to illustrate the principles and embodiments of the present application, and the description of the above embodiments is intended only to help understand the method of the present application and its core ideas. At the same time, those of ordinary skill in the art may, in light of the ideas of the present application, make modifications to the specific embodiments and to the scope of application. In view of the foregoing, this description should not be construed as limiting the application.

Claims (9)

1. A method of video editing, the method comprising:
obtaining timeline configuration information input by a user, wherein the timeline configuration information comprises: at least one video material, and clip type information and time information, on a timeline, for the at least one video material;
mapping the video material and the clip type information onto each frame to generate timeline scheduling information, wherein the timeline scheduling information comprises, for each frame, the corresponding to-be-clipped video material information and clip type information, and the clip type information comprises two or more clip types;
determining video material to be preprocessed according to the clip type information of the video material contained in the timeline configuration information, wherein the video material to be preprocessed comprises video material whose clip type requires intelligent prediction using timing information; submitting the video material to be preprocessed to a clipping module corresponding to the clip type, and acquiring a video frame sequence obtained after the clipping module clips the received video material; and updating the corresponding to-be-clipped video material information in the timeline scheduling information with the video frame sequence;
and decoding the at least one video material, sequentially determining, for each frame, the to-be-clipped video frame corresponding to that frame, clipping the to-be-clipped video frame using the clip type information corresponding to that frame, and encoding the clipped video frames to obtain a target video.
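By way of illustration only, and not as part of the claim language, the following is a minimal Python sketch of the frame-wise scheduling and clipping pipeline recited in claim 1. Every name in it (ClipEntry, build_schedule, render, and the placeholder callables) is a hypothetical stand-in; real decoding, clipping, and encoding would be supplied by a media framework.

    from dataclasses import dataclass

    @dataclass
    class ClipEntry:
        material_id: str   # which video material the entry refers to
        clip_type: str     # e.g. "crop", "subtitle", "smart_track"
        start_frame: int   # position of the material on the timeline
        end_frame: int     # exclusive

    def build_schedule(entries, total_frames):
        # Timeline scheduling information: for every frame, the to-be-clipped
        # material and the clip type information that applies to it.
        schedule = {f: [] for f in range(total_frames)}
        for e in entries:
            for f in range(e.start_frame, min(e.end_frame, total_frames)):
                schedule[f].append((e.material_id, e.clip_type))
        return schedule

    def render(schedule, decode, clip, encode):
        # Decode each frame, apply the scheduled clipping, encode the result.
        for frame_no in sorted(schedule):
            for material_id, clip_type in schedule[frame_no]:
                frame = decode(material_id, frame_no)
                encode(frame_no, clip(frame, clip_type))

    entries = [ClipEntry("a.mp4", "crop", 0, 2), ClipEntry("b.mp4", "subtitle", 1, 3)]
    render(build_schedule(entries, 3),
           decode=lambda m, f: f"{m}#{f}",
           clip=lambda fr, t: f"{t}({fr})",
           encode=lambda n, fr: print(n, fr))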
2. The method of claim 1, wherein clipping the to-be-clipped video frame using the clip type information corresponding to that frame comprises:
if a single to-be-clipped video frame corresponds to the frame and is associated with a plurality of pieces of clip type information, performing on it, one by one and according to a preset priority, the clipping processing corresponding to each piece of clip type information; or
if a plurality of to-be-clipped video frames correspond to the frame, performing on each of them the clipping processing corresponding to its clip type, and compositing the plurality of video frames obtained after the clipping processing.
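As an illustrative aside, again with hypothetical names and a toy priority table, the two branches of claim 2 can be sketched in Python: a single frame carrying several clip types is processed in a preset priority order, while several frames mapped to the same output position are each clipped and then composited.

    CLIP_PRIORITY = {"crop": 0, "filter": 1, "subtitle": 2}  # assumed ordering

    def clip_single_frame(frame, clip_types, apply_clip):
        # One to-be-clipped frame, several clip types: apply the clipping
        # processing one by one in the preset priority order.
        for clip_type in sorted(clip_types, key=CLIP_PRIORITY.get):
            frame = apply_clip(frame, clip_type)
        return frame

    def clip_multiple_frames(frames_with_types, apply_clip, composite):
        # Several to-be-clipped frames for one output frame: clip each,
        # then composite the results into a single frame.
        return composite([apply_clip(fr, t) for fr, t in frames_with_types])

    apply_clip = lambda fr, t: f"{t}({fr})"
    print(clip_single_frame("f0", ["subtitle", "crop"], apply_clip))  # crop runs first
    print(clip_multiple_frames([("f0", "crop"), ("f1", "filter")],
                               apply_clip, composite=" + ".join))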
3. The method of claim 1, wherein the decoding, clipping, and encoding performed on each frame are carried out in a streaming manner.
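The streaming arrangement of claim 3 amounts to chaining decode, clip, and encode so that each frame flows through all three stages without any stage buffering the whole video. A minimal generator-based Python sketch, with all functions as illustrative placeholders:

    def decode_frames(material):
        for frame in material:      # stand-in for a real decoder
            yield frame

    def clip_frames(frames, clip):
        for frame in frames:        # clip each frame as it arrives
            yield clip(frame)

    def encode_frames(frames):
        for frame in frames:        # stand-in for a real encoder
            print("encoded:", frame)

    encode_frames(clip_frames(decode_frames(["f0", "f1", "f2"]), str.upper))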
4. A method of video editing, the method comprising:
providing a timeline configuration interface to a user;
and acquiring timeline configuration information input by the user through the timeline configuration interface, and performing video editing using the method of any one of claims 1 to 3.
5. The method of claim 4, wherein the timeline configuration interface comprises: a timeline, one or more tracks for receiving video material, a component for configuring clip type information, and a component for starting clipping;
wherein video material input into a track takes the timeline as the reference for the time information of its clipping processing;
and the step of clipping processing is started in response to an event in which the component for starting clipping is triggered.
6. The method of claim 5, further comprising:
in response to a user operation adjusting the position of a video material in a track, determining the time information, on the timeline, corresponding to the clipping processing to be performed on the video material; or
in response to a start time set by the user for the video material in a selected track, determining the time information, on the timeline, corresponding to the clipping processing of the video material, and adjusting the position of the video material in the selected track accordingly; or
in response to an event in which the component for configuring clip type information is triggered, acquiring and recording the clip type information input by the user for the video material of the currently selected track.
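To make the track/timeline relationship of claims 5 and 6 concrete, here is a small Python sketch in which a material's position in a track and the time information of its clipping processing are two views of the same state; TrackItem, on_drag, and on_set_start are hypothetical names, not an API from the patent.

    from dataclasses import dataclass

    @dataclass
    class TrackItem:
        material_id: str
        start: float       # seconds on the timeline
        duration: float
        clip_type: str = ""

        @property
        def time_info(self):
            # Time information of the clipping processing, referenced to the timeline.
            return (self.start, self.start + self.duration)

    def on_drag(item, new_start):
        # Adjusting the position in the track updates the time information.
        item.start = new_start

    def on_set_start(item, start):
        # Setting a start time likewise moves the item within the track.
        item.start = start

    item = TrackItem("clip_a", start=2.0, duration=5.0)
    on_drag(item, 3.5)
    assert item.time_info == (3.5, 8.5)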
7. A video editing apparatus, the apparatus comprising:
an interface providing unit configured to provide a timeline configuration interface to a user;
a configuration acquisition unit configured to acquire timeline configuration information input by the user through the timeline configuration interface, the timeline configuration information including: at least one video material, and clip type information and time information, on a timeline, of the clipping processing of the at least one video material, the clip type information including two or more clip types;
a scheduling processing unit configured to map the video material and the clip type information onto each frame to generate timeline scheduling information, wherein the timeline scheduling information comprises, for each frame, the corresponding to-be-clipped video material information and clip type information; to determine video material to be preprocessed according to the clip type information of the video material contained in the timeline configuration information, wherein the video material to be preprocessed comprises video material whose clip type requires intelligent prediction using timing information; to submit the video material to be preprocessed to a clipping module corresponding to the clip type and acquire a video frame sequence obtained after the clipping module clips the received video material; and to update the corresponding to-be-clipped video material information in the timeline scheduling information with the video frame sequence; and
a clipping processing unit configured to decode the at least one video material, sequentially determine, for each frame, the to-be-clipped video frame corresponding to that frame, clip the to-be-clipped video frame using the clip type information corresponding to that frame, and encode the clipped video frames to obtain a target video.
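Purely as a structural sketch, and not an implementation of the claimed apparatus, the four units of claim 7 might be wired together as follows in Python; every class and method name below is an assumption.

    class InterfaceProvidingUnit:
        def provide(self):            # renders the timeline configuration interface
            ...

    class ConfigAcquisitionUnit:
        def acquire(self):            # collects the user's timeline configuration
            ...

    class SchedulingUnit:
        def schedule(self, config):   # per-frame scheduling info plus preprocessing
            ...

    class ClippingUnit:
        def clip(self, schedule):     # decode, clip per frame, encode target video
            ...

    class VideoEditingApparatus:
        def __init__(self):
            self.ui = InterfaceProvidingUnit()
            self.config = ConfigAcquisitionUnit()
            self.scheduler = SchedulingUnit()
            self.clipper = ClippingUnit()

        def run(self):
            self.ui.provide()
            self.clipper.clip(self.scheduler.schedule(self.config.acquire()))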
8. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
9. An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors and configured to store program instructions that, when read and executed by the one or more processors, perform the steps of the method of any one of claims 1 to 6.
CN202210700104.8A 2022-06-20 2022-06-20 Video editing method and device Active CN115278306B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210700104.8A CN115278306B (en) 2022-06-20 2022-06-20 Video editing method and device

Publications (2)

Publication Number Publication Date
CN115278306A (en) 2022-11-01
CN115278306B (en) 2024-05-31

Family

ID=83762015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210700104.8A Active CN115278306B (en) 2022-06-20 2022-06-20 Video editing method and device

Country Status (1)

Country Link
CN (1) CN115278306B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7805678B1 (en) * 2004-04-16 2010-09-28 Apple Inc. Editing within single timeline
CN103928039A (en) * 2014-04-15 2014-07-16 北京奇艺世纪科技有限公司 Video compositing method and device
CN104115225A (en) * 2011-12-21 2014-10-22 派尔高公司 Double-timeline video editing software using gap-removed proportionable thumbnail
CN109151537A (en) * 2018-08-29 2019-01-04 北京达佳互联信息技术有限公司 Method for processing video frequency, device, electronic equipment and storage medium
CN109819179A (en) * 2019-03-21 2019-05-28 腾讯科技(深圳)有限公司 A kind of video clipping method and device
CN110198486A (en) * 2019-05-28 2019-09-03 上海哔哩哔哩科技有限公司 A kind of method, computer equipment and the readable storage medium storing program for executing of preview video material
CN110381371A (en) * 2019-07-30 2019-10-25 维沃移动通信有限公司 A kind of video clipping method and electronic equipment
CN110532426A (en) * 2019-08-27 2019-12-03 新华智云科技有限公司 It is a kind of to extract the method and system that Multi-media Material generates video based on template
CN111918128A (en) * 2020-07-23 2020-11-10 上海网达软件股份有限公司 Cloud editing method, device, equipment and storage medium
CN112333536A (en) * 2020-10-28 2021-02-05 深圳创维-Rgb电子有限公司 Audio and video editing method, equipment and computer readable storage medium
CN112565825A (en) * 2020-12-02 2021-03-26 腾讯科技(深圳)有限公司 Video data processing method, device, equipment and medium
WO2021093737A1 (en) * 2019-11-15 2021-05-20 北京字节跳动网络技术有限公司 Method and apparatus for generating video, electronic device, and computer readable medium
CN113015005A (en) * 2021-05-25 2021-06-22 腾讯科技(深圳)有限公司 Video clipping method, device and equipment and computer readable storage medium
CN113038234A (en) * 2021-03-15 2021-06-25 北京字跳网络技术有限公司 Video processing method and device, electronic equipment and storage medium
CN113473204A (en) * 2021-05-31 2021-10-01 北京达佳互联信息技术有限公司 Information display method and device, electronic equipment and storage medium
CN113891113A (en) * 2021-09-29 2022-01-04 阿里巴巴(中国)有限公司 Video clip synthesis method and electronic equipment
CN114466222A (en) * 2022-01-29 2022-05-10 北京百度网讯科技有限公司 Video synthesis method and device, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8458595B1 (en) * 2006-05-31 2013-06-04 Adobe Systems Incorporated Video editing including simultaneously displaying timelines and storyboards
EP2088776A4 (en) * 2006-10-30 2015-01-21 Gvbb Holdings Sarl Editing device and editing method using metadata
EP2304724A2 (en) * 2008-07-16 2011-04-06 Thomson Licensing Encoding apparatus of video and audio data, encoding method thereof, and video editing system
US9165603B2 (en) * 2012-03-29 2015-10-20 Adobe Systems Incorporated Method and apparatus for grouping video tracks in a video editing timeline
US8879888B2 (en) * 2013-03-12 2014-11-04 Fuji Xerox Co., Ltd. Video clip selection via interaction with a hierarchic video segmentation


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant