CN108810596B - Video editing method and device and terminal - Google Patents


Info

Publication number
CN108810596B
CN108810596B (application number CN201710288746.0A)
Authority
CN
China
Prior art keywords
video
data
time
decoding
effective
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710288746.0A
Other languages
Chinese (zh)
Other versions
CN108810596A (en)
Inventor
秦智
王颖琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201710288746.0A
Publication of CN108810596A
Application granted
Publication of CN108810596B


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
    • H04N 21/45: Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/458: Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; updating operations, e.g. for OS modules; time-related management operations
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; client middleware
    • H04N 21/4302: Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440218: Reformatting operations involving transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4

Abstract

The invention discloses a video editing method, system, and terminal that realize non-linear editing of video. Even when only video segments are available, the time point at which the video to be inserted should be placed can be located accurately, and the insertion is completed in the finally generated video file with millisecond-level precision. Furthermore, by changing transcoding parameters, video files of different definitions can be generated, fully meeting the requirements of video publishing.

Description

Video editing method and device and terminal
Technical Field
The present invention relates to the field of video technology, and in particular to a video editing method, apparatus, and terminal.
Background
Publishing advertisements through network media has become a popular promotional tool for merchants, and with the development and wide application of multimedia technology, inserting advertisements into video content has become an important means of promotion.
At present, most of the prior art can only add advertisement video files at the beginning or the end of network video content and cannot insert advertisements at an arbitrary time point; this inflexibility and the single form of advertisement insertion degrade both the user experience and the effectiveness of the advertisements.
Further, in some video editing platforms, a plurality of video clips need to be spliced to obtain a complete video file; that is, before the complete video file is generated, only the video clips exist. How to insert an advertisement at an arbitrary time point when only video clips are available is therefore also a problem to be solved.
Disclosure of Invention
In order to solve the above technical problems, the present invention provides a video editing method, apparatus, and terminal, realized by the following technical solutions:
in a first aspect, a method of video editing, the method comprising:
acquiring N (N ≥ 1) video clips for video editing; the video clips are numbered sequentially in temporal order;
obtaining an offset value delta1 of the video start time relative to the time start point of the 1st video segment, an offset value deltaN of the video end time relative to the time start point of the Nth video segment, and an offset value deltax of the insertion time relative to the time start point of the xth (x ≤ N) video segment;
obtaining the actual insertion time T_insert according to the offset value delta1 of the video start time relative to the time start point of the 1st video segment and the offset value deltax of the insertion time relative to the time start point of the xth (x ≤ N) video segment;
obtaining effective video data according to the video data between the video start time and the video end time in the N video clips;
acquiring video data to be inserted, and inserting the video data to be inserted into the effective video data according to the actual insertion time T_insert to generate a video file.
In a second aspect, a video editing apparatus includes:
a video clip acquisition module, configured to acquire N (N ≥ 1) video clips for video editing, the video clips being numbered sequentially in temporal order;
a parameter acquisition module, configured to obtain an offset value delta1 of the video start time relative to the time start point of the 1st video segment, an offset value deltaN of the video end time relative to the time start point of the Nth video segment, and an offset value deltax of the insertion time relative to the time start point of the xth (x ≤ N) video segment;
an insertion time acquisition module, configured to obtain the actual insertion time T_insert according to the offset value delta1 of the video start time relative to the time start point of the 1st video segment and the offset value deltax of the insertion time relative to the time start point of the xth (x ≤ N) video segment;
an effective data acquisition module, configured to intercept the video data between the video start time and the video end time in the N video clips to obtain effective video data;
a video file generation module, configured to acquire video data to be inserted and insert the video data to be inserted into the effective video data according to the actual insertion time T_insert to generate a video file.
In a third aspect, a video editing terminal includes the video editing apparatus.
The invention provides a video editing method, a video editing device and a video editing terminal, which have the following beneficial effects:
Even when only video clips are available, the time point at which the video to be inserted should be placed can be located accurately, and the insertion is completed in the finally generated video file with millisecond-level precision. Furthermore, by changing transcoding parameters, video files of different definitions can be generated, fully meeting the requirements of video publishing.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a video editing method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for acquiring the actual insertion time according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for generating a video file by decoding and encoding according to an embodiment of the present invention;
fig. 4 is a flowchart of a method for implementing insertion of multiple videos to be inserted by decoding and encoding according to an embodiment of the present invention;
FIG. 5 is a flow chart of a further method for processing a video file according to an embodiment of the present invention;
fig. 6 is a block diagram of a video editing apparatus according to an embodiment of the present invention;
fig. 7 is a block diagram of an insertion time obtaining module according to an embodiment of the present invention;
FIG. 8 is a block diagram of a valid data acquisition module provided by an embodiment of the present invention;
FIG. 9 is a block diagram of a video file generation module provided by an embodiment of the present invention;
fig. 10 is a block diagram of a terminal according to an embodiment of the present invention;
fig. 11 is a block diagram of a server according to an embodiment of the present invention;
fig. 12 is a system architecture diagram provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments derived by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
An embodiment of the present invention provides a video editing method, where the method is shown in fig. 1 and includes:
s101, acquiring N (N is more than or equal to 1) video clips for video editing; the video segments are numbered sequentially in chronological order.
Specifically, the N video segments are used as original materials for video editing, and may be originally stored video data or obtained by intercepting from a live stream.
In the live broadcasting process, video data can be obtained by intercepting the live stream. However, since the complete live content is not necessarily needed, only one or more segments of video data may be intercepted from the live stream; for convenience of editing and management, each intercepted segment of video data consists of one or more video clips. The video editing method provided by this embodiment is applicable both to a single piece of video data and to multiple pieces of video data.
S102, obtaining an offset value delta1 of the video start time relative to the time start point of the 1st video segment, an offset value deltaN of the video end time relative to the time start point of the Nth video segment, and an offset value deltax of the insertion time relative to the time start point of the xth (x ≤ N) video segment.
Specifically, the video start time and the video end time are both expressed as relative values: the video start time is an offset value delta1 relative to the time start point of the 1st video segment, and the video end time is an offset value deltaN relative to the time start point of the Nth video segment.
S103, obtaining the actual insertion time T_insert according to the offset value delta1 of the video start time relative to the time start point of the 1st video segment and the offset value deltax of the insertion time relative to the time start point of the xth (x ≤ N) video segment.
The actual insertion time is an actual playout time of the video to be inserted in a video file generated in the future.
Specifically, the actual insertion time is acquired as shown in fig. 2, which includes:
S1031, calculating the total time length L from the time start point of the first segment to the time end point of the (x-1)th segment.
S1032, obtaining the actual insertion time according to the formula T_insert = L - delta1 + deltax.
If N = 1, then x = 1 and only one video clip participates in the video editing; therefore L in step S1031 is 0, and the actual insertion time T_insert = deltax - delta1.
If N > 1, multiple video clips participate in the video editing, and T_insert = L - delta1 + deltax, as illustrated by the sketch below.
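To make the offset arithmetic concrete, the following is a minimal sketch of steps S1031 and S1032 in Python. The function and parameter names are illustrative assumptions, not taken from the patent; all times are in milliseconds, matching the millisecond-level precision the method claims.

```python
# A minimal sketch of steps S1031-S1032; names are illustrative assumptions.

def actual_insertion_time(durations_ms, delta1_ms, deltax_ms, x):
    """T_insert = L - delta1 + deltax, where L is the total length of
    segments 1 .. x-1 (L = 0 when x == 1)."""
    L = sum(durations_ms[: x - 1])  # total length of the first x-1 segments
    return L - delta1_ms + deltax_ms

# Example: three 60 s segments, video starts 5 s into segment 1, and the
# insertion point lies 12 s into segment 2 -> 60000 - 5000 + 12000 = 67000.
print(actual_insertion_time([60_000, 60_000, 60_000], 5_000, 12_000, 2))
```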
S104, obtaining effective video data according to the video data between the video starting time and the video ending time in the N video segments.
The effective video data may be recorded in a video file; a common video file format is MP4.
If N = 1, reading all data of the first video clip from delta1 until the end of the first video clip.
If N = 2, reading first data, where the first data is all data of the first video clip from delta1 to the time end point of the first video clip; reading second data, where the second data is all data of the Nth video clip from its time start point to deltaN; and splicing the first data and the second data in time order to obtain the effective video data.
If N > 2, reading first data, where the first data is all data of the first video clip from delta1 to the time end point of the first video clip; reading second data, where the second data is all data of the Nth video clip from its time start point to deltaN; reading third data, where the third data is all data from the time end point of the first video clip to the time start point of the Nth video clip; and splicing the first data, the third data, and the second data in time order to obtain the effective video data. A sketch of this assembly follows.
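The three cases can be expressed compactly. In the sketch below, `slice_fn` is a hypothetical helper that cuts a segment between two millisecond offsets (in practice this would slice demuxed samples by timestamp rather than raw bytes); the branches mirror the N = 1, N = 2, and N > 2 cases above.

```python
# A sketch of step S104 under assumed names; segments are modelled as opaque
# byte strings, and slice_fn(segment, start_ms, end_ms) is a stand-in for
# timestamp-based cutting (end_ms=None means "to the end of the segment").

def effective_video_data(segments, delta1_ms, deltaN_ms, slice_fn):
    if len(segments) == 1:                                   # N = 1
        return slice_fn(segments[0], delta1_ms, None)
    first = slice_fn(segments[0], delta1_ms, None)           # first data
    last = slice_fn(segments[-1], 0, deltaN_ms)              # second data
    middle = [slice_fn(s, 0, None) for s in segments[1:-1]]  # third data (N > 2)
    return b"".join([first, *middle, last])                  # splice in time order
```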
S105, acquiring video data to be inserted, and inserting the video data to be inserted into the effective video data according to the actual insertion time T_insert to generate a video file.
In some application scenarios, the video data to be inserted may be an advertisement or video content that a video provider wishes to show to a user.
Further, after the video file is generated, a non-linear editing operation may be performed on the video file according to specific requirements, where the non-linear editing operation includes, but is not limited to:
(1) Cutting
Part of the video content in the video file is cut out according to the actual situation. This is typically used to remove advertisements inserted during live broadcasting, or video content of no replay value, improving the user experience.
(2) Inserting
One or more sections of other content are inserted into the video content recorded in the video file according to the actual situation. The inserted content is video or audio, and can be used for advertising promotion or to prompt the user.
(3) Adding pictures
One or more pictures are inserted into the video content recorded in the video file according to the actual situation, which can likewise be used for advertising promotion or to prompt the user.
(4) Blurring
Certain pixels in the video content recorded in the video file are blurred according to the actual situation, which can be used to avoid copyright disputes or to protect privacy.
This embodiment provides a video editing method that, even when only video clips are available, can accurately locate the time point at which the video to be inserted should be placed and complete the insertion in the finally generated video file with millisecond-level precision. This significantly improves the user experience and, for advertising, significantly improves both the flexibility of advertisement insertion and the promotional impact of the advertisements.
Another embodiment of the present invention further provides a video editing method, including:
s201, acquiring N (N is more than or equal to 1) video clips for video editing; the video segments are numbered sequentially in chronological order.
S202, obtaining an offset value delta1 of the video start time relative to the time start point of the 1st video segment, an offset value deltaN of the video end time relative to the time start point of the Nth video segment, and an offset value deltax of the insertion time relative to the time start point of the xth (x ≤ N) video segment.
Specifically, the video start time and the video end time are both expressed as relative values: the video start time is an offset value delta1 relative to the time start point of the 1st video segment, and the video end time is an offset value deltaN relative to the time start point of the Nth video segment.
S203, obtaining the actual insertion time T_insert according to the offset value delta1 of the video start time relative to the time start point of the 1st video segment and the offset value deltax of the insertion time relative to the time start point of the xth (x ≤ N) video segment.
The actual insertion time is the actual play-out time of the video to be inserted in the video file generated later.
S204, obtaining effective video data according to the video data between the video start time and the video end time in the N video segments.
S205, acquiring video data to be inserted, and inserting the video data to be inserted into the effective video data according to the actual insertion time T_insert to generate a video file.
In the embodiment of the present invention, the video to be inserted is inserted into the effective video data by decoding and re-encoding. Before decoding and encoding, it must be ensured that the video data to be inserted and the effective video data have the same definition, so that they can be encoded with the same encoding parameters.
Specifically, as shown in fig. 3, the method for generating a video file by decoding and encoding includes:
S2051, decoding the effective video data to obtain effective decoded data.
S2052, decoding the video data to be inserted to obtain decoded data to be inserted.
S2053, encoding the effective decoded data in chronological order, and, when the encoding process reaches the data to be inserted, inserting the encoded data obtained by encoding the decoded data to be inserted; the data to be inserted is the decoded data corresponding to the data played at the actual insertion time T_insert during playback of the effective video data.
The same encoding parameters are used for encoding both the effective decoded data and the decoded data to be inserted.
S2054, after the encoding of the decoded data to be inserted is finished, continuing to encode the effective decoded data until the encoding of the effective decoded data is finished.
Specifically, the decoded data corresponding to the part of the effective video data that is played before T_insert is encoded directly, and the encoding result is recorded in a temporary encoded file. When the encoding process reaches the decoded data corresponding to the data played at T_insert, the encoding of the effective decoded data is suspended, and the encoded data obtained by encoding the decoded data to be inserted is appended to the temporary encoded file. After that encoding is finished, the remaining effective decoded data is encoded, and the result is appended to the temporary encoded file, until all the effective decoded data has been encoded.
S2055, generating a video file according to the encoding result.
Specifically, the video file may be generated from the generated temporary encoded file. A sketch of the whole suspend-and-resume loop follows.
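The following sketch models the loop of steps S2051 to S2055. Frames are modelled as (timestamp, payload) pairs and the temporary encoded file as a plain list; all names are illustrative assumptions, and a real implementation would drive an actual codec with identical encoding parameters for both streams.

```python
# A sketch, not the patent's implementation: encode effective frames in time
# order, pause at T_insert to append the encoded to-be-inserted stream, then
# resume until all effective frames are encoded.

def insert_by_reencoding(effective_frames, insert_frames, t_insert_ms):
    temp_encoded = []                     # stands in for the temporary encoded file
    inserted = False
    for ts, payload in effective_frames:  # effective frames sorted by timestamp
        if not inserted and ts >= t_insert_ms:
            temp_encoded.extend(p for _, p in insert_frames)  # inserted stream
            inserted = True
        temp_encoded.append(payload)      # continue encoding effective data
    if not inserted:                      # insertion point at or after the end
        temp_encoded.extend(p for _, p in insert_frames)
    return temp_encoded                   # the video file is generated from this
```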
If multiple videos to be inserted need to be inserted during video editing, each video corresponds to one insertion time: the ith video to be inserted has an actual insertion time T_insert_i, which is obtained in the same way as T_insert, so the description is not repeated.
The process of inserting multiple videos to be inserted by decoding and encoding is similar to steps S2051-S2055 and, as shown in fig. 4, includes the following steps:
S1, decoding the effective video data to obtain effective decoded data.
S2, decoding all the video data to be inserted to obtain the decoded data corresponding to each piece of video data to be inserted.
S3, encoding the effective decoded data in chronological order, and, when the encoding process reaches the data to be inserted corresponding to a certain piece of video data to be inserted, inserting the encoded data obtained by encoding the decoded data of that piece; the data to be inserted corresponding to that piece is the decoded data corresponding to the data played at its actual insertion time T_insert_i during playback of the effective video data.
S4, after the decoded data corresponding to an inserted piece of video data has been encoded, continuing to encode the effective decoded data; if the encoding process reaches the data to be inserted corresponding to another piece of video data to be inserted, returning to step S3; if the decoded data corresponding to all the video data to be inserted has been inserted, continuing to encode the effective decoded data until the encoding of the effective decoded data is finished.
S5, generating a video file according to the encoding result.
Specifically, the encoding process is recorded in a temporary encoded file, and after the encoding is finished, the video file can be generated from it. A sketch generalizing the single-insertion loop above to several insertion points follows.
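The same loop extends naturally to several videos to be inserted, each with its own T_insert_i, as in steps S1 to S5. The sketch below assumes the insertions are given as (time, frames) pairs; again, all names are hypothetical.

```python
# A sketch of the multi-insertion loop (steps S1-S5): whenever the encoding
# position reaches the next pending insertion time, the corresponding encoded
# stream is appended before encoding of the effective data resumes.

def insert_many_by_reencoding(effective_frames, inserts):
    pending = sorted(inserts, key=lambda p: p[0])  # earliest insertion first
    temp_encoded = []
    for ts, payload in effective_frames:
        while pending and ts >= pending[0][0]:     # reached an insertion point
            _, ins_frames = pending.pop(0)
            temp_encoded.extend(p for _, p in ins_frames)
        temp_encoded.append(payload)
    for _, ins_frames in pending:                  # insertion points past the end
        temp_encoded.extend(p for _, p in ins_frames)
    return temp_encoded
```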
Further, other non-linear operations, such as cutting, inserting, adding pictures, or blurring, can also be implemented by decoding and re-encoding.
Specifically, in this embodiment, encoding is performed according to preset encoding parameters; by changing the preset transcoding parameters and acquiring the corresponding video data to be inserted, one or more video files can be obtained with the video editing method of the embodiment of the present invention. Video files obtained with different encoding parameters have different definitions and can meet different user requirements. Typically, the definition may be smooth, standard definition, high definition, or ultra definition; a sketch of such per-definition presets follows.
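As an illustration of definition-dependent transcoding parameters, the sketch below maps the four definition tiers named in the text to encoding presets. Only the tier names come from the patent; the resolutions and bitrates are invented placeholder values, not figures from the source.

```python
# Hypothetical presets: only the four tier names (smooth, standard definition,
# high definition, ultra definition) come from the text; the numbers are
# illustrative assumptions.

ENCODING_PRESETS = {
    "smooth":   {"width": 640,  "height": 360,  "bitrate_kbps": 500},
    "standard": {"width": 854,  "height": 480,  "bitrate_kbps": 1000},
    "high":     {"width": 1280, "height": 720,  "bitrate_kbps": 2500},
    "ultra":    {"width": 1920, "height": 1080, "bitrate_kbps": 5000},
}

def output_files(requested_definitions):
    """One output video file per requested definition (one encode pass each)."""
    return [(d, ENCODING_PRESETS[d]) for d in requested_definitions]
```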
This embodiment provides a video editing method, and in particular a way to realize non-linear operations by decoding and re-encoding, so that the non-linear editing operations of cutting, inserting, adding pictures, or blurring can be completed accurately. Furthermore, the video editing method of the embodiment of the present invention places no limit on the number of non-linear editing operations in one editing pass; for example, one or more videos to be inserted can be inserted, which significantly improves the user experience of video editing. Furthermore, by setting different encoding parameters, the embodiment of the present invention can obtain video files of various definitions, meeting users' demands for a diversity of video definitions.
Another embodiment of the present invention further provides a video editing method, including:
S301, acquiring N (N ≥ 1) video clips for video editing; the video clips are numbered sequentially in chronological order.
S302, obtaining an offset value delta1 of the video start time relative to the time start point of the 1st video segment, an offset value deltaN of the video end time relative to the time start point of the Nth video segment, and an offset value deltax of the insertion time relative to the time start point of the xth (x ≤ N) video segment.
Specifically, the video start time and the video end time are both expressed as relative values: the video start time is an offset value delta1 relative to the time start point of the 1st video segment, and the video end time is an offset value deltaN relative to the time start point of the Nth video segment.
S303, obtaining the actual insertion time T_insert according to the offset value delta1 of the video start time relative to the time start point of the 1st video segment and the offset value deltax of the insertion time relative to the time start point of the xth (x ≤ N) video segment.
The actual insertion time is the actual play-out time of the video to be inserted in the video file generated later.
S304, obtaining effective video data according to the video data between the video start time and the video end time in the N video segments.
S305, acquiring video data to be inserted, and inserting the video data to be inserted into the effective video data according to the actual insertion time T_insert to generate a video file.
After the video file is generated, the video file may be directly published.
The embodiment of the present invention further processes the video file; the processing procedure, shown in fig. 5, includes:
S10, performing streaming processing on the video file.
S20, storing the video file after the streaming processing.
The streamed video file can be played directly with an MP4 player, so it can be published directly, and the user can watch the edited video.
S30, deleting the video segments used for generating the video file and the video data to be inserted.
The embodiment of the present invention thus obtains a streamed video file, which the user can watch directly with an MP4 player; one common way to perform the streaming processing is sketched below.
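The patent does not say how the streaming processing is performed. One common way to make an MP4 file playable before it is fully downloaded is to move its moov atom to the front of the file; the sketch below does this with ffmpeg's real -movflags +faststart option, remuxing without re-encoding.

```python
# A sketch assuming ffmpeg is available on PATH; -c copy remuxes the streams
# without re-encoding, and +faststart relocates the moov atom so playback can
# begin while the file is still downloading.

import subprocess

def stream_process(input_mp4: str, output_mp4: str) -> None:
    subprocess.run(
        ["ffmpeg", "-i", input_mp4, "-c", "copy",
         "-movflags", "+faststart", output_mp4],
        check=True,
    )
```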
An embodiment of the present invention provides a video editing apparatus, as shown in fig. 6, including:
A video clip acquiring module 401, configured to acquire N (N ≥ 1) video clips for video editing; the video clips are numbered sequentially in chronological order. May be used to implement step S101.
A parameter obtaining module 402, configured to obtain an offset value delta1 of the video start time relative to the time start point of the 1st video segment, an offset value deltaN of the video end time relative to the time start point of the Nth video segment, and an offset value deltax of the insertion time relative to the time start point of the xth (x ≤ N) video segment. May be used to implement step S102.
An insertion time obtaining module 403, configured to obtain the actual insertion time T_insert according to the offset value delta1 of the video start time relative to the time start point of the 1st video segment and the offset value deltax of the insertion time relative to the time start point of the xth (x ≤ N) video segment. May be used to implement step S103.
An effective data obtaining module 404, configured to obtain effective video data according to the video data between the video start time and the video end time in the N video segments. May be used to implement step S104.
A video file generating module 405, configured to acquire video data to be inserted and insert the video data to be inserted into the effective video data according to the actual insertion time T_insert to generate a video file. May be used to implement step S105.
Referring to fig. 7, the insertion time obtaining module 403 includes:
A total length calculation unit 4031, configured to calculate the total time length L from the time start point of the first segment to the time end point of the (x-1)th segment. May be used to implement step S1031.
An insertion time calculation unit 4032, configured to obtain the actual insertion time T_insert according to the formula T_insert = L - delta1 + deltax. May be used to implement step S1032.
Referring to fig. 8, the effective data obtaining module 404 includes:
A first data obtaining unit 4041, configured to read all data of the first video segment from delta1 until the end of the first video segment.
A second data obtaining unit 4042, configured to read all data of the Nth video segment from its time start point to deltaN.
A third data obtaining unit 4043, configured to read all data from the time end point of the first video segment to the time start point of the Nth video segment.
Referring to fig. 9, the video file generating module 405 includes:
the first decoding unit 4051 is configured to decode the valid video data to obtain valid decoded data. May be used to implement steps S2051 and S1.
The second decoding unit 4052 is configured to decode the video data to be inserted to obtain decoded data to be inserted. May be used to implement steps S2052 and S2.
A first encoding unit 4053 configured to encode the valid decoded data in chronological order, and when the encoding process proceeds to data to be inserted, insert encoded data obtained by encoding the decoded data to be inserted; the data to be inserted is the actual insertion time T in the playing process of the effective video datainsertAnd decoding data corresponding to the data during broadcasting. May be used to implement steps S2053 and S3.
The second encoding unit 4054 is configured to, after the encoding of the to-be-inserted decoded data is completed, continue to encode the valid decoded data until the encoding of the valid decoded data is completed. May be used to implement steps S2054 and S4.
The video file generating unit 4055 is configured to generate a video file according to the encoding result. May be used to implement steps S2055 and S5.
Further, the apparatus also comprises:
and a streaming processing module 406, configured to perform streaming processing on the video file. May be used to implement step S10.
And the storage module 407 is configured to store the video file after the streaming processing. May be used to implement step S20.
And the deleting module 408 is configured to delete the video segment used for generating the video file and the video data to be inserted. May be used to implement step S30.
The video file publishing module 409 is configured to publish the video file directly after obtaining the video file.
The apparatus embodiment and the method embodiment of the present invention are based on the same inventive concept; this embodiment can be used to implement the video editing method provided in the above embodiments.
The embodiment of the invention also provides a storage medium. Optionally, in this embodiment, the storage medium may be configured to store program code for executing the video editing method provided by the foregoing embodiments.
Optionally, in this embodiment, the storage medium may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
firstly, acquiring N (N ≥ 1) video clips for video editing; the video clips are numbered sequentially in temporal order;
secondly, obtaining an offset value delta1 of the video start time relative to the time start point of the 1st video segment, an offset value deltaN of the video end time relative to the time start point of the Nth video segment, and an offset value deltax of the insertion time relative to the time start point of the xth (x ≤ N) video segment;
thirdly, obtaining the actual insertion time T_insert according to the offset value delta1 of the video start time relative to the time start point of the 1st video segment and the offset value deltax of the insertion time relative to the time start point of the xth (x ≤ N) video segment;
fourthly, intercepting the video data between the video start time and the video end time in the N video clips to obtain effective video data;
fifthly, acquiring video data to be inserted, and inserting the video data to be inserted into the effective video data according to the actual insertion time T_insert to generate a video file.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
calculating the total time length L from the time start point of the first segment to the time end point of the (x-1)th segment;
obtaining the actual insertion time T_insert according to the formula T_insert = L - delta1 + deltax.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
if N = 1, reading all data of the first video clip from delta1 until the end of the first video clip;
if N = 2, reading first data, where the first data is all data of the first video clip from delta1 to the time end point of the first video clip; reading second data, where the second data is all data of the Nth video clip from its time start point to deltaN; and splicing the first data and the second data in time order to obtain effective video data;
if N > 2, reading first data, where the first data is all data of the first video clip from delta1 to the time end point of the first video clip; reading second data, where the second data is all data of the Nth video clip from its time start point to deltaN; reading third data, where the third data is all data from the time end point of the first video clip to the time start point of the Nth video clip; and splicing the first data, the third data, and the second data in time order to obtain effective video data.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
decoding the effective video data to obtain effective decoded data;
decoding the video data to be inserted to obtain decoded data to be inserted;
encoding the effective decoded data in chronological order, and, when the encoding process reaches the data to be inserted, inserting the encoded data obtained by encoding the decoded data to be inserted, where the data to be inserted is the decoded data corresponding to the data played at the actual insertion time T_insert during playback of the effective video data;
after the encoding of the decoded data to be inserted is finished, continuing to encode the effective decoded data until the encoding of the effective decoded data is finished;
and generating a video file according to the encoding result.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
encoding according to the preset encoding parameters, and changing the preset transcoding parameters to obtain one or more video files.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
performing streaming processing on the video file;
storing the video file after the streaming processing;
and deleting the video clips used for generating the video file and the video data to be inserted.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
publishing the video file directly after the video file is obtained.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
Referring to fig. 10, an embodiment of the present invention provides a terminal, which may be used to implement the method for implementing video editing provided in the foregoing embodiment. Specifically, the method comprises the following steps:
the terminal may include RF (Radio Frequency) circuitry 110, memory 120 including one or more computer-readable storage media, input unit 130, display unit 140, sensor 150, audio circuitry 160, WiFi (wireless fidelity) module 170, processor 180 including one or more processing cores, and power supply 190. Those skilled in the art will appreciate that the terminal structure shown in fig. 10 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, receives downlink information from a base station and then sends the received downlink information to the one or more processors 180 for processing; in addition, data relating to uplink is transmitted to the base station. In general, the RF circuitry 110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (low noise amplifier), a duplexer, and the like. In addition, the RF circuitry 110 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division multiple access), WCDMA (Wideband Code Division multiple access), LTE (Long Term Evolution), email, SMS (Short Messaging Service), etc.
The memory 120 may be used to store software programs and modules, and the processor 180 executes various functional applications and data processing by operating the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, application programs required for functions, and the like; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 120 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 120 may further include a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.
The input unit 130 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 130 may include a touch-sensitive surface 131 as well as other input devices 132. The touch-sensitive surface 131, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near the touch-sensitive surface 131 (e.g., operations by a user on or near the touch-sensitive surface 131 using a finger, a stylus, or any other suitable object or attachment), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface 131 may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 180, and can receive and execute commands sent by the processor 180. Additionally, the touch-sensitive surface 131 may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. In addition to the touch-sensitive surface 131, the input unit 130 may also include other input devices 132. In particular, other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 140 may be used to display information input by or provided to the user and various graphic user interfaces of the terminal, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 140 may include a display panel 141; optionally, the display panel 141 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 131 may cover the display panel 141; when a touch operation is detected on or near the touch-sensitive surface 131, it is transmitted to the processor 180 to determine the type of the touch event, and the processor 180 then provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although in fig. 10 the touch-sensitive surface 131 and the display panel 141 are shown as two separate components to implement input and output functions, in some embodiments the touch-sensitive surface 131 may be integrated with the display panel 141 to implement input and output functions.
The terminal may also include at least one sensor 150, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 141 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 141 and/or a backlight when the terminal is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), detect the magnitude and direction of gravity when the terminal is stationary, and can be used for applications of recognizing terminal gestures (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal, detailed description is omitted here.
Audio circuitry 160, speaker 161, microphone 162 may provide an audio interface between a user and the terminal. The audio circuit 160 may transmit the electrical signal converted from the received audio data to the speaker 161, and convert the electrical signal into a sound signal for output by the speaker 161; on the other hand, the microphone 162 converts the collected sound signal into an electric signal, converts the electric signal into audio data after being received by the audio circuit 160, and then outputs the audio data to the processor 180 for processing, and then to the RF circuit 110 to be transmitted to, for example, another terminal, or outputs the audio data to the memory 120 for further processing. The audio circuit 160 may also include an earbud jack to provide communication of peripheral headphones with the terminal.
WiFi belongs to a short-distance wireless transmission technology, and the terminal can help a user to send and receive e-mails, browse webpages, access streaming media and the like through the WiFi module 170, and provides wireless broadband internet access for the user. Although fig. 10 shows the WiFi module 170, it is understood that it does not belong to the essential constitution of the terminal, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 180 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 120 and calling data stored in the memory 120, thereby performing overall monitoring of the terminal. Optionally, processor 180 may include one or more processing cores; preferably, the processor 180 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 180.
The terminal also includes a power supply 190 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 180 via a power management system to manage charging, discharging, and power consumption management functions via the power management system. The power supply 190 may also include any component including one or more of a dc or ac power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the terminal may further include a camera, a bluetooth module, and the like, which are not described herein again. Specifically, in this embodiment, the display unit of the terminal is a touch screen display, the terminal further includes a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors according to the instructions of the method embodiments of the present invention.
In summary, the terminal provided by the embodiment of the present invention can, even when only video clips are available, accurately locate the time point at which the video to be inserted needs to be inserted and complete the insertion in the finally generated video file, with millisecond-level insertion precision. Furthermore, by changing transcoding parameters, video files of different definitions can be generated, fully meeting the requirements of video publishing.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a server according to an embodiment of the present invention. The server 800, which may vary significantly depending on configuration or performance, may include one or more central processing units (CPUs) 822 (e.g., one or more processors), memory 832, and one or more storage media 830 (e.g., one or more mass storage devices) storing applications 842 or data 844. The memory 832 and the storage medium 830 may be transient or persistent storage. The program stored in the storage medium 830 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Still further, the central processor 822 may be configured to communicate with the storage medium 830 and to execute, on the server 800, the series of instruction operations in the storage medium 830. The server 800 may also include one or more power supplies 826, one or more wired or wireless network interfaces 850, one or more input-output interfaces 858, and/or one or more operating systems 841, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on. The steps performed by the above-described method embodiment may be based on the server structure shown in fig. 11.
Referring to fig. 12, fig. 12 is a system architecture diagram provided by the embodiment of the present invention, where the system architecture diagram may be used in a system of a common client-server architecture or a system of a common browser-server architecture, and further, the server 120 in the system may be a single server or a server cluster including multiple nodes. There may be multiple terminals 110, such as terminal 110(1) and terminal 110 (2). The system can be used for executing the method for realizing video editing in the method embodiment of the invention.
It should be noted that: the above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (13)

1. A method of video editing, the method comprising:
acquiring N video clips for video editing; the video clips are numbered sequentially according to a time sequence, and N is greater than 1;
obtaining an offset value delta1 of the video start time relative to the time start point of the 1st video segment, an offset value deltaN of the video end time relative to the time start point of the Nth video segment, and an offset value deltax of the insertion time relative to the time start point of the xth video segment, wherein x is less than or equal to N;
calculating the total time length L from the time start point of the first video segment to the time end point of the (x-1)th video segment, and obtaining the actual insertion time T_insert according to the formula T_insert = L - delta1 + deltax;
obtaining effective video data according to the video data between the video start time and the video end time in the N video clips;
acquiring video data to be inserted, and inserting the video data to be inserted into the effective video data according to the actual insertion time T_insert to generate a video file.
2. The method of claim 1, wherein obtaining effective video data according to the video data between the video start time and the video end time in the N video segments comprises:
if N = 2, reading first data, where the first data is all data of the first video clip from delta1 to the time end point of the first video clip; reading second data, where the second data is all data of the Nth video clip from its time start point to deltaN; and splicing the first data and the second data in time order to obtain effective video data;
if N > 2, reading first data, where the first data is all data of the first video clip from delta1 to the time end point of the first video clip; reading second data, where the second data is all data of the Nth video clip from its time start point to deltaN; reading third data, where the third data is all data from the time end point of the first video clip to the time start point of the Nth video clip; and splicing the first data, the third data, and the second data in time order to obtain effective video data.
3. The method of claim 1, wherein inserting the video data to be inserted into the effective video data according to the actual insertion time T_insert to generate a video file comprises:
decoding the effective video data to obtain effective decoded data;
decoding the video data to be inserted to obtain decoded data to be inserted;
encoding the effective decoded data in chronological order, and, when the encoding process reaches the data to be inserted, inserting the encoded data obtained by encoding the decoded data to be inserted; the data to be inserted is the decoded data corresponding to the data played at the actual insertion time T_insert during playback of the effective video data;
after the encoding of the decoded data to be inserted is finished, continuing to encode the effective decoded data until the encoding of the effective decoded data is finished;
and generating a video file according to the encoding result.
4. The method of claim 3, wherein:
encoding according to the preset encoding parameters, and changing the preset transcoding parameters to obtain one or more video files.
5. The method of claim 3, further comprising:
performing streaming processing on the video file;
storing the video file after the streaming processing;
and deleting the video clips used for generating the video file and the video data to be inserted.
6. The method of claim 1, further comprising:
publishing the video file directly after the video file is obtained.
7. A video editing apparatus, comprising:
the video clip acquisition module is used for acquiring N video clips for video editing; the video clips are numbered sequentially according to a time sequence, and N is greater than 1;
a parameter obtaining module, configured to obtain an offset value delta1 of a video start time relative to a time start point of a1 st video segment, an offset value deltan of a video end time relative to a time start point of an nth video segment, and an offset value deltax of an insertion time in a video relative to a time start point of an xth video segment, where x is equal to or less than N;
an insertion time acquisition module, configured to calculate the total time length L from the time start point of the first video segment to the time end point of the (x-1)th video segment, and to obtain the actual insertion time Tinsert according to the formula Tinsert = L - delta1 + deltax;
an effective data acquisition module, configured to obtain effective video data according to the video data between the video start time and the video end time in the N video clips;
a video file generation module, configured to acquire video data to be inserted, and to insert the video data to be inserted into the effective video data according to the actual insertion time Tinsert to generate a video file.
8. The apparatus of claim 7, wherein the valid data acquisition module comprises:
a first data obtaining unit, configured to read all data of the first video segment from delta1 until the end of the first video segment;
a second data obtaining unit, configured to read all data of the nth video segment from a time start point to deltan;
and the third data acquisition unit is used for reading all data from the time end point of the first video clip to the time start point of the Nth video clip.
9. The apparatus of claim 7, wherein the video file generation module comprises:
a first decoding unit, configured to decode the effective video data to obtain effective decoded data;
a second decoding unit, configured to decode the video data to be inserted to obtain decoded data to be inserted;
a first encoding unit, configured to encode the effective decoded data in time order, and to insert the encoded data obtained by encoding the decoded data to be inserted when the encoding process reaches the data to be inserted, where the data to be inserted is the data that is played at the actual insertion time Tinsert during playback of the effective video data;
a second encoding unit, configured to, after the encoding of the decoded data to be inserted is finished, continue encoding the effective decoded data until all of the effective decoded data has been encoded;
and a video file generating unit, configured to generate a video file according to the encoding result.
10. The apparatus of claim 9, further comprising:
a streaming processing module, configured to perform streaming processing on the video file;
a storage module, configured to store the video file after the streaming processing;
and a deleting module, configured to delete the video clips and the video data to be inserted that were used to generate the video file.
11. The apparatus of claim 7, further comprising:
and a video file publishing module, configured to directly publish the video file after the video file is obtained.
12. A video editing terminal characterized in that the terminal comprises the video editing apparatus of any one of claims 7 to 11.
13. A computer storage medium having stored therein at least one instruction, the at least one instruction being loaded and executed by a processor to perform the video editing method according to any one of claims 1 to 6.
CN201710288746.0A 2017-04-27 2017-04-27 Video editing method and device and terminal Active CN108810596B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710288746.0A CN108810596B (en) 2017-04-27 2017-04-27 Video editing method and device and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710288746.0A CN108810596B (en) 2017-04-27 2017-04-27 Video editing method and device and terminal

Publications (2)

Publication Number Publication Date
CN108810596A CN108810596A (en) 2018-11-13
CN108810596B (en) 2021-12-14

Family

ID=64069419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710288746.0A Active CN108810596B (en) 2017-04-27 2017-04-27 Video editing method and device and terminal

Country Status (1)

Country Link
CN (1) CN108810596B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111385599B (en) * 2018-12-28 2022-02-11 北京字节跳动网络技术有限公司 Video processing method and device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6434195B1 (en) * 1998-11-20 2002-08-13 General Instrument Corporation Splicing of video data in progressively refreshed video streams
US9060200B1 (en) * 2004-08-11 2015-06-16 Visible World, Inc. System and method for digital program insertion in cable systems
US8699578B2 (en) * 2008-06-17 2014-04-15 Cisco Technology, Inc. Methods and systems for processing multi-latticed video streams
JP5334771B2 * 2008-10-07 2013-11-06 Thomson Licensing Method for inserting an advertisement clip into a video sequence and corresponding device
CN101557464B (en) * 2009-04-01 2013-06-05 深圳市融创天下科技股份有限公司 Method for dynamically embedding other media segments in video program playback
CN101883244B (en) * 2009-05-05 2013-01-23 百视通网络电视技术发展有限责任公司 System and method for inserting and playing advertisement in network television video program
US9521437B2 (en) * 2009-06-17 2016-12-13 Google Technology Holdings LLC Insertion of recorded secondary digital video content during playback of primary digital video content
US8495675B1 (en) * 2012-07-30 2013-07-23 Mdialog Corporation Method and system for dynamically inserting content into streaming media
US9100721B2 (en) * 2012-12-06 2015-08-04 Cable Television Laboratories, Inc. Advertisement insertion
CN103414941A (en) * 2013-07-15 2013-11-27 深圳Tcl新技术有限公司 Program editing method and device based on intelligent television
CN108471554A (en) * 2017-02-23 2018-08-31 合网络技术(北京)有限公司 Multimedia resource synthetic method and device

Also Published As

Publication number Publication date
CN108810596A (en) 2018-11-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant