CN105611401B - Video clipping method and apparatus - Google Patents
Video clipping method and apparatus
- Publication number
- CN105611401B (application CN201510968558.3A / CN201510968558A)
- Authority
- CN
- China
- Prior art keywords
- video
- video file
- data frame
- audio
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Abstract
The invention discloses a video clipping method and apparatus for improving the playback quality of clipped video files. The method includes: cutting a source video file to obtain at least two first video file segments to be merged; checking the end position of each first video file segment for audio data frames and, when audio data frames are missing at the end position of a first video file, performing audio supplementation on the first video file to obtain a corresponding processed first video file; and merging the first video file segments into a target video file to be played. In this way, each first video file is adjusted according to its audio data frames after cutting, and the adjusted files are merged, so that when the merged target video file is played there is no time difference between audio and video, improving the playback quality of the clipped video file.
Description
Technical field
The present invention relates to the field of multimedia technology, and in particular to a video clipping method and apparatus.
Background technology
With the development of multimedia technology, the playback of video, audio, pictures, and the like has become familiar to users. When consuming multimedia — for example, when playing a video — a user may not need to watch the entire content, or may only want to watch certain segments of it. In that case the video must be clipped: several segment files are cut from the source video file as required and then merged into a new video file for playback.
Currently, some video clipping software can perform non-linear editing on a source video file: according to user instructions, the video is searched until the cut points are found, the video is split into multiple segment videos, and the segments to be watched are merged into a new video file for playback. This operation is very fast, but the new video file may suffer from a time difference between its audio and video, or from incomplete video frame data that prevents playback.
Summary of the invention
The present invention provides a video clipping method and apparatus to improve the playback quality of clipped video files.
The present invention provides a video clipping method, the method including:
cutting a source video file to obtain at least two first video file segments to be merged;
checking the end position of each first video file segment for audio data frames and, when audio data frames are missing at the end position of the first video file, performing audio supplementation on the first video file to obtain a corresponding processed first video file; and
merging each first video file segment into a target video file to be played.
In one embodiment of the invention, when audio data frames are missing at the end position of the first video file, performing audio supplementation on the first video file includes:
checking the start position of the next first video file adjacent to the current first video file for audio data frames;
if the start position of the next first video file has surplus audio data frames but no video data frames, supplementing the surplus audio data frames to the end position of the current first video file; and
deleting the surplus audio data frames from the next first video file.
In one embodiment of the invention, cutting the source video file to obtain at least two first video file segments to be merged includes:
receiving a cutting instruction containing cut-point times, where the cut-point times include a start time and an end time; and
obtaining, according to the video timestamps corresponding to the video stream in the source video file and the audio timestamps corresponding to the audio stream, the video data frames and audio data frames corresponding to the cut-point times, to obtain the first video file.
In one embodiment of the invention, when audio data frames are missing at the end position of the first video file, performing audio supplementation on the first video file includes:
determining the time difference between the audio stream and the video stream of the first video file; and
supplementing audio data frames corresponding to the time difference.
In one embodiment of the invention, determining the time difference between the audio stream and the video stream of the first video file includes:
comparing the video timestamp of the first frame of the video stream in the first video file with the audio timestamp of the first frame of the audio stream to obtain a first time difference;
comparing the video timestamp of the last frame of the video stream in the first video file with the audio timestamp of the last frame of the audio stream to obtain a second time difference; and
obtaining, according to the first time difference and the second time difference, the time difference between the audio stream and the video stream.
The present invention provides a video clipping apparatus, the apparatus including:
a cutting unit, configured to cut a source video file to obtain at least two first video file segments to be merged;
a processing unit, configured to check the end position of each first video file segment for audio data frames and, when audio data frames are missing at the end position of the first video file, perform audio supplementation on the first video file to obtain a corresponding processed first video file; and
a merging unit, configured to merge each first video file segment into a target video file to be played.
In one embodiment of the invention, the processing unit includes:
a checking subunit, configured to check the start position of the next first video file adjacent to the current first video file for audio data frames;
a first supplementing subunit, configured to, if the start position of the next first video file has surplus audio data frames but no video data frames, supplement the surplus audio data frames to the end position of the current first video file; and
a deleting subunit, configured to delete the surplus audio data frames from the next first video file.
In one embodiment of the invention, the cutting unit includes:
a receiving subunit, configured to receive a cutting instruction containing cut-point times, where the cut-point times include a start time and an end time; and
an obtaining subunit, configured to obtain, according to the video timestamps corresponding to the video stream in the source video file and the audio timestamps corresponding to the audio stream, the video data frames and audio data frames corresponding to the cut-point times, to obtain the first video file.
In one embodiment of the invention, the processing unit includes:
a determining subunit, configured to determine the time difference between the audio stream and the video stream of the first video file; and
a second supplementing subunit, configured to supplement audio data frames corresponding to the time difference.
In one embodiment of the invention, the determining subunit is specifically configured to compare the video timestamp of the first frame of the video stream in the first video file with the audio timestamp of the first frame of the audio stream to obtain a first time difference; compare the video timestamp of the last frame of the video stream in the first video file with the audio timestamp of the last frame of the audio stream to obtain a second time difference; and obtain, according to the first time difference and the second time difference, the time difference between the audio stream and the video stream.
Some advantageous effects of embodiments of the present invention may include:
Each first video file is adjusted according to its audio data frames after cutting, and the adjusted first video files are then merged, so that when the merged target video file is played there is no time difference between audio and video, which improves the playback quality of the clipped video file.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood through practice of the invention. The objectives and other advantages of the invention can be realized and obtained by the structures particularly pointed out in the written description, claims, and accompanying drawings.
The technical solution of the present invention is described in further detail below through the accompanying drawings and embodiments.
Description of the drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the specification; together with the embodiments of the present invention, they serve to explain the invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is a flowchart of video clipping according to an exemplary embodiment;
Fig. 2 is a flowchart of video clipping according to exemplary embodiment one;
Fig. 3 is a flowchart of video clipping according to exemplary embodiment two;
Fig. 4 is a structural diagram of a video clipping apparatus according to an exemplary embodiment;
Fig. 5 is a structural diagram of the cutting unit 410 according to an exemplary embodiment;
Fig. 6 is a structural diagram of the processing unit 420 according to an exemplary embodiment;
Fig. 7 is a structural diagram of the processing unit 420 according to another exemplary embodiment.
Detailed description of the embodiments
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described herein are only intended to illustrate and explain the present invention, not to limit it.
In the technical solution provided by the embodiments of the present disclosure, a source video file is cut into at least two first video files; each first video file is adjusted according to its audio data frames after cutting, and the adjusted first video files are merged, so that when the merged target video file is played there is no time difference between audio and video, which improves the playback quality of the clipped video file.
Fig. 1 is a flowchart of video clipping according to an exemplary embodiment. As shown in Fig. 1, the video clipping process includes:
Step 101: Cut a source video file to obtain at least two first video file segments to be merged.
Here, the video file to be clipped is the source video file. The source video file contains a video stream and an audio stream; each video data frame in the video stream has a corresponding video timestamp, and each audio data frame in the audio stream has a corresponding audio timestamp.
Therefore, the source video file can be cut according to the input cut-point times together with the video timestamps and audio timestamps. Specifically, this may include: receiving a cutting instruction containing cut-point times, where the cut-point times include a start time and an end time; and then obtaining, according to the video timestamps corresponding to the video stream in the source video file and the audio timestamps corresponding to the audio stream, the video data frames and audio data frames corresponding to the cut-point times, to obtain the first video file.
Taking the video stream as an example, suppose the video playback rate of the source video file is 25 FPS, i.e., 25 video data frames are played per second, so each frame plays for 40 ms. With 8:00:00 as the reference starting point, the video timestamp of the first video data frame is 8:00:00.000, the video timestamp of the second video data frame is 8:00:00.040, the video timestamp of the third video data frame is 8:00:00.080, and so on; the video timestamp of the 9000th video data frame is 8:06:00.
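The timestamp arithmetic above can be sketched in a few lines of Python. This is an illustration, not part of the patent; the 1-indexed frame numbering and the 25 FPS rate follow the example:

```python
def frame_timestamp_ms(frame_number: int, fps: int = 25) -> int:
    """Timestamp of the n-th video data frame (1-indexed), in milliseconds
    relative to the reference starting point (8:00:00 in the example)."""
    return (frame_number - 1) * 1000 // fps

# At 25 FPS each frame plays for 40 ms:
# frame 1 -> 0 ms, frame 2 -> 40 ms, frame 3 -> 80 ms.
```

Note that under this 1-indexed convention the frame falling exactly at the 6-minute mark is the 9001st; the patent's "9000th frame at 8:06:00" matches a 0-indexed count.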
After the cut-point times are determined according to application requirements, a cutting instruction containing the cut-point times can be input, and the cutting instruction is received. For example, if the cut-point times in the cutting instruction are a start time of 8:00:00 and an end time of 8:05:00.120, then the video data frames whose video timestamps fall between the start time and the end time can be obtained.
Of course, the audio data frames of the audio stream in the source video file also carry audio timestamps, so the audio data frames whose audio timestamps fall between the start time and the end time can likewise be obtained, yielding the first video file. First video files corresponding to other cut-point times can also be obtained, for example a first video file from 8:20:00 to 8:25:10.
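Selecting the frames of a stream whose timestamps fall between the start and end cut-point times can be sketched as follows. The `(timestamp_ms, payload)` frame representation is an assumption for illustration, not the patent's container format:

```python
def cut_stream(frames, start_ms, end_ms):
    """Keep the (timestamp_ms, payload) frames whose timestamps lie in
    [start_ms, end_ms]; applied separately to the video and audio streams."""
    return [f for f in frames if start_ms <= f[0] <= end_ms]

video = [(0, "v0"), (40, "v1"), (80, "v2"), (120, "v3")]
clip = cut_stream(video, 40, 80)  # keeps the frames at 40 ms and 80 ms
```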
Step 102: Check the end position of each first video file segment for audio data frames and, when audio data frames are missing at the end position of a first video file, perform audio supplementation on the first video file to obtain the corresponding processed first video file.
The source video file contains a video stream and an audio stream. Since the video stream and the audio stream are not necessarily synchronized, there may be a time difference between the audio stream and the video stream of a first video file. For example, the source video file may have video data frames from 8:00:00 onward without necessarily having sound — i.e., audio data frames — at that time; the sound may only begin at 8:00:30. If the source video file is then cut starting at 8:00:00, a time difference may exist between the audio stream and the video stream of the first video file.
As another example: suppose that in a first video file obtained after cutting, the video timestamp of the last video data frame of the video stream is 8:05:00 while the audio timestamp of the last audio data frame of the audio stream is 8:04:00. Then audio data frames are missing at the end position of the first video file; if each frame plays for 40 ms, roughly 250 audio data frames are missing.
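The size of the gap at the tail can be estimated from the last timestamp of each stream. A minimal sketch, assuming (as in the example) a fixed 40 ms frame duration:

```python
def missing_audio_frames(last_video_ts_ms: int, last_audio_ts_ms: int,
                         frame_ms: int = 40) -> int:
    """Estimate how many audio data frames are absent at the end of a
    segment, given the timestamps of each stream's last frame."""
    gap_ms = last_video_ts_ms - last_audio_ts_ms
    return max(0, gap_ms // frame_ms)

# A 10-second gap at 40 ms per frame corresponds to 250 missing frames.
```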
At this point, audio supplementation can be performed on the first video file to obtain the corresponding processed first video file.
Since two or more first video files are merged, the video file missing audio data frames here is the current first video file. The start position of the next first video file adjacent to the current first video file can be checked for audio data frames; if the start position of the next first video file has surplus audio data frames but no video data frames, the surplus audio data frames are supplemented to the end position of the current first video file, and the surplus audio data frames are deleted from the next first video file.
For example: the end position of the current first video file is missing 250 audio data frames, and the start position of the next first video file adjacent to it has only audio data frames and no video data frames. Say that in the next first video file, the video timestamp of the first video data frame of the video stream is 8:11:00 while the audio timestamp of the first audio data frame of the audio stream is 8:10:00; then the start position of this first video file has surplus audio data frames — roughly 250 of them, if each frame plays for 40 ms. These 250 surplus audio data frames can then be supplemented to the end position of the current first video file. Of course, the two frame counts may not match exactly; how many frames to supplement can be determined by the number of audio data frames missing at the end position of the current first video file. If the next first video file has enough surplus audio data frames, the audio data frames missing at the end position of the current first video file can be filled in completely.
Of course, once the surplus audio data frames of the next first video file have been supplemented to the end position of the current first video file, they need to be deleted from the next first video file.
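The borrow-and-delete step — moving the next segment's leading surplus audio frames onto the current segment's tail — can be sketched as below. The dict-of-streams segment representation is hypothetical; the patent does not prescribe a data structure:

```python
def borrow_leading_audio(current: dict, nxt: dict) -> None:
    """Move audio frames of `nxt` that precede its first video frame onto
    the end of `current`, then delete them from `nxt`.
    Segments are dicts: {"video": [(ts_ms, data), ...],
                         "audio": [(ts_ms, data), ...]}, timestamp-sorted."""
    if not nxt["video"]:
        return
    first_video_ts = nxt["video"][0][0]
    # Surplus = audio frames at the start position with no video frame yet.
    surplus = [f for f in nxt["audio"] if f[0] < first_video_ts]
    current["audio"].extend(surplus)            # supplement current tail
    nxt["audio"] = nxt["audio"][len(surplus):]  # delete from next segment
```

Because each stream is timestamp-ordered, the surplus frames are necessarily a prefix of the next segment's audio list, so slicing them off is safe.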
In an embodiment of the present invention, audio data frames can also be supplemented according to the time difference between the audio stream and the video stream of the first video file: the time difference between the audio stream and the video stream is determined, and then audio data frames corresponding to that time difference are supplemented.
Since each video data frame in the video stream has a corresponding video timestamp and each audio data frame in the audio stream has a corresponding audio timestamp, the time difference between the audio stream and the video stream of the first video file can be determined by comparing the corresponding timestamps. Specifically, this may include: comparing the video timestamp of the first frame of the video stream in the first video file with the audio timestamp of the first frame of the audio stream to obtain a first time difference; comparing the video timestamp of the last frame of the video stream in the first video file with the audio timestamp of the last frame of the audio stream to obtain a second time difference; and obtaining, according to the first time difference and the second time difference, the time difference between the audio stream and the video stream. Alternatively, a first time corresponding to the video stream in the first video file and a second time corresponding to the audio stream can each be obtained, and the time difference between the audio stream and the video stream derived from the first time and the second time.
Since audio data frames are missing at the end position of the first video file and a time difference exists, audio data frames corresponding to the time difference can be supplemented to obtain the processed first video file.
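The first/second time-difference computation can be sketched as follows. The patent does not specify how the two differences are combined into the final offset, so this sketch simply returns both (timestamp lists are assumed sorted):

```python
def stream_time_differences(video_ts: list, audio_ts: list) -> tuple:
    """Compare first-frame and last-frame timestamps (in ms) of the video
    and audio streams of one segment, returning
    (first_time_difference, second_time_difference).
    Positive values mean the video stream extends past the audio stream."""
    first_diff = video_ts[0] - audio_ts[0]      # heads compared
    second_diff = video_ts[-1] - audio_ts[-1]   # tails compared
    return first_diff, second_diff

# Video spans 0..10_000 ms while audio spans 0..9_000 ms: the audio tail
# is short by 1000 ms, i.e. 25 audio frames at 40 ms per frame.
```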
Step 103: Merge each first video file segment into a target video file to be played.
When audio data frames are missing at the end position of a first video file, audio supplementation is performed on it to obtain the corresponding processed first video file; when no audio data frames are missing at the end position, no processing is needed and the file remains the first video file as cut. Each first video file segment can then be merged into the target video file to be played; during the merge, the timestamps of the video stream in each first video file segment can be converted according to the crystal oscillator frequency to determine the playback time of the target video file.
For example, suppose the first first-video-file segment runs from 8:00:00 to 8:05:00.030, and the second runs from 8:12:00.500 to 8:20:00. When these two first video file segments are merged, the video timestamps of the video stream and the audio timestamps of the audio stream in the first video files must also be modified. The timestamps of the video stream in each first video file segment can be converted according to the crystal oscillator frequency to determine the playback time of the target video file. If the playback frame rate is still 25 FPS, the video timestamp of the first video data frame in the video stream of the second first-video-file segment may start from 8:05:00.070, and the video timestamp of each subsequent video data frame of that first video file is modified in turn. The playback frame rate corresponds to the crystal oscillator frequency; for example, when each frame plays for 21 ms, the corresponding crystal oscillator frequency is 1920 Hz. That is, the timestamps of the video stream in each first video file segment are converted according to the crystal oscillator frequency to determine the playback time of the target video file.
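Re-stamping timestamps while concatenating segments can be sketched as follows. This is an illustration only: a real container would also rewrite audio timestamps and work in the clock units implied by the crystal oscillator frequency, which the sketch reduces to a fixed 40 ms frame duration:

```python
def merge_segments(segments: list, frame_ms: int = 40) -> list:
    """Concatenate segments of (timestamp_ms, payload) video frames,
    re-stamping every frame so the merged stream plays back contiguously."""
    merged, next_ts = [], 0
    for seg in segments:
        for _orig_ts, payload in seg:   # original timestamps are discarded
            merged.append((next_ts, payload))
            next_ts += frame_ms
    return merged

seg_a = [(0, "a0"), (40, "a1")]
seg_b = [(720_500, "b0"), (720_540, "b1")]  # far-away original timestamps
# merge_segments([seg_a, seg_b]) re-stamps the frames to 0, 40, 80, 120 ms.
```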
It can be seen that in the embodiments of the present invention, each first video file is adjusted according to its audio data frames after cutting, and the adjusted first video files are merged, so that when the merged target video file is played there is no time difference between audio and video, which improves the playback quality of the clipped video file.
The operation of the method provided by the embodiments of the present disclosure is illustrated below with specific embodiments.
Embodiment one. Referring to Fig. 2, the video clipping process in this embodiment includes:
Step 201: Cut a source video file to obtain two first video file segments to be merged.
Suppose the cut-point times in one cutting instruction are 8:00:00 and 8:10:00; corresponding to this cutting instruction, the first first-video-file segment is obtained. The cut-point times in another cutting instruction are 8:20:00 and 8:30:00; corresponding to this cutting instruction, the second first-video-file segment is obtained. Specifically, the video data frames and audio data frames corresponding to the cut-point times can be obtained according to the video timestamps corresponding to the video stream in the source video file and the audio timestamps corresponding to the audio stream, yielding the first video files.
Step 202: Judge whether audio data frames are missing at the end position of each first video file segment. If so, execute step 203; otherwise, execute step 206.
Here, the end position of each first video file segment is checked for audio data frames. If audio data frames are missing at the end position, step 203 is executed; otherwise, step 206 is executed.
Step 203: Determine the first video file as the current first video file.
Step 204: Judge whether the start position of the next first video file adjacent to the current first video file has surplus audio data frames. If so, execute step 205; otherwise, execute step 206.
Step 205: Supplement the surplus audio data frames of the next first video file to the end position of the current first video file, and delete the surplus audio data frames from the next first video file, obtaining the respective processed first video files.
Step 206: Merge each first video file segment into a target video file to be played.
After the first video files are obtained, some may have been processed while others come directly from cutting; the two first video file segments can then be merged to obtain the target video file to be played. Of course, during the merge, the timestamps of the video stream in the first video files can be modified according to a reference time point and the crystal oscillator frequency to obtain the correct playback time.
It can be seen that in this implementation, processing the audio stream yields a target video file with no time difference between the audio stream and the video stream, further improving the playback quality of the video file.
Embodiment two. Referring to Fig. 3, the video clipping process in this embodiment includes:
Step 301: Cut a source video file to obtain three first video file segments to be merged.
Suppose the cut-point times in one cutting instruction are 8:00:00 and 8:10:00; corresponding to this cutting instruction, the first first-video-file segment is obtained. The cut-point times in another cutting instruction are 8:20:00 and 8:30:00; corresponding to this cutting instruction, the second first-video-file segment is obtained. The cut-point times in a third cutting instruction are 8:45:00 and 8:55:00; corresponding to this cutting instruction, the third first-video-file segment is obtained.
Specifically, the video data frames and audio data frames corresponding to the cut-point times can be obtained according to the video timestamps corresponding to the video stream in the source video file and the audio timestamps corresponding to the audio stream, yielding the first video files.
Step 302: Judge whether audio data frames are missing at the end position of each first video file segment. If so, execute step 303; otherwise, execute step 306.
Step 303: Determine the first video file as the current first video file.
Step 304: Determine the time difference between the audio stream and the video stream of the current first video file.
The time difference between the audio stream and the video stream is obtained according to the video timestamps corresponding to the video stream in the current first video file and the audio timestamps corresponding to the audio stream. Specifically, the video timestamp of the first frame of the video stream in the current first video file is compared with the audio timestamp of the first frame of the audio stream to obtain a first time difference; the video timestamp of the last frame of the video stream in the current first video file is compared with the audio timestamp of the last frame of the audio stream to obtain a second time difference; and the time difference between the audio stream and the video stream is obtained according to the first time difference and the second time difference.
Step 305: Supplement audio data frames corresponding to the time difference to obtain the processed first video file.
Step 306: Merge each first video file segment into a target video file to be played.
After the first video files are obtained, some may have been processed while others come directly from cutting; the three first video file segments can then be merged to obtain the target video file to be played. Of course, during the merge, the timestamps of the video stream in the first video files can be modified according to a reference time point and the crystal oscillator frequency to obtain the correct playback time.
It can be seen that in this implementation, processing the audio stream yields a target video file with no time difference between the audio stream and the video stream, further improving the playback quality of the video file.
The following are apparatus embodiments of the present disclosure, which can be used to carry out the method embodiments of the present disclosure.
Based on the video clipping process described above, a video clipping apparatus can be constructed. As shown in Fig. 4, the apparatus includes a cutting unit 410, a processing unit 420, and a merging unit 430, where:
the cutting unit 410 is configured to cut a source video file to obtain at least two first video file segments to be merged;
the processing unit 420 is configured to check the end position of each first video file segment for audio data frames and, when audio data frames are missing at the end position of a first video file, perform audio supplementation on the first video file to obtain the corresponding processed first video file; and
the merging unit 430 is configured to merge each first video file segment into a target video file to be played.
In one embodiment of the invention, as shown in Fig. 5, the cutting unit 410 includes a receiving subunit 411 and an obtaining subunit 412, where:
the receiving subunit 411 is configured to receive a cutting instruction containing cut-point times, where the cut-point times include a start time and an end time; and
Subelement 412 is obtained, for according to video time stamp corresponding with video flowing in institute's source video file, and and sound
Frequency flows corresponding audio time stamp, obtains video data frame corresponding with the point of contact time and audio data frame, obtains the first video
File.
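The cutting performed by the obtaining subunit can be sketched as a timestamp-range selection over the two streams. This is an illustrative Python sketch, not the patent's implementation; it assumes each stream is a list of (timestamp, payload) pairs and that the cut-point interval is inclusive on both ends:

```python
def cut_first_video_file(video_frames, audio_frames, start_time, end_time):
    """Keep the video and audio data frames whose timestamps fall within
    the cut-point interval [start_time, end_time], yielding one
    'first video file' as a dict of per-stream frame lists."""
    def in_range(ts):
        return start_time <= ts <= end_time
    return {
        "video": [(ts, p) for ts, p in video_frames if in_range(ts)],
        "audio": [(ts, p) for ts, p in audio_frames if in_range(ts)],
    }
```

In practice the selection would also need to start at a keyframe (or re-encode), which the patent text does not address here.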
In one embodiment of the invention, as shown in Figure 6, the processing unit 420 includes: a checking subunit 421, a first fill-in subunit 422 and a deleting subunit 423, wherein:
The checking subunit 421 is configured to check the start position of the next first video file adjacent to the current first video file for audio data frames.
The first fill-in subunit 422 is configured to, when the start position of the next first video file contains extra audio data frames and no video data frames, fill those extra audio data frames into the end position of the current first video file.
The deleting subunit 423 is configured to delete the extra audio data frames from the next first video file.
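The combined behavior of subunits 421-423 (check, fill in, delete) can be sketched as a single move operation. A minimal Python sketch, assuming segments are lists of (kind, timestamp) tuples and that "extra" means audio frames timestamped before the next segment's first video frame:

```python
def fill_in_from_next(current, next_seg):
    """If the next segment begins with audio frames that precede its first
    video frame, move those extra audio frames to the end of the current
    segment and delete them from the next segment."""
    # timestamp of the first video frame in the next segment, if any
    first_video_ts = min((ts for kind, ts in next_seg if kind == "video"),
                         default=None)
    if first_video_ts is None:
        return current, next_seg  # nothing to anchor the check against
    extra = [(k, ts) for k, ts in next_seg
             if k == "audio" and ts < first_video_ts]
    remaining = [f for f in next_seg if f not in extra]
    return current + extra, remaining
```

The linear scan and list rebuild are fine for a sketch; a production demuxer would operate on packet queues instead.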
In one embodiment of the invention, as shown in Figure 7, the processing unit 420 may include: a determining subunit 424 and a second fill-in subunit 425, wherein:
The determining subunit 424 is configured to determine the time difference between the audio stream and the video stream of the first video file.
The second fill-in subunit 425 is configured to fill in audio data frames corresponding to the time difference.
In one embodiment of the invention, the determining subunit 424 is specifically configured to: compare the video timestamp of the first frame of the video stream in the first video file with the audio timestamp of the first frame of the audio stream to obtain a first time difference; compare the video timestamp of the last frame of the video stream in the first video file with the audio timestamp of the last frame of the audio stream to obtain a second time difference; and obtain, from the first time difference and the second time difference, the time difference between the audio stream and the video stream.
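The determining subunit's first-frame/last-frame comparison can be sketched directly. Note the patent does not say how the two differences are combined; taking their average is one plausible reading, labeled as an assumption here:

```python
def audio_video_time_difference(video_ts, audio_ts):
    """Given the ordered timestamp lists of the video and audio streams,
    compute the first time difference (first frames), the second time
    difference (last frames), and an overall audio-video time difference
    (assumed here to be their average; the patent leaves this unspecified)."""
    first_diff = video_ts[0] - audio_ts[0]
    second_diff = video_ts[-1] - audio_ts[-1]
    overall = (first_diff + second_diff) / 2
    return first_diff, second_diff, overall
```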
As can be seen, the video clipping device of the embodiment of the present invention can adjust the first video files obtained after cutting according to their audio data frames and merge the adjusted first video files, so that when the merged target video file is played there is no time difference between the audio and the video, improving the playback quality of the clipped video file.
It should be understood by those skilled in the art that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, which realizes the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are executed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.
Claims (8)
1. A method of video clipping, characterized in that it comprises:
cutting a source video file to obtain at least two segments of first video files to be merged;
checking the end position of each segment of the first video file for audio data frames and, when the end position of the first video file lacks audio data frames, performing audio fill-in processing on the first video file to obtain the corresponding processed first video file;
merging each segment of the first video file into a target video file to be played;
wherein performing audio fill-in processing on the first video file when the end position of the first video file lacks audio data frames comprises:
checking the start position of the next first video file adjacent to the current first video file for audio data frames;
when the start position of the next first video file contains extra audio data frames and no video data frames, filling the extra audio data frames into the end position of the current first video file;
deleting the extra audio data frames from the next first video file.
2. The method according to claim 1, characterized in that cutting the source video file to obtain at least two segments of first video files to be merged comprises:
receiving a cutting instruction containing cut-point times, where the cut-point times include a start time and an end time;
obtaining, according to the video timestamps corresponding to the video stream in the source video file and the audio timestamps corresponding to the audio stream, the video data frames and audio data frames corresponding to the cut-point times, thereby obtaining the first video file.
3. The method according to claim 2, characterized in that, when the end position of the first video file lacks audio data frames, performing audio fill-in processing on the first video file comprises:
determining the time difference between the audio stream and the video stream of the first video file;
filling in audio data frames corresponding to the time difference.
4. The method according to claim 3, characterized in that determining the time difference between the audio stream and the video stream of the first video file comprises:
comparing the video timestamp of the first frame of the video stream in the first video file with the audio timestamp of the first frame of the audio stream to obtain a first time difference;
comparing the video timestamp of the last frame of the video stream in the first video file with the audio timestamp of the last frame of the audio stream to obtain a second time difference;
obtaining, from the first time difference and the second time difference, the time difference between the audio stream and the video stream.
5. A device for video clipping, characterized in that it comprises:
a cutting unit, configured to cut a source video file to obtain at least two segments of first video files to be merged;
a processing unit, configured to check the end position of each segment of the first video file for audio data frames and, when the end position of the first video file lacks audio data frames, perform audio fill-in processing on the first video file to obtain the corresponding processed first video file;
a merging unit, configured to merge each segment of the first video file into a target video file to be played;
wherein the processing unit comprises:
a checking subunit, configured to check the start position of the next first video file adjacent to the current first video file for audio data frames;
a first fill-in subunit, configured to, when the start position of the next first video file contains extra audio data frames and no video data frames, fill the extra audio data frames into the end position of the current first video file;
a deleting subunit, configured to delete the extra audio data frames from the next first video file.
6. The device according to claim 5, characterized in that the cutting unit comprises:
a receiving subunit, configured to receive a cutting instruction containing cut-point times, where the cut-point times include a start time and an end time;
an obtaining subunit, configured to obtain, according to the video timestamps corresponding to the video stream in the source video file and the audio timestamps corresponding to the audio stream, the video data frames and audio data frames corresponding to the cut-point times, thereby obtaining the first video file.
7. The device according to claim 6, characterized in that the processing unit comprises:
a determining subunit, configured to determine the time difference between the audio stream and the video stream of the first video file;
a second fill-in subunit, configured to fill in audio data frames corresponding to the time difference.
8. The device according to claim 7, characterized in that:
the determining subunit is specifically configured to: compare the video timestamp of the first frame of the video stream in the first video file with the audio timestamp of the first frame of the audio stream to obtain a first time difference; compare the video timestamp of the last frame of the video stream in the first video file with the audio timestamp of the last frame of the audio stream to obtain a second time difference; and obtain, from the first time difference and the second time difference, the time difference between the audio stream and the video stream.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510968558.3A CN105611401B (en) | 2015-12-18 | 2015-12-18 | A kind of method and apparatus of video clipping |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510968558.3A CN105611401B (en) | 2015-12-18 | 2015-12-18 | A kind of method and apparatus of video clipping |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105611401A CN105611401A (en) | 2016-05-25 |
CN105611401B true CN105611401B (en) | 2018-08-24 |
Family
ID=55990886
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510968558.3A Expired - Fee Related CN105611401B (en) | 2015-12-18 | 2015-12-18 | A kind of method and apparatus of video clipping |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105611401B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108616768B (en) * | 2018-05-02 | 2021-10-15 | 腾讯科技(上海)有限公司 | Synchronous playing method and device of multimedia resources, storage medium and electronic device |
CN111601162B (en) * | 2020-06-08 | 2022-08-02 | 北京世纪好未来教育科技有限公司 | Video segmentation method and device and computer storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101771869A (en) * | 2008-12-30 | 2010-07-07 | 深圳市万兴软件有限公司 | AV (audio/video) encoding and decoding device and method |
CN102316358A (en) * | 2011-09-02 | 2012-01-11 | 惠州Tcl移动通信有限公司 | Method for recording streaming media file and corresponding equipment |
CN103096184A (en) * | 2013-01-18 | 2013-05-08 | 深圳市龙视传媒有限公司 | Method and device for video editing |
CN103167342A (en) * | 2013-03-29 | 2013-06-19 | 天脉聚源(北京)传媒科技有限公司 | Audio and video synchronous processing device and method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
BR112012010772A2 (en) * | 2009-11-06 | 2020-09-08 | Telefonaktiebolaget Lm Ericsson (Publ) | method and device for providing streaming media content, media content rendering method, and user terminal |
- 2015-12-18: CN CN201510968558.3A patent/CN105611401B/en not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101771869A (en) * | 2008-12-30 | 2010-07-07 | 深圳市万兴软件有限公司 | AV (audio/video) encoding and decoding device and method |
CN102316358A (en) * | 2011-09-02 | 2012-01-11 | 惠州Tcl移动通信有限公司 | Method for recording streaming media file and corresponding equipment |
CN103096184A (en) * | 2013-01-18 | 2013-05-08 | 深圳市龙视传媒有限公司 | Method and device for video editing |
CN103167342A (en) * | 2013-03-29 | 2013-06-19 | 天脉聚源(北京)传媒科技有限公司 | Audio and video synchronous processing device and method |
Also Published As
Publication number | Publication date |
---|---|
CN105611401A (en) | 2016-05-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3127109B1 (en) | Efficient coding of audio scenes comprising audio objects | |
EP3996382A1 (en) | Gapless video looping | |
US9111519B1 (en) | System and method for generating cuepoints for mixing song data | |
CN110415723B (en) | Method, device, server and computer readable storage medium for audio segmentation | |
US20180226101A1 (en) | Methods and systems for interactive multimedia creation | |
CN105898500A (en) | Network video play method and device | |
CN105898376A (en) | Online video stream play method, device and system | |
CN104185088B (en) | A kind of method for processing video frequency and device | |
US20130232233A1 (en) | Systems and methods for client-side media chunking | |
KR20140145584A (en) | Method and system of playing online video at a speed variable in real time | |
CN104410930A (en) | A method and device for controlling playing speed of transport stream TS media file | |
CN105611401B (en) | A kind of method and apparatus of video clipping | |
EP1071294B1 (en) | System, method and recording medium for audio-video synchronous playback | |
CN105530534B (en) | A kind of method and apparatus of video clipping | |
CN109587514A (en) | A kind of video broadcasting method, medium and relevant apparatus | |
CN108600814A (en) | A kind of audio video synchronization playback method and device | |
CN105262957B (en) | The treating method and apparatus of video image | |
CN105578261B (en) | A kind of method and apparatus of video clipping | |
CN110087116B (en) | Multi-rate live video stream editing method and device, terminal and storage medium | |
CN105763923A (en) | Video and video template editing methods and device thereof | |
CN105578260A (en) | Video editing method and device | |
CN106210840B (en) | A kind of text display method and equipment | |
CN104780389B (en) | A kind of method for processing video frequency and device | |
US20210243485A1 (en) | Receiving apparatus, transmission apparatus, receiving method, transmission method, and program | |
WO2020201297A1 (en) | System and method for performance-based instant assembling of video clips |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| PE01 | Entry into force of the registration of the contract for pledge of patent right | Denomination of invention: A method and device for video editing. Effective date of registration: 20210104. Granted publication date: 20180824. Pledgee: Inner Mongolia Huipu Energy Co.,Ltd. Pledgor: WUXI TVMINING MEDIA SCIENCE & TECHNOLOGY Co.,Ltd. Registration number: Y2020990001517 |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20180824 |