CN104967864B - Method and device for merging videos - Google Patents
- Publication number
- CN104967864B (grant); application CN201410503239.0A / CN201410503239A
- Authority
- CN
- China
- Prior art keywords
- video
- frame
- time
- merging
- display time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23424—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
Abstract
The invention discloses a method and device for merging videos, belonging to the field of computer technology. The method includes: obtaining a frame interval time; obtaining a time offset according to the frame interval time and the display times of the first frame and the last frame of a first video; calculating, according to the time offset and the display time and decoding time of each frame of a second video, the display time and decoding time of each frame of the second video in the merged video; and merging the first video and the second video into one video according to the calculated display times and decoding times. The device includes a first obtaining module, a second obtaining module, a calculating module, and a merging module. The present invention can merge multiple videos into one video, improving the efficiency of playing videos.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a method and device for merging videos.
Background technique
Currently, mobile terminals such as mobile phones generally have a video-shooting function, and users often use it to shoot videos. A user may shoot multiple videos over a period of time whose content or subject matter is identical or related. For example, a user travels to some place and shoots two videos during the trip; both videos are about the same trip, so the two videos have identical or related content.
After a video is shot, the mobile terminal stores it in its local storage. When the user wants to play a video, the terminal locates the video locally and plays it. Sometimes the user needs to play multiple videos with identical or related content. In that case, the user first finds one video locally and plays it; after that video finishes, the user finds the next video locally and plays it, and so on until all of the videos have been played.
In the process of implementing the present invention, the inventors found that the prior art has at least the following problem:
When playing multiple videos with identical or related content, the user must find and play each video from the mobile terminal one by one. The interval between the end of one video and the start of the next is not used, so the efficiency of playing the videos is low. For example, with two videos of identical or related content, the user must locate and play them in two separate operations; between the end of the first video and the start of the second, nothing is playing, which makes playback inefficient.
Summary of the invention
To improve the efficiency of playing videos, the present invention provides a method and device for merging videos. The technical solution is as follows:
A method for merging videos, the method comprising:
obtaining a frame interval time;
obtaining a time offset according to the frame interval time and the display times of the first frame and the last frame of a first video;
calculating, according to the time offset and the display time and decoding time of each frame of a second video, the display time and decoding time of each frame of the second video in the merged video; and
merging the first video and the second video into one video according to the calculated display times and decoding times.
A device for merging videos, the device comprising:
a first obtaining module, configured to obtain a frame interval time;
a second obtaining module, configured to obtain a time offset according to the frame interval time and the display times of the first frame and the last frame of a first video;
a calculating module, configured to calculate, according to the time offset and the display time and decoding time of each frame of a second video, the display time and decoding time of each frame of the second video in the merged video; and
a merging module, configured to merge the first video and the second video into one video according to the calculated display times and decoding times.
In the embodiments of the present invention, a frame interval time is obtained; a time offset is obtained according to the frame interval time and the display times of the first frame and the last frame of the first video; the display time and decoding time of each frame of the second video in the merged video are calculated from the time offset and each frame's original display time and decoding time; and the first video and the second video are merged into one video according to those values. Because every frame of the second video carries a consistent display time and decoding time in the merged video, the first video and the second video play back continuously as a single video. This eliminates the idle interval between finishing the first video and starting the second, and improves the efficiency of playing videos.
Brief description of the drawings
Fig. 1 is a flowchart of a method for merging videos provided by Embodiment 1 of the present invention;
Fig. 2 is a flowchart of a method for merging videos provided by Embodiment 2 of the present invention;
Fig. 3 is a schematic structural diagram of a device for merging videos provided by Embodiment 3 of the present invention;
Fig. 4 is a schematic structural diagram of a terminal provided by Embodiment 4 of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Embodiment 1
Referring to Fig. 1, an embodiment of the present invention provides a method for merging videos, comprising:
Step 101: obtain a frame interval time;
Step 102: obtain a time offset according to the frame interval time and the display times of the first frame and the last frame of a first video;
Step 103: calculate, according to the time offset and the display time and decoding time of each frame of a second video, the display time and decoding time of each frame of the second video in the merged video;
Step 104: merge the first video and the second video into one video according to the calculated display times and decoding times.
Preferably, obtaining the time offset according to the frame interval time and the display times of the first frame and the last frame of the first video comprises:
calculating the time offset by formula (1) below, according to the frame interval time and the display times of the first frame and the last frame of the first video;
T = PTS_N - PTS_1 + t_0 …… (1)
In formula (1), T is the time offset, PTS_N is the display time of the last frame, PTS_1 is the display time of the first frame, and t_0 is the frame interval time.
Preferably, calculating the display time and decoding time of each frame of the second video in the merged video, according to the time offset and the display time and decoding time of each frame of the second video, comprises:
calculating the display time of each frame of the second video in the merged video by formula (2) below, according to that frame's display time in the second video and the time offset;
PTS_i' = PTS_i + T …… (2)
In formula (2), PTS_i is the display time of the i-th frame of the second video, and PTS_i' is the display time of that frame in the merged video;
calculating the decoding time of each frame of the second video in the merged video by formula (3) below, according to that frame's decoding time in the second video and the time offset;
DTS_i' = DTS_i + T …… (3)
In formula (3), DTS_i is the decoding time of the i-th frame of the second video, and DTS_i' is the decoding time of that frame in the merged video.
Further, before obtaining the frame interval time, the method also includes:
creating a first thread, a second thread, and a third thread, and shooting video through the first thread, the second thread, and the third thread.
Preferably, shooting video through the first thread, the second thread, and the third thread comprises:
obtaining, by the first thread, the frame of video data currently shot by the camera, and inserting it at the tail of a video queue;
obtaining, by the second thread, the frame of audio data currently captured by the microphone, and inserting it at the tail of an audio queue;
obtaining, by the third thread, one frame of video data from the head of the video queue and one frame of audio data from the head of the audio queue, and combining the obtained frame of video data and frame of audio data into one frame of the video.
Preferably, obtaining the frame interval time comprises:
obtaining the time occupied by each frame of the first video, the number of video frames the first video contains, and the total duration of the first video;
calculating, from the per-frame occupied times, the total time occupied by the video frames of the first video;
calculating, from that total and the total duration of the first video, the total time occupied by the frame intervals in the first video; and
calculating the frame interval time according to the total frame-interval time and the number of video frames the first video contains.
Preferably, merging the first video and the second video into one video according to the display times and decoding times of the frames of the second video in the merged video comprises:
storing each frame of the first video in a video file;
determining the storage order of the frames of the second video according to their display times and decoding times in the merged video; and
storing the frames of the second video, in the determined order, after the last frame of the first video in the video file, thereby merging the first video and the second video into one video.
In the embodiments of the present invention, a frame interval time is obtained; a time offset is obtained according to the frame interval time and the display times of the first frame and the last frame of the first video; the display time and decoding time of each frame of the second video in the merged video are calculated from the time offset and each frame's original display time and decoding time; and the first video and the second video are merged into one video according to those values. Because every frame of the second video carries a consistent display time and decoding time in the merged video, the first video and the second video play back continuously as a single video. This eliminates the idle interval between finishing the first video and starting the second, and improves the efficiency of playing videos.
Embodiment 2
An embodiment of the present invention provides a method for merging videos.
Currently, mobile terminals such as mobile phones generally have a video-shooting function, and users often use it to shoot videos. A user may shoot multiple videos over a period of time whose content is identical or related. When playing videos, the user sometimes needs to play multiple such videos; the terminal can then merge these videos into one video using the method provided by this embodiment of the present invention.
Referring to Fig. 2, the method specifically includes:
Step 201: create a first thread, a second thread, and a third thread, and shoot the first video and the second video through the first thread, the second thread, and the third thread;
The terminal is equipped with a camera and a microphone: video data can be shot through the camera, and audio data can be captured through the microphone. To capture audio data while shooting video data, multiple threads can be created through multithreading, so that one thread obtains the video data shot by the camera while, at the same time, another thread obtains the audio data captured by the microphone.
Specifically, the first thread, the second thread, and the third thread are created through multithreading. The first thread obtains the frame of video data currently shot by the camera and inserts it at the tail of a video queue. The second thread obtains the frame of audio data currently captured by the microphone and inserts it at the tail of an audio queue. The third thread obtains one frame of video data from the head of the video queue and one frame of audio data from the head of the audio queue, and combines the obtained frame of video data and frame of audio data into one frame of the first video. Every other frame of the first video can be obtained by the same operations, and every frame of the second video can likewise be obtained by the method above.
The video queue and the audio queue are first-in-first-out (FIFO) queues. Because the third thread processes much more slowly than the first thread and the second thread, the video data obtained by the first thread must wait at the tail of the video queue, and the audio data obtained by the second thread must wait at the tail of the audio queue.
For example, a first thread t1, a second thread t2, and a third thread t3 are created through multithreading. Thread t1 obtains the frame of video data v3 currently shot by the camera and inserts it at the tail of the video queue shown as queue 1. Thread t2 obtains the frame of audio data r3 currently captured by the microphone and inserts it at the tail of the audio queue shown as queue 2. Thread t3 obtains the frame of video data v1 from the head of queue 1 and the frame of audio data r1 from the head of queue 2, and combines v1 and r1 into one frame of the first video video1. Every other frame of video1 can be obtained by the same operations, and every frame of the second video video2 can likewise be obtained by the method above.
Queue 1: v3 | v2 | v1
Queue 2: r3 | r2 | r1
By shooting video through multiple threads, audio data can be captured through the microphone at the same time as video data is shot through the camera, and the frame of video data and the frame of audio data obtained at the same moment are combined into one frame of the video. This shortens the time needed to obtain the video and improves the efficiency of obtaining it.
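The three-thread capture pipeline described above can be sketched with Python's thread-safe FIFO queues. This is a minimal illustration, assuming camera and microphone frames are represented by plain strings; the function and variable names are illustrative, not from the patent:

```python
import queue
import threading

video_queue = queue.Queue()  # FIFO queue fed by the "camera" thread
audio_queue = queue.Queue()  # FIFO queue fed by the "microphone" thread
merged_frames = []

def video_thread(n):
    # First thread: push each camera frame onto the tail of the video queue.
    for i in range(n):
        video_queue.put(f"v{i}")

def audio_thread(n):
    # Second thread: push each microphone frame onto the tail of the audio queue.
    for i in range(n):
        audio_queue.put(f"r{i}")

def mux_thread(n):
    # Third thread: pop one video frame and one audio frame from the queue
    # heads (blocking until each is available) and combine them into one
    # frame of the output video.
    for _ in range(n):
        v = video_queue.get()
        r = audio_queue.get()
        merged_frames.append((v, r))

threads = [threading.Thread(target=video_thread, args=(3,)),
           threading.Thread(target=audio_thread, args=(3,)),
           threading.Thread(target=mux_thread, args=(3,))]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(merged_frames)  # [('v0', 'r0'), ('v1', 'r1'), ('v2', 'r2')]
```

Because each queue has a single producer and `queue.Queue` is FIFO, the pairing of video and audio frames is deterministic even though the three threads run concurrently.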
Further, the first video and the second video need not be videos shot by the terminal. They may be videos already stored on the terminal or videos the terminal obtains from a video server. However, the first video and the second video must have the same video format, such as AVI (Audio Video Interleaved), WMV (Windows Media Video), or DV (Digital Video).
The function av_read_frame() is an audio/video frame-reading function used to read the data of each frame a video contains. After the first video and the second video are obtained by the operations above, av_read_frame() can be called to read the data of every frame of the first video and of the second video, respectively.
After the first video and the second video are obtained through the operation of step 201, they are merged into one video through the operations of steps 202-205 below.
Step 202: obtain the frame interval time;
The frame interval time is the interval between two adjacent frames of a video.
Specifically, obtain the time occupied by each frame of the first video, the number of video frames the first video contains, and the total duration of the first video. Sum the per-frame occupied times to obtain the total time occupied by all video frames of the first video. Subtract that total from the total duration of the first video to obtain the total frame-interval time of the first video. Determine the number of frame intervals in the first video from the number of video frames it contains, and divide the total frame-interval time by the number of frame intervals to obtain the frame interval time.
The data of each frame includes the time occupied by that frame, and in general every frame of a video occupies an equal time; the time occupied by each frame can thus be obtained from the data of each frame of the first video.
For example, assume the total duration of the first video video1 is 1 s, that it contains 20 frames, and that every frame occupies an equal time of 0.04 s. The per-frame occupied time 0.04 s, the frame count 20, and the total duration 1 s of video1 are obtained. The sum of the per-frame occupied times is 20 × 0.04 s = 0.8 s, so all video frames of video1 occupy 0.8 s. The difference between the total duration 1 s and the 0.8 s occupied by the video frames gives a total frame-interval time of 0.2 s. From the 20 video frames, video1 contains 19 frame intervals. Dividing the total frame-interval time 0.2 s by the 19 intervals gives a frame interval time of approximately 0.01 s.
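The frame-interval calculation of step 202 reduces to simple arithmetic over the values in the example; a minimal sketch (the function name is illustrative):

```python
# Sketch of step 202: derive the frame interval time from the first video's
# per-frame times and total duration (values follow the worked example).

def frame_interval_time(frame_times, total_duration):
    """Average gap between adjacent frames.

    frame_times: time occupied by each frame, in seconds.
    total_duration: total duration of the video, in seconds.
    """
    frames_total = sum(frame_times)             # time occupied by all frames
    gaps_total = total_duration - frames_total  # total frame-interval time
    num_gaps = len(frame_times) - 1             # 20 frames -> 19 intervals
    return gaps_total / num_gaps

t0 = frame_interval_time([0.04] * 20, 1.0)
print(round(t0, 4))  # 0.0105, i.e. about 0.01 s as in the example
```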
Further, a preset frame interval time may also be configured in advance.
Step 203: obtain the time offset according to the frame interval time and the display times of the first frame and the last frame of the first video;
The data of each frame includes that frame's display time; when the video is played, each frame is played at its display time. The display times of the first frame and the last frame are available when the data of each frame of the first video is obtained.
Specifically, the time offset is calculated by formula (1) below, according to the frame interval time and the display times of the first frame and the last frame of the first video;
T = PTS_N - PTS_1 + t_0 …… (1)
In formula (1), T is the time offset, PTS_N is the display time of the last frame, PTS_1 is the display time of the first frame, and t_0 is the frame interval time.
For example, assume the display time of the first frame of the first video video1 is 0:00 and the display time of the last frame is 0:01. According to the frame interval time 0.01 s and the display times 0:00 and 0:01, the time offset is calculated as 1.01 s, as shown in (2) below.
T = PTS_N - PTS_1 + t_0 = 0:01 - 0:00 + 0.01 = 1.01 …… (2)
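Formula (1) likewise reduces to one line of arithmetic. A minimal sketch, with the example's 0:00 and 0:01 display times expressed as 0.0 s and 1.0 s (the function name is illustrative):

```python
# Sketch of formula (1): T = PTS_N - PTS_1 + t0, all times in seconds.

def time_offset(pts_first, pts_last, frame_interval):
    """Offset to add to every timestamp of the second video."""
    return pts_last - pts_first + frame_interval

T = time_offset(0.0, 1.0, 0.01)
print(T)  # 1.01
```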
After the time offset is obtained through steps 202 and 203, the display time and decoding time of each frame of the second video in the merged video are calculated through the operation of step 204 below.
Step 204: calculate, according to the time offset and the display time and decoding time of each frame of the second video, the display time and decoding time of each frame of the second video in the merged video;
The data of each frame includes that frame's display time and decoding time. When the video is played, the data of each frame is decoded at that frame's decoding time, and the frame is then played at its display time. The display time and decoding time of each frame of the second video are available when the data of each frame of the second video is obtained.
Specifically, the display time of each frame of the second video in the merged video is calculated by formula (3) below, according to that frame's display time in the second video and the time offset.
PTS_i' = PTS_i + T …… (3)
In formula (3), PTS_i is the display time of the i-th frame of the second video, and PTS_i' is the display time of that frame in the merged video.
The decoding time of each frame of the second video in the merged video is calculated by formula (4) below, according to that frame's decoding time in the second video and the time offset.
DTS_i' = DTS_i + T …… (4)
In formula (4), DTS_i is the decoding time of the i-th frame of the second video, and DTS_i' is the decoding time of that frame in the merged video.
For example, assume the second video video2 contains 3 frames: the first frame has display time 0:00 and decoding time 0:00, the second frame has display time 0:01 and decoding time 0:00, and the third frame has display time 0:02 and decoding time 0:01. According to the display time 0:00 of the first frame and the time offset 1.01, the display time of the first frame in the merged video is calculated as 1:01, as shown in (5) below. By the same method, the display time of the second frame in the merged video is 1:02 and that of the third frame is 1:03.
PTS_1' = PTS_1 + T = 0:00 + 1.01 = 1:01 …… (5)
According to the decoding time 0:00 of the first frame and the time offset 1.01, the decoding time of the first frame in the merged video is calculated as 1:01, as shown in (6) below. By the same method, the decoding time of the second frame in the merged video is 1:01 and that of the third frame is 1:02.
DTS_1' = DTS_1 + T = 0:00 + 1.01 = 1:01 …… (6)
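Formulas (3) and (4) apply the same offset to every PTS and DTS of the second video; a minimal sketch over the three-frame example, with frames represented as (pts, dts) pairs in seconds (names are illustrative):

```python
# Sketch of formulas (3) and (4): shift every PTS/DTS of the second video
# by the offset T. Values mirror the three-frame example above; results are
# rounded to avoid floating-point noise in the printed output.

def shift_timestamps(frames, offset):
    """Return the frames with display and decoding times shifted by offset."""
    return [(pts + offset, dts + offset) for pts, dts in frames]

second_video = [(0.00, 0.00), (0.01, 0.00), (0.02, 0.01)]
shifted = [(round(p, 2), round(d, 2))
           for p, d in shift_timestamps(second_video, 1.01)]
print(shifted)  # [(1.01, 1.01), (1.02, 1.01), (1.03, 1.02)]
```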
After the display times and decoding times of the frames of the second video in the merged video are calculated through the operation of step 204, the first video and the second video are merged through the operation of step 205 below.
Step 205: merge the first video and the second video into one video according to the display times and decoding times of the frames of the second video in the merged video.
Specifically, every frame of the first video is stored in a video file; the storage order of the frames of the second video is then determined according to their display times and decoding times in the merged video; and the frames of the second video are stored, in the determined order, after the last frame of the first video in the video file, thereby merging the first video and the second video into one video.
The storage order may be the chronological order of the frames' display times and decoding times in the merged video. Storing the first video and the second video directly in a single video file improves the efficiency of merging the videos.
For example, every frame of the first video video1 is stored in a video file 1. Then, according to the display time 1:01 and decoding time 1:01 of the first frame of the second video video2 in the merged video, the display time 1:02 and decoding time 1:01 of the second frame in the merged video, and the display time 1:03 and decoding time 1:02 of the third frame in the merged video, the storage order of the frames included in the second video video2 is determined to be: first frame, second frame, third frame. After the last frame of the first video video1 in the video file 1, every frame of the second video video2 is stored in the video file 1 in that order, so that the first video video1 and the second video video2 are merged into one video.
After the first video and the second video have been merged into one video, the merged video plays continuously, with no idle time during playback; compared with playing the first video and then playing the second video separately, this improves playback efficiency.

If a further video still needs to be merged, the merged video obtained above is taken as the first video and the video to be merged is taken as the second video, and the two are again merged into one video according to the method provided in this embodiment of the present invention.
In this embodiment of the present invention, when video data is captured by the camera and audio data is captured by the microphone, the processing is done by Java program code. However, data ultimately has to be passed down to the bottom layer and translated into hardware signals, and when Java program code passes data to the bottom layer it must go through many layers of translation, which is very inefficient. Therefore, in this embodiment, after video data is captured by the camera and audio data is captured by the microphone, and the first video and the second video have been obtained, the first video and the second video are passed to JNI (Java Native Interface). At the JNI layer, the first video and the second video are merged into one video by object-oriented C++ program code; when the data is then passed to the bottom layer and translated into hardware signals, the translation from Java program code to object-oriented C++ program code is saved, which improves processing efficiency.
In this embodiment of the present invention, a frame interval time is obtained; a time offset is obtained according to the frame interval time and the display times of the first frame and the last frame included in the first video; the display time and decoding time, in the merged video, of every frame included in the second video are calculated according to the time offset and the display time and decoding time of every frame included in the second video; and the first video and the second video are merged into one video according to those display times and decoding times. Because the merged display and decoding times are calculated from the time offset and the original display and decoding times of every frame included in the second video, the first video and the second video can be merged into one video according to those times, so that when the video is played the two videos play continuously as a single video; this avoids the waiting time that exists when the first video is played first and the second video afterwards, and improves the efficiency of playing video.
Embodiment 3
Referring to Fig. 3, an embodiment of the present invention provides a device for merging videos, comprising:

a first obtaining module 301, configured to obtain a frame interval time;

a second obtaining module 302, configured to obtain a time offset according to the frame interval time and the display times of the first frame and the last frame included in the first video;

a computing module 303, configured to calculate the display time and decoding time, in the merged video, of every frame included in the second video according to the time offset and the display time and decoding time of every frame included in the second video; and

a merging module 304, configured to merge the first video and the second video into one video according to the display time and decoding time, in the merged video, of every frame included in the second video.
The second obtaining module 302 is configured to calculate the time offset by formula (1) below, according to the frame interval time and the display times of the first frame and the last frame included in the first video:

T = PTS_N − PTS_1 + t0 …… (1)

In formula (1), T is the time offset, PTS_N is the display time of the last frame, PTS_1 is the display time of the first frame, and t0 is the frame interval time.
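Formula (1) can be sketched as follows. This is an illustrative sketch under hypothetical values, not the patent's implementation; timestamps are plain seconds.

```python
# Illustrative sketch: the time offset T of formula (1), computed from the
# first video's first and last display times and the frame interval time.
# Values are hypothetical, in seconds; time_offset is a hypothetical name.

def time_offset(pts_first, pts_last, t0):
    """T = PTS_N - PTS_1 + t0: the shift later added to the second video."""
    return pts_last - pts_first + t0

# First frame displayed at 0 s, last frame at 60 s, 1 s frame interval:
T = time_offset(0, 60, 1)
# T == 61, i.e. the offset 1:01 used in the worked example above
```

Adding t0 on top of the first-to-last span leaves one frame interval of gap between the last frame of the first video and the first frame of the second, matching the spacing inside each video.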
The computing module 303 includes:

a first computing unit, configured to calculate the display time, in the merged video, of every frame included in the second video by formula (2) below, according to the display time of every frame included in the second video and the time offset:

PTS_i' = PTS_i + T …… (2)

In formula (2), PTS_i is the display time of the i-th frame included in the second video, and PTS_i' is the display time of that frame in the merged video; and

a second computing unit, configured to calculate the decoding time, in the merged video, of every frame included in the second video by formula (3) below, according to the decoding time of every frame included in the second video and the time offset:

DTS_i' = DTS_i + T …… (3)

In formula (3), DTS_i is the decoding time of the i-th frame included in the second video, and DTS_i' is the decoding time of that frame in the merged video.
Further, the device also includes:

a shooting module, configured to create a first thread, a second thread, and a third thread, and to shoot video through the first thread, the second thread, and the third thread.

The shooting module includes:

a first acquisition unit, configured to obtain, through the first thread, a frame of video data currently shot by the camera and insert it at the tail of a video queue;

a second acquisition unit, configured to obtain, through the second thread, a frame of audio data currently collected by the microphone and insert it at the tail of an audio queue; and

a combining unit, configured to obtain, through the third thread, a frame of video data from the head of the video queue and a frame of audio data from the head of the audio queue, and to combine the obtained frame of video data and frame of audio data into one frame of the video.
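The three-thread arrangement above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: strings stand in for real camera/microphone frame data, and the function names are hypothetical.

```python
# Illustrative sketch: three threads cooperating through a video queue and an
# audio queue, as the shooting module describes. Strings stand in for frame
# data captured by real camera/microphone APIs.

import queue
import threading

video_q = queue.Queue()
audio_q = queue.Queue()
frames = []

def video_thread(n):
    for i in range(n):
        video_q.put(f"v{i}")   # frame shot by the camera -> tail of video queue

def audio_thread(n):
    for i in range(n):
        audio_q.put(f"a{i}")   # frame collected by the microphone -> tail of audio queue

def combine_thread(n):
    for _ in range(n):
        v = video_q.get()      # frame of video data from the head of the video queue
        a = audio_q.get()      # frame of audio data from the head of the audio queue
        frames.append((v, a))  # combined into one frame of the video

threads = [threading.Thread(target=t, args=(3,))
           for t in (video_thread, audio_thread, combine_thread)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# frames == [("v0", "a0"), ("v1", "a1"), ("v2", "a2")]
```

Because `Queue.get()` blocks until an item is available, the combining thread naturally waits for capture to keep up, and the FIFO queues preserve the capture order.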
The first obtaining module 301 includes:

a third acquiring unit, configured to obtain the time occupied by each frame in the first video, the number of video frames included in the first video, and the total duration of the first video;

a third computing unit, configured to calculate the total time occupied by the video frames in the first video according to the time occupied by each frame in the first video;

a fourth computing unit, configured to calculate the total time occupied by the frame intervals included in the first video according to the total time occupied by the video frames in the first video and the total duration of the first video; and

a fifth computing unit, configured to calculate the frame interval time according to the total time occupied by the frame intervals and the number of video frames included in the first video.
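The chain of units above can be sketched as follows. This is an illustrative sketch under assumptions the text does not fully pin down: in particular, it divides the total gap time by the number of intervals (n − 1 for n frames), whereas the text only says "according to the number of video frames"; values are hypothetical, in milliseconds.

```python
# Illustrative sketch: deriving the frame interval time from per-frame
# durations and the total duration, following units 3-5 of module 301.
# Assumption: n frames have n - 1 intervals between them; the source text
# does not state the divisor explicitly. Values are in milliseconds.

def frame_interval_time(frame_durations, total_duration):
    """Total duration minus the time the frames occupy, spread over gaps."""
    n = len(frame_durations)
    frames_total = sum(frame_durations)          # total time occupied by frames
    gaps_total = total_duration - frames_total   # total time occupied by intervals
    return gaps_total / (n - 1)

# 4 frames of 20 ms each in a 380 ms video -> 3 gaps of 100 ms
interval = frame_interval_time([20] * 4, 380)
# interval == 100
```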
The merging module 304 includes:

a first storage unit, configured to store every frame included in the first video in a video file;

a determination unit, configured to determine the storage order of the frames included in the second video according to their display times and decoding times in the merged video; and

a second storage unit, configured to store every frame included in the second video after the last frame of the first video in the video file, in the determined storage order, so that the first video and the second video are merged into one video.
In this embodiment of the present invention, a frame interval time is obtained; a time offset is obtained according to the frame interval time and the display times of the first frame and the last frame included in the first video; the display time and decoding time, in the merged video, of every frame included in the second video are calculated according to the time offset and the display time and decoding time of every frame included in the second video; and the first video and the second video are merged into one video according to those display times and decoding times. Because the merged display and decoding times are calculated from the time offset and the original display and decoding times of every frame included in the second video, the first video and the second video can be merged into one video according to those times, so that when the video is played the two videos play continuously as a single video; this avoids the waiting time that exists when the first video is played first and the second video afterwards, and improves the efficiency of playing video.
Embodiment 4
Referring to FIG. 4, which shows a schematic structural diagram of a terminal with a touch-sensitive surface involved in this embodiment of the present invention, the terminal can be used to implement the method for merging videos provided in the above embodiments. Specifically:

the terminal 900 may include an RF (Radio Frequency) circuit 110, a memory 120 including one or more computer-readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a WiFi (wireless fidelity) module 170, a processor 180 including one or more processing cores, a power supply 190, and other components. Those skilled in the art will understand that the terminal structure shown in Fig. 4 does not constitute a limitation on the terminal, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently. Wherein:
The RF circuit 110 can be used to receive and send signals during information transmission and reception or during a call; in particular, after receiving downlink information from a base station, it hands the information to one or more processors 180 for processing, and it sends uplink data to the base station. In general, the RF circuit 110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and so on. In addition, the RF circuit 110 can also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), and so on.
The memory 120 can be used to store software programs and modules; the processor 180 executes various functional applications and data processing by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, where the program storage area can store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area can store data created according to the use of the terminal 900 (such as audio data, a phone book, and the like). In addition, the memory 120 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device. Correspondingly, the memory 120 may also include a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.
The input unit 130 can be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. Specifically, the input unit 130 may include a touch-sensitive surface 131 and other input devices 132. The touch-sensitive surface 131, also referred to as a touch display screen or a touch pad, collects touch operations by the user on or near it (such as operations performed by the user on or near the touch-sensitive surface 131 with a finger, a stylus, or any other suitable object or accessory) and drives the corresponding connected device according to a preset program. Optionally, the touch-sensitive surface 131 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 180, and can receive and execute commands sent by the processor 180. Furthermore, the touch-sensitive surface 131 may be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch-sensitive surface 131, the input unit 130 may also include other input devices 132, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and so on.
The display unit 140 can be used to display information input by the user or provided to the user, as well as the various graphical user interfaces of the terminal 900, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 140 may include a display panel 141, which may optionally be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 131 may cover the display panel 141; when the touch-sensitive surface 131 detects a touch operation on or near it, it transmits the operation to the processor 180 to determine the type of touch event, and the processor 180 then provides corresponding visual output on the display panel 141 according to the type of touch event. Although in Fig. 4 the touch-sensitive surface 131 and the display panel 141 are two independent components that realize the input and output functions, in some embodiments the touch-sensitive surface 131 and the display panel 141 may be integrated to realize the input and output functions.
The terminal 900 may also include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 141 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 141 and/or the backlight when the terminal 900 is moved to the ear. As a kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that identify the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer posture calibration) and in vibration-recognition related functions (such as a pedometer and tapping). As for the gyroscope, barometer, hygrometer, thermometer, infrared sensor, and other sensors that the terminal 900 may also be configured with, details are not repeated here.
The audio circuit 160, a loudspeaker 161, and a microphone 162 can provide an audio interface between the user and the terminal 900. The audio circuit 160 can transmit the electrical signal converted from the received audio data to the loudspeaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 converts the collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data, and after the audio data is output to the processor 180 for processing, it is sent, for example, to another terminal through the RF circuit 110, or output to the memory 120 for further processing. The audio circuit 160 may also include an earphone jack to provide communication between a peripheral earphone and the terminal 900.
WiFi is a short-range wireless transmission technology. Through the WiFi module 170, the terminal 900 can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although Fig. 4 shows the WiFi module 170, it can be understood that it is not a necessary component of the terminal 900 and can be omitted as needed without changing the essence of the invention.
The processor 180 is the control center of the terminal 900. It connects all parts of the whole mobile phone through various interfaces and lines, and executes the various functions of the terminal 900 and processes data by running or executing the software programs and/or modules stored in the memory 120 and calling the data stored in the memory 120, thereby monitoring the mobile phone as a whole. Optionally, the processor 180 may include one or more processing cores; preferably, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 180.
The terminal 900 also includes a power supply 190 (such as a battery) that supplies power to all the components. Preferably, the power supply can be logically connected to the processor 180 through a power management system, so that functions such as charging, discharging, and power consumption management are realized through the power management system. The power supply 190 may also include any components such as one or more direct-current or alternating-current power sources, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
Although not shown, the terminal 900 may also include a camera, a Bluetooth module, and the like, which are not described in detail here. Specifically, in this embodiment, the display unit of the terminal 900 is a touch-screen display, and the terminal 900 also includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, and the one or more programs include instructions for performing the following operations:
obtaining a frame interval time;

obtaining a time offset according to the frame interval time and the display times of the first frame and the last frame included in the first video;

calculating the display time and decoding time, in the merged video, of every frame included in the second video according to the time offset and the display time and decoding time of every frame included in the second video; and

merging the first video and the second video into one video according to the display time and decoding time, in the merged video, of every frame included in the second video.
Preferably, obtaining the time offset according to the frame interval time and the display times of the first frame and the last frame included in the first video includes:

calculating the time offset by formula (1) below, according to the frame interval time and the display times of the first frame and the last frame included in the first video;

T = PTS_N − PTS_1 + t0 …… (1)

In formula (1), T is the time offset, PTS_N is the display time of the last frame, PTS_1 is the display time of the first frame, and t0 is the frame interval time.
Preferably, calculating the display time and decoding time, in the merged video, of every frame included in the second video according to the time offset and the display time and decoding time of every frame included in the second video includes:

calculating the display time, in the merged video, of every frame included in the second video by formula (2) below, according to the display time of every frame included in the second video and the time offset;

PTS_i' = PTS_i + T …… (2)

In formula (2), PTS_i is the display time of the i-th frame included in the second video, and PTS_i' is the display time of that frame in the merged video;

calculating the decoding time, in the merged video, of every frame included in the second video by formula (3) below, according to the decoding time of every frame included in the second video and the time offset;

DTS_i' = DTS_i + T …… (3)

In formula (3), DTS_i is the decoding time of the i-th frame included in the second video, and DTS_i' is the decoding time of that frame in the merged video.
Further, before the frame interval time is obtained, the operations also include:

creating a first thread, a second thread, and a third thread, and shooting video through the first thread, the second thread, and the third thread.
Preferably, shooting video through the first thread, the second thread, and the third thread includes:

obtaining, through the first thread, a frame of video data currently shot by the camera, and inserting the frame of video data at the tail of a video queue;

obtaining, through the second thread, a frame of audio data currently collected by the microphone, and inserting the frame of audio data at the tail of an audio queue; and

obtaining, through the third thread, a frame of video data from the head of the video queue and a frame of audio data from the head of the audio queue, and combining the obtained frame of video data and frame of audio data into one frame of the video.
Preferably, obtaining the frame interval time includes:

obtaining the time occupied by each frame in the first video, the number of video frames included in the first video, and the total duration of the first video;

calculating the total time occupied by the video frames in the first video according to the time occupied by each frame in the first video;

calculating the total time occupied by the frame intervals included in the first video according to the total time occupied by the video frames in the first video and the total duration of the first video; and

calculating the frame interval time according to the total time occupied by the frame intervals and the number of video frames included in the first video.
Preferably, merging the first video and the second video into one video according to the display time and decoding time, in the merged video, of every frame included in the second video includes:

storing every frame included in the first video in a video file;

determining the storage order of the frames included in the second video according to their display times and decoding times in the merged video; and

storing every frame included in the second video after the last frame of the first video in the video file, in the determined storage order, so that the first video and the second video are merged into one video.
In this embodiment of the present invention, a frame interval time is obtained; a time offset is obtained according to the frame interval time and the display times of the first frame and the last frame included in the first video; the display time and decoding time, in the merged video, of every frame included in the second video are calculated according to the time offset and the display time and decoding time of every frame included in the second video; and the first video and the second video are merged into one video according to those display times and decoding times. Because the merged display and decoding times are calculated from the time offset and the original display and decoding times of every frame included in the second video, the first video and the second video can be merged into one video according to those times, so that when the video is played the two videos play continuously as a single video; this avoids the waiting time that exists when the first video is played first and the second video afterwards, and improves the efficiency of playing video.
Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments can be implemented by hardware, or by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing is merely the preferred embodiments of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (14)
1. A method for merging videos, characterized in that the method includes:

obtaining a frame interval time, where the frame interval time refers to the interval between two adjacent frames in a first video;

obtaining a time offset according to the frame interval time and the display times of the first frame and the last frame included in the first video;

calculating the display time and decoding time, in the merged video, of every frame included in a second video according to the time offset and the display time and decoding time of every frame included in the second video; and

merging the first video and the second video into one video according to the display time and decoding time, in the merged video, of every frame included in the second video.
2. The method as described in claim 1, characterized in that obtaining the time offset according to the frame interval time and the display times of the first frame and the last frame included in the first video includes:

calculating the time offset by formula (1) below, according to the frame interval time and the display times of the first frame and the last frame included in the first video;

T = PTS_N − PTS_1 + t0 …… (1)

In formula (1), T is the time offset, PTS_N is the display time of the last frame, PTS_1 is the display time of the first frame, and t0 is the frame interval time.
3. The method as described in claim 1, characterized in that calculating the display time and decoding time, in the merged video, of every frame included in the second video according to the time offset and the display time and decoding time of every frame included in the second video includes:

calculating the display time, in the merged video, of every frame included in the second video by formula (2) below, according to the display time of every frame included in the second video and the time offset;

PTS_i' = PTS_i + T …… (2)

In formula (2), PTS_i is the display time of the i-th frame included in the second video, and PTS_i' is the display time of that frame in the merged video;

calculating the decoding time, in the merged video, of every frame included in the second video by formula (3) below, according to the decoding time of every frame included in the second video and the time offset;

DTS_i' = DTS_i + T …… (3)

In formula (3), DTS_i is the decoding time of the i-th frame included in the second video, and DTS_i' is the decoding time of that frame in the merged video.
4. The method as described in claim 1, characterized in that, before the frame interval time is obtained, the method further includes:

creating a first thread, a second thread, and a third thread, and shooting video through the first thread, the second thread, and the third thread.
5. The method as claimed in claim 4, characterized in that shooting video by means of the first thread, the second thread, and the third thread comprises:
obtaining, by the first thread, one frame of video data currently shot by the camera, and inserting the frame of video data currently shot by the camera at the tail of a video queue;
obtaining, by the second thread, one frame of audio data currently captured by the microphone, and inserting the frame of audio data currently captured by the microphone at the tail of an audio queue;
obtaining, by the third thread, one frame of video data from the head of the video queue and one frame of audio data from the head of the audio queue, and combining the obtained frame of video data and frame of audio data into one frame of the video.
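The three-thread pipeline of claims 4 and 5 can be sketched with standard thread-safe queues; the capture loops below are placeholders standing in for a real camera/microphone API, which the patent does not specify:

```python
import queue
import threading

video_q: "queue.Queue[bytes]" = queue.Queue()
audio_q: "queue.Queue[bytes]" = queue.Queue()

def capture_video(n):
    # first thread: push each camera frame at the tail of the video queue
    for i in range(n):
        video_q.put(("video-frame-%d" % i).encode())

def capture_audio(n):
    # second thread: push each microphone frame at the tail of the audio queue
    for i in range(n):
        audio_q.put(("audio-frame-%d" % i).encode())

def mux(n, out):
    # third thread: pop one frame from the head of each queue and combine them
    for _ in range(n):
        out.append((video_q.get(), audio_q.get()))

frames = []
threads = [threading.Thread(target=capture_video, args=(3,)),
           threading.Thread(target=capture_audio, args=(3,)),
           threading.Thread(target=mux, args=(3, frames))]
for t in threads:
    t.start()
for t in threads:
    t.join()
# frames now holds 3 combined (video, audio) pairs
```

Decoupling capture from muxing through FIFO queues lets the camera and microphone run at their own pace while the third thread pairs frames in arrival order.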
6. The method as described in claim 1, characterized in that acquiring the frame interval time comprises:
obtaining the time occupied by every frame of video in the first video, the number of video frames included in the first video, and the total duration of the first video;
calculating, according to the time occupied by every frame of video in the first video, the total time occupied by the video frames in the first video;
calculating, according to the total time occupied by the video frames in the first video and the total duration of the first video, the total time occupied by the frame intervals included in the first video;
calculating the frame interval time according to the total time occupied by the frame intervals and the number of video frames included in the first video.
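Following claim 6 literally, a sketch of the frame-interval calculation (the divisor is the frame count, as the claim states; a variant dividing by the number of gaps, N-1, would also be plausible but is not what the claim recites):

```python
def frame_interval_time(per_frame_times, total_duration):
    """per_frame_times: time occupied by each frame of the first video
    total_duration:  total duration of the first video, in the same units."""
    frames_total = sum(per_frame_times)         # total time occupied by video frames
    gaps_total = total_duration - frames_total  # total time occupied by frame intervals
    return gaps_total / len(per_frame_times)    # divided by the frame count, per the claim

# 4 frames of 30 units each inside a 160-unit video leave 40 units of gaps,
# giving a frame interval of 10 units
interval = frame_interval_time([30, 30, 30, 30], 160)
```
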
7. The method as described in claim 1, characterized in that merging the first video and the second video into one video according to the display time and decoding time of every frame of video included in the second video in the merged video comprises:
storing every frame of video included in the first video in a video file;
determining the storage order of every frame of video included in the second video according to the display time and decoding time of every frame of video included in the second video in the merged video;
storing every frame of video included in the second video, in the determined storage order, after the last frame of video included in the first video in the video file, so as to merge the first video and the second video into one video.
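The merge step of claim 7 amounts to appending the second video's frames, ordered by their recomputed timestamps, after the first video's frames. A sketch over in-memory frame dicts (the patent stores frames in a video file; a list stands in here, and ordering by DTS is an assumption consistent with how decoders consume frames):

```python
def merge_videos(first, second, offset):
    """first/second: lists of {"pts": ..., "dts": ...} frames.
    Shift the second video's timestamps by the offset (formulas (2)/(3)),
    derive the storage order from the new decoding times, and append the
    result after the first video's frames."""
    shifted = [{"pts": f["pts"] + offset, "dts": f["dts"] + offset} for f in second]
    shifted.sort(key=lambda f: f["dts"])  # storage order from the merged timestamps
    return first + shifted

first = [{"pts": 0, "dts": 0}, {"pts": 40, "dts": 40}]
second = [{"pts": 0, "dts": 0}, {"pts": 40, "dts": 40}]
merged = merge_videos(first, second, 80)
# merged[2]["pts"] == 80 and merged[3]["pts"] == 120
```
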
8. A device for merging videos, characterized in that the device comprises:
a first obtaining module, configured to acquire a frame interval time, the frame interval time referring to the interval between two adjacent frames of video in a first video;
a second obtaining module, configured to acquire a time offset according to the frame interval time and the display time of the first frame of video and the display time of the last frame of video included in the first video;
a computing module, configured to calculate, according to the time offset and the display time and decoding time of every frame of video included in a second video, the display time and decoding time of every frame of video included in the second video in the merged video;
a merging module, configured to merge the first video and the second video into one video according to the display time and decoding time of every frame of video included in the second video in the merged video.
9. The device as claimed in claim 8, characterized in that the second obtaining module is configured to calculate the time offset by the following formula (1) according to the frame interval time and the display time of the first frame of video and the display time of the last frame of video included in the first video:
T = PTS_N - PTS_1 + t0 ……(1)
In formula (1), T is the time offset, PTS_N is the display time of the last frame of video, PTS_1 is the display time of the first frame of video, and t0 is the frame interval time.
10. The device as claimed in claim 8, characterized in that the computing module comprises:
a first computing unit, configured to calculate, according to the display time of every frame of video included in the second video and the time offset, the display time of every frame of video included in the second video in the merged video by the following formula (2):
PTS_i' = PTS_i + T ……(2)
In formula (2), PTS_i is the display time of the i-th frame of video included in the second video, and PTS_i' is the display time of the i-th frame of video included in the second video in the merged video;
a second computing unit, configured to calculate, according to the decoding time of every frame of video included in the second video and the time offset, the decoding time of every frame of video included in the second video in the merged video by the following formula (3):
DTS_i' = DTS_i + T ……(3)
In formula (3), DTS_i is the decoding time of the i-th frame of video included in the second video, and DTS_i' is the decoding time of the i-th frame of video included in the second video in the merged video.
11. The device as claimed in claim 8, characterized in that the device further comprises:
a shooting module, configured to create a first thread, a second thread, and a third thread, and to shoot video by means of the first thread, the second thread, and the third thread.
12. The device as claimed in claim 11, characterized in that the shooting module comprises:
a first acquisition unit, configured to obtain, by the first thread, one frame of video data currently shot by the camera, and to insert the frame of video data currently shot by the camera at the tail of a video queue;
a second acquisition unit, configured to obtain, by the second thread, one frame of audio data currently captured by the microphone, and to insert the frame of audio data currently captured by the microphone at the tail of an audio queue;
a combining unit, configured to obtain, by the third thread, one frame of video data from the head of the video queue and one frame of audio data from the head of the audio queue, and to combine the obtained frame of video data and frame of audio data into one frame of the video.
13. The device as claimed in claim 8, characterized in that the first obtaining module comprises:
a third acquisition unit, configured to obtain the time occupied by every frame of video in the first video, the number of video frames included in the first video, and the total duration of the first video;
a third computing unit, configured to calculate, according to the time occupied by every frame of video in the first video, the total time occupied by the video frames in the first video;
a fourth computing unit, configured to calculate, according to the total time occupied by the video frames in the first video and the total duration of the first video, the total time occupied by the frame intervals included in the first video;
a fifth computing unit, configured to calculate the frame interval time according to the total time occupied by the frame intervals and the number of video frames included in the first video.
14. The device as claimed in claim 8, characterized in that the merging module comprises:
a first storage unit, configured to store every frame of video included in the first video in a video file;
a determination unit, configured to determine the storage order of every frame of video included in the second video according to the display time and decoding time of every frame of video included in the second video in the merged video;
a second storage unit, configured to store every frame of video included in the second video, in the determined storage order, after the last frame of video included in the first video in the video file, so as to merge the first video and the second video into one video.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410503239.0A CN104967864B (en) | 2014-09-26 | 2014-09-26 | A kind of method and device merging video |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410503239.0A CN104967864B (en) | 2014-09-26 | 2014-09-26 | A kind of method and device merging video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104967864A CN104967864A (en) | 2015-10-07 |
CN104967864B true CN104967864B (en) | 2019-01-11 |
Family
ID=54221788
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410503239.0A Active CN104967864B (en) | 2014-09-26 | 2014-09-26 | A kind of method and device merging video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104967864B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105657524A (en) * | 2016-01-13 | 2016-06-08 | 上海视云网络科技有限公司 | Seamless video switching method |
CN109429030A (en) * | 2017-08-31 | 2019-03-05 | 爱唯秀股份有限公司 | The method for rebuilding video using super-resolution algorithms |
CN110401866B (en) * | 2018-04-25 | 2022-05-20 | 广州虎牙信息科技有限公司 | Live video display method, device, terminal and storage medium |
CN108966026B (en) * | 2018-08-03 | 2021-03-30 | 广州酷狗计算机科技有限公司 | Method and device for making video file |
CN113750527B (en) * | 2021-09-10 | 2023-09-01 | 福建天晴数码有限公司 | High-accuracy frame rate control method and system thereof |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101409831A (en) * | 2008-07-10 | 2009-04-15 | 浙江师范大学 | Method for processing multimedia video object |
CN101740082A (en) * | 2009-11-30 | 2010-06-16 | 孟智平 | Method and system for clipping video based on browser |
CN102054510A (en) * | 2010-11-08 | 2011-05-11 | 武汉大学 | Video preprocessing and playing method and system |
CN102075792A (en) * | 2010-12-23 | 2011-05-25 | 华为技术有限公司 | Video file playing method and system, user equipment and server equipment |
US20120169883A1 (en) * | 2010-12-31 | 2012-07-05 | Avermedia Information, Inc. | Multi-stream video system, video monitoring device and multi-stream video transmission method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8379677B2 (en) * | 2007-04-30 | 2013-02-19 | Vixs Systems, Inc. | System for combining a plurality of video streams and method for use therewith |
- 2014-09-26: application CN201410503239.0A filed in China (CN), published as CN104967864B, legal status: Active
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104427083B (en) | The method and apparatus for adjusting volume | |
CN105554550B (en) | Video broadcasting method and device | |
CN104169856B (en) | Side menu display method, device and terminal | |
CN104519404B (en) | The player method and device of graphic interchange format file | |
CN105808060B (en) | A kind of method and apparatus of playing animation | |
CN104036536B (en) | The generation method and device of a kind of stop-motion animation | |
CN104978176B (en) | Application programming interfaces call method, device and computer readable storage medium | |
CN106488296B (en) | A kind of method and apparatus showing video barrage | |
CN104618794B (en) | The method and apparatus for playing video | |
CN106933525B (en) | A kind of method and apparatus showing image | |
CN104252341B (en) | The client device of the skin change method of application program, device and application program | |
CN104967864B (en) | A kind of method and device merging video | |
CN105094513B (en) | User's head portrait setting method, device and electronic equipment | |
CN104238893B (en) | A kind of method and apparatus that video preview picture is shown | |
CN104967865B (en) | Video previewing method and device | |
CN109271327A (en) | EMS memory management process and device | |
CN104021129B (en) | Show the method and terminal of group picture | |
CN104516624B (en) | A kind of method and device inputting account information | |
CN107203960A (en) | image rendering method and device | |
CN106504303B (en) | A kind of method and apparatus playing frame animation | |
CN104007887B (en) | The method and terminal that floating layer is shown | |
CN107943417A (en) | Image processing method, terminal, computer-readable storage medium and computer program | |
CN108124059A (en) | A kind of way of recording and mobile terminal | |
CN107396193B (en) | The method and apparatus of video playing | |
CN106210838B (en) | Caption presentation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |