CN104967864A - Video merging method and apparatus

Info

Publication number: CN104967864A
Application number: CN201410503239.0A
Authority: CN (China)
Prior art keywords: video, frame, time, display, merging
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN104967864B (en)
Inventors: 李达 (Li Da), 吴凯 (Wu Kai)
Current Assignee: Tencent Technology (Beijing) Co., Ltd.
Original Assignee: Tencent Technology (Beijing) Co., Ltd.
Application filed by Tencent Technology (Beijing) Co., Ltd.; priority to CN201410503239.0A; publication of CN104967864A; application granted and CN104967864B published.

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
    • H04N 21/23424 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip

Abstract

The invention discloses a video merging method and apparatus, belonging to the field of computer technology. The method comprises the following steps: obtaining a frame interval time; obtaining a time offset according to the frame interval time and the display times of the first frame and the last frame of a first video; calculating, according to the time offset and the display time and decoding time of each frame of a second video, the display time and decoding time of each frame of the second video in the merged video; and, according to these merged-video display times and decoding times, merging the first video and the second video into one video. The apparatus comprises a first obtaining module, a second obtaining module, a calculation module and a merging module. With the invention, multiple videos can be merged into one video, which improves the efficiency of video playback.

Description

Method and device for merging videos
Technical field
The present invention relates to the field of computer technology, and in particular to a method and device for merging videos.
Background art
Mobile terminals such as mobile phones now commonly provide a video shooting function, and users often use it. A user may shoot multiple videos within a period of time whose subject matter is identical or related. For example, a user on a trip shoots two videos during the journey; both are about that trip, so the two videos have identical or related subject matter.
After shooting a video, the mobile terminal stores it in local storage. When the user wants to play a video, he locates the video locally on the terminal and plays it. When the user wants to play multiple videos with identical or related subject matter, he must first locate one of them locally and play it, then, after it finishes, locate the next one and play it, and so on until all of the videos have been played.
In the course of realizing the present invention, the inventors found that the prior art has at least the following problem:
When multiple videos with identical or related subject matter are played, the user must locate and start each video separately, so the period between the end of one video and the start of the next is wasted, which makes video playback very inefficient. For example, with two such videos the user must locate and play them in two separate operations; between the end of the first video and the start of the second the terminal is idle and plays nothing, so playback efficiency is low.
Summary of the invention
In order to improve the efficiency of video playback, the invention provides a method and a device for merging videos. The technical solution is as follows:
A method for merging videos, the method comprising:
obtaining a frame interval time;
obtaining a time offset according to the frame interval time and the display times of the first frame and the last frame of a first video;
calculating, according to the time offset and the display time and decoding time of each frame of a second video, the display time and decoding time of each frame of the second video in the merged video; and
merging the first video and the second video into one video according to the display time and decoding time of each frame of the second video in the merged video.
A device for merging videos, the device comprising:
a first obtaining module, configured to obtain a frame interval time;
a second obtaining module, configured to obtain a time offset according to the frame interval time and the display times of the first frame and the last frame of a first video;
a calculation module, configured to calculate, according to the time offset and the display time and decoding time of each frame of a second video, the display time and decoding time of each frame of the second video in the merged video; and
a merging module, configured to merge the first video and the second video into one video according to the display time and decoding time of each frame of the second video in the merged video.
In the embodiments of the present invention, a frame interval time is obtained; a time offset is obtained according to the frame interval time and the display times of the first frame and the last frame of the first video; the display time and decoding time that each frame of the second video will have in the merged video are calculated according to the time offset and that frame's display time and decoding time; and the first video and the second video are merged into one video according to these merged-video display times and decoding times. Because the two videos become a single video, they play back continuously as one video; the waiting time that arises when the first video is played and the second video is then played separately is avoided, and the efficiency of video playback is improved.
Brief description of the drawings
Fig. 1 is a flowchart of a video merging method provided by Embodiment 1 of the present invention;
Fig. 2 is a flowchart of a video merging method provided by Embodiment 2 of the present invention;
Fig. 3 is a schematic structural diagram of a video merging apparatus provided by Embodiment 3 of the present invention;
Fig. 4 is a schematic structural diagram of a terminal provided by Embodiment 4 of the present invention.
Detailed description
To make the objectives, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described below in further detail with reference to the accompanying drawings.
Embodiment 1
Referring to Fig. 1, an embodiment of the present invention provides a method for merging videos, comprising:
Step 101: obtain a frame interval time;
Step 102: obtain a time offset according to the frame interval time and the display times of the first frame and the last frame of a first video;
Step 103: calculate, according to the time offset and the display time and decoding time of each frame of a second video, the display time and decoding time of each frame of the second video in the merged video;
Step 104: merge the first video and the second video into one video according to the display time and decoding time of each frame of the second video in the merged video.
Preferably, obtaining the time offset according to the frame interval time and the display times of the first frame and the last frame of the first video comprises:
calculating the time offset from the frame interval time and the display times of the first frame and the last frame of the first video by the following formula (1):
T = PTS_N - PTS_1 + t0 …… (1)
where T is the time offset, PTS_N is the display time of the last frame, PTS_1 is the display time of the first frame, and t0 is the frame interval time.
Preferably, calculating, according to the time offset and the display time and decoding time of each frame of the second video, the display time and decoding time of each frame of the second video in the merged video comprises:
calculating the display time of each frame of the second video in the merged video from that frame's display time and the time offset by the following formula (2):
PTS_i' = PTS_i + T …… (2)
where PTS_i is the display time of the i-th frame of the second video and PTS_i' is the display time of that frame in the merged video; and
calculating the decoding time of each frame of the second video in the merged video from that frame's decoding time and the time offset by the following formula (3):
DTS_i' = DTS_i + T …… (3)
where DTS_i is the decoding time of the i-th frame of the second video and DTS_i' is the decoding time of that frame in the merged video.
Further, before obtaining the frame interval time, the method also comprises:
creating a first thread, a second thread and a third thread, and shooting a video by means of the first thread, the second thread and the third thread.
Preferably, shooting a video by means of the first, second and third threads comprises:
obtaining, by the first thread, the frame of video data currently shot by the camera, and inserting it at the tail of a video queue;
obtaining, by the second thread, the frame of audio data currently captured by the microphone, and inserting it at the tail of an audio queue; and
obtaining, by the third thread, one frame of video data from the head of the video queue and one frame of audio data from the head of the audio queue, and combining the obtained frame of video data and frame of audio data into one frame of the video.
Preferably, obtaining the frame interval time comprises:
obtaining the time occupied by each frame of the first video, the number of video frames the first video contains, and the total duration of the first video;
calculating, from the time occupied by each frame, the total time occupied by the video frames of the first video;
calculating, from the total time occupied by the video frames and the total duration of the first video, the total time occupied by the frame intervals of the first video; and
calculating the frame interval time from the total time occupied by the frame intervals and the number of video frames the first video contains.
Preferably, merging the first video and the second video into one video according to the display time and decoding time of each frame of the second video in the merged video comprises:
storing each frame of the first video in a video file;
determining the storage order of the frames of the second video according to their display times and decoding times in the merged video; and
storing each frame of the second video, in the determined storage order, after the last frame of the first video in the video file, so as to merge the first video and the second video into one video.
In the embodiments of the present invention, a frame interval time is obtained; a time offset is obtained according to the frame interval time and the display times of the first frame and the last frame of the first video; the display time and decoding time that each frame of the second video will have in the merged video are calculated according to the time offset and that frame's display time and decoding time; and the first video and the second video are merged into one video according to these merged-video display times and decoding times. Because the two videos become a single video, they play back continuously as one video; the waiting time that arises when the first video is played and the second video is then played separately is avoided, and the efficiency of video playback is improved.
Embodiment 2
This embodiment of the present invention provides a method for merging videos.
Mobile terminals such as mobile phones commonly provide a video shooting function, and users often use it; a user may shoot multiple videos within a period of time whose subject matter is identical or related. When playing videos, the user sometimes needs to play multiple videos with identical or related subject matter; the terminal can then merge these videos into one video by the method provided by this embodiment of the present invention.
Referring to Fig. 2, the method specifically comprises the following steps:
Step 201: create a first thread, a second thread and a third thread, and shoot the first video and the second video by means of the first, second and third threads;
The terminal is equipped with a camera and a microphone: video data can be shot with the camera, and audio data captured with the microphone. To capture audio data while video data is being shot, multiple threads can be created, so that one thread obtains the video data shot by the camera while another thread simultaneously obtains the audio data captured by the microphone.
Specifically, the first, second and third threads are created using multithreading. The first thread obtains the frame of video data currently shot by the camera and inserts it at the tail of a video queue. The second thread obtains the frame of audio data currently captured by the microphone and inserts it at the tail of an audio queue. The third thread obtains one frame of video data from the head of the video queue and one frame of audio data from the head of the audio queue, and combines them into one frame of the first video. Every other frame of the first video can be obtained by the same operations, and every frame of the second video can likewise be obtained in the same way.
The video queue and the audio queue are first-in-first-out (FIFO) queues. Because the third thread processes data much more slowly than the first and second threads, the video data obtained by the first thread must wait at the tail of the video queue, and the audio data obtained by the second thread must wait at the tail of the audio queue.
For example, a first thread t1, a second thread t2 and a third thread t3 are created using multithreading. Thread t1 obtains the frame of video data v3 currently shot by the camera and inserts it at the tail of the video queue shown as queue 1. Thread t2 obtains the frame of audio data r3 currently captured by the microphone and inserts it at the tail of the audio queue shown as queue 2. Thread t3 obtains the frame of video data v1 from the head of queue 1 and the frame of audio data r1 from the head of queue 2, and combines v1 and r1 into one frame of the first video video1. Every other frame of video1 can be obtained by the same operations, and every frame of the second video video2 can likewise be obtained in the same way.
Queue 1 (video queue, head at the right): v3 v2 v1
Queue 2 (audio queue, head at the right): r3 r2 r1
Shooting video with multiple threads in this way allows audio data to be captured by the microphone at the same time as video data is shot by the camera, and allows a frame of video data and a frame of audio data obtained over the same period to be combined into one frame of the video. This shortens the time needed to obtain a video and improves the efficiency of obtaining it.
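The following is a minimal sketch of this three-thread capture pipeline. The VideoFrame and AudioFrame types are hypothetical stand-ins for camera and microphone data, and the platform-specific capture calls are omitted:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Hypothetical one-frame payloads; real code would hold camera/mic buffers.
struct VideoFrame {};
struct AudioFrame {};
struct MergedFrame { VideoFrame v; AudioFrame a; };

// A simple blocking FIFO, playing the role of "queue 1" / "queue 2".
template <typename T>
class BlockingQueue {
public:
    void push(T item) {  // first/second thread: insert at the tail
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(item)); }
        cv_.notify_one();
    }
    T pop() {            // third thread: take from the head, waiting if empty
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T item = std::move(q_.front());
        q_.pop();
        return item;
    }
private:
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

BlockingQueue<VideoFrame> video_queue;  // filled by the first thread
BlockingQueue<AudioFrame> audio_queue;  // filled by the second thread

// Third thread: pair one video frame with one audio frame per output frame.
void combiner() {
    for (;;) {
        MergedFrame f{video_queue.pop(), audio_queue.pop()};
        // ... append f to the video being recorded ...
    }
}
```

The blocking pop() realizes the waiting described above: data inserted by the faster first and second threads simply accumulates in the queues until the slower third thread consumes it.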
Further, the first video and the second video need not have been shot by the terminal; they may be videos the terminal has already stored or videos the terminal obtains from a video server. The first video and the second video must, however, have the same video format, for example AVI (Audio Video Interleave), WMV (Windows Media Video) or DV (Digital Video).
The function av_read_frame() is an audio/video reading function used to read the data of each frame a video contains. After the first video and the second video have been obtained by the operations above, the data of each frame of the first video and of the second video can be read with av_read_frame().
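For illustration, av_read_frame() is the frame/packet reading function of FFmpeg's libavformat (the description does not name the library, so this attribution is an assumption). A minimal sketch of reading every frame of one input file could look as follows; the path handling and error treatment are illustrative only:

```cpp
extern "C" {
#include <libavformat/avformat.h>
}

// Sketch: read every frame of data that a video file contains.
// Each AVPacket returned by av_read_frame() carries one frame of audio
// or video data, together with its display time (pts) and decoding
// time (dts) in units of the stream's time base.
static int read_all_frames(const char *path) {
    AVFormatContext *fmt = nullptr;
    if (avformat_open_input(&fmt, path, nullptr, nullptr) < 0)
        return -1;                            // could not open the video
    if (avformat_find_stream_info(fmt, nullptr) < 0) {
        avformat_close_input(&fmt);
        return -1;
    }
    AVPacket *pkt = av_packet_alloc();
    while (av_read_frame(fmt, pkt) >= 0) {
        // pkt->pts, pkt->dts: the frame's display and decoding times.
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
    avformat_close_input(&fmt);
    return 0;
}
```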
After the first video and the second video have been obtained by the operation of step 201, they can be merged into one video by the operations of steps 202-205 below.
Step 202: obtain a frame interval time;
The frame interval time is the length of the gap between two adjacent frames of a video.
Specifically: obtain the time occupied by each frame of the first video, the number of video frames the first video contains, and the total duration of the first video. Sum the times occupied by the individual frames to obtain the time occupied by all video frames of the first video. Subtract the time occupied by all video frames from the total duration of the first video to obtain the total frame interval time of the first video. Determine the number of frame intervals from the number of video frames the first video contains. Divide the total frame interval time by the number of frame intervals to obtain the frame interval time.
The data of a frame of video includes the time occupied by that frame, and in general every frame of a video occupies the same amount of time, so the time occupied by each frame can be read from the data of each frame of the first video.
For example, suppose the total duration of the first video video1 is 1 s, video1 contains 20 video frames, and every frame occupies the same time, 0.04 s. Obtain the per-frame time of 0.04 s, the frame count of 20, and the total duration of 1 s. The sum of the times occupied by the frames is 0.8 s, so the video frames of video1 occupy 0.8 s in total. Subtracting 0.8 s from the total duration of 1 s gives a total frame interval time of 0.2 s. Since video1 contains 20 frames, it has 19 frame intervals. Dividing the total frame interval time of 0.2 s by the 19 intervals gives a frame interval time of approximately 0.01 s.
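Under the assumptions of this example (every frame occupies the same amount of time), the computation of step 202 can be sketched as follows; the function name and the use of seconds as the unit are illustrative:

```cpp
// Sketch of step 202: derive the frame interval time t0 of the first video.
// Assumes each frame occupies the same amount of time, as in the example.
double frame_interval_time(double total_duration,   // e.g. 1.0 s
                           int    frame_count,      // e.g. 20 frames
                           double frame_duration) { // e.g. 0.04 s per frame
    double frames_total = frame_count * frame_duration;  // 20 * 0.04 = 0.8 s
    double gaps_total   = total_duration - frames_total; // 1.0 - 0.8 = 0.2 s
    int    gap_count    = frame_count - 1;               // 20 frames -> 19 gaps
    return gaps_total / gap_count;                       // 0.2 / 19 ≈ 0.01 s
}
```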
Alternatively, a default frame interval time may be set in advance.
Step 203: obtain a time offset according to the frame interval time and the display times of the first frame and the last frame of the first video;
Each frame of video carries its display time; when the video is played, the frame is displayed at its display time. The display times of the first frame and the last frame can be read when the data of the frames of the first video is obtained.
Specifically, the time offset is calculated from the frame interval time and the display times of the first frame and the last frame of the first video by the following formula (1):
T = PTS_N - PTS_1 + t0 …… (1)
where T is the time offset, PTS_N is the display time of the last frame, PTS_1 is the display time of the first frame, and t0 is the frame interval time.
For example, suppose the display time of the first frame of video1 is 0 s and the display time of its last frame is 1 s. From the frame interval time of 0.01 s and these two display times, formula (2) below gives a time offset of 1.01 s:
T = PTS_N - PTS_1 + t0 = 1 - 0 + 0.01 = 1.01 s …… (2)
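The offset computation itself is a single line; a sketch with the example's values, times expressed in seconds:

```cpp
// Sketch of step 203, formula (1): T = PTS_N - PTS_1 + t0.
double time_offset(double pts_first,   // PTS_1: display time of the first frame, 0.0 s
                   double pts_last,    // PTS_N: display time of the last frame, 1.0 s
                   double t0) {        // frame interval time, 0.01 s
    return pts_last - pts_first + t0;  // 1.0 - 0.0 + 0.01 = 1.01 s
}
```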
After the time offset has been obtained by steps 202 and 203, the display time and decoding time of each frame of the second video in the merged video are calculated by the operation of step 204 below.
Step 204: calculate, according to the time offset and the display time and decoding time of each frame of the second video, the display time and decoding time of each frame of the second video in the merged video;
Each frame of video carries its display time and its decoding time: when the video is played, the frame's data is decoded at its decoding time and the frame is then displayed at its display time. The display time and decoding time of each frame of the second video can be read when the data of the frames of the second video is obtained.
Specifically, the display time of each frame of the second video in the merged video is calculated from that frame's display time and the time offset by the following formula (3):
PTS_i' = PTS_i + T …… (3)
where PTS_i is the display time of the i-th frame of the second video and PTS_i' is the display time of that frame in the merged video.
The decoding time of each frame of the second video in the merged video is calculated from that frame's decoding time and the time offset by the following formula (4):
DTS_i' = DTS_i + T …… (4)
where DTS_i is the decoding time of the i-th frame of the second video and DTS_i' is the decoding time of that frame in the merged video.
For example, suppose the second video video2 contains 3 frames: the first frame has display time 0.00 s and decoding time 0.00 s, the second frame has display time 0.01 s and decoding time 0.00 s, and the third frame has display time 0.02 s and decoding time 0.01 s. From the display time 0.00 s of the first frame and the time offset of 1.01 s, formula (5) below gives the first frame's display time in the merged video as 1.01 s. In the same way, the second frame's display time in the merged video is 1.02 s and the third frame's is 1.03 s.
PTS_1' = PTS_1 + T = 0.00 + 1.01 = 1.01 s …… (5)
From the decoding time 0.00 s of the first frame and the time offset of 1.01 s, formula (6) below gives the first frame's decoding time in the merged video as 1.01 s. In the same way, the second frame's decoding time in the merged video is 1.01 s and the third frame's is 1.02 s.
DTS_1' = DTS_1 + T = 0.00 + 1.01 = 1.01 s …… (6)
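In terms of the libavformat packets read above, step 204 amounts to adding the offset to the pts and dts of every packet of the second video. A sketch, under the assumption that the offset has already been converted into the stream's time-base units (for example with av_rescale_q()):

```cpp
// Sketch of step 204, formulas (3) and (4): shift one packet of the
// second video by the time offset.
// offset_tb: the time offset T, already expressed in this packet's
// stream time base (an assumption of this sketch).
static void shift_packet(AVPacket *pkt, int64_t offset_tb) {
    if (pkt->pts != AV_NOPTS_VALUE)
        pkt->pts += offset_tb;  // PTS_i' = PTS_i + T
    if (pkt->dts != AV_NOPTS_VALUE)
        pkt->dts += offset_tb;  // DTS_i' = DTS_i + T
}
```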
After the display time and decoding time of each frame of the second video in the merged video have been calculated by step 204, the first video and the second video are merged by the operation of step 205 below.
Step 205: merge the first video and the second video into one video according to the display time and decoding time of each frame of the second video in the merged video.
Specifically, each frame of the first video is stored in a video file; then the storage order of the frames of the second video is determined from their display times and decoding times in the merged video. After the last frame of the first video in that video file, the frames of the second video are stored in the determined order, so that the first video and the second video are merged into one video.
The storage order may follow the chronological order of the frames' display times and decoding times in the merged video. Storing the first video and the second video directly into one video file in this way improves the efficiency of merging videos.
For example, each frame of the first video video1 is stored in a video file 1. Then, from the merged-video display time 1.01 s and decoding time 1.01 s of the first frame of video2, the display time 1.02 s and decoding time 1.01 s of its second frame, and the display time 1.03 s and decoding time 1.02 s of its third frame, the storage order of the frames of video2 is determined to be: first frame, second frame, third frame. After the last frame of video1 in video file 1, the frames of video2 are stored in this order in video file 1, so that video1 and video2 are merged into one video, as sketched below.
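A hedged sketch of this step with libavformat's muxing API; output-stream setup, codec parameter copying, time-base rescaling and full error handling are omitted, and input and output stream indexes are assumed to match:

```cpp
// Sketch of step 205: append every packet of one input to the output file.
// Called once for the first video with offset 0, then once for the second
// video with the computed offset, between avformat_write_header() and
// av_write_trailer(). av_interleaved_write_frame() stores the packets in
// decoding-time order, realizing the storage order described above.
static int append_video(AVFormatContext *in, AVFormatContext *out,
                        int64_t offset_tb) {
    AVPacket *pkt = av_packet_alloc();
    int ret = 0;
    while (av_read_frame(in, pkt) >= 0) {
        shift_packet(pkt, offset_tb);  // no-op for the first video (offset 0)
        ret = av_interleaved_write_frame(out, pkt);  // takes over the packet
        if (ret < 0)
            break;
    }
    av_packet_free(&pkt);
    return ret;
}
```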
After the first video and the second video have been merged into one video, the merged video plays continuously with no idle time during playback; compared with playing the first video and then separately playing the second video, this improves the efficiency of video playback.
If there are further videos to be merged, the video obtained by the merge is taken as the new first video and the video still to be merged as the new second video, and they are merged into one video by the method provided by this embodiment of the present invention.
In this embodiment of the present invention, the capture of video data by the camera and of audio data by the microphone is handled by Java program code. However, when program code processes a task, the data must ultimately be passed down to the underlying layer and translated into hardware signals, and Java code requires many translation steps to do this, so it is very inefficient. Therefore, in this embodiment, after the video data has been captured by the camera and the audio data by the microphone, and the first video and the second video have been obtained, the two videos are passed to the JNI (Java Native Interface) layer. At the JNI layer, object-oriented C++ code merges the first video and the second video into one video; because the C++ code saves the translation from Java when data is passed down to the underlying layer and turned into hardware signals, processing efficiency is improved.
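For illustration only, the hand-off from Java to native code might be declared as follows; the package, class and method names here are hypothetical and not taken from the patent:

```cpp
#include <jni.h>

// Hypothetical JNI entry point: Java passes the paths of the two videos
// and of the output file down to native C++ code, which performs the
// merge (e.g. along the lines of the sketches above).
extern "C" JNIEXPORT jint JNICALL
Java_com_example_VideoMerger_nativeMerge(JNIEnv *env, jobject /* thiz */,
                                         jstring firstPath,
                                         jstring secondPath,
                                         jstring outputPath) {
    const char *in1 = env->GetStringUTFChars(firstPath, nullptr);
    const char *in2 = env->GetStringUTFChars(secondPath, nullptr);
    const char *out = env->GetStringUTFChars(outputPath, nullptr);
    int ret = 0;
    // ... open in1 and in2, compute the offset, write out (steps 202-205) ...
    env->ReleaseStringUTFChars(firstPath, in1);
    env->ReleaseStringUTFChars(secondPath, in2);
    env->ReleaseStringUTFChars(outputPath, out);
    return ret;
}
```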
In the embodiments of the present invention, a frame interval time is obtained; a time offset is obtained according to the frame interval time and the display times of the first frame and the last frame of the first video; the display time and decoding time that each frame of the second video will have in the merged video are calculated according to the time offset and that frame's display time and decoding time; and the first video and the second video are merged into one video according to these merged-video display times and decoding times. Because the two videos become a single video, they play back continuously as one video; the waiting time that arises when the first video is played and the second video is then played separately is avoided, and the efficiency of video playback is improved.
Embodiment 3
Referring to Fig. 3, an embodiment of the present invention provides a device for merging videos, comprising:
a first obtaining module 301, configured to obtain a frame interval time;
a second obtaining module 302, configured to obtain a time offset according to the frame interval time and the display times of the first frame and the last frame of a first video;
a calculation module 303, configured to calculate, according to the time offset and the display time and decoding time of each frame of a second video, the display time and decoding time of each frame of the second video in the merged video; and
a merging module 304, configured to merge the first video and the second video into one video according to the display time and decoding time of each frame of the second video in the merged video.
The second obtaining module 302 is configured to calculate the time offset from the frame interval time and the display times of the first frame and the last frame of the first video by the following formula (1):
T = PTS_N - PTS_1 + t0 …… (1)
where T is the time offset, PTS_N is the display time of the last frame, PTS_1 is the display time of the first frame, and t0 is the frame interval time.
The calculation module 303 comprises:
a first calculation unit, configured to calculate the display time of each frame of the second video in the merged video from that frame's display time and the time offset by the following formula (2):
PTS_i' = PTS_i + T …… (2)
where PTS_i is the display time of the i-th frame of the second video and PTS_i' is the display time of that frame in the merged video; and
a second calculation unit, configured to calculate the decoding time of each frame of the second video in the merged video from that frame's decoding time and the time offset by the following formula (3):
DTS_i' = DTS_i + T …… (3)
where DTS_i is the decoding time of the i-th frame of the second video and DTS_i' is the decoding time of that frame in the merged video.
Further, the device also comprises:
a shooting module, configured to create a first thread, a second thread and a third thread, and to shoot a video by means of the first, second and third threads.
The shooting module comprises:
a first obtaining unit, configured to obtain, by the first thread, the frame of video data currently shot by the camera and insert it at the tail of a video queue;
a second obtaining unit, configured to obtain, by the second thread, the frame of audio data currently captured by the microphone and insert it at the tail of an audio queue; and
a combining unit, configured to obtain, by the third thread, one frame of video data from the head of the video queue and one frame of audio data from the head of the audio queue, and to combine them into one frame of the video.
The first obtaining module 301 comprises:
a third obtaining unit, configured to obtain the time occupied by each frame of the first video, the number of video frames the first video contains, and the total duration of the first video;
a third calculation unit, configured to calculate, from the time occupied by each frame, the total time occupied by the video frames of the first video;
a fourth calculation unit, configured to calculate, from the total time occupied by the video frames and the total duration of the first video, the total time occupied by the frame intervals of the first video; and
a fifth calculation unit, configured to calculate the frame interval time from the total time occupied by the frame intervals and the number of video frames the first video contains.
The merging module 304 comprises:
a first storage unit, configured to store each frame of the first video in a video file;
a determining unit, configured to determine the storage order of the frames of the second video according to their display times and decoding times in the merged video; and
a second storage unit, configured to store each frame of the second video, in the determined storage order, after the last frame of the first video in the video file, so as to merge the first video and the second video into one video.
In the embodiments of the present invention, a frame interval time is obtained; a time offset is obtained according to the frame interval time and the display times of the first frame and the last frame of the first video; the display time and decoding time that each frame of the second video will have in the merged video are calculated according to the time offset and that frame's display time and decoding time; and the first video and the second video are merged into one video according to these merged-video display times and decoding times. Because the two videos become a single video, they play back continuously as one video; the waiting time that arises when the first video is played and the second video is then played separately is avoided, and the efficiency of video playback is improved.
Embodiment 4
Please refer to Fig. 4, which shows a schematic structural diagram of a terminal with a touch-sensitive surface according to an embodiment of the present invention; this terminal may be used to implement the video merging method provided in the embodiments above. Specifically:
The terminal 900 may comprise an RF (Radio Frequency) circuit 110, a memory 120 including one or more computer-readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a WiFi (Wireless Fidelity) module 170, a processor 180 including one or more processing cores, a power supply 190 and other components. Those skilled in the art will understand that the terminal structure shown in Fig. 4 does not limit the terminal: it may include more or fewer components than illustrated, combine some components, or arrange the components differently. In detail:
The RF circuit 110 may be used to receive and send messages, or to receive and send signals during a call; in particular, after receiving downlink information from a base station it passes the information to the one or more processors 180 for processing, and it sends uplink data to the base station. Typically, the RF circuit 110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer and the like. In addition, the RF circuit 110 may communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Message Service) and so on.
The memory 120 may be used to store software programs and modules; by running the software programs and modules stored in the memory 120, the processor 180 performs various functional applications and data processing. The memory 120 may mainly comprise a program storage area and a data storage area: the program storage area may store the operating system and the application programs required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created through the use of the terminal 900 (such as audio data or a phone book). In addition, the memory 120 may comprise high-speed random access memory and may also comprise non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or other solid-state storage devices. Correspondingly, the memory 120 may also comprise a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.
The input unit 130 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. Specifically, the input unit 130 may comprise a touch-sensitive surface 131 and other input devices 132. The touch-sensitive surface 131, also called a touch display screen or touchpad, can collect touch operations by the user on or near it (for example, operations performed on or near the touch-sensitive surface 131 using a finger, a stylus or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch-sensitive surface 131 may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch and the signal produced by the touch operation, and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends these to the processor 180, and can receive and execute commands sent by the processor 180. The touch-sensitive surface 131 may be implemented in various types, such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch-sensitive surface 131, the input unit 130 may also comprise other input devices 132, including but not limited to one or more of a physical keyboard, function keys (such as volume control keys or a power key), a trackball, a mouse, a joystick and the like.
The display unit 140 may be used to display information input by the user, information provided to the user, and the various graphical user interfaces of the terminal 900; these graphical user interfaces may consist of graphics, text, icons, video and any combination thereof. The display unit 140 may comprise a display panel 141, which may optionally be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display or the like. Further, the touch-sensitive surface 131 may cover the display panel 141; when the touch-sensitive surface 131 detects a touch operation on or near it, it passes the operation to the processor 180 to determine the type of the touch event, and the processor 180 then provides the corresponding visual output on the display panel 141 according to the type of the touch event. Although in Fig. 4 the touch-sensitive surface 131 and the display panel 141 realize input and output as two independent components, in some embodiments the touch-sensitive surface 131 and the display panel 141 may be integrated to realize the input and output functions.
The terminal 900 may also comprise at least one sensor 150, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor may comprise an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 141 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 141 and/or the backlight when the terminal 900 is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally along three axes) and, at rest, the magnitude and direction of gravity; it can be used in applications that recognize the phone's attitude (such as landscape/portrait switching, related games and magnetometer pose calibration) and in vibration-recognition functions (such as a pedometer or tap detection). The terminal 900 may also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, which are not described further here.
The audio circuit 160, a loudspeaker 161 and a microphone 162 may provide an audio interface between the user and the terminal 900. The audio circuit 160 can convert received audio data into an electrical signal and transmit it to the loudspeaker 161, which converts it into a sound signal for output; conversely, the microphone 162 converts a collected sound signal into an electrical signal, which the audio circuit 160 receives and converts into audio data; after the audio data has been processed by the processor 180, it is sent through the RF circuit 110 to, for example, another terminal, or output to the memory 120 for further processing. The audio circuit 160 may also comprise an earphone jack to allow communication between a peripheral earphone and the terminal 900.
WiFi is a short-range wireless transmission technology. Through the WiFi module 170, the terminal 900 can help the user send and receive e-mail, browse web pages, access streaming media and so on; it provides the user with wireless broadband Internet access. Although Fig. 4 shows the WiFi module 170, it is understood that it is not an essential part of the terminal 900 and may be omitted as needed without changing the essence of the invention.
The processor 180 is the control center of the terminal 900. It connects all parts of the whole phone through various interfaces and lines, and performs the various functions of the terminal 900 and processes data by running or executing the software programs and/or modules stored in the memory 120 and calling the data stored in the memory 120, thereby monitoring the phone as a whole. Optionally, the processor 180 may comprise one or more processing cores; preferably, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs and so on, and the modem processor mainly handles wireless communication. It is understood that the modem processor may also not be integrated into the processor 180.
The terminal 900 also comprises a power supply 190 (such as a battery) that powers the components. Preferably, the power supply may be logically connected to the processor 180 through a power management system, so that functions such as charging, discharging and power consumption management are realized through the power management system. The power supply 190 may also comprise one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
Although not shown, the terminal 900 may also comprise a camera, a Bluetooth module and the like, which are not described further here. Specifically, in this embodiment the display unit of the terminal 900 is a touch-screen display, and the terminal 900 also comprises a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs containing instructions for carrying out the following operations:
obtaining a frame interval time;
obtaining a time offset according to the frame interval time and the display times of the first frame and the last frame of a first video;
calculating, according to the time offset and the display time and decoding time of each frame of a second video, the display time and decoding time of each frame of the second video in the merged video; and
merging the first video and the second video into one video according to the display time and decoding time of each frame of the second video in the merged video.
Preferably, obtaining the time offset according to the frame interval time and the display times of the first frame and the last frame of the first video comprises:
calculating the time offset from the frame interval time and the display times of the first frame and the last frame of the first video by the following formula (1):
T = PTS_N - PTS_1 + t0 …… (1)
where T is the time offset, PTS_N is the display time of the last frame, PTS_1 is the display time of the first frame, and t0 is the frame interval time.
Preferably, calculating, according to the time offset and the display time and decoding time of each frame of the second video, the display time and decoding time of each frame of the second video in the merged video comprises:
calculating the display time of each frame of the second video in the merged video from that frame's display time and the time offset by the following formula (2):
PTS_i' = PTS_i + T …… (2)
where PTS_i is the display time of the i-th frame of the second video and PTS_i' is the display time of that frame in the merged video; and
calculating the decoding time of each frame of the second video in the merged video from that frame's decoding time and the time offset by the following formula (3):
DTS_i' = DTS_i + T …… (3)
where DTS_i is the decoding time of the i-th frame of the second video and DTS_i' is the decoding time of that frame in the merged video.
Further, before obtaining the frame interval time, the operations also comprise:
creating a first thread, a second thread and a third thread, and shooting a video by means of the first thread, the second thread and the third thread.
Preferably, shooting a video by means of the first, second and third threads comprises:
obtaining, by the first thread, the frame of video data currently shot by the camera, and inserting it at the tail of a video queue;
obtaining, by the second thread, the frame of audio data currently captured by the microphone, and inserting it at the tail of an audio queue; and
obtaining, by the third thread, one frame of video data from the head of the video queue and one frame of audio data from the head of the audio queue, and combining the obtained frame of video data and frame of audio data into one frame of the video.
Preferably, obtaining the frame interval time comprises:
obtaining the time occupied by each frame of the first video, the number of video frames the first video contains, and the total duration of the first video;
calculating, from the time occupied by each frame, the total time occupied by the video frames of the first video;
calculating, from the total time occupied by the video frames and the total duration of the first video, the total time occupied by the frame intervals of the first video; and
calculating the frame interval time from the total time occupied by the frame intervals and the number of video frames the first video contains.
Preferably, merging the first video and the second video into one video according to the display time and decoding time of each frame of the second video in the merged video comprises:
storing each frame of the first video in a video file;
determining the storage order of the frames of the second video according to their display times and decoding times in the merged video; and
storing each frame of the second video, in the determined storage order, after the last frame of the first video in the video file, so as to merge the first video and the second video into one video.
In the embodiments of the present invention, a frame interval time is obtained; a time offset is obtained according to the frame interval time and the display times of the first frame and the last frame of the first video; the display time and decoding time that each frame of the second video will have in the merged video are calculated according to the time offset and that frame's display time and decoding time; and the first video and the second video are merged into one video according to these merged-video display times and decoding times. Because the two videos become a single video, they play back continuously as one video; the waiting time that arises when the first video is played and the second video is then played separately is avoided, and the efficiency of video playback is improved.
Those of ordinary skill in the art will appreciate that all or part of the steps of the embodiments above may be implemented in hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc or the like.
The foregoing describes merely preferred embodiments of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (14)

1. A method for merging videos, characterized in that the method comprises:
obtaining a frame interval time;
obtaining a time offset according to the frame interval time, a displaying time of a first video frame comprised in a first video, and a displaying time of a last video frame;
calculating, according to the time offset and a displaying time and decode time of each video frame comprised in a second video, a displaying time and decode time, in a merged video, of each video frame comprised in the second video;
merging the first video and the second video into one video according to the displaying time and decode time, in the merged video, of each video frame comprised in the second video.
2. the method for claim 1, is characterized in that, the described displaying time of the first frame video that comprises according to described frame period time and the first video and the displaying time of last frame video, and acquisition time side-play amount, comprising:
The displaying time of the first frame video comprised according to described frame period time and the first video and the displaying time of last frame video, by formula (1) side-play amount computing time as follows;
T=PTS N-PTS 1+t0……(1)
In formula (1), T is described time offset, PTS nfor the displaying time of described last frame video, PTS 1for the displaying time of described first frame video, t0 is the described frame period time.
3. the method for claim 1, it is characterized in that, the displaying time of the described every frame video comprised according to described time offset, the second video and decode time, calculate the displaying time in every frame video that described second video comprises video after merging and decode time, comprising:
The displaying time of the every frame video comprised according to the second video and described time offset, calculate the displaying time in every frame video that described second video comprises video after merging by following formula (2);
PTS i’=PTS i+T……(2)
In formula (2), PTS ifor the displaying time of the i-th frame video that described second video comprises, PTS i' displaying time in the video of the i-th frame video after described merging that comprise for described second video;
The decode time of the every frame video comprised according to described second video and described time offset, calculate the decode time in every frame video that described second video comprises video after described merging by following formula (3);
DTS i’=DTS i+T……(3)
In formula (3), DTS ifor the decode time of the i-th frame video that described second video comprises, DTS i' decode time in the video of the i-th frame video after described merging that comprise for described second video.
4. the method for claim 1, is characterized in that, before described getting frame interval time, also comprises:
Create the first thread, the second thread and the 3rd thread, by described first thread, the second thread and the 3rd thread capture video.
5. The method according to claim 4, characterized in that shooting the video through the first thread, the second thread and the third thread comprises:
obtaining, by the first thread, a frame of video data currently shot by a camera, and inserting the frame of video data into the tail of a video queue;
obtaining, by the second thread, a frame of audio data currently collected by a microphone, and inserting the frame of audio data into the tail of an audio queue;
obtaining, by the third thread, a frame of video data from the head of the video queue and a frame of audio data from the head of the audio queue, and combining the obtained frame of video data and frame of audio data into one frame of the video.
6. the method for claim 1, is characterized in that, described getting frame interval time, comprising:
Obtain the number of frame of video and total duration of described first video that in the first video, every time shared by frame video, described first video comprise;
According to the time shared by frame video every in described first video, calculate the total time that in described first video, frame of video takies;
Total duration of the total time taken according to frame of video in described first video and described first video, calculates the total time shared by frame period that described first video comprises;
The number of the frame of video that the total time shared by described frame period and described first video comprise, calculates the frame period time.
7. the method for claim 1, is characterized in that, the displaying time in the described every frame video comprised according to described second video video after merging and decode time, described first video and described second video is merged into a video, comprising:
The every frame video storage comprised by described first video is in a video file;
Displaying time in the every frame video comprised according to described second video video after merging and decode time, determine the storage order of every frame video that described second video comprises;
After the last frame video that the first video described in described video file comprises, the every frame video comprised according to described described second video of storage order storage determined, to realize described first video and described second video to merge into a video.
8. An apparatus for merging videos, characterized in that the apparatus comprises:
a first acquisition module, configured to obtain a frame interval time;
a second acquisition module, configured to obtain a time offset according to the frame interval time, a displaying time of a first video frame comprised in a first video, and a displaying time of a last video frame;
a computing module, configured to calculate, according to the time offset and a displaying time and decode time of each video frame comprised in a second video, a displaying time and decode time, in a merged video, of each video frame comprised in the second video;
a merging module, configured to merge the first video and the second video into one video according to the displaying time and decode time, in the merged video, of each video frame comprised in the second video.
9. The apparatus according to claim 8, characterized in that the second acquisition module is configured to compute the time offset, according to the frame interval time and the displaying times of the first and last video frames comprised in the first video, by the following formula (1):
T = PTS_N - PTS_1 + t0 ……(1)
In formula (1), T is the time offset, PTS_N is the displaying time of the last video frame, PTS_1 is the displaying time of the first video frame, and t0 is the frame interval time.
10. The apparatus according to claim 8, characterized in that the computing module comprises:
a first computing unit, configured to calculate the displaying time, in the merged video, of each video frame comprised in the second video, according to its displaying time and the time offset, by the following formula (2):
PTS_i' = PTS_i + T ……(2)
In formula (2), PTS_i is the displaying time of the i-th video frame comprised in the second video, and PTS_i' is the displaying time of that video frame in the merged video;
a second computing unit, configured to calculate the decode time, in the merged video, of each video frame comprised in the second video, according to its decode time and the time offset, by the following formula (3):
DTS_i' = DTS_i + T ……(3)
In formula (3), DTS_i is the decode time of the i-th video frame comprised in the second video, and DTS_i' is the decode time of that video frame in the merged video.
11. The apparatus according to claim 8, characterized in that the apparatus further comprises:
a shooting module, configured to create a first thread, a second thread and a third thread, and to shoot a video through the first thread, the second thread and the third thread.
12. The apparatus according to claim 11, characterized in that the shooting module comprises:
a first acquiring unit, configured to obtain, by the first thread, a frame of video data currently shot by a camera, and insert the frame of video data into the tail of a video queue;
a second acquiring unit, configured to obtain, by the second thread, a frame of audio data currently collected by a microphone, and insert the frame of audio data into the tail of an audio queue;
a combining unit, configured to obtain, by the third thread, a frame of video data from the head of the video queue and a frame of audio data from the head of the audio queue, and combine the obtained frame of video data and frame of audio data into one frame of the video.
13. The apparatus according to claim 8, characterized in that the first acquisition module comprises:
a third acquiring unit, configured to obtain the time occupied by each video frame in the first video, the number of video frames comprised in the first video, and the total duration of the first video;
a third computing unit, configured to calculate, according to the time occupied by each video frame in the first video, the total time occupied by the video frames in the first video;
a fourth computing unit, configured to calculate, according to the total time occupied by the video frames in the first video and the total duration of the first video, the total time occupied by the frame intervals comprised in the first video;
a fifth computing unit, configured to calculate the frame interval time according to the total time occupied by the frame intervals and the number of video frames comprised in the first video.
14. The apparatus according to claim 8, characterized in that the merging module comprises:
a first storage unit, configured to store each video frame comprised in the first video in a video file;
a determining unit, configured to determine, according to the displaying time and decode time, in the merged video, of each video frame comprised in the second video, the storage order of the video frames comprised in the second video;
a second storage unit, configured to store, after the last video frame comprised in the first video in the video file, each video frame comprised in the second video according to the determined storage order, so as to merge the first video and the second video into one video.
CN201410503239.0A 2014-09-26 2014-09-26 Method and apparatus for merging videos Active CN104967864B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410503239.0A CN104967864B (en) Method and apparatus for merging videos


Publications (2)

Publication Number Publication Date
CN104967864A true CN104967864A (en) 2015-10-07
CN104967864B CN104967864B (en) 2019-01-11

Family

ID=54221788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410503239.0A Active CN104967864B (en) Method and apparatus for merging videos

Country Status (1)

Country Link
CN (1) CN104967864B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101374231A (en) * 2007-04-30 2009-02-25 Vixs系统公司 System and method for combining a plurality of video streams
CN101409831A (en) * 2008-07-10 2009-04-15 浙江师范大学 Method for processing multimedia video object
CN101740082A (en) * 2009-11-30 2010-06-16 孟智平 Method and system for clipping video based on browser
CN102054510A (en) * 2010-11-08 2011-05-11 武汉大学 Video preprocessing and playing method and system
CN102075792A (en) * 2010-12-23 2011-05-25 华为技术有限公司 Video file playing method and system, user equipment and server equipment
US20120169883A1 (en) * 2010-12-31 2012-07-05 Avermedia Information, Inc. Multi-stream video system, video monitoring device and multi-stream video transmission method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105657524A (en) * 2016-01-13 2016-06-08 上海视云网络科技有限公司 Seamless video switching method
CN109429030A (en) * 2017-08-31 2019-03-05 爱唯秀股份有限公司 The method for rebuilding video using super-resolution algorithms
CN110401866A (en) * 2018-04-25 2019-11-01 广州虎牙信息科技有限公司 Display methods, device, terminal and the storage medium of live video
CN110401866B (en) * 2018-04-25 2022-05-20 广州虎牙信息科技有限公司 Live video display method, device, terminal and storage medium
CN108966026A (en) * 2018-08-03 2018-12-07 广州酷狗计算机科技有限公司 The method and apparatus for making video file
CN108966026B (en) * 2018-08-03 2021-03-30 广州酷狗计算机科技有限公司 Method and device for making video file
CN113750527A (en) * 2021-09-10 2021-12-07 福建天晴数码有限公司 High-accuracy frame rate control method and system thereof
CN113750527B (en) * 2021-09-10 2023-09-01 福建天晴数码有限公司 High-accuracy frame rate control method and system thereof

Also Published As

Publication number Publication date
CN104967864B (en) 2019-01-11

Similar Documents

Publication Publication Date Title
CN104113782B (en) Video-based sign-in method, terminal, server and system
CN104967896A (en) Method and apparatus for displaying bullet-screen comment information
CN104869468A (en) Method and apparatus for displaying screen information
CN105005909A (en) Method and device for predicting lost users
CN103559731B (en) Method and terminal for displaying lyrics under screen locking state
CN104618794A (en) Method and device for playing video
CN104519404A (en) Graphics interchange format file playing method and device
CN104618222A (en) Method and device for matching expression image
CN104602100A (en) Method and device for recording video and audio in applications
CN104853081A (en) Breakpoint filming method, device and mobile terminal
CN105516784A (en) Virtual good display method and device
CN103475914A (en) Video playing method, video playing device, terminal equipment and server
CN104036536A (en) Method and apparatus for generating stop-motion animation
CN104869465A (en) Video playing control method and device
CN104618223A (en) Information recommendation management method, device and system
CN104967865A (en) Video previewing method and apparatus
CN106507204A (en) Video reverse-playback method and apparatus
CN103945241A (en) Streaming data statistical method, system and related device
CN104967864A (en) Video merging method and apparatus
CN104602135A (en) Method and device for controlling full screen play
CN104571778A (en) Lock screen picture setting method and device
CN104243394A (en) Multimedia file playing method and device
CN104254020B (en) Media data playing method, device and terminal
CN104901992A (en) Resource transfer method and device
CN105245432A (en) Unread message counting method, unread message counting device and terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant