CN109982126A - Method for superimposing associated videos - Google Patents

Method for superimposing associated videos

Info

Publication number
CN109982126A
CN109982126A (application CN201711446435.9A)
Authority
CN
China
Prior art keywords
video
signal
data
time
key frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711446435.9A
Other languages
Chinese (zh)
Inventor
李金楠
唐兴波
陈忠会
李刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ideapool (Beijing) Culture and Technology Co., Ltd.
Original Assignee
Ideapool (Beijing) Culture and Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ideapool (Beijing) Culture and Technology Co., Ltd.
Priority to CN201711446435.9A
Publication of CN109982126A
Legal status: Pending


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4394Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Abstract

The invention discloses a method for superimposing associated videos. The method comprises performing correlation judgement and sequencing on videos and performing superposition processing on the associated videos, wherein correlation judgement and sequencing comprise: a) extracting features from the signal data of the videos and segmenting each signal-data sequence into several indexed marker scenes; b) judging the correlation between videos according to the marker scenes; c) generating a similar-data queue of ordered marker scenes. Superposition processing of the associated videos comprises: a') taking the data nodes from the similar-data queue in order and selecting the two marker scenes with the highest similarity value, one frame of each serving as the marker key frame of its signal data; b') adjusting the in/out points and in/out times of adjacent signal data so that the marker key frames of the highly similar signal data are played at the same moment; c') adjusting the in/out-point times of the signal data of the two adjacent videos so that the different signal data play simultaneously.

Description

Method for superimposing associated videos
Technical field
The present invention relates to the field of video data processing, and in particular to a method for superimposing associated videos.
Background art
With the rapid development of computer technology, multimedia types have become increasingly rich, and the demands for processing the various kinds of media data are equally varied. Video data has both spatial and temporal attributes, so when a video is shot, differences in shooting angle and time mean that the same subject can give rise to many different videos.
When several videos, or sub-segments of them, are highly similar, the videos are judged to be correlated. The technical means currently used to judge whether videos are correlated are video retrieval and audio retrieval. When producing a video from correlated material, the routine operation is to edit the videos manually (fast playback, slow playback and so on), find the highly similar pictures, cut the timing and superimpose the videos, so that they transition smoothly through the highly similar pictures. Because of the complexity of video content, different users may focus on different aspects even when operating on the same video. The manual method is therefore not only imprecise but also time-consuming and cumbersome, and production efficiency is extremely low.
Summary of the invention
In view of the technical problems mentioned above, the invention proposes a method for superimposing associated videos. The method uses retrieval techniques to find the highly similar pictures of several videos, cuts the timing according to those pictures and superimposes the videos, achieving fast, accurate and intelligent editing and splicing.
A method for superimposing associated videos according to the invention comprises performing correlation judgement and sequencing on videos and performing superposition processing on the associated videos, wherein:
Correlation judgement and sequencing comprise:
a) performing feature extraction on the signal data of each of the imported videos, and segmenting each signal-data sequence into several indexed marker scenes;
b) judging, according to the marker scenes of the signal data, whether each pair of videos is correlated;
c) generating a similar-data queue of ordered marker scenes, wherein, when marker-scene information is added to the similar-data queue, it is added in order of the scene times of the highly similar scenes of the videos;
Superposition processing of the associated videos comprises:
a') taking, in order from the similar-data queue, all the data nodes of the first two signal data, selecting from these data nodes the two marker scenes with the highest similarity value, and choosing one frame from each of the two marker scenes as the marker key frame of the corresponding signal data;
b') after the marker key frames are determined, adjusting the in/out points and in/out times of the adjacent signal data so that the two marker key frames of the highly similar signal data are played at the same moment;
c') adjusting the in/out-point times of the signal data of the two adjacent videos so that the in point, out point and in time of the different signal data are consistent and the different signal data play simultaneously. (For readability, a schematic code sketch of the data structures implied by these steps is given directly below.)
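The patent leaves the similar-data queue, data node, marker scene and marker key frame as abstract concepts. Purely as an illustration, a minimal Python sketch of one plausible set of data structures follows; every class and field name here (MarkerScene, DataNode, SimilarDataQueue, the use of seconds, and so on) is an assumption made for readability, not part of the disclosed method.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MarkerScene:
    """One indexed marker scene segmented out of a video's signal-data sequence."""
    video_id: int                 # which imported video the scene belongs to
    scene_index: int              # index of the scene within that video
    start: float                  # scene start time, in seconds (assumed unit)
    end: float                    # scene end time, in seconds
    feature: list = field(default_factory=list)   # feature vector from retrieval

@dataclass
class DataNode:
    """One node of the similar-data queue: a pair of highly similar marker scenes."""
    scene_a: MarkerScene          # scene from the earlier video of the pair
    scene_b: MarkerScene          # scene from the later video of the pair
    similarity: float             # similarity score between the two scenes

@dataclass
class SimilarDataQueue:
    """Queue of data nodes kept in order of the scene time of the similar scenes."""
    nodes: List[DataNode] = field(default_factory=list)

    def add(self, node: DataNode) -> None:
        self.nodes.append(node)
        # keep the queue ordered by the scene time of the first scene in each node
        self.nodes.sort(key=lambda n: n.scene_a.start)
```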
In an embodiment of the invention, the signal data in the videos are video signals, and feature extraction is performed on each of the imported video signals by means of video retrieval technology.
Optionally, the specific steps for judging, according to the marker scenes of the video signals, whether each pair of videos is correlated are as follows:
i) selecting one video and comparing the marker scenes of its video signal with those of the video signals of the other videos;
ii) if they are highly similar, saving the information of the marker scene into the similar-data queue, then selecting the next video and comparing it with the other videos;
iii) if they are not similar, continuing to compare the current video with the other videos, until the marker-scene similarity of the video signals of all videos has been compared. (A schematic sketch of this pairwise comparison loop follows.)
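Steps i) to iii) amount to a pairwise, scene-by-scene comparison loop. The sketch below, reusing the data structures from the earlier sketch, shows one way it could be organised; the scene_similarity measure and the 0.8 threshold are placeholders chosen here for illustration, since the patent only refers to an unspecified video-retrieval technique.

```python
from itertools import combinations

def scene_similarity(scene_a: MarkerScene, scene_b: MarkerScene) -> float:
    """Placeholder similarity score between two marker scenes.

    The patent relies on an unspecified video/audio retrieval technique; this
    trivial feature-overlap ratio stands in for it purely for illustration.
    """
    fa, fb = set(scene_a.feature), set(scene_b.feature)
    return len(fa & fb) / max(len(fa | fb), 1)

def judge_correlation(videos: list, queue: SimilarDataQueue,
                      threshold: float = 0.8) -> SimilarDataQueue:
    """Compare every pair of videos scene by scene (cf. steps i to iii).

    `videos` is a list of per-video MarkerScene lists; each highly similar
    scene pair is recorded into the similar-data queue (step ii).
    """
    for scenes_a, scenes_b in combinations(videos, 2):
        for sa in scenes_a:
            for sb in scenes_b:
                sim = scene_similarity(sa, sb)
                if sim >= threshold:               # "highly similar"
                    queue.add(DataNode(sa, sb, sim))
    return queue
```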
In step a'), when the marker key frames are selected, the scene pictures in the data nodes are previewed, and one frame of the marker scene is chosen as the marker key frame of the first video and of the second video respectively.
In step b'), picture switching at the same moment is achieved by adjusting the in/out points and in/out times of the adjacent video signals, specifically:
a) the in point and in time of the first video signal default to 0, and its out point and out time need to be cut: the out point is the time value of its marker key frame, and the out time is the in time plus the difference between the out point and the in point (out time = in time + out point - in point);
b) the second video signal needs its in point and in time cut: the in point is the time value of its marker key frame, and the in time is the out time of the previous video signal;
c) the second group of data nodes is then adjusted, i.e. the switching scene of the second and third video signals; after the switching key frames have been found by step a'), the out point and out time of the second video signal and the in point and in time of the third video signal are cut, and so on. The above steps are performed on the data nodes of each pair of adjacent video signal data in the similar-data queue, so that the marker key frame of the previous video signal data and the marker key frame of the following video signal data are played at the same moment. (The timing calculation for one adjacent pair is sketched below.)
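The relations in steps a) to c) reduce to a small timing calculation: out time = in time + (out point - in point), with the next signal's in point set to its own marker key frame and its in time set to the previous signal's out time. A minimal sketch follows; the function name, the default values and the use of seconds are assumptions made for illustration.

```python
def cut_pair(prev_keyframe_t: float, next_keyframe_t: float,
             prev_in_point: float = 0.0, prev_in_time: float = 0.0):
    """Cut one adjacent pair of signals so their marker key frames coincide.

    Returns (prev_out_point, prev_out_time, next_in_point, next_in_time).
    """
    prev_out_point = prev_keyframe_t                                 # step a)
    prev_out_time = prev_in_time + (prev_out_point - prev_in_point)  # out-time rule
    next_in_point = next_keyframe_t                                  # step b)
    next_in_time = prev_out_time            # key frames land on the same moment
    return prev_out_point, prev_out_time, next_in_point, next_in_time

# Illustrative numbers: key frame of signal 1 at 12.4 s, of signal 2 at 3.0 s.
print(cut_pair(12.4, 3.0))   # -> (12.4, 12.4, 3.0, 12.4)
```

With these illustrative numbers, both key frames appear on the output timeline at 12.4 s: the first signal ends on its key frame, and the second signal starts on its key frame at that same moment.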
In another embodiment of the invention, the signal data in the videos are audio signal data. Feature extraction is performed on each of the imported audio signal data by means of audio retrieval technology, the similarity between videos is analysed and calculated, and the associated videos are superimposed so that highly similar audio is played synchronously.
Optionally, the specific steps for judging, according to the marker scenes of the audio signal data, whether each pair of videos is correlated are as follows:
i') selecting one video and comparing the marker scenes of its audio signal with those of the other audio signals;
ii') if they are highly similar, saving the information of the marker scene into the similar-data queue, then selecting the next video and comparing it with the other videos;
iii') if they are not similar, continuing to compare the current video with the other videos, until the marker-scene similarity of the audio signals of all videos has been compared.
In step b'), highly similar audio signals are played at the same moment by adjusting the in/out points and in/out times of the adjacent audio signal data, specifically:
a') the in point and in time of the first audio signal default to 0, and its out point and out time need to be cut: the out point is the time value of its marker key frame, and the out time is the in time plus the difference between the out point and the in point;
b') the second audio signal needs its in point and in time cut: the in point is the time value of its marker key frame, and the in time is the out time of the previous signal;
c') the second group of data nodes is adjusted, i.e. the switching scene of the second and third audio signals; after the switching key frames have been found according to step a'), the out point and out time of the second audio signal and the in point and in time of the third audio signal are cut, and so on. The above steps are performed on each pair of adjacent audio signal data in the similar-data queue, to guarantee that the marker key frame sound of the previous audio signal and that of the following audio signal are played at the same moment.
The method for superimposing associated videos of the invention uses retrieval techniques to find the highly similar pictures of several videos, cuts the timing according to those pictures and superimposes the videos, achieving fast, accurate and intelligent editing and splicing. This greatly improves precision and reduces manual operation, thereby shortening the operating time and greatly improving video production efficiency.
Description of the drawings
Fig. 1 is a flow chart of a method for superimposing associated videos according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the intuitive comparison of associated videos in a method for superimposing associated videos according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the similar-data queue in a method for superimposing associated videos according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the video superposition effect of a method for superimposing associated videos according to an embodiment of the present invention.
Specific embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the invention is described in further detail below in conjunction with the accompanying drawings. The embodiments described below are only some embodiments of the present invention; they serve only to explain and illustrate the invention and are not to be taken as limiting its scope of protection.
Fig. 1 is a flow chart of a method for superimposing associated videos according to an embodiment of the present invention.
With reference to Fig. 1, embodiment 1 of the present invention discloses a method for superimposing associated videos. The method comprises performing correlation judgement and sequencing P1 on videos and performing superposition processing P2 on the associated videos, wherein:
Correlation judgement and sequencing P1 comprises:
S101: performing feature extraction on the signal data of each of the imported videos, and segmenting each signal-data sequence into several indexed marker scenes (one possible segmentation is sketched after this list);
S102: judging, according to the marker scenes of the signal data, whether each pair of videos is correlated;
S103: generating a similar-data queue of ordered marker scenes, wherein, when marker-scene information is added to the similar-data queue, it is added in order of the scene times of the highly similar scenes of the videos;
Superposition processing P2 of the associated videos comprises:
S201: taking, in order from the similar-data queue, all the data nodes of the first two signal data, selecting from these data nodes the two marker scenes with the highest similarity value, and choosing one frame from each of the two marker scenes as the marker key frame of the corresponding signal data;
S202: after the marker key frames are determined, adjusting the in/out points and in/out times of the adjacent signal data so that the two marker key frames of the highly similar signal data are played at the same moment;
S203: adjusting the in/out-point times of the signal data of the two adjacent videos so that the in point, out point and in time of the different signal data are consistent and the different signal data play simultaneously.
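Step S101 presupposes that each signal-data sequence can be cut into indexed marker scenes, but the patent does not fix a segmentation criterion. The sketch below, which reuses the MarkerScene class from the earlier sketch, simply cuts wherever the feature distance between consecutive frames exceeds a threshold; the frame rate, distance measure and threshold are all assumptions made for illustration.

```python
def segment_into_marker_scenes(video_id: int, frame_features: list,
                               fps: float = 25.0,
                               cut_threshold: float = 0.5) -> list:
    """Split a per-frame feature sequence into indexed marker scenes (cf. S101)."""
    def distance(f1, f2):
        # mean absolute difference between two feature vectors
        return sum(abs(a - b) for a, b in zip(f1, f2)) / max(len(f1), 1)

    scenes, scene_start = [], 0
    for i in range(1, len(frame_features)):
        if distance(frame_features[i - 1], frame_features[i]) > cut_threshold:
            scenes.append(MarkerScene(video_id, len(scenes),
                                      scene_start / fps, i / fps,
                                      list(frame_features[scene_start])))
            scene_start = i
    if frame_features:        # close the final scene
        scenes.append(MarkerScene(video_id, len(scenes),
                                  scene_start / fps, len(frame_features) / fps,
                                  list(frame_features[scene_start])))
    return scenes
```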
As an alternative embodiment, the signal data in the videos of embodiment 2 may specifically be video signals, with feature extraction performed on each of the imported video signals by means of video retrieval technology.
Referring to Fig. 2 and Fig. 3, the specific steps for judging, according to the marker scenes of the video signals, whether each pair of videos is correlated are as follows:
i) selecting video 1 and comparing its video signal with the marker scenes (scene 11, scene 17, scene 30, scene 55, scene 63, scene 80, scene 280, scene 300, scene 34, scene 56, ...) of the video signals of the other videos (video 2, video 3, ...);
ii) if they are highly similar, saving the information of the marker scene into the similar-data queue, then selecting the next video and comparing it with the other videos;
iii) if they are not similar, continuing to compare the current video with the other videos, until the marker-scene similarity of the video signals of all videos has been compared.
When the marker key frames are selected, the scene pictures in the data nodes are previewed, and one frame of the marker scene is chosen as the marker key frame of the first video and of the second video respectively.
With reference to Fig. 4, picture switching at the same moment is achieved by adjusting the in/out points and in/out times of the adjacent video signals, specifically:
a) the in point and in time of the first video signal default to 0, and its out point and out time need to be cut: the out point is the time value of its marker key frame, and the out time is the in time plus the difference between the out point and the in point;
b) the second video signal needs its in point and in time cut: the in point is the time value of its marker key frame, and the in time is the out time of the previous video signal;
c) the second group of data nodes is then adjusted, i.e. the switching scene of the second and third video signals; after the switching key frames have been found by step a'), the out point and out time of the second video signal and the in point and in time of the third video signal are cut, and so on. The above steps are performed on the data nodes of each pair of adjacent video signal data in the similar-data queue, so that the marker key frame of the previous video signal data and the marker key frame of the following video signal data are played at the same moment (a sketch that chains this adjustment over the whole queue follows).
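The "and so on" in step c) is a chain: the out point and out time computed for one signal become the constraint on the next signal's in point and in time. A sketch of that chain over the whole similar-data queue follows, again building on the earlier data-structure sketch; the keyframe_time callback and the edit-decision dictionaries are illustrative assumptions, not part of the patent.

```python
def superimpose(queue: SimilarDataQueue, keyframe_time) -> list:
    """Chain the in/out-point adjustment over every adjacent pair in the queue.

    `keyframe_time(node, side)` must return the marker-key-frame time chosen
    for the "a" or "b" scene of a data node (cf. the preview step above).
    """
    edits = []
    in_point, in_time = 0.0, 0.0          # first signal defaults to 0 / 0
    for node in queue.nodes:
        out_point = keyframe_time(node, "a")           # key frame of this signal
        out_time = in_time + (out_point - in_point)    # out-time rule
        edits.append({"video": node.scene_a.video_id,
                      "in_point": in_point, "out_point": out_point,
                      "in_time": in_time, "out_time": out_time})
        # the following signal enters on its own key frame, at this out time
        in_point = keyframe_time(node, "b")
        in_time = out_time
    # the last signal (scene_b of the final node) keeps the computed in point/time
    # and runs to its natural end; it is not trimmed here
    return edits
```

In use, keyframe_time could simply return the start time of the scene chosen during the preview step, for example `lambda node, side: node.scene_a.start if side == "a" else node.scene_b.start`.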
As another alternative embodiment, the signal data in the videos of embodiment 3 are audio signal data. Feature extraction is performed on each of the imported audio signal data by means of audio retrieval technology, the similarity between videos is analysed and calculated, and the associated videos are superimposed so that highly similar audio is played synchronously.
Referring to Fig. 2 and Fig. 3, optionally, the specific steps for judging, according to the marker scenes of the audio signal data, whether each pair of videos is correlated are as follows:
i') selecting one video and comparing the marker scenes of its audio signal with those of the other audio signals;
ii') if they are highly similar, saving the information of the marker scene into the similar-data queue, then selecting the next video and comparing it with the other videos;
iii') if they are not similar, continuing to compare the current video with the other videos, until the marker-scene similarity of the audio signals of all videos has been compared.
With reference to Fig. 4, highly similar audio signals are played at the same moment by adjusting the in/out points and in/out times of the adjacent audio signal data, specifically:
a') the in point and in time of the first audio signal default to 0 (the user may also set different values), and its out point and out time need to be cut: the out point is the time value of its marker key frame, and the out time is the in time plus the difference between the out point and the in point;
b') the second audio signal needs its in point and in time cut: the in point is the time value of its marker key frame, and the in time is the out time of the previous signal;
c') the second group of data nodes is adjusted, i.e. the switching scene of the second and third audio signals; after the switching key frames have been found according to step a'), the out point and out time of the second audio signal and the in point and in time of the third audio signal are cut, and so on. The above steps are performed on each pair of adjacent audio signal data in the similar-data queue, to guarantee that the marker key frame sound of the previous audio signal and that of the following audio signal are played at the same moment.
From the above it can be seen that the key technical point of the invention is: when several associated videos are superimposed, highly similar marker key frames are chosen, and the in/out points and times of the video or audio are adjusted so that the highly similar pictures or sounds of the different videos are played at the same moment, with video and audio kept in sync.
According to the technical solutions described above, the present invention uses retrieval techniques to find the highly similar pictures of several videos and cuts the timing according to those pictures to superimpose the videos, enabling fast, accurate and intelligent editing and splicing.
Specific embodiments of the invention have been described in detail above, but those skilled in the art may make various changes and modifications to the invention according to its inventive concept; such changes and modifications, insofar as they do not depart from the spirit and scope of the invention, all fall within the scope of the claims of the present invention.

Claims (8)

1. A method for superimposing associated videos, characterised in that the method comprises performing correlation judgement and sequencing on videos and performing superposition processing on the associated videos, wherein:
correlation judgement and sequencing comprise:
a) performing feature extraction on the signal data of each of the imported videos, and segmenting each signal-data sequence into several indexed marker scenes;
b) judging, according to the marker scenes of the signal data, whether each pair of videos is correlated;
c) generating a similar-data queue of ordered marker scenes, wherein, when marker-scene information is added to the similar-data queue, it is added in order of the scene times of the highly similar scenes of the videos;
and superposition processing of the associated videos comprises:
a') taking, in order from the similar-data queue, all the data nodes of the first two signal data, selecting from these data nodes the two marker scenes with the highest similarity value, and choosing one frame from each of the two marker scenes as the marker key frame of the corresponding signal data;
b') after the marker key frames are determined, adjusting the in/out points and in/out times of the adjacent signal data so that the two marker key frames of the highly similar signal data are played at the same moment;
c') adjusting the in/out-point times of the signal data of the two adjacent videos so that the in point, out point and in time of the different signal data are consistent and the different signal data play simultaneously.
2. The method according to claim 1, characterised in that the signal data in the videos are video signals, and feature extraction is performed on each of the imported video signals by means of video retrieval technology.
3. The method according to claim 2, characterised in that the specific steps for judging, according to the marker scenes of the video signals, whether each pair of videos is correlated are:
i) selecting one video and comparing its video signal with the marker scenes of the video signals of the other videos;
ii) if they are highly similar, saving the information of the marker scene into the similar-data queue, then selecting the next video and comparing it with the other videos;
iii) if they are not similar, continuing to compare the current video with the other videos, until the marker-scene similarity of the video signals of all videos has been compared.
4. The method according to claim 2, characterised in that, in step a'), when the marker key frames are selected, the scene pictures in the data nodes are previewed, and one frame of the marker scene is chosen as the marker key frame of the first video and of the second video respectively.
5. The method according to claim 2, characterised in that, in step b'), picture switching at the same moment is achieved by adjusting the in/out points and in/out times of the adjacent video signals, specifically:
a) the in point and in time of the first video signal default to 0, and its out point and out time need to be cut: the out point is the time value of its marker key frame, and the out time is the in time plus the difference between the out point and the in point;
b) the second video signal needs its in point and in time cut: the in point is the time value of its marker key frame, and the in time is the out time of the previous video signal;
c) the second group of data nodes is then adjusted, i.e. the switching scene of the second and third video signals; after the switching key frames have been found by step a'), the out point and out time of the second video signal and the in point and in time of the third video signal are cut, and so on; the above steps are performed on the data nodes of each pair of adjacent video signal data in the similar-data queue, so that the marker key frame of the previous video signal data and the marker key frame of the following video signal data are played at the same moment.
6. The method according to claim 1, characterised in that the signal data in the videos are audio signal data; feature extraction is performed on each of the imported audio signal data by means of audio retrieval technology, the similarity between videos is analysed and calculated, and the associated videos are superimposed so that highly similar audio is played synchronously.
7. The method according to claim 6, characterised in that the specific steps for judging, according to the marker scenes of the audio signal data, whether each pair of videos is correlated are:
i') selecting one video and comparing the marker scenes of its audio signal with those of the other audio signals;
ii') if they are highly similar, saving the information of the marker scene into the similar-data queue, then selecting the next video and comparing it with the other videos;
iii') if they are not similar, continuing to compare the current video with the other videos, until the marker-scene similarity of the audio signals of all videos has been compared.
8. The method according to claim 6, characterised in that, in step b'), the highly similar audio signals are played at the same moment by adjusting the in/out points and in/out times of the adjacent audio signal data, specifically:
a') the in point and in time of the first audio signal default to 0 (the user may also set different values), and its out point and out time need to be cut: the out point is the time value of its marker key frame, and the out time is the in time plus the difference between the out point and the in point;
b') the second audio signal needs its in point and in time cut: the in point is the time value of its marker key frame, and the in time is the out time of the previous signal;
c') the second group of data nodes is adjusted, i.e. the switching scene of the second and third audio signals; after the switching key frames have been found according to step a'), the out point and out time of the second audio signal and the in point and in time of the third audio signal are cut, and so on; the above steps are performed on each pair of adjacent audio signal data in the similar-data queue, to guarantee that the marker key frame sound of the previous audio signal and that of the following audio signal are played at the same moment.
CN201711446435.9A 2017-12-27 2017-12-27 Method for superimposing associated videos Pending CN109982126A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711446435.9A CN109982126A (en) 2017-12-27 2017-12-27 Method for superimposing associated videos

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711446435.9A CN109982126A (en) 2017-12-27 2017-12-27 Method for superimposing associated videos

Publications (1)

Publication Number Publication Date
CN109982126A true CN109982126A (en) 2019-07-05

Family

ID=67071648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711446435.9A Pending CN109982126A (en) 2017-12-27 2017-12-27 Method for superimposing associated videos

Country Status (1)

Country Link
CN (1) CN109982126A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1461142A (en) * 2003-06-30 2003-12-10 北京大学计算机科学技术研究所 Video segment searching method based on contents
CN101409831A (en) * 2008-07-10 2009-04-15 浙江师范大学 Method for processing multimedia video object
CN101464893A (en) * 2008-12-31 2009-06-24 清华大学 Method and device for extracting video abstract
CN101882308A (en) * 2010-07-02 2010-11-10 上海交通大学 Method for improving accuracy and stability of image mosaic
CN104376003A (en) * 2013-08-13 2015-02-25 深圳市腾讯计算机系统有限公司 Video retrieval method and device
CN104392416A (en) * 2014-11-21 2015-03-04 中国电子科技集团公司第二十八研究所 Video stitching method for sports scene
CN104519401A (en) * 2013-09-30 2015-04-15 华为技术有限公司 Video division point acquiring method and equipment
CN105100892A (en) * 2015-07-28 2015-11-25 努比亚技术有限公司 Video playing device and method
CN106504306A (en) * 2016-09-14 2017-03-15 厦门幻世网络科技有限公司 A kind of animation fragment joining method, method for sending information and device
US20170127123A1 (en) * 2015-11-02 2017-05-04 AppNexus Inc. Systems and methods for reducing digital video latency

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1461142A (en) * 2003-06-30 2003-12-10 北京大学计算机科学技术研究所 Video segment searching method based on contents
CN101409831A (en) * 2008-07-10 2009-04-15 浙江师范大学 Method for processing multimedia video object
CN101464893A (en) * 2008-12-31 2009-06-24 清华大学 Method and device for extracting video abstract
CN101882308A (en) * 2010-07-02 2010-11-10 上海交通大学 Method for improving accuracy and stability of image mosaic
CN104376003A (en) * 2013-08-13 2015-02-25 深圳市腾讯计算机系统有限公司 Video retrieval method and device
CN104519401A (en) * 2013-09-30 2015-04-15 华为技术有限公司 Video division point acquiring method and equipment
CN104392416A (en) * 2014-11-21 2015-03-04 中国电子科技集团公司第二十八研究所 Video stitching method for sports scene
CN105100892A (en) * 2015-07-28 2015-11-25 努比亚技术有限公司 Video playing device and method
US20170127123A1 (en) * 2015-11-02 2017-05-04 AppNexus Inc. Systems and methods for reducing digital video latency
CN106504306A (en) * 2016-09-14 2017-03-15 厦门幻世网络科技有限公司 A kind of animation fragment joining method, method for sending information and device

Similar Documents

Publication Publication Date Title
KR100997599B1 (en) Method for processing contents
US7432940B2 (en) Interactive animation of sprites in a video production
US20170201793A1 (en) TV Content Segmentation, Categorization and Identification and Time-Aligned Applications
US20100094441A1 (en) Image selection apparatus, image selection method and program
US20100104261A1 (en) Brief and high-interest video summary generation
JP2010518673A (en) Method and system for video indexing and video synopsis
US9594957B2 (en) Apparatus and method for identifying a still image contained in moving image contents
GB2354104A (en) An editing method and system
JP2008134725A (en) Content reproduction device
JP2007525900A (en) Method and apparatus for locating content in a program
US8634708B2 (en) Method for creating a new summary of an audiovisual document that already includes a summary and reports and a receiver that can implement said method
JP4732418B2 (en) Metadata processing method
JP6934402B2 (en) Editing system
JP2002281457A (en) Replaying video information
KR20160123647A (en) Apparatus and method for providing additional information usung object tracking
US20040054668A1 (en) Dynamic image content search information managing apparatus
CN109982126A (en) Method for superimposing associated videos
KR100650665B1 (en) A method for filtering video data
JP2006039753A (en) Image processing apparatus and image processing method
US20100079673A1 (en) Video processing apparatus and method thereof
JP4652389B2 (en) Metadata processing method
EP1331577A1 (en) Dynamic image content search information managing apparatus
Coimbra et al. The shape of the game
JP2000069420A (en) Video image processor
Saini et al. Automated Video Mashups: Research and Challenges

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100101 Beijing city Chaoyang District District three building 10, building 1, Hui Li two

Applicant after: Aidipu Technology Co., Ltd

Address before: 100101 Beijing city Chaoyang District District three building 10, building 1, Hui Li two

Applicant before: IDEAPOOL (BEIJING) CULTURE AND TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190705

WD01 Invention patent application deemed withdrawn after publication