CN104506947A - Video fast forward/fast backward speed self-adaptive regulating method based on semantic content - Google Patents
- Publication number
- CN104506947A CN104506947A CN201410817471.1A CN201410817471A CN104506947A CN 104506947 A CN104506947 A CN 104506947A CN 201410817471 A CN201410817471 A CN 201410817471A CN 104506947 A CN104506947 A CN 104506947A
- Authority
- CN
- China
- Prior art keywords
- shot
- node
- semantic
- video
- label
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47217—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/658—Transmission by the client directed to the server
- H04N21/6587—Control parameters, e.g. trick play commands, viewpoint selection
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Databases & Information Systems (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Studio Devices (AREA)
Abstract
The invention relates to a method for adaptively adjusting video fast-forward/rewind speed based on semantic content. The method comprises the following steps: first, taking the shot as the basic unit, extract the shot-semantics set of each shot; then obtain the semantic context of the video shots in a supervised manner and represent that context with a context tag tree; then adjust the weight of each shot according to the video's context tag tree; finally, determine the fast-forward/rewind rate of each shot from its weight. The method adaptively adjusts the video fast-forward/rewind speed according to the semantic content of the video and the context of each shot, helping the user quickly skip uninteresting content without missing interesting content because the skip speed is too high, which is convenient for the user.
Description
Technical field
The present invention relates to the field of video playback, and in particular to a semantic-content-based method for adaptively adjusting video fast-forward/rewind speed.
Background technology
When watching a video, users frequently encounter parts they are not interested in and want to skip quickly. Video player software and online video players therefore usually provide fast-forward and rewind functions and let the user set the fast-forward or rewind speed.
However, today's fast-forward/rewind functions require the user to set the speed manually. Because the user cannot accurately predict which parts of the content are worth watching, a speed set too high often causes the user to skip past interesting content, while a speed set too low prevents the user from skipping uninteresting parts in the shortest time. As a result, the user cannot quickly and accurately locate the video content of interest, a great deal of the user's time is wasted, and the user experience suffers. Today's methods for adjusting video fast-forward speed therefore need improvement.
Summary of the invention
The object of the present invention is to provide a semantic-content-based method for adaptively adjusting video fast-forward/rewind speed, which adjusts the fast-forward or rewind speed according to the semantic content of the video and the context of each shot, helping the user skip uninteresting content quickly without missing interesting content because the speed is too high, thereby providing convenience to the user.
To achieve the above object, the technical scheme of the present invention is a semantic-content-based method for adaptively adjusting video fast-forward/rewind speed, characterized in that it is realized according to the following steps:
S1: Taking the shot as the basic unit, extract the shot semantics of each shot in the input video data; analyze the semantic context among the shot semantics of the shots, and expand the semantic sequence formed by the shot semantics of the shots into a context tag tree that characterizes the shot-semantic context among the shots;
S2: Set the initial weight of each shot according to its shot semantics, and adjust the shot weights according to the context tag tree;
S3: Set an initial fast-forward or rewind speed V for the video to be played fast forward or rewound; when the user selects fast forward or rewind, adjust the fast-forward or rewind speed V_curr of each shot according to its weight.
Further, step S1 also comprises the following steps:
S11: Perform shot segmentation on each of the n input training video segments video_j to obtain the shots of video_j, where j ∈ {1, ..., n}; taking the shot as the unit, manually annotate the shot semantics of each shot; classify the annotated shot semantics, construct a shot-semantics training set for each semantic class, and train a classifier to obtain a shot-semantics analyzer. Input the video segment video′ to be played fast forward or rewound, consisting of t shots, and use the shot-semantics analyzer to obtain the shot semantics of each shot in video′; represent the semantics of each shot with a shot-semantic label l_i, and arrange the shot semantics of the shots in video′ in temporal order to obtain the shot-semantic sequence wu′ = {l_1, ..., l_t}, where l_i ∈ L, L is the set of shot-semantic labels, each element of which represents one kind of shot semantics, and i is the index of label l_i;
S12: Expand the shot-semantic sequence wu′ into a context tag tree LT. Each leaf node of LT is a shot-semantic label l, l ∈ L; each non-leaf node of LT is a context tag representing the context among the leaf nodes under it. The context tags comprise: the video context tag video, the scene context tag scene, and general context tags nl, nl ∈ NL, where NL is the set of general context tags. The context represented by the video context tag video is "the leaf nodes under this tag jointly express the semantic content of the video", and the context represented by the scene context tag scene is "the leaf nodes under this tag jointly express the semantic content of the same video scene". In the context tag tree LT, the video context tag video is the root node; the children of the video context tag video are scene context tags scene; the children of a scene context tag scene are general context tags or shot-semantic labels; and the children of a general context tag nl are general context tags or shot-semantic labels.
Further, in step S11, an SVM multi-class classifier is adopted as the classification model and trained with the shot-semantics training set; after training, the shot-semantics analyzer is obtained.
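Step S11 trains a multi-class classifier over annotated shot features and then labels new shots with it. As a dependency-free illustration of the same train-then-classify flow, the sketch below substitutes a nearest-centroid classifier for the patent's SVM; the labels ("serve", "replay") and feature vectors are invented toy data, not from the patent.

```python
# Hypothetical stand-in for the patent's SVM multi-classifier: a nearest-centroid
# classifier over shot feature vectors. All labels and features are toy values.
def train_centroids(train_set):
    """train_set: {label: [feature_vector, ...]} -> {label: centroid vector}."""
    centroids = {}
    for label, vecs in train_set.items():
        dim = len(vecs[0])
        centroids[label] = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    return centroids

def classify_shot(centroids, v):
    """Return the label whose centroid is closest (squared distance) to v."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], v))

train = {
    "serve":  [[0.9, 0.1], [0.8, 0.2]],
    "replay": [[0.1, 0.9], [0.2, 0.8]],
}
model = train_centroids(train)
labels = [classify_shot(model, v) for v in ([0.85, 0.15], [0.15, 0.9])]
```

In a real implementation the centroid model would be replaced by the SVM classifier the patent specifies, trained on the quantized visual feature vectors of the annotated shots.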
Further, in step S12, the shot-semantic sequence wu′ is expanded into the context tag tree LT according to the following steps:
S121: Generate one leaf node for each shot-semantic label l_i in the shot-semantic sequence wu′, producing from left to right the initial tag-node sequence Curr = {c_1, ..., c_t}, where c_i = l_i and c_i is the i-th tag node in Curr;
S122: Traverse the tag-node sequence Curr from left to right; for a subsequence {c_k, ..., c_{k+m}}, if a context-tag generation rule p ∈ P is satisfied, i.e. there exists a context tag c_p describing the context formed by c_k, ..., c_{k+m}, then generate a new tag node labelled c_p as the father node, with the tag nodes of the subsequence as its children, and replace the subsequence {c_k, ..., c_{k+m}} in Curr with c_p. Here P is the set of context generation rules; a context-tag generation rule has the form p: c_p ← c_1 ... c_s (c_s ∈ L ∪ NL, c_p ∈ {video} ∪ {scene} ∪ NL); k, s and m are positive integers greater than or equal to 1, and k+m ≤ t;
S123: When the traversal ends, a new tag-node sequence Curr has been generated;
S124: Judge whether the length of the new tag-node sequence Curr is 1; if not, return to step S122 and traverse the new sequence Curr; otherwise terminate.
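Steps S121-S124 amount to a bottom-up rewriting loop: repeatedly scan the node sequence, fold any subsequence matched by a rule into a new parent node, and stop when one root remains. The sketch below is a minimal interpretation under invented rules; `rules` plays the role of the rule set P, and the tennis-style labels are illustrative (the patent leaves P video-type-specific).

```python
# Sketch of S121-S124: fold a shot-label sequence into a context tag tree using
# rewrite rules of the form c_p <- (c_1 ... c_s). Rules and labels are toy data.
class Node:
    def __init__(self, label, children=()):
        self.label, self.children = label, list(children)

def build_tag_tree(labels, rules):
    """rules: list of (parent_label, tuple_of_child_labels), tried in order."""
    seq = [Node(l) for l in labels]          # S121: one leaf per label
    while len(seq) > 1:                      # S124: stop at a single root
        changed, i, new_seq = False, 0, []
        while i < len(seq):                  # S122: left-to-right scan
            for parent, pattern in rules:
                s = len(pattern)
                if tuple(n.label for n in seq[i:i + s]) == pattern:
                    new_seq.append(Node(parent, seq[i:i + s]))
                    i += s
                    changed = True
                    break
            else:
                new_seq.append(seq[i])
                i += 1
        seq = new_seq                        # S123: the new sequence Curr
        if not changed:                      # no rule applies: avoid looping forever
            break
    return seq[0] if len(seq) == 1 else Node("video", seq)

# Toy rules: a serve followed by a rally forms a "point" scene; two scenes form the video.
rules = [
    ("scene", ("serve", "rally")),
    ("video", ("scene", "scene")),
]
root = build_tag_tree(["serve", "rally", "serve", "rally"], rules)
```

Note the `changed` guard is our addition: the patent assumes P always reduces the sequence to a single video root, which a practical implementation should not take for granted.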
Further, step S2 also comprises the following steps:
S21: Divide the shots into non-key shots and key shots according to their semantic information. In the context tag tree LT, obtain the shot-semantic label l corresponding to each shot; if the shot is a non-key shot, mark its label l with the non-key tag sub and assign it weight 1; if the shot is a key shot, mark its label l with the key tag chi and assign it weight q_chi. This completes the setting of the leaf-node weights corresponding to the shots in the context tag tree LT;
S22: Modify the weights of the non-leaf nodes according to the context information in the context tag tree LT.
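A minimal sketch of the leaf-weight assignment in S21, assuming key labels carry a per-label weight q_chi and every other label is non-key; the label names and weight values are illustrative, not from the patent.

```python
# Sketch of S21: shots tagged non-key ("sub") get weight 1; key shots ("chi")
# get their per-label weight q_chi. key_weights and the labels are toy data.
def leaf_weights(shot_labels, key_weights):
    """key_weights maps key labels to q_chi; any label absent from it is non-key."""
    marks, weights = {}, {}
    for l in shot_labels:
        if l in key_weights:
            marks[l], weights[l] = "chi", key_weights[l]
        else:
            marks[l], weights[l] = "sub", 1
    return marks, weights

marks, weights = leaf_weights(["ad", "goal", "crowd"], {"goal": 4})
```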
Further, step S22 also comprises the following steps:
S221: Search the context tag tree LT for a non-leaf node node_h whose child nodes have all been marked with the non-key tag sub or the key tag chi but which is itself unmarked; if such a node exists, go to step S222; otherwise go to step S223;
S222: If all child nodes of node_h are marked with the non-key tag sub, mark node_h with the non-key tag sub and assign it weight 1; otherwise mark node_h with the key tag chi and set its weight to the sum of the weights of its child nodes; after marking, return to step S221;
S223: Traverse the context tag tree LT breadth-first from the root node, and modify each child node node_hg of a non-leaf node node_h as follows:
Q(node_hg)_new = Q(node_h) * Q(node_hg) / (Q(node_h1) + ... + Q(node_hd))
where Q(node) is the weight of node node, Q(node_hg)_new is the modified weight of node_hg, and d is the number of child nodes of node_h.
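Steps S221-S223 combine a bottom-up marking/summing pass with a breadth-first rescaling pass against the parent's weight. A minimal sketch under that reading follows; the node class, field names, and the toy two-leaf tree are ours, not the patent's.

```python
# Sketch of S221-S223: (1) propagate sub/chi marks and weight sums from the
# leaves up; (2) rescale each child breadth-first by
#   Q(child)_new = Q(parent) * Q(child) / sum(Q of all siblings)
from collections import deque

class TNode:
    def __init__(self, children=(), mark=None, weight=0):
        self.children, self.mark, self.weight = list(children), mark, weight

def mark_up(node):
    """S221-S222: bottom-up marking; leaves are assumed pre-marked by S21."""
    if not node.children:
        return
    for c in node.children:
        mark_up(c)
    if all(c.mark == "sub" for c in node.children):
        node.mark, node.weight = "sub", 1
    else:
        node.mark = "chi"
        node.weight = sum(c.weight for c in node.children)

def rescale_down(root):
    """S223: breadth-first rescaling of child weights against the parent."""
    q = deque([root])
    while q:
        node = q.popleft()
        total = sum(c.weight for c in node.children)
        for c in node.children:
            if total:
                c.weight = node.weight * c.weight / total
            q.append(c)

# Toy tree: a key leaf (weight 3) and a non-key leaf (weight 1) under one
# scene node, under the video root.
leaves = [TNode(mark="chi", weight=3), TNode(mark="sub", weight=1)]
root = TNode(children=[TNode(children=leaves)])
mark_up(root)
rescale_down(root)
```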
Further, in step S3, the fast-forward or rewind speed V_curr of each shot is obtained as follows: V_curr = V / Q(l_i), where l_i is the shot-semantic label of the shot.
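The speed rule itself is a one-liner: heavier (more interesting) shots play back slower during fast forward. A hedged sketch with invented weights:

```python
# V_curr = V / Q(l_i): the per-shot playback speed is the base speed divided
# by the shot's weight. The base speed and weights below are illustrative.
def shot_speeds(base_speed, shot_weights):
    return [base_speed / q for q in shot_weights]

speeds = shot_speeds(8.0, [1, 4, 2])   # e.g. an 8x base fast-forward rate
```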
Compared with the prior art, the present invention has the following beneficial effects: the proposed semantic-content-based method for adaptively adjusting video fast-forward/rewind speed obtains the importance of each shot according to the shot semantics and the shot-semantic context of the video, assigns corresponding weights, and then automatically adjusts the fast-forward/rewind speed of each shot according to those weights. The method helps the user automatically adjust the fast-forward or rewind speed of each shot, avoiding missing highlights or failing to skip non-highlight content quickly because the user set an inappropriate fast-forward or rewind speed. The method therefore provides convenience to the user and improves the user experience of video player software.
Description of the drawings
Fig. 1 is the flow chart of the semantic-content-based method for adaptively adjusting video fast-forward/rewind speed of the present invention.
Fig. 2 shows a shot-semantic sequence and the corresponding context tag tree in the present invention.
Embodiments
The technical scheme of the present invention is described in detail below with reference to the accompanying drawings.
The invention provides a semantic-content-based method for adaptively adjusting video fast-forward/rewind speed, as shown in Fig. 1, characterized in that it is realized according to the following steps:
S1: Taking the shot as the basic unit, extract the shot semantics of each shot in the input video data; analyze the semantic context among the shot semantics of the shots, and expand the semantic sequence formed by the shot semantics of the shots into a context tag tree that characterizes the shot-semantic context among the shots;
S2: Set the initial weight of each shot according to its shot semantics, and adjust the shot weights according to the context tag tree;
S3: Set an initial fast-forward or rewind speed V for the video to be played fast forward or rewound; when the user selects fast forward or rewind, adjust the fast-forward or rewind speed V_curr of each shot according to its weight.
Further, in the present embodiment, step S1 also comprises the following steps:
S11: Perform shot segmentation on each of the n input training video segments video_j, where j ∈ {1, ..., n}, to obtain r training shots; extract and quantize the visual features of each shot to form a visual feature vector v; taking the shot as the unit, manually annotate the shot semantics of each of the r shots and construct the shot-semantics training set; classify the annotated shot semantics, adopt an SVM multi-class classifier as the classification model, train it with the shot-semantics training set, and obtain the shot-semantics analyzer after training. Input the video segment video′ to be played fast forward or rewound, consisting of t shots, and use the shot-semantics analyzer to obtain the shot semantics of each shot in video′: in the present embodiment, the visual features of each shot are extracted to form a feature vector v, and v is input to the shot-semantics analyzer to obtain the shot semantics of each shot of video′. Represent the semantics of each shot with a shot-semantic label l_i, and arrange the shot semantics of the shots in video′ in temporal order to obtain the shot-semantic sequence wu′ = {l_1, ..., l_t}, where l_i ∈ L, L is the set of shot-semantic labels, each element of which represents one kind of shot semantics, and i is the index of label l_i;
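The embodiment does not specify which visual features make up the vector v. As one plausible stand-in, the sketch below computes a coarse, normalized RGB color histogram over the pixels of a key frame; the frame data is toy input and the bin count is an assumption.

```python
# Hypothetical shot feature: a quantized color histogram of a key frame,
# normalized so the bins sum to 1. With bins=2 the vector has 2**3 = 8 entries.
def color_histogram(frame, bins=2):
    """frame: list of (r, g, b) tuples with channels in 0..255."""
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in frame:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    total = len(frame)
    return [h / total for h in hist]

v = color_histogram([(10, 10, 10), (250, 250, 250), (250, 250, 250), (10, 250, 10)])
```

In practice the feature vector would be computed over sampled frames of the whole shot (and likely combined with motion or texture features) before being fed to the classifier.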
S12: Expand the shot-semantic sequence wu′ into a context tag tree LT with a tree structure. Each leaf node of LT is a shot-semantic label l, l ∈ L; each non-leaf node of LT is a context tag representing the context among the leaf nodes under it. The context tags comprise: the video context tag video, the scene context tag scene, and general context tags nl, nl ∈ NL, where NL is the set of general context tags; different types of video have different sets NL. The context represented by the video context tag video is "the leaf nodes under this tag jointly express the semantic content of the video", and the context represented by the scene context tag scene is "the leaf nodes under this tag jointly express the semantic content of the same video scene". In the context tag tree LT, as shown in Fig. 2, the video context tag video is the root node; the children of the video context tag video are scene context tags scene; the children of a scene context tag scene are general context tags or shot-semantic labels; and the children of a general context tag nl are general context tags or shot-semantic labels.
Further, in step S12, the shot-semantic sequence wu′ is expanded into the context tag tree LT according to the following steps:
S121: Generate one leaf node for each shot-semantic label l_i in the shot-semantic sequence wu′, producing from left to right the initial tag-node sequence Curr = {c_1, ..., c_t}, where c_i = l_i and c_i is the i-th tag node in Curr;
S122: Traverse the tag-node sequence Curr from left to right; for a subsequence {c_k, ..., c_{k+m}}, if a context-tag generation rule p ∈ P is satisfied, i.e. there exists a context tag c_p describing the context formed by c_k, ..., c_{k+m}, then generate a new tag node labelled c_p as the father node, with the tag nodes of the subsequence as its children, and replace the subsequence {c_k, ..., c_{k+m}} in Curr with c_p. Here P is the set of context generation rules; in the present embodiment, a context-tag generation rule has the form p: c_p ← c_1 ... c_s (c_s ∈ L ∪ NL, c_p ∈ {video} ∪ {scene} ∪ NL); k, s and m are positive integers greater than or equal to 1, k+m ≤ t, and different types of video have different context generation rule sets P;
S123: When the traversal ends, a new tag-node sequence Curr has been generated;
S124: Judge whether the length of the new tag-node sequence Curr is 1; if not, return to step S122 and traverse the new sequence Curr; otherwise terminate.
Further, step S2 also comprises the following steps:
S21: Divide the shots into non-key shots and key shots according to their semantic information. In the context tag tree LT, obtain the shot-semantic label l corresponding to each shot; if the shot is a non-key shot, mark its label l with the non-key tag sub and assign it weight 1; if the shot is a key shot, mark its label l with the key tag chi, assign it weight q_chi, and further divide the key shots into lev grades. This completes the setting of the leaf-node weights corresponding to the shots in the context tag tree LT;
S22: Modify the weights of the non-leaf nodes according to the context information in the context tag tree LT.
Further, step S22 also comprises the following steps:
S221: Search the context tag tree LT for a non-leaf node node_h whose child nodes have all been marked with the non-key tag sub or the key tag chi but which is itself unmarked; if such a node exists, go to step S222; otherwise go to step S223;
S222: If all child nodes of node_h are marked with the non-key tag sub, mark node_h with the non-key tag sub and assign it weight 1; otherwise mark node_h with the key tag chi and set its weight to the sum of the weights of its child nodes; after marking, return to step S221;
S223: Traverse the context tag tree LT breadth-first from the root node, and modify each child node node_hg of a non-leaf node node_h as follows:
Q(node_hg)_new = Q(node_h) * Q(node_hg) / (Q(node_h1) + ... + Q(node_hd))
where Q(node) is the weight of node node, Q(node_hg)_new is the modified weight of node_hg, and d is the number of child nodes of node_h.
Further, in step S3, the fast-forward or rewind speed V_curr of each shot is obtained as follows: V_curr = V / Q(l_i), where l_i is the shot-semantic label of the shot. When the user clicks fast forward or rewind, each shot in video′ is played fast forward or rewound at its speed V_curr.
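Putting S3 together end to end: a hypothetical fast-forward schedule showing, for each shot, the speed V_curr = V / Q and the wall-clock time the shot then occupies. The durations, weights, and base speed are invented toy data.

```python
# Sketch of applying S3 across a whole clip: each (duration, weight) shot plays
# at V_curr = base_speed / weight, taking duration / V_curr seconds on screen.
def ff_schedule(shots, base_speed):
    """shots: list of (duration_seconds, weight) -> list of (speed, screen_time)."""
    schedule = []
    for duration, q in shots:
        v_curr = base_speed / q
        schedule.append((v_curr, duration / v_curr))
    return schedule

sched = ff_schedule([(30, 1), (10, 5)], base_speed=10.0)
# A 30 s filler shot flies by at 10x; a 10 s key shot slows to 2x.
```

This illustrates the intended effect: low-weight filler consumes little screen time while high-weight shots are slowed enough for the user to notice them.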
The above are preferred embodiments of the present invention; all changes made according to the technical scheme of the present invention, provided that the resulting functions do not exceed the scope of the technical scheme, belong to the protection scope of the present invention.
Claims (7)
1. A semantic-content-based method for adaptively adjusting video fast-forward/rewind speed, characterized in that it is realized according to the following steps:
S1: Taking the shot as the basic unit, extract the shot semantics of each shot in the input video data; analyze the semantic context among the shot semantics of the shots, and expand the semantic sequence formed by the shot semantics of the shots into a context tag tree that characterizes the shot-semantic context among the shots;
S2: Set the initial weight of each shot according to its shot semantics, and adjust the shot weights according to the context tag tree;
S3: Set an initial fast-forward or rewind speed V for the video to be played fast forward or rewound; when the user selects fast forward or rewind, adjust the fast-forward or rewind speed V_curr of each shot according to its weight.
2. The semantic-content-based method for adaptively adjusting video fast-forward/rewind speed according to claim 1, characterized in that step S1 also comprises the following steps:
S11: Perform shot segmentation on each of the n input training video segments video_j to obtain the shots of video_j, where j ∈ {1, ..., n}; taking the shot as the unit, manually annotate the shot semantics of each shot; classify the annotated shot semantics, construct a shot-semantics training set for each semantic class, and train a classifier to obtain a shot-semantics analyzer; input the video segment video′ to be played fast forward or rewound, consisting of t shots, and use the shot-semantics analyzer to obtain the shot semantics of each shot in video′; represent the semantics of each shot with a shot-semantic label l_i, and arrange the shot semantics of the shots in video′ in temporal order to obtain the shot-semantic sequence wu′ = {l_1, ..., l_t}, where l_i ∈ L, L is the set of shot-semantic labels, each element of which represents one kind of shot semantics, and i is the index of label l_i;
S12: Expand the shot-semantic sequence wu′ into a context tag tree LT; each leaf node of LT is a shot-semantic label l, l ∈ L; each non-leaf node of LT is a context tag representing the context among the leaf nodes under it; the context tags comprise: the video context tag video, the scene context tag scene, and general context tags nl, nl ∈ NL, where NL is the set of general context tags; the context represented by the video context tag video is "the leaf nodes under this tag jointly express the semantic content of the video", and the context represented by the scene context tag scene is "the leaf nodes under this tag jointly express the semantic content of the same video scene"; in the context tag tree LT, the video context tag video is the root node, the children of the video context tag video are scene context tags scene, the children of a scene context tag scene are general context tags or shot-semantic labels, and the children of a general context tag nl are general context tags or shot-semantic labels.
3. The semantic-content-based method for adaptively adjusting video fast-forward/rewind speed according to claim 2, characterized in that in step S11, an SVM multi-class classifier is adopted as the classification model and trained with the shot-semantics training set; after training, the shot-semantics analyzer is obtained.
4. The semantic-content-based method for adaptively adjusting video fast-forward/rewind speed according to claim 1, characterized in that in step S12, the shot-semantic sequence wu′ is expanded into the context tag tree LT according to the following steps:
S121: Generate one leaf node for each shot-semantic label l_i in the shot-semantic sequence wu′, producing from left to right the initial tag-node sequence Curr = {c_1, ..., c_t}, where c_i = l_i and c_i is the i-th tag node in Curr;
S122: Traverse the tag-node sequence Curr from left to right; for a subsequence {c_k, ..., c_{k+m}}, if a context-tag generation rule p ∈ P is satisfied, i.e. there exists a context tag c_p describing the context formed by c_k, ..., c_{k+m}, then generate a new tag node labelled c_p as the father node, with the tag nodes of the subsequence as its children, and replace the subsequence {c_k, ..., c_{k+m}} in Curr with c_p; here P is the set of context generation rules, a context-tag generation rule has the form p: c_p ← c_1 ... c_s (c_s ∈ L ∪ NL, c_p ∈ {video} ∪ {scene} ∪ NL), k, s and m are positive integers greater than or equal to 1, and k+m ≤ t;
S123: When the traversal ends, a new tag-node sequence Curr has been generated;
S124: Judge whether the length of the new tag-node sequence Curr is 1; if not, return to step S122 and traverse the new sequence Curr; otherwise terminate.
5. The semantic-content-based method for adaptively adjusting video fast-forward/rewind speed according to claim 1, characterized in that step S2 also comprises the following steps:
S21: Divide the shots into non-key shots and key shots according to their semantic information; in the context tag tree LT, obtain the shot-semantic label l corresponding to each shot; if the shot is a non-key shot, mark its label l with the non-key tag sub and assign it weight 1; if the shot is a key shot, mark its label l with the key tag chi and assign it weight q_chi, thereby completing the setting of the leaf-node weights corresponding to the shots in the context tag tree LT;
S22: Modify the weights of the non-leaf nodes according to the context information in the context tag tree LT.
6. The semantic-content-based method for adaptively adjusting video fast-forward/rewind speed according to claim 5, characterized in that step S22 also comprises the following steps:
S221: Search the context tag tree LT for a non-leaf node node_h whose child nodes have all been marked with the non-key tag sub or the key tag chi but which is itself unmarked; if such a node exists, go to step S222; otherwise go to step S223;
S222: If all child nodes of node_h are marked with the non-key tag sub, mark node_h with the non-key tag sub and assign it weight 1; otherwise mark node_h with the key tag chi and set its weight to the sum of the weights of its child nodes; after marking, return to step S221;
S223: Traverse the context tag tree LT breadth-first from the root node, and modify each child node node_hg of a non-leaf node node_h as follows:
Q(node_hg)_new = Q(node_h) * Q(node_hg) / (Q(node_h1) + ... + Q(node_hd))
where Q(node) is the weight of node node, Q(node_hg)_new is the modified weight of node_hg, and d is the number of child nodes of node_h.
7. The semantic-content-based method for adaptively adjusting video fast-forward/rewind speed according to claim 2, characterized in that in step S3, the fast-forward or rewind speed V_curr of each shot is obtained as follows: V_curr = V / Q(l_i), where l_i is the shot-semantic label of the shot.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410817471.1A CN104506947B (en) | 2014-12-24 | 2014-12-24 | A kind of video fast forward based on semantic content/rewind speeds self-adapting regulation method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410817471.1A CN104506947B (en) | 2014-12-24 | 2014-12-24 | A kind of video fast forward based on semantic content/rewind speeds self-adapting regulation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104506947A true CN104506947A (en) | 2015-04-08 |
CN104506947B CN104506947B (en) | 2017-09-05 |
Family
ID=52948651
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410817471.1A Expired - Fee Related CN104506947B (en) | 2014-12-24 | 2014-12-24 | A kind of video fast forward based on semantic content/rewind speeds self-adapting regulation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104506947B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106341700A (en) * | 2016-09-05 | 2017-01-18 | Tcl集团股份有限公司 | Method and system for automatically adjusting video frame rate |
CN107147957A (en) * | 2017-04-19 | 2017-09-08 | 北京小米移动软件有限公司 | Video broadcasting method and device |
CN108174243A (en) * | 2017-12-28 | 2018-06-15 | 广东欧珀移动通信有限公司 | A kind of adjusting method, device, storage medium and the terminal of video playing rate |
CN108184169A (en) * | 2017-12-28 | 2018-06-19 | 广东欧珀移动通信有限公司 | Video broadcasting method, device, storage medium and electronic equipment |
CN108259988A (en) * | 2017-12-26 | 2018-07-06 | 努比亚技术有限公司 | A kind of video playing control method, terminal and computer readable storage medium |
CN108513130A (en) * | 2017-12-29 | 2018-09-07 | 西安电子科技大学 | A kind of realization system and method for Tag-Tree codings |
CN110209877A (en) * | 2018-02-06 | 2019-09-06 | 上海全土豆文化传播有限公司 | Video analysis method and device |
CN113507624A (en) * | 2021-09-10 | 2021-10-15 | 明品云(北京)数据科技有限公司 | Video information recommendation method and system |
CN115442661A (en) * | 2021-06-01 | 2022-12-06 | 北京字跳网络技术有限公司 | Video processing method, device, storage medium and computer program product |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6424789B1 (en) * | 1999-08-17 | 2002-07-23 | Koninklijke Philips Electronics N.V. | System and method for performing fast forward and slow motion speed changes in a video stream based on video content |
CN1732685A (en) * | 2002-12-27 | 2006-02-08 | Lg电子有限公司 | Method and apparatus for dynamic search of video contents |
CN102265609A (en) * | 2008-12-26 | 2011-11-30 | 富士通株式会社 | Program data processing device, method, and program |
CN104036023A (en) * | 2014-06-26 | 2014-09-10 | 福州大学 | Method for creating context fusion tree video semantic indexes |
- 2014-12-24 CN CN201410817471.1A patent/CN104506947B/en not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6424789B1 (en) * | 1999-08-17 | 2002-07-23 | Koninklijke Philips Electronics N.V. | System and method for performing fast forward and slow motion speed changes in a video stream based on video content |
CN1732685A (en) * | 2002-12-27 | 2006-02-08 | Lg电子有限公司 | Method and apparatus for dynamic search of video contents |
CN102265609A (en) * | 2008-12-26 | 2011-11-30 | 富士通株式会社 | Program data processing device, method, and program |
CN104036023A (en) * | 2014-06-26 | 2014-09-10 | 福州大学 | Method for creating context fusion tree video semantic indexes |
Non-Patent Citations (1)
Title |
---|
PETROVIC N ET AL.: "Adaptive Video Fast Forward", Multimedia Tools and Applications *
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106341700A (en) * | 2016-09-05 | 2017-01-18 | Tcl集团股份有限公司 | Method and system for automatically adjusting video frame rate |
CN107147957B (en) * | 2017-04-19 | 2019-09-10 | 北京小米移动软件有限公司 | Video broadcasting method and device |
CN107147957A (en) * | 2017-04-19 | 2017-09-08 | 北京小米移动软件有限公司 | Video broadcasting method and device |
CN108259988B (en) * | 2017-12-26 | 2021-05-18 | 努比亚技术有限公司 | Video playing control method, terminal and computer readable storage medium |
CN108259988A (en) * | 2017-12-26 | 2018-07-06 | 努比亚技术有限公司 | A kind of video playing control method, terminal and computer readable storage medium |
CN108184169A (en) * | 2017-12-28 | 2018-06-19 | 广东欧珀移动通信有限公司 | Video broadcasting method, device, storage medium and electronic equipment |
CN108184169B (en) * | 2017-12-28 | 2020-10-09 | Oppo广东移动通信有限公司 | Video playing method and device, storage medium and electronic equipment |
CN108174243A (en) * | 2017-12-28 | 2018-06-15 | 广东欧珀移动通信有限公司 | A kind of adjusting method, device, storage medium and the terminal of video playing rate |
CN108513130A (en) * | 2017-12-29 | 2018-09-07 | 西安电子科技大学 | A kind of realization system and method for Tag-Tree codings |
CN110209877A (en) * | 2018-02-06 | 2019-09-06 | 上海全土豆文化传播有限公司 | Video analysis method and device |
CN115442661A (en) * | 2021-06-01 | 2022-12-06 | 北京字跳网络技术有限公司 | Video processing method, device, storage medium and computer program product |
WO2022253276A1 (en) * | 2021-06-01 | 2022-12-08 | 北京字跳网络技术有限公司 | Video processing method and device, storage medium, and computer program product |
CN115442661B (en) * | 2021-06-01 | 2024-03-19 | 北京字跳网络技术有限公司 | Video processing method, apparatus, storage medium, and computer program product |
CN113507624A (en) * | 2021-09-10 | 2021-10-15 | 明品云(北京)数据科技有限公司 | Video information recommendation method and system |
CN113507624B (en) * | 2021-09-10 | 2021-12-21 | 明品云(北京)数据科技有限公司 | Video information recommendation method and system |
Also Published As
Publication number | Publication date |
---|---|
CN104506947B (en) | 2017-09-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104506947A (en) | Video fast forward/fast backward speed self-adaptive regulating method based on semantic content | |
US11625920B2 (en) | Method for labeling performance segment, video playing method, apparatus and system | |
US10652605B2 (en) | Visual hot watch spots in content item playback | |
US20210272599A1 (en) | Systems and methods for automating video editing | |
CN112511854B (en) | Live video highlight generation method, device, medium and equipment | |
US10998003B2 (en) | Computerized system and method for automatically extracting GIFs from videos | |
US11749241B2 (en) | Systems and methods for transforming digitial audio content into visual topic-based segments | |
CN114342353B (en) | Method and system for video segmentation | |
CN104052714B (en) | The method for pushing and server of multimedia messages | |
US20110161348A1 (en) | System and Method for Automatically Creating a Media Compilation | |
CN110035325A (en) | Barrage answering method, barrage return mechanism and live streaming equipment | |
CN106484733B (en) | News clue personalized push method and system | |
WO2021238081A1 (en) | Voice packet recommendation method, apparatus and device, and storage medium | |
US20180210954A1 (en) | Method and apparatus for creating a summary video | |
CN107122786B (en) | Crowdsourcing learning method and device | |
CN113746874A (en) | Voice packet recommendation method, device, equipment and storage medium | |
CN108462900A (en) | Video recommendation method and device | |
KR102580017B1 (en) | Voice packet recommendation methods, devices, facilities and storage media | |
US20240220537A1 (en) | Metadata tag identification | |
US20240171784A1 (en) | Determining watch time loss regions in media content items | |
CN113992973B (en) | Video abstract generation method, device, electronic equipment and storage medium | |
US9667886B2 (en) | Apparatus and method for editing video data according to common video content attributes | |
US10007843B1 (en) | Personalized segmentation of media content | |
Everett | Black film, new media industries, and BAMMs (Black American media moguls) in the digital media ecology | |
US20230402065A1 (en) | Generating titles for content segments of media items using machine-learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20170905 Termination date: 20201224 |
|