CN104506947B - Adaptive adjustment method for video fast-forward/rewind speed based on semantic content - Google Patents
Adaptive adjustment method for video fast-forward/rewind speed based on semantic content
- Publication number
- CN104506947B CN104506947B CN201410817471.1A CN201410817471A CN104506947B CN 104506947 B CN104506947 B CN 104506947B CN 201410817471 A CN201410817471 A CN 201410817471A CN 104506947 B CN104506947 B CN 104506947B
- Authority
- CN
- China
- Prior art keywords
- shot
- node
- semantic
- video
- label
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47217—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/658—Transmission by the client directed to the server
- H04N21/6587—Control parameters, e.g. trick play commands, viewpoint selection
Abstract
The present invention relates to a semantic-content-based adaptive adjustment method for video fast-forward/rewind speed. The shot semantics of each shot are first extracted shot by shot; the semantic context of the video shots is then obtained in a supervised manner and represented as a context label tree. The context label tree of the video is then used to adjust the weight of each shot. When the user fast-forwards or rewinds the video, the fast-forward/rewind speed of each shot is determined by its weight. The proposed method adapts the fast-forward or rewind speed to the semantic content of the video and the context of each shot, helping the user quickly skip uninteresting content without missing content of interest because the speed is too high, which is convenient for users.
Description
Technical field
The present invention relates to the field of video playback, and in particular to a semantic-content-based adaptive adjustment method for video fast-forward/rewind speed.
Background art
When watching a video, users often need to quickly skip parts they are not interested in, so video playback software and online video players usually provide fast-forward and rewind functions whose speed the user can set.
However, today's fast-forward/rewind functions require the user to set the speed manually. Because the user cannot accurately predict which parts of the content are interesting, setting too high a fast-forward speed often causes content of interest to be skipped, while setting too low a speed prevents uninteresting parts from being skipped in the shortest time. The user therefore cannot quickly and accurately locate the parts of the video of interest, which wastes a great deal of the user's time and degrades the user experience. The fast-forward mechanisms of today's video players thus still need improvement.
Content of the invention
The object of the present invention is to provide a semantic-content-based adaptive adjustment method for video fast-forward/rewind speed that adjusts the fast-forward or rewind speed adaptively according to the semantic content of the video and the context of each shot, helping the user quickly skip uninteresting content without missing content of interest because the speed is too high, which is convenient for users.
To achieve the above object, the technical scheme of the present invention is a semantic-content-based adaptive adjustment method for video fast-forward/rewind speed, characterized in that it is realized by the following steps:
S1: Extract, shot by shot, the shot semantics of each shot in the input video data; analyze the semantic context between shots according to the shot semantics of each shot, and build the sequence of shot semantics of the shots into a context label tree characterizing the semantic context between shots.
S2: Set an initial weight for each shot according to its shot semantics, and adjust the shot weights according to the context label tree.
S3: Set an initial fast-forward/rewind speed V for the video to be fast-forwarded or rewound; when the user selects fast-forward or rewind, adjust the fast-forward/rewind speed Vcurr of each shot according to its weight.
Further, step S1 comprises the following steps:
S11: Perform shot segmentation on the n input training video clips videoj, obtaining the shots of each videoj; manually label the shot semantics of each shot, shot by shot, where j ∈ {1, ..., n}. Classify the labeled shot semantics and construct a shot-semantic training set for each class to train a classifier, obtaining a shot semantic analyzer. Input a video clip video' to be fast-forwarded or rewound, consisting of t shots, and use the shot semantic analyzer to obtain the shot semantics of each shot of video', each denoted by a shot semantic label li. Arrange the shot semantics of the shots of video' in temporal order to obtain the shot semantic sequence wu' = {l1, ..., lt}, where li ∈ L, L is the shot semantic label set, each element of which represents one kind of shot semantics, and i is the index of the shot semantic label li.
S12: Build the shot semantic sequence wu' into a context label tree LT. Each leaf node of LT is a shot semantic label l, l ∈ L; each non-leaf node of LT is a context label representing the context among the leaf nodes below it. The context labels comprise the video context label video, the scene context label scene and the common context labels nl, nl ∈ NL, where NL is the common context label set. The context represented by the video context label video is "the leaf nodes under this context label jointly express the semantic content of one video"; the context represented by the scene context label scene is "the leaf nodes under this context label jointly express the semantic content of one video scene". In the context label tree LT, the video context label video is the root node, the child nodes of the video context label video are scene context labels scene, the child nodes of a scene context label scene are common context labels or shot semantic labels, and the child nodes of a common context label nl are common context labels or shot semantic labels.
Further, in step S11, an SVM multi-class classifier is used as the classification model and is trained with the shot-semantic training set; the shot semantic analyzer is obtained after training.
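The classifier step above can be sketched with a generic SVM. Everything below is an illustrative assumption rather than the patent's actual implementation: the 4-dimensional toy feature vectors, the example label set {"goal", "audience"}, and the choice of scikit-learn as the library.

```python
# Hypothetical sketch of the step-S11 shot semantic analyzer: an SVM
# multi-class classifier fit on labeled shot feature vectors v.
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Toy visual feature vectors v for manually labeled training shots.
train_features = [
    [0.90, 0.10, 0.30, 0.70],   # e.g. a "goal" shot
    [0.20, 0.80, 0.50, 0.10],   # e.g. an "audience" shot
    [0.85, 0.15, 0.35, 0.60],
    [0.10, 0.90, 0.40, 0.20],
]
train_labels = ["goal", "audience", "goal", "audience"]  # shot semantic labels

# SVC handles multi-class data out of the box (one-vs-one), so the
# pipeline plays the role of the trained shot semantic analyzer.
analyzer = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
analyzer.fit(train_features, train_labels)

# Label the shots of a clip video' to obtain the sequence wu'.
test_features = [[0.88, 0.12, 0.30, 0.65], [0.15, 0.85, 0.45, 0.15]]
wu = list(analyzer.predict(test_features))
```

The analyzer maps each shot's feature vector to a shot semantic label; the resulting sequence `wu` corresponds to wu' = {l1, ..., lt} in the text.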
Further, in step S12, the shot semantic sequence wu' is built into the context label tree LT by the following steps:
S121: Generate one leaf node for each shot semantic label li in the shot semantic sequence wu', producing from left to right the initial label node sequence Curr = {c1, ..., ct}, where ci = li and ci is the i-th label node in Curr.
S122: Traverse the label node sequence Curr from left to right. For a subsequence {ck, ..., ck+m} that matches a context label generation rule p ∈ P, i.e., for which there exists a context label cp describing the context formed by ck, ..., ck+m, generate a new label node cp as the parent of every label node in the subsequence, and substitute cp for the subsequence {ck, ..., ck+m} in Curr. Here P is the context generation rule set, a context label generation rule has the form p: cp ← c1...cs (cs ∈ L ∪ NL, cp ∈ video ∪ scene ∪ NL), k, s and m are positive integers greater than or equal to 1, and k + m ≤ t.
S123: When the traversal ends, a new label node sequence Curr has been generated.
S124: Judge whether the length of the new label node sequence Curr is 1; if not, return to step S122 and traverse the new label node sequence Curr; otherwise terminate.
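Steps S121–S124 amount to a bottom-up, rule-driven reduction of the label sequence until a single root remains. The sketch below illustrates that loop under stated assumptions: the rule set and the shot labels ("attack", "goal", "serve", "rally") are invented for illustration, and ties between overlapping rule matches are broken by simple first-match order, which the patent does not specify.

```python
# Minimal sketch of S121-S124: scan the label node sequence, replace any
# subsequence matching a rule  cp <- c1 ... cs  with a parent node cp,
# and repeat until one root node ("video") is left.

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

# Illustrative rule set P: parent label <- tuple of child labels.
RULES = [
    ("video", ("scene", "scene")),
    ("scene", ("attack", "goal")),
    ("scene", ("serve", "rally")),
]

def build_tree(shot_labels):
    curr = [Node(l) for l in shot_labels]          # S121: leaf nodes
    while len(curr) > 1:                           # S124: stop at length 1
        reduced = False
        for parent, rhs in RULES:                  # S122: try each rule
            n = len(rhs)
            for k in range(len(curr) - n + 1):
                if tuple(c.label for c in curr[k:k + n]) == rhs:
                    curr[k:k + n] = [Node(parent, curr[k:k + n])]
                    reduced = True
                    break
            if reduced:
                break
        if not reduced:                            # no rule applies: give up
            raise ValueError("sequence cannot be reduced to one root")
    return curr[0]                                 # S123: the root node

lt = build_tree(["attack", "goal", "serve", "rally"])
```

Running this on the four-shot sequence first reduces {attack, goal} and {serve, rally} to two scene nodes, then the two scenes to the video root, mirroring Fig. 2's shape.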
Further, step S2 comprises the following steps:
S21: Divide shots into non-key shots and key shots according to their semantic information. Find, in the context label tree LT, the shot semantic label l corresponding to each shot. If the shot is a non-key shot, mark the shot semantic label l with the non-key label sub and assign it weight 1; if the shot is a key shot, mark the shot semantic label l with the key label chi and assign it weight q-chi. This completes the setting of the weight of each shot's corresponding leaf node in the context label tree LT.
S22: Modify the weights of the non-leaf nodes according to the context information in the context label tree LT.
Further, step S22 comprises the following steps:
S221: Search for a non-leaf node nodeh whose child nodes are marked with the non-key label sub or the key label chi but which is itself unmarked. If such a node exists, enter step S222; otherwise enter step S223.
S222: If all child nodes of nodeh are marked with the non-key label sub, mark nodeh with the non-key label sub and assign it weight 1; otherwise mark nodeh with the key label chi and set its weight to the sum of its child nodes' weights. After marking, return to step S221.
S223: Traverse the context label tree LT breadth-first from the root node, modifying each child node nodehg of a non-leaf node nodeh as follows:
Q(nodehg)_new = Q(nodeh) * Q(nodehg) / (Q(nodeh1) + ... + Q(nodehd))
where Q(node) is the weight of node node, Q(nodehg)_new is the modified weight of nodehg, and d is the number of child nodes of nodeh.
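The two phases of S21–S223 can be sketched as follows; the tree shape, the key-shot weight q-chi = 4, and the use of post-order recursion in place of the patent's explicit search loop are all illustrative assumptions.

```python
# Hedged sketch of S21-S223: leaves carry weight 1 (non-key) or q-chi
# (key); each internal node gets weight 1 if all children are non-key,
# otherwise the sum of its children's weights (S221/S222); finally a
# breadth-first pass rescales every child against its siblings (S223):
#   Q(child)_new = Q(parent) * Q(child) / sum(Q(siblings))
from collections import deque

class LTNode:
    def __init__(self, weight=None, children=None):
        self.weight = weight
        self.children = children or []

def set_internal_weights(node):
    # Post-order recursion plays the role of the S221 search: children
    # are always weighted before their parent is.
    for c in node.children:
        if c.children:
            set_internal_weights(c)
    if node.children:
        ws = [c.weight for c in node.children]
        node.weight = 1.0 if all(w == 1.0 for w in ws) else sum(ws)

def rescale(root):
    # S223: breadth-first from the root, normalize children by siblings.
    q = deque([root])
    while q:
        node = q.popleft()
        total = sum(c.weight for c in node.children)
        for c in node.children:
            c.weight = node.weight * c.weight / total
            q.append(c)

# Two scenes: one containing a key shot with q-chi = 4, one all non-key.
leaf_a, leaf_b = LTNode(weight=4.0), LTNode(weight=1.0)   # key, non-key
leaf_c, leaf_d = LTNode(weight=1.0), LTNode(weight=1.0)   # non-key scene
scene1 = LTNode(children=[leaf_a, leaf_b])
scene2 = LTNode(children=[leaf_c, leaf_d])
root = LTNode(children=[scene1, scene2])

set_internal_weights(root)   # scene1 -> 5.0, scene2 -> 1.0, root -> 6.0
rescale(root)
```

After rescaling, shots inside the non-key scene end up with weight 0.5 while the key shot keeps weight 4.0, so the subsequent speed formula plays the key shot slower than the non-key scene.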
Further, in step S3, the fast-forward/rewind speed Vcurr of each shot is obtained as follows: Vcurr = V / Q(li), where li is the shot semantic label of the shot.
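The speed rule above is a single division per shot: heavily weighted (interesting) shots play back more slowly than the user's base speed, lightly weighted ones faster. A one-function sketch with illustrative weight values:

```python
# Step S3: per-shot fast-forward/rewind speed Vcurr = V / Q(li).
def shot_speed(v_base, shot_weight):
    # Larger weights mean more interesting shots, hence slower playback.
    return v_base / shot_weight

weights = [1.0, 4.0, 0.5]    # example per-shot weights Q(li) from the tree
V = 8.0                      # user-selected base fast-forward speed
speeds = [shot_speed(V, q) for q in weights]
```

With a base speed of 8x, the key shot (weight 4) plays at 2x while the down-weighted shot (weight 0.5) races by at 16x.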
Compared with the prior art, the present invention has the following advantages: the proposed semantic-content-based adaptive adjustment method for video fast-forward/rewind speed obtains the importance of each shot from the shot semantics and the semantic context of the shots, sets corresponding weights, and then automatically adjusts the fast-forward/rewind speed of each shot according to its weight. The method thus adjusts the fast-forward/rewind speed of each shot automatically for the user, avoiding missing video highlights or failing to skip non-highlight content quickly because the user set an improper fast-forward/rewind speed. The method is therefore convenient for users and improves the user experience of video playback software.
Brief description of the drawings
Fig. 1 is a flow chart of the semantic-content-based adaptive adjustment method for video fast-forward/rewind speed of the present invention.
Fig. 2 shows a shot semantic sequence and its corresponding context label tree in the present invention.
Embodiment
The technical solution of the present invention is described in detail below with reference to the accompanying drawings.
The present invention provides a semantic-content-based adaptive adjustment method for video fast-forward/rewind speed which, as shown in Fig. 1, is characterized in that it is realized by the following steps:
S1: Extract, shot by shot, the shot semantics of each shot in the input video data; analyze the semantic context between shots according to the shot semantics of each shot, and build the sequence of shot semantics of the shots into a context label tree characterizing the semantic context between shots.
S2: Set an initial weight for each shot according to its shot semantics, and adjust the shot weights according to the context label tree.
S3: Set an initial fast-forward/rewind speed V for the video to be fast-forwarded or rewound; when the user selects fast-forward or rewind, adjust the fast-forward/rewind speed Vcurr of each shot according to its weight.
Further, in the present embodiment, step S1 comprises the following steps:
S11: Perform shot segmentation on the n input training video clips videoj, obtaining r shots in total; extract and quantize the visual features of each shot into a visual feature vector v; manually label, shot by shot, the shot semantics of each of the r shots to construct a shot-semantic training set; classify the labeled shot semantics using an SVM multi-class classifier as the classification model, train it with the shot-semantic training set, and obtain the shot semantic analyzer after training, where j ∈ {1, ..., n}. Input a video clip video' to be fast-forwarded or rewound, consisting of t shots, and use the shot semantic analyzer to obtain the shot semantics of each shot of video': in this embodiment, the visual features of each shot are extracted into a feature vector v, v is fed to the shot semantic analyzer to obtain the shot semantics of each shot of video', and the shot semantics are denoted by shot semantic labels li. Arrange the shot semantics of the shots of video' in temporal order to obtain the shot semantic sequence wu' = {l1, ..., lt}, where li ∈ L, L is the shot semantic label set, each element of which represents one kind of shot semantics, and i is the index of the shot semantic label li.
S12: Build the shot semantic sequence wu' into a context label tree LT, which is a tree structure. Each leaf node of LT is a shot semantic label l, l ∈ L; each non-leaf node of LT is a context label representing the context among the leaf nodes below it. The context labels comprise the video context label video, the scene context label scene and the common context labels nl, nl ∈ NL, where NL is the common context label set and different types of video have different sets NL. The context represented by the video context label video is "the leaf nodes under this context label jointly express the semantic content of one video"; the context represented by the scene context label scene is "the leaf nodes under this context label jointly express the semantic content of one video scene". In the context label tree LT, as shown in Fig. 2, the video context label video is the root node, the child nodes of the video context label video are scene context labels scene, the child nodes of a scene context label scene are common context labels or shot semantic labels, and the child nodes of a common context label nl are common context labels or shot semantic labels.
Further, in step S12, the shot semantic sequence wu' is built into the context label tree LT by the following steps:
S121: Generate one leaf node for each shot semantic label li in the shot semantic sequence wu', producing from left to right the initial label node sequence Curr = {c1, ..., ct}, where ci = li and ci is the i-th label node in Curr.
S122: Traverse the label node sequence Curr from left to right. For a subsequence {ck, ..., ck+m} that matches a context label generation rule p ∈ P, i.e., for which there exists a context label cp describing the context formed by ck, ..., ck+m, generate a new label node cp as the parent of every label node in the subsequence, and substitute cp for the subsequence {ck, ..., ck+m} in Curr, where P is the context generation rule set. In this embodiment, a context generation rule has the form p: cp ← c1...cs (cs ∈ L ∪ NL, cp ∈ video ∪ scene ∪ NL), k, s and m are positive integers greater than or equal to 1, k + m ≤ t, and different types of video have different context generation rule sets P.
S123: When the traversal ends, a new label node sequence Curr has been generated.
S124: Judge whether the length of the new label node sequence Curr is 1; if not, return to step S122 and traverse the new label node sequence Curr; otherwise terminate.
Further, step S2 comprises the following steps:
S21: Divide shots into non-key shots and key shots according to their semantic information. Find, in the context label tree LT, the shot semantic label l corresponding to each shot. If the shot is a non-key shot, mark the shot semantic label l with the non-key label sub and assign it weight 1; if the shot is a key shot, mark the shot semantic label l with the key label chi and assign it weight q-chi, key shots being further divided into lev grades. This completes the setting of the weight of each shot's corresponding leaf node in the context label tree LT.
S22: Modify the weights of the non-leaf nodes according to the context information in the context label tree LT.
Further, step S22 comprises the following steps:
S221: Search for a non-leaf node nodeh whose child nodes are marked with the non-key label sub or the key label chi but which is itself unmarked. If such a node exists, enter step S222; otherwise enter step S223.
S222: If all child nodes of nodeh are marked with the non-key label sub, mark nodeh with the non-key label sub and assign it weight 1; otherwise mark nodeh with the key label chi and set its weight to the sum of its child nodes' weights. After marking, return to step S221.
S223: Traverse the context label tree LT breadth-first from the root node, modifying each child node nodehg of a non-leaf node nodeh as follows:
Q(nodehg)_new = Q(nodeh) * Q(nodehg) / (Q(nodeh1) + ... + Q(nodehd))
where Q(node) is the weight of node node, Q(nodehg)_new is the modified weight of nodehg, and d is the number of child nodes of nodeh.
Further, in step S3, the fast-forward/rewind speed Vcurr of each shot is obtained as follows: Vcurr = V / Q(li), where li is the shot semantic label of the shot. When the user clicks fast-forward or rewind, each shot of video' is played back at its speed Vcurr.
The above are preferred embodiments of the present invention; all changes made according to the technical solution of the present invention that do not go beyond the scope of the technical solution of the present invention in function and effect belong to the protection scope of the present invention.
Claims (5)
1. A semantic-content-based adaptive adjustment method for video fast-forward/rewind speed, characterized in that it is realized by the following steps:
S1: extracting, shot by shot, the shot semantics of each shot in the input video data; analyzing the semantic context between shots according to the shot semantics of each shot, and building the sequence of shot semantics of the shots into a context label tree characterizing the semantic context between shots;
S2: setting an initial weight for each shot according to its shot semantics, and adjusting the shot weights according to the context label tree;
S3: setting an initial fast-forward/rewind speed V for the video to be fast-forwarded or rewound; when the user selects fast-forward or rewind, adjusting the fast-forward/rewind speed Vcurr of each shot according to its weight;
wherein step S1 further comprises the following steps:
S11: performing shot segmentation on the n input training video clips videoj to obtain the shots of each videoj; manually labeling the shot semantics of each shot, shot by shot, where j ∈ {1, ..., n}; classifying the labeled shot semantics and constructing a shot-semantic training set for each class to train a classifier, obtaining a shot semantic analyzer; inputting a video clip video' to be fast-forwarded or rewound, consisting of t shots, and using the shot semantic analyzer to obtain the shot semantics of each shot of video', each denoted by a shot semantic label li; arranging the shot semantics of the shots of video' in temporal order to obtain the shot semantic sequence wu' = {l1, ..., lt}, where li ∈ L, L is the shot semantic label set, each element of which represents one kind of shot semantics, and i is the index of the shot semantic label li;
S12: building the shot semantic sequence wu' into a context label tree LT; each leaf node of LT is a shot semantic label l, l ∈ L; each non-leaf node of LT is a context label representing the context among the leaf nodes below it; the context labels comprise the video context label video, the scene context label scene and the common context labels nl, nl ∈ NL, where NL is the common context label set; the context represented by the video context label video is "the leaf nodes under this context label jointly express the semantic content of one video", and the context represented by the scene context label scene is "the leaf nodes under this context label jointly express the semantic content of one video scene"; in the context label tree LT, the video context label video is the root node, the child nodes of the video context label video are scene context labels scene, the child nodes of a scene context label scene are common context labels or shot semantic labels, and the child nodes of a common context label nl are common context labels or shot semantic labels;
wherein in step S12, the shot semantic sequence wu' is built into the context label tree LT by the following steps:
S121: generating one leaf node for each shot semantic label li in the shot semantic sequence wu', producing from left to right the initial label node sequence Curr = {c1, ..., ct}, where ci = li and ci is the i-th label node in Curr;
S122: traversing the label node sequence Curr from left to right; for a subsequence {ck, ..., ck+m} that matches a context label generation rule p ∈ P, i.e., for which there exists a context label cp describing the context formed by ck, ..., ck+m, generating a new label node cp as the parent of every label node in the subsequence, and substituting cp for the subsequence {ck, ..., ck+m} in Curr, where P is the context generation rule set, a context label generation rule has the form p: cp ← c1...cs (cs ∈ L ∪ NL, cp ∈ video ∪ scene ∪ NL), k, s and m are positive integers greater than or equal to 1, and k + m ≤ t;
S123: when the traversal ends, a new label node sequence Curr has been generated;
S124: judging whether the length of the new label node sequence Curr is 1; if not, returning to step S122 to traverse the new label node sequence Curr; otherwise terminating.
2. The semantic-content-based adaptive adjustment method for video fast-forward/rewind speed according to claim 1, characterized in that in step S11, an SVM multi-class classifier is used as the classification model and is trained with the shot-semantic training set, the shot semantic analyzer being obtained after training.
3. The semantic-content-based adaptive adjustment method for video fast-forward/rewind speed according to claim 1, characterized in that step S2 further comprises the following steps:
S21: dividing shots into non-key shots and key shots according to their semantic information; finding, in the context label tree LT, the shot semantic label l corresponding to each shot; if the shot is a non-key shot, marking the shot semantic label l with the non-key label sub and assigning it weight 1; if the shot is a key shot, marking the shot semantic label l with the key label chi and assigning it weight q-chi, thereby completing the setting of the weight of each shot's corresponding leaf node in the context label tree LT;
S22: modifying the weights of the non-leaf nodes according to the context information in the context label tree LT.
4. The semantic-content-based adaptive adjustment method for video fast-forward/rewind speed according to claim 3, characterized in that step S22 further comprises the following steps:
S221: searching for a non-leaf node nodeh whose child nodes are marked with the non-key label sub or the key label chi but which is itself unmarked; if such a node exists, entering step S222; otherwise entering step S223;
S222: if all child nodes of nodeh are marked with the non-key label sub, marking nodeh with the non-key label sub and assigning it weight 1; otherwise marking nodeh with the key label chi and setting its weight to the sum of its child nodes' weights; after marking, returning to step S221;
S223: traversing the context label tree LT breadth-first from the root node, and modifying each child node nodehg of a non-leaf node nodeh as follows:
Q(nodehg)_new = Q(nodeh) * Q(nodehg) / (Q(nodeh1) + ... + Q(nodehd))
where Q(node) is the weight of node node, Q(nodehg)_new is the modified weight of nodehg, and d is the number of child nodes of nodeh.
5. The semantic-content-based adaptive adjustment method for video fast-forward/rewind speed according to claim 4, characterized in that in step S3, the fast-forward/rewind speed Vcurr of each shot is obtained as follows: Vcurr = V / Q(li), where li is the shot semantic label of the shot.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410817471.1A CN104506947B (en) | 2014-12-24 | 2014-12-24 | Adaptive adjustment method for video fast-forward/rewind speed based on semantic content
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410817471.1A CN104506947B (en) | 2014-12-24 | 2014-12-24 | Adaptive adjustment method for video fast-forward/rewind speed based on semantic content
Publications (2)
Publication Number | Publication Date |
---|---|
CN104506947A CN104506947A (en) | 2015-04-08 |
CN104506947B true CN104506947B (en) | 2017-09-05 |
Family
ID=52948651
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410817471.1A Expired - Fee Related CN104506947B (en) | 2014-12-24 | 2014-12-24 | Adaptive adjustment method for video fast-forward/rewind speed based on semantic content
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104506947B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106341700B (en) * | 2016-09-05 | 2020-10-27 | TCL Technology Group Corporation | Automatic video frame rate adjustment method and system |
CN107147957B (en) * | 2017-04-19 | 2019-09-10 | Beijing Xiaomi Mobile Software Co., Ltd. | Video playback method and device |
CN108259988B (en) * | 2017-12-26 | 2021-05-18 | Nubia Technology Co., Ltd. | Video playing control method, terminal and computer readable storage medium |
CN108184169B (en) * | 2017-12-28 | 2020-10-09 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Video playing method and device, storage medium and electronic equipment |
CN108174243A (en) * | 2017-12-28 | 2018-06-15 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Adjustment method, device, storage medium and terminal for video playback rate |
CN108513130B (en) * | 2017-12-29 | 2019-10-08 | Xidian University | System and method for implementing Tag-Tree coding |
CN110209877A (en) * | 2018-02-06 | 2019-09-06 | Shanghai Quan Tudou Culture Communication Co., Ltd. | Video analysis method and device |
CN115442661B (en) * | 2021-06-01 | 2024-03-19 | Beijing Zitiao Network Technology Co., Ltd. | Video processing method, apparatus, storage medium, and computer program product |
CN113507624B (en) * | 2021-09-10 | 2021-12-21 | Mingpinyun (Beijing) Data Technology Co., Ltd. | Video information recommendation method and system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6424789B1 (en) * | 1999-08-17 | 2002-07-23 | Koninklijke Philips Electronics N.V. | System and method for performing fast forward and slow motion speed changes in a video stream based on video content |
CN1732685A (en) * | 2002-12-27 | 2006-02-08 | LG Electronics Inc. | Method and apparatus for dynamic search of video contents |
CN102265609A (en) * | 2008-12-26 | 2011-11-30 | Fujitsu Limited | Program data processing device, method, and program |
CN104036023A (en) * | 2014-06-26 | 2014-09-10 | 福州大学 | Method for creating context fusion tree video semantic indexes |
Non-Patent Citations (1)
Title |
---|
Adaptive Video Fast Forward; Petrovic N et al.; Multimedia Tools and Applications; 2005-12-31; Vol. 26, No. 3; pp. 327-344 *
Also Published As
Publication number | Publication date |
---|---|
CN104506947A (en) | 2015-04-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104506947B (en) | Adaptive video fast-forward/rewind speed adjustment method based on semantic content | |
US10566009B1 (en) | Audio classifier | |
CN108009228B (en) | Method and device for setting content label and storage medium | |
WO2021000909A1 (en) | Curriculum optimisation method, apparatus, and system | |
CN106534548B (en) | Voice error correction method and device | |
CN112511854B (en) | Live video highlight generation method, device, medium and equipment | |
CN105138991B (en) | Video emotion recognition method based on fusion of emotion-salient features | |
Chang et al. | Searching persuasively: Joint event detection and evidence recounting with limited supervision | |
TW201717062A (en) | Multi-modal fusion based intelligent fault-tolerant video content recognition system and recognition method | |
CN105224581B (en) | Method and apparatus for presenting pictures while playing music | |
CN109275046A (en) | Teaching data annotation method based on dual video capture | |
CN112749326B (en) | Information processing method, information processing device, computer equipment and storage medium | |
WO2015101155A1 (en) | Method for recommending information to user | |
CN106547889A (en) | Exercise question pushing method and device | |
CN103678668A (en) | Prompting method of relevant search result, server and system | |
CN105760521A (en) | Information input method and device | |
CN108460122B (en) | Video searching method, storage medium, device and system based on deep learning | |
CN111753104A (en) | Contextual search of multimedia content | |
US20190007734A1 (en) | Video channel categorization schema | |
WO2018094952A1 (en) | Content recommendation method and apparatus | |
US20230368448A1 (en) | Comment video generation method and apparatus | |
CN110309360A (en) | Personalized topic-tag recommendation method and system for short videos | |
CN113746875A (en) | Voice packet recommendation method, device, equipment and storage medium | |
US9268861B2 (en) | Method and system for recommending relevant web content to second screen application users | |
CN111177462B (en) | Video distribution timeliness determination method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20170905; Termination date: 20201224 |