CN104469547B - A video summary generation method based on tree-shaped moving-target trajectories - Google Patents

A video summary generation method based on tree-shaped moving-target trajectories Download PDF

Info

Publication number
CN104469547B
CN104469547B · Application CN201410755692.0A
Authority
CN
China
Prior art keywords
tree
container
frame
goal tree
goal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410755692.0A
Other languages
Chinese (zh)
Other versions
CN104469547A (en)
Inventor
朱虹
苟荣涛
张静波
王栋
邢楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN201410755692.0A priority Critical patent/CN104469547B/en
Publication of CN104469547A publication Critical patent/CN104469547A/en
Application granted granted Critical
Publication of CN104469547B publication Critical patent/CN104469547B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8549Creating video summaries, e.g. movie trailer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking

Abstract

The invention discloses a video summary generation method based on tree-shaped moving-target trajectories, implemented in the following steps: Step 1: detect and track the moving targets by Gaussian background modeling and obtain each moving target's trajectory box; Step 2: cluster the trajectory boxes according to whether the targets adhere to one another, building goal trees; Step 3: describe the goal trees; Step 4: sort the goal trees in descending order of length; Step 5: initialize the empty container used to generate the video summary; Step 6: place the first goal tree into the container; Step 7: determine the starting position at which each subsequent goal tree enters the container; Step 8: place the goal tree; Step 9: judge whether all goal trees have been placed; Step 10: output the container holding all the goal trees as the generated video summary. The method preserves the continuity of the moving targets in the video summary, and the condensation efficiency of the summary is high.

Description

A video summary generation method based on tree-shaped moving-target trajectories
Technical field
The invention belongs to the technical field of image recognition and relates to a video summary generation method based on tree-shaped moving-target trajectories.
Background technology
Video summarization technology removes the redundancy in massive video data and condenses the remaining effective information so that people can browse it quickly. By watching a video summary, a viewer can understand the general content of a massive video without having to browse the original data. Because a video summary is used to browse quickly through the events occurring in a surveillance video, and events of interest found while browsing must also be located and replayed, the usual approach is to detect and track the moving targets in the original video and then generate the summary with each moving target as one event chain. The fatal problem of this approach is that the quality of the summary depends on the quality of moving-target detection and tracking in the original video. However, the accuracy of moving-target detection and tracking algorithms is largely limited by factors such as the complexity of the surveillance environment and the density of the moving targets, which in turn limits the application of video summarization.
Summary of the invention
The object of the invention is to provide a video summary generation method based on tree-shaped moving-target trajectories that no longer requires every moving target to be detected and tracked completely; instead, several targets that adhere, cross, or occlude one another are described together in the form of a goal tree. This solves the prior-art problem that video summary generation is limited because moving-target detection and tracking are constrained by the density of the moving targets and by the complexity of the surveillance environment.
The technical solution adopted by the invention is a video summary generation method based on tree-shaped moving-target trajectories, implemented in the following steps:
Step 1: extract the moving targets from the surveillance video and obtain each moving target's trajectory box
The moving targets in the surveillance video are extracted by mixed-Gaussian background modeling; afterwards, motion tracking is performed by treating the boxes with the maximum overlapping area between adjacent frames as the same target. In each frame, each tracked moving-target region is represented by its minimum bounding rectangle.
The minimum bounding rectangle of each moving target in one frame image is a bounding rectangle, and the region of that frame enclosed by this rectangle is called a blob; stacking these blobs along the time axis forms a box, whose starting point is the frame in which the moving target is first found and whose end point is the frame before the moving target disappears from the surveillance field of view.
Assuming that each moving target appearing in the surveillance field of view is described by one box, each moving target is described as in formula (1):

O_k(x_L^{m_k}, y_L^{m_k}; x_R^{m_k}, y_R^{m_k}; x_0^{m_k}, y_0^{m_k}, N_k),  (1)

where O_k(·) denotes the k-th moving target, k = 1, 2, ..., N_S, m_k = 1, 2, ..., N_k; N_S is the total number of detected moving targets, and N_k is the number of consecutive frames of the k-th target;
(x_L^{m_k}, y_L^{m_k}) is the coordinate of the upper-left corner of the bounding rectangle of the k-th target in frame m_k;
(x_R^{m_k}, y_R^{m_k}) is the coordinate of the lower-right corner of the bounding rectangle of the k-th target in frame m_k;
(x_0^{m_k}, y_0^{m_k}) is the centroid coordinate of the k-th target in frame m_k.
In this step, targets that adhere, cross, or occlude one another during extraction are described in the form of a tree and given the same label; that is, a tree provides one continuous motion trajectory and is no longer required to contain only a single moving target;
Step 2: cluster the moving-target boxes
For the detected moving targets O_k(x_L^{m_k}, y_L^{m_k}; x_R^{m_k}, y_R^{m_k}; x_0^{m_k}, y_0^{m_k}, N_k), k = 1, 2, ..., N_S, m_k = 1, 2, ..., N_k, judge whether adhesion occurs between the targets during their motion; all target boxes between which adhesion exists are grouped into the same class, i.e. they are said to belong to the same goal tree. In each frame, the region enclosed by the minimum bounding rectangle of each moving target is called one blob of the tree; if a frame contains multiple moving targets, that frame contains multiple blobs;
Step 3: describe the goal trees;
Step 4: sort the N goal trees obtained by the clustering of step 3
The N goal trees are arranged in descending order of length; for convenience of notation, the sorted goal trees are still denoted Tree_id, id = 1, 2, ..., N;
Step 5: design the summary-generation container and initialize it;
Step 6: place the first goal tree into the container;
Step 7: determine the position at which the goal tree enters the container;
Step 8: place the goal tree whose position has been determined into the container;
Step 9: judge whether goal trees remain to be placed into the container
If not, i.e. id = N + 1, the placement is finished; go to step 10.
Otherwise, take the next goal tree Tree_id and go to step 7;
Step 10: round the element values of the container; the result is the generated video summary.
The beneficial effect of the invention is that the method first extracts the moving-target trajectory data from the video, then re-plans these trajectories on the time axis, compressing the total length of the summary as much as possible while keeping the severity of trajectory collisions acceptable, and finally regenerates the moving targets into one summary video according to the newly planned routes. Specifically:
First, no moving-target information is lost, so all potentially useful information is retained to the greatest extent. Second, by re-planning the moving-target trajectories, the redundant segments without moving targets can be largely removed from the time axis, making the video summary as short as possible; the summary plays naturally and smoothly, providing the greatest convenience for fast browsing. Finally, the generated summary keeps the normal playback speed and mode, fully retaining the visual effect of the original video.
Brief description of the drawings
Fig. 1 is a schematic diagram of the box obtained in the inventive method by stacking one moving target's bounding rectangles along the time axis;
Fig. 2 is a schematic diagram of the moving-target trajectory relations for the video summary of the inventive method;
Fig. 3 is a schematic diagram of the trajectory-combination relations of the video summary of the inventive method;
Fig. 4 is a schematic diagram of moving-target trajectory adhesion and separation in the inventive method;
Fig. 5 is a schematic diagram of the moving-target centroid-motion relations in the inventive method;
Fig. 6 is a schematic diagram of the tree model of the moving targets in the inventive method;
Fig. 7 is a schematic diagram of the collision relations when a goal tree is placed into the container in the inventive method;
Fig. 8 is a schematic diagram of the blob collision relations in the inventive method.
Specific embodiment
The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
The video summary generation method based on tree-shaped moving-target trajectories of the invention belongs to dynamic video summarization: effective information is extracted from the original surveillance video and fused into one video skim, which contains the moving targets observed in the surveillance field of view.
The video summary generation method based on tree-shaped moving-target trajectories of the invention is implemented in the following steps:
Step 1: extract the moving targets from the surveillance video and obtain each moving target's trajectory box
The moving targets in the surveillance video are extracted by mixed-Gaussian background modeling; afterwards, motion tracking is performed by treating the boxes with the maximum overlapping area between adjacent frames as the same target. In each frame, each tracked moving-target region is represented by its minimum bounding rectangle.
(Note: the moving-target detection method and the tracking of moving targets are described in the relevant professional books and papers and are not repeated here.)
As shown in Fig. 1, the minimum bounding rectangle of each moving target in one frame image is a bounding rectangle, and the region of that frame enclosed by this rectangle is called a blob; stacking these blobs along the time axis forms a box, whose starting point is the frame in which the moving target is first found and whose end point is the frame before the moving target disappears from the surveillance field of view.
Assuming that each moving target appearing in the surveillance field of view is described by one box, each moving target is described as in formula (1):

O_k(x_L^{m_k}, y_L^{m_k}; x_R^{m_k}, y_R^{m_k}; x_0^{m_k}, y_0^{m_k}, N_k),  (1)

where O_k(·) denotes the k-th moving target, k = 1, 2, ..., N_S, m_k = 1, 2, ..., N_k; N_S is the total number of detected moving targets, and N_k is the number of consecutive frames of the k-th target;
(x_L^{m_k}, y_L^{m_k}) is the coordinate of the upper-left corner of the bounding rectangle of the k-th target in frame m_k;
(x_R^{m_k}, y_R^{m_k}) is the coordinate of the lower-right corner of the bounding rectangle of the k-th target in frame m_k;
(x_0^{m_k}, y_0^{m_k}) is the centroid coordinate of the k-th target in frame m_k. (Note: the computation of the centroid is covered in the relevant teaching materials and papers and is not repeated here.)
Figs. 2 and 3 show, respectively, the trajectories of the moving targets in the original video and, after the trajectory combination of the steps below, the target-trajectory schematic of the resulting video summary; the schematic omits the y-axis information where this does not hinder understanding. In Fig. 2, assume there are four targets A, B, C, D that are mutually independent, with no adhesion, occlusion, or crossing between them; after the redundancy among the four trajectories is removed, the video summary of Fig. 3 is obtained easily.
In an actual surveillance video, however, moving targets are often occluded or adhered, and these situations affect the extraction of the moving targets. As shown in Fig. 4, moving targets 1 and 2 on the left side of the field of view are not adhered, and they are labeled correctly during tracking. When the two targets move to the middle of the field of view, target 1 adheres to target 2, and they are labeled as moving target 3; when the two targets move to the right of the field of view, they separate again and are labeled as targets 4 and 5. The corresponding centroid motion is shown in Fig. 5. Thus two moving targets that appear simultaneously on the time axis of the original video are labeled as five moving targets. The subsequent trajectory re-planning would likely assign them to five different time periods, breaking their continuity and strongly degrading the visual effect of the final summary.
In this step, targets that adhere, cross, or occlude one another during extraction are described in the form of a tree and given the same label; the labeling result is shown in Fig. 6. In other words, a tree provides one continuous motion trajectory and is no longer required to contain only a single moving target;
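The step-1 tracking rule, assigning a detection the label of the previous-frame box it overlaps most, can be sketched as follows. This is a minimal illustration under assumed data structures (per-frame lists of bounding boxes), not the patent's implementation; the greedy assignment can in principle give two detections the same label, which is exactly the kind of ambiguity the goal trees of step 2 absorb.

```python
# Sketch: maximum-overlap tracking of per-frame bounding boxes.
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (xL, yT, xR, yB)

def overlap_area(a: Box, b: Box) -> int:
    # Intersection area of two axis-aligned rectangles (0 if disjoint).
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0

def track(frames: List[List[Box]]) -> List[List[Tuple[int, Box]]]:
    # Each detection inherits the ID of the previous-frame box it
    # overlaps most; a detection with no overlap starts a new target.
    next_id = 0
    prev: List[Tuple[int, Box]] = []
    out = []
    for dets in frames:
        cur = []
        for d in dets:
            best_id, best_area = None, 0
            for tid, p in prev:
                a = overlap_area(d, p)
                if a > best_area:
                    best_id, best_area = tid, a
            if best_id is None:
                best_id, next_id = next_id, next_id + 1
            cur.append((best_id, d))
        prev = cur
        out.append(cur)
    return out
```

A target's box, in the sense of formula (1), is then the per-frame sequence of rectangles sharing one ID.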
Step 2: cluster the moving-target boxes
For the detected moving targets O_k(x_L^{m_k}, y_L^{m_k}; x_R^{m_k}, y_R^{m_k}; x_0^{m_k}, y_0^{m_k}, N_k), k = 1, 2, ..., N_S, m_k = 1, 2, ..., N_k, judge whether adhesion occurs between the targets during their motion; as shown in Figs. 4 and 5, all target boxes between which adhesion exists are grouped into the same class, i.e. they are said to belong to the same goal tree. In each frame, the region enclosed by the minimum bounding rectangle of each moving target is called one blob of the tree; if a frame contains multiple moving targets, that frame contains multiple blobs;
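The adhesion clustering of step 2 can be sketched with a union-find pass over the tracked targets; treating "bounding rectangles overlap in a shared frame" as the adhesion test is an assumption made here for illustration, standing in for the patent's adhesion judgment.

```python
# Sketch: group targets into goal trees via union-find; two targets
# are merged when their rectangles overlap in some shared frame.
from itertools import combinations

def boxes_overlap(a, b):
    return (min(a[2], b[2]) > max(a[0], b[0])
            and min(a[3], b[3]) > max(a[1], b[1]))

def cluster_targets(targets):
    # targets: {target_id: {frame_no: (xL, yT, xR, yB)}}
    parent = {k: k for k in targets}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in combinations(targets, 2):
        shared = set(targets[a]) & set(targets[b])
        if any(boxes_overlap(targets[a][f], targets[b][f]) for f in shared):
            parent[find(a)] = find(b)

    trees = {}
    for k in targets:
        trees.setdefault(find(k), []).append(k)
    return list(trees.values())
```

Each returned group of target IDs corresponds to one goal tree; the blobs of the tree in a given frame are the rectangles of all its members in that frame.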
Step 3: describe the goal trees
First, define the parameter set Tree_id that uniquely identifies a goal tree; its model is shown in formula (2):

Tree_id = (t_start^{id}, t_end^{id}, {Block_t^{id}}, t = 1, 2, ..., Δt_id),  (2)

where id = 1, 2, ..., N is the number of the goal tree,
N is the total number of goal trees obtained in step 2;
t_start^{id} is the starting frame number, in the original video, of the moving targets in this goal tree,
t_end^{id} is its ending frame number in the original video,
Δt_id represents the length of the goal tree;
{Block_t^{id}} is the blob set, and n_t^{id} is the number of blobs of the goal tree in frame t, i.e. the number of moving targets detected by step 1; it consists of the regions enclosed, in the corresponding frame, by the bounding rectangles of all the moving-target boxes grouped into the same goal tree by step 2.
The information of each blob Block_t^{i_b} is described as:

Block_t^{i_b} = (s_t^{i_b}, Rect_t^{i_b}, r_t^{i_b}, {pixel_t^{i_b}}),  (3)

where i_b = 1, 2, ..., n_t^{id}, t = 1, 2, ..., Δt_id;
s_t^{i_b} is the frame number, in the original video, of the i_b-th blob of frame t of the goal tree;
Rect_t^{i_b} is the coordinates, in the original video, of the upper-left and lower-right corners of the i_b-th blob's bounding rectangle, i.e. the blob's region coordinates in the original video frame;
r_t^{i_b} is 1/2 of the length of the longer side of the i_b-th blob's maximum bounding rectangle;
{pixel_t^{i_b}} is the pixel values of the i_b-th blob's region;
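A minimal data model for the parameter sets of formulas (2) and (3) might look as follows; the field names mirror the symbols (s, Rect, r, pixel, t_start, t_end), but the class layout and the helper for r are illustrative assumptions, not the patent's data structures.

```python
# Sketch: goal-tree parameter sets of formulas (2) and (3).
from dataclasses import dataclass
from typing import List, Tuple

def half_long_side(rect: Tuple[int, int, int, int]) -> float:
    # r_t^{i_b}: half the longer side of the blob's bounding rectangle.
    return max(rect[2] - rect[0], rect[3] - rect[1]) / 2

@dataclass
class Blob:                          # Block_t^{i_b}, formula (3)
    s: int                           # frame number in the original video
    rect: Tuple[int, int, int, int]  # Rect: (xL, yT, xR, yB)
    r: float                         # half the longer rectangle side
    pixels: list                     # pixel values of the blob region

@dataclass
class GoalTree:                      # Tree_id, formula (2)
    t_start: int                     # starting frame in the original video
    t_end: int                       # ending frame in the original video
    blobs: List[List[Blob]]          # blobs[t] = blob set of relative frame t

    @property
    def length(self) -> int:         # Δt_id, the tree length
        return len(self.blobs)
```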
Step 4: sort the N goal trees obtained by the clustering of step 3
Considering that the length of a goal tree is a principal factor influencing the total length of the final summary, the N goal trees are first arranged in descending order of length; for convenience of notation, the sorted goal trees are still denoted Tree_id, id = 1, 2, ..., N;
Step 5: design the summary-generation container and initialize it
The so-called summary-generation container is the three-dimensional array into which all the goal trees generated by step 3 are fused to produce one video summary. Two of its dimensions represent the size of a summary frame image, identical to the original video frame size; the remaining dimension represents time. Together they constitute the data form of a video: a sequence of image frames changing over time.
Initializing this three-dimensional array is called building an empty container; see formula (4):

C = [c_{i,j,l}]_{m×n×Δt_C},  (4)

where c_{i,j,l} = 0, i = 1, 2, ..., m, j = 1, 2, ..., n, l = 1, 2, ..., Δt_C; the size of the array is m × n × Δt_C; m is the number of rows of a video frame image, n is the number of columns, and Δt_C is the length of the container.
At initialization, set Δt_C = Δt_max, where Δt_max = max{Δt_id | id = 1, 2, ..., N}; that is, the length of the empty container is chosen as the length of the longest goal tree. After the sorting of step 4, Δt_max = Δt_1.
Set id = 1; the first goal tree enters the container at the position t_start = 1;
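Steps 4 and 5 reduce to a descending sort and a zero-filled three-dimensional array as in formula (4); the sketch below assumes trees are carried as (length, payload) pairs and uses a numpy array for the container, which are representation choices made here, not prescribed by the patent.

```python
# Sketch: step 4 (descending sort by length) and step 5 (empty container).
import numpy as np

def sort_trees(trees):
    # trees: list of (length, payload) pairs, sorted by descending length.
    return sorted(trees, key=lambda tr: tr[0], reverse=True)

def make_container(m, n, trees):
    # Formula (4): C = [c_{i,j,l}] of size m x n x dt_C, all zeros,
    # with dt_C initialized to the longest tree length (dt_max).
    dt_max = max(tr[0] for tr in trees)
    return np.zeros((m, n, dt_max))
```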
Step 6: place the first goal tree into the container
The longest goal tree, Tree_1, is put into the container; the elements of the container that are not 0 should be all the blobs of this goal tree, giving formula (5):

c_{x,y,l} = pixel_l^{i_b},  (5)

where (x, y) ranges over the region of each blob and l = 1, 2, ..., Δt_1.
Afterwards, set id = id + 1, select the next goal tree, and go to step 7 to determine its entry position;
Step 7: determine the position at which the goal tree enters the container
7.1) compute the per-frame blob collision between the goal tree to be placed and the goal trees already placed in the container
When a new goal tree is placed, the collision detection against the goal trees already placed in the container proceeds as follows. In Fig. 7, the part below region a is the goal trees already placed in the container, and the goal tree above the arrow is the goal tree to be placed. As shown in region b of Fig. 7, on the right side of the container, the goal tree to be placed is moved upward starting from the bottom of the container; that is, within the container range t_start ∈ [1, Δt_C], starting from t_start = 1, the collision is computed, and as in region c of Fig. 7, the goal tree is placed once the collision state meets the acceptance level.
The blob collision between the goal tree to be placed and the placed goal trees in each frame is illustrated in Fig. 8: the frames labeled with the digits "1, 2, 3" are the blobs, in a certain frame (say frame t), of the goal trees already placed in the container, and the frames labeled with the letters "A, B, C, D" are the blobs of the goal tree to be placed in this frame. A collision between an entering blob and a blob of a placed goal tree means that the positions of the two blobs in the frame image are partially or completely identical. In Fig. 8, blob D has no collision, blobs B and C have slight collisions, and blob A has a serious collision with digit frame 2. A slight collision has little influence on the visual effect because the overlap is small, and is therefore regarded as an allowed collision.
Let the blob set of the goal tree to be placed in frame t_a be {Block_{t_a}^{i_a}}, i_a = 1, 2, ..., n_{t_a}^{id}; the blob set, in frame t_a, of the goal trees already placed in the container is {Block_{t_a}^{j_a}}, j_a = 1, 2, ..., n_{t_a}^{C}.
Assume blob Block_{t_a}^{i_a} in the first set has centroid coordinate (x_0^{i_a}, y_0^{i_a}), and blob Block_{t_a}^{j_a} in the second set has centroid coordinate (x_0^{j_a}, y_0^{j_a}).
The criterion for whether two blobs collide is formula (6):

Col_{t_a}(i_a, j_a) = 1 if sqrt((x_0^{i_a} - x_0^{j_a})^2 + (y_0^{i_a} - y_0^{j_a})^2) ≤ |r_{t_a}^{i_a} - r_{t_a}^{j_a}|; 0 otherwise,  (6)

where i_a = 1, 2, ..., n_{t_a}^{id}, j_a = 1, 2, ..., n_{t_a}^{C}. Then, in frame t_a, the criterion for whether the goal tree to be placed collides with the placed goal trees in the container, Colli(t_a), is formula (7):

Colli(t_a) = 1 if Σ_{i_a=1}^{n_{t_a}^{id}} Σ_{j_a=1}^{n_{t_a}^{C}} Col_{t_a}(i_a, j_a) > 0; 0 otherwise;  (7)

7.2) compute the frame-by-frame collision between the goal tree to be placed and the container
Since the length of the goal tree to be placed is Δt_id, the container positions against which it is checked are t_start, t_start + 1, ..., t_start − 1 + Δt_id. For all frames of the goal tree to be placed, t_a = 1, 2, ..., Δt_id, compute frame by frame, according to step 7.1), the blob collision Colli(t_a) of each frame, t_a = 1, 2, ..., Δt_id;
7.3) compute the overall collision rate between the goal tree to be placed and the container
With the starting point of the goal tree to be placed at position t_start, the overall collision rate ρ_{t_start} against the placed goal trees is computed as formula (8):

ρ_{t_start} = (1/Δt_id) Σ_{t_a=1}^{Δt_id} Colli(t_a);  (8)

7.4) judge the position where the goal tree to be placed can be put
The collision rate ρ_{t_start} computed by formula (8) has the range 0 ≤ ρ_{t_start} ≤ 1. The user sets the collision-rate threshold ρ_Th according to the acceptable density and the requirement on the summary length; a preferred empirical value is ρ_Th = 1/3.
If ρ_{t_start} = 0, the goal tree to be placed has no collision with the placed goal trees in the container; if 0 < ρ_{t_start} ≤ ρ_Th, the collision situation is acceptable. In either case, the goal tree to be placed is put at the position t_start in the container; go to step 8 to place the goal tree into the container.
If ρ_{t_start} = 1, every frame collides, which would strongly affect the visual effect; if ρ_{t_start} > ρ_Th, the collision situation is unacceptable. In either case, the collision-computation position of the goal tree to be placed must be changed: set t_start = t_start + 1 (i.e. the position of the goal tree to be placed is moved backward by one frame) and return to step 7.1) to recompute the collision rate, until a position t_start satisfying ρ_{t_start} ≤ ρ_Th is found;
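Formulas (6) through (8) and the sliding search of step 7.4) can be sketched as below; representing a blob as a (centroid, radius) pair and the placed trees as a frame-indexed dict of such pairs are assumptions made here for illustration. The collision test mirrors formula (6): a collision is flagged only when the centroid distance is within abs(r1 - r2), so slight overlaps are tolerated.

```python
# Sketch: blob collision (6), per-frame collision (7), collision
# rate (8), and the search for an acceptable start position (7.4).
import math

def blob_collides(c1, r1, c2, r2):
    # Formula (6): centroid distance at most |r1 - r2| counts as collision.
    d = math.hypot(c1[0] - c2[0], c1[1] - c2[1])
    return d <= abs(r1 - r2)

def frame_collision(blobs_new, blobs_placed):
    # Formula (7): 1 if any pair of blobs in this frame collides, else 0.
    return int(any(blob_collides(c1, r1, c2, r2)
                   for c1, r1 in blobs_new
                   for c2, r2 in blobs_placed))

def collision_rate(new_tree, placed, t_start):
    # Formula (8): fraction of the new tree's frames that collide when
    # its first frame is aligned with container frame t_start.
    # new_tree[ta] and placed[frame] are lists of (centroid, radius).
    hits = sum(frame_collision(new_tree[ta], placed.get(t_start + ta, []))
               for ta in range(len(new_tree)))
    return hits / len(new_tree)

def find_start(new_tree, placed, rho_th=1/3):
    # Step 7.4: slide t_start forward until the rate drops to rho_th.
    t = 0
    while collision_rate(new_tree, placed, t) > rho_th:
        t += 1
    return t
```

With a finite set of placed blobs the search always terminates, since sufficiently large t_start frames are empty.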
Step 8: place the goal tree whose position has been determined into the container
Put the goal tree Tree_id into the container at the starting position t_start determined by step 7. First adjust the container length: if t_start + Δt_id > Δt_C, then set Δt_C = Δt_id + t_start; otherwise the container length remains unchanged. The element values of the container are computed as formula (9):

c_{x,y,l} = pixel_l^{i_b} if c_{x,y,l} = 0; otherwise c_{x,y,l} = mean(c_{x,y,l}, pixel_l^{i_b}),  (9)

where the second case denotes taking the average of the placed goal tree's blob pixel value and the non-zero value already at the corresponding position in the container.
After the goal tree is placed, set id = id + 1, i.e. the sequence number of the goal tree is increased by 1;
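The placement rule of formula (9), keeping a pixel value where the container is empty and averaging where it is occupied, together with the container growth of step 8, can be sketched as follows; the (x, y, value) pixel-list encoding of a tree frame is an assumption made here for illustration.

```python
# Sketch: step 8 placement with container growth and formula (9).
import numpy as np

def place_tree(container, tree_frames, t_start):
    # tree_frames[t] is a list of (x, y, value) pixels of relative frame t.
    dt_id = len(tree_frames)
    m, n, dt_c = container.shape
    if t_start + dt_id > dt_c:
        # Grow the container along the time axis when the tree overhangs.
        grown = np.zeros((m, n, t_start + dt_id))
        grown[:, :, :dt_c] = container
        container = grown
    for t, pixels in enumerate(tree_frames):
        for x, y, v in pixels:
            cur = container[x, y, t_start + t]
            # Formula (9): keep the value if the cell is empty,
            # otherwise average with the value already placed there.
            container[x, y, t_start + t] = v if cur == 0 else (cur + v) / 2
    return container
```

Step 10's rounding would then amount to a final np.rint(container) before the summary frames are written out.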
Step 9: judge whether goal trees remain to be placed into the container
If not, i.e. id = N + 1, the placement is finished; go to step 10.
Otherwise, take the next goal tree Tree_id and go to step 7;
Step 10: round the element values of the container and output; the result is the generated video summary.

Claims (6)

1. A video summary generation method based on tree-shaped moving-target trajectories, characterized in that it is implemented in the following steps:
Step 1: extract the moving targets from the surveillance video and obtain each moving target's trajectory box
The moving targets in the surveillance video are extracted by mixed-Gaussian background modeling; afterwards, motion tracking is performed by treating the boxes with the maximum overlapping area between adjacent frames as the same target. In each frame, each tracked moving-target region is represented by its minimum bounding rectangle.
The minimum bounding rectangle of each moving target in one frame image is a bounding rectangle, and the region of that frame enclosed by this rectangle is called a blob; stacking these blobs along the time axis forms a box, whose starting point is the frame in which the moving target is first found and whose end point is the frame before the moving target disappears from the surveillance field of view.
Assuming that each moving target appearing in the surveillance field of view is described by one box, each moving target is described as in formula (1):

O_k(x_L^{m_k}, y_L^{m_k}; x_R^{m_k}, y_R^{m_k}; x_0^{m_k}, y_0^{m_k}, N_k),  (1)

where O_k(·) denotes the k-th moving target, k = 1, 2, ..., N_S, m_k = 1, 2, ..., N_k; N_S is the total number of detected moving targets, and N_k is the number of consecutive frames of the k-th target;
(x_L^{m_k}, y_L^{m_k}) is the coordinate of the upper-left corner of the bounding rectangle of the k-th target in frame m_k;
(x_R^{m_k}, y_R^{m_k}) is the coordinate of the lower-right corner of the bounding rectangle of the k-th target in frame m_k;
(x_0^{m_k}, y_0^{m_k}) is the centroid coordinate of the k-th target in frame m_k.
In this step, targets that adhere, cross, or occlude one another during extraction are described in the form of a tree and given the same label; that is, a tree provides one continuous motion trajectory and is no longer required to contain only a single moving target;
Step 2: cluster the moving-target boxes
For the detected moving targets O_k(x_L^{m_k}, y_L^{m_k}; x_R^{m_k}, y_R^{m_k}; x_0^{m_k}, y_0^{m_k}, N_k), k = 1, 2, ..., N_S, m_k = 1, 2, ..., N_k, judge whether adhesion occurs between the targets during their motion; all target boxes between which adhesion exists are grouped into the same class, i.e. they are said to belong to the same goal tree. In each frame, the region enclosed by the minimum bounding rectangle of each moving target is called one blob of the tree; if a frame contains multiple moving targets, that frame contains multiple blobs;
Step 3: describe the goal trees;
Step 4: sort the N goal trees obtained by the clustering of step 3
The N goal trees are arranged in descending order of length; for convenience of notation, the sorted goal trees are still denoted Tree_id, id = 1, 2, ..., N;
Step 5: design the summary-generation container and initialize it;
Step 6: place the first goal tree into the container;
Step 7: determine the position at which the goal tree enters the container;
Step 8: place the goal tree whose position has been determined into the container;
Step 9: judge whether goal trees remain to be placed into the container
If not, i.e. id = N + 1, the placement is finished; go to step 10.
Otherwise, take the next goal tree Tree_id and go to step 7;
Step 10: round the element values of the container; the result is the generated video summary.
2. The video summary generation method based on tree-shaped moving-target trajectories according to claim 1, characterized in that: in said step 3, the parameter set Tree_id uniquely identifying a goal tree is defined, with the expression of Tree_id shown in formula (2):

Tree_id = (t_start^{id}, t_end^{id}, {Block_t^{id}}, t = 1, 2, ..., Δt_id),  (2)

where id = 1, 2, ..., N is the number of the goal tree,
N is the total number of goal trees obtained in step 2;
t_start^{id} is the starting frame number, in the original video, of the moving targets in this goal tree,
t_end^{id} is its ending frame number in the original video,
Δt_id represents the length of the goal tree;
{Block_t^{id}} is the blob set, and n_t^{id} is the number of blobs of the goal tree in frame t, i.e. the number of moving targets detected by step 1; it consists of the regions enclosed, in the corresponding frame, by the bounding rectangles of all the moving-target boxes grouped into the same goal tree by step 2.
The information of each blob Block_t^{i_b} is described as:

Block_t^{i_b} = (s_t^{i_b}, Rect_t^{i_b}, r_t^{i_b}, {pixel_t^{i_b}}),  (3)

where i_b = 1, 2, ..., n_t^{id}, t = 1, 2, ..., Δt_id;
s_t^{i_b} is the frame number, in the original video, of the i_b-th blob of frame t of the goal tree;
Rect_t^{i_b} is the coordinates, in the original video, of the upper-left and lower-right corners of the i_b-th blob's bounding rectangle, i.e. the blob's region coordinates in the original video frame;
r_t^{i_b} is 1/2 of the length of the longer side of the i_b-th blob's maximum bounding rectangle;
{pixel_t^{i_b}} is the pixel values of the i_b-th blob's region.
3. The video summary generation method based on tree-shaped moving-target trajectories according to claim 2, characterized in that: in said step 5, the so-called summary-generation container is the three-dimensional array into which all the goal trees generated by step 3 are fused to produce one video summary; two of its dimensions represent the size of a summary frame image, identical to the original video frame size, and the remaining dimension represents time, together constituting the data form of a video: a sequence of image frames changing over time.
Initializing this three-dimensional array is called building an empty container; see formula (4):

C = [c_{i,j,l}]_{m×n×Δt_C},  (4)

where c_{i,j,l} = 0, i = 1, 2, ..., m, j = 1, 2, ..., n, l = 1, 2, ..., Δt_C; the size of the array is m × n × Δt_C; m is the number of rows of a video frame image, n is the number of columns, and Δt_C is the length of the container.
At initialization, set Δt_C = Δt_max, where Δt_max = max{Δt_id | id = 1, 2, ..., N}; that is, the length of the empty container is chosen as the length of the longest goal tree. After the sorting of step 4, Δt_max = Δt_1.
Set id = 1; the first goal tree enters the container at the position t_start = 1.
4. The video summary generation method based on tree-shaped moving-target trajectories according to claim 3, characterized in that: in said step 6, the longest goal tree, Tree_1, is put into the container; the elements of the container that are not 0 should be all the blobs of this goal tree, giving formula (5):

c_{x,y,l} = pixel_l^{i_b},  (5)

where (x, y) ranges over the region of each blob and l = 1, 2, ..., Δt_1.
Afterwards, set id = id + 1, select the next goal tree, and go to step 7 to determine its entry position.
5. The video abstraction generating method based on tree-shaped movement objective orbit according to claim 4, characterised in that: step 7 specifically comprises the following sub-steps:
7.1) Compute the per-frame blob collision between the goal tree to be placed and the goal trees already placed in the container.
When a new goal tree is placed, the collision detection against the goal trees already in the container moves the goal tree to be placed upward from the bottom of the container: over the container range t_{start} \in [1, \Delta t_C], starting from t_{start} = 1, the collision is computed, and the goal tree enters at the position whose collision status meets the acceptance level.
A blob collision in a given frame between the goal tree to be placed and an already-placed goal tree means that, at entry, a blob of each tree occupies partially or fully identical positions in the frame image; this is deemed a collision. A slight collision, whose overlap is not severe, does not noticeably affect visual observation and is therefore regarded as an allowable collision.
Suppose the blob set of the goal tree to be placed in frame t_a is \{P_{t_a}^{i_a}\}, i_a = 1, 2, ..., n_{t_a}^{id}, and the blob set of the goal trees already placed in the container in frame t_a is \{P_{t_a}^{j_a}\}, j_a = 1, 2, ..., n_{t_a}^{C}.
Suppose blob P_{t_a}^{i_a} has centroid coordinates (x_0^{i_a}, y_0^{i_a}) and equivalent radius r_t^{i_a}, and blob P_{t_a}^{j_a} has centroid coordinates (x_0^{j_a}, y_0^{j_a}) and equivalent radius r_t^{j_a}.
The criterion for whether two blobs collide is formula (6):
Col_{t_a}(i_a, j_a) = \begin{cases} 1, & \sqrt{(x_0^{i_a} - x_0^{j_a})^2 + (y_0^{i_a} - y_0^{j_a})^2} \le |r_t^{i_a} - r_t^{j_a}| \\ 0, & \sqrt{(x_0^{i_a} - x_0^{j_a})^2 + (y_0^{i_a} - y_0^{j_a})^2} > |r_t^{i_a} - r_t^{j_a}| \end{cases}    (6)
Wherein i_a = 1, 2, ..., n_{t_a}^{id} and j_a = 1, 2, ..., n_{t_a}^{C}. Then, in frame t_a, the criterion for whether the goal tree to be placed collides with the placed goal trees in the container, Colli(t_a), is formula (7):
Colli(t_a) = \begin{cases} 1, & \sum_{i_a=1}^{n_{t_a}^{id}} \sum_{j_a=1}^{n_{t_a}^{C}} Col_{t_a}(i_a, j_a) > 0 \\ 0, & \sum_{i_a=1}^{n_{t_a}^{id}} \sum_{j_a=1}^{n_{t_a}^{C}} Col_{t_a}(i_a, j_a) = 0 \end{cases}    (7)
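The pairwise test of formula (6) and the per-frame criterion of formula (7) can be sketched as follows (names hypothetical; each blob is reduced to its centroid (x_0, y_0) and radius r, with the radius comparison written exactly as in formula (6)):

```python
import math

def blobs_collide(a, b):
    """Formula (6): a, b are (x0, y0, r) centroid-and-radius tuples."""
    d = math.hypot(a[0] - b[0], a[1] - b[1])   # centroid distance
    return d <= abs(a[2] - b[2])

def frame_collides(new_blobs, placed_blobs):
    """Formula (7): Colli(t_a) = 1 iff any blob pair of the frame collides."""
    return int(any(blobs_collide(a, b)
                   for a in new_blobs for b in placed_blobs))

print(frame_collides([(0, 0, 5)], [(1, 0, 1)]))  # 1: distance 1 <= |5 - 1|
print(frame_collides([(0, 0, 2)], [(9, 0, 1)]))  # 0: distance 9 >  |2 - 1|
```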
7.2) Compute the frame-by-frame collision between the goal tree to be placed and the placed goal trees in the container.
Since the length of the goal tree to be placed is \Delta t_{id}, the positions it would occupy against the placed goal trees of the container are t_{start}, t_{start} + 1, ..., t_{start} - 1 + \Delta t_{id}. For all frames of the goal tree to be placed, i.e. t_a = 1, 2, ..., \Delta t_{id}, compute the per-frame blob collision Colli(t_a) of step 7.1) frame by frame.
7.3) Compute the overall collision rate between the goal tree to be placed and the placed goal trees in the container.
With its starting point at position t_{start}, the overall collision rate \rho_C^{t_{start}} between the goal tree to be placed and the placed goal trees in the container is given by formula (8):
\rho_C^{t_{start}} = \frac{\sum_{t_a=1}^{\Delta t_{id}} Colli(t_a)}{\Delta t_{id}}    (8)
7.4) position that goal tree to be arranged can be placed is judged
According to the collision rate that formula (8) is calculatedSpan be
User is needed according to acceptable dense degree, and the requirement to video frequency abstract length, sets collision rate threshold value ρTh,
IfRepresent that goal tree to be arranged does not exist collision with the goal tree of row in container, ifTable Bright collision situation belongs to acceptable degree, at this moment, goal tree to be arranged is discharged into the t in containerstartPosition on, go to step 8 Realization enters in container the goal tree;
IfShow that every frame has collision, at this moment can largely effect on visual effect, ifShow collision Situation belongs to unacceptable degree, needs to change the calculating position of collision of goal tree to be arranged for this, even tstart=tstart+ 1, that is, show that a frame is moved in the position of goal tree to be arranged backward, the calculating for 7.1) carrying out collision rate again is gone to step, until finding full FootPosition tstart
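Sub-steps 7.2)-7.4) amount to a linear search over candidate start positions; a sketch under the same blob representation as above (names hypothetical, 0-based indices):

```python
import math

def blobs_collide(a, b):                      # formula (6), circle test
    return math.hypot(a[0] - b[0], a[1] - b[1]) <= abs(a[2] - b[2])

def frame_collides(new_blobs, placed_blobs):  # formula (7)
    return int(any(blobs_collide(a, b)
                   for a in new_blobs for b in placed_blobs))

def find_start(tree_frames, container_frames, rho_th):
    """Slide t_start forward one frame at a time (step 7.4) until the
    overall collision rate of formula (8) is at most rho_th.
    tree_frames[t_a], container_frames[t]: per-frame (x0, y0, r) blob lists."""
    dt_id = len(tree_frames)
    for t_start in range(len(container_frames)):
        hits = sum(frame_collides(blobs, container_frames[t_start + t_a])
                   if t_start + t_a < len(container_frames) else 0
                   for t_a, blobs in enumerate(tree_frames))
        if hits / dt_id <= rho_th:            # formula (8)
            return t_start
    return len(container_frames)              # fall back past the container end

container = [[(0, 0, 5)], [(0, 0, 5)], [], []]   # placed blobs, 4 frames
tree = [[(1, 0, 1)], [(1, 0, 1)]]                # tree to place, 2 frames
print(find_start(tree, container, rho_th=0.0))   # 2: first collision-free window
```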
6. The video abstraction generating method based on tree-shaped movement objective orbit according to claim 5, characterised in that: in step 8, the goal tree Tree_id is put into the container at the starting position t_{start} determined by step 7. First the length of the container is adjusted: if t_{start} + \Delta t_{id} > \Delta t_C, then set \Delta t_C = \Delta t_{id} + t_{start}; otherwise the container length remains unchanged. The element values in the container are calculated as in formula (9):
c_{x_a,y_a,t_a} = \begin{cases} pixel_{t_a}^{i_a}, & c_{x_a,y_a,t_a} = 0 \\ (c_{x_a,y_a,t_a} + pixel_{t_a}^{i_a}) / 2, & c_{x_a,y_a,t_a} \cdot pixel_{t_a}^{i_a} \ne 0 \end{cases}    (9)
Wherein (x_a, y_a) ranges over the image coordinates covered by blob i_a of the placed goal tree in its t_a-th frame; the second case takes the average of the blob pixel value of the goal tree being placed and the nonzero value at the corresponding position in the container.
After this goal tree has been placed, set id = id + 1, i.e. the sequence number of the goal tree is incremented by 1.
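A sketch of formula (9) (names hypothetical): the blob pixels of the placed goal tree are written into the container, averaging wherever a position is already occupied by an earlier goal tree:

```python
import numpy as np

def merge_tree(C, tree_blobs, t_start):
    """Formula (9): tree_blobs[t_a] is an iterable of (x, y, value);
    t_start is the 0-based entry position found in step 7."""
    for t_a, blobs in enumerate(tree_blobs):
        for x, y, value in blobs:
            if C[x, y, t_start + t_a] == 0:
                C[x, y, t_start + t_a] = value                          # empty slot
            else:                                                       # overlap:
                C[x, y, t_start + t_a] = (C[x, y, t_start + t_a] + value) / 2
    return C

C = np.zeros((2, 2, 3))
C[0, 0, 1] = 0.5                                  # pixel of an earlier goal tree
C = merge_tree(C, [[(0, 0, 0.75)]], t_start=1)
print(C[0, 0, 1])   # 0.625: average of the existing 0.5 and the new 0.75
```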
CN201410755692.0A 2014-12-10 2014-12-10 A kind of video abstraction generating method based on tree-shaped movement objective orbit Expired - Fee Related CN104469547B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410755692.0A CN104469547B (en) 2014-12-10 2014-12-10 A kind of video abstraction generating method based on tree-shaped movement objective orbit


Publications (2)

Publication Number Publication Date
CN104469547A CN104469547A (en) 2015-03-25
CN104469547B true CN104469547B (en) 2017-06-06

Family

ID=52914792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410755692.0A Expired - Fee Related CN104469547B (en) 2014-12-10 2014-12-10 A kind of video abstraction generating method based on tree-shaped movement objective orbit

Country Status (1)

Country Link
CN (1) CN104469547B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460032A (en) * 2017-02-17 2018-08-28 杭州海康威视数字技术股份有限公司 A kind of generation method and device of video frequency abstract
CN109101646B (en) * 2018-08-21 2020-12-18 北京深瞐科技有限公司 Data processing method, device, system and computer readable medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1320067A1 (en) * 2001-12-13 2003-06-18 Microsoft Corporation Universal graphics adapter
CN101539925A (en) * 2008-03-20 2009-09-23 中国科学院计算技术研究所 Audio/video file-abstracting method based on attention-degree analysis
EP2207111A1 (en) * 2009-01-08 2010-07-14 Thomson Licensing SA Method and apparatus for generating and displaying a video abstract
CN102156707A (en) * 2011-02-01 2011-08-17 刘中华 Video abstract forming and searching method and system
CN102819528A (en) * 2011-06-10 2012-12-12 中国电信股份有限公司 Method and device for generating video abstraction
CN102930061A (en) * 2012-11-28 2013-02-13 安徽水天信息科技有限公司 Video abstraction method and system based on moving target detection
CN103413330A (en) * 2013-08-30 2013-11-27 中国科学院自动化研究所 Method for reliably generating video abstraction in complex scene
CN104063883A (en) * 2014-07-07 2014-09-24 杭州银江智慧医疗集团有限公司 Surveillance video abstract generating method based on combination of object and key frames


Also Published As

Publication number Publication date
CN104469547A (en) 2015-03-25


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170606

Termination date: 20201210