CN106446820A - Background feature point identification method and device in dynamic video editing - Google Patents
- Publication number: CN106446820A
- Application number: CN201610833676.8A
- Authority: CN (China)
- Prior art keywords: point, time window, graph, characteristic, node
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
Abstract
The invention discloses a background feature point identification method and device for dynamic video editing, capable of accurately identifying background feature points in a video in highly dynamic scenes. The method comprises the following steps: S1, dividing the video into a plurality of overlapping time windows, and classifying the feature points within each time window based on motion features; S2, regarding each feature point class within a time window as a graph node, adding edges between graph nodes in adjacent time windows that share no fewer than a first number of common feature points to construct a weighted directed graph, and finding the globally optimal path with the minimum sum of edge weights using a dynamic programming algorithm, wherein the edge weights of the weighted directed graph depend on the rank of the motion trajectory matrix and the number of common feature points; and S3, reclassifying the non-background points according to color and spatial features, and adding the points whose motion is consistent with the existing background points to the background point set.
Description
Technical field
The present invention relates to the field of dynamic video editing, and in particular to a background feature point identification method and device for dynamic video editing.
Background technology
Estimating the camera's motion path from a video sequence is a fundamental task in many video editing and video enhancement applications. For example, video shot with a handheld camera often exhibits unstable, undirected camera motion that makes the viewing experience poor; video stabilization aims to solve this problem. Extracting sparse feature points in order to estimate the original camera path is an important first step in current video stabilization methods, and is also a key step in fast video editing and propagation. For example, to insert a new object into the background of a video shot by a moving camera, once a reliable camera path has been estimated, the user can simply place the object in the first frame and it will then be propagated automatically through the remaining video sequence.
In previous research, the camera path is usually estimated by extracting sparse feature points and using them to compute the transformation between adjacent frames, such as a homography. Previous work makes a common assumption: the extracted feature points lie mainly in the static background region of the video, so that their inter-frame displacement comes only from camera motion. To improve robustness, the RANSAC method is often used to filter out outlier feature points, but this simple screening is not sufficient for dynamic videos containing a large number of moving objects. In such videos the background is heavily occluded, so feature points tend instead to lie on the moving objects. Moreover, if both the objects and the camera move vigorously, the background visible in the video changes constantly, which makes long-term background tracking impossible. Lacking a more robust feature screening method, current video editing applications cannot correctly estimate the camera path in such scenes.
Content of the invention
In view of the shortcomings and defects of the prior art, the present invention provides a background feature point identification method and device for dynamic video editing.
In one aspect, an embodiment of the present invention proposes a background feature point identification method for dynamic video editing, comprising:
S1, dividing the video into a plurality of overlapping time windows, and classifying the feature points within each time window based on motion features;
S2, regarding each feature point class within a time window as a graph node, adding edges between graph nodes in adjacent time windows that share no fewer than a first number of common feature points to construct a weighted directed graph, and finding the globally optimal path with the minimum sum of edge weights in the graph model using a dynamic programming algorithm, wherein the edge weights of the weighted directed graph depend on the rank of the motion trajectory matrix and the number of common feature points;
S3, reclassifying the non-background points according to color and spatial features, and adding the points whose motion is consistent with the existing background points to the background point set.
In another aspect, an embodiment of the present invention proposes a background feature point identification device for dynamic video editing, comprising:
a classification unit, configured to divide the video into a plurality of overlapping time windows and classify the feature points within each time window based on motion features;
a search unit, configured to regard each feature point class within a time window as a graph node, add edges between graph nodes in adjacent time windows that share no fewer than a first number of common feature points to construct a weighted directed graph, and find the globally optimal path with the minimum sum of edge weights in the graph model using a dynamic programming algorithm, wherein the edge weights of the weighted directed graph depend on the rank of the motion trajectory matrix and the number of common feature points;
an adding unit, configured to reclassify the non-background points according to color and spatial features, and add the points whose motion is consistent with the existing background points to the background point set.
Traditional background extraction and motion analysis methods make many idealized assumptions, e.g. that the camera is static, that a large number of feature points can be tracked for a long time, and that moving objects are few and small. Unlike conventional methods, the present invention makes two mild assumptions based on real scenes: first, part of the background is always visible in the video; second, over a short period of time, some background feature points can be extracted and tracked. The present invention builds on a well-known result in computer vision, namely that feature points with different motion states can be regarded as lying in different linear subspaces, and is broadly divided into two stages, local motion analysis and global optimization, followed by a final refinement of the background labels.
In accordance with the foregoing assumptions, and starting from the feature point sequences obtained by existing video feature point extraction and tracking, the present invention completes the identification of the background feature points in three phases. In the local motion analysis phase, a long dynamic video is split into multiple overlapping time windows, local motion analysis is performed in each window, and the feature points contained in each local window are classified. In the global optimization phase, each feature point subclass in each window is regarded as a graph node, and a spatio-temporal graph optimization then yields a background feature point sequence running through the whole video; among the contained feature point classes, this sequence has the lowest motion complexity. In the background label refinement phase, the remaining feature points are reclassified, and points whose motion is consistent with the existing background points are added to the background point set; this addresses the problem that background feature points may exhibit different motions and thus be split into different classes, so that only part of the background feature points are correctly labeled. As an advanced feature screening tool, the present invention can directly estimate the camera path more robustly and significantly improve the performance of existing video editing methods in complex scenes; it can be applied to many important tasks such as video stabilization, background reconstruction and video object composition.
Brief description of the drawings
Fig. 1 is a schematic flowchart of an embodiment of the background feature point identification method in dynamic video editing of the present invention;
Fig. 2 is a partial schematic flowchart of another embodiment of the background feature point identification method in dynamic video editing of the present invention;
Fig. 3 is a schematic structural diagram of an embodiment of the background feature point identification device in dynamic video editing of the present invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below with reference to the accompanying drawings. Clearly, the described embodiments are a part rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.
Referring to Fig. 1, this embodiment discloses a background feature point identification method in dynamic video editing, comprising:
S1, dividing the video into a plurality of overlapping time windows, and classifying the feature points within each time window based on motion features;
The method of the present invention focuses on finding the background part among the feature points; any reliable feature point detection and tracking algorithm can be used for initialization. For example, the mature KLT method can serve as the basic method for feature point extraction and tracking. This yields the two-dimensional coordinates of each feature point in each frame. In a long dynamic video, feature points frequently disappear and new feature points constantly appear. For frames in which a feature point does not appear, a special value such as -1 can be set at the corresponding position to indicate its invalidity.
After feature point tracking, the video is split into K overlapping time windows (as shown in Fig. 2), each lasting W frames, with an overlap of W/2 frames. For video at 30 frames per second, W can be fixed at 40. A feature point that appears in the k-th window for more than 0.5W frames is added to the window's feature point set, and the points in this set participate in the subsequent motion analysis. Feature points with too short a lifetime are not reliable enough and are therefore excluded.
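The windowing and selection rule above can be sketched as follows. This is a minimal illustration with assumed names: `tracks` is an (n_points, n_frames, 2) array using -1 for frames where a point is not tracked, as described in the text.

```python
import numpy as np

def split_windows(num_frames, W=40):
    """Split num_frames frames into overlapping windows of W frames
    whose overlap is W/2 frames (the last window may be shorter)."""
    step = W // 2
    windows, start = [], 0
    while start < num_frames:
        windows.append((start, min(start + W, num_frames)))
        start += step
    return windows

def select_points(tracks, window, min_frames=None):
    """Keep only the feature points visible for more than 0.5*W frames
    inside the window; shorter-lived points are excluded as unreliable."""
    lo, hi = window
    if min_frames is None:
        min_frames = (hi - lo) / 2.0
    # count, per point, the frames where it is tracked (x coordinate != -1)
    visible = (tracks[:, lo:hi, 0] != -1).sum(axis=1)
    return np.nonzero(visible > min_frames)[0]
```

For a 30 fps video with W = 40, a 120-frame clip yields windows (0, 40), (20, 60), (40, 80), and so on, each overlapping its neighbor by 20 frames.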
As shown in Fig. 2 after time window cutting, needing the characteristic point in each time window is classified, obtaining
Characteristic point class.Classified based on motion feature for by characteristic point, existing method can be adopted, here is omitted.
S2, regarding each feature point class within a time window as a graph node, adding edges between graph nodes in adjacent time windows that share no fewer than a first number of common feature points to construct a weighted directed graph, and finding the globally optimal path with the minimum sum of edge weights in the graph model using a dynamic programming algorithm, wherein the edge weights of the weighted directed graph depend on the rank of the motion trajectory matrix and the number of common feature points;
For the k-th window, local motion analysis yields a number of feature point classes. In a dynamic video shot by a moving camera, the rank of the trajectory matrix of the background points is lower than that of the other, foreground feature points, for two reasons: (1) background motion can be approximated as a homography, which is simpler than typical foreground object motion; (2) background motion is caused only by camera motion, whereas foreground motion contains both camera motion and object motion. The method analyzes the complexity of the local motion by computing the rank of the motion trajectory matrix of each graph node. To identify the background part, as shown in Fig. 2, after the feature point classes are obtained, the method constructs a weighted directed graph over the whole video. Each feature point class in each window is defined as a node, each node representing one feature point class. If the number of common feature points in two classes of adjacent windows is not less than the first value, a directed edge is added between the two corresponding nodes. The weight of the edge pointing from graph node i of the k-th time window to graph node j of the (k+1)-th time window is computed from a constant α, typically taking the value 0.5 or 1, the number of common feature points in graph nodes i and j, and the rank of the motion trajectory matrix formed by those common feature points. It should be noted that the motion trajectory matrix Γ stacks the coordinates x_tm of each common feature point t on each frame m (taking the value -1 when the t-th feature point does not appear on the m-th frame), where p is the total number of frames of the video.
The rank of the motion trajectory matrix Γ can be computed by performing a singular value decomposition (SVD) of Γ and counting the nonzero elements on the diagonal of the resulting diagonal matrix.
The larger the edge weight, the more complex the underlying motion. The weight formula contains an exponential term in the number of common feature points, which means that, for equal matrix rank, the more common feature points two nodes share, the smaller the weight.
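The text above gives the ingredients of the weight — the rank r of the common-point trajectory matrix, the number N of common feature points, and a constant α (typically 0.5 or 1) entering through an exponential term — without reproducing the formula itself. A minimal numpy sketch, under the assumption that the weight takes the form w = r · α^N (the patent's exact formula may differ), could be:

```python
import numpy as np

def trajectory_matrix(tracks, frames):
    """Stack the x and y coordinates of the common feature points over the
    given frames into a 2n x p trajectory matrix; coordinates are -1 on
    frames where a point is not visible, as in the text."""
    xs = tracks[:, frames, 0]
    ys = tracks[:, frames, 1]
    return np.vstack([xs, ys])

def matrix_rank_svd(gamma, tol=1e-6):
    """Rank via SVD: count the non-negligible singular values, i.e. the
    nonzero diagonal of the middle factor of the decomposition."""
    s = np.linalg.svd(gamma, compute_uv=False)
    return int((s > tol * s.max()).sum())

def edge_weight(gamma, n_common, alpha=0.5):
    """Hypothetical edge weight: grows with the trajectory-matrix rank and
    decays exponentially with the number of common feature points."""
    r = matrix_rank_svd(gamma)
    return r * alpha ** n_common
```

Consistent with the stated intuition, trajectories of points undergoing a pure translation form a rank-2 matrix, whereas independently moving foreground points raise the rank, and hence the weight.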
After the directed graph is built, the goal is to find a continuous optimal path according to the background criterion. To find the path with the minimum sum of edge weights, the method uses a dynamic programming algorithm, enumerating all the different combinations from start to end to find the optimal path (as shown by the middle path in Fig. 2); all nodes on this path are identified as background. The feature point sequence contained in each node serves as the background point sequence of that time window. In this way the background points of all time windows in the whole long video are found.
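The search described above amounts to a shortest-path computation over a layered directed graph, one layer per time window, which dynamic programming solves in one left-to-right sweep. A sketch with illustrative names: `layers[k]` lists the node ids of window k, and `weight(k, i, j)` stands in for the edge weights of the previous step, returning None where no edge exists (too few common feature points).

```python
def min_weight_path(layers, weight):
    """Return (cost, path) of the minimum-weight path that crosses every
    layer of the graph, using dynamic programming with back-pointers."""
    K = len(layers)
    # cost[i]: minimal cost of a path ending at node i of the current layer
    cost = {i: 0.0 for i in layers[0]}
    back = [{} for _ in range(K)]
    for k in range(K - 1):
        nxt = {}
        for j in layers[k + 1]:
            best = None
            for i in layers[k]:
                if i not in cost:
                    continue          # node i is unreachable
                w = weight(k, i, j)
                if w is None:
                    continue          # no edge between these classes
                c = cost[i] + w
                if best is None or c < best:
                    best = c
                    back[k + 1][j] = i
            if best is not None:
                nxt[j] = best
        cost = nxt
    # pick the cheapest end node and walk the back-pointers to the start
    end = min(cost, key=cost.get)
    path = [end]
    for k in range(K - 1, 0, -1):
        path.append(back[k][path[-1]])
    return cost[end], path[::-1]
```

Every node on the returned path is labeled background, and the feature points it contains become the background point sequence of its window.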
S3, reclassifying the non-background points according to color and spatial features, and adding the points whose motion is consistent with the existing background points to the background point set.
After the above optimization, one feature point class in each time window is labeled as background. However, although the background feature points obey the same homography, they may exhibit different motions and thus be split into different classes. In this case only part of the background feature points are correctly labeled. A background label refinement method is described below to solve this problem.
In each time window, the feature points labeled as background are first excluded, and the remaining feature points are then reclassified according to color and spatial information. The color feature uses the average Luv color value around a feature point. The spatial feature is obtained by averaging the normalized spatial positions of the feature point on all frames from its initial frame to its end frame. The feature points are classified by mean shift clustering. Note that this reclassification differs from the earlier classification, which used only motion information.
Then the method checks whether the motion of each feature point class is consistent with the background feature points. A homography is estimated from the background feature point set, and the mean mapping error within the window is computed from this homography. The mean error of each feature point class is then computed; if it is less than the mean mapping error, the feature points of that class are added to the background feature point set. This process is repeated until no feature point class can be added to the set.
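The refinement loop above can be sketched as follows: a homography is fitted to the background correspondences, its mean mapping error serves as the threshold, and any remaining class whose mean error falls below the threshold is absorbed into the background set, repeating until nothing changes. All names are illustrative, and the plain least-squares direct linear transform stands in for whatever estimator the actual implementation uses:

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares homography mapping src -> dst (both (n, 2), n >= 4)
    via the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)          # null-space vector = flattened H
    return H / H[2, 2]

def mapping_error(H, src, dst):
    """Mean Euclidean error of H applied to src, against dst."""
    pts = np.column_stack([src, np.ones(len(src))]) @ H.T
    proj = pts[:, :2] / pts[:, 2:3]
    return float(np.linalg.norm(proj - dst, axis=1).mean())

def grow_background(bg_src, bg_dst, classes):
    """Repeatedly absorb any remaining class whose mean error under the
    background homography is below the background's own mapping error."""
    bg_src, bg_dst = list(bg_src), list(bg_dst)
    remaining = list(classes)         # each class: (src_pts, dst_pts)
    changed = True
    while changed and remaining:
        changed = False
        H = fit_homography(np.asarray(bg_src), np.asarray(bg_dst))
        thr = mapping_error(H, np.asarray(bg_src), np.asarray(bg_dst))
        still = []
        for c in remaining:
            if mapping_error(H, c[0], c[1]) < thr:
                bg_src.extend(c[0])   # class motion matches the background
                bg_dst.extend(c[1])
                changed = True
            else:
                still.append(c)
        remaining = still
    return np.asarray(bg_src), np.asarray(bg_dst)
```

Re-fitting inside the loop matters: each absorbed class enlarges the background set, so both the homography and the threshold are updated before the next pass.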
The background feature point identification method in dynamic video editing provided by the present invention builds on a well-known result in computer vision, namely that feature points with different motion states can be regarded as lying in different linear subspaces, and is broadly divided into two stages, local motion analysis and global optimization, followed by a final refinement of the background labels. In accordance with the foregoing assumptions, and starting from the feature point sequences obtained by video feature point extraction and tracking, the present invention completes the identification of the background feature points in three phases: in the local motion analysis phase, the long dynamic video is split into multiple overlapping time windows, local motion analysis is performed in each window, and the feature points in each local window are classified; in the global optimization phase, each feature point subclass in each window is regarded as a graph node, and a spatio-temporal graph optimization yields the background feature point sequence running through the whole video, the sequence with the lowest motion complexity among the contained feature point classes; in the background label refinement phase, the remaining feature points are reclassified and points whose motion is consistent with the existing background points are added to the background point set, solving the problem that background feature points with different motions may be split into different classes so that only part of them are correctly labeled.
Referring to Fig. 3, this embodiment discloses a background feature point identification device for dynamic video editing, comprising:
a classification unit 1, configured to divide the video into a plurality of overlapping time windows and classify the feature points within each time window based on motion features;
a search unit 2, configured to regard each feature point class within a time window as a graph node, add edges between graph nodes in adjacent time windows that share no fewer than a first number of common feature points to construct a weighted directed graph, and find the globally optimal path with the minimum sum of edge weights in the graph model using a dynamic programming algorithm, wherein the edge weights of the weighted directed graph depend on the rank of the motion trajectory matrix and the number of common feature points.
In a specific application, the weight of the edge pointing from graph node i of the k-th time window to graph node j of the (k+1)-th time window is computed from a constant α, the number of common feature points in graph nodes i and j, and the rank of the motion trajectory matrix formed by the common feature points in graph nodes i and j.
The device further comprises an adding unit 3, configured to reclassify the non-background points according to color and spatial features and add the points whose motion is consistent with the existing background points to the background point set.
In this embodiment, the adding unit 3 may specifically be configured to:
for each time window, exclude the feature points of the time window that lie on the globally optimal path, and reclassify the remaining feature points by mean shift clustering according to color features and spatial features, wherein the color feature uses the average Luv color value of the neighborhood around a feature point, and the spatial feature is obtained by averaging the normalized spatial positions of the feature point on all frames from its initial frame to its end frame in this time window;
for each time window, estimate a homography from the excluded feature point set of the time window, compute the mean mapping error of the time window from the homography, then compute the mean error of each remaining feature point class of the time window, and if the mean error is determined to be less than the mean mapping error, add the feature points of that class to the excluded feature point set.
The background feature point identification device in dynamic video editing provided by the present invention builds on a well-known result in computer vision, namely that feature points with different motion states can be regarded as lying in different linear subspaces, and is broadly divided into two stages, local motion analysis and global optimization, followed by a final refinement of the background labels. In accordance with the foregoing assumptions, and starting from the feature point sequences obtained by video feature point extraction and tracking, the present invention completes the identification of the background feature points in three phases: in the local motion analysis phase, the long dynamic video is split into multiple overlapping time windows, local motion analysis is performed in each window, and the feature points in each local window are classified; in the global optimization phase, each feature point subclass in each window is regarded as a graph node, and a spatio-temporal graph optimization yields the background feature point sequence running through the whole video, the sequence with the lowest motion complexity among the contained feature point classes; in the background label refinement phase, the remaining feature points are reclassified and points whose motion is consistent with the existing background points are added to the background point set, solving the problem that background feature points with different motions may be split into different classes so that only part of them are correctly labeled.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
It should be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device including that element. Orientation or positional terms such as "upper" and "lower" are based on the orientations or positional relationships shown in the drawings; they are used only to facilitate and simplify the description of the present invention, and do not indicate or imply that the device or element referred to must have a specific orientation or be configured and operated in a specific orientation, and therefore should not be construed as limiting the present invention. Unless otherwise clearly specified and limited, the terms "install", "link" and "connect" should be interpreted broadly; for example, a connection may be fixed, detachable or integral; it may be mechanical or electrical; it may be direct, or indirect through an intermediate medium, or internal between two elements. Those of ordinary skill in the art can understand the specific meanings of the above terms in the present invention as the case may be.
The specification of the present invention sets forth numerous specific details. It should be understood, however, that the embodiments of the present invention can be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description. Similarly, it should be understood that, in the foregoing description of exemplary embodiments of the present invention, the features of the present invention are sometimes grouped together in a single embodiment, figure or description thereof, in order to simplify the disclosure and help understand one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Therefore, the claims following the specific embodiments are hereby expressly incorporated into the specific embodiments, with each claim standing on its own as a separate embodiment of the present invention. It should be noted that the embodiments of the present application and the features in the embodiments can be combined with each other where there is no conflict. The present invention is not limited to any single aspect, nor to any single embodiment, nor to any combination and/or permutation of these aspects and/or embodiments. Moreover, each aspect and/or embodiment of the present invention can be used alone or in combination with one or more other aspects and/or embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements for some or all of the technical features therein; and these modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention, all of which should be covered by the claims and the specification of the present invention.
Claims (10)
1. A background feature point identification method in dynamic video editing, characterized by comprising:
S1, dividing a video into a plurality of overlapping time windows, and classifying the feature points within each time window based on motion features;
S2, treating each feature point class within a time window as a graph node, adding edges between graph nodes of adjacent time windows that share no fewer than a first value of common feature points so as to build a weighted directed graph, and finding, by a dynamic programming algorithm, the globally optimal path in the graph model with the minimum sum of edge weights, wherein the weight of an edge of the weighted directed graph depends on the rank of a motion trajectory matrix and on the number of common feature points;
S3, reclassifying the non-background points according to color and spatial features, and adding the points whose motion is consistent with that of the existing background points to a background point set.
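The path search of step S2 can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the patented implementation: each time window contributes one layer of nodes (its feature point classes), edges only join nodes of adjacent windows, and dynamic programming keeps, for each node, the cheapest path reaching it. The function name and the example graph in the usage are invented for illustration.

```python
# Illustrative sketch of step S2's dynamic-programming path search
# (an assumption-laden reconstruction, not the patent's code).

def min_weight_path(layers, edges):
    """layers: one list of node ids per time window, in temporal order.
    edges: dict mapping (u, v) -> weight, u and v in adjacent layers.
    Returns (total_weight, path) of the minimum edge-weight-sum path."""
    # tables[k][v] = (cost of cheapest path ending at v, predecessor of v)
    tables = [{v: (0.0, None) for v in layers[0]}]
    for layer in layers[1:]:
        prev, cur = tables[-1], {}
        for v in layer:
            cand = [(prev[u][0] + edges[(u, v)], u)
                    for u in prev if (u, v) in edges]
            if cand:                      # v is reachable from the last layer
                cur[v] = min(cand)
        tables.append(cur)
    if not tables[-1]:
        return float("inf"), []
    # cheapest endpoint in the final layer, then walk predecessors back
    end = min(tables[-1], key=lambda v: tables[-1][v][0])
    cost, path, node = tables[-1][end][0], [], end
    for table in reversed(tables):
        path.append(node)
        node = table[node][1]
    return cost, path[::-1]
```

Because every edge joins adjacent layers only, one forward sweep plus a predecessor walk suffices; no general shortest-path machinery is needed.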
2. The method according to claim 1, characterized in that, before the classifying of the feature points within each time window based on motion features, the method further comprises:
extracting the feature points in the video using the KLT method;
for each time window, adding to the feature point set of the time window the feature points whose duration of appearance within the time window exceeds 0.5W frames, wherein each time window lasts W frames and adjacent time windows overlap by W/2 frames.
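Claim 2's windowing rule can be sketched as below; the function names, the representation of a track as a (first_frame, last_frame) span, and the assumption that every track is contiguous are illustrative choices, not part of the patent.

```python
# Illustrative sketch of claim 2's overlapping-window rule (names and the
# contiguous-track assumption are ours, not the patent's).

def window_starts(num_frames, W):
    """Start frames of W-frame windows overlapping by W/2 (stride W // 2)."""
    return list(range(0, max(num_frames - W + 1, 1), W // 2))

def points_in_window(tracks, start, W):
    """tracks: point_id -> (first_frame, last_frame) of a contiguous track.
    Keep a point only if it appears for more than 0.5*W frames of the
    window [start, start + W)."""
    end = start + W
    kept = []
    for pid, (first, last) in tracks.items():
        visible = min(last + 1, end) - max(first, start)
        if visible > 0.5 * W:
            kept.append(pid)
    return kept
```

The strict inequality mirrors the claim's "exceeds 0.5W frames": a point present for exactly half the window is not added.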
3. The method according to claim 2, characterized in that the first value is 10.
4. The method according to claim 2, characterized in that the weight of the edge pointing from graph node i of the k-th time window to graph node j of the (k+1)-th time window is computed by a formula whose terms are a constant α, the number of common feature points in graph node i and graph node j, and the rank of the motion trajectory matrix formed by the common feature points in graph node i and graph node j.
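The formula referenced by claim 4 appears as an image in the original publication and is not reproduced in this text. As a hedged stand-in only, the sketch below combines the two quantities the claim names, the rank of the common points' stacked trajectory matrix and the common-point count, as alpha * rank / count; that combination is hypothetical, not the patent's formula.

```python
import numpy as np

# Hypothetical stand-in for claim 4's edge weight (the patent's actual
# formula is an image not reproduced here); it only shows how the two
# quantities the claim names could be computed and combined.

def edge_weight(traj_i, traj_j, shared_ids, alpha=1.0):
    """traj_i, traj_j: point_id -> (F, 2) array of (x, y) over F frames,
    for graph nodes i and j of adjacent time windows.
    shared_ids: ids of the feature points common to both nodes."""
    if not shared_ids:
        return float("inf")            # no shared points, no edge
    # one row per shared point: its trajectory through both windows
    rows = [np.concatenate([traj_i[p].ravel(), traj_j[p].ravel()])
            for p in shared_ids]
    rank = np.linalg.matrix_rank(np.stack(rows))
    return alpha * rank / len(shared_ids)   # hypothetical combination
```

Under this stand-in, low-rank joint trajectories with many shared points yield cheap edges, matching the intuition that rigid background tracks span a low-dimensional subspace; the exact dependence in the patent may differ.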
5. The method according to claim 1, characterized in that the reclassifying of the non-background points according to color and spatial features comprises:
for each time window, excluding the feature points of the time window that are present in the globally optimal path, and reclassifying the remaining feature points according to color features and spatial features using a manifold mean shift clustering method, wherein the color feature of a feature point is the average Luv color value of its surrounding neighborhood, and the spatial feature is obtained by averaging the feature point's normalized spatial position over all frames of the time window, from the start frame to the end frame.
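The per-point features that claim 5 clusters on can be sketched as below; the manifold mean shift clustering itself is not reproduced, and the function name, input layout, and 5-D concatenation are illustrative assumptions.

```python
# Illustrative sketch of the claim 5 feature vector (layout is assumed):
# mean neighborhood Luv color plus the frame-normalized mean position.

def point_feature(luv_patches, positions, frame_size):
    """luv_patches: per-frame (L, u, v) means of the point's neighborhood.
    positions: per-frame (x, y) pixel positions of the point.
    frame_size: (width, height) used to normalize positions into [0, 1]."""
    n = len(positions)
    w, h = frame_size
    mean_luv = tuple(sum(p[i] for p in luv_patches) / n for i in range(3))
    mean_xy = (sum(x for x, _ in positions) / (n * w),
               sum(y for _, y in positions) / (n * h))
    return mean_luv + mean_xy   # 5-D vector fed to the clustering step
```

Normalizing positions by the frame size keeps the spatial terms on a scale comparable to typical Luv values regardless of video resolution.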
6. The method according to claim 1, characterized in that the adding of the points whose motion is consistent with that of the existing background points to the background point set comprises:
for each time window, estimating a homography matrix from the excluded feature point set of the time window, computing from the homography matrix a mean mapping error within the time window, and then, for each remaining feature point class of the time window, computing the mean error of the feature point class; if the mean error is determined to be smaller than the mean mapping error, adding the feature points of the class to the excluded feature point set.
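Claim 6's consistency test can be sketched as below. Homography estimation (e.g. via RANSAC over the background correspondences) is not reproduced; the sketch assumes a 3x3 homography is already given, and the function names and data layout are our own.

```python
# Illustrative sketch of claim 6's consistency test (names and layout are
# assumptions; homography estimation, e.g. RANSAC, is not reproduced).

def map_point(H, x, y):
    """Apply the 3x3 homography H (nested lists) to pixel (x, y)."""
    d = H[2][0] * x + H[2][1] * y + H[2][2]          # projective scale
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / d,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / d)

def mean_mapping_error(H, pairs):
    """pairs: ((x, y), (x2, y2)) correspondences between two frames."""
    total = 0.0
    for (x, y), (x2, y2) in pairs:
        px, py = map_point(H, x, y)
        total += ((px - x2) ** 2 + (py - y2) ** 2) ** 0.5
    return total / len(pairs)

def merge_into_background(class_pairs, H, background_error):
    """Merge a remaining point class into the background set when its mean
    error under the background homography beats the window's mean error."""
    return mean_mapping_error(H, class_pairs) < background_error
```

A class whose points reproject under the background homography at least as accurately as the background points themselves is, by this test, moving with the background.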
7. A background feature point identification device in dynamic video editing, characterized by comprising:
a classification unit, configured to divide a video into a plurality of overlapping time windows and classify the feature points within each time window based on motion features;
a search unit, configured to treat each feature point class within a time window as a graph node, add edges between graph nodes of adjacent time windows that share no fewer than a first value of common feature points so as to build a weighted directed graph, and find, by a dynamic programming algorithm, the globally optimal path in the graph model with the minimum sum of edge weights, wherein the weight of an edge of the weighted directed graph depends on the rank of a motion trajectory matrix and on the number of common feature points;
an adding unit, configured to reclassify the non-background points according to color and spatial features and add the points whose motion is consistent with that of the existing background points to a background point set.
8. The device according to claim 7, characterized in that the weight of the edge pointing from graph node i of the k-th time window to graph node j of the (k+1)-th time window is computed by a formula whose terms are a constant α, the number of common feature points in graph node i and graph node j, and the rank of the motion trajectory matrix formed by the common feature points in graph node i and graph node j.
9. The device according to claim 7, characterized in that the adding unit is configured to:
for each time window, exclude the feature points of the time window that are present in the globally optimal path, and reclassify the remaining feature points according to color features and spatial features using a manifold mean shift clustering method, wherein the color feature of a feature point is the average Luv color value of its surrounding neighborhood, and the spatial feature is obtained by averaging the feature point's normalized spatial position over all frames of the time window, from the start frame to the end frame.
10. The device according to claim 7, characterized in that the adding unit is configured to:
for each time window, estimate a homography matrix from the excluded feature point set of the time window, compute from the homography matrix a mean mapping error within the time window, and then, for each remaining feature point class of the time window, compute the mean error of the feature point class; if the mean error is determined to be smaller than the mean mapping error, add the feature points of the class to the excluded feature point set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610833676.8A CN106446820B (en) | 2016-09-19 | 2016-09-19 | Background feature point identification method and device in dynamic video editing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106446820A true CN106446820A (en) | 2017-02-22 |
CN106446820B CN106446820B (en) | 2019-05-14 |
Family
ID=58166007
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610833676.8A Active CN106446820B (en) | 2016-09-19 | 2016-09-19 | Background feature point identification method and device in dynamic video editing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106446820B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102156995A (en) * | 2011-04-21 | 2011-08-17 | 北京理工大学 | Video movement foreground dividing method in moving camera |
CN102256065A (en) * | 2011-07-25 | 2011-11-23 | 中国科学院自动化研究所 | Automatic video condensing method based on video monitoring network |
CN103390278A (en) * | 2013-07-23 | 2013-11-13 | 中国科学技术大学 | Detecting system for video aberrant behavior |
CN105208407A (en) * | 2014-06-23 | 2015-12-30 | 哈曼贝克自动系统股份有限公司 | Device and method for processing a stream of video data |
CN105574848A (en) * | 2014-11-04 | 2016-05-11 | 诺基亚技术有限公司 | A method and an apparatus for automatic segmentation of an object |
CN105719327A (en) * | 2016-02-29 | 2016-06-29 | 北京中邮云天科技有限公司 | Art stylization image processing method |
Non-Patent Citations (3)
Title |
---|
DIJUN LUO 等: "Video Motion Segmentation Using New Adaptive Manifold Denoising Model", 《2014 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 * |
RAGHAV SUBBARAO 等: "Nonlinear Mean Shift for Clustering over Analytic Manifolds", 《PROCEEDINGS OF THE 2006 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 * |
ZHU ZHENGYU: "Saliency Detection Based on Manifold Ranking Combined with Foreground and Background Features", JOURNAL OF COMPUTER APPLICATIONS (《计算机应用》) * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109102459A (en) * | 2018-08-03 | 2018-12-28 | Tsinghua University | Method and device for extending the background frames in a video |
CN112637520A (en) * | 2020-12-23 | 2021-04-09 | 新华智云科技有限公司 | Dynamic video editing method and system |
CN112637520B (en) * | 2020-12-23 | 2022-06-21 | 新华智云科技有限公司 | Dynamic video editing method and system |
Also Published As
Publication number | Publication date |
---|---|
CN106446820B (en) | 2019-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhao et al. | Crossing-line crowd counting with two-phase deep neural networks | |
Breitenstein et al. | Online multiperson tracking-by-detection from a single, uncalibrated camera | |
Qiao et al. | Lgpma: Complicated table structure recognition with local and global pyramid mask alignment | |
CN101639354B (en) | Method and apparatus for object tracking | |
Liu et al. | Tracking sports players with context-conditioned motion models | |
Ma et al. | Action recognition and localization by hierarchical space-time segments | |
US11288820B2 (en) | System and method for transforming video data into directional object count | |
CN108053427A (en) | A kind of modified multi-object tracking method, system and device based on KCF and Kalman | |
Idrees et al. | Tracking in dense crowds using prominence and neighborhood motion concurrence | |
CN108009473A (en) | Based on goal behavior attribute video structural processing method, system and storage device | |
Petersen et al. | Real-time modeling and tracking manual workflows from first-person vision | |
CN108052859A (en) | A kind of anomaly detection method, system and device based on cluster Optical-flow Feature | |
US20090060352A1 (en) | Method and system for the detection and the classification of events during motion actions | |
EP2930690B1 (en) | Apparatus and method for analyzing a trajectory | |
Morimitsu et al. | Exploring structure for long-term tracking of multiple objects in sports videos | |
US11257224B2 (en) | Object tracker, object tracking method, and computer program | |
KR20170097265A (en) | System for tracking of moving multi target and method for tracking of moving multi target using same | |
CN112784724A (en) | Vehicle lane change detection method, device, equipment and storage medium | |
CN107948586A (en) | Trans-regional moving target detecting method and device based on video-splicing | |
CN106446820A (en) | Background feature point identification method and device in dynamic video editing | |
Liu et al. | Detecting and tracking sports players with random forests and context-conditioned motion models | |
Li et al. | Cross-modal object tracking: Modality-aware representations and a unified benchmark | |
US10438066B2 (en) | Evaluation of models generated from objects in video | |
Mahasseni et al. | Detecting the moment of snap in real-world football videos | |
Rimboux et al. | Smart IoT cameras for crowd analysis based on augmentation for automatic pedestrian detection, simulation and annotation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |