CN107315994A - Clustering algorithm based on Spectral Clustering space trackings - Google Patents
- Publication number: CN107315994A (application CN201710334850.9A)
- Authority: CN (China)
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Abstract
The invention discloses a clustering algorithm based on spectral clustering of spatial trajectories. A camera captures video images of a road; for every moving target in each frame, feature points are extracted with the ORB algorithm and then tracked with a KLT tracking algorithm constrained by bidirectional weighted invertibility, yielding a plurality of motion trajectories for all moving targets and the coordinate value, in the image coordinate system, of each track point on every trajectory. A rigid-motion constraint is used to build a similarity matrix over the n motion trajectories of all moving targets and spectral clustering is performed, producing motion trajectories of different classes; the classes are then merged, giving the motion trajectories after inter-class merging. In engineering applications, the proposed method is neither affected nor limited by various environments, is easy to implement, and can effectively detect vehicles accurately and in real time, and therefore has very broad application prospects.
Description
Technical field
The invention belongs to the field of video detection technology, and in particular relates to a clustering algorithm based on spectral clustering of spatial trajectories.
Background art
With the rapid development of the economy, the progress of society and the improvement of people's living standards, the number of motor vehicles has increased significantly. On the other side, the capacity of road traffic has decreased markedly, and a series of problems such as traffic congestion and blocked passages have appeared. Detecting and counting the traffic flow on a road and sending this information to the supervision department allows effective measures to be formulated to relieve traffic and achieve the goal of traffic management. At the same time, long-term traffic-flow data provides an important basis for the design and maintenance of future urban roads.
Vehicle detection and traffic-flow measurement based on traffic scenes have attracted increasing attention owing to advantages such as real-time detection performance, low cost and easy installation. However, the vehicle detection and traffic-flow software currently in common use is limited by traffic volume, scene complexity and other factors, and often cannot reach high accuracy; in real scenes it frequently fails to achieve the expected effect.
Summary of the invention
In view of the above problems and defects of the prior art, it is an object of the present invention to provide a clustering algorithm based on spectral clustering of spatial trajectories.
To achieve these goals, the present invention adopts the following technical scheme:
The clustering algorithm based on spectral clustering of spatial trajectories comprises the following steps:

Step 1, capture video images of a road with a camera, and obtain, for all moving targets in each frame of the video, a plurality of motion trajectories and the coordinate value, in the image coordinate system, of each track point on every trajectory.

Let the number of motion trajectories of all moving targets be n, with r consecutive track points on every trajectory; n and r are natural numbers greater than or equal to 1.

The origin of the image coordinate system is at any corner of each frame of the video; the horizontal direction of each frame is the u axis and the vertical direction is the v axis.
Step 2, use the rigid-motion constraint to build a similarity matrix over the n motion trajectories of all moving targets and perform spectral clustering, obtaining motion trajectories of different classes.

This includes:
Step 21, establish a world coordinate system: the direction parallel to the road lane markings is the Y axis, the direction perpendicular to the lane markings is the X axis, and both the X and Y axes are parallel to the road; the intersection of the X and Y axes is the origin O, and the direction of the shortest line between the camera and the road is the Z axis.
Step 22, choose any two motion trajectories from the motion trajectories of all moving targets and denote them M and N. The height h_M of track M is measured along the vertical direction of the horizontal plane, with h_M ranging over 1-3 m in steps of 0.1 m; the height h_N of track N is measured along the horizontal direction of the horizontal plane, with h_N ranging over 0-4 m in steps of 0.01 m.
The ΔD₁ surface is obtained by formula (1):

$$\Delta D_1=\sum_{i=1}^{r-1}\left|M(P_{i+1})N(P_{i+1})-M(P_i)N(P_i)\right| \qquad (1)$$

In formula (1), N(P_i) denotes the i-th track point on track N, M(P_i) the i-th track point on track M, M(P_{i+1}) the (i+1)-th track point on track M and N(P_{i+1}) the (i+1)-th track point on track N, i = 1, 2, …, r-1; M(P_i)N(P_i) denotes the distance, in the world coordinate system, between track point P_i on track M and track point P_i on track N at the same instant.
The ΔD₂ surface is obtained by formula (2):

$$\Delta D_2=\sum_{i=1}^{r-1}\left|M(P_{i+1})M(P_i)-N(P_{i+1})N(P_i)\right| \qquad (2)$$

In formula (2), the track points are denoted as in formula (1), i = 1, 2, …, r-1; M(P_{i+1})M(P_i) denotes the distance, in the world coordinate system, between track points P_i and P_{i+1} on track M.
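Once the track points of a trajectory pair are expressed in world coordinates, the two residuals of formulas (1) and (2) reduce to a few vectorised operations. A minimal NumPy sketch (array layout and function name are illustrative, not from the patent):

```python
import numpy as np

def rigid_pair_residuals(M, N):
    """Rigid-motion residuals for trajectories M, N given as (r, 3)
    arrays of synchronized world coordinates. dD1 sums the changes in
    the cross-track distance |M(P_i)N(P_i)| (formula (1)); dD2 sums the
    differences between the per-step displacements of the two tracks
    (formula (2)). Both stay near zero when the two tracks ride on the
    same rigid body in translation."""
    M, N = np.asarray(M, float), np.asarray(N, float)
    dMN = np.linalg.norm(M - N, axis=1)                  # |M(P_i)N(P_i)|
    stepM = np.linalg.norm(np.diff(M, axis=0), axis=1)   # |M(P_{i+1})M(P_i)|
    stepN = np.linalg.norm(np.diff(N, axis=0), axis=1)   # |N(P_{i+1})N(P_i)|
    dD1 = np.abs(np.diff(dMN)).sum()
    dD2 = np.abs(stepM - stepN).sum()
    return dD1, dD2
```

Two parallel, identically translated tracks give dD1 = dD2 = 0, matching the rigid-body intuition; tracks from independently moving vehicles give nonzero residuals.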
The coordinate value (u_i, v_i) of track point P_i in the image coordinate system is converted to the coordinate value (x_i, y_i, z_i) in the world coordinate system by formula (3):

P_i = C_{3×4}^{-1} λ_i p_i  (3)

In formula (3), p_i = [u_i, v_i, 1]^T, P_i = [x_i, y_i, z_i, 1]^T, λ_i is a scale factor with 0 ≤ λ_i ≤ 1; C_{3×4}^{-1} denotes the inverse matrix of the perspective projection matrix of the camera.
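Formula (3) inverts the 3 × 4 perspective projection. A non-square matrix has no ordinary inverse, so the sketch below assumes C_{3×4}^{-1} means the Moore-Penrose pseudo-inverse — one common reading, stated here as an assumption since the patent does not elaborate:

```python
import numpy as np

def back_project(C, uv, lam=1.0):
    """Apply formula (3): P = pinv(C) @ (lam * [u, v, 1]^T), with C the
    3x4 perspective projection matrix and lam the scale factor."""
    p = float(lam) * np.array([uv[0], uv[1], 1.0])
    return np.linalg.pinv(np.asarray(C, float)) @ p
```

For a full-row-rank C, re-projecting the recovered point reproduces the scaled image point exactly, which is a quick sanity check on the pseudo-inverse reading.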
Step 23, taking h_M and h_N as two axes, construct a plane as the zero plane; the ΔD₁ surface and the ΔD₂ surface are then two surfaces perpendicular to the zero plane.

The ΔD₁ surface and the ΔD₂ surface each form an intersection line with the zero plane, the ΔD₁ line and the ΔD₂ line. Let the distance between the ΔD₁ line and the ΔD₂ line be ΔDiff₁₂, the angle between them be θ, the slope of the ΔD₁ line be k₁ and the slope of the ΔD₂ line be k₂.
Step 24, obtain an element w_oq of the similarity matrix A by formula (4):

In formula (4), q = 1, 2, …, n; o = 1, 2, …, n; ΔD_{1IJ} denotes the value of ΔD₁ when the height of track M is h_{MI} and the height of track N is h_{NJ}; ΔD_{2IJ} denotes the value of ΔD₂ when the height of track M is h_{MI} and the height of track N is h_{NJ}; h_{MI} = 1, 1.1, 1.2, …, 2.9, 3, in metres; h_{NJ} = 0, 0.01, 0.02, …, 3.99, 4, in metres.
Step 25, repeat steps 22 to 24 until every pair of the n tracks has served as track M and track N, obtaining the n × n similarity matrix A; then perform step 26.

Step 26, arrange the eigenvalues of the similarity matrix A in descending order, and let the sum of all eigenvalues be S_n. Choose the smallest k such that S_k / S_n ≥ δ; the number of clusters of the n motion trajectories is then k, and step 27 is performed. Here S_k is the sum of the first k eigenvalues, and k is a natural number greater than or equal to 1.

Step 27, build an n × k eigenvector space from the eigenvectors corresponding to the first k eigenvalues, and cluster this n × k eigenvector space with the K-means algorithm, so that the n motion trajectories are clustered into motion trajectories of k classes.
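Steps 26-27 together amount to: eigendecompose A, pick k by the S_k/S_n ≥ δ rule, and run K-means on the top-k eigenvector embedding. A hedged sketch (plain eigendecomposition of A, as the patent's wording suggests, with a tiny deterministic K-means in place of a library call; many spectral methods would use a normalized Laplacian instead):

```python
import numpy as np

def spectral_cluster(A, delta=0.95, iters=50):
    """Cluster an n x n symmetric similarity matrix A: choose the
    smallest k with S_k / S_n >= delta over the descending eigenvalues,
    embed into the n x k space of the top-k eigenvectors, and run a
    small farthest-point-initialised Lloyd (K-means) loop."""
    lam, U = np.linalg.eigh(np.asarray(A, float))
    lam, U = lam[::-1], U[:, ::-1]                       # descending order
    k = int(np.searchsorted(np.cumsum(lam) / lam.sum(), delta) + 1)
    X = U[:, :k]                                         # n x k embedding
    idx = [0]                                            # deterministic init:
    for _ in range(1, k):                                # greedily add the row
        d = ((X[:, None] - X[idx][None]) ** 2).sum(-1).min(1)
        idx.append(int(np.argmax(d)))                    # farthest from chosen
    C = X[idx].copy()
    for _ in range(iters):                               # Lloyd iterations
        labels = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(0)
    return labels
```

On a block-diagonal similarity matrix the two blocks separate cleanly, mirroring how tracks of the same vehicle end up in one class.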
Further, the algorithm also includes:

Step 3, perform inter-class merging on the motion trajectories of the k classes obtained in step 2, obtaining the motion trajectories after inter-class merging.

This includes:
Step 31, from the motion trajectories of the k classes clustered in step 2, choose any two classes, denoted C_a and C_b. Let v_a be the minimum inverse-projection speed of track p_a in class C_a and v_b the minimum inverse-projection speed of track p_b in class C_b, where v_a ≥ 0 and v_b ≥ 0.

Step 32, if v_a is less than v_b, take C_a as the reference class with track p_a as its feature point, and C_b as the class to be merged with track p_b as its feature point; if v_b is less than v_a, take C_b as the reference class with track p_b as its feature point, and C_a as the class to be merged with track p_a as its feature point.
The height H_p of the feature point of the class to be merged is obtained by formula (5):

In formula (5), v is the speed of the moving target, v = min(v_a, v_b); v_p is the minimum inverse-projection speed of the class to be merged; H_c is the height of the camera in the world coordinate system.
The coordinate value of the feature point of the class to be merged in the world coordinate system is obtained by formula (6):

P′ = C_{3×4}^{-1} λ_i p′  (6)

In formula (6), p′ = [u_i′, v_i′, 1]^T; P′ = [X_i′, Y_i′, Z_i′, 1]^T; u_i′, v_i′ are the coordinate values of the feature point of the class to be merged in the image coordinate system; X_i′, Y_i′, Z_i′ are its coordinate values in the world coordinate system; λ_i is a scale factor with 0 ≤ λ_i ≤ 1; C_{3×4}^{-1} denotes the inverse matrix of the perspective projection matrix of the camera.
The coordinate value of the feature point of the reference class in the world coordinate system is obtained by formula (7):

P″ = C_{3×4}^{-1} λ_i p″  (7)

In formula (7), p″ = [u_i″, v_i″, 1]^T; P″ = [X_i″, Y_i″, 0, 1]^T; u_i″, v_i″ are the coordinate values of the feature point of the reference class in the image coordinate system; X_i″, Y_i″, 0 are its coordinate values in the world coordinate system; λ_i is a scale factor with 0 ≤ λ_i ≤ 1; C_{3×4}^{-1} denotes the inverse matrix of the perspective projection matrix of the camera.
Step 33, obtain by formula (8) the absolute distances ΔX, ΔY, ΔZ in the world coordinate system between the feature point of the class to be merged and the feature point of the reference class.

Step 34, if ΔX = X′, ΔY = Y′ and ΔZ = Z′, merge the class to be merged and the reference class into one class; otherwise, perform step 35.

Step 35, repeat steps 31 to 34 until the motion trajectories of all k classes have served as reference class and class to be merged, obtaining the motion trajectories after inter-class merging.
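The merge test of steps 33-34 can be sketched in a few lines. Note the hedge: step 34 states the comparison literally as equalities with X′, Y′, Z′; this sketch reads them as per-axis bounds (merge when each absolute distance falls within its bound), which seems the practical intent but is an assumption here, as are the function and parameter names:

```python
import numpy as np

def should_merge(P_candidate, P_reference, bounds):
    """Step 33: per-axis absolute distances between the world-coordinate
    feature points of the class to be merged and the reference class;
    step 34 (threshold reading): merge when every distance is within
    its per-axis bound (X', Y', Z')."""
    dX, dY, dZ = np.abs(np.asarray(P_candidate, float)
                        - np.asarray(P_reference, float))
    return bool(dX <= bounds[0] and dY <= bounds[1] and dZ <= bounds[2])
```

Step 35 then just loops this decision over all class pairs, relabelling merged classes as it goes.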
Further, the inverse matrix C_{3×4}^{-1} of the perspective projection matrix of the camera is obtained by formula (9):

C_{3×4}^{-1} = {K[R_{3×3} | t_{3×1}]}^{-1}  (9)

In formula (9), K denotes the intrinsic matrix of the camera, R_{3×3} the rotation matrix between the camera coordinate system and the world coordinate system, and t_{3×1} the translation matrix between the camera coordinate system and the world coordinate system.

The camera coordinate system takes the optical centre O_C of the camera as the coordinate origin; X_C is consistent with the u-axis direction of the image coordinate system, Y_C with the v-axis direction, and the Z_C axis is perpendicular to the plane formed by the image coordinate system; the intersection of the Z_C axis with the image plane is called the principal point of the camera.
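Formula (9) assembles C from intrinsics and extrinsics before inverting it. A small sketch (the K, R, t values in the test are illustrative; as before, the "inverse" of the 3 × 4 matrix is taken as the Moore-Penrose pseudo-inverse, which is an assumption):

```python
import numpy as np

def projection_matrix(K, R, t):
    """Build C = K [R | t]: the 3x3 intrinsic matrix times the 3x4
    extrinsic block formed from the rotation matrix R and the
    translation vector t."""
    Rt = np.hstack([np.asarray(R, float),
                    np.asarray(t, float).reshape(3, 1)])
    return np.asarray(K, float) @ Rt
```

Because C has full row rank for a valid camera, C @ pinv(C) is the 3 × 3 identity, so projecting a back-projected image point returns the original point — a convenient consistency check.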
Further, the origin of the image coordinate system may be at the upper-left corner of each frame of the video, with the horizontal direction of each frame as the u axis and the vertical direction as the v axis.
Compared with the prior art, the present invention has the following technical effect: in engineering applications, the proposed method is neither affected nor limited by various environments, is easy to implement, and can effectively detect vehicles accurately and in real time, and therefore has very broad application prospects.
Brief description of the drawings
Fig. 1 is one frame of the video images in embodiment 1;
Fig. 2 is a schematic diagram of the image coordinate system in embodiment 1;
Fig. 3 shows the extracted feature points of the moving targets in embodiment 1;
Fig. 4 shows the vehicle motion-trajectory tracking result in embodiment 1;
Fig. 5 is a schematic diagram of the world coordinate system in embodiment 1;
Fig. 6 (a) shows the ΔD₁ surface and ΔD₂ surface drawn for tracks No. 2 and No. 3 in embodiment 1; Fig. 6 (b) shows the ΔD₁ line and the ΔD₂ line;
Fig. 7 (a) shows the ΔD₁ surface and ΔD₂ surface drawn for tracks No. 0 and No. 2 in embodiment 1; Fig. 7 (b) shows the ΔD₁ line and the ΔD₂ line;
Fig. 8 shows 4 motion trajectories chosen in embodiment 1;
Fig. 9 shows the clustering result of part of the motion trajectories in embodiment 1;
Fig. 10 shows the relationship between the camera imaging model and the three coordinate systems.
Embodiment
Below, the invention is further described by means of the drawings and embodiments.
Embodiment 1
This embodiment provides the clustering algorithm based on spectral clustering of spatial trajectories, comprising the following steps:

Step 1, capture video images of a road with a camera; for all moving targets in each frame of the video, extract feature points with the ORB algorithm, then track the feature points with the KLT tracking algorithm constrained by bidirectional weighted invertibility, obtaining a plurality of motion trajectories of all moving targets and the coordinate value, in the image coordinate system, of each track point on every trajectory.

The ORB algorithm is from Rublee E., Rabaud V., Konolige K., Bradski G. ORB: an efficient alternative to SIFT or SURF [J]. Proc. of IEEE Conf. on Computer Vision, 2011: 2564-2571.

The KLT tracking algorithm is from Song Lin, Cheng Yongmei, Liu Nan, et al. A multi-constraint KLT visual tracking method for UAV navigation [J]. Infrared and Laser Engineering, 2013, 42(10): 2828-2835.
Let the number of motion trajectories of all moving targets be n, with r consecutive track points on every trajectory; n and r are natural numbers greater than or equal to 1.

The origin of the image coordinate system is at any corner of each frame of the video, with the horizontal direction of each frame as the u axis and the vertical direction as the v axis; in this embodiment the origin is at the lower-left corner of each frame, as shown in Fig. 2.
The traffic video used in this embodiment consists of 720 × 288 greyscale images; one frame is shown in Fig. 1. Fig. 3 shows the feature points extracted with the ORB algorithm from the moving targets in Fig. 1, and Fig. 4 shows the motion trajectories in the image of Fig. 1, each with a corresponding number label, 33 in total: tracks No. 0-1 belong to one car, tracks No. 2-9 to a second car, tracks No. 10-16 and No. 19-32 to a third car, and tracks No. 17-18 to a fourth car.
Step 2, use the rigid-motion constraint to build a similarity matrix over the n motion trajectories of all moving targets and perform spectral clustering, obtaining motion trajectories of different classes.

This includes:

Step 21, as in Fig. 5, establish a world coordinate system: the direction parallel to the road lane markings is the Y axis, the direction perpendicular to the lane markings is the X axis, and both the X and Y axes are parallel to the road; the intersection of the X and Y axes is the origin O, and the direction of the shortest line between the camera and the road is the Z axis.
Step 22, the clustering method of this embodiment relies on the following principle: a rigid body has two characteristics during its motion. First, during translation the line between any two points on the rigid body remains parallel and of equal length. Second, the position vectors of different particles on the rigid body differ by a constant vector, but the displacement, velocity and acceleration of every particle are identical.

Choose any two motion trajectories from the motion trajectories of all moving targets and denote them M and N, each with r consecutive track points. The height h_M of track M is measured along the vertical direction of the horizontal plane, with h_M ranging over 1-3 m in steps of 0.1 m; the height h_N of track N is measured along the horizontal direction of the horizontal plane, with h_N ranging over 0-4 m in steps of 0.01 m.
The ΔD₁ surface is obtained by formula (1):

$$\Delta D_1=\sum_{i=1}^{r-1}\left|M(P_{i+1})N(P_{i+1})-M(P_i)N(P_i)\right| \qquad (1)$$

In formula (1), N(P_i) denotes the i-th track point on track N, M(P_i) the i-th track point on track M, M(P_{i+1}) the (i+1)-th track point on track M and N(P_{i+1}) the (i+1)-th track point on track N, i = 1, 2, …, r-1; M(P_i)N(P_i) denotes the distance, in the world coordinate system, between track point P_i on track M and track point P_i on track N at the same instant.
The ΔD₂ surface is obtained by formula (2):

$$\Delta D_2=\sum_{i=1}^{r-1}\left|M(P_{i+1})M(P_i)-N(P_{i+1})N(P_i)\right| \qquad (2)$$

In formula (2), the track points are denoted as in formula (1), i = 1, 2, …, r-1; M(P_{i+1})M(P_i) denotes the distance, in the world coordinate system, between track points P_i and P_{i+1} on track M.
The coordinate value (u_i, v_i) of track point P_i in the image coordinate system is converted to the coordinate value (x_i, y_i, z_i) in the world coordinate system by formula (3):

P = C_{3×4}^{-1} λ_i p  (3)

In formula (3), p = [u_i, v_i, 1]^T, P = [x_i, y_i, z_i, 1]^T, λ_i is a scale factor with 0 ≤ λ_i ≤ 1; C_{3×4}^{-1} denotes the inverse matrix of the perspective projection matrix of the camera.
Step 23, taking h_M and h_N as two axes, construct a plane as the zero plane; ΔD₁ and ΔD₂ then form two surfaces perpendicular to the zero plane, the ΔD₁ surface and the ΔD₂ surface.

The ΔD₁ surface and the ΔD₂ surface each form an intersection line with the zero plane, the ΔD₁ line and the ΔD₂ line. Let the distance between the ΔD₁ line and the ΔD₂ line be ΔDiff₁₂, the angle between them be θ, the slope of the ΔD₁ line be k₁ and the slope of the ΔD₂ line be k₂.
Step 24, obtain an element w_oq of the similarity matrix A by formula (4):

In formula (4), q = 1, 2, …, n; o = 1, 2, …, n; ΔD_{1IJ} denotes the value of ΔD₁ when the height of track M is h_{MI} and the height of track N is h_{NJ}; ΔD_{2IJ} denotes the value of ΔD₂ when the height of track M is h_{MI} and the height of track N is h_{NJ}; h_{MI} = 1, 1.1, 1.2, …, 2.9, 3, in metres; h_{NJ} = 0, 0.01, 0.02, …, 3.99, 4, in metres.
Step 25, repeat steps 22 to 24 until every pair of the n tracks has served as track M and track N, obtaining the n × n similarity matrix A; then perform step 26.

Step 26, arrange the eigenvalues of the similarity matrix A in descending order, and let the sum of all eigenvalues be S_n. Choose the smallest k such that S_k / S_n ≥ δ; the number of clusters of the n motion trajectories is then k, and step 27 is performed. Here S_k is the sum of the first k eigenvalues, and k is a natural number greater than or equal to 1. In this embodiment, δ = 95%.

Step 27, build an n × k eigenvector space from the eigenvectors corresponding to the first k eigenvalues, and cluster this n × k eigenvector space with the K-means algorithm, so that the n motion trajectories are clustered into motion trajectories of k classes.
In this embodiment, Fig. 6 (a) shows the ΔD₁ surface and ΔD₂ surface drawn for tracks No. 2 and No. 3, and Fig. 6 (b) the ΔD₁ line and the ΔD₂ line. As can be seen from Fig. 6 (a) and Fig. 6 (b), the height difference ΔDiff₁₂ between the ΔD₁ line and the ΔD₂ line of tracks No. 2 and No. 3 is very small, and the angle θ between the two lines is also very small, both within the threshold range; the two tracks belong to the motion trajectories of the same car and therefore form one class.

Fig. 7 (a) shows the ΔD₁ surface and ΔD₂ surface drawn for tracks No. 0 and No. 2, and Fig. 7 (b) the ΔD₁ line and the ΔD₂ line. As can be seen from Fig. 7 (a) and Fig. 7 (b), the height difference ΔDiff₁₂ between the ΔD₁ line and the ΔD₂ line of tracks No. 0 and No. 2 and the angle θ between the two lines both exceed the threshold range; the tracks do not belong to the motion trajectories of the same car and therefore form two classes.
Fig. 8 shows 4 motion trajectories chosen from Fig. 4, denoted trajectories No. 0, No. 1, No. 2 and No. 3.

Table 1 gives the pairwise ΔDiff₁₂ and θ results computed for the 4 tracks in Fig. 8.

Table 1

The similarity matrix A can be obtained from the data in Table 1.

Fig. 9 shows the clustering result of part of the motion trajectories of this embodiment.
Embodiment 2
This embodiment builds on embodiment 1 and, in order to make the clustering more precise, additionally includes:

Step 3, perform inter-class merging on the motion trajectories of the k classes obtained in step 2, obtaining the motion trajectories after inter-class merging.

This includes:
Step 31, from the motion trajectories of the k classes clustered in step 2, choose any two classes, denoted C_a and C_b. Let v_a be the minimum inverse-projection speed of track p_a in class C_a and v_b the minimum inverse-projection speed of track p_b in class C_b.

Step 32, if v_a is less than v_b, C_a is the reference class with track p_a as its feature point and C_b is the class to be merged with track p_b as its feature point; if v_b is less than v_a, C_b is the reference class with track p_b as its feature point and C_a is the class to be merged with track p_a as its feature point.
The height H_p of the feature point of the class to be merged is obtained by formula (5):

In formula (5), v is the speed of the moving target, v = min(v_a, v_b); v_p is the minimum inverse-projection speed of the class to be merged; H_c is the height of the camera in the world coordinate system.
The coordinate value of the feature point of the class to be merged in the world coordinate system is obtained by formula (6):

P′ = C_{3×4}^{-1} λ_i p′  (6)

In formula (6), p′ = [u_i′, v_i′, 1]^T; P′ = [X_i′, Y_i′, Z_i′, 1]^T; u_i′, v_i′ are the coordinate values of the feature point of the class to be merged in the image coordinate system; X_i′, Y_i′, Z_i′ are its coordinate values in the world coordinate system; λ_i is a scale factor with 0 ≤ λ_i ≤ 1; C_{3×4}^{-1} denotes the inverse matrix of the perspective projection matrix of the camera.

In this embodiment, C_{3×4}^{-1} = {K[R_{3×3} | t_{3×1}]}^{-1}, where K denotes the intrinsic matrix of the camera, a fixed 3 × 3 matrix for a given camera; R_{3×3} denotes the rotation matrix between the camera coordinate system and the world coordinate system, and t_{3×1} the translation matrix between the camera coordinate system and the world coordinate system.

The camera coordinate system takes the optical centre O_C of the camera as the coordinate origin; X_C is consistent with the u-axis direction of the image coordinate system, Y_C with the v-axis direction, and the Z_C axis is perpendicular to the plane formed by the image coordinate system; the intersection of the Z_C axis with the image plane is called the principal point of the camera, as shown in Fig. 10.
The coordinate value of the feature point of the reference class in the world coordinate system is obtained by formula (7):

P″ = C_{3×4}^{-1} λ_i p″  (7)

In formula (7), p″ = [u_i″, v_i″, 1]^T; P″ = [X_i″, Y_i″, 0, 1]^T; u_i″, v_i″ are the coordinate values of the feature point of the reference class in the image coordinate system; X_i″, Y_i″, 0 are its coordinate values in the world coordinate system; λ_i is a scale factor with 0 ≤ λ_i ≤ 1; C_{3×4}^{-1} denotes the inverse matrix of the perspective projection matrix of the camera, defined as above with the X_C and Y_C axes parallel to the two-dimensional image plane, as shown in Fig. 10.
Step 33, obtain by formula (8) the absolute distances ΔX, ΔY, ΔZ in the world coordinate system between the feature point of the class to be merged and the feature point of the reference class.

Step 34, if ΔX = X′, ΔY = Y′ and ΔZ = Z′, merge the class to be merged and the reference class into one class; otherwise, perform step 35.

Step 35, repeat steps 31 to 34 until the motion trajectories of all k classes have served as reference class and class to be merged, obtaining the motion trajectories after inter-class merging.
Claims (4)
1. A clustering algorithm based on spectral clustering of spatial trajectories, characterised by comprising the following steps:
step 1, capturing video images of a road with a camera, and obtaining, for all moving targets in each frame of the video, a plurality of motion trajectories and the coordinate value, in the image coordinate system, of each track point on every trajectory;

letting the number of motion trajectories of all moving targets be n, with r consecutive track points on every trajectory, where n and r are natural numbers greater than or equal to 1;

the origin of the image coordinate system being at any corner of each frame of the video, with the horizontal direction of each frame as the u axis and the vertical direction as the v axis;
step 2, using the rigid-motion constraint to build a similarity matrix over the n motion trajectories of all moving targets and performing spectral clustering, obtaining motion trajectories of different classes;

including:

step 21, establishing a world coordinate system: the direction parallel to the road lane markings is the Y axis, the direction perpendicular to the lane markings is the X axis, and both the X and Y axes are parallel to the road; the intersection of the X and Y axes is the origin O, and the direction of the shortest line between the camera and the road is the Z axis;

step 22, choosing any two motion trajectories from the motion trajectories of all moving targets, denoted M and N; the height h_M of track M is measured along the vertical direction of the horizontal plane, with h_M ranging over 1-3 m in steps of 0.1 m; the height h_N of track N is measured along the horizontal direction of the horizontal plane, with h_N ranging over 0-4 m in steps of 0.01 m;
the ΔD₁ surface is obtained by formula (1):

$$\Delta D_1=\sum_{i=1}^{r-1}\left|M(P_{i+1})N(P_{i+1})-M(P_i)N(P_i)\right| \qquad (1)$$

in formula (1), N(P_i) denotes the i-th track point on track N, M(P_i) the i-th track point on track M, M(P_{i+1}) the (i+1)-th track point on track M and N(P_{i+1}) the (i+1)-th track point on track N, i = 1, 2, …, r-1; M(P_i)N(P_i) denotes the distance, in the world coordinate system, between track point P_i on track M and track point P_i on track N at the same instant;
The ΔD2 surface is obtained by formula (2):
ΔD2 = Σ (i = 1 to r−1) |M(Pi+1)M(Pi) − N(Pi+1)N(Pi)|    (2)
In formula (2), N(Pi) denotes the i-th track point on track N, M(Pi) the i-th track point on track M, M(Pi+1) the (i+1)-th track point on track M, and N(Pi+1) the (i+1)-th track point on track N, i = 1, 2, …, r−1; M(Pi+1)M(Pi) denotes the distance, in the world coordinate system, between track points Pi and Pi+1 on track M, and N(Pi+1)N(Pi) the corresponding distance on track N;
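The two measures in formulas (1) and (2) can be sketched as follows; this is a minimal illustration assuming each trajectory is given as an array of synchronized 3-D world-coordinate points (the function and variable names are my own, not from the claims):

```python
import numpy as np

def rigid_motion_distances(M, N):
    """Sketch of formulas (1) and (2) for two synchronized tracks.

    M, N: (r, 3) arrays of track points in world coordinates.
    Returns (delta_d1, delta_d2)."""
    # M(Pi)N(Pi): distance between the two tracks at each instant i
    cross = np.linalg.norm(M - N, axis=1)
    # M(Pi+1)M(Pi), N(Pi+1)N(Pi): step length along each track
    step_m = np.linalg.norm(M[1:] - M[:-1], axis=1)
    step_n = np.linalg.norm(N[1:] - N[:-1], axis=1)
    # formula (1): total change of the inter-track distance over time
    delta_d1 = float(np.abs(cross[1:] - cross[:-1]).sum())
    # formula (2): total difference between the two tracks' step lengths
    delta_d2 = float(np.abs(step_m - step_n).sum())
    return delta_d1, delta_d2
```

For two points moving rigidly together (e.g. two feature points on the same vehicle) both sums stay near zero, which is what lets them serve as similarity evidence in step 24.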
The coordinates (ui, vi) of track point Pi in the image coordinate system are converted to coordinates (xi, yi, zi) in the world coordinate system by formula (3):
Pi = C3×4−1 λi pi    (3)
In formula (3), pi = [ui, vi, 1]T, Pi = [xi, yi, zi, 1]T, λi is a scale factor with 0 ≤ λi ≤ 1, and C3×4−1 denotes the inverse matrix of the perspective projection matrix of the camera;
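Formula (3) maps an image point back toward the world. Since C3×4 is not square, this minimal sketch (my assumption, not stated in the claims) uses the Moore-Penrose pseudo-inverse in place of C3×4−1:

```python
import numpy as np

def back_project(u, v, C, lam):
    """Sketch of formula (3): Pi = C^{-1} * lambda_i * pi.

    C is the 3x4 perspective projection matrix; a 3x4 matrix has no true
    inverse, so the pseudo-inverse stands in for C^{-1} here.
    Returns the homogeneous world-coordinate vector."""
    p = np.array([u, v, 1.0])           # pi = [ui, vi, 1]^T
    return np.linalg.pinv(C) @ (lam * p)
```

With the scale factor λi chosen as the point's depth, the recovered vector lies on the viewing ray of the original world point.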
Step 23, taking hM and hN as the two axes, construct a plane as the zero plane; the ΔD1 surface and the ΔD2 surface are then two surfaces perpendicular to the zero plane;
The ΔD1 surface and the ΔD2 surface each intersect the zero plane in a line, called the ΔD1 line and the ΔD2 line; let the distance between the ΔD1 line and the ΔD2 line be ΔDiff12, the angle between them be θ, the slope of the ΔD1 line be k1, and the slope of the ΔD2 line be k2;
Step 24, obtain an element woq of the similarity matrix A by formula (4):
In formula (4), q = 1, 2, …, n; o = 1, 2, …, n; ΔD1IJ denotes the value of ΔD1 when the height of track M is hMI and the height of track N is hNJ; ΔD2IJ denotes the value of ΔD2 when the height of track M is hMI and the height of track N is hNJ; hMI = 1, 1.1, 1.2, …, 2.9, 3, in metres; hNJ = 0, 0.01, 0.02, …, 3.99, 4, in metres;
Step 25, repeat steps 22 to 24 until every pair of the n tracks has served as track M and track N, yielding the n × n similarity matrix A; perform step 26;
Step 26, arrange the eigenvalues of the similarity matrix A in descending order, with Sn the sum of all eigenvalues;
Choose the minimum k for which the ratio Sk/Sn reaches the required threshold; the number of clusters of the n motion trajectories is then k; perform step 27; here Sk is the sum of the first k eigenvalues, and k is a natural number greater than or equal to 1;
Step 27, construct the n × k eigenvector space from the eigenvectors corresponding to the first k eigenvalues, and cluster the n × k eigenvector space with the K-means algorithm, thereby clustering the n motion trajectories into motion trajectories of k classes.
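Steps 26 and 27 can be sketched as below. The 0.85 energy threshold for choosing k is an assumption (the inequality in the claim did not survive extraction), and the deterministic farthest-point K-means initialization is my own choice:

```python
import numpy as np

def spectral_cluster(A, energy=0.85):
    """Sketch of steps 26-27: eigen-decompose the similarity matrix A,
    pick k from the eigenvalue energy, then K-means the top-k
    eigenvector space. The `energy` threshold is assumed."""
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(vals)[::-1]               # step 26: descending eigenvalues
    vals, vecs = vals[order], vecs[:, order]
    cum = np.cumsum(vals) / vals.sum()           # S_k / S_n
    k = int(np.searchsorted(cum, energy)) + 1    # smallest k reaching the threshold
    F = vecs[:, :k]                              # step 27: n x k eigenvector space
    # plain K-means with deterministic farthest-point initialization
    centers = [F[0]]
    for _ in range(1, k):
        d = np.min([((F - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(F[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(100):
        labels = np.argmin(((F[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([F[labels == j].mean(0) if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return k, labels
```

On a block-structured similarity matrix the leading eigenvalues carry almost all of the energy, so k recovers the number of blocks and the row space of the top eigenvectors separates the classes.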
2. The clustering method based on spectral clustering of spatial trajectories as claimed in claim 1, characterized by further comprising:
Step 3, performing inter-class merging on the motion trajectories of the k classes obtained in step 2 to obtain the merged motion trajectories;
Including:
Step 31, from the motion trajectories of the k classes clustered in step 2, select any two classes, denoted Ca and Cb; let va be the minimum inverse-projection speed of track pa in class Ca, and vb the minimum inverse-projection speed of track pb in class Cb, where va ≥ 0 and vb ≥ 0;
Step 32, if va is less than vb, take Ca as the reference class and track pa as the feature point of the reference class, and take Cb as the class to be merged and track pb as the feature point of the class to be merged; if vb is less than va, take Cb as the reference class and track pb as the feature point of the reference class, and take Ca as the class to be merged and track pa as the feature point of the class to be merged;
The height Hp of the feature point of the class to be merged is obtained by formula (5):
Hp = (1 − v/vp) · Hc    (5)
In formula (5), v is the speed of the moving target, v = min(va, vb); vp is the minimum inverse-projection speed of the class to be merged; Hc is the height of the camera in the world coordinate system;
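Formula (5) is a one-line computation; a minimal sketch (the function and parameter names are mine):

```python
def feature_point_height(v, v_p, H_c):
    """Formula (5): Hp = (1 - v / v_p) * Hc.

    v:   speed of the moving target, min(va, vb)
    v_p: minimum inverse-projection speed of the class to be merged
    H_c: camera height in the world coordinate system"""
    return (1.0 - v / v_p) * H_c
```

Reading the formula directly: when v equals vp the feature point lies on the road plane (Hp = 0), and when v is zero it sits at camera height Hc.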
The coordinates of the feature point of the class to be merged in the world coordinate system are obtained by formula (6):
P′ = C3×4−1 λi p′    (6)
In formula (6), p′ = [ui′, vi′, 1]T and P′ = [Xi′, Yi′, Zi′, 1]T; ui′, vi′ are the coordinates of the feature point of the class to be merged in the image coordinate system; Xi′, Yi′, Zi′ are the coordinates of that feature point in the world coordinate system; λi is a scale factor with 0 ≤ λi ≤ 1; C3×4−1 denotes the inverse matrix of the perspective projection matrix of the camera;
The coordinates of the feature point of the reference class in the world coordinate system are obtained by formula (7):
P″ = C3×4−1 λi p″    (7)
In formula (7), p″ = [ui″, vi″, 1]T and P″ = [Xi″, Yi″, 0, 1]T; ui″, vi″ are the coordinates of the feature point of the reference class in the image coordinate system; Xi″, Yi″, 0 are the coordinates of that feature point in the world coordinate system; λi is a scale factor with 0 ≤ λi ≤ 1; C3×4−1 denotes the inverse matrix of the perspective projection matrix of the camera;
Step 33, obtain by formula (8) the absolute distances ΔX, ΔY, ΔZ in the world coordinate system between the feature point of the class to be merged and the feature point of the reference class:
ΔX = |X′ − X″|
ΔY = |Y′ − Y″|
ΔZ = |Z′ − 0|    (8)
Step 34, if ΔX = X′, ΔY = Y′, and ΔZ = Z′, merge the class to be merged and the reference class into one class; otherwise, perform step 35;
Step 35, repeat steps 31 to 34 until the motion trajectories of all k classes have served as the class to be merged and the reference class, obtaining the merged motion trajectories.
3. The clustering method based on spectral clustering of spatial trajectories as claimed in claim 1 or 2, characterized in that the inverse matrix C3×4−1 of the perspective projection matrix of the camera is obtained by formula (9):
C3×4−1 = {K[R3×3 | t3×1]}−1    (9)
In formula (9), K denotes the intrinsic matrix of the camera, R3×3 the rotation matrix between the camera coordinate system and the world coordinate system, and t3×1 the translation vector between the camera coordinate system and the world coordinate system;
The camera coordinate system takes the optical centre OC of the camera as its origin, with the XC axis aligned with the u axis of the image coordinate system, the YC axis aligned with the v axis of the image coordinate system, and the ZC axis perpendicular to the plane of the image coordinate system; the intersection of the ZC axis with the image plane is called the principal point of the camera.
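Formula (9) composes C from the intrinsics and extrinsics. In code the 3×4 product has no true inverse, so the following sketch (an assumption on my part) uses the Moore-Penrose pseudo-inverse:

```python
import numpy as np

def inverse_projection(K, R, t):
    """Formula (9): C = K [R3x3 | t3x1]; return a 4x3 pseudo-inverse of C.

    K: 3x3 intrinsic matrix; R: 3x3 rotation; t: length-3 translation.
    A 3x4 matrix is not invertible, so pinv stands in for {K[R|t]}^{-1}."""
    C = K @ np.hstack([R, np.asarray(t, float).reshape(3, 1)])
    return np.linalg.pinv(C)
```

For a full-row-rank C the pseudo-inverse satisfies C · C⁺ = I, which is the property the back-projection of formulas (3), (6) and (7) relies on.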
4. The clustering method based on spectral clustering of spatial trajectories as claimed in claim 1 or 2, characterized in that the image coordinate system takes the top-left corner of each frame of the video image as its origin, the horizontal direction of each frame as the u axis, and the vertical direction of each frame as the v axis.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710334850.9A CN107315994B (en) | 2017-05-12 | 2017-05-12 | Clustering method based on Spectral Clustering space trajectory |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107315994A | 2017-11-03 |
CN107315994B | 2020-08-18 |
Family
ID=60181424
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710334850.9A Active CN107315994B (en) | 2017-05-12 | 2017-05-12 | Clustering method based on Spectral Clustering space trajectory |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107315994B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108922172A (en) * | 2018-06-19 | 2018-11-30 | 上海理工大学 | Congestion in road based on vehicle characteristics matrix sequence mutation analysis monitors system |
CN109737976A (en) * | 2019-01-07 | 2019-05-10 | 上海极奥网络科技有限公司 | Map road section and lane line automatic Generation |
CN110189363A (en) * | 2019-05-30 | 2019-08-30 | 南京林业大学 | A kind of low multi-view video speed-measuring method of the mobile target of airdrome scene |
CN111311010A (en) * | 2020-02-22 | 2020-06-19 | 中国平安财产保险股份有限公司 | Vehicle risk prediction method and device, electronic equipment and readable storage medium |
CN112634320A (en) * | 2019-09-24 | 2021-04-09 | 成都通甲优博科技有限责任公司 | Method and system for identifying object motion direction at intersection |
CN112686941A (en) * | 2020-12-24 | 2021-04-20 | 北京英泰智科技股份有限公司 | Vehicle motion track rationality identification method and device and electronic equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102855638A (en) * | 2012-08-13 | 2013-01-02 | 苏州大学 | Detection method for abnormal behavior of vehicle based on spectrum clustering |
CN103456192A (en) * | 2013-09-01 | 2013-12-18 | 中国民航大学 | Terminal area prevailing traffic flow recognizing method based on track spectral clusters |
CN104504897A (en) * | 2014-09-28 | 2015-04-08 | 北京工业大学 | Intersection traffic flow characteristic analysis and vehicle moving prediction method based on trajectory data |
US20160140386A1 (en) * | 2011-11-29 | 2016-05-19 | General Electric Company | System and method for tracking and recognizing people |
Non-Patent Citations (4)
Title |
---|
LE XIN ET AL.: "Traffic flow characteristic analysis at intersections from multi-layer spectral clustering of motion patterns using raw vehicle trajectory", 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC) |
ZHOUYU FU ET AL.: "Similarity based vehicle trajectory clustering and anomaly detection", IEEE International Conference on Image Processing 2005 |
LI QIANLI ET AL.: "Traffic incident detection method based on video vehicle motion trajectory fields", Video Engineering |
LI MINGZHI ET AL.: "Distance calculation and clustering of moving target trajectories in traffic surveillance", Computer Engineering and Design |
Also Published As
Publication number | Publication date |
---|---|
CN107315994B (en) | 2020-08-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107315994A (en) | Clustering algorithm based on Spectral Clustering space trackings | |
CN111145545B (en) | Road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning | |
CN105005771B (en) | A kind of detection method of the lane line solid line based on light stream locus of points statistics | |
CN110285793A (en) | A kind of Vehicular intelligent survey track approach based on Binocular Stereo Vision System | |
CN104268539B (en) | A kind of high performance face identification method and system | |
CN103593678B (en) | A kind of long-span bridge vehicle dynamic load distribution detection method | |
CN109460709A (en) | The method of RTG dysopia analyte detection based on the fusion of RGB and D information | |
CN105575125B (en) | A kind of wagon flow video detecting analysis system | |
Guan et al. | A lightweight framework for obstacle detection in the railway image based on fast region proposal and improved YOLO-tiny network | |
CN104899590A (en) | Visual target tracking method and system for unmanned aerial vehicle | |
CN109299644A (en) | A kind of vehicle target detection method based on the full convolutional network in region | |
CN108389430A (en) | A kind of intersection pedestrian based on video detection and collision of motor-driven vehicles prediction technique | |
CN103324920A (en) | Method for automatically identifying vehicle type based on vehicle frontal image and template matching | |
CN108205667A (en) | Method for detecting lane lines and device, lane detection terminal, storage medium | |
CN104537342B (en) | A kind of express lane line detecting method of combination ridge border detection and Hough transformation | |
CN105761507B (en) | A kind of vehicle count method based on three-dimensional track cluster | |
CN113516853B (en) | Multi-lane traffic flow detection method for complex monitoring scene | |
CN102737386A (en) | Moving target anti-fusion shielding tracking algorithm | |
CN104574993B (en) | A kind of method of road monitoring and device | |
Kavitha et al. | Pothole and object detection for an autonomous vehicle using yolo | |
CN109948690A (en) | A kind of high-speed rail scene perception method based on deep learning and structural information | |
CN106919902A (en) | A kind of vehicle identification and trajectory track method based on CNN | |
CN107315998A (en) | Vehicle class division method and system based on lane line | |
CN109791607A (en) | It is detected from a series of images of video camera by homography matrix and identifying object | |
CN106327528A (en) | Moving object tracking method and operation method of unmanned aerial vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
EE01 | Entry into force of recordation of patent licensing contract | Application publication date: 20171103; Assignee: Jiangsu Baisheng Engineering Consulting Co.,Ltd.; Assignor: CHANG'AN University; Contract record no.: X2022980013572; Denomination of invention: Clustering Method Based on Spectral Clustering Spatial Trajectory; Granted publication date: 20200818; License type: Common License; Record date: 20220831 |