CN102722710A - Population characteristic extraction method based on viscous fluid - Google Patents
- Publication number
- CN102722710A CN102722710A CN2012101704296A CN201210170429A CN102722710A CN 102722710 A CN102722710 A CN 102722710A CN 2012101704296 A CN2012101704296 A CN 2012101704296A CN 201210170429 A CN201210170429 A CN 201210170429A CN 102722710 A CN102722710 A CN 102722710A
- Authority
- CN
- China
- Prior art keywords
- space
- time
- field
- delta
- change
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a population characteristic extraction method based on viscous fluid. The method comprises the following steps: segmenting a video into space-time blocks; measuring how the signal changes in the time and spatial domains with a spatio-temporal variation metric matrix; constructing, by eigenvalue analysis, an abstract spatio-temporal variation field that describes the appearance characteristics of crowd motion; computing, by tangential-force analysis of a viscous fluid, a spatio-temporal force field that captures the excitation characteristics of crowd motion; combining the spatio-temporal variation field, the spatio-temporal force field and position information into a spatio-temporal viscous fluid field; and analyzing crowd events with this viscous fluid field. Because the spatio-temporal variation characteristics of the signal are extracted directly, no individual detection or segmentation is required, so the method suits large-scale crowd analysis; moreover, by combining the motion and excitation characteristics of the crowd, the intrinsic properties of the motion are better mined, yielding high robustness and efficiency in subsequent crowd behavior analysis and abnormal event detection.
Description
Technical field
The invention belongs to the technical field of computer vision, and is specifically a population characteristic extraction method based on video, in particular a low-level feature extraction method suited to the analysis of large-scale crowds in video.
Background technology
At present, large-scale public activities have become an important carrier of economic development and cultural exchange, and crowd management has become an important aspect of social administration. In recent years, computer vision techniques have played an increasingly important role in crowd management, for example in crowd flow control, group target tracking, and abnormal crowd event detection.
The extraction of crowd features is a vital task in crowd analysis and the central problem to be solved first. A survey of the existing technical literature reveals two main lines of research. The first takes a microscopic view: the crowd is regarded as being composed of individual targets, and the features of the crowd are built up from individual features. Its key requirement is the detection of individual targets (see Leibe, B., Seemann, E., Schiele, B., "Pedestrian detection in crowded scenes", IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recogn. (CVPR), 2005) and their tracking (see S. Pellegrini, A. Ess, K. Schindler, and L. van Gool, "You'll never walk alone: Modeling social behavior for multi-target tracking", IEEE 12th International Conference on Computer Vision, pages 261-268, 2009). However, the performance of such methods degrades significantly as crowd size and crowd density increase. From another angle, in the management of large-scale crowds there is usually no need to know the detailed state of every individual. The second line of research takes a macroscopic, top-down view and studies the crowd as a whole. Whole-crowd methods usually consider only global features of the crowd, such as flow, density and dominant motion direction, and usually ignore individual information (see L. Kratz and K. Nishino, "Anomaly detection in extremely crowded scenes using spatio-temporal motion pattern models", IEEE Conference on Computer Vision and Pattern Recognition, pages 1446-1453, June 2009).
However, such methods ignore the interactions between individuals inside the crowd. In addition, most of them are based on the calculation of optical flow, which is sensitive to noise and cannot detect sudden motion changes.
Summary of the invention
The objective of the invention is to overcome the above shortcomings of the prior art and to propose a population characteristic extraction method based on viscous fluid, built on a new low-level crowd feature model: the spatio-temporal viscous fluid field.
The present invention is realized through the following technical scheme. The invention simulates crowd motion with a fluid, and simultaneously analyzes the forces between individuals in the crowd using the tangential force of a viscous fluid, so that the spatio-temporal characteristics of large-scale crowd motion are analyzed from two angles: the appearance of the motion and its excitation. From the appearance angle, crowd motion is the variation of the signal over the space-time domain; the invention proposes a spatio-temporal variation matrix to measure this variation, and builds a spatio-temporal variation field. From the excitation angle, crowd motion is determined by the forces within and between individuals; the invention simulates these with the tangential force of a viscous fluid, and builds a spatio-temporal force field. Together, the spatio-temporal variation field and the spatio-temporal force field constitute the spatio-temporal viscous fluid field. The method requires no detection or tracking of the individuals in the crowd, and is therefore better suited to the analysis of large-scale crowds; at the same time, by combining the appearance and excitation characteristics of crowd motion, it better mines the intrinsic properties of the motion, yielding better robustness and efficiency in subsequent crowd behavior analysis and abnormal event detection.
The population characteristic extraction method based on viscous fluid of the present invention comprises the following steps:
The first step: construct the spatio-temporal variation metric matrix, which measures the variation of the video signal at space-time position P(x, y, t).
The concrete steps are:
1. For any pixel P(x, y, t), construct a cylindrical space-time block centered on this pixel, with radius r in the spatial domain and depth T in the time domain.
2. Compute the signal change inside this space-time block as the description f_P of pixel P(x, y, t), that is:

f_P(Δr, Δθ, Δt) = ∫∫∫_V Diff(I_A(r, θ, t), I_B(r + Δr, θ + Δθ, t + Δt)) dV

where I_A(r, θ, t) and I_B(r + Δr, θ + Δθ, t + Δt) are the pixel values at any two coordinate points in the space-time block; Δr and Δθ give the relative position of the two pixels; Diff(x, y) is a step function of the difference of its arguments; and ∫∫∫_V(·) dV is the volume integral over the space-time block.
3. Varying the relative position Δr and Δθ of the two pixels, further compute the pixel's feature descriptions f_P^(m,n) for the different radial offsets Δr and angular offsets Δθ within the space-time block, where m = 1, 2, ..., M and n = 1, 2, ..., N.
4. Assemble these feature descriptions into a variation description matrix F_MN = [f_P^(m,n)]_(M×N), in which each row characterizes the degree of signal variation in the time domain and each column characterizes the signal variation in the spatial domain.
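The first step above can be sketched in Python. Everything in this sketch that the patent leaves open is an assumption: the grids of radial and angular offsets, the threshold inside the step function Diff, the zero temporal offset, and the discrete sum standing in for the volume integral.

```python
import numpy as np

def variation_matrix(video, x, y, t, r=3, T=7, M=8, N=6, thresh=10.0):
    """Sketch of the variation description matrix F_MN for pixel P(x, y, t).

    `video` is assumed to be a (frames, height, width) grayscale array; the
    offset grids, threshold and Δt = 0 are illustrative choices.
    """
    half = T // 2
    # Spatial coordinates inside the cylindrical block (disc of radius r).
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    inside = ys**2 + xs**2 <= r**2
    dys, dxs = ys[inside], xs[inside]
    F = np.zeros((M, N))
    for m in range(M):                       # different radial offsets Δr
        dr = m + 1
        for n in range(N):                   # different angular offsets Δθ
            dth = 2 * np.pi * n / N
            total = 0.0
            for dt in range(-half, half + 1):
                for dy, dx in zip(dys, dxs):
                    a = float(video[t + dt, y + dy, x + dx])
                    # Second point at relative polar displacement (Δr, Δθ);
                    # the temporal offset Δt of I_B is taken as zero here.
                    oy = int(round(dy + dr * np.sin(dth)))
                    ox = int(round(dx + dr * np.cos(dth)))
                    b = float(video[t + dt, y + oy, x + ox])
                    # Diff(a, b): step function on the intensity change.
                    total += 1.0 if abs(a - b) > thresh else 0.0
            F[m, n] = total                  # discrete volume integral
    return F
```

Each entry F[m, n] counts, over the whole block volume, how often the signal changes by more than the threshold for one particular (Δr, Δθ) pair, so rows vary with the radial offset and columns with the angular offset.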
The second step: based on the spatio-temporal variation matrix F_MN of the signal, build the spatio-temporal variation field ST_Fluid(x, y, t). The concrete steps are:
1. Use the spatio-temporal variation matrix to construct a Hermitian matrix H from F_MN and its transpose, and thereby extract the variation of the signal in the time domain.
2. Use eigenvalue analysis to obtain the dominant variation direction of H in feature space:

H·e_max = λ_max·e_max

where λ_max is the largest eigenvalue of the Hermitian matrix H and e_max is the corresponding unit eigenvector. The spatio-temporal feature of pixel P(x, y, t) is then formed from λ_max and e_max.
3. Using the spatio-temporal feature of every pixel in the sequence, obtain the spatio-temporal variation field (spatiotemporal fluid field) ST_Fluid(x, y, t).
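The eigenvalue analysis of the second step can be sketched as follows. The product order H = F_MNᵀ·F_MN and the use of λ_max·e_max as the per-pixel feature are assumptions consistent with the transpose mentioned in the claims, not a formulation the patent states explicitly.

```python
import numpy as np

def spatiotemporal_feature(F):
    """Sketch of the second step for one pixel's variation matrix F (M x N)."""
    # Hermitian (here: real symmetric) matrix built from the variation
    # matrix and its transpose; the product order F^T F is an assumption.
    H = F.T @ F                           # N x N, positive semi-definite
    eigvals, eigvecs = np.linalg.eigh(H)  # eigenvalues in ascending order
    lam_max = eigvals[-1]                 # largest eigenvalue
    e_max = eigvecs[:, -1]                # corresponding unit eigenvector
    # Assumed combination of lam_max and e_max into the pixel feature.
    return lam_max * e_max
```

Because H is positive semi-definite, λ_max is non-negative and equals the norm of the returned feature; collecting this vector for every pixel yields the field ST_Fluid(x, y, t).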
The third step: analyze the spatio-temporal variation field with the viscous-fluid tangential force to further obtain the spatio-temporal force field ST_Force(x, y, t).
The concrete steps are:
1. Using the theory of tangential force in a viscous fluid, first compute the tangential force of the spatio-temporal variation field along the x direction:

τ_x = μ·∂ST_Fluid/∂x

where μ is the viscosity coefficient; in the same way, the tangential forces of the variation field along the y and t directions can be computed.
2. Based on the tangential forces along all directions, compute the space-time force.
3. Using the space-time force of every pixel in the sequence, obtain the spatio-temporal force field (spatiotemporal force field) ST_Force(x, y, t).
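A minimal sketch of the tangential-force step, assuming Newton's law of viscosity τ = μ·∂u/∂n applied to the scalar magnitude of the variation field, and assuming the directional components are combined by their Euclidean norm; neither detail is fixed by the text above.

```python
import numpy as np

def spatiotemporal_force(fluid_mag, mu=1.0):
    """Sketch of the third step.

    fluid_mag: magnitude of ST_Fluid sampled on a (t, y, x) grid.
    mu: assumed viscosity coefficient.
    """
    # Tangential (shear) components along t, y and x: mu * d(field)/dn,
    # with finite differences standing in for the partial derivatives.
    tau_t, tau_y, tau_x = (mu * g for g in np.gradient(fluid_mag))
    # Assumed combination of the directional components.
    return np.sqrt(tau_t**2 + tau_y**2 + tau_x**2)
```

On a constant field the force vanishes everywhere, matching the observation below that background regions yield small tangential forces, while sharp changes of the field produce large forces.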
The fourth step: based on the spatio-temporal variation field and the spatio-temporal force field of every pixel in the sequence, construct the spatio-temporal viscous fluid field (spatiotemporal viscous fluid field, STVF(x, y, t)):

STVF(x, y, t) = [f_pos; ST_Fluid; ST_Force]

where f_pos is the pixel coordinate, ST_Fluid is the spatio-temporal variation field, and ST_Force is the spatio-temporal force field.
The fifth step: using a bag of features (BoF) and a latent Dirichlet allocation (LDA) model, analyze the spatio-temporal viscous fluid field to realize the analysis of crowd behavior and the detection of abnormal crowd events.
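The fifth step can be sketched with scikit-learn as an assumed implementation; the patent names only the generic techniques (bag of features and latent Dirichlet allocation), so the dictionary size, topic count and per-clip grouping below are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

def analyze_events(stvf, clip_ids, K=8, n_topics=2, seed=0):
    """Sketch of BoF + LDA over per-pixel STVF descriptors.

    stvf: (n_descriptors, d) array of STVF vectors.
    clip_ids: integer clip index for each descriptor row (assumed grouping).
    """
    # 1) Bag of features: quantize descriptors into a K-word dictionary.
    km = KMeans(n_clusters=K, n_init=1, random_state=seed).fit(stvf)
    words = km.labels_
    # 2) Per-clip word histograms.
    n_clips = int(clip_ids.max()) + 1
    hist = np.zeros((n_clips, K))
    for w, c in zip(words, clip_ids):
        hist[c, w] += 1
    # 3) LDA topic mixtures per clip; topics stand in for behavior modes.
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    return lda.fit_transform(hist)  # rows: per-clip topic distributions
```

Each clip's topic mixture can then be thresholded or classified to label the crowd state, which is how the behavior analysis and abnormal-event detection of this step would proceed.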
Compared with the prior art, the beneficial effects of the invention are: 1) the spatio-temporal variation characteristics of the signal are extracted to describe crowd motion, so no individual detection or segmentation is required, which better suits large-scale crowd analysis; 2) a spatio-temporal variation matrix is proposed and an abstract viscous fluid field is constructed to simulate the motion characteristics of the crowd; 3) the tangential force of viscous fluid theory is used to simulate the interaction forces between targets in the crowd, so motion characteristics are mined from the excitation angle; this better captures the intrinsic properties of the motion, yielding better robustness and efficiency in subsequent crowd behavior analysis and abnormal event detection.
Description of drawings
Fig. 1 is the main flow diagram of crowd event analysis based on the spatio-temporal viscous fluid field.
Fig. 2 is a schematic diagram of measuring the variation of the signal at space-time position P(x, y, t) within a cylindrical space-time block.
Fig. 3 shows results of the spatio-temporal variation field, where panel (a) is the original image frame (specifically frame 61 of the sequence), panel (b) is the corresponding variation field result, and panel (c) is a top view of the variation field.
Fig. 4 shows results of the spatio-temporal force field, where panel (a) is the original image frame (specifically frame 61 of the sequence), panel (b) is the corresponding force field result, and panel (c) is a top view of the force field.
Fig. 5 shows results of crowd behavior understanding based on the spatio-temporal viscous fluid field of the present invention.
Fig. 6 shows the receiver operating characteristic curves (ROC curves) of the event detection results obtained by the method of the present invention, the social force method and the optical flow method.
Embodiment
The embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiment is implemented on the premise of the technical scheme of the present invention; a detailed implementation mode and concrete operating process are given, but the protection scope of the present invention is not limited to the following embodiment.
Embodiment
The image frames adopted in this embodiment come from the crowd surveillance video in the PETS2009 database. This video sequence is provided by the IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS2009) for the analysis and understanding of crowd behavior.
The low-level video spatio-temporal feature extraction method of this embodiment comprises the following concrete steps:
The first step: construct the spatio-temporal variation metric matrix, which measures the variation characteristics of the video signal at space-time position P(x, y, t).
The concrete steps are:
1. For any pixel P(x, y, t), construct a cylindrical space-time block centered on this pixel, with radius r in the spatial domain and depth T in the time domain. In this embodiment, r = 3 and T = 7.
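With the embodiment's parameters r = 3 and T = 7, extracting the cylindrical block around a pixel can be sketched as follows; the (frames, height, width) array layout is an assumption.

```python
import numpy as np

def cylinder_block(video, x, y, t, r=3, T=7):
    """Extract the cylindrical space-time block of the embodiment
    (spatial radius r = 3, temporal depth T = 7) around pixel P(x, y, t).

    Returns the pixel values inside the cylinder, one row per frame.
    """
    half = T // 2
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    mask = ys**2 + xs**2 <= r**2             # disc of radius r
    patch = video[t - half:t + half + 1, y - r:y + r + 1, x - r:x + r + 1]
    return patch[:, mask]                    # shape: (T, pixels in disc)
```

A radius-3 disc on the pixel grid contains 29 pixels, so each block holds 7 × 29 samples of the signal.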
2. Compute the signal change inside this space-time block as the description f_P of pixel P(x, y, t):

f_P(Δr, Δθ, Δt) = ∫∫∫_V Diff(I_A(r, θ, t), I_B(r + Δr, θ + Δθ, t + Δt)) dV

where Δr and Δθ give the relative position of the two pixels; in this embodiment, Δr = 1.
3. Varying the relative position Δr and Δθ of the two pixels, further compute the pixel's feature descriptions for the different radial offsets Δr and angular offsets Δθ within the space-time block, where m = 1, 2, ..., M and n = 1, 2, ..., N. In this embodiment, M = 8 and N = 6.
4. Assemble these feature descriptions into a variation description matrix F_MN, in which each row characterizes the degree of signal variation in the time domain and each column characterizes the signal variation in the spatial domain.
The second step: based on the spatio-temporal variation matrix F_MN of the signal, build the spatio-temporal variation field ST_Fluid(x, y, t). The concrete steps are:
1. Use the spatio-temporal variation matrix of the preceding step to construct a Hermitian matrix H from F_MN and its transpose, and thereby extract the variation of the signal in the time domain.
2. Use eigenvalue analysis to obtain the largest eigenvalue of the matrix H, i.e. the dominant variation direction in feature space:

H·e_max = λ_max·e_max

where λ_max is the largest eigenvalue of the Hermitian matrix H and e_max is the corresponding unit eigenvector. The spatio-temporal feature of pixel P(x, y, t) within the space-time block V is then formed from λ_max and e_max.
3. Using the spatio-temporal feature of every pixel in the sequence, obtain the spatio-temporal variation field ST_Fluid(x, y, t).
Fig. 3 shows results of the spatio-temporal variation field: panel (a) is the original image frame (specifically frame 61 of the sequence), panel (b) is the corresponding variation field result, and panel (c) is a top view of the variation field. In the result images, different gray values represent different variation amplitudes, and the gray value increases with the amplitude. As can be seen from the figure, in the background region the signal varies little over the space-time domain, so the gray values are small; in regions of pedestrian motion, the variation of the signal over the space-time domain is more pronounced and tends towards higher gray values.
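The gray-scale rendering described for Fig. 3 can be reproduced with a simple min-max mapping; the normalization is an assumed rendering choice, since the figure's actual scaling is not specified.

```python
import numpy as np

def field_to_gray(field_mag):
    """Render a field-magnitude map as Fig. 3 describes: larger
    spatio-temporal variation maps to brighter gray values, so background
    (small change) stays dark and moving pedestrians appear bright.
    """
    lo, hi = field_mag.min(), field_mag.max()
    if hi == lo:                       # flat field: render mid-gray
        return np.full(field_mag.shape, 128, dtype=np.uint8)
    return ((field_mag - lo) / (hi - lo) * 255).astype(np.uint8)
```

The same mapping applies to the force field of Fig. 4, where steady regions stay dark and violent motion changes appear bright.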
The third step: analyze the spatio-temporal variation field with the viscous-fluid tangential force to further obtain the spatio-temporal force field ST_Force(x, y, t).
The concrete steps are:
1. Using the theory of tangential force in a viscous fluid, first compute the tangential force of the spatio-temporal variation field along the x direction:

τ_x = μ·∂ST_Fluid/∂x

where μ is the viscosity coefficient; in the same way, compute the tangential forces of the variation field along the y and t directions.
2. Based on the tangential forces along all directions, compute the space-time force.
3. Using the space-time force of every pixel in the sequence, obtain the spatio-temporal force field (spatiotemporal force field) ST_Force(x, y, t).
Fig. 4 shows results of the spatio-temporal force field: panel (a) is the original image frame (specifically frame 61 of the sequence), panel (b) is the corresponding force field result, and panel (c) is a top view of the force field. In the result images, different gray values represent different amplitudes, and the gray value increases with the amplitude. As can be seen from the figure, in background regions or regions of steady motion the tangential force of the abstract viscous fluid field is small, so the gray values tend to be small; in regions of violent motion change, the tangential force of the abstract viscous fluid field is more pronounced and the gray values are larger.
The fourth step: based on the spatio-temporal variation field and the spatio-temporal force field of every pixel in the sequence, construct the spatio-temporal viscous fluid field (spatiotemporal viscous fluid field, STVF(x, y, t)):

STVF(x, y, t) = [f_pos; ST_Fluid; ST_Force]

where f_pos is the pixel coordinate, ST_Fluid is the spatio-temporal variation field, and ST_Force is the spatio-temporal force field. In this embodiment, f_pos is a 2 × 1 vector, ST_Fluid and ST_Force are N × 1 vectors, and STVF(x, y, t) is a (2N + 2) × 1 vector.
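The concatenation of the fourth step, with the embodiment's dimensions, can be sketched as:

```python
import numpy as np

def stvf_vector(x, y, st_fluid, st_force):
    """Concatenate f_pos (2 x 1), ST_Fluid (N x 1) and ST_Force (N x 1)
    into the (2N + 2) x 1 descriptor STVF(x, y, t).  With the embodiment's
    N = 6 this gives a 14-dimensional vector.
    """
    f_pos = np.array([x, y], dtype=float)
    return np.concatenate([f_pos,
                           np.asarray(st_fluid, dtype=float),
                           np.asarray(st_force, dtype=float)])
```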
The fifth step: using the bag of features (BoF) method on the spatio-temporal viscous fluid field characteristic STVF(x, y, t), construct an analysis dictionary of size K by clustering; in this embodiment, K = 256.
The sixth step: using the latent Dirichlet allocation (LDA) model, analyze the spatio-temporal viscous fluid field to realize the analysis of crowd behavior and the detection of abnormal crowd events.
Fig. 5 shows the results of crowd behavior understanding based on the spatio-temporal viscous fluid field of the present invention.
Detection results are represented with scroll bars; within each bar, different gray values mark whether the crowd state in each frame of the sequence is walking or running. From top to bottom, the four scroll bars represent: the manually annotated event recognition result, used as the reference for comparing the methods; the result obtained by the method of the present invention; the result obtained by the social-force-based method (see R. Mehran, A. Oyama, and M. Shah, "Abnormal crowd behavior detection using social force model", IEEE Conference on Computer Vision and Pattern Recognition, pages 935-942, June 2009); and the result of recognition based on the optical flow method. The method of the present invention is thus compared against the social force method and the optical flow method. As can be seen from the figure, compared with these earlier methods, the method of the present invention better realizes the detection and analysis of crowd behavior, and the detection precision is significantly improved.
Fig. 6 shows the receiver operating characteristic curves (ROC curves) of the event detection results obtained by the method of the present invention, the social force method and the optical flow method. From top to bottom, the three curves are the ROC curve of the present invention, the curve corresponding to the social force method, and the curve corresponding to the optical flow method. As can be seen from the figure, the area under the ROC curve of the present method is larger than those of the social force and optical flow methods. This is because the present method characterizes crowd motion from the two angles of appearance and excitation, so it better mines the essential characteristics of the motion and, in particular, better finds the moments at which the motion characteristics change.
Although the content of the present invention has been introduced in detail through the preferred embodiment above, it should be appreciated that the above description should not be considered a limitation of the present invention. After reading the foregoing, various modifications and substitutions will be apparent to those skilled in the art. Therefore, the protection scope of the present invention should be limited by the appended claims.
Claims (7)
1. A population characteristic extraction method based on viscous fluid, comprising the following steps:
The first step: for any pixel P(x, y, t), construct a cylindrical space-time block centered on this pixel, with radius r in the spatial domain and depth T in the time domain;
The second step: compute the signal change inside this space-time block as the description f_P of pixel P(x, y, t);
The third step: varying the relative position Δr and Δθ of the two pixels, further compute the pixel's feature descriptions for the different radial offsets Δr and angular offsets Δθ within the space-time block;
The fourth step: using the feature descriptions at the different positions, construct the spatio-temporal variation metric matrix, which measures the variation of the video signal at space-time position P(x, y, t);
The fifth step: use the spatio-temporal variation matrix to construct a Hermitian matrix from F_MN and its transpose F_MN^T, and thereby extract the variation of the signal in the time domain, where F_MN is the spatio-temporal variation matrix of the signal;
The sixth step: use eigenvalue analysis to obtain the dominant variation of the matrix, and thereby obtain the spatio-temporal feature of pixel P(x, y, t);
The seventh step: using the spatio-temporal feature of every pixel in the sequence, obtain the spatio-temporal variation field ST_Fluid(x, y, t);
The eighth step: analyze the spatio-temporal variation field with the viscous-fluid tangential force to further obtain the space-time force;
The ninth step: using the space-time force of every pixel in the sequence, obtain the spatio-temporal force field;
The tenth step: based on the spatio-temporal variation field and the spatio-temporal force field of every pixel in the sequence, construct the spatio-temporal viscous fluid field;
The eleventh step: using a bag of features and a latent Dirichlet allocation model, analyze the spatio-temporal viscous fluid field to realize the analysis of crowd behavior and the detection of abnormal crowd events.
2. The population characteristic extraction method based on viscous fluid according to claim 1, characterized in that: the measure f_P of the variation of pixel P(x, y, t) in the space-time block described in the second step is obtained through a volume integral over the space-time block, specifically:

f_P(Δr, Δθ, Δt) = ∫∫∫_V Diff(I_A(r, θ, t), I_B(r + Δr, θ + Δθ, t + Δt)) dV

where I_A(r, θ, t) and I_B(r + Δr, θ + Δθ, t + Δt) are the pixel values at any two coordinate points in the space-time block; Δr and Δθ give the relative position of the two pixels; Diff(x, y) is a step function of the difference of its arguments; and ∫∫∫_V(·) dV is the volume integral over the space-time block.
3. The population characteristic extraction method based on viscous fluid according to claim 1, characterized in that: the spatio-temporal variation metric matrix described in the fourth step is obtained by changing the relative position Δr and Δθ of the two pixels in the space-time block and further computing the feature descriptions of the pixel for the different radial offsets Δr and angular offsets Δθ within the space-time block; each row of the spatio-temporal variation metric matrix F_MN characterizes the degree of signal variation in the time domain, and each column characterizes the signal variation in the spatial domain.
4. The population characteristic extraction method based on viscous fluid according to claim 1, characterized in that: in the construction of the abstract viscous fluid field in the sixth step, the dominant variation of the signal in the time domain is obtained through eigenvalue analysis of the spatio-temporal variation metric matrix.
5. The population characteristic extraction method based on viscous fluid according to claim 1, characterized in that: the crowd space-time force of the eighth step is obtained through the analysis of the viscous-fluid tangential force, specifically the tangential force of the spatio-temporal variation field along the x direction; in the same way, the tangential forces of the variation field along the y and t directions can be computed, where ST_Fluid is the spatio-temporal variation field.
6. The population characteristic extraction method based on viscous fluid according to claim 1, characterized in that: the spatio-temporal viscous fluid field of the tenth step is constructed from the spatio-temporal variation field and the spatio-temporal force field of every pixel in the sequence, combined with the position of the pixel.
7. The population characteristic extraction method based on viscous fluid according to any one of claims 1 to 6, characterized in that: an abstract viscous fluid field is constructed to simulate the motion characteristics of the crowd, and the fluid tangential force is used to obtain the motion excitation; by combining the appearance and excitation characteristics of the crowd motion, the intrinsic properties of the motion are better mined, yielding better robustness and efficiency in subsequent crowd behavior analysis and abnormal event detection; at the same time, no detection or tracking of individuals in the crowd is required, making the method better suited to the analysis of large-scale crowds.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210170429.6A CN102722710B (en) | 2012-05-28 | 2012-05-28 | Population characteristic extraction method based on viscous fluid |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102722710A true CN102722710A (en) | 2012-10-10 |
CN102722710B CN102722710B (en) | 2014-10-15 |
Family
ID=46948460
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210170429.6A Active CN102722710B (en) | 2012-05-28 | 2012-05-28 | Population characteristic extraction method based on viscous fluid |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102722710B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103902966A (en) * | 2012-12-28 | 2014-07-02 | 北京大学 | Video interaction event analysis method and device base on sequence space-time cube characteristics |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1202065A (en) * | 1997-06-02 | 1998-12-16 | 松下电器产业株式会社 | Image detection method, image detection apparatus, image processing method, image processing apparatus, and medium |
US20020181590A1 (en) * | 2001-04-24 | 2002-12-05 | Koninklijki Philips Electronics N.V. | 3-D Recursive vector estimation for video enhancement |
CN101799865A (en) * | 2010-02-25 | 2010-08-11 | 上海复控华龙微系统技术有限公司 | Pedestrian space-time outline presenting method based on ellipse Fourier decomposition |
CN102231792A (en) * | 2011-06-29 | 2011-11-02 | 南京大学 | Electronic image stabilization method based on characteristic coupling |
- 2012-05-28: application CN201210170429.6A filed in China; granted as CN102722710B (Active)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103902966A (en) * | 2012-12-28 | 2014-07-02 | 北京大学 | Video interaction event analysis method and device base on sequence space-time cube characteristics |
CN103902966B (en) * | 2012-12-28 | 2018-01-05 | 北京大学 | Video interactive affair analytical method and device based on sequence space-time cube feature |
Also Published As
Publication number | Publication date |
---|---|
CN102722710B (en) | 2014-10-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Chaker et al. | Social network model for crowd anomaly detection and localization | |
Ullah et al. | Anomalous entities detection and localization in pedestrian flows | |
CN102156880B (en) | Method for detecting abnormal crowd behavior based on improved social force model | |
CN102063613B (en) | People counting method and device based on head recognition | |
CN108230364B (en) | Foreground object motion state analysis method based on neural network | |
Ullah et al. | Multi-feature-based crowd video modeling for visual event detection | |
CN102043967B (en) | Effective modeling and identification method of moving object behaviors | |
CN106203274A (en) | Pedestrian's real-time detecting system and method in a kind of video monitoring | |
Wang et al. | SPB-YOLO: An efficient real-time detector for unmanned aerial vehicle images | |
CN102163290A (en) | Method for modeling abnormal events in multi-visual angle video monitoring based on temporal-spatial correlation information | |
CN101901354B (en) | Method for detecting and tracking multi targets at real time in monitoring videotape based on characteristic point classification | |
CN102890781A (en) | Method for identifying wonderful shots as to badminton game video | |
Gong et al. | Local distinguishability aggrandizing network for human anomaly detection | |
CN105405150A (en) | Abnormal behavior detection method and abnormal behavior detection device based fused characteristics | |
CN106033548B (en) | Crowd abnormity detection method based on improved dictionary learning | |
CN101290658A (en) | Gender recognition method based on gait | |
Liu et al. | An automatic detection algorithm of metro passenger boarding and alighting based on deep learning and optical flow | |
Qi et al. | Automated traffic volume analytics at road intersections using computer vision techniques | |
CN103400154A (en) | Human body movement recognition method based on surveillance isometric mapping | |
CN103902966A (en) | Video interaction event analysis method and device base on sequence space-time cube characteristics | |
CN102214359A (en) | Target tracking device and method based on hierarchic type feature matching | |
Hu et al. | Parallel spatial-temporal convolutional neural networks for anomaly detection and location in crowded scenes | |
Zhang et al. | Beyond particle flow: Bag of trajectory graphs for dense crowd event recognition | |
Hu et al. | Two-stage unsupervised video anomaly detection using low-rank based unsupervised one-class learning with ridge regression | |
CN115620227A (en) | Crowd abnormal behavior real-time detection system and method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |