CN103679215A - Video monitoring method based on group behavior analysis driven by visual big data - Google Patents

Video monitoring method based on group behavior analysis driven by visual big data

Info

Publication number
CN103679215A
CN103679215A (application CN201310746795.6A)
Authority
CN
China
Prior art keywords
behavior
frame image
vector
feature
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310746795.6A
Other languages
Chinese (zh)
Other versions
CN103679215B (en)
Inventor
黄凯奇
康运锋
曹黎俊
张旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201310746795.6A priority Critical patent/CN103679215B/en
Publication of CN103679215A publication Critical patent/CN103679215A/en
Application granted
Publication of CN103679215B publication Critical patent/CN103679215B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

A computer-implemented video monitoring method comprises the steps of: receiving video data captured by a camera; building a group behavior model from the received video data; estimating the parameters of the group behavior model to obtain the multiple crowd behaviors present in the scene; using the obtained group behavior model to obtain the behavior feature sets of the different crowds; and converting the obtained behavior feature sets and using the converted feature sets to obtain a head-count statistic for each crowd behavior. The camera-angle setup is generally applicable, so the method can count people at open entrances and exits; its computational load is small and meets the requirements of real-time video processing.

Description

Video monitoring method based on visual-big-data-driven group behavior analysis
Technical field
The present invention relates to a video monitoring method, and in particular to a video monitoring method based on group behavior analysis driven by visual big data.
Background art
Most traditional surveillance systems require dedicated operators to judge the monitored video manually. This consumes substantial manpower, and a person who concentrates on one task for a long time may overlook abnormal situations, with negative consequences. An intelligent video surveillance system can recognize different objects and, on finding an abnormality in the monitored picture, raise an alarm quickly and in the most appropriate way while supplying useful information. It thereby helps operators obtain accurate information and handle incidents more effectively, minimizing false alarms and missed detections.
In the related art, video monitoring methods can be divided into two classes according to the crowd behavior detection method used. The first class comprises multi-person behavior recognition methods based on motion tracking; these are challenged by the number of people in the crowd. When the crowd is large, occlusion is severe and individuals cannot be tracked separately, so such methods apply only to simple scenes with few people. The second class comprises crowd behavior recognition methods based on feature learning or on constructed behavior models, used to detect abnormal crowd behaviors such as gathering, dispersing, running, and fighting. These methods suit crowded scenes better: features are extracted, a model is built, and machine learning is used to obtain the model parameters, which helps improve the detection rate. However, no single model can describe all behaviors, so different models are needed for specific behaviors; moreover, the shortage of training samples still makes it difficult to obtain optimal model parameters.
Summary of the invention
The object of the present invention is to provide a video monitoring method that can detect and recognize crowd behaviors and count the number of people exhibiting each behavior.
To achieve this object, a video monitoring method may comprise the steps of:
1) receiving video data captured by a camera;
2) building a group behavior model from the received video data;
3) estimating the parameters of the group behavior model to obtain the multiple crowd behaviors present in the scene;
4) using the obtained group behavior model to obtain the behavior feature sets of the different crowds;
5) converting the obtained behavior feature sets, and using the converted feature sets to obtain a head-count statistic for each crowd behavior.
The technical solution of the present invention has the following advantages: 1) the mathematical model is simple, has few parameters, and is easy to train; 2) it works in crowded environments and can compute the cumulative count of people for a specific behavior; 3) the camera-angle setup is generally applicable, so the method can count people at open entrances and exits; 4) the computational load is small, meeting the requirements of real-time video processing.
Brief description of the drawings
Fig. 1 shows a flow chart of the video monitoring method according to an embodiment of the present invention;
Fig. 2 shows the word-document model structure according to an embodiment of the present invention;
Fig. 3 shows an example live scene according to an embodiment of the present invention;
Fig. 4 shows the feature sets of different crowd behaviors in the live scene according to an embodiment of the present invention;
Fig. 5 shows a schematic diagram of geometric correction according to an embodiment of the present invention;
Fig. 6 shows an example of the variation of the head count in a park obtained according to an embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in more detail below with reference to specific embodiments and the accompanying drawings.
It should be noted that, in the drawings and in the description, similar or identical parts use the same reference numerals. Implementations not shown or described take forms known to those of ordinary skill in the relevant technical field. In addition, although examples with particular parameter values may be given herein, the parameters need not exactly equal those values; they may approximate them within acceptable error margins or design constraints. Direction terms mentioned in the embodiments, such as "up", "down", "front", "back", "left", and "right", refer only to directions in the drawings; they are used for illustration and do not limit the present invention.
According to the technical solution of the present invention: first, to handle the complexity of the crowd in the scene, a group behavior model is used to mine the multiple behaviors present; then, for each of the K detected classes of crowd behavior, a behavior feature set is obtained; next, each feature set is converted into, for example, a 5-dimensional feature vector to reduce the feature dimensionality, and a time parameter is introduced to obtain a 5*G-dimensional feature vector; finally, the obtained 5*G-dimensional feature vectors are used to train an artificial neural network, which counts the cumulative number of people for each class of crowd behavior. Fig. 1 shows the flow chart of the whole technical solution of the embodiment of the present invention. The embodiments of the invention are described in detail below.
Step 1: receive the video data captured by the camera; the received video data may be pre-processed, for example denoised.
Step 2: build a group behavior model from the received video data.
Because of the complexity of crowd behavior, a scene usually contains several different crowd behaviors, which are difficult to describe with a single model. A group behavior model can therefore be used to obtain a feature set for each behavior, and the feature sets are used for people counting. The group behavior model may be a word-document model: low-level image features serve as words and video segments as documents, so that the crowd behaviors in the video are mined as latent topics, and the feature set of each crowd behavior, i.e., a set of low-level features, is obtained.
The low-level feature adopted in the embodiment of the present invention is local motion information. For example, motion pixels can be obtained by frame differencing, and an optical flow method (Horn B K P, Schunck B G. Determining optical flow. Artificial Intelligence, 1981, 17(1): 185-203) can then be used to compute the velocity vector of each motion pixel, yielding the feature of the pixel, i.e., its position and velocity. Here each motion pixel is treated as a word w_i; a video segment comprises M frames, i.e., M documents, and each document is represented by a word set W = {w_i, i = 1, ..., N}, where w_i = {x_i, y_i, u_i, v_i}, N is the number of motion pixels in the frame, x and y are the horizontal and vertical positions of the pixel, and u and v are its horizontal and vertical velocity components. Of course, those skilled in the art may use other known motion-estimation techniques to represent the document W.
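As an illustration of this feature-extraction step, the sketch below builds the word set from two consecutive grayscale frames using frame differencing and a minimal Horn-Schunck iteration (the optical-flow method cited above). The smoothing weight `alpha`, the iteration count, and the difference threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np

def horn_schunck(prev, curr, alpha=1.0, n_iter=50):
    """Minimal Horn-Schunck optical flow; alpha and n_iter are illustrative."""
    prev = prev.astype(float)
    curr = curr.astype(float)
    Ix = np.gradient(curr, axis=1)   # spatial gradients
    Iy = np.gradient(curr, axis=0)
    It = curr - prev                 # temporal gradient
    u = np.zeros_like(curr)
    v = np.zeros_like(curr)

    def local_avg(f):                # 4-neighbour average with edge padding
        p = np.pad(f, 1, mode="edge")
        return 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])

    for _ in range(n_iter):
        u_avg, v_avg = local_avg(u), local_avg(v)
        t = (Ix * u_avg + Iy * v_avg + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_avg - Ix * t
        v = v_avg - Iy * t
    return u, v

def motion_words(prev, curr, diff_thresh=10.0):
    """Frame differencing selects motion pixels; each word is w_i = (x, y, u, v)."""
    u, v = horn_schunck(prev, curr)
    moving = np.abs(curr.astype(float) - prev.astype(float)) > diff_thresh
    ys, xs = np.nonzero(moving)
    return np.stack([xs, ys, u[ys, xs], v[ys, xs]], axis=1)
```

On a pattern translated one pixel to the right, the recovered mean u over the motion pixels comes out positive (rightward); a production system would typically use a pyramidal or more robust flow estimator.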
Fig. 2 shows the word-document model structure used in the embodiment of the present invention. Here α encodes the relative weights of the latent topics over the document collection, and β gives the probability distribution of each latent topic itself. The random variable π_j characterizes document j: its components give the proportion of each latent topic in that document. At the word level, z_ji is the latent-topic assignment of word i in document j, and x_ji is the word-vector representation of the document. Suppose there are K behavior topics, each topic being a multinomial distribution over words, and let α parameterize a corpus-level Dirichlet distribution. For each document j, the Dirichlet distribution Dir(π_j | α) has parameter α. For each word i in document j, the topic z_ji is drawn with probabilities π_jk, and the word x_ji follows a multinomial distribution determined by z_ji and β. Here π_j and z_ji are latent variables, while α and β are the parameters to be optimized. Given α and β, the joint distribution of π_j, the topics z_j = {z_ji}, and the words x_j = {x_ji} is given by formula (1):
p(x_j, z_j, \pi_j \mid \alpha, \beta) = p(\pi_j \mid \alpha) \prod_{i=1}^{N} p(z_{ji} \mid \pi_j)\, p(x_{ji} \mid z_{ji}, \beta) = \frac{\Gamma(\sum_{k=1}^{K} \alpha_k)}{\prod_{k=1}^{K} \Gamma(\alpha_k)}\, \pi_{j1}^{\alpha_1 - 1} \cdots \pi_{jK}^{\alpha_K - 1} \prod_{i=1}^{N} \pi_{j z_{ji}} \beta_{z_{ji} x_{ji}}    (1)
The key problem in building the word-document model is therefore inferring the distribution of the latent variables, i.e., obtaining the latent-topic configuration (π, z) inside the target document. Because the posterior distribution p(z_j, π_j | α, β) is intractable, it can be approximated by the variational distribution of formula (2):
q(z_j, \pi_j \mid \gamma_j, \phi_j) = q(\pi_j \mid \gamma_j) \prod_{i=1}^{N} q(z_{ji} \mid \phi_{ji})    (2)
where γ_j is the parameter of the Dirichlet distribution q(π_j | γ_j) and {φ_ji} are the parameters of the multinomial distributions q(z_ji | φ_ji). (γ_j, φ_j) can be obtained by maximizing the lower bound on log p(x_j | α, β).
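The patent does not write out the coordinate-ascent updates; for the standard LDA-style word-document model, the textbook variational updates that maximize the bound on log p(x_j | α, β) take the form (ψ denotes the digamma function):

```latex
\phi_{jik} \propto \beta_{k, x_{ji}} \exp\big(\psi(\gamma_{jk})\big),
\qquad
\gamma_{jk} = \alpha_k + \sum_{i=1}^{N} \phi_{jik}
```

Iterating these two updates to convergence for each document j yields the optimal variational parameters used in the E-step.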
Step 3: estimate the parameters of the group behavior model to obtain the various crowd behaviors present in the scene.
The optimal parameters (α, β) can be obtained by maximizing log p(x_j | α, β), as shown in formula (3).
(\alpha^*, \beta^*) = \arg\max_{(\alpha, \beta)} \sum_{j=1}^{M} \log p(x_j \mid \alpha, \beta)    (3)
Since p(x_j | α, β) is likewise hard to compute directly, (α, β) can be estimated by a variational maximum-likelihood EM method: in the E-step, optimal variational parameters (γ_j*, φ_j*) are found for each document j, and the variational distribution of formula (2) with those parameters is used to approximate the posterior; cycling through the two steps yields the optimal parameters (α*, β*).
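The patent's variational EM operates on the full word-document model. As a simplified, runnable illustration of the same E-step/M-step cycle discovering K motion themes, the sketch below fits an isotropic Gaussian mixture to the velocity part of the words by plain EM; the mixture stand-in, K, and the iteration count are assumptions for illustration, not the patent's model.

```python
import numpy as np

def em_motion_themes(V, K=4, n_iter=50):
    """EM for an isotropic Gaussian mixture over velocity vectors V (N x 2).
    A simplified stand-in for the patent's variational EM: each mixture
    component plays the role of one latent crowd-behavior theme."""
    N, d = V.shape
    # Farthest-point initialisation of the K component means
    mu = V[:1].astype(float)
    for _ in range(K - 1):
        d2 = ((V[:, None, :] - mu[None]) ** 2).sum(-1).min(axis=1)
        mu = np.vstack([mu, V[d2.argmax()]])
    pi = np.full(K, 1.0 / K)           # mixing weights (role of pi_j)
    var = np.full(K, V.var() + 1e-6)   # per-component isotropic variance
    for _ in range(n_iter):
        # E-step: responsibilities r[n, k] proportional to pi_k * N(v_n | mu_k, var_k I)
        d2 = ((V[:, None, :] - mu[None]) ** 2).sum(-1)
        log_r = np.log(pi) - 0.5 * d2 / var - 0.5 * d * np.log(var)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: maximum-likelihood re-estimates of weights, means, variances
        Nk = r.sum(axis=0) + 1e-12
        pi = Nk / N
        mu = (r.T @ V) / Nk[:, None]
        d2 = ((V[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * d2).sum(axis=0) / (d * Nk) + 1e-6
    labels = ((V[:, None, :] - mu[None]) ** 2).sum(-1).argmin(axis=1)
    return mu, pi, labels
```

On flow vectors drawn from four directions (up, down, left, right, as in Fig. 3), the four recovered component means land near the four motion directions.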
As an example, Fig. 3 shows one frame of the received video data; using the group behavior model of the embodiment of the present invention, four latent topics (crowd behaviors) are mined in this scene: moving up, moving down, moving left, and moving right.
Step 4: use the obtained group behavior model to obtain the feature sets of the different crowd behaviors.
Every frame of the video contains different crowd behaviors. With the model parameters obtained in Step 3, the feature set of each crowd behavior is obtained from the word-document model as shown in formula (4).
f_{k^*} = \{ x_{k^* i} \mid i = 1, \ldots, F \}, \qquad k^* = \arg\max_{k \in \{1, \ldots, K\}} p(x_i, z_{k,i} \mid \alpha, \beta)    (4)
where f_{k*} is the feature set of behavior k*, F is the number of features in that set, and x_{ki} is the word, i.e., the feature of the i-th pixel of behavior k.
Fig. 4 shows the crowd behaviors in the scene, with optical-flow feature points (only some are shown in the image) representing the different behaviors. Three crowd behaviors are present: the feature points in rectangular region 1 move upward, those in rectangular region 2 move leftward, and those in rectangular region 3 move downward.
Step 5: convert the obtained behavior feature sets, and use the converted feature sets to obtain a head-count statistic for each behavior.
The group behavior model above yields the different crowd behaviors and the feature set of each. Although a behavior feature set could describe the number of people in the behaving crowd, its dimensionality is high, parameter training would be slow, and a cumulative count cannot be obtained from it directly. According to the method of the present invention, the behavior feature set of each frame is therefore converted into a 5-dimensional feature vector, reducing the dimensionality. At the same time a time parameter is added: from each behavior feature set obtained with formula (4), a 5*G-dimensional feature vector NF = {AS_G, SV_G, DV_G, DD_G, NP_G} is formed, where the time parameter G is the number of frames, and the vector is used for counting the cumulative number of people of a specific behavior. Specifically, the 5*G-dimensional feature vector is obtained as follows:
(1) Average speed vector AS_G:
AS_G = {AS_g, g = 1, ..., G}, where AS_g, the average speed of frame g, is obtained as in formula (5):
AS_g = \frac{1}{F} \sum_{i=1}^{F} \sqrt{v_{gi}^2 + u_{gi}^2}    (5)
where u_gi and v_gi are the x- and y-direction velocity components of the i-th feature in frame g.
(2) Speed variance vector SV_G:
SV_G = {SV_g, g = 1, ..., G}, where SV_g, the speed variance of frame g, measures the complexity of the optical-flow speeds in the frame and is obtained as in formula (6):
SV_g = \frac{1}{F} \sum_{i=1}^{F} \left( \sqrt{v_{gi}^2 + u_{gi}^2} - AS_g \right)^2    (6)
(3) Direction variance vector DV_G:
DV_G = {DV_g, g = 1, ..., G}, where DV_g, the direction variance of frame g, measures the complexity of the optical-flow directions and is obtained as in formula (7):
DV_g = \frac{1}{8} \sum_{i=1}^{8} \left( ND_{gi} - \overline{ND}_g \right)^2    (7)
The range 0 to 360 degrees is divided into 8 intervals, and the direction of each optical-flow feature in the behavior feature set votes into its angular interval, giving a direction histogram for each behavior. ND_gi is the statistic of the i-th histogram interval, and \overline{ND}_g is the mean of {ND_gi, i = 1, ..., 8}.
(4) Divergence vector DD_G:
DD_G = {DD_g, g = 1, ..., G}, where DD_g, the divergence of frame g, is obtained as in formula (8):
DD_g = \sum_{i=1}^{8} ND_{gi} \times |RD_g(i)|, \qquad RD_g(i) = \mathrm{mod}(i - MD_g, 8) - 8 \times [\mathrm{mod}(i - MD_g, 8) \ge 4]    (8)
where MD_g = \arg\max_i ND_{gi}, i = 1, ..., 8, is the dominant-direction interval.
(5) Behavior pixel-total vector NP_G:
Because monitored scenes generally have a large depth of field, there is significant perspective distortion (the image-plane projection of the same object looks large near the camera and small far from it), so the contributions of different pixels on the image plane must be weighted. Assume the ground is planar and people stand perpendicular to it. As shown in Fig. 5, let the vanishing point P_v have coordinates (x_v, y_v) and let the reference line be y_r = H/2; the contribution factor of any pixel I(x, y) on the image plane is obtained as in formula (9):
S_C(x, y) = \left( \frac{y_r - y_v}{y - y_v} \right)^2    (9)
The pixel total of the behavior in frame g is NP_g = \frac{1}{F} \sum_{i=1}^{F} S_C(x_{gi}, y_{gi}), and the behavior pixel-total vector is NP_G = {NP_g, g = 1, ..., G}.
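Under the definitions of formulas (5)-(9), the five per-frame descriptors can be sketched as follows. The vanishing-point ordinate y_v and reference line y_r are taken as given, and the use of `argmax` for MD_g follows the dominant-direction reading of formula (8); both are assumptions where the extracted text is ambiguous.

```python
import numpy as np

def frame_features(xy, uv, y_v, y_r):
    """Compute the five per-frame descriptors (AS, SV, DV, DD, NP) from one
    behaviour's feature set: positions xy (F x 2) and flow vectors uv (F x 2)."""
    speed = np.sqrt((uv ** 2).sum(axis=1))
    AS = speed.mean()                         # average speed, formula (5)
    SV = ((speed - AS) ** 2).mean()           # speed variance, formula (6)
    # 8-bin direction histogram over [0, 360) degrees
    ang = np.degrees(np.arctan2(uv[:, 1], uv[:, 0])) % 360.0
    ND = np.bincount((ang // 45).astype(int), minlength=8).astype(float)
    DV = ((ND - ND.mean()) ** 2).mean()       # direction variance, formula (7)
    # Divergence, formula (8): signed bin offset from the dominant direction MD
    MD = ND.argmax()
    i = np.arange(8)
    RD = np.mod(i - MD, 8) - 8 * (np.mod(i - MD, 8) >= 4)
    DD = (ND * np.abs(RD)).sum()
    # Perspective-weighted pixel total, formulas (9) and NP_g
    SC = ((y_r - y_v) / (xy[:, 1] - y_v)) ** 2
    NP = SC.mean()
    return np.array([AS, SV, DV, DD, NP])
```

For a crowd whose features all move rightward at the same speed, SV and DD vanish while DV reflects the single occupied histogram bin.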
After the 5*G-dimensional feature vectors are obtained, the numbers of people entering and leaving (two different behaviors) are calibrated manually and used to train an artificial neural network model; the trained neural network model then performs the people counting, using known neural-network methods. In the experiment, the total number of people in a park was obtained as the difference between the counted entering and leaving numbers at an exit of the scene. Fig. 6(a) shows the entering/leaving head counts for one frame of the live video. The upper-right corner of the image shows, in red, the cumulative counts since counting began: In: 157, Out: 39. Only some optical-flow feature points are shown in the image: the feature points in elliptical region 1 are "leaving" and those in elliptical region 2 are "entering"; arrows indicate the motion direction of the feature points, and the black box is the counting region. Fig. 6(b) shows the variation of the head count in the park (in 2-minute units); the average counting accuracy in the park is 92.35%.
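The patent trains an artificial neural network on the 5*G-dimensional vectors against manually calibrated in/out counts. As a minimal, hypothetical stand-in (not the patent's network), the sketch below stacks G per-frame descriptors into one vector and fits a ridge least-squares regressor to labelled counts.

```python
import numpy as np

def stack_window(per_frame_feats):
    """Concatenate G per-frame 5-dim descriptors into one 5*G feature vector."""
    return np.concatenate(per_frame_feats)

def fit_counter(X, y, lam=1e-3):
    """Ridge least squares mapping 5*G features to calibrated counts,
    a hypothetical stand-in for the patent's neural network."""
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias column
    w = np.linalg.solve(X1.T @ X1 + lam * np.eye(X1.shape[1]), X1.T @ y)
    return w

def predict_count(w, x):
    """Predict a head count for one 5*G feature vector."""
    return float(np.append(x, 1.0) @ w)
```

In practice one regressor would be fitted per behavior class (entering, leaving), and the park occupancy tracked as the running difference of the two predictions.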
The specific embodiments described above further explain the objects, technical solutions, and beneficial effects of the present invention. It should be understood that they are merely specific embodiments of the invention and do not limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (10)

1. A computer-implemented video monitoring method, comprising the steps of:
a) receiving video data captured by a camera;
b) building a group behavior model from the received video data;
c) estimating the parameters of the group behavior model to obtain the multiple crowd behaviors present in the scene;
d) using the obtained group behavior model to obtain the behavior feature sets of different crowds;
e) converting the obtained behavior feature sets, and using the converted feature sets to obtain a head-count statistic for each crowd behavior.
2. The method according to claim 1, wherein step b) comprises: building a word-document model in which each motion pixel is a word w_i, the M frames of a video segment correspond to M documents, and each document is represented by the word set W = {w_i, i = 1, ..., N}, where w_i = {x_i, y_i, u_i, v_i}, N is the number of motion pixels in the frame, x and y are the horizontal and vertical positions of the pixel, and u and v are its horizontal and vertical velocity components.
3. The method according to claim 1, wherein step c) comprises: estimating the parameters of the group behavior model by a maximum-likelihood EM method.
4. The method according to claim 2, wherein step c) comprises: detecting the behaviors present in the scene with the group behavior model, and obtaining the feature set of each behavior, according to the following formula:
f_{k^*} = \{ x_{k^* i} \mid i = 1, \ldots, F \}, \qquad k^* = \arg\max_{k \in \{1, \ldots, K\}} p(x_i, z_{k,i} \mid \alpha, \beta)
where α encodes the relative weights of the latent topics over the document collection, β gives the probability distribution of each latent topic itself, there are K behaviors in total, f_{k*} is the feature set of behavior k*, and F is the number of features in that set.
5. The method according to claim 4, wherein step d) comprises: converting the obtained behavior feature set into a 5*G-dimensional feature vector NF = {AS_G, SV_G, DV_G, DD_G, NP_G}, and training an artificial neural network for people counting, where AS_G is the average speed vector, SV_G the speed variance vector, DV_G the direction variance vector, DD_G the divergence vector, and NP_G the behavior pixel-total vector.
6. The method according to claim 5, wherein the average speed vector AS_G is calculated as:
AS_G = {AS_g, g = 1, ..., G}
where AS_g = \frac{1}{F} \sum_{i=1}^{F} \sqrt{v_{gi}^2 + u_{gi}^2} is the average speed of frame g, and u_gi and v_gi are the x- and y-direction velocity components of the i-th feature in frame g.
7. The method according to claim 5, wherein the speed variance vector SV_G is calculated as:
SV_G = {SV_g, g = 1, ..., G}
where SV_g = \frac{1}{F} \sum_{i=1}^{F} \left( \sqrt{v_{gi}^2 + u_{gi}^2} - AS_g \right)^2 is the speed variance of frame g, and u_gi and v_gi are the x- and y-direction velocity components of the i-th feature in frame g.
8. The method according to claim 5, wherein the direction variance vector DV_G is calculated as:
DV_G = {DV_g, g = 1, ..., G}
where DV_g = \frac{1}{8} \sum_{i=1}^{8} \left( ND_{gi} - \overline{ND}_g \right)^2 is the direction variance of frame g, ND_gi is the statistic of the i-th direction-histogram interval, \overline{ND}_g is the mean of {ND_gi, i = 1, ..., 8}, and u_gi and v_gi are the x- and y-direction velocity components of the i-th feature in frame g.
9. The method according to claim 5, wherein the divergence vector DD_G is calculated as:
DD_G = {DD_g, g = 1, ..., G}
where DD_g = \sum_{i=1}^{8} ND_{gi} \times |RD_g(i)| is the divergence of frame g, with RD_g(i) = \mathrm{mod}(i - MD_g, 8) - 8 \times [\mathrm{mod}(i - MD_g, 8) \ge 4], MD_g = \arg\max_i ND_{gi}, i = 1, ..., 8, and ND_gi the statistic of the i-th direction-histogram interval.
10. The method according to claim 5, wherein the behavior pixel-total vector NP_G is calculated as:
NP_G = {NP_g, g = 1, ..., G}
where NP_g = \frac{1}{F} \sum_{i=1}^{F} S_C(x_{gi}, y_{gi}) is the behavior pixel total of frame g.
CN201310746795.6A 2013-12-30 2013-12-30 Video monitoring method based on visual-big-data-driven group behavior analysis Active CN103679215B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310746795.6A CN103679215B (en) 2013-12-30 2013-12-30 Video monitoring method based on visual-big-data-driven group behavior analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310746795.6A CN103679215B (en) 2013-12-30 2013-12-30 Video monitoring method based on visual-big-data-driven group behavior analysis

Publications (2)

Publication Number Publication Date
CN103679215A true CN103679215A (en) 2014-03-26
CN103679215B CN103679215B (en) 2017-03-01

Family

ID=50316703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310746795.6A Active CN103679215B (en) Video monitoring method based on visual-big-data-driven group behavior analysis

Country Status (1)

Country Link
CN (1) CN103679215B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104320617A (en) * 2014-10-20 2015-01-28 中国科学院自动化研究所 All-weather video monitoring method based on deep learning
CN105096344A (en) * 2015-08-18 2015-11-25 上海交通大学 A group behavior identification method and system based on CD motion features
CN105100683A (en) * 2014-05-04 2015-11-25 深圳市贝尔信智能系统有限公司 Video-based passenger flow statistics method, device and system
CN108573497A (en) * 2017-03-10 2018-09-25 北京日立北工大信息系统有限公司 Passenger flow statistic device and method
US10127597B2 (en) 2015-11-13 2018-11-13 International Business Machines Corporation System and method for identifying true customer on website and providing enhanced website experience
CN109063549A (en) * 2018-06-19 2018-12-21 中国科学院自动化研究所 High-resolution based on deep neural network is taken photo by plane video moving object detection method
CN110874878A (en) * 2018-08-09 2020-03-10 深圳云天励飞技术有限公司 Pedestrian analysis method, device, terminal and storage medium
CN112084925A (en) * 2020-09-03 2020-12-15 厦门利德集团有限公司 Intelligent electric power safety monitoring method and system
CN113012386A (en) * 2020-12-25 2021-06-22 贵州北斗空间信息技术有限公司 Security alarm multi-level linkage rapid pushing method
CN115856980A (en) * 2022-11-21 2023-03-28 中铁科学技术开发有限公司 Marshalling station operator monitoring method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751553A (en) * 2008-12-03 2010-06-23 中国科学院自动化研究所 Method for analyzing and predicting large-scale crowd density
US20110243450A1 (en) * 2010-04-01 2011-10-06 Microsoft Corporation Material recognition from an image
CN102385705A (en) * 2010-09-02 2012-03-21 大猩猩科技股份有限公司 Abnormal behavior detection system and method by utilizing automatic multi-feature clustering method
CN102708573A (en) * 2012-02-28 2012-10-03 西安电子科技大学 Group movement mode detection method under complex scenes
US8406498B2 (en) * 1999-01-25 2013-03-26 Amnis Corporation Blood and cell analysis using an imaging flow cytometer
CN103258193A (en) * 2013-05-21 2013-08-21 西南科技大学 Group abnormal behavior identification method based on KOD energy feature

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
茅耀斌 (Mao Yaobin): "Research on Group Motion Analysis in Video Surveillance", China Master's Theses Full-text Database, Information Science and Technology *
邹友辉 (Zou Youhui): "Video Abnormal Event Detection Based on Statistical Graphical Models", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105100683A (en) * 2014-05-04 2015-11-25 深圳市贝尔信智能系统有限公司 Video-based passenger flow statistics method, device and system
CN104320617A (en) * 2014-10-20 2015-01-28 中国科学院自动化研究所 All-weather video monitoring method based on deep learning
CN104320617B (en) * 2014-10-20 2017-09-01 中国科学院自动化研究所 A kind of round-the-clock video frequency monitoring method based on deep learning
CN105096344A (en) * 2015-08-18 2015-11-25 上海交通大学 A group behavior identification method and system based on CD motion features
CN105096344B (en) * 2015-08-18 2018-05-04 上海交通大学 Group behavior recognition methods and system based on CD motion features
US10127597B2 (en) 2015-11-13 2018-11-13 International Business Machines Corporation System and method for identifying true customer on website and providing enhanced website experience
CN108573497A (en) * 2017-03-10 2018-09-25 北京日立北工大信息系统有限公司 Passenger flow statistic device and method
CN108573497B (en) * 2017-03-10 2020-08-21 北京日立北工大信息系统有限公司 Passenger flow statistical device and method
CN109063549A (en) * 2018-06-19 2018-12-21 中国科学院自动化研究所 High-resolution based on deep neural network is taken photo by plane video moving object detection method
CN109063549B (en) * 2018-06-19 2020-10-16 中国科学院自动化研究所 High-resolution aerial video moving target detection method based on deep neural network
CN110874878A (en) * 2018-08-09 2020-03-10 深圳云天励飞技术有限公司 Pedestrian analysis method, device, terminal and storage medium
CN112084925A (en) * 2020-09-03 2020-12-15 厦门利德集团有限公司 Intelligent electric power safety monitoring method and system
CN113012386A (en) * 2020-12-25 2021-06-22 贵州北斗空间信息技术有限公司 Security alarm multi-level linkage rapid pushing method
CN115856980A (en) * 2022-11-21 2023-03-28 中铁科学技术开发有限公司 Marshalling station operator monitoring method and system

Also Published As

Publication number Publication date
CN103679215B (en) 2017-03-01

Similar Documents

Publication Publication Date Title
CN103679215A (en) Video monitoring method based on group behavior analysis driven by big visual big data
CN102156880B (en) Method for detecting abnormal crowd behavior based on improved social force model
CN102324016B (en) Statistical method for high-density crowd flow
CN103077423B (en) To run condition detection method based on crowd's quantity survey of video flowing, local crowd massing situation and crowd
CN101464944B (en) Crowd density analysis method based on statistical characteristics
Cao et al. Abnormal crowd motion analysis
CN101470809B (en) Moving object detection method based on expansion mixed gauss model
CN104680559B (en) The indoor pedestrian tracting method of various visual angles based on motor behavior pattern
Zha et al. Detecting group activities with multi-camera context
CN106023245B (en) Moving target detecting method under the static background measured based on middle intelligence collection similarity
CN103839065A (en) Extraction method for dynamic crowd gathering characteristics
CN107301376B (en) Pedestrian detection method based on deep learning multi-layer stimulation
CN110633643A (en) Abnormal behavior detection method and system for smart community
CN102855465B (en) A kind of tracking of mobile object
CN104820995A (en) Large public place-oriented people stream density monitoring and early warning method
CN103593679A (en) Visual human-hand tracking method based on online machine learning
CN104050685A (en) Moving target detection method based on particle filtering visual attention model
CN104200218A (en) Cross-view-angle action identification method and system based on time sequence information
Hermina et al. A Novel Approach to Detect Social Distancing Among People in College Campus
CN101877135B (en) Moving target detecting method based on background reconstruction
CN104063879B (en) Pedestrian flow estimation method based on flux and shielding coefficient
CN109977796A (en) Trail current detection method and device
CN103530601A (en) Monitoring blind area crowd state deduction method based on Bayesian network
CN105118073A (en) Human body head target identification method based on Xtion camera
CN115294519A (en) Abnormal event detection and early warning method based on lightweight network

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant