CN102708573A - Group movement mode detection method under complex scenes - Google Patents

Group movement mode detection method under complex scenes

Info

Publication number
CN102708573A
CN102708573A CN2012100468328A CN201210046832A
Authority
CN
China
Prior art keywords
cell
vector
motion
feature vector
detection method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012100468328A
Other languages
Chinese (zh)
Other versions
CN102708573B (en)
Inventor
贺文骅
刘志镜
屈鉴铭
王韦桦
唐国良
姚勇
袁通
侯晓慧
陈东辉
周鸿�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201210046832.8A priority Critical patent/CN102708573B/en
Publication of CN102708573A publication Critical patent/CN102708573A/en
Application granted granted Critical
Publication of CN102708573B publication Critical patent/CN102708573B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Abstract

The invention discloses a group movement mode detection method under complex scenes, comprising the steps: A1, acquiring a surveillance video; A2, obtaining the motion region by background differencing; A3, optimizing the motion information within cells; and A4, establishing a global motion pattern from the motion information of the cells. The method uses optical flow to extract motion information under real-time surveillance of complex scenes such as public places with high crowd density, and regionalizes this information in the form of cells. Taking the cells as basic objects, it monitors and analyzes the motion patterns and further performs semantic region determination. The method solves the problem that traditional motion pattern analysis cannot be applied to complex scenes with severe occlusion and numerous targets, thereby providing intelligent support for the monitoring and security of public places.

Description

Group movement mode detection method under complex scenes
Technical field
The invention belongs to the fields of computer vision and intelligent information processing. It relates generally to an intelligent analysis method for video surveillance content, and specifically to a group movement mode detection method under complex scenes.
Background technology
As worldwide concern for public safety keeps growing, surveillance devices in public places are becoming more numerous, and the demand for security and monitoring continues to increase. Consequently, the processing of large-scale surveillance information and the analysis of human motion behaviors and events have attracted increasing attention from scholars at home and abroad, becoming a hot topic in computer vision research in recent years and yielding many results.
A so-called complex scene generally refers to a scene containing many objects with a scattered, irregular distribution, as opposed to scenes with simple backgrounds. Compared with traditional motion behavior analysis, group motion analysis in complex scenes presents many new challenges. The scene is relatively complex, targets are relatively small and weak, and occlusion is extremely severe, so the target detection and target tracking of classic methods are difficult to realize, and the subsequent motion behavior analysis naturally cannot be carried out. Multi-target motion analysis therefore divides into two directions: individual-based and whole-based. The former is suitable for situations with a small number of targets, where improved traditional algorithms can detect and track each target and the resulting information is then used for motion behavior analysis and modeling; the latter is suitable for high-density group scenes. The present invention belongs to the latter case: instead of studying each target, it takes the whole group as the research object and analyzes it as a whole.
Regarding the analysis of group motion patterns, scholars at home and abroad have done some work, but no system mature enough for practical use exists yet. In general, a group motion pattern refers to a set of motion flows with similar motion features, reflecting the characteristics and a certain form of group motion. It commonly takes the form of motion paths or motion regions carrying motion information, such as colored curves, where the colors represent different direction information and the curves represent the paths of motion. Motion pattern detection basically follows three steps: 1) feature detection; 2) extraction of the features' motion information; 3) recognition and modeling of motion patterns/behaviors/events. In many cases, after motion pattern detection is complete, detection of abnormal behaviors/events can be further realized. For feature detection, common methods include corner detection, SIFT, and KLT key point detection; for motion information extraction, common methods are optical flow and spatio-temporal gradient methods. Depending on the feature detection mode and the content of the extracted motion information, the modeling approaches for motion behavior also differ; common ones include maximum entropy models, direction histogram models, hidden Markov models, motion energy models, and topic models.
Although some methods have been proposed, problems remain in group analysis. On the one hand, existing methods lean heavily toward theory and have difficulty achieving the expected results in truly complex scenes; on the other hand, their computational load is large and their computing cost very high. The present invention addresses these problems: aimed at truly complex scenes, it improves the practicality of the algorithm while optimizing its detection efficiency.
Summary of the invention
The present invention divides the video area into cells and extracts motion information with the Lucas-Kanade (LK) optical flow method. Then, taking the cell as the unit, it performs statistical analysis of the motion information, modeling the motion pattern of each cell with direction and velocity histograms. According to the in/out relationships of the motion flow between adjacent cells, it optimizes and rectifies the cell motion patterns and learns the semantics of the cells; finally, all cell motion patterns are combined to generate the global motion pattern and the global semantic regions.
The technical scheme of the present invention is as follows:
A group movement mode detection method under complex scenes comprises the steps A1-A4:
A1, obtaining a surveillance video;
A2, obtaining the motion region by background differencing. First, the Harris corner detection method is used to obtain the points in frame t of the video sequence that are easy to track, P_t = {p_i = (x_i, y_i) | 0 ≤ i ≤ N_featurepoints}, where N_featurepoints is the upper-limit threshold on the number of points set for the Harris corner detector. Taking these points as feature points, the LK optical flow method is used to track them, and from the results the feature vectors of frame t are obtained, V_t = {v_i = (x_i, y_i, u_i, v_i, a_i, m_i) | 0 ≤ i ≤ N_featurepoints}, where (u_i, v_i) is the motion vector, a_i is its orientation angle, and m_i is the magnitude of the motion vector. Thresholds M_max and M_min are then set on the motion vector magnitude, and all vectors with m_i ≤ M_min or m_i ≥ M_max are judged to be noise and removed. The feature vectors obtained by tracking all frames of the video form a set, and the Gaussian ART method is adopted to further reduce the number of feature vectors in this set, so as to reduce the computational load. The video area is divided into M × N equal-sized cells, so that each feature vector in the set belongs to one of these cells; the feature vectors in each cell are counted and their direction histogram H_i,j is established;
A3, optimizing the motion information in the cells. Let h_max be the maximum value in histogram H_i,j, i.e. the feature vector count of the most populated direction. Set λh_max (0 < λ < 1) as a threshold: if the feature vector count of some direction is greater than this threshold, that direction is considered a dominant motion direction of the corresponding cell. For each cell, count, over its 8 neighboring cells, the number of feature vectors belonging to dominant motion directions that point toward this cell; their sum n_in is the inflow vector count of the cell. The sum of the feature vector counts of the cell's own dominant motion directions is denoted n_out, the outflow vector count of the cell. Comparing a cell's inflow and outflow vector counts: if n_out >> n_in, i.e. the outflow vector count is far greater than the inflow vector count, the cell is judged to be of "outlet" type; if n_out << n_in, i.e. the inflow vector count is far greater than the outflow vector count, the cell is judged to be of "inlet" type; otherwise the cell is judged to be of "path" type;
A4, establishing the global motion pattern from the motion information of the cells: adjacent "outlet"-type cells are merged to form an "outlet"; adjacent "inlet"-type cells are merged to form an "inlet"; starting from an "outlet", adjacent "path"-type cells are connected in sequence according to the cells' dominant motion directions to form a "path"; one "outlet", one "inlet", and the shortest "path" that can connect them form one motion pattern, and the cells comprised by the different motion patterns are represented with different colors.
In the described method, the optimization of motion information in the cells in step A3 comprises the following steps:
Step S1: obtaining the dominant motion direction of each cell from its direction histogram;
Step S2: counting the motion flow in/out situation of the cell according to the dominant motion flow directions of its adjacent cells;
Step S3: judging the semantic type of each cell according to the motion flow relationships between cells.
In the described method, the establishment of the global motion pattern in step A4 comprises the following steps:
Step S1: merging cells according to their semantic types to form semantic regions;
Step S2: drawing a visual representation of the different types of regions and of the paths between regions, according to the association relationships of adjacent cells and of the semantic regions.
The present invention uses the optical flow method to extract motion information from complex scenes under real surveillance, such as public places with high crowd density, and regionalizes this information in the form of cells. Taking the cell as the basic object, the invention monitors and analyzes motion patterns and further performs semantic region determination. It solves the problem that traditional motion pattern analysis methods cannot be applied to complex scenes with severe occlusion and numerous targets, providing intelligent support for the monitoring and security of public places.
Description of drawings
Fig. 1 is a schematic diagram of the operation flow of the present invention;
Fig. 2 is a screenshot of the original video;
Fig. 3 shows the extracted feature points;
Fig. 4 shows the instantaneous feature vectors;
Fig. 5 shows the feature vector field;
Fig. 6 shows the division of the video area into cells;
In Fig. 7, A, B, C and D represent the 4 detected motion patterns respectively;
Embodiment
The present invention is described in detail below in conjunction with specific embodiments.
Embodiment 1:
The present invention is a group movement mode detection method under complex environments, aimed primarily at public places with relatively dense crowds, such as shopping malls and stations. It uses computer vision techniques to analyze and detect the crowd motion patterns of specific scenes, providing support for subsequent anomaly detection, scene understanding, or manual monitoring.
Fig. 1 is the method flow diagram of the present invention. The group movement mode detection method under complex scenes comprises the steps A1-A4:
A1, obtaining a surveillance video; the size of this video is 480 × 360 pixels, and Fig. 2 is an original screenshot of this video;
A2, obtaining the motion region by background differencing; within the motion region, the Harris corner detection algorithm is used to extract feature points, and the LK optical flow method is used to track these feature points and obtain feature vectors.
The extraction of motion information is the basic premise of the subsequent work, but under complex environments the extraction of group motion information runs into many restrictions in traditional methods, owing to problems such as small, weak targets and occlusion, and may even be infeasible. The present invention does not model the background, because under a complex group environment, modeling or otherwise processing the background would be very costly, and the later steps can denoise the motion information anyway; therefore, even without background processing, the final result is not affected.
First, the Harris corner detection method is used to obtain the feature points in frame t of the video sequence that are easy to track; the white circles in Fig. 3 are the feature points detected in this frame. P_t = {p_i = (x_i, y_i) | 0 ≤ i ≤ N_featurepoints}, where N_featurepoints is the upper-limit threshold on the number of points we set for the Harris corner detector. Taking these points as feature points, the LK optical flow method is used to track them, and from the results the feature vectors of frame t are obtained, V_t = {v_i = (x_i, y_i, u_i, v_i, a_i, m_i) | 0 ≤ i ≤ N_featurepoints}, where (u_i, v_i) is the motion vector, a_i is its orientation angle, and m_i is the magnitude of the motion vector. We then set thresholds M_max and M_min on the motion vector magnitude, and all vectors with m_i ≤ M_min or m_i ≥ M_max are judged to be noise and removed. Fig. 4 is the instantaneous feature vector map of a certain frame of the video; the white arrows are the motion feature vectors of the corresponding feature points at that moment. Even so, accumulated over time, the total number of feature vectors of a video sequence is still very large and would make the later computation too costly, so we adopt the Gaussian ART method to further reduce the number of feature vectors and thus the computational load. Fig. 5 is the vector field formed by all feature vectors of this video segment. The video area is divided into 24 × 18 cells, each of size 20 × 20 pixels, and each cell records the feature vectors it contains from the beginning of the video up to moment t. For example, for the cell cell_i,j in row i and column j, the feature vectors contained in the first t frames form V_t,i,j, and its direction histogram model H_t,i,j is established, counting the numbers of feature vectors in the 8 directions "up, upper-right, right, lower-right, down, lower-left, left, upper-left".
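The per-cell binning and 8-direction histogram of step A2 can be sketched in a few lines. This is a minimal illustration under our own assumptions, not the patented implementation: the synthetic `vecs` list stands in for the output of the Harris/LK tracking stage, and names such as `build_histograms` and `direction_bin` are ours.

```python
import math

# 8 direction bins of 45 degrees each; bin 0 is centered on the +x direction.
DIRS = 8

def direction_bin(u, v):
    """Quantize a motion vector (u, v) into one of the 8 direction bins."""
    angle = math.degrees(math.atan2(v, u)) % 360.0
    return int(((angle + 22.5) % 360.0) // 45.0)

def build_histograms(vectors, cell_w, cell_h, n_cols, n_rows, m_min, m_max):
    """vectors: iterable of (x, y, u, v) feature vectors.
    Returns {(col, row): [per-direction counts]}; vectors whose magnitude
    falls outside (m_min, m_max) are discarded as noise, as in step A2."""
    hist = {(c, r): [0] * DIRS for c in range(n_cols) for r in range(n_rows)}
    for x, y, u, v in vectors:
        m = math.hypot(u, v)
        if m <= m_min or m >= m_max:          # noise thresholds M_min / M_max
            continue
        cell = (int(x // cell_w), int(y // cell_h))
        if cell in hist:
            hist[cell][direction_bin(u, v)] += 1
    return hist

# Synthetic stand-ins for LK tracking output: two rightward movers in
# cell (0, 0) and one near-zero (noise) vector.
vecs = [(5, 5, 3.0, 0.0), (10, 8, 2.5, 0.2), (5, 5, 0.01, 0.0)]
h = build_histograms(vecs, cell_w=20, cell_h=20, n_cols=24, n_rows=18,
                     m_min=0.5, m_max=50.0)
```

The 24 × 18 grid of 20 × 20-pixel cells matches the embodiment's 480 × 360 video; the threshold values here are arbitrary.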
A3, the motion information in the cells is further optimized. Let h_max be the maximum value in histogram H_t,i,j, i.e. the feature vector count of the most populated direction. We set λh_max (0 < λ < 1) as a threshold: if the feature vector count of some direction is greater than this threshold, we consider that direction a dominant motion direction of the corresponding cell; in Fig. 6, the arrows show the dominant directions of the corresponding cells. A cell may have one dominant motion direction or several. A dominant motion direction shows where the content of the cell is heading, so we expect the motion flow in the cell to move into the adjacent cell that its dominant direction points to. For example, if the dominant motion direction of cell cell_i,j is "right", then at the next moment the motion flow inside cell_i,j should enter cell cell_i+1,j. For each cell, count, over its 8 neighboring cells, the number of feature vectors belonging to dominant motion directions that point toward this cell; their sum n_in is the inflow vector count of the cell. The sum of the feature vector counts of the cell's own dominant motion directions is denoted n_out. Comparing a cell's inflow and outflow counts: if n_out >> n_in, i.e. the outflow is far greater than the inflow, the cell is judged to be of "outlet" type; if n_out << n_in, i.e. the inflow is far greater than the outflow, the cell is judged to be of "inlet" type; otherwise the cell is judged to be of "path" type.
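The cell classification of step A3 can be sketched as follows, under stated assumptions: per-cell 8-bin direction histograms keyed by (col, row) with bin 0 on the +x axis, `lam` playing the role of λ, and the hypothetical factor `ratio` standing in for the "far greater" (>>) comparison. All function names are ours, not the patent's.

```python
# Grid offsets for the 8 direction bins (45 degrees each, bin 0 = +x).
OFFSETS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def dominant_dirs(h, lam=0.6):
    """Indices of directions whose count exceeds lam * max(h); may be several."""
    hmax = max(h)
    return [d for d, n in enumerate(h) if hmax > 0 and n > lam * hmax]

def classify(hist, lam=0.6, ratio=3.0):
    """Label every cell 'outlet', 'inlet' or 'path' by comparing n_out and n_in."""
    labels = {}
    for cell, h in hist.items():
        doms = dominant_dirs(h, lam)
        n_out = sum(h[d] for d in doms)       # vectors along the cell's dominant dirs
        n_in = 0
        for d, (dx, dy) in enumerate(OFFSETS):
            nb = (cell[0] - dx, cell[1] - dy)  # neighbor whose direction d points at `cell`
            if nb in hist and d in dominant_dirs(hist[nb], lam):
                n_in += hist[nb][d]
        if n_out > ratio * n_in:
            labels[cell] = "outlet"
        elif n_in > ratio * n_out:
            labels[cell] = "inlet"
        else:
            labels[cell] = "path"
    return labels

# Toy example: cell (0, 0) streams rightward into the otherwise still cell (1, 0).
hist = {(0, 0): [10, 0, 0, 0, 0, 0, 0, 0], (1, 0): [0] * 8}
labels = classify(hist)
```

A cell that only emits flow is labeled "outlet" and a cell that only receives it "inlet", matching the semantics described above.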
A4, the global motion pattern is established from the motion information of the cells. Adjacent "outlet"-type cells are merged to form an "outlet"; adjacent "inlet"-type cells are merged to form an "inlet"; starting from an "outlet", adjacent "path"-type cells are connected in sequence according to the cells' dominant motion directions, forming a "path"; one "outlet", one "inlet", and the shortest "path" that can connect them represent one motion pattern. The 4 small images of Fig. 7 show the 4 detected motion patterns respectively.
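Merging adjacent same-type cells into regions, as in step A4, amounts to finding connected components over the cell grid. A minimal sketch under our assumptions (8-neighborhood adjacency; `merge_regions` is our name), applied to a toy label map:

```python
from collections import deque

def merge_regions(labels, kind):
    """Group adjacent cells sharing the semantic type `kind` into regions
    (connected components over the 8-neighborhood)."""
    seeds = {c for c, k in labels.items() if k == kind}
    regions, seen = [], set()
    for start in sorted(seeds):
        if start in seen:
            continue
        region, queue = [], deque([start])
        seen.add(start)
        while queue:                      # breadth-first flood fill
            c = queue.popleft()
            region.append(c)
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (c[0] + dx, c[1] + dy)
                    if nb in seeds and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        regions.append(sorted(region))
    return regions

# Two outlet clusters separated by a path cell at (2, 0).
labels = {(0, 0): "outlet", (1, 0): "outlet", (3, 0): "outlet", (2, 0): "path"}
outlets = merge_regions(labels, "outlet")
```

The same routine serves for "inlet" regions; connecting "path" cells would additionally follow each cell's dominant motion direction rather than plain adjacency.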
Embodiment 2:
The optimization of motion information in the cells in step A3 according to the invention comprises the following steps:
Step S1: obtaining the dominant motion direction of each cell from its direction histogram;
Step S2: counting the motion flow in/out situation of the cell according to the dominant motion flow directions of its adjacent cells;
Step S3: judging the semantic type of each cell according to the motion flow relationships between cells.
According to the direction histogram of each cell calculated previously, we take the directions among the 8 whose feature vector count exceeds the set threshold as dominant motion directions. Then, according to the dominant motion directions of its 8 neighboring cells, the quantity of motion flow entering each cell is calculated. For example, in Fig. 2, the cell in row i and column j is denoted cell[i, j]; among its 8 neighboring cells, the ones whose dominant motion directions point toward it are cell[i-1, j-1] and cell[i, j-1], containing n1 and n2 such feature vectors respectively, so the quantity of motion flow entering cell[i, j] is n_in = n1 + n2. Cell[i, j] itself has two dominant motion directions, containing n3 and n4 feature vectors respectively, so the quantity of motion flow leaving the cell is n_out = n3 + n4.
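The worked example reduces to simple sums followed by the "far greater" test. A minimal check with made-up counts (the values of n1..n4 and the factor `ratio` are our assumptions, since the patent leaves ">>" unquantified):

```python
def cell_type(n_out, n_in, ratio=3.0):
    """Apply the in/out comparison; `ratio` is our stand-in for 'far greater'."""
    if n_out > ratio * n_in:
        return "outlet"
    if n_in > ratio * n_out:
        return "inlet"
    return "path"

# Hypothetical counts: two inbound dominant directions (n1, n2) and two
# dominant directions of cell[i, j] itself (n3, n4).
n1, n2, n3, n4 = 12, 7, 5, 4
n_in, n_out = n1 + n2, n3 + n4   # n_in = 19, n_out = 9
kind = cell_type(n_out, n_in)
```

With these counts neither total dominates the other by the chosen factor, so the cell is classified as a "path" cell.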
By comparing each cell's inflow vector count n_in with its outflow vector count n_out: if n_out >> n_in, i.e. the outflow is far greater than the inflow, the cell is judged to be of "outlet" type; if n_out << n_in, i.e. the inflow is far greater than the outflow, the cell is judged to be of "inlet" type; otherwise the cell is judged to be of "path" type.
Embodiment 3:
The establishment of the global motion pattern in step A4 according to the invention comprises the following steps:
Step S1: merging cells according to their semantic types to form semantic regions;
Step S2: drawing a visual representation of the different types of regions and of the paths between regions, according to the association relationships of adjacent cells and of the semantic regions.
From step A3 we have obtained the semantic type of each cell. We merge the adjacent "outlet"-type cells and the adjacent "inlet"-type cells to generate "outlet" regions and "inlet" regions respectively. Then, starting from an "outlet" region, we connect the "path"-type cells to form a "path" region, the connection condition being "the adjacent cell pointed to by the current cell's dominant motion direction". Afterwards, one "inlet" region, one "outlet" region, and all the "path" parts that can connect them are together called a motion pattern. Finally, different motion patterns are expressed in different colors.
It should be understood that those of ordinary skill in the art can make improvements or transformations according to the above description, and all such improvements and transformations shall fall within the protection scope of the appended claims of the present invention.

Claims (3)

1. A group movement mode detection method under complex scenes, characterized in that it comprises the steps A1-A4:
A1, obtaining a surveillance video;
A2, obtaining the motion region by background differencing. First, the Harris corner detection method is used to obtain the points in frame t of the video sequence that are easy to track, P_t = {p_i = (x_i, y_i) | 0 ≤ i ≤ N_featurepoints}, where N_featurepoints is the upper-limit threshold on the number of points set for the Harris corner detector. Taking these points as feature points, the LK optical flow method is used to track them, and from the results the feature vectors of frame t are obtained, V_t = {v_i = (x_i, y_i, u_i, v_i, a_i, m_i) | 0 ≤ i ≤ N_featurepoints}, where (u_i, v_i) is the motion vector, a_i is its orientation angle, and m_i is the magnitude of the motion vector. Thresholds M_max and M_min are then set on the motion vector magnitude, and all vectors with m_i ≤ M_min or m_i ≥ M_max are judged to be noise and removed. The feature vectors obtained by tracking all frames of the video form a set, and the Gaussian ART method is adopted to further reduce the number of feature vectors in this set, so as to reduce the computational load. The video area is divided into M × N equal-sized cells, so that each feature vector in the set belongs to one of these cells; the feature vectors in each cell are counted and their direction histogram H_i,j is established;
A3, optimizing the motion information in the cells. Let h_max be the maximum value in histogram H_i,j, i.e. the feature vector count of the most populated direction. Set λh_max (0 < λ < 1) as a threshold: if the feature vector count of some direction is greater than this threshold, that direction is considered a dominant motion direction of the corresponding cell. For each cell, count, over its 8 neighboring cells, the number of feature vectors belonging to dominant motion directions that point toward this cell; their sum n_in is the inflow vector count of the cell. The sum of the feature vector counts of the cell's own dominant motion directions is denoted n_out, the outflow vector count of the cell. Comparing a cell's inflow and outflow vector counts: if n_out >> n_in, i.e. the outflow vector count is far greater than the inflow vector count, the cell is judged to be of "outlet" type; if n_out << n_in, i.e. the inflow vector count is far greater than the outflow vector count, the cell is judged to be of "inlet" type; otherwise the cell is judged to be of "path" type;
A4, establishing the global motion pattern from the motion information of the cells: adjacent "outlet"-type cells are merged to form an "outlet"; adjacent "inlet"-type cells are merged to form an "inlet"; starting from an "outlet", adjacent "path"-type cells are connected in sequence according to the cells' dominant motion directions to form a "path"; one "outlet", one "inlet", and the shortest "path" that can connect them form one motion pattern, and the cells comprised by the different motion patterns are represented with different colors.
2. The method according to claim 1, characterized in that the optimization of motion information in the cells in step A3 comprises the following steps:
Step S1: obtaining the dominant motion direction of each cell from its direction histogram;
Step S2: counting the motion flow in/out situation of the cell according to the dominant motion flow directions of its adjacent cells;
Step S3: judging the semantic type of each cell according to the motion flow relationships between cells.
3. The method according to claim 1, characterized in that the establishment of the global motion pattern in step A4 comprises the following steps:
Step S1: merging cells according to their semantic types to form semantic regions;
Step S2: drawing a visual representation of the different types of regions and of the paths between regions, according to the association relationships of adjacent cells and of the semantic regions.
CN201210046832.8A 2012-02-28 2012-02-28 Group movement mode detection method under complex scenes Expired - Fee Related CN102708573B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210046832.8A CN102708573B (en) 2012-02-28 2012-02-28 Group movement mode detection method under complex scenes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210046832.8A CN102708573B (en) 2012-02-28 2012-02-28 Group movement mode detection method under complex scenes

Publications (2)

Publication Number Publication Date
CN102708573A true CN102708573A (en) 2012-10-03
CN102708573B CN102708573B (en) 2015-02-04

Family

ID=46901292

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210046832.8A Expired - Fee Related CN102708573B (en) 2012-02-28 2012-02-28 Group movement mode detection method under complex scenes

Country Status (1)

Country Link
CN (1) CN102708573B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679215A (en) * 2013-12-30 2014-03-26 中国科学院自动化研究所 Video monitoring method based on group behavior analysis driven by big visual big data
CN103793920A (en) * 2012-10-26 2014-05-14 杭州海康威视数字技术股份有限公司 Retro-gradation detection method based on video and system thereof
CN104809742A (en) * 2015-04-15 2015-07-29 广西大学 Article safety detection method in complex scene
CN107103614A (en) * 2017-04-12 2017-08-29 合肥工业大学 The dyskinesia detection method encoded based on level independent element
CN113645096A (en) * 2021-08-11 2021-11-12 四川华腾国盛科技有限公司 Building intelligent engineering detection system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006099241A (en) * 2004-09-28 2006-04-13 Ntt Data Corp Abnormality detection apparatus, abnormality detection method and abnormality detection program
CN101751678A (en) * 2009-12-16 2010-06-23 北京智安邦科技有限公司 Method and device for detecting violent crowd movement

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006099241A (en) * 2004-09-28 2006-04-13 Ntt Data Corp Abnormality detection apparatus, abnormality detection method and abnormality detection program
CN101751678A (en) * 2009-12-16 2010-06-23 北京智安邦科技有限公司 Method and device for detecting violent crowd movement

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MIN HU ET AL.: "Learning Motion Patterns in Crowded Scenes Using Motion Flow Field", 《19TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION》 *
SHU WANG ET AL.: "Anomaly Detection in Crowd Scene", 《2010 IEEE 10TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING》 *
童俊艳: "Research on Group Motion Analysis in Video Surveillance" (视频监控中的群体运动分析研究), China Master's Theses Full-text Database (《中国优秀硕士学位论文全文数据库》) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103793920A (en) * 2012-10-26 2014-05-14 杭州海康威视数字技术股份有限公司 Retro-gradation detection method based on video and system thereof
CN103793920B (en) * 2012-10-26 2017-10-13 杭州海康威视数字技术股份有限公司 Retrograde detection method and its system based on video
CN103679215A (en) * 2013-12-30 2014-03-26 中国科学院自动化研究所 Video monitoring method based on group behavior analysis driven by big visual big data
CN103679215B (en) * 2013-12-30 2017-03-01 中国科学院自动化研究所 The video frequency monitoring method of the groupment behavior analysiss that view-based access control model big data drives
CN104809742A (en) * 2015-04-15 2015-07-29 广西大学 Article safety detection method in complex scene
CN107103614A (en) * 2017-04-12 2017-08-29 合肥工业大学 The dyskinesia detection method encoded based on level independent element
CN107103614B (en) * 2017-04-12 2019-10-08 合肥工业大学 Dyskinesia detection method based on level independent element coding
CN113645096A (en) * 2021-08-11 2021-11-12 四川华腾国盛科技有限公司 Building intelligent engineering detection system

Also Published As

Publication number Publication date
CN102708573B (en) 2015-02-04

Similar Documents

Publication Publication Date Title
CN107330372B (en) Analysis method of video-based crowd density and abnormal behavior detection system
CN103839065B (en) Extraction method for dynamic crowd gathering characteristics
CN107748873B (en) A kind of multimodal method for tracking target merging background information
CN101321269B (en) Passenger flow volume detection method based on computer vision
CN102903119B (en) A kind of method for tracking target and device
CN102521565A (en) Garment identification method and system for low-resolution video
CN102270348B (en) Method for tracking deformable hand gesture based on video streaming
Velipasalar et al. Automatic counting of interacting people by using a single uncalibrated camera
CN102867188B (en) Method for detecting seat state in meeting place based on cascade structure
CN102708573B (en) Group movement mode detection method under complex scenes
CN102063613A (en) People counting method and device based on head recognition
CN102890781A (en) Method for identifying wonderful shots as to badminton game video
CN102156880A (en) Method for detecting abnormal crowd behavior based on improved social force model
Cui et al. Abnormal event detection in traffic video surveillance based on local features
CN104732236B (en) A kind of crowd's abnormal behaviour intelligent detecting method based on layered shaping
CN105138982A (en) Crowd abnormity detection and evaluation method based on multi-characteristic cluster and classification
CN106203513A (en) A kind of based on pedestrian's head and shoulder multi-target detection and the statistical method of tracking
CN103489012B (en) Crowd density detecting method and system based on support vector machine
CN104811655A (en) System and method for film concentration
Karpagavalli et al. Estimating the density of the people and counting the number of people in a crowd environment for human safety
CN104268520A (en) Human motion recognition method based on depth movement trail
CN105069816A (en) Method and system for counting inflow and outflow people
CN103049749A (en) Method for re-recognizing human body under grid shielding
Mu et al. Resgait: The real-scene gait dataset
CN103577804A (en) Abnormal human behavior identification method based on SIFT flow and hidden conditional random fields

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150204

Termination date: 20160228