CN106203360A - Group detection algorithm for crowds in dense scenes based on a multistage filtering model - Google Patents

Group detection algorithm for crowds in dense scenes based on a multistage filtering model

Info

Publication number
CN106203360A
CN106203360A CN201610559499.9A CN201610559499A
Authority
CN
China
Prior art keywords
point
tau
image frame
video image
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610559499.9A
Other languages
Chinese (zh)
Inventor
赵倩
邵洁
赵琰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai University of Electric Power
University of Shanghai for Science and Technology
Original Assignee
Shanghai University of Electric Power
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai University of Electric Power filed Critical Shanghai University of Electric Power
Priority to CN201610559499.9A priority Critical patent/CN106203360A/en
Publication of CN106203360A publication Critical patent/CN106203360A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A group detection algorithm for crowds in dense scenes, based on a multistage filtering model, relates to the field of image analysis technology; the technical problem it solves is improving group detection performance. The algorithm extracts KLT feature points from the foreground regions obtained by background subtraction with a Gaussian mixture model and, by analyzing the motion characteristics of the feature points, applies a spatio-temporal nearest-neighbor filter operator, a velocity-direction filter operator, and a motion-correlation filter operator stage by stage, traversing the feature points in all foreground regions to achieve group detection. The algorithm provided by the present invention is applicable to video image analysis.

Description

Group detection algorithm for crowds in dense scenes based on a multistage filtering model
Technical field
The present invention relates to image analysis technology, and in particular to a technique for a group detection algorithm for crowds in dense scenes based on a multistage filtering model.
Background technology
Crowd motion analysis, social-event detection, and abnormal-behavior detection in dense scenes are hot topics in current intelligent-surveillance research, and are also among the important directions of video surveillance and intelligent transportation research. They have significant research value for maintaining public order and improving the science of urban traffic planning.
In a dense crowd scene, the crowd generally exhibits an irregular combination of movements, i.e. disordered motion. This complexity greatly increases the difficulty of research, so few prior studies address it. The traditional line of analysis for dense scenes treats the scene as a set of moving targets, with the aim of extracting target trajectories and recognizing activities. Its methods fall into two classes: those based on individual targets, and those based on the scene.
Methods based on individual targets regard the dense crowd as a large set of individuals, extract each individual's speed and direction from an individual-level analysis, and then detect groups with weighted connectivity graphs or bottom-up multi-level clustering. These algorithms are effective only when the crowd density is low and the pixel resolution of the individual targets is high. In a dense scene with disordered motion, however, occlusion between people is severe and the occlusion relations are unknown, so correctly segmenting the targets is difficult and the resulting group detection is poor.
Scene-based methods mainly include optical flow, dynamic textures, and grid particles. These algorithms perform poorly on dense-crowd scenes with severe occlusion and compound motion patterns; they also require segmenting and training on individual pedestrians, and hence need prior information in advance. Moreover, the neighborhood feature points they use are unstable, so the group detection results need further improvement.
Summary of the invention
In view of the defects of the above prior art, the technical problem to be solved by the present invention is to provide a group detection algorithm for crowds in dense scenes based on a multistage filtering model that handles scenes with severe occlusion and compound motion patterns well, requires neither segmentation nor sample training of individual pedestrians, needs no prior information, keeps the neighborhood feature points stable, and achieves good group detection results.
To solve the above technical problem, the group detection algorithm for crowds in dense scenes based on a multistage filtering model provided by the present invention is characterized by the following specific steps:
S1) For the target video, first perform background modeling on each image frame with a Gaussian mixture model, then separate the foreground and background of each frame by background subtraction, thereby obtaining the foreground region of each frame;
S2) For the target video, extract feature points from the foreground region of each frame with the KLT tracking algorithm, thereby obtaining the feature-point coordinate set of each frame;
S3) For the feature-point coordinate set of each frame, take each feature point in the set in turn as the target point and obtain the initial same-group point set of each target point by the method of steps S3.1 to S3.9; the initial same-group point sets of all target points together constitute the initial same-group point-set sequence of the frame;
The steps for obtaining the initial same-group point set of a target point are as follows:
S3.1) Take a feature point from ζ_t as the target point i, and let N_i^t denote the nearest-neighbor set of target point i, where ζ_t is the feature-point coordinate set of the t-th image frame of the target video;
S3.2) Compute the Gaussian weight between target point i and each feature point in ζ_t:

$$w_{i,j} = \exp\!\left(-\frac{\operatorname{dist}(i,j)}{r}\right) = \exp\!\left(-\frac{\sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}}{r}\right)$$

where w_{i,j} is the Gaussian weight between target point i and feature point j, j ∈ ζ_t and j ≠ i, (x_i, y_i) is the coordinate of target point i, (x_j, y_j) is the coordinate of feature point j, and r is set to 20;
S3.3) Apply spatio-temporal nearest-neighbor filtering to each feature point in ζ_t other than target point i, as follows:
For any feature point j ∈ ζ_t with j ≠ i, if w_{i,j} ≥ 0.5 w_max, include feature point j in N_i^t, where w_max is the maximum of the Gaussian weights between target point i and the other feature points in ζ_t;
S3.4) Let $\hat N_i = \bigcap_{\tau=t}^{t+d} N_i^\tau$ be the intersection of the neighborhoods of target point i over frames t to t+d, where d = 3;
S3.5) Compute the velocity-angle difference between each feature point in $\hat N_i$ and target point i over frames t to t+d:

$$\theta_{i,j} = \frac{1}{d+1} \sum_{\tau=t}^{t+d} \left| \theta_i^\tau - \theta_j^\tau \right|$$

where θ_{i,j} is the velocity-angle difference between feature point j and target point i over frames t to t+d, d = 3; θ_j^τ, the velocity direction of feature point j in the τ-th image frame, is obtained piecewise from the displacement (x_j^{τ+1} − x_j^τ, y_j^{τ+1} − y_j^τ) between the coordinates of feature point j in the τ-th and (τ+1)-th frames via the arctangent, the quadrant of the displacement determining the correction term so that the direction lies in [0°, 360°), i.e. the four-quadrant arctangent expressed in degrees; θ_i^τ, the velocity direction of target point i in the τ-th image frame, is defined in the same way from the coordinates of target point i in the τ-th and (τ+1)-th frames;
S3.6) For any feature point j in $\hat N_i$, if θ_{i,j} ≥ 45° and (360° − θ_{i,j}) ≥ 45°, i.e. if the average direction difference, taken with wraparound, is at least 45°, remove feature point j from $\hat N_i$;
S3.7) Compute the motion correlation between each feature point in $\hat N_i$ and target point i over frames t to t+d:

$$C_{i,j} = \frac{1}{d+1} \sum_{\tau=t}^{t+d} \frac{\mathbf{v}_\tau^i \cdot \mathbf{v}_\tau^j}{\lVert \mathbf{v}_\tau^i \rVert \, \lVert \mathbf{v}_\tau^j \rVert}$$

$$\mathbf{v}_\tau^i = (v_{x_i}^\tau, v_{y_i}^\tau) = (x_i^{\tau+1} - x_i^\tau,\; y_i^{\tau+1} - y_i^\tau), \qquad \lVert \mathbf{v}_\tau^i \rVert = \sqrt{(v_{x_i}^\tau)^2 + (v_{y_i}^\tau)^2}$$

$$\mathbf{v}_\tau^j = (v_{x_j}^\tau, v_{y_j}^\tau) = (x_j^{\tau+1} - x_j^\tau,\; y_j^{\tau+1} - y_j^\tau), \qquad \lVert \mathbf{v}_\tau^j \rVert = \sqrt{(v_{x_j}^\tau)^2 + (v_{y_j}^\tau)^2}$$

where C_{i,j} is the motion correlation between feature point j and target point i over frames t to t+d, d = 3; $\mathbf{v}_\tau^i$ is the velocity of target point i in the τ-th image frame and $\mathbf{v}_\tau^j$ is the velocity of feature point j in the τ-th image frame; (x_i^τ, y_i^τ) and (x_i^{τ+1}, y_i^{τ+1}) are the coordinates of target point i in the τ-th and (τ+1)-th frames of the target video, and (x_j^τ, y_j^τ) and (x_j^{τ+1}, y_j^{τ+1}) are the coordinates of feature point j in the τ-th and (τ+1)-th frames;
S3.8) Set C_th = 0.6; for any feature point j in $\hat N_i$, if C_{i,j} ≤ C_th, remove feature point j from $\hat N_i$;
S3.9) Take the remaining $\hat N_i$ as the initial same-group point set of target point i.
S4) For the initial same-group point-set sequence of each frame, sort the initial same-group point sets in descending order of the number of feature points they contain;
S5) For the initial same-group point-set sequence of each frame, label each initial same-group point set by the method of steps S5.1 to S5.3:
S5.1) Set K = 1 and L = 1, and label all feature points in the K-th initial same-group point set with L;
S5.2) Set K = K + 1;
If none of the feature points in the K-th initial same-group point set has been labeled, set L = L + 1 and label all feature points in the K-th set with the new L;
If at least one feature point in the K-th initial same-group point set already carries a label L, label all feature points in the K-th set with that same L;
S5.3) Repeat step S5.2 until every initial same-group point set in the sequence has been labeled;
S6) For the feature-point coordinate set of each frame, classify feature points with the same label into the same group; in the frame, draw feature points with the same label in the same color and feature points with different labels in different colors, thereby achieving group detection.
The group detection algorithm for crowds in dense scenes based on a multistage filtering model provided by the present invention has the following beneficial effects:
1) Estimating the crowd's motion state from the states of feature points on the moving targets avoids the severe occlusion in dense crowds and handles scenes with compound motion patterns well;
2) It requires neither segmentation nor sample training of individual pedestrians, and no prior information of any kind;
3) In the spatio-temporal nearest-neighbor filtering, the number of nearest-neighbor feature points adjusts automatically according to the shortest distance to a neighboring feature point, and the neighborhood is taken as the intersection obtained over consecutive video frames, ensuring the stability of the neighborhood feature points;
4) The algorithm considers not only the correlation of group motion but also the consistency of velocity direction, so its group detection performance is good.
Brief description of the drawings
Fig. 1 is a schematic diagram of the group detection algorithm for crowds in dense scenes based on a multistage filtering model according to an embodiment of the present invention.
Detailed description of the invention
The embodiments of the present invention are described in further detail below with reference to the accompanying drawing; the embodiments are not intended to limit the present invention, and all analogous structures and similar variations adopting the present invention shall fall within its scope of protection. The Chinese serial comma (、) in this document denotes an "and" relation.
As shown in Fig. 1, the group detection algorithm for crowds in dense scenes based on a multistage filtering model provided by the embodiment of the present invention is characterized by the following specific steps:
S1) For the target video, first perform background modeling on each image frame with a Gaussian mixture model, then separate the foreground and background of each frame by background subtraction, thereby obtaining the foreground region of each frame;
S2) For the target video, extract feature points from the foreground region of each frame with the KLT tracking algorithm, thereby obtaining the feature-point coordinate set of each frame;
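The two preprocessing steps above can be illustrated with a minimal sketch. The running-average background below is a deliberately simplified stand-in for the per-pixel Gaussian mixture model of step S1; the function names and the toy scene are illustrative assumptions, not from the patent:

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running average: a simplified stand-in for the
    per-pixel Gaussian mixture background model of step S1."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30.0):
    """Mark pixels that deviate from the background by more than thresh."""
    return [[abs(f - b) > thresh for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

# Toy 4x4 grayscale scene: flat background of 10s, a bright 2x2 block appears.
bg = [[10.0] * 4 for _ in range(4)]
frame = [row[:] for row in bg]
for y in (1, 2):
    for x in (1, 2):
        frame[y][x] = 200.0
mask = foreground_mask(bg, frame)
print(mask[1][1], mask[0][0])  # True False
```

In practice, S1 and S2 map naturally onto OpenCV's `cv2.createBackgroundSubtractorMOG2` for the mixture-model background and the `cv2.goodFeaturesToTrack` / `cv2.calcOpticalFlowPyrLK` pair for KLT feature extraction and tracking.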
S3) For the feature-point coordinate set of each frame, take each feature point in the set in turn as the target point and obtain the initial same-group point set of each target point by the method of steps S3.1 to S3.9; the initial same-group point sets of all target points together constitute the initial same-group point-set sequence of the frame;
The steps for obtaining the initial same-group point set of a target point are as follows:
S3.1) Take a feature point from ζ_t as the target point i, and let N_i^t denote the nearest-neighbor set of target point i, where ζ_t is the feature-point coordinate set of the t-th image frame of the target video;
S3.2) Compute the Gaussian weight between target point i and each feature point in ζ_t:

$$w_{i,j} = \exp\!\left(-\frac{\operatorname{dist}(i,j)}{r}\right) = \exp\!\left(-\frac{\sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}}{r}\right)$$

where w_{i,j} is the Gaussian weight between target point i and feature point j, j ∈ ζ_t and j ≠ i, (x_i, y_i) is the coordinate of target point i, (x_j, y_j) is the coordinate of feature point j, and r is a constant with 20 as its typical value; w_{i,j} decreases as the distance dist(i, j) increases and increases as it decreases, which also means that the denser the crowd, the larger w_{i,j};
S3.3) Apply spatio-temporal nearest-neighbor filtering to each feature point in ζ_t other than target point i, as follows:
For any feature point j ∈ ζ_t with j ≠ i, if w_{i,j} ≥ 0.5 w_max, include feature point j in N_i^t, where w_max is the maximum of the Gaussian weights between target point i and the other feature points in ζ_t;
In the resulting N_i^t, the number of feature points adjusts automatically according to w_max, the Gaussian weight between target point i and the feature point nearest to it;
S3.4) Let $\hat N_i = \bigcap_{\tau=t}^{t+d} N_i^\tau$ be the intersection of the neighborhoods of target point i over frames t to t+d, where d = 3; t → t+d refers to the t-th through the (t+d)-th image frames of the target video;
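Steps S3.2 through S3.4 (Gaussian weights, the 0.5·w_max neighbor cut, and the temporal intersection of neighborhoods) can be sketched as follows; the `frames` structure mapping a frame index to a dict of point id → (x, y) is an illustrative assumption, not from the patent:

```python
import math

def gauss_weight(p, q, r=20.0):
    # Step S3.2: w_ij = exp(-dist(i, j) / r), with r = 20 as in the patent.
    return math.exp(-math.hypot(p[0] - q[0], p[1] - q[1]) / r)

def neighbor_set(i, pts, r=20.0):
    # Step S3.3: keep every j whose weight reaches half the maximum weight.
    weights = {j: gauss_weight(pts[i], pts[j], r) for j in pts if j != i}
    w_max = max(weights.values())
    return {j for j, w in weights.items() if w >= 0.5 * w_max}

def temporal_neighbors(i, frames, t, d=3, r=20.0):
    # Step S3.4: intersect the neighbor sets of i over frames t .. t+d.
    result = neighbor_set(i, frames[t], r)
    for tau in range(t + 1, t + d + 1):
        result &= neighbor_set(i, frames[tau], r)
    return result

# Toy example: point 1 stays near point 0 in every frame, point 2 drifts away.
frames = {tau: {0: (0.0, 0.0),
                1: (5.0, 0.0),
                2: (30.0 + 40.0 * tau, 0.0)} for tau in range(4)}
print(temporal_neighbors(0, frames, t=0))  # {1}
```

Because the cut is relative to w_max, the neighbor count adapts to local density exactly as the remark above describes: an isolated point with one distant neighbor still retains that neighbor.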
S3.5) Compute the velocity-angle difference between each feature point in $\hat N_i$ and target point i over frames t to t+d:

$$\theta_{i,j} = \frac{1}{d+1} \sum_{\tau=t}^{t+d} \left| \theta_i^\tau - \theta_j^\tau \right|$$

where θ_{i,j} is the velocity-angle difference between feature point j and target point i over frames t to t+d, d = 3; θ_j^τ, the velocity direction of feature point j in the τ-th image frame, is obtained piecewise from the displacement (x_j^{τ+1} − x_j^τ, y_j^{τ+1} − y_j^τ) between the coordinates of feature point j in the τ-th and (τ+1)-th frames via the arctangent, the quadrant of the displacement determining the correction term so that the direction lies in [0°, 360°), i.e. the four-quadrant arctangent expressed in degrees; θ_i^τ, the velocity direction of target point i in the τ-th image frame, is defined in the same way from the coordinates of target point i in the τ-th and (τ+1)-th frames;
S3.6) For any feature point j in $\hat N_i$, if θ_{i,j} ≥ 45° and (360° − θ_{i,j}) ≥ 45°, i.e. if the average direction difference, taken with wraparound, is at least 45°, remove feature point j from $\hat N_i$;
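Reading the piecewise direction definition of S3.5 as the four-quadrant arctangent mapped into [0°, 360°), steps S3.5 and S3.6 can be sketched as follows; the track representation (a list of (x, y) positions per point) and the helper names are assumptions for illustration:

```python
import math

def direction_deg(p, q):
    # Velocity direction of the displacement p -> q, mapped into [0, 360).
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0])) % 360.0

def angle_diff(track_i, track_j, d=3):
    # Step S3.5: theta_ij = 1/(d+1) * sum over tau of |theta_i - theta_j|.
    total = 0.0
    for tau in range(d + 1):
        ti = direction_deg(track_i[tau], track_i[tau + 1])
        tj = direction_deg(track_j[tau], track_j[tau + 1])
        total += abs(ti - tj)
    return total / (d + 1)

def passes_direction_filter(track_i, track_j, d=3, max_deg=45.0):
    # Step S3.6: reject j unless the average direction difference,
    # taken with wraparound, stays below 45 degrees.
    theta = angle_diff(track_i, track_j, d)
    return theta < max_deg or (360.0 - theta) < max_deg

right = [(float(k), 0.0) for k in range(5)]        # moving along +x
also_right = [(float(k), 10.0) for k in range(5)]  # parallel track, +x
up = [(0.0, float(k)) for k in range(5)]           # moving along +y
print(passes_direction_filter(right, also_right))  # True
print(passes_direction_filter(right, up))          # False
```

The `360 - theta` branch handles the wraparound case where one direction sits just below 360° and the other just above 0°, which the raw absolute difference would otherwise report as nearly 360°.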
S3.7) Compute the motion correlation between each feature point in $\hat N_i$ and target point i over frames t to t+d:

$$C_{i,j} = \frac{1}{d+1} \sum_{\tau=t}^{t+d} \frac{\mathbf{v}_\tau^i \cdot \mathbf{v}_\tau^j}{\lVert \mathbf{v}_\tau^i \rVert \, \lVert \mathbf{v}_\tau^j \rVert}$$

$$\mathbf{v}_\tau^i = (v_{x_i}^\tau, v_{y_i}^\tau) = (x_i^{\tau+1} - x_i^\tau,\; y_i^{\tau+1} - y_i^\tau), \qquad \lVert \mathbf{v}_\tau^i \rVert = \sqrt{(v_{x_i}^\tau)^2 + (v_{y_i}^\tau)^2}$$

$$\mathbf{v}_\tau^j = (v_{x_j}^\tau, v_{y_j}^\tau) = (x_j^{\tau+1} - x_j^\tau,\; y_j^{\tau+1} - y_j^\tau), \qquad \lVert \mathbf{v}_\tau^j \rVert = \sqrt{(v_{x_j}^\tau)^2 + (v_{y_j}^\tau)^2}$$

where C_{i,j} is the motion correlation between feature point j and target point i over frames t to t+d, d = 3; $\mathbf{v}_\tau^i$ is the velocity of target point i in the τ-th image frame and $\mathbf{v}_\tau^j$ is the velocity of feature point j in the τ-th image frame; (x_i^τ, y_i^τ) and (x_i^{τ+1}, y_i^{τ+1}) are the coordinates of target point i in the τ-th and (τ+1)-th frames of the target video, and (x_j^τ, y_j^τ) and (x_j^{τ+1}, y_j^{τ+1}) are the coordinates of feature point j in the τ-th and (τ+1)-th frames;
S3.8) Set C_th = 0.6; for any feature point j in $\hat N_i$, if C_{i,j} ≤ C_th, remove feature point j from $\hat N_i$;
S3.9) Take the remaining $\hat N_i$ as the initial same-group point set of target point i; all feature points in it belong to the same group as target point i.
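The motion correlation of S3.7 is the average cosine similarity of the per-frame velocity vectors; with the threshold of S3.8 it can be sketched as follows (same illustrative track representation as above; a stationary point would need a zero-velocity guard in a real implementation):

```python
import math

def motion_correlation(track_i, track_j, d=3):
    """Step S3.7: C_ij = 1/(d+1) * sum over tau of cos(v_tau^i, v_tau^j)."""
    total = 0.0
    for tau in range(d + 1):
        vi = (track_i[tau + 1][0] - track_i[tau][0],
              track_i[tau + 1][1] - track_i[tau][1])
        vj = (track_j[tau + 1][0] - track_j[tau][0],
              track_j[tau + 1][1] - track_j[tau][1])
        dot = vi[0] * vj[0] + vi[1] * vj[1]
        norm = math.hypot(*vi) * math.hypot(*vj)
        total += dot / norm
    return total / (d + 1)

def passes_correlation_filter(track_i, track_j, d=3, c_th=0.6):
    # Step S3.8: keep j only if C_ij exceeds C_th (0.6 in the patent).
    return motion_correlation(track_i, track_j, d) > c_th

same_way = [(float(k), 0.0) for k in range(5)]
with_it = [(2.0 * k, 1.0) for k in range(5)]    # same direction, faster
against = [(-float(k), 0.0) for k in range(5)]  # opposite direction
print(passes_correlation_filter(same_way, with_it))  # True
print(passes_correlation_filter(same_way, against))  # False
```

Note that the cosine is insensitive to speed, which is why the patent pairs this filter with the separate velocity-direction filter of S3.5/S3.6 rather than relying on either alone.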
S4) For the initial same-group point-set sequence of each frame, sort the initial same-group point sets in descending order of the number of feature points they contain;
S5) For the initial same-group point-set sequence of each frame, label each initial same-group point set by the method of steps S5.1 to S5.3:
S5.1) Set K = 1 and L = 1, and label all feature points in the K-th initial same-group point set with L;
S5.2) Set K = K + 1;
If none of the feature points in the K-th initial same-group point set has been labeled, set L = L + 1 and label all feature points in the K-th set with the new L;
If at least one feature point in the K-th initial same-group point set already carries a label L, label all feature points in the K-th set with that same L;
S5.3) Repeat step S5.2 until every initial same-group point set in the sequence has been labeled;
S6) For the feature-point coordinate set of each frame, classify feature points with the same label into the same group; in the frame, draw feature points with the same label in the same color and feature points with different labels in different colors, thereby achieving group detection.
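The sorting-and-labeling pass of S4 and S5 can be sketched as follows, where each initial same-group point set is given as a set of point ids and sets sharing a point end up with the same label (the data layout is an illustrative assumption):

```python
def merge_groups(initial_sets):
    """Steps S4-S5: sort the initial same-group point sets by size
    (descending), then propagate labels: a set reuses the label of any
    already-labeled member, otherwise it opens a new label."""
    ordered = sorted(initial_sets, key=len, reverse=True)  # step S4
    labels = {}  # point id -> group label
    next_label = 1
    for group in ordered:  # steps S5.1 - S5.3
        existing = [labels[p] for p in group if p in labels]
        label = existing[0] if existing else next_label
        if not existing:
            next_label += 1
        for p in group:
            labels[p] = label
    return labels

# Two overlapping sets form one group; a disjoint set forms another.
sets = [{1, 2, 3}, {3, 4}, {7, 8}]
labels = merge_groups(sets)
print(labels[1] == labels[4], labels[1] == labels[7])  # True False
```

Step S6 is then pure visualization: each label is assigned its own color when the feature points are drawn on the frame.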

Claims (1)

1. A group detection algorithm for crowds in dense scenes based on a multistage filtering model, characterized by the following specific steps:
S1) For the target video, first perform background modeling on each image frame with a Gaussian mixture model, then separate the foreground and background of each frame by background subtraction, thereby obtaining the foreground region of each frame;
S2) For the target video, extract feature points from the foreground region of each frame with the KLT tracking algorithm, thereby obtaining the feature-point coordinate set of each frame;
S3) For the feature-point coordinate set of each frame, take each feature point in the set in turn as the target point and obtain the initial same-group point set of each target point by the method of steps S3.1 to S3.9; the initial same-group point sets of all target points together constitute the initial same-group point-set sequence of the frame;
The steps for obtaining the initial same-group point set of a target point are as follows:
S3.1) Take a feature point from ζ_t as the target point i, and let N_i^t denote the nearest-neighbor set of target point i, where ζ_t is the feature-point coordinate set of the t-th image frame of the target video;
S3.2) Compute the Gaussian weight between target point i and each feature point in ζ_t:

$$w_{i,j} = \exp\!\left(-\frac{\operatorname{dist}(i,j)}{r}\right) = \exp\!\left(-\frac{\sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}}{r}\right)$$

where w_{i,j} is the Gaussian weight between target point i and feature point j, j ∈ ζ_t and j ≠ i, (x_i, y_i) is the coordinate of target point i, (x_j, y_j) is the coordinate of feature point j, and r is set to 20;
S3.3) Apply spatio-temporal nearest-neighbor filtering to each feature point in ζ_t other than target point i, as follows:
For any feature point j ∈ ζ_t with j ≠ i, if w_{i,j} ≥ 0.5 w_max, include feature point j in N_i^t, where w_max is the maximum of the Gaussian weights between target point i and the other feature points in ζ_t;
S3.4) Let $\hat N_i = \bigcap_{\tau=t}^{t+d} N_i^\tau$ be the intersection of the neighborhoods of target point i over frames t to t+d, where d = 3;
S3.5) Compute the velocity-angle difference between each feature point in $\hat N_i$ and target point i over frames t to t+d:

$$\theta_{i,j} = \frac{1}{d+1} \sum_{\tau=t}^{t+d} \left| \theta_i^\tau - \theta_j^\tau \right|$$

where θ_{i,j} is the velocity-angle difference between feature point j and target point i over frames t to t+d, d = 3; θ_j^τ, the velocity direction of feature point j in the τ-th image frame, is obtained piecewise from the displacement (x_j^{τ+1} − x_j^τ, y_j^{τ+1} − y_j^τ) between the coordinates of feature point j in the τ-th and (τ+1)-th frames via the arctangent, the quadrant of the displacement determining the correction term so that the direction lies in [0°, 360°); θ_i^τ, the velocity direction of target point i in the τ-th image frame, is defined in the same way from the coordinates of target point i in the τ-th and (τ+1)-th frames;
S3.6) For any feature point j in $\hat N_i$, if θ_{i,j} ≥ 45° and (360° − θ_{i,j}) ≥ 45°, remove feature point j from $\hat N_i$;
S3.7) Compute the motion correlation between each feature point in $\hat N_i$ and target point i over frames t to t+d:

$$C_{i,j} = \frac{1}{d+1} \sum_{\tau=t}^{t+d} \frac{\mathbf{v}_\tau^i \cdot \mathbf{v}_\tau^j}{\lVert \mathbf{v}_\tau^i \rVert \, \lVert \mathbf{v}_\tau^j \rVert}$$

$$\mathbf{v}_\tau^i = (v_{x_i}^\tau, v_{y_i}^\tau) = (x_i^{\tau+1} - x_i^\tau,\; y_i^{\tau+1} - y_i^\tau), \qquad \lVert \mathbf{v}_\tau^i \rVert = \sqrt{(v_{x_i}^\tau)^2 + (v_{y_i}^\tau)^2}$$

$$\mathbf{v}_\tau^j = (v_{x_j}^\tau, v_{y_j}^\tau) = (x_j^{\tau+1} - x_j^\tau,\; y_j^{\tau+1} - y_j^\tau), \qquad \lVert \mathbf{v}_\tau^j \rVert = \sqrt{(v_{x_j}^\tau)^2 + (v_{y_j}^\tau)^2}$$

where C_{i,j} is the motion correlation between feature point j and target point i over frames t to t+d, d = 3; $\mathbf{v}_\tau^i$ is the velocity of target point i in the τ-th image frame and $\mathbf{v}_\tau^j$ is the velocity of feature point j in the τ-th image frame; (x_i^τ, y_i^τ) and (x_i^{τ+1}, y_i^{τ+1}) are the coordinates of target point i in the τ-th and (τ+1)-th frames of the target video, and (x_j^τ, y_j^τ) and (x_j^{τ+1}, y_j^{τ+1}) are the coordinates of feature point j in the τ-th and (τ+1)-th frames;
S3.8) Set C_th = 0.6; for any feature point j in $\hat N_i$, if C_{i,j} ≤ C_th, remove feature point j from $\hat N_i$;
S3.9) Take the remaining $\hat N_i$ as the initial same-group point set of target point i;
S4) For the initial same-group point-set sequence of each frame, sort the initial same-group point sets in descending order of the number of feature points they contain;
S5) For the initial same-group point-set sequence of each frame, label each initial same-group point set by the method of steps S5.1 to S5.3:
S5.1) Set K = 1 and L = 1, and label all feature points in the K-th initial same-group point set with L;
S5.2) Set K = K + 1;
If none of the feature points in the K-th initial same-group point set has been labeled, set L = L + 1 and label all feature points in the K-th set with the new L;
If at least one feature point in the K-th initial same-group point set already carries a label L, label all feature points in the K-th set with that same L;
S5.3) Repeat step S5.2 until every initial same-group point set in the sequence has been labeled;
S6) For the feature-point coordinate set of each frame, classify feature points with the same label into the same group; in the frame, draw feature points with the same label in the same color and feature points with different labels in different colors, thereby achieving group detection.
CN201610559499.9A 2016-07-15 2016-07-15 Group detection algorithm for crowds in dense scenes based on a multistage filtering model Pending CN106203360A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610559499.9A CN106203360A (en) 2016-07-15 2016-07-15 Group detection algorithm for crowds in dense scenes based on a multistage filtering model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610559499.9A CN106203360A (en) 2016-07-15 2016-07-15 Group detection algorithm for crowds in dense scenes based on a multistage filtering model

Publications (1)

Publication Number Publication Date
CN106203360A true CN106203360A (en) 2016-12-07

Family

ID=57474613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610559499.9A Pending CN106203360A (en) Group detection algorithm for crowds in dense scenes based on a multistage filtering model

Country Status (1)

Country Link
CN (1) CN106203360A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016358A (en) * 2017-03-24 2017-08-04 上海电力学院 Real-time detection method for small groups in medium-density scenes
CN107071344A (en) * 2017-01-22 2017-08-18 深圳英飞拓科技股份有限公司 Large-scale distributed surveillance video data processing method and device
CN109977800A (en) * 2019-03-08 2019-07-05 上海电力学院 Group detection method for crowds in dense scenes combining multiple features
CN109977809A (en) * 2019-03-08 2019-07-05 上海电力学院 Adaptive crowd group detection method
CN111382784A (en) * 2020-03-04 2020-07-07 厦门脉视数字技术有限公司 Moving target tracking method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592138A (en) * 2011-12-30 2012-07-18 上海电力学院 Object tracking method for dense scenes based on multi-module sparse projection
CN104239865A (en) * 2014-09-16 2014-12-24 宁波熵联信息技术有限公司 Pedestrian detection and tracking method based on multi-stage detection
CN104933726A (en) * 2015-07-02 2015-09-23 中国科学院上海高等研究院 Dense crowd segmentation method based on spatio-temporal information constraints
CN104933412A (en) * 2015-06-16 2015-09-23 电子科技大学 Abnormal state detection method for medium- and high-density crowds

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592138A (en) * 2011-12-30 2012-07-18 上海电力学院 Object tracking method for dense scenes based on multi-module sparse projection
CN104239865A (en) * 2014-09-16 2014-12-24 宁波熵联信息技术有限公司 Pedestrian detection and tracking method based on multi-stage detection
CN104933412A (en) * 2015-06-16 2015-09-23 电子科技大学 Abnormal state detection method for medium- and high-density crowds
CN104933726A (en) * 2015-07-02 2015-09-23 中国科学院上海高等研究院 Dense crowd segmentation method based on spatio-temporal information constraints

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Qian Zhao et al., "A Multistage Filtering for Detecting Group in the Crowd", ICALIP 2016 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107071344A (en) * 2017-01-22 2017-08-18 深圳英飞拓科技股份有限公司 Large-scale distributed surveillance video data processing method and device
CN107016358A (en) * 2017-03-24 2017-08-04 上海电力学院 Real-time detection method for small groups in medium-density scenes
CN107016358B (en) * 2017-03-24 2020-03-20 上海电力学院 Real-time detection method for small groups in medium-density scenes
CN109977800A (en) * 2019-03-08 2019-07-05 上海电力学院 Group detection method for crowds in dense scenes combining multiple features
CN109977809A (en) * 2019-03-08 2019-07-05 上海电力学院 Adaptive crowd group detection method
CN111382784A (en) * 2020-03-04 2020-07-07 厦门脉视数字技术有限公司 Moving target tracking method
CN111382784B (en) * 2020-03-04 2021-11-26 厦门星纵智能科技有限公司 Moving target tracking method

Similar Documents

Publication Publication Date Title
CN102663409B (en) Pedestrian tracking method based on HOG-LBP
CN106203360A (en) Group detection algorithm for crowds in dense scenes based on a multistage filtering model
CN107967451B (en) Method for counting crowd of still image
CN109508684B (en) Method for recognizing human behavior in video
CN103984946B (en) High resolution remote sensing map road extraction method based on K-means
CN103839065A (en) Extraction method for dynamic crowd gathering characteristics
CN102156880A (en) Method for detecting abnormal crowd behavior based on improved social force model
CN103489012B (en) Crowd density detecting method and system based on support vector machine
CN103020965A (en) Foreground segmentation method based on significance detection
CN101976338B (en) Method for detecting judgment type visual saliency based on gradient direction histogram
CN108960185A (en) Vehicle target detection method and system based on YOLOv2
CN104732236B (en) A kind of crowd's abnormal behaviour intelligent detecting method based on layered shaping
CN106529441B (en) Depth motion figure Human bodys' response method based on smeared out boundary fragment
CN107609509A (en) A kind of action identification method based on motion salient region detection
Wu et al. Coherent motion detection with collective density clustering
Mo et al. Background noise filtering and distribution dividing for crowd counting
CN107944392A (en) A kind of effective ways suitable for cell bayonet Dense crowd monitor video target mark
CN104392445A (en) Method for dividing crowd in surveillance video into small groups
CN105761507B (en) A kind of vehicle count method based on three-dimensional track cluster
Su et al. A new local-main-gradient-orientation HOG and contour differences based algorithm for object classification
He et al. Motion pattern analysis in crowded scenes by using density based clustering
CN104616295A (en) News image horizontal headline caption simply and rapidly positioning method
CN108537823A (en) Moving target detecting method based on mixed Gauss model
CN108009480A (en) A kind of image human body behavioral value method of feature based identification
CN107507190A (en) A kind of low latitude moving target detecting method based on visible light sequential image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20161207