CN109543555A - A method for real-time crowd density monitoring by pattern recognition and machine vision - Google Patents

A method for real-time crowd density monitoring by pattern recognition and machine vision

Info

Publication number
CN109543555A
CN109543555A (application number CN201811277215.2A)
Authority
CN
China
Prior art keywords
image
crowd density
training
foreground pixel
subblock
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811277215.2A
Other languages
Chinese (zh)
Inventor
郏东耀
吴能凯
张兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiaotong University filed Critical Beijing Jiaotong University
Priority to CN201811277215.2A priority Critical patent/CN109543555A/en
Publication of CN109543555A publication Critical patent/CN109543555A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for real-time crowd density monitoring by pattern recognition and machine vision. The image is divided into sub-blocks; a selection threshold R for the foreground-pixel ratio of an image sub-block is determined; each sub-block is then calibrated: when the average foreground-pixel ratio r of a sub-block satisfies r < R, a regression algorithm based on foreground-pixel statistics is used; when the average foreground-pixel ratio r of the image satisfies r > R, a learning classification algorithm based on texture feature extraction is used; finally, the crowd density grade of the entire image is obtained. The hybrid algorithm combining the two crowd density estimation algorithms achieves good classification performance at every crowd density grade and merges the advantages of both: it effectively overcomes the shortcomings of the two crowd density estimation algorithms, simplifies the overall training process, and improves the average classification accuracy.

Description

A method for real-time crowd density monitoring by pattern recognition and machine vision
Technical field
The present invention relates to the field of computer digital image processing, and in particular to a method for real-time crowd density monitoring by pattern recognition and machine vision.
Background technique
With the development of the economy and society, crowd gathering has become increasingly common, and crowd management and decision-making have grown more complex. Video-based real-time crowd density detection therefore has great application value and has become a research focus in the field of digital image processing.
Crowd density classification methods currently fall into two main categories: tracking and statistical methods based on local pixels, and learning classification methods based on global texture analysis. Representative of the former are the methods of Davies, Cho, Ma and others, who assume a linear relationship between crowd density and foreground pixels, so that the crowd density grade can be predicted by linear regression fitting. Marana, Wu, Rahmalan and Chan instead argue that crowd density is related to image texture: a low-density crowd produces a coarser texture image, whereas a high-density crowd produces a finer one. Based on this assumption, the texture features of a crowd image can be extracted and a support vector machine trained to obtain the crowd density grade.
Both approaches have their own limitations. A method based purely on local pixel tracking and statistics suffers when the monitored crowd is dense: occlusion becomes severe, and as crowd density increases the foreground-pixel ratio no longer varies linearly with crowd density; moreover, performing regression fitting on the entire training sample set degrades the model's classification of low-density grades, worsening the fit and even producing errors. A crowd density classification algorithm based purely on global texture analysis is easily disturbed by background noise at low crowd densities, which harms classification performance. In addition, in the traditional crowd density estimation algorithms above, training a support vector machine (SVM) classifier directly on the entire training sample set is not only inefficient but also highly susceptible to inter-class points (points lying between the two sample classes) and to noise-contaminated points in the training set, which lowers classification accuracy and increases structural risk. See the following documents:
[1] A. N. Marana, L. F. Costa, R. A. Lotufo, S. A. Velastin. On the efficacy of texture analysis for crowd monitoring [C] // Computer Graphics, Image Processing, and Vision (SIBGRAPI), Proceedings, International Symposium on. Rio de Janeiro: IEEE Press, 1998: 354-361.
[2] X. Wu, G. Liang, K. K. Lee, Y. Xu. Crowd density estimation using texture analysis and learning [C] // Robotics and Biomimetics, IEEE International Conference on. Kunming, China: IEEE Press, 2006: 214-219.
[3] H. Rahmalan, M. S. Nixon, J. N. Carter. On crowd density estimation for surveillance [C] // Crime and Security, The Institution of Engineering and Technology Conference on. London, UK: IET Press, 2006: 540-545.
[4] A. B. Chan, Zhang-Sheng John Liang, N. Vasconcelos. Privacy preserving crowd monitoring: Counting people without people models or tracking [C] // Computer Vision and Pattern Recognition (CVPR), IEEE Conference on. Anchorage, AK, USA: IEEE Press, 2008: 1-7.
[5] Ma Wenhua, Huang Lei, Liu Changping. Crowd density level classification model based on confidence analysis [J]. Pattern Recognition and Artificial Intelligence, 2011, 24(1): 30-39.
[6] Wang Guode, Zhang Peilin, Ren Guoquan, et al. Texture feature extraction method fusing LBP and GLCM [J]. Computer Engineering, 2012, 38(11): 199-201.
[7] Wang Haopeng, Li Hui. Classification and recognition of impurities in unginned cotton based on local binary patterns and gray level co-occurrence matrix [J]. Transactions of the Chinese Society of Agricultural Engineering, 2015(3): 236-241.
Summary of the invention
In view of the above technical problems, the present invention adopts a crowd density estimation algorithm that combines foreground-pixel regression calculation with learning based on texture feature extraction, merging the advantages of the two crowd density estimation algorithms as far as possible to achieve the best classification performance. To achieve the above purpose, the present invention adopts the following technical scheme:
A method for real-time crowd density monitoring by pattern recognition and machine vision, comprising:
S1: dividing the image into sub-blocks and determining the selection threshold R for the foreground-pixel ratio r of an image sub-block;
S2: grading the crowd density of each image sub-block: when the average foreground-pixel ratio r of a sub-block satisfies r < R, the crowd density is calculated using the regression algorithm based on foreground-pixel statistics, and the crowd density grade number is obtained from the calibration table; when the average foreground-pixel ratio r of the image satisfies r > R, the crowd density is calculated using the learning classification algorithm based on texture feature extraction, the crowd density grade number is obtained from the calibration table, and an SVM classifier is established and used to evaluate the crowd density grade of the image sub-blocks to be detected;
S3: after the crowd density grade of each image sub-block is obtained, the crowd density grade of the entire image is estimated from the grades of the sub-blocks; the crowd density grade of the entire image is therefore defined as:
d_all = #[(1/N)·Σ_{i=1..N} d_seg(i)]
where d_all denotes the crowd density grade number of the entire image, d_seg(i) is the crowd density grade number of image sub-block i, #[·] is the rounding-up operator, and N is the number of sub-blocks into which the entire image is divided.
Further, the selection threshold R for the foreground-pixel ratio of an image sub-block is determined as follows:
S21: calculate the foreground-pixel ratio r in each image sub-block:
r = N_pro / N_seg
where N_pro is the number of foreground pixels in the sub-block and N_seg is the total number of pixels in the sub-block;
S22: count the number of people n_p corresponding to the ratio r;
S23: fit an approximate curve:
f(n_p) = Kr + b;
when K is no longer constant, i.e., the relationship becomes non-linear, the corresponding ratio is r = R, and R is the selection threshold.
Further, the selection threshold is R = 0.6.
Further, the crowd density is calculated using the regression algorithm based on foreground-pixel statistics, specifically:
S311: extract the foreground-pixel ratio r_seg and the number of people n_p of each of the n image sub-blocks in the training set, and construct the training data point set {(r_seg(i), n_p(i))}, i = 1, 2, 3, ..., n;
S312: perform least-squares regression fitting on the training data point set to obtain:
n_p = f(r_seg);
S313: divide the test image into sub-blocks, compute the foreground-pixel ratio r_seg of each, obtain the corresponding number of people n_p, and obtain the corresponding density grade number from the calibration table.
Further, the crowd density is calculated using the learning classification algorithm based on texture feature extraction, with the following specific steps:
S321: divide each frame image input in the training set into 3 image sub-blocks to obtain 3 original gray-scale images, then compute the LBP image of each of the 3 original gray-scale images;
S322: with gray level N = 16 and d = 1, compute the gray level co-occurrence matrices of the LBP image and the original gray-scale image for θ = 0°, 45°, 90° and 135°, respectively;
S323: for each gray level co-occurrence matrix (GLCM), compute four statistics: homogeneity (inverse difference moment), entropy, energy and contrast; the feature vector of each image sub-block can then be described as:
μ_i = {a_1, a_2, a_3, a_4, b_1, b_2, b_3, b_4}
where i = 1, 2, 3, and a_j and b_j are 4-dimensional vectors containing the four statistics of the GLCMs of the LBP image and of the original gray-scale image, respectively, in the four directions;
S324: concatenate the feature vectors of the 3 image sub-blocks in order to obtain a 96-dimensional feature vector x that characterizes the texture of one frame image, where the feature vector of each frame is described as:
x = {μ_1, μ_2, μ_3}
S325: calibrate each image sub-block against the calibration table to obtain the density grade number of the image.
Further, the specific steps of establishing the SVM classifier are: construct the training sample set S:
S = {(x_i, y_i): i = 1, 2, ..., l, (x_i, y_i) ∈ R^n × {4, 5}}
where l is the number of training samples, x_i is the feature vector, and the label y_i is the grade number of each frame image;
filter out abnormal samples based on Bayesian estimation, with the deletion threshold ε set to 0.1; after the inter-class points and noise-contaminated points in the training sample set are removed, form a new training sample set and train the support vector machine model using the iterative training algorithm based on improved K-means clustering; on the basis of the original cluster-based SVM iterative algorithm, the sample set used for the initial iteration of training is screened, which further increases the probability that the initial training sample set contains support vectors and thereby accelerates the training of the SVM; the process is as follows:
Step 1: select the sample set S_1 with the larger posterior probability; let the training sample set be S = S_1 + S_2; since the posterior probability of S_1 is larger, the feature vectors contained in S_1 are more typical, and the support vectors are considered more likely to lie in S_1;
Step 2: using the K-means clustering method, cluster the sample set S_1 into k classes, so that S_1 is expressed as:
S_1 = S_11 ∪ S_12 ∪ S_13 ∪ ... ∪ S_1k
Step 3: for each S_1i (i = 1, 2, 3, ..., k), check in a loop whether it contains sample points of both classes; if S_1i contains both classes, set the initial training set I = I ∪ S_1i, where the initial value of I is the empty set; only a set containing sample points of both classes can determine a classification surface when used as a training set;
Step 4: feed the initial training set I into the SVM for training to obtain an initial SVM classifier;
Step 5: set the sample set R = S − I, feed R into the initial SVM classifier as a test set, and let W be the set of samples that fail the test of the initial SVM classifier;
Step 6: check whether the number of vectors Num(W) contained in W is less than the threshold γ; if so, the SVM classifier is the final classifier; otherwise, let I = I ∪ W and return to Step 4.
In the present invention, the foreground image of interest is first divided into blocks in proportion according to a perspective model; the proportion of foreground pixels in each block is then counted, and a segmentation threshold is introduced: blocks below the threshold use the foreground-pixel regression algorithm, while blocks above it use the algorithm based on texture-feature-extraction learning. For texture feature extraction, the invention proposes a descriptor that fuses local and global gray level co-occurrence matrices to effectively enhance classification performance. When training the support vector machine model with the training sample set, a method based on Bayesian estimation is proposed to filter out abnormal samples in the training set, which effectively improves the classification accuracy of the support vector machine. For the training strategy of the support vector machine model, an iterative training algorithm with improved K-means clustering is proposed, which effectively accelerates the training of the support vector machine.
Compared with the prior art, the beneficial technical effects of the present invention are:
The hybrid algorithm combining the two crowd density estimation algorithms achieves good classification performance at every crowd density grade and merges the advantages of both algorithms: it not only effectively overcomes the shortcomings of the two crowd density estimation algorithms, but also simplifies the overall training process and improves the average classification accuracy. Moreover, the iterative training algorithm with improved K-means clustering proposed by the present invention significantly improves the training efficiency of the support vector machine while maintaining the average correct classification rate, and also effectively shortens the testing time of test samples. This is mainly because, on the basis of the original cluster-based iterative algorithm, the posterior probability formula of the training sample set is used to screen the sample set used in the initial iteration of training, further increasing the probability that the initial training sample set contains support vectors; the selection of the training subset thus becomes more targeted, the criterion for screening training samples is strengthened, the number of training iterations is effectively reduced, and the training performance of the support vector machine is ultimately improved.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention for monitoring crowd density in real time by pattern recognition and machine vision.
Specific embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The present invention relates to an adaptive crowd density grade estimation algorithm that combines foreground-pixel regression calculation with learning based on texture feature extraction. The algorithm flow chart is shown in Fig. 1, and the specific steps are as follows:
S1. Divide the image into sub-blocks and determine the selection threshold for the foreground-pixel ratio
The present invention minimizes the influence of the perspective effect by dividing the image into sub-blocks, and the preprocessing is as follows:
In sparse crowd scenes, the regression algorithm based on foreground-pixel statistics performs well. However, when the crowd is dense and occlusion occurs, its performance becomes unsatisfactory; in that case, the learning classification algorithm based on texture feature extraction has a clear advantage. Therefore, by calculating the linear relationship between the foreground-pixel ratio and the number of people observed in the image, the selection threshold of the foreground-pixel ratio is determined, and the classification algorithm suited to the specific scene is then used.
Step 1: Calculate the foreground-pixel ratio r in the image:
r = N_pro / N_seg
where N_pro is the number of foreground pixels in the image and N_seg is the total number of pixels in the image.
Step 2: Count the number of people n_p corresponding to the foreground-pixel ratio r; n_p is obtained directly by observation.
Step 3: Fit an approximate curve. The number of people n_p corresponding to the foreground-pixel ratio r follows an approximately linear law; a fitted curve is obtained from this linear relationship, so that the number of people can be obtained from the foreground-pixel ratio r:
f(n_p) = Kr + b    (1)
Step 4: When K is no longer constant, i.e., the relationship becomes non-linear, the corresponding ratio is r = R, and R is the selection threshold. When r < R the relationship is linear and the crowd density is calculated using the regression algorithm based on foreground pixels; when r > R the relationship is no longer linear and the method based on texture feature extraction is used instead.
The test database selected for the present invention is the widely used standard UCSD pedestrian database. Extensive experimental verification, allowing a certain margin of judgment, gives a measured selection threshold of R = 0.6.
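The threshold-selection procedure above can be sketched in Python as follows. This is a minimal illustration, not the patent's reference implementation: it assumes a binary foreground mask per block, manually annotated people counts, and an illustrative window size and slope tolerance for deciding where the fit f(n_p) = Kr + b stops being linear.

```python
import numpy as np

def foreground_ratio(fg_mask):
    """r = N_pro / N_seg: fraction of foreground pixels in a binary sub-block mask."""
    return float(np.count_nonzero(fg_mask)) / fg_mask.size

def find_selection_threshold(ratios, counts, window=10, tol=0.25):
    """Scan (r, n_p) pairs sorted by r and return the ratio R at which the local
    slope K of the fit f(n_p) = K*r + b stops being roughly constant."""
    order = np.argsort(ratios)
    r = np.asarray(ratios, dtype=float)[order]
    n = np.asarray(counts, dtype=float)[order]
    K0 = None
    for start in range(0, len(r) - window + 1, window):
        K, _ = np.polyfit(r[start:start + window], n[start:start + window], 1)
        if K0 is None:
            K0 = K                           # reference slope from the sparsest blocks
        elif abs(K - K0) > tol * abs(K0):    # linearity breaks down here
            return r[start]                  # -> selection threshold R
    return r[-1]                             # no break found in this data
```

On the UCSD pedestrian data the patent reports this breakpoint settling at R = 0.6.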
S2. Calibration of the image sub-block crowd density
The crowd density grade calibration table is formulated on the basis of image sub-blocks, and the selection threshold R = 0.6 is chosen as the boundary between medium and high density. When the average foreground-pixel ratio r of a sub-block satisfies r < R, the crowd density grade is calculated using the regression algorithm based on foreground-pixel statistics; when r > R, the crowd density grade is calculated using the learning classification algorithm based on texture feature extraction. The two cases are described below:
(1) Regression algorithm based on foreground-pixel statistics
When the foreground-pixel ratio r of an image sub-block is less than the selection threshold R, experimental verification shows that the foreground-pixel ratio has a good approximately linear relationship with the number of people in the sub-block, so a linear regression algorithm can be used to fit the functional relationship between the foreground-pixel ratio and the number of people in the sub-block. The specific steps are as follows (see the sketch after Table 1):
Step 1: Extract the foreground-pixel ratio r_seg and the number of people n_p of each of the n image sub-blocks in the training set, and construct the training data point set {(r_seg(i), n_p(i))} (i = 1, 2, 3, ..., n);
Step 2: Perform least-squares regression fitting on the training data point set to obtain:
n_p = f(r_seg)
Step 3: Divide the test image into sub-blocks, compute the foreground-pixel ratio r_seg of each sub-block, obtain the corresponding number of people n_p using formula (1), and obtain the crowd density grade number of each sub-block from calibration Table 1.
Table 1. Crowd density grade calibration for image sub-blocks
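The regression of Steps 1-3 amounts to a one-dimensional least-squares fit followed by a table lookup. The sketch below illustrates this; the training values and the grade bounds standing in for Table 1 are hypothetical, since the calibrated table is not reproduced in this text.

```python
import numpy as np

# Illustrative training data (the patent's UCSD-derived values and Table 1 are not
# reproduced here): foreground-pixel ratio r_seg and observed people count n_p.
r_seg_train = np.array([0.05, 0.10, 0.18, 0.25, 0.33, 0.41, 0.50, 0.58])
n_p_train = np.array([1, 2, 4, 6, 8, 10, 13, 15])

# Least-squares fit of n_p = f(r_seg) = K * r_seg + b.
K, b = np.polyfit(r_seg_train, n_p_train, 1)

def predict_count(r_seg):
    """Predicted people count for a sub-block with foreground-pixel ratio r_seg."""
    return K * r_seg + b

def grade_from_count(n_p, bounds=(2, 6, 11)):
    """Map a predicted count to a density grade number; the bounds stand in for
    the (unreproduced) calibration Table 1 and are purely hypothetical."""
    for grade, upper in enumerate(bounds, start=1):
        if n_p <= upper:
            return grade
    return len(bounds) + 1

print(grade_from_count(predict_count(0.30)))   # e.g. grade 3 for r_seg = 0.30
```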
(2) Learning classification algorithm based on texture features
When the foreground-pixel ratio r of an image sub-block is greater than the selection threshold R, the present invention extracts the gray level co-occurrence matrix of the LBP image of the original image to strengthen the description of the local texture features of the original image. Combining the gray level co-occurrence matrices of the LBP image of the original gray-scale image with the gray level co-occurrence matrices of the original image itself describes the texture features of the image comprehensively, both locally and globally, so that the discriminative power of the described texture features is enhanced, a better classification result is obtained, and the classification performance of the system is improved. The specific steps are as follows:
Step 1: Divide each frame image input in the training set into 3 image sub-blocks to obtain 3 original gray-scale images, then compute the LBP image of each of the 3 original gray-scale images;
Step 2: With gray level N = 16 and d = 1, compute the gray level co-occurrence matrices (GLCM) of the LBP image and of the original gray-scale image in the 4 directions (θ = 0°, 45°, 90°, 135°), respectively;
Step 3: For each gray level co-occurrence matrix, compute four statistics: homogeneity (inverse difference moment), entropy, energy and contrast. The feature vector of each image sub-block can then be described as:
μ_i = {a_1, a_2, a_3, a_4, b_1, b_2, b_3, b_4}
where i = 1, 2, 3 indexes the image sub-block, and a_j and b_j (j = 1, 2, 3, 4) are 4-dimensional vectors containing the four statistics of the GLCMs of the LBP image and of the original gray-scale image, respectively, in the four directions.
Step 4: Concatenate the feature vectors of the 3 image sub-blocks in order to obtain a 96-dimensional feature vector x that characterizes the texture of one frame image. The feature vector of each frame is described as:
x = {μ_1, μ_2, μ_3}
Step 5: Observe the number of people in each image sub-block and calibrate each sub-block against calibration Table 1 to obtain the crowd density grade number of the image.
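Before turning to Step 6, the feature extraction of Steps 1-4 can be sketched with scikit-image's LBP and GLCM utilities. The LBP parameters (P = 8, R = 1), the requantization to 16 gray levels before computing the co-occurrence matrices, and the manual entropy computation are assumptions made for illustration; the patent fixes only the gray level N = 16, the distance d = 1 and the four directions.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

ANGLES = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]   # theta = 0, 45, 90, 135 degrees
LEVELS, DIST = 16, 1                                 # N = 16 gray levels, d = 1

def quantize(img, levels=LEVELS):
    """Requantize an image to `levels` gray levels so it can be fed to graycomatrix."""
    img = img.astype(np.float64)
    rng = img.max() - img.min()
    img = (img - img.min()) / (rng + 1e-12)
    return np.clip((img * levels).astype(np.uint8), 0, levels - 1)

def glcm_stats(img):
    """Homogeneity, entropy, energy and contrast of the GLCM in the 4 directions
    (16 values per image); entropy is computed by hand from the normalized GLCM."""
    glcm = graycomatrix(quantize(img), distances=[DIST], angles=ANGLES,
                        levels=LEVELS, symmetric=True, normed=True)
    feats = [graycoprops(glcm, prop)[0] for prop in ("homogeneity", "energy", "contrast")]
    p = glcm[:, :, 0, :]                                    # (levels, levels, 4)
    entropy = -np.sum(p * np.log2(p + 1e-12), axis=(0, 1))  # one value per direction
    feats.insert(1, entropy)
    return np.concatenate(feats)

def subblock_features(gray_subblock):
    """mu_i: GLCM statistics of the LBP image and of the original gray image (32-dim)."""
    lbp = local_binary_pattern(gray_subblock, P=8, R=1)
    return np.concatenate([glcm_stats(lbp), glcm_stats(gray_subblock)])

def frame_features(sub1, sub2, sub3):
    """x = {mu_1, mu_2, mu_3}: 96-dimensional texture descriptor of one frame."""
    return np.concatenate([subblock_features(s) for s in (sub1, sub2, sub3)])
```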
Step 6: Establish the SVM classifier
The feature vectors from Step 5, together with their associated crowd density grades, are later used as training samples to construct the training sample set S:
S = {(x_i, y_i): i = 1, 2, ..., l, (x_i, y_i) ∈ R^n × {4, 5}}
where l is the number of training samples, x_i is the feature vector, and the label y_i is the grade number of each frame image.
Abnormal samples are then filtered out based on Bayesian estimation, with the deletion threshold ε set to 0.1. After the inter-class points and noise-contaminated points in the training sample set are removed, a new training sample set is formed and the support vector machine model is trained. An iterative training algorithm based on improved K-means clustering is proposed: on the basis of the original cluster-based SVM iterative algorithm, the sample set used for the initial iteration of training is screened, which further increases the probability that the initial training sample set contains support vectors and thereby accelerates the training of the SVM. The process is as follows (a code sketch is given after Step 6):
Step 1: Select the sample set S_1 with the larger posterior probability. Let the training sample set be S = S_1 + S_2. Since the posterior probability of S_1 is larger, the feature vectors contained in S_1 are more typical, and it can be considered that the support vectors are more likely to lie in S_1.
Step 2: Using the K-means clustering method, cluster the sample set S_1 into k classes, so that S_1 can be expressed as:
S_1 = S_11 ∪ S_12 ∪ S_13 ∪ ... ∪ S_1k
Step 3: For each S_1i (i = 1, 2, 3, ..., k), check in a loop whether it contains sample points of both classes. If S_1i contains both classes, set the initial training set I = I ∪ S_1i, where the initial value of I is the empty set. Clearly, only a set containing sample points of both classes can determine a classification surface when used as a training set.
Step 4: Feed the initial training set I into the SVM for training to obtain an initial SVM classifier.
Step 5: Set the sample set R = S − I, feed R into the initial SVM classifier as a test set, and let W be the set of samples that fail the test of the initial SVM classifier.
Step 6: Check whether the number of vectors Num(W) contained in W is less than the threshold γ. If so, the SVM classifier is the final classifier; otherwise, let I = I ∪ W and return to Step 4.
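The iterative loop of Steps 1-6 can be sketched with scikit-learn as below. This is an illustrative sketch rather than the patent's implementation: the Bayesian posterior screening of Step 1 is approximated with a Gaussian naive Bayes model, and the values of k, the posterior cutoff and the stopping threshold (the patent's γ) are assumed for demonstration; it also assumes the screened set still contains samples of both classes.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

def iterative_svm_train(X, y, k=8, posterior_min=0.9, gamma_thresh=5, seed=0):
    """Cluster-screened iterative SVM training (sketch, not the patent's code)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)

    # Step 1: S1 = samples whose class posterior probability is large.
    post = GaussianNB().fit(X, y).predict_proba(X).max(axis=1)
    s1 = np.where(post >= posterior_min)[0]

    # Step 2: cluster S1 into k groups.
    k = max(2, min(k, len(s1)))
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X[s1])

    # Step 3: initial training set I = union of clusters containing both classes.
    mixed = [s1[labels == c] for c in range(k)
             if len(np.unique(y[s1[labels == c]])) == 2]
    I = np.concatenate(mixed) if mixed else s1   # fallback: use all of S1

    while True:
        # Step 4: train an SVM on the current training set I.
        clf = SVC(kernel="rbf", gamma="scale").fit(X[I], y[I])
        # Step 5: test the remaining samples R = S - I; W = those misclassified.
        R = np.setdiff1d(np.arange(len(X)), I)
        W = R[clf.predict(X[R]) != y[R]]
        # Step 6: stop once fewer than gamma_thresh samples fail; else I = I u W.
        if len(W) < gamma_thresh:
            return clf
        I = np.union1d(I, W)
```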
S3. Obtain the crowd density grade of the entire image
After the image to be detected has been processed by the classification algorithm described above and the crowd density grade of each image sub-block has been obtained, the crowd density grade of the entire image is estimated from the grades of the sub-blocks. The crowd density grade of the entire image is therefore defined as:
d_all = #[(1/N)·Σ_{i=1..N} d_seg(i)]
where d_all denotes the crowd density grade number of the entire image, d_seg(i) is the crowd density grade number of image sub-block i, #[·] is the rounding-up operator, and N is the number of sub-blocks into which the entire image is divided.
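Reading #[·] as the round-up operator, the whole-image grade is simply the ceiling of the mean sub-block grade; a short sketch with illustrative grades:

```python
import math

def whole_image_grade(subblock_grades):
    """d_all = #[(1/N) * sum(d_seg(i))], with #[.] taken as the round-up operator."""
    return math.ceil(sum(subblock_grades) / len(subblock_grades))

print(whole_image_grade([2, 3, 3, 4]))   # (2+3+3+4)/4 = 3.0 -> grade 3
```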
The present invention has been described in detail above through specific embodiments; these detailed descriptions should not be taken to limit the present invention to the described content. Any improvements or equivalent substitutions made by those skilled in the art according to the concept of the present invention and in combination with common knowledge in the field shall be included within the scope of protection of the claims of the present invention.

Claims (6)

1. A method for real-time crowd density monitoring by pattern recognition and machine vision, characterized in that:
S1: the image is divided into sub-blocks and the selection threshold R for the foreground-pixel ratio r of an image sub-block is determined;
S2: the crowd density of each image sub-block is graded: when the average foreground-pixel ratio r of a sub-block satisfies r < R, the crowd density is calculated using the regression algorithm based on foreground-pixel statistics and the crowd density grade number is obtained from the calibration table; when the average foreground-pixel ratio r of the image satisfies r > R, the crowd density is calculated using the learning classification algorithm based on texture feature extraction, the crowd density grade number is obtained from the calibration table, and an SVM classifier is established and used to evaluate the crowd density grade of the image sub-blocks to be detected;
S3: after the crowd density grade of each image sub-block is obtained, the crowd density grade of the entire image is estimated from the grades of the sub-blocks; the crowd density grade of the entire image is therefore defined as:
d_all = #[(1/N)·Σ_{i=1..N} d_seg(i)]
where d_all denotes the crowd density grade number of the entire image, d_seg(i) is the crowd density grade number of image sub-block i, #[·] is the rounding-up operator, and N is the number of sub-blocks into which the entire image is divided.
2. The method according to claim 1, characterized in that the selection threshold R for the foreground-pixel ratio of an image sub-block is determined as follows:
S21: calculate the foreground-pixel ratio r in each image sub-block:
r = N_pro / N_seg
where N_pro is the number of foreground pixels in the sub-block and N_seg is the total number of pixels in the sub-block;
S22: count the number of people n_p corresponding to the ratio r;
S23: fit an approximate curve:
f(n_p) = Kr + b;
when K is no longer constant, i.e., the relationship becomes non-linear, the corresponding ratio is r = R, and R is the selection threshold.
3. The method according to claim 2, wherein the selection threshold R = 0.6.
4. The method according to claim 1, characterized in that the crowd density is calculated using the regression algorithm based on foreground-pixel statistics, specifically:
S311: extract the foreground-pixel ratio r_seg and the number of people n_p of each of the n image sub-blocks in the training set, and construct the training data point set {(r_seg(i), n_p(i))}, i = 1, 2, 3, ..., n;
S312: perform least-squares regression fitting on the training data point set to obtain:
n_p = f(r_seg);
S313: divide the test image into sub-blocks, compute the foreground-pixel ratio r_seg of each, obtain the corresponding number of people n_p, and obtain the corresponding density grade number from the calibration table.
5. The method according to claim 1, characterized in that the crowd density is calculated using the learning classification algorithm based on texture feature extraction, with the following specific steps:
S321: divide each frame image input in the training set into 3 image sub-blocks to obtain 3 original gray-scale images, then compute the LBP image of each of the 3 original gray-scale images;
S322: with gray level N = 16 and d = 1, compute the gray level co-occurrence matrices of the LBP image and the original gray-scale image for θ = 0°, 45°, 90° and 135°, respectively;
S323: for each gray level co-occurrence matrix (GLCM), compute four statistics: homogeneity (inverse difference moment), entropy, energy and contrast; the feature vector of each image sub-block can then be described as:
μ_i = {a_1, a_2, a_3, a_4, b_1, b_2, b_3, b_4}
where i = 1, 2, 3, and a_j and b_j are 4-dimensional vectors containing the four statistics of the GLCMs of the LBP image and of the original gray-scale image, respectively, in the four directions;
S324: concatenate the feature vectors of the 3 image sub-blocks in order to obtain a 96-dimensional feature vector x that characterizes the texture of one frame image, where the feature vector of each frame is described as:
x = {μ_1, μ_2, μ_3}
S325: calibrate each image sub-block against the calibration table to obtain the density grade number of the image.
6. The method according to claim 1, wherein the specific steps of establishing the SVM classifier are: construct the training sample set S:
S = {(x_i, y_i): i = 1, 2, ..., l, (x_i, y_i) ∈ R^n × {4, 5}}
where l is the number of training samples, x_i is the feature vector, and the label y_i is the grade number of each frame image;
filter out abnormal samples based on Bayesian estimation, with the deletion threshold ε set to 0.1; after the inter-class points and noise-contaminated points in the training sample set are removed, form a new training sample set and train the support vector machine model using the iterative training algorithm based on improved K-means clustering; on the basis of the original cluster-based SVM iterative algorithm, the sample set used for the initial iteration of training is screened, which further increases the probability that the initial training sample set contains support vectors and thereby accelerates the training of the SVM; the process is as follows:
Step 1: select the sample set S_1 with the larger posterior probability; let the training sample set be S = S_1 + S_2; since the posterior probability of S_1 is larger, the feature vectors contained in S_1 are more typical, and the support vectors are considered more likely to lie in S_1;
Step 2: using the K-means clustering method, cluster the sample set S_1 into k classes, so that S_1 is expressed as:
S_1 = S_11 ∪ S_12 ∪ S_13 ∪ ... ∪ S_1k
Step 3: for each S_1i (i = 1, 2, 3, ..., k), check in a loop whether it contains sample points of both classes; if S_1i contains both classes, set the initial training set I = I ∪ S_1i, where the initial value of I is the empty set; only a set containing sample points of both classes can determine a classification surface when used as a training set;
Step 4: feed the initial training set I into the SVM for training to obtain an initial SVM classifier;
Step 5: set the sample set R = S − I, feed R into the initial SVM classifier as a test set, and let W be the set of samples that fail the test of the initial SVM classifier;
Step 6: check whether the number of vectors Num(W) contained in W is less than the threshold γ; if so, the SVM classifier is the final classifier; otherwise, let I = I ∪ W and return to Step 4.
CN201811277215.2A 2018-10-30 2018-10-30 A method for real-time crowd density monitoring by pattern recognition and machine vision Pending CN109543555A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811277215.2A CN109543555A (en) 2018-10-30 2018-10-30 A method of crowd density is monitored in real time by pattern-recognition and machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811277215.2A CN109543555A (en) 2018-10-30 2018-10-30 A method of crowd density is monitored in real time by pattern-recognition and machine vision

Publications (1)

Publication Number Publication Date
CN109543555A true CN109543555A (en) 2019-03-29

Family

ID=65845583

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811277215.2A Pending CN109543555A (en) 2018-10-30 2018-10-30 A method of crowd density is monitored in real time by pattern-recognition and machine vision

Country Status (1)

Country Link
CN (1) CN109543555A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102044073A (en) * 2009-10-09 2011-05-04 汉王科技股份有限公司 Method and system for judging crowd density in image
CN103164711A (en) * 2013-02-25 2013-06-19 昆山南邮智能科技有限公司 Regional people stream density estimation method based on pixels and support vector machine (SVM)
US20160133025A1 (en) * 2014-11-12 2016-05-12 Ricoh Company, Ltd. Method for detecting crowd density, and method and apparatus for detecting interest degree of crowd in target position

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张兵 (Zhang Bing): "Crowd density analysis and sudden abnormal behavior detection in intelligent video surveillance", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797873A (en) * 2023-02-06 2023-03-14 泰山学院 Crowd density detection method, system, equipment, storage medium and robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190329)