CN106056078A - Crowd density estimation method based on multi-feature regression ensemble learning - Google Patents

Crowd density estimation method based on multi-feature regression ensemble learning

Info

Publication number
CN106056078A
Authority
CN
China
Prior art keywords
pixel
density estimation
image
scene
crowd density
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610374700.6A
Other languages
Chinese (zh)
Other versions
CN106056078B (en)
Inventor
郑宏
张洞明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Research Institute of Wuhan University
Original Assignee
Shenzhen Research Institute of Wuhan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Research Institute of Wuhan University filed Critical Shenzhen Research Institute of Wuhan University
Priority to CN201610374700.6A priority Critical patent/CN106056078B/en
Publication of CN106056078A publication Critical patent/CN106056078A/en
Application granted granted Critical
Publication of CN106056078B publication Critical patent/CN106056078B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a crowd density estimation method based on multi-feature regression ensemble learning. With the head width of a person as a reference, hierarchical image blocking is carried out on a scene frame image, and each block is scaled and Gamma-corrected so that image scale and illumination are consistent. The preprocessed samples are used to build a density estimation model: D-SIFT, GLCM and GIST features are extracted to build a first-layer support vector regression (SVR) coarse prediction model, the coarse prediction results serve as new features to build a second-layer SVR fine prediction model, the fine prediction results of all sub-images are summed, and density estimation is carried out according to the people-count classes set for the scene. The method overcomes the problems of scene illumination changes, camera height and angle changes, and pedestrian occlusion; by using samples from multiple scenes, adopting multiple features, and building the model with regression ensemble learning, crowd density estimation can be realized in multiple different scenes.

Description

Crowd density estimation method based on multi-feature regression ensemble learning
Technical field
The present invention belongs to the technical fields of digital image processing and pattern recognition, and in particular relates to a crowd density estimation method based on multi-feature regression ensemble learning.
Background art
With the improvement of people's living standards and the continuous acceleration of urbanization, large-scale activities in public places have become increasingly frequent, and in recent years accidents caused by crowd congestion have occurred repeatedly. Therefore, how to use computer vision to monitor crowds intelligently in real time, make timely crowd density estimates and take effective measures is of great significance for ensuring social stability and crowd safety.
Current crowd density estimation methods can be divided into two broad categories:
1) Direct methods: direct methods use classifiers to segment or detect each individual in the crowd and then count them to obtain the crowd density. These methods can be further divided into two groups. (a) Model-based methods, which detect or segment people by models or shape contours. For example, Lin et al. proposed a pedestrian detection method that extracts head contour features based on the Haar wavelet transform and combines them with a support vector machine (Lin S F, Chen J Y, Chao H X. Estimation of number of people in crowded scenes using perspective transformation [J]. Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE Transactions on, 2001, 31(6): 645-654); Felzenszwalb et al. proposed a deformable parts model (DPM) detection algorithm based on parts and improved histogram of oriented gradient (HOG) features (Felzenszwalb P F, Girshick R B, McAllester D, et al. Object detection with discriminatively trained part-based models [J]. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2010, 32(9): 1627-1645); Gall and Lempitsky proposed a method that uses a Hough forest framework to detect and score the parts of a pedestrian in order to determine pedestrians and their positions (Gall J, Lempitsky V. Class-specific Hough forests for object detection [M]. Decision Forests for Computer Vision and Medical Image Analysis. Springer London, 2013: 143-157); and Gardzinski et al. (Gardzinski P, Kowalak K, Kaminski L, et al. Crowd density estimation based on voxel model in multi-view surveillance systems [C]. Systems, Signals and Image Processing (IWSSIP), 2015 International Conference on. IEEE, 2015: 216-219) use multi-view cameras to build a 3D foreground model and extract human bodies according to body shape to estimate the crowd count. (b) Trajectory-clustering-based methods, which detect each individual by clustering the interest points of pedestrians tracked over time. For example, Rabaud and Belongie proposed a method that uses a Kanade-Lucas-Tomasi (KLT) tracker and clusters a set of low-level feature trajectories to infer the number of people in the scene (Rabaud V, Belongie S. Counting crowded moving objects [C]. Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on. IEEE, 2006, 1: 705-711); Rao et al. (Rao A S, Gubbi J, Marusic S, et al. Estimation of crowd density by clustering motion cues [J]. The Visual Computer, 2015, 31(11): 1533-1552) obtain crowd contours by optical-flow tracking, filter pedestrian trajectories from the motion information, and then estimate crowd density by cluster analysis. Direct methods perform well when the number of people in the scene is small, but their drawback is also obvious: in crowded scenes with severe occlusion, their performance drops sharply.
2) Indirect methods: indirect methods treat the crowd as a whole and obtain the crowd density by extracting texture features and combining them with a regression model. Indirect methods can be divided into three classes. (a) Pixel-based analysis: these methods first remove the scene background and then use simple low-level features to estimate crowd density. Davies et al. (Davies A C, Yin J H, Velastin S A. Crowd monitoring using image processing [J]. Electronics & Communication Engineering Journal, 1995, 7(1): 37-47) extract the foreground, analyze crowd foreground and edge pixels with perspective correction, and estimate the number of people through a linear relationship. Hussain et al. (Hussain N, Yatim H S M, Hussain N L, et al. CDES: A pixel-based crowd density estimation system for Masjid al-Haram [J]. Safety Science, 2011, 49(6): 824-833) correct perspective distortion by scaling, extract low-level features from the foreground pixels, and train a back-propagation neural network in a supervised manner; the trained model estimates sparse crowds accurately, but as the density rises and occlusion appears, the estimation error grows rapidly. (b) Texture- and gradient-based methods: compared with pixel-based methods, texture and gradient features express the number of people in a scene better. Texture and gradient features used in crowd density estimation include the gray-level co-occurrence matrix (GLCM), uniform local binary patterns (ULBP), HOG features, and the gradient orientation co-occurrence matrix (GOCM). (c) Feature-point-based methods: feature points are pixels of interest, typically corners detected in the image. For example, Conte et al. (Conte D, Foggia P, Percannella G, et al. Counting moving persons in crowded scenes [J]. Machine Vision and Applications, 2013, 24(5): 1029-1042) use speeded-up robust features (SURF) to detect corners and then use the number of moving corners to estimate crowd density; Liang et al. (Liang R, Zhu Y, Wang H. Counting crowd flow based on feature points [J]. Neurocomputing, 2014, 133: 377-384) form a foreground mask by three-frame differencing and binarization, extract feature points with SURF, and then combine optical flow to judge the crowd's walking direction and density; Kishore et al. (Kishore P V V, Rahul R, Sravya K, et al. Crowd density analysis and tracking [C]. Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on. IEEE, 2015: 1209-1213) detect FAST (features from accelerated segment test) corners on the optical-flow map and convert the number of corners into a density estimation map. Indirect methods generally require foreground or motion information to be extracted to reduce background interference; in practical applications, illumination changes, persistent crowding and various other scene factors make foreground and motion extraction difficult, so these methods struggle to produce accurate estimates in practice.
Summary of the invention
The object of the present invention is to provide a crowd density estimation method based on multi-feature regression ensemble learning.
To achieve the above object, the present invention adopts the following technical solution: a crowd density estimation method based on multi-feature regression ensemble learning, comprising the following steps:
Image blocking step: obtain a video surveillance frame image of the scene, perform multi-level image blocking on the scene using a person's head width as a reference, scale the multi-level block images to a unified size, and apply Gamma correction preprocessing to obtain sub-image samples;
Crowd density estimation step: use a first-layer support vector regression (SVR) model to make coarse predictions from the D-SIFT, GLCM and GIST features of the sub-image samples; use a second-layer support vector regression model to make a fine prediction with the coarse prediction results as new features; sum the fine prediction results of all sub-image samples; and perform density estimation according to the crowd density classes set for the scene.
Preferably, the multi-level image blocking specifically comprises the following steps:
First, the scene region of interest is delimited and the size of the first-layer block image is determined. A reference pedestrian is selected; when the pedestrian's head has just fully entered the region of interest across its bottom boundary, the head width is measured as w pixels and the width of the first-layer block image is set to w*128/42 pixels. The reference pedestrian then continues walking; when the head width has shrunk to w*21/42 = w/2 pixels, the distance from the top of the pedestrian's head to the bottom boundary of the region of interest is taken as the height of the first-layer block image;
Next, the size of the second-layer block image is determined. A reference pedestrian is selected; when the pedestrian's head has just passed above the top edge of the first-layer block image, the head width is measured as w1 pixels and the width of the second-layer block image is set to w1*128/42 pixels. The reference pedestrian then continues walking; when the head width has shrunk to w1*21/42 = w1/2 pixels, the distance from the top of the pedestrian's head to the top edge of the first-layer block image is taken as the height of the second-layer block image;
The size of the third-layer block image is determined in the same way, and so on, until the multi-level block images cover the entire scene region of interest completely and without overlap.
Preferably, the multi-level block images are scaled to a unified size, with both width and height equal to 128 pixels.
Preferably, the step of obtaining sub-images from the multi-level block images through Gamma correction preprocessing comprises: first dividing the pixel value range 0~255 into three intervals and then converting the pixel value to an angle interval by interval;
where x is the pixel value, x0 and x1 are the preset pixel thresholds, E1 = [0, x0], E2 = [x0, x1] and E3 = [x1, 255], and the result of the conversion is the angle used below;
The Gamma value γ(x) is then determined from the converted angle using a trigonometric relation.
Adjusting the Gamma value only with the weight a makes its fluctuation excessive, so a weight b is introduced and the linear correction function shown in formula (3) is used for modification:
f(x) = \begin{cases} b\,\dfrac{x_0 - x}{x_0}, & x \in E_1 \\ 0, & x \in E_2 \\ b\,\dfrac{x_1 - x}{255 - x_1}, & x \in E_3 \end{cases} \qquad (3)
The final modified Gamma value is defined as
\hat{\gamma}(x) = f(x) + \gamma(x) \qquad (4)
The corrected value of each pixel is then obtained by applying Gamma correction with the modified Gamma value.
Preferably, the crowd density estimation step comprises the following:
D-SIFT, GLCM and GIST features are extracted from each sub-image sample;
each extracted feature is used to train a first-layer SVR coarse prediction model; for the test sample set, the coarse prediction models output the three coarse people-count predictions corresponding to the D-SIFT, GLCM and GIST features;
the coarse people-count predictions are used as new features to train a second-layer SVR fine prediction model; passing the coarse prediction results through the fine prediction model yields a more accurate people-count prediction for each sub-image sample, i.e. the fine prediction;
the fine predictions of all sub-image samples of a frame image are summed to count the number of people in the scene region of interest, and the crowd density estimate of the current frame is obtained according to the density classification standard of the scene region of interest.
Compared with the prior art, the present invention has the following beneficial effects: it overcomes the problems of scene illumination change, camera height and angle change, and pedestrian occlusion, and by using samples from multiple scenes, adopting multiple features and building the model with regression ensemble learning, it can realize crowd density estimation in multiple different scenes.
The invention is further described below in conjunction with the accompanying drawings and specific embodiments.
Description of the drawings
Fig. 1 is a flow diagram of the present invention;
Fig. 2 is a schematic diagram of block image size determination;
Fig. 3 is a schematic diagram of the correspondence between multi-level block images and the scene region of interest;
Fig. 4 is a flow chart of the regression ensemble learning.
Detailed description of the invention
In order to better understand the technical content of the present invention, the technical solution is further introduced and explained below in conjunction with specific embodiments.
As shown in Fig. 1, the flow diagram of the present invention, a crowd density estimation method based on multi-feature regression ensemble learning comprises the following steps:
Image blocking step: obtain a video surveillance frame image of the scene, perform multi-level image blocking on the scene using a person's head width as a reference, scale the multi-level block images to a unified size, and apply Gamma correction preprocessing to obtain sub-image samples;
Crowd density estimation step: use a first-layer support vector regression model to make coarse predictions from the D-SIFT, GLCM and GIST features of the sub-image samples; use a second-layer support vector regression model to make a fine prediction with the coarse prediction results as new features; sum the fine prediction results of all sub-image samples; and perform density estimation according to the crowd density classes set for the scene.
Further, Fig. 2 is a schematic diagram of block image size determination, and Fig. 3 is a schematic diagram of the correspondence between multi-level block images and the scene region of interest. In the above technical solution, the multi-level image blocking specifically comprises the following steps:
First, the scene region of interest is delimited and the size of the first-layer block image is determined. A reference pedestrian is selected; when the pedestrian's head has just fully entered the region of interest across its bottom boundary, the head width is measured as w pixels and the width of the first-layer block image is set to w*128/42 pixels. The reference pedestrian then continues walking; when the head width has shrunk to w*21/42 = w/2 pixels, the distance from the top of the pedestrian's head to the bottom boundary of the region of interest is taken as the height of the first-layer block image;
Next, the size of the second-layer block image is determined. A reference pedestrian is selected; when the pedestrian's head has just passed above the top edge of the first-layer block image, the head width is measured as w1 pixels and the width of the second-layer block image is set to w1*128/42 pixels. The reference pedestrian then continues walking; when the head width has shrunk to w1*21/42 = w1/2 pixels, the distance from the top of the pedestrian's head to the top edge of the first-layer block image is taken as the height of the second-layer block image;
The size of the third-layer block image is determined in the same way, and so on, until the multi-level block images cover the entire scene region of interest completely and without overlap.
With the person's head width as the reference for image blocking, the frame image is divided from near to far into multiple layers of blocks of different sizes, and the model is built and the people count predicted with the block as the basic element, which overcomes the problem of perspective projection effects.
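As a concrete illustration of this sizing rule, the short Python sketch below computes the width and height of one block layer from the reference pedestrian's head-width measurements. The 128/42 ratio is taken from the description above; the function name, argument names and the example measurements are hypothetical.

```python
def block_size_for_layer(head_width_at_lower_edge, y_lower_edge, y_at_half_width):
    """Compute (width, height) in pixels of one block layer.

    head_width_at_lower_edge: head width w (pixels) measured when the reference
        pedestrian's head has just crossed the lower edge of this layer.
    y_lower_edge: image row (pixels) of that lower edge.
    y_at_half_width: image row of the top of the head once the measured head
        width has shrunk to w/2 (i.e. w * 21/42).
    """
    # Block width is fixed to head_width * 128/42, so that after resizing the
    # block to 128x128 a head spans roughly 42 pixels.
    width = int(round(head_width_at_lower_edge * 128.0 / 42.0))
    # Block height is the vertical distance covered until the head width halves.
    height = int(round(abs(y_lower_edge - y_at_half_width)))
    return width, height


if __name__ == "__main__":
    # Hypothetical measurements: head width 40 px at the ROI bottom edge (row 560),
    # shrinking to 20 px when the top of the head reaches row 380.
    print(block_size_for_layer(40, 560, 380))  # -> (122, 180)
```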
After image blocking, many block images are obtained at different distances, of different sizes and at multiple levels, under different times and weather conditions. Before feature extraction, they need to be preprocessed to reduce environmental interference and lighten the training burden.
First, the multi-level block images are scaled to a unified size of 128 pixels in both width and height. Normalizing the sample size in this way allows block images taken at different distances to be trained as samples of the same size, without training separately for each distance, which greatly reduces the training burden.
Second, in order to reduce the influence of ambient illumination, Gamma correction needs to be applied to the block images. The multi-level block images are preprocessed with Gamma correction to obtain the sub-images; the specific steps are: first divide the pixel value range 0~255 into three intervals and then convert the pixel value to an angle interval by interval;
where x is the pixel value, x0 and x1 are the preset pixel thresholds, E1 = [0, x0], E2 = [x0, x1] and E3 = [x1, 255], and the result of the conversion is the angle used below;
The Gamma value γ(x) is then determined from the converted angle using a trigonometric relation.
Adjusting the Gamma value only with the weight a makes its fluctuation excessive, so a weight b is introduced and the linear correction function shown in formula (8) is used for modification:
f(x) = \begin{cases} b\,\dfrac{x_0 - x}{x_0}, & x \in E_1 \\ 0, & x \in E_2 \\ b\,\dfrac{x_1 - x}{255 - x_1}, & x \in E_3 \end{cases} \qquad (8)
The final modified Gamma value is defined as
\hat{\gamma}(x) = f(x) + \gamma(x) \qquad (9)
The corrected value of each pixel is then obtained by applying Gamma correction with the modified Gamma value.
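A minimal preprocessing sketch in Python (OpenCV/NumPy) is given below. It resizes a block image to 128x128 and applies the pixel-wise Gamma correction in the structure described above: intervals E1/E2/E3, a base Gamma value γ(x), the linear modifier f(x) of formula (8) weighted by b, and the modified Gamma value of formula (9) used as the exponent of a power-law transform. Because the angle-conversion formula and the exact trigonometric definition of γ(x) are not reproduced in this text, the base-gamma computation below uses an assumed cosine mapping purely as a placeholder, and the thresholds x0, x1 and the weight b are illustrative values, not values from the patent.

```python
import cv2
import numpy as np


def preprocess_block(block_bgr, x0=85, x1=170, b=0.3):
    """Resize a block image to 128x128 and apply pixel-wise Gamma correction.

    x0, x1 split [0, 255] into E1=[0,x0], E2=[x0,x1], E3=[x1,255]; b weights
    the linear correction f(x). All three values here are illustrative.
    """
    gray = cv2.cvtColor(block_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (128, 128), interpolation=cv2.INTER_AREA)
    x = gray.astype(np.float64)

    # Assumed base Gamma value gamma(x): a smooth cosine mapping of the pixel
    # value, varying from 1.5 at x=0 to 0.5 at x=255 (placeholder for the
    # patent's trigonometric formula, which is not reproduced here).
    angle = x / 255.0 * (np.pi / 2.0)
    gamma = 1.0 + 0.5 * np.cos(2.0 * angle)

    # Linear correction f(x) as in formula (8): active only in E1 and E3.
    f = np.zeros_like(x)
    e1 = x < x0
    e3 = x > x1
    f[e1] = b * (x0 - x[e1]) / x0
    f[e3] = b * (x1 - x[e3]) / (255.0 - x1)

    gamma_hat = gamma + f  # formula (9)

    # Power-law (Gamma) transform with the per-pixel exponent.
    corrected = 255.0 * np.power(x / 255.0, gamma_hat)
    return np.clip(corrected, 0, 255).astype(np.uint8)
```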
Further, as shown in Fig. 4 (the flow chart of the regression ensemble learning), the crowd density estimation step comprises the following:
D-SIFT, GLCM and GIST features are extracted from each sub-image sample, denoted x_D-SIFT, x_GLCM and x_GIST respectively.
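One possible realization of the three feature extractors, assuming OpenCV for dense SIFT and Gabor filtering and scikit-image for the GLCM statistics, is sketched below. The grid step, GLCM distances and angles, Gabor parameters and pooling choices are illustrative; in particular, gist_like_feature is only a rough GIST-style approximation (Gabor responses averaged over a spatial grid), not a full GIST implementation.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops


def dsift_feature(gray, step=8):
    """Dense SIFT: SIFT descriptors computed on a regular grid, mean-pooled."""
    sift = cv2.SIFT_create()
    keypoints = [cv2.KeyPoint(float(x), float(y), float(step))
                 for y in range(step, gray.shape[0] - step, step)
                 for x in range(step, gray.shape[1] - step, step)]
    _, descriptors = sift.compute(gray, keypoints)
    if descriptors is None:
        return np.zeros(128)
    return descriptors.mean(axis=0)  # 128-D pooled descriptor


def glcm_feature(gray):
    """GLCM statistics (contrast, correlation, energy, homogeneity) over 4 angles."""
    glcm = graycomatrix(gray, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])  # 16-D


def gist_like_feature(gray, orientations=4, blocks=4):
    """Rough GIST-style descriptor: Gabor responses averaged over a grid."""
    h, w = gray.shape
    feats = []
    for i in range(orientations):
        theta = i * np.pi / orientations
        kernel = cv2.getGaborKernel((15, 15), 4.0, theta, 10.0, 0.5)
        resp = np.abs(cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel))
        for by in range(blocks):
            for bx in range(blocks):
                cell = resp[by * h // blocks:(by + 1) * h // blocks,
                            bx * w // blocks:(bx + 1) * w // blocks]
                feats.append(cell.mean())
    return np.array(feats)  # orientations * blocks^2 dimensions


def extract_features(sub_image_128):
    gray = sub_image_128 if sub_image_128.ndim == 2 else cv2.cvtColor(sub_image_128, cv2.COLOR_BGR2GRAY)
    return dsift_feature(gray), glcm_feature(gray), gist_like_feature(gray)
```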
Each extracted feature is used to train a first-layer SVR coarse prediction model. For the test sample set, three models f1(x_D-SIFT), f2(x_GLCM) and f3(x_GIST) are obtained by regression fitting with the first-layer support vector regression; the models output the predictions y_D-SIFT, y_GLCM and y_GIST corresponding to the D-SIFT, GLCM and GIST features, i.e. the coarse people-count predictions, and these three predictions are combined into a new feature:
x_ALL = [y_D-SIFT, y_GLCM, y_GIST]    (11)
This new feature is used to train the second-layer SVR fine prediction model f_Final(x_ALL); passing the coarse people-count predictions through the fine prediction model yields a more accurate people-count prediction y_Final for the sub-image, i.e. the fine prediction. The regression ensemble learning consists of two parts, a training (learning) part and a prediction (application) part, as shown in Fig. 4. The training part trains the regression models: features are first extracted from a number of sub-images, and the number of people in each sub-image is counted as its label, forming the sample set of the training part; this set is divided into a training set and a test set, the training set is used to train the coarse regression models corresponding to the three features, and the test set obtains the corresponding predicted outputs, i.e. the coarse predictions, from the coarse regression models. The coarse predictions of the three models are combined as new features and, together with the people-count labels, form a new sample set, which is again divided into a new training set and a new test set; the new training set is used to train the fine regression model, and the new test set obtains fine predictions from the fine regression model to verify whether the model is accurate.
The prediction part predicts the number of people with the trained models. Features are extracted from a test sample with an unknown number of people, the coarse regression models trained in the training part give the corresponding coarse predictions, and the three coarse predictions are taken as a new feature and input to the fine regression model to obtain the fine prediction, which is the final people-count prediction.
Since different features have different sensitivities to crowd density, the two-layer regression allows them to compensate for each other's deficiencies and thus improves the prediction accuracy.
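The two-layer regression ensemble can be sketched with scikit-learn's SVR as follows, assuming the per-sample feature matrices and people-count labels are NumPy arrays with aligned rows. The 50/50 split used to generate coarse predictions for the second layer and the SVR hyperparameters are illustrative assumptions; the patent does not prescribe specific settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split


def train_two_layer_ensemble(X_dsift, X_glcm, X_gist, counts):
    """Train three first-layer (coarse) SVRs, one per feature type, then a
    second-layer (fine) SVR on their stacked predictions."""
    idx_a, idx_b = train_test_split(np.arange(len(counts)), test_size=0.5, random_state=0)
    feats = [X_dsift, X_glcm, X_gist]

    # First layer: one coarse regressor per feature, fit on the first half.
    coarse = [SVR(kernel="rbf", C=10.0).fit(X[idx_a], counts[idx_a]) for X in feats]

    # Coarse predictions on the held-out half become the stacked feature x_ALL.
    x_all = np.column_stack([m.predict(X[idx_b]) for m, X in zip(coarse, feats)])

    # Second layer: the fine regressor maps [y_DSIFT, y_GLCM, y_GIST] to the count.
    fine = SVR(kernel="rbf", C=10.0).fit(x_all, counts[idx_b])
    return coarse, fine


def predict_count(coarse, fine, x_dsift, x_glcm, x_gist):
    """Fine (final) people-count prediction for one sub-image."""
    y_coarse = np.array([[m.predict(f.reshape(1, -1))[0]
                          for m, f in zip(coarse, (x_dsift, x_glcm, x_gist))]])
    return float(fine.predict(y_coarse)[0])
```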
The fine predictions of all sub-image samples of a frame image are summed to count the number of people in the scene region of interest.
According to the density classification standard of the scene region of interest, the crowd density estimate of the current frame is obtained. For example, suppose the maximum number of people n_max that the current scene can accommodate is taken as the standard and an even classification into five levels is used: [0, n_max/5], [n_max/5, 2n_max/5], [2n_max/5, 3n_max/5], [3n_max/5, 4n_max/5] and [4n_max/5, ∞), denoted VL (very low), L (low), M (medium), H (high) and VH (very high) respectively. Comparing the number of people counted in the scene region of interest against this standard completes the crowd density estimation.
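Mapping the summed count to the five density levels is then a direct lookup; a minimal sketch, with n_max supplied by the scene configuration:

```python
def density_level(total_count, n_max):
    """Map a people count to the five-level scale VL/L/M/H/VH using even
    intervals of width n_max / 5 (the last interval is open-ended)."""
    levels = ["VL", "L", "M", "H", "VH"]
    step = n_max / 5.0
    index = min(int(total_count // step), 4)
    return levels[index]


# Example: a scene that holds at most 200 people, 130 people counted -> "H"
print(density_level(130, 200))
```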
The above merely illustrates the technical content of the present invention with an embodiment so that it is easier for readers to understand; it does not mean that the embodiments of the present invention are limited thereto, and any technical extension or re-creation made according to the present invention falls within the protection of the present invention.

Claims (5)

1. A crowd density estimation method based on multi-feature regression ensemble learning, characterized in that it comprises the following steps:
an image blocking step: obtaining a video surveillance frame image of a scene, performing multi-level image blocking on the scene using a person's head width as a reference, scaling the multi-level block images to a unified size, and applying Gamma correction preprocessing to obtain sub-image samples;
a crowd density estimation step: using a first-layer support vector regression model to make coarse predictions from the D-SIFT, GLCM and GIST features of the sub-image samples, using a second-layer support vector regression model to make a fine prediction with the coarse prediction results as new features, summing the fine prediction results of all sub-image samples, and performing density estimation according to the crowd density classes set for the scene.
2. The crowd density estimation method based on multi-feature regression ensemble learning according to claim 1, characterized in that the multi-level image blocking specifically comprises the following steps:
first delimiting the scene region of interest and determining the size of the first-layer block image: a reference pedestrian is selected; when the pedestrian's head has just fully entered the region of interest across its bottom boundary, the head width is measured as w pixels and the width of the first-layer block image is set to w*128/42 pixels; the reference pedestrian then continues walking, and when the head width has shrunk to w*21/42 = w/2 pixels, the distance from the top of the pedestrian's head to the bottom boundary of the region of interest is taken as the height of the first-layer block image;
then determining the size of the second-layer block image: a reference pedestrian is selected; when the pedestrian's head has just passed above the top edge of the first-layer block image, the head width is measured as w1 pixels and the width of the second-layer block image is set to w1*128/42 pixels; the reference pedestrian then continues walking, and when the head width has shrunk to w1*21/42 = w1/2 pixels, the distance from the top of the pedestrian's head to the top edge of the first-layer block image is taken as the height of the second-layer block image;
and determining the size of the third-layer block image in the same way, and so on, until the multi-level block images cover the entire scene region of interest completely and without overlap.
3. The crowd density estimation method based on multi-feature regression ensemble learning according to claim 2, characterized in that the multi-level block images are scaled to a unified size of 128 pixels in both width and height.
4. The crowd density estimation method based on multi-feature regression ensemble learning according to claim 3, characterized in that the step of obtaining sub-images from the multi-level block images through Gamma correction preprocessing comprises: first dividing the pixel value range 0~255 into three intervals and then converting the pixel value to an angle interval by interval;
wherein x is the pixel value, x0 and x1 are the preset pixel thresholds, E1 = [0, x0], E2 = [x0, x1] and E3 = [x1, 255], and the result of the conversion is the angle used below;
then determining the Gamma value γ(x) from the converted angle using a trigonometric relation;
wherein, since adjusting the Gamma value only with the weight a makes its fluctuation excessive, a weight b is introduced and the linear correction function shown in formula (3) is used for modification:
f(x) = \begin{cases} b\,\dfrac{x_0 - x}{x_0}, & x \in E_1 \\ 0, & x \in E_2 \\ b\,\dfrac{x_1 - x}{255 - x_1}, & x \in E_3 \end{cases} \qquad (3)
the final modified Gamma value being defined as
\hat{\gamma}(x) = f(x) + \gamma(x) \qquad (4)
and the corrected value of each pixel being obtained by applying Gamma correction with the modified Gamma value.
5. The crowd density estimation method based on multi-feature regression ensemble learning according to any one of claims 1 to 4, characterized in that the crowd density estimation step comprises:
extracting D-SIFT, GLCM and GIST features from each sub-image sample;
using each extracted feature to train a first-layer support vector regression coarse prediction model, and for the test sample set obtaining from the coarse prediction models the three coarse people-count predictions corresponding to the D-SIFT, GLCM and GIST features;
using the coarse people-count predictions as new features to train a second-layer support vector regression fine prediction model, and passing the coarse prediction results through the fine prediction model to obtain a more accurate people-count prediction for each sub-image sample, i.e. the fine prediction;
summing the fine predictions of all sub-image samples of a frame image to count the number of people in the scene region of interest, and obtaining the crowd density estimate of the current frame according to the density classification standard of the scene region of interest.
CN201610374700.6A 2016-05-31 2016-05-31 Crowd density estimation method based on multi-feature regression type ensemble learning Active CN106056078B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610374700.6A CN106056078B (en) 2016-05-31 2016-05-31 Crowd density estimation method based on multi-feature regression type ensemble learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610374700.6A CN106056078B (en) 2016-05-31 2016-05-31 Crowd density estimation method based on multi-feature regression type ensemble learning

Publications (2)

Publication Number Publication Date
CN106056078A true CN106056078A (en) 2016-10-26
CN106056078B CN106056078B (en) 2021-09-14

Family

ID=57172224

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610374700.6A Active CN106056078B (en) 2016-05-31 2016-05-31 Crowd density estimation method based on multi-feature regression type ensemble learning

Country Status (1)

Country Link
CN (1) CN106056078B (en)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103761723A (en) * 2014-01-22 2014-04-30 西安电子科技大学 Image super-resolution reconstruction method based on multi-layer supporting vectors

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
GABRIELA CSURKA et al.: "Visual categorization with bags of keypoints", RESEARCHGATE *
OLIVA A et al.: "Modeling the shape of the scene: a holistic representation of the spatial envelope", INTERNATIONAL JOURNAL OF COMPUTER VISION *
X. WU et al.: "Crowd Density Estimation Using Texture Analysis and Learning", 2006 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS *
侯鹏鹏: "Implementation of a crowd density estimation method based on GLCM texture feature analysis", China Security & Protection (中国安防) *
肖保良: "Multi-scene classification based on fusion of GIST and PHOG features", Journal of North University of China (Natural Science Edition) (中北大学学报(自然科学版)) *
覃勋辉 et al.: "Crowd counting in scenes with multiple crowd densities", Journal of Image and Graphics (中国图象图形学报) *
郭婷: "Research on people counting algorithms for large-scale crowds", China Master's Theses Full-text Database (Information Science and Technology) (中国优秀硕士学位论文全文数据库(信息科技辑)) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106878952A (en) * 2017-03-20 2017-06-20 上海迪爱斯通信设备有限公司 The Forecasting Methodology and device of area people quantity
CN107480786A (en) * 2017-08-07 2017-12-15 复旦大学 Recognition with Recurrent Neural Network track likelihood probability computational methods based on output state limitation
CN107480786B (en) * 2017-08-07 2021-04-30 复旦大学 Output state limitation-based recurrent neural network track likelihood probability calculation method
CN108985256A (en) * 2018-08-01 2018-12-11 曜科智能科技(上海)有限公司 Based on the multiple neural network demographic method of scene Density Distribution, system, medium, terminal
CN110598630A (en) * 2019-09-12 2019-12-20 江苏航天大为科技股份有限公司 Method for detecting passenger crowding degree of urban rail transit based on convolutional neural network
CN112733624A (en) * 2020-12-26 2021-04-30 电子科技大学 People stream density detection method, system storage medium and terminal for indoor dense scene
CN112733624B (en) * 2020-12-26 2023-02-03 电子科技大学 People stream density detection method, system storage medium and terminal for indoor dense scene

Also Published As

Publication number Publication date
CN106056078B (en) 2021-09-14

Similar Documents

Publication Publication Date Title
CN108805093B (en) Escalator passenger tumbling detection method based on deep learning
CN107967451B (en) Method for counting crowd of still image
CN108615027B (en) Method for counting video crowd based on long-term and short-term memory-weighted neural network
Braham et al. Deep background subtraction with scene-specific convolutional neural networks
CN108492319B (en) Moving target detection method based on deep full convolution neural network
CN102542289B (en) Pedestrian volume statistical method based on plurality of Gaussian counting models
CN103310444B (en) A kind of method of the monitoring people counting based on overhead camera head
CN104166841A (en) Rapid detection identification method for specified pedestrian or vehicle in video monitoring network
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
CN106056078A (en) Crowd density estimation method based on multi-feature regression ensemble learning
Kobayashi et al. Three-way auto-correlation approach to motion recognition
CN111709300B (en) Crowd counting method based on video image
CN110929593A (en) Real-time significance pedestrian detection method based on detail distinguishing and distinguishing
Sengar et al. Motion detection using block based bi-directional optical flow method
CN106204594A (en) A kind of direction detection method of dispersivity moving object based on video image
CN109919053A (en) A kind of deep learning vehicle parking detection method based on monitor video
CN113822352B (en) Infrared dim target detection method based on multi-feature fusion
Karpagavalli et al. Estimating the density of the people and counting the number of people in a crowd environment for human safety
CN106529441B (en) Depth motion figure Human bodys' response method based on smeared out boundary fragment
CN109242019A (en) A kind of water surface optics Small object quickly detects and tracking
Usmani et al. Particle swarm optimization with deep learning for human action recognition
CN114332644B (en) Large-view-field traffic density acquisition method based on video satellite data
Moreno-Garcia et al. Video sequence motion tracking by fuzzification techniques
CN105118073A (en) Human body head target identification method based on Xtion camera
Khude et al. Object detection, tracking and counting using enhanced BMA on static background videos

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant