CN104574352A - Crowd density grade classification method based on foreground image - Google Patents

Crowd density grade classification method based on foreground image

Info

Publication number
CN104574352A
CN104574352A
Authority
CN
China
Prior art keywords
image
crowd
foreground
density
crowd density
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410444117.9A
Other languages
Chinese (zh)
Inventor
印勇
邵坤艳
吴明仙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN201410444117.9A priority Critical patent/CN104574352A/en
Publication of CN104574352A publication Critical patent/CN104574352A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/262Analysis of motion using transform domain methods, e.g. Fourier domain methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30221Sports video; Sports image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a novel crowd density grade classification method that classifies crowd density using a foreground image. The moving foreground is extracted and Fourier-transformed, and the resulting spectrum image is treated directly as the texture of the image; texture analysis is then applied to it to extract a feature vector that characterizes the crowd density. By avoiding analysis of the irrelevant background, the algorithm reduces the amount of computation, adapts better to complicated backgrounds, and gains robustness. The method is applicable to crowd density grade classification against complicated backgrounds.

Description

A crowd density grade classification method based on a foreground image
Technical field
The present invention relates to the field of intelligent computer video processing, and specifically to a crowd density grade classification method based on a foreground image. It is especially suited to crowd density analysis in scenes subject to interference such as illumination changes and wind.
Background art
As the world population grows, more and more people gather in the same places, and the resulting congestion is increasingly serious, sometimes even causing grave accidents. In places with heavy pedestrian flow, such as stadiums, shopping malls, schools, stations, and various public entertainment venues, crowding occurs frequently. If such complicated, high-traffic places are left unsupervised, a series of serious accidents can follow; crowd density detection in public places therefore has far-reaching significance.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art by proposing a crowd density grade classification method based on a foreground image. Compared with existing techniques, the method performs texture analysis directly on the spectrum of the foreground image, which increases accuracy and reduces the running time of the algorithm. In addition, because the background is continuously updated, the method has good robustness and efficiency.
The present invention is achieved by the following technical solutions, comprises the following steps:
The first step: binarize the current video frame.
Subtract the acquired background image from the current frame, take the absolute value of the difference, and binarize it with a predefined threshold T: parts below T are assigned 0 and parts at or above T are assigned 1, yielding a binary image in which only the foreground is nonzero. The threshold T is computed as:
$$g(i,j)=\lvert f(i,j)-bg(i,j)\rvert,\qquad T=\sqrt{\max_{i,j} g(i,j)}\tag{1}$$
$$b(i,j)=\begin{cases}1,& g(i,j)\ge T\\0,& g(i,j)<T\end{cases}\tag{2}$$
Here f(i,j) is the current frame, bg(i,j) is the acquired background image, and b(i,j) is the binarized difference image (the foreground mask); all are M × N matrices. Since the mean of the pixels represents the average gray level of the whole image, binarizing the image by the gray-level difference is of great significance.
The second step: use the binary image from the first step to obtain the foreground image.
$$fg(i,j)=f(i,j)\cdot b(i,j)\tag{3}$$
Here b(i,j) has been converted to single-precision type; fg(i,j) is the image containing only the moving foreground, in which every non-moving background pixel has been turned into a black pixel of value 0 by this operation.
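The first two steps can be sketched in NumPy as follows. This is a minimal illustration under the equations above, not the patent's implementation; the function and array names are assumptions.

```python
import numpy as np

def extract_foreground(frame, background):
    """Sketch of steps 1-2: threshold the frame/background difference
    with T = sqrt(max difference), then mask the frame (eqs. 1-3)."""
    f = frame.astype(np.float64)
    bg = background.astype(np.float64)
    g = np.abs(f - bg)                   # absolute difference, eq. (1)
    T = np.sqrt(g.max())                 # data-driven threshold
    b = (g >= T).astype(np.float64)      # binary foreground mask, eq. (2)
    fg = f * b                           # element-wise product, eq. (3)
    return b, fg
```

Because non-moving pixels become exactly 0, `fg` is effectively a sparse matrix, which is what the text credits for the reduced storage and computation.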
The third step: apply the Fourier transform to the foreground image obtained in the second step and compute its spectrogram.
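A common way to realize this step is a centered log-magnitude spectrum; the sketch below is one such rendering (the log compression and 8-bit rescaling are display conventions assumed here, not specified by the patent).

```python
import numpy as np

def spectrum_image(fg):
    """Centered magnitude spectrum of the foreground image (step 3)."""
    F = np.fft.fftshift(np.fft.fft2(fg))   # 2-D FFT, DC term moved to centre
    mag = np.log1p(np.abs(F))              # log(1 + |F|) magnitude
    # rescale to 8-bit gray levels so it can be fed to the GLCM step;
    # assumes fg is not identically zero
    return (255 * mag / mag.max()).astype(np.uint8)
```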
The fourth step: perform texture analysis on the spectrogram obtained in the third step. A low-density crowd appears as a coarse texture and a high-density crowd as a fine texture; the spectrogram of a low-density crowd has fewer high-frequency components, while that of a high-density crowd has more. The spectrogram is treated directly as a texture image, and a gray-level co-occurrence matrix is used for texture analysis to obtain a feature vector that characterizes the crowd density. Classification by a support vector machine then divides the crowd into low density and high density.
The foreground image extracted by the above method accurately reflects the moving parts of the video; all non-moving parts are labeled 0, so the result is in effect a sparse matrix that is easy to store and compute with. This reduces the amount of computation, shortens the computing time, avoids unnecessary interference from the background, and improves the robustness of the algorithm, which is of practical significance.
Brief description of the drawings
Fig. 1 is the overall flow diagram of the implementation of the invention.
Fig. 2 is the extracted background image.
Fig. 3 is the extracted foreground image.
Fig. 4 shows the spectrograms of crowds of different density grades.
Embodiment
The embodiments of the invention are described in detail below with reference to the accompanying drawings. The present embodiment is implemented on the premise of the technical scheme of the invention, and a detailed implementation mode and concrete operating procedure are given, but the protection scope of the invention is not limited to the following embodiment.
Embodiment:
The image frames used in this implementation come from a standard database.
The crowd density grade classification based on a foreground image of this embodiment follows the main flow shown in Fig. 1 and comprises the following concrete steps:
The first step: use a Gaussian mixture model to obtain a background image free of moving foreground, as shown in Fig. 2. The probability of a pixel x at a given moment is
$$p(x)=\sum_{i=1}^{k}p(G_i)\,p(x\mid G_i)=\sum_{i=1}^{k}\omega_{i,t}\,g(x;\mu_{i,t},\Sigma_{i,t})\tag{4}$$
Here k is the number of Gaussian distributions, ω_{i,t} is the weight of the i-th Gaussian, μ_{i,t} and Σ_{i,t} are the mean vector and covariance matrix of the i-th Gaussian, and g is the Gaussian density function. Under this model, the posterior probability p(B|x) that x belongs to the background can be further expressed as
$$p(B\mid x)=\sum_{i=1}^{k}p(B\mid G_i)\,p(G_i\mid x)=\sum_{i=1}^{k}p(B\mid G_i)\,\frac{p(x\mid G_i)\,p(G_i)}{p(x)}=\frac{\sum_{i=1}^{k}p(x\mid G_i)\,p(G_i)\,p(B\mid G_i)}{\sum_{i=1}^{k}p(x\mid G_i)\,p(G_i)}\tag{5}$$
Here G_i is the i-th Gaussian distribution and p(G_i) is its weight in the mixture, which in practice is a prior. When a new observation x_{t+1} arrives, its pixel value is compared with the means μ_{i,t} of the k Gaussians, the probability that the point falls into each Gaussian is computed, and the matching Gaussian is selected by the judging rule
$$\lvert x_{t+1}-\mu_{i,t}\rvert\le C\,\sigma_{i,t}\tag{6}$$
where C is a constant. When a matching Gaussian exists, its weight, mean, and variance parameters are updated from the current pixel:
$$\omega_{i,t+1}=(1-\alpha)\,\omega_{i,t}+\alpha\,M(t)\tag{7}$$
Here α is the learning rate, which is related to time; M(t) is 1 for the matched Gaussian and 0 for the others.
$$\mu_{t+1}=(1-\rho)\,\mu_{t}+\rho\,x_{t+1}\tag{9}$$
$$\sigma_{t+1}^{2}=(1-\rho)\,\sigma_{t}^{2}+\rho\,(x_{t+1}-\mu_{t+1})^{T}(x_{t+1}-\mu_{t+1})\tag{10}$$
$$\rho=\alpha\,g(x_{t+1};\mu_{i},\sigma_{i}^{2})\tag{11}$$
All matched Gaussian distributions are sorted by ω/σ in descending order; the Gaussians that can represent the background are then selected from them, and the Gaussian distribution characterizing the background is finally determined.
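The per-pixel update described by eqs. (6)-(11) can be sketched as follows. This is a minimal scalar (grayscale) illustration in the Stauffer-Grimson style the text follows; the values of `alpha` and `C` are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def update_pixel_gmm(x, w, mu, var, alpha=0.01, C=2.5):
    """One online update of a per-pixel Gaussian mixture (eqs. 6-11)."""
    k = len(w)
    match = None
    for i in range(k):
        if abs(x - mu[i]) <= C * np.sqrt(var[i]):   # matching rule, eq. (6)
            match = i
            break
    for i in range(k):
        M = 1.0 if i == match else 0.0
        w[i] = (1 - alpha) * w[i] + alpha * M       # weight update, eq. (7)
    if match is not None:
        i = match
        g = np.exp(-0.5 * (x - mu[i]) ** 2 / var[i]) / np.sqrt(2 * np.pi * var[i])
        rho = alpha * g                             # eq. (11)
        mu[i] = (1 - rho) * mu[i] + rho * x         # mean update, eq. (9)
        var[i] = (1 - rho) * var[i] + rho * (x - mu[i]) ** 2  # eq. (10)
    w /= w.sum()                                    # renormalise weights
    # rank distributions with large w / sigma first: background candidates
    order = np.argsort(-(w / np.sqrt(var)))
    return w, mu, var, order
```

The returned `order` realizes the ω/σ sorting used to pick the Gaussians that represent the background.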
The second step: obtain the foreground image by background subtraction. First convert the currently acquired frame to grayscale, subtract the background image from the current frame, and binarize the difference with a threshold: moving parts are labeled 1 and non-moving background parts are labeled 0. Multiplying the binarized image by the current frame then yields only the moving foreground, with all non-moving parts labeled 0; the result is shown in Fig. 3.
$$g(i,j)=\lvert f(i,j)-bg(i,j)\rvert,\qquad T=\sqrt{\max_{i,j} g(i,j)}\tag{12}$$
$$b(i,j)=\begin{cases}1,& g(i,j)>T\\0,& g(i,j)\le T\end{cases}\tag{13}$$
$$fg(i,j)=f(i,j)\cdot b(i,j)\tag{14}$$
Here f(i,j) is the current frame, bg(i,j) is the obtained background image, and b(i,j) is the binarized difference image; all are M × N matrices. fg is the image containing only the moving foreground, with the non-moving background parts marked 0.
The third step: apply the Fourier transform to the image obtained in the second step. As shown in Fig. 4, high-density and low-density crowd images exhibit different spectrum shapes.
The fourth step: perform texture analysis on the spectrogram obtained in the third step. A low-density crowd appears as a coarse texture and its spectrogram contains fewer high-frequency components; a high-density crowd appears as a fine texture, its spectrogram contains richer high-frequency components, and its spectrum shape is more complicated. The spectrogram is treated directly as the texture of the image, and the features distinguishing the spectra of high-density and low-density crowds are extracted by texture analysis of the spectrogram. The gray-level co-occurrence matrix is an effective texture-analysis method; applying it to the spectrogram yields four feature values: energy, contrast, entropy, and correlation. These feature values form the feature vector, which is convenient for the support vector machine to classify.
(1) energy
$$W=\sum_{i=1}^{N}\sum_{j=1}^{N}f^{2}(i,j\mid d,\theta)\tag{15}$$
Here f(i,j|d,θ) is the probability that the gray-level pair (i,j) occurs at distance d along direction θ. Energy is the sum of squares of the elements of the gray-level co-occurrence matrix, and is also called the angular second moment. The denser the crowd, the larger the energy.
(2) contrast
$$CON=\sum_{i=1}^{N}\sum_{j=1}^{N}(i-j)^{2}\,f(i,j\mid d,\theta)\tag{16}$$
Contrast reflects the clarity of the image and the depth of the texture grooves. Deeper texture grooves give larger contrast and a clearer visual effect; conversely, small contrast means shallow grooves and a blurry effect. The more pixel pairs with large gray-level difference, the larger this value; the larger the elements far from the diagonal of the co-occurrence matrix, the larger the contrast. The denser the crowd, the larger the contrast.
(3) entropy
$$E=-\sum_{i=1}^{N}\sum_{j=1}^{N}f(i,j\mid d,\theta)\,\log f(i,j\mid d,\theta)\tag{17}$$
Entropy is a measure of the information content of the image, of its texture information, and of randomness. It is largest when all elements of the co-occurrence matrix are almost equal, i.e. when the elements are scattered throughout the matrix. It characterizes the non-uniformity and complexity of the texture in the image. The larger the crowd density, the smaller the entropy.
(4) correlativity
$$COR=\frac{\sum_{i=1}^{N}\sum_{j=1}^{N}i\,j\,f(i,j\mid d,\theta)-\mu_{1}\mu_{2}}{\sigma_{1}\sigma_{2}}\tag{18}$$
where μ₁, μ₂, σ₁, σ₂ are respectively:
$$\mu_{1}=\sum_{i=1}^{N}i\sum_{j=1}^{N}f(i,j\mid d,\theta)\tag{19}$$
$$\mu_{2}=\sum_{j=1}^{N}j\sum_{i=1}^{N}f(i,j\mid d,\theta)\tag{20}$$
$$\sigma_{1}^{2}=\sum_{i=1}^{N}(i-\mu_{1})^{2}\sum_{j=1}^{N}f(i,j\mid d,\theta)\tag{21}$$
$$\sigma_{2}^{2}=\sum_{j=1}^{N}(j-\mu_{2})^{2}\sum_{i=1}^{N}f(i,j\mid d,\theta)\tag{22}$$
Correlation measures the similarity of the co-occurrence matrix elements along the row or column direction, so its magnitude reflects the local gray-level correlation in the image. When the matrix element values are even and equal, the correlation is large; conversely, if the element values differ greatly, the correlation is small. The smaller the crowd density, the larger the correlation.
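The four features of eqs. (15)-(18) can be computed with a small NumPy sketch. The displacement (1, 1) corresponds to the d = 1, θ = 45° setting used in the embodiment; the quantization to `levels` gray levels is an assumption for compactness, and the sketch assumes a non-constant input image (otherwise σ₁σ₂ = 0).

```python
import numpy as np

def glcm_features(img, offset=(1, 1), levels=8):
    """Gray-level co-occurrence matrix and the four features of step 4:
    energy (15), contrast (16), entropy (17), correlation (18)."""
    q = (img.astype(np.float64) * (levels - 1) / max(img.max(), 1)).astype(int)
    di, dj = offset
    P = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - di):
        for j in range(w - dj):
            P[q[i, j], q[i + di, j + dj]] += 1       # count co-occurring pairs
    P /= P.sum()                                      # normalise to probabilities
    idx = np.arange(levels)
    I, J = np.meshgrid(idx, idx, indexing="ij")
    energy = (P ** 2).sum()                           # eq. (15)
    contrast = ((I - J) ** 2 * P).sum()               # eq. (16)
    nz = P[P > 0]
    entropy = -(nz * np.log(nz)).sum()                # eq. (17), nonzero terms only
    mu1, mu2 = (I * P).sum(), (J * P).sum()           # eqs. (19)-(20)
    s1 = np.sqrt((((I - mu1) ** 2) * P).sum())        # eq. (21)
    s2 = np.sqrt((((J - mu2) ** 2) * P).sum())        # eq. (22)
    corr = ((I * J * P).sum() - mu1 * mu2) / (s1 * s2)  # eq. (18)
    return np.array([energy, contrast, entropy, corr])
```

On a perfect horizontal gradient, every co-occurring pair is (j, j+1), so the correlation comes out exactly 1, a useful sanity check on the implementation.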
The fifth step: classify the feature vectors extracted in the fourth step with a support vector machine using a Gaussian radial basis kernel function.
Since the crowd is only divided into high density and low density, this is in fact a simple binary classification. Set d = 1 and θ = 45° in the gray-level co-occurrence matrix, train the support vector machine on an initially given boundary between dense and sparse crowds, and apply it in practice to divide the crowd into low density and high density.
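A classifier of the kind the fifth step describes can be sketched with scikit-learn's RBF-kernel SVM. The training vectors below are synthetic placeholders ordered (energy, contrast, entropy, correlation) and follow the density trends stated in the text; they are not data from the patent.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic feature vectors: per the text, a denser crowd has larger
# energy and contrast but smaller entropy and correlation.
X_train = np.array([
    [0.05, 1.0, 3.0, 0.90],   # low density
    [0.06, 1.1, 3.1, 0.85],   # low density
    [0.30, 4.0, 1.2, 0.20],   # high density
    [0.28, 4.2, 1.3, 0.15],   # high density
])
y_train = np.array([0, 0, 1, 1])          # 0 = low density, 1 = high density

clf = SVC(kernel="rbf", gamma="scale")    # Gaussian radial basis kernel
clf.fit(X_train, y_train)

sample = np.array([[0.29, 3.9, 1.25, 0.18]])  # a high-density-like vector
label = clf.predict(sample)[0]
```

In a real deployment the training boundary would come from labeled crowd frames, as the embodiment's "initially given boundary" suggests.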
Experiments prove that the present embodiment performs crowd density grade classification better than previous methods. The real-time updating of the background guarantees the accuracy of the experiment and meets the requirement of real-time background extraction. The foreground-image-based crowd density estimation method described here reduces the amount of computation and improves the robustness of the algorithm.

Claims (5)

1. A novel crowd density grade classification method, being a crowd density analysis based on a foreground image, comprising the following steps:
(1) subtracting the background image from the current frame and binarizing the result of the subtraction;
(2) taking the element-wise product of the current frame and the above binarized image, so that the moving foreground part is kept intact and the motionless background part is assigned 0;
(3) applying the Fourier transform to the image containing only the moving foreground;
(4) directly treating the Fourier spectrum as the texture of the image and performing texture analysis on the spectrum image to obtain feature values that characterize the crowd density.
2. The crowd density grade classification method based on a foreground image according to claim 1, characterized in that the threshold used for binarization in step (1) is computed as follows, where f(i,j) is the current video frame, bg(i,j) is the background image, and T is the chosen threshold:
$$g(i,j)=\lvert f(i,j)-bg(i,j)\rvert,\qquad T=\sqrt{\max_{i,j} g(i,j)}$$
3. The crowd density grade classification method based on a foreground image according to claim 1, characterized in that step (2) multiplies the current frame by the binarized image to obtain the moving foreground, where b(i,j) is the binarized image and fg is the final foreground image:
$$fg(i,j)=f(i,j)\cdot b(i,j)$$
4. The crowd density grade classification method based on a foreground image according to claim 1, characterized in that step (3) applies the Fourier transform to the image containing only the moving foreground.
5. The crowd density grade classification method based on a foreground image according to claim 1, characterized in that step (4) performs texture analysis directly on the Fourier spectrum: since a low-density crowd appears as a coarse texture and a high-density crowd as a fine texture, after Fourier-transforming images of different crowd densities the spectra of high-density and low-density crowd images clearly show a large difference. The spectrogram can therefore be treated directly as a texture image; texture analysis is performed on it with a gray-level co-occurrence matrix, feature values are extracted, and a support vector machine classifies them to judge whether the crowd is of low density or high density.
CN201410444117.9A 2014-09-02 2014-09-02 Crowd density grade classification method based on foreground image Pending CN104574352A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410444117.9A CN104574352A (en) 2014-09-02 2014-09-02 Crowd density grade classification method based on foreground image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410444117.9A CN104574352A (en) 2014-09-02 2014-09-02 Crowd density grade classification method based on foreground image

Publications (1)

Publication Number Publication Date
CN104574352A true CN104574352A (en) 2015-04-29

Family

ID=53090328

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410444117.9A Pending CN104574352A (en) 2014-09-02 2014-09-02 Crowd density grade classification method based on foreground image

Country Status (1)

Country Link
CN (1) CN104574352A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106096072A (en) * 2016-05-17 2016-11-09 北京交通大学 Dense crowd emulation mode based on intelligent body
CN106295661A (en) * 2016-08-15 2017-01-04 北京林业大学 The plant species identification method of leaf image multiple features fusion and device
CN108764124A (en) * 2018-05-25 2018-11-06 天津科技大学 The detection method and device of crowd movement
CN108961201A (en) * 2017-05-19 2018-12-07 广州康昕瑞基因健康科技有限公司 Image definition recognition methods and auto focusing method
CN110321869A (en) * 2019-07-10 2019-10-11 应急管理部天津消防研究所 Personnel's detection and extracting method based on Multiscale Fusion network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
WEI-LIEH HSU et al.: "Crowd Density Estimation Based on Frequency Analysis", 2011 Seventh International Conference on Intelligent Information Hiding and Multimedia Signal Processing *
刘小锐 et al.: "Crowd density estimation in the frequency domain based on the gray-level co-occurrence matrix", Microcomputer Information (《微计算机信息》) *
刘福美 et al.: "A crowd density estimation method based on image processing", Computer & Digital Engineering (《计算机与数字工程》) *
杨裕 et al.: "Automatic crowd density estimation in complex scenes", Modern Electronics Technique (《现代电子技术》) *
谢鹏程: "Research and implementation of crowd density estimation in real-time surveillance of complex scenes", China Master's Theses Full-text Database, Information Science and Technology (《中国优秀硕士学位论文全文数据库信息科技辑》) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106096072A (en) * 2016-05-17 2016-11-09 北京交通大学 Dense crowd emulation mode based on intelligent body
CN106096072B (en) * 2016-05-17 2019-06-25 北京交通大学 Dense crowd emulation mode based on intelligent body
CN106295661A (en) * 2016-08-15 2017-01-04 北京林业大学 The plant species identification method of leaf image multiple features fusion and device
CN108961201A (en) * 2017-05-19 2018-12-07 广州康昕瑞基因健康科技有限公司 Image definition recognition methods and auto focusing method
CN108764124A (en) * 2018-05-25 2018-11-06 天津科技大学 The detection method and device of crowd movement
CN110321869A (en) * 2019-07-10 2019-10-11 应急管理部天津消防研究所 Personnel's detection and extracting method based on Multiscale Fusion network

Similar Documents

Publication Publication Date Title
CN107358258B (en) SAR image target classification based on NSCT double CNN channels and selective attention mechanism
Wang et al. Ship detection in SAR images via local contrast of Fisher vectors
CN104574352A (en) Crowd density grade classification method based on foreground image
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
CN107527023B (en) Polarized SAR image classification method based on superpixels and topic models
CN103258332B (en) A kind of detection method of the moving target of resisting illumination variation
CN106991686B (en) A kind of level set contour tracing method based on super-pixel optical flow field
CN109344880B (en) SAR image classification method based on multiple features and composite kernels
CN105261004A (en) Mean shift and neighborhood information based fuzzy C-mean image segmentation method
CN106570183B (en) A kind of Color Image Retrieval and classification method
CN109948593A (en) Based on the MCNN people counting method for combining global density feature
CN103679719A (en) Image segmentation method
CN103093238B (en) based on the visual dictionary construction method of D-S evidence theory
CN104732552B (en) SAR image segmentation method based on nonstationary condition
CN104794730A (en) Superpixel-based SAR image segmentation method
CN109255339B (en) Classification method based on self-adaptive deep forest human gait energy map
CN103903238A (en) Method for fusing significant structure and relevant structure of characteristics of image
CN104021567B (en) Based on the fuzzy altering detecting method of image Gauss of first numeral law
CN104008394A (en) Semi-supervision hyperspectral data dimension descending method based on largest neighbor boundary principle
CN102496142A (en) SAR (synthetic aperture radar) image segmentation method based on fuzzy triple markov fields
CN114724218A (en) Video detection method, device, equipment and medium
CN103530866A (en) Image processing method and device based on Gaussian cloud transformation
Zhang et al. Pseudo supervised solar panel mapping based on deep convolutional networks with label correction strategy in aerial images
CN103810287A (en) Image classification method based on topic model with monitoring shared assembly
CN104036300A (en) Mean shift segmentation based remote sensing image target identification method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150429
