CN102289805A - Video-based subway crowd density detection method - Google Patents

Video-based subway crowd density detection method

Info

Publication number
CN102289805A
CN102289805A CN2011101562708A CN201110156270A
Authority
CN
China
Prior art keywords
step
video
density
population
image
Prior art date
Application number
CN2011101562708A
Other languages
Chinese (zh)
Inventor
蔚晓明
Original Assignee
北京云加速信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京云加速信息技术有限公司
Priority to CN2011101562708A priority Critical patent/CN102289805A/en
Publication of CN102289805A publication Critical patent/CN102289805A/en


Abstract

The present invention relates to a video-based subway crowd density detection method. The method first marks the activity region and the no-activity region by entering four-point coordinates appropriate to subway video images of different subway scenes. Each video frame is checked for errors; erroneous frames are discarded. Otherwise, frames of video containing no people are sampled to build a Gaussian background model. The foreground is obtained by background subtraction between the Gaussian background image and video frames containing crowds. The foreground image obtained in step 4) is labeled, with the pixel values of the no-activity region set to 0. The image is then binarized so that the background is black and people are white, and the connected components of the binarized image are labeled. Finally, subway crowd density is checked according to the overall crowd density and the local crowd density.

Description

Video-based subway crowd density detection method

Technical Field

[0001] The present invention relates to crowd density detection, and in particular to a video-based subway crowd density detection method.

Background Art

[0002] Subway systems in major cities are developing rapidly and bring great convenience to people's lives. Detecting crowd density is very important, because an excessive crowd density creates many safety hazards.

[0003] Chinese patent application CN101431664A describes a method and system for automatic passenger-flow density detection based on video images. Computer vision algorithms process the captured video images: when crowd density is low, a pixel-count computation is used and temporal information is applied to generate the background; when crowd density is high, wavelet packet decomposition is applied to analyze the crowd image from multiple angles, the box-counting dimension of the wavelet packet coefficient matrix is extracted as a feature, and a classifier assigns a crowd density level.

[0004] US patent publication US2007/0031005A1 describes real-time video-based crowd density estimation that uses an MRF (Markov Random Field) method to detect the change image and then estimates the scene's crowd density using geometric rules. However, this method estimates density poorly for high-density crowd scenes.

[0005] Chinese patent CN102034M3A describes a method for obtaining a crowd density map from a video image. The method uses the Harris algorithm to obtain corner points in the detection region and then performs density diffusion for each corner point to obtain, for every pixel in the detection region, the density diffusion value corresponding to that corner; the density diffusion values corresponding to different corners are accumulated per pixel to obtain that pixel's density value; finally, the crowd density map of the detection region is drawn from the pixel density values. This method has high computational complexity and does not account for black screens, snow, or camera shift in the video, so the overall accuracy of the crowd density computation suffers. It therefore has little practical value for subway crowd density detection.

[0006] There is therefore an urgent need to solve the problems of conventional crowd density detection.

Summary of the Invention

[0007] To solve the above problems, the present invention provides a video-based subway crowd density detection method that achieves real-time, accurate crowd density detection.

[0008] To achieve the above object, the present invention provides a video-based subway crowd density detection method, characterized in that the method comprises the following steps:

[0009] Step 1): mark the activity region and the no-activity region by entering four-point coordinates appropriate to subway video images of different subway scenes;

[0010] Step 2): using the no-activity region obtained in step 1), judge whether each video frame is an erroneous frame; if so, discard the erroneous frame; if not, go to step 3);

[0011] Step 3): sample frames from video containing no people and build a Gaussian background model;

[0012] Step 4): obtain the foreground by background subtraction between the Gaussian background image obtained in step 3) and video frames containing crowds;

[0013] Step 5): using the no-activity region obtained in step 1), label the foreground image obtained in step 4), setting the pixel values of the no-activity region to 0;

[0014] Step 6): binarize the image obtained in step 5) so that the background is black and people are white, and label the connected components of the binarized image;

[0015] Step 7): judge whether i <= N holds; if i <= N, go to step 8); if i > N, go to step 10); here N is the number of connected components, i is the index of the connected component, 0 < i <= N, and i is a positive integer;

[0016] Step 8): if the local crowd density condition for connected component i does not hold, the pixel count of connected component i is not counted into the overall crowd density; otherwise, go to step 9). The local crowd density of connected component i is

    local density(i) = Num_obj(i) / Num_mem

where Num_obj(i) is the pixel count of the i-th connected component judged to be crowd, and Num_mem is the pixel count of that component's minimum bounding rectangle;

[0017] Step 9): count the pixel count of connected component i into the overall crowd density, then increment i by 1; repeat steps 7), 8) and 9) until i = N is reached;

[0018] Step 10): compute the overall crowd density of one frame according to equation (1);

[0019]

    overall crowd density of a frame = ( Σ_{i=1}^{N} Num_obj(i) ) / ( Height × Width − Num_label )    (1)

[0020] where Num_obj(i) is the pixel count of the i-th connected component judged to be crowd, Σ_{i=1}^{N} Num_obj(i) is the total pixel count of all connected components judged to be crowd, Height × Width is the total pixel count of a frame, and Num_label is the total pixel count of the no-activity region marked by the four-point coordinates.
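Equation (1) reduces to a simple ratio of pixel counts and can be sketched directly. The function name and the toy numbers below are illustrative only, not taken from the patent:

```python
def overall_density(component_sizes, height, width, num_label):
    # Sum of pixels over all connected components judged to be crowd,
    # divided by the pixel count of the area where people can appear
    # (whole frame minus the marked no-activity region).
    active_pixels = height * width - num_label
    return sum(component_sizes) / active_pixels

# toy frame: 100x100 pixels, 1000 of them in the no-activity region,
# two crowd components of 1200 and 800 pixels
density = overall_density([1200, 800], 100, 100, 1000)  # 2000 / 9000
```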

[0021] The erroneous frames in step 2) include: black screen, snow, and camera shift.

[0022] The black-screen detection method comprises: after converting the video frame to grayscale, compute the mean and variance of the no-activity region marked by the four points; if the mean is less than a threshold th and the variance is less than a threshold zh, the frame is judged to be a black screen.

[0023] The threshold th is set to 60; the threshold zh is set to 100.

[0024] The snow and shift detection method comprises: after converting the video frame to grayscale, subtract the background pixel values of the no-activity region marked by the four points from the pixel values of the same region in the frame, and compute the sum of squared pixel differences over that region; if the sum exceeds a threshold ch, the frame is judged to show camera shift or snow.

[0025] The threshold ch is set to 100.

[0026] The advantage of the present invention is that the proposed method marks the no-activity region and the activity region and computes crowd density only over the activity region, which lowers computational complexity and lays the groundwork for real-time operation. When computing crowd density, the method also accounts for black screens, snow, and camera shift in the camera feed, which improves the overall accuracy of crowd density detection. Finally, the overall crowd density is taken as the ratio of the pixel count inside the connected components of the activity region to the pixel count of the activity region; this has low computational complexity and measures overall crowd density accurately, especially in scenes with high crowd density.

Brief Description of the Drawings

[0027] Fig. 1 is a flowchart of the video-based subway crowd density detection method of the present invention;

[0028] Fig. 2 is a diagram defining connected component i in the present invention;

[0029] Fig. 3 is a calibration diagram of the no-activity region of the present invention.

Detailed Description

[0030] The technical solution of the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.

[0031] As shown in Fig. 1, Fig. 1 is a flowchart of the video-based subway crowd density detection method of the present invention. The method comprises the following steps:

[0032] Step 1): mark the activity region and the no-activity region by entering four-point coordinates appropriate to subway video images of different subway scenes;

[0033] Step 2): using the no-activity region obtained in step 1), judge whether each video frame is an erroneous frame; if so, discard the erroneous frame; if not, go to step 3);

[0034] Step 3): sample frames from video of a scene containing no people and build a Gaussian background model;

[0035] Step 4): obtain the foreground by background subtraction between the Gaussian background image obtained in step 3) and video frames containing crowds;

[0036] Step 5): using the no-activity region obtained in step 1), label the foreground image obtained in step 4), setting the pixel values of the no-activity region to 0;

[0037] Step 6): binarize the image obtained in step 5) so that the background is black and people are white, and label the connected components of the binarized image;

[0038] Step 7): judge whether i <= N holds; if i <= N, go to step 8); if i > N, go to step 10); here N is the number of connected components, i is the index of the connected component, 0 < i <= N, and i is a positive integer;

[0039] Step 8): if the local crowd density condition for connected component i does not hold, the pixel count of connected component i is not counted into the overall crowd density; otherwise, go to step 9). The local crowd density of connected component i is

    local density(i) = Num_obj(i) / Num_mem

where Num_obj(i) is the pixel count of the i-th connected component judged to be crowd, and Num_mem is the pixel count of that component's minimum bounding rectangle;

[0040] Step 9): count the pixel count of connected component i into the overall crowd density, then increment i by 1; repeat steps 7), 8) and 9) until i = N is reached;

[0041] Step 10): compute the overall crowd density of one frame according to equation (1);

[0042]

    overall crowd density of a frame = ( Σ_{i=1}^{N} Num_obj(i) ) / ( Height × Width − Num_label )    (1)

[0043] where Num_obj(i) is the pixel count of the i-th connected component judged to be crowd, Σ_{i=1}^{N} Num_obj(i) is the total pixel count of all connected components judged to be crowd, Height × Width is the total pixel count of a frame, and Num_label is the total pixel count of the no-activity region marked by the four-point coordinates.

[0044] 1. Definition of overall crowd density and local crowd density

[0045]

    overall crowd density of a frame = ( Σ_{i=1}^{N} Num_obj(i) ) / ( Height × Width − Num_label )    (1)

[0046] where Num_obj(i) is the pixel count of the i-th connected component judged to be crowd, Σ_{i=1}^{N} Num_obj(i) is the total pixel count of all connected components judged to be crowd, Height × Width is the total pixel count of a frame, and Num_label is the total pixel count of the no-activity region marked by the four-point coordinates.

[0047] As shown in Fig. 2, Fig. 2 is a diagram defining connected component i in the present invention. The local crowd density is computed as

    local density(i) = Num_obj(i) / Num_mem

where Num_obj(i) is the pixel count of connected component i and Num_mem is the pixel count of that component's minimum bounding rectangle.
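The local-density computation can be sketched as follows, assuming the connected component is given as a boolean mask (the function name and toy mask are illustrative, not from the patent):

```python
import numpy as np

def local_density(mask):
    # Num_obj(i): foreground pixel count of the component.
    # Num_mem: pixel count of its minimum bounding rectangle.
    ys, xs = np.nonzero(mask)
    num_obj = ys.size
    num_mem = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return num_obj / num_mem

# a solid 3x4 block fills its bounding rectangle exactly (density 1.0)
m = np.zeros((6, 6), dtype=bool)
m[1:4, 1:5] = True
```

A sparse, straggly component fills only part of its bounding rectangle, so its local density drops well below 1; the method uses this to filter out components unlikely to be compact crowds.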

[0048] 2. Gaussian background modeling

[0049] Traditional background modeling often uses a mixture-of-Gaussians model. In a subway environment, however, illumination shows no significant change, and during the period in which the background is built, the time a moving foreground object occupies any given area is far shorter than the time that area is empty. The mixture-of-Gaussians model is therefore simplified: only a single Gaussian model is used for the background. The parameters of the Gaussian model are the mean and the variance. The simplified Gaussian background modeling also proceeds in two steps.

[0050] Background establishment: since only a single Gaussian model is used, multiple frames are used to establish the Gaussian background in order to improve the correctness of the background model. To avoid interference from pedestrians walking or lingering, one frame is sampled at fixed intervals.

[0051] The procedure is as follows. First, one frame is sampled arbitrarily, and the Gaussian mean of each pixel is set to that pixel's current value. One frame is then sampled at fixed intervals and the Gaussian mean of each pixel is updated; a total of N frames are sampled. The mean bg_model.mean(i,j) of the Gaussian background model established at this point is the mean of the corresponding pixels of the N sampled frames. The variance of each pixel's Gaussian model is then set to an initial value Variance_ini (the initial variance is chosen by comparing the variance of the no-activity region over N frames with the variance of the activity region over N frames and taking a suitable value).

[0052] Background update: for every subsequent frame, a condition is checked. Let the current pixel value be pixel, and let temp = (pixel - bg_model.mean(i,j));

[0053] If temp*temp/bg_model.variance(i,j) <= threshold (threshold 1), the pixel is judged to be background, and the two parameters of that pixel's Gaussian model are updated as follows:

[0054] bg_model.mean(i,j) = bg_model.mean(i,j) + learn_rate*temp;

[0055] bg_model.variance(i,j) = (1 - learn_rate)*bg_model.variance(i,j) + learn_rate*temp^2; where learn_rate is the learning rate.

[0056] If temp*temp/bg_model.variance(i,j) > threshold (threshold 1), the pixel is considered foreground and is not updated.
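The establishment and update rules of [0051]-[0056] can be sketched as below. The class name is illustrative; variance_init is an assumption because the text only asks for "a suitable value", while threshold = 12.5 (2×2.5×2.5) and learn_rate = 0.05 follow the parameter settings given later in [0061]:

```python
import numpy as np

class GaussianBackground:
    # Simplified single-Gaussian background model per pixel.
    def __init__(self, sampled_frames, variance_init=25.0,
                 threshold=12.5, learn_rate=0.05):
        # Establishment: mean over N frames sampled at fixed intervals.
        stack = np.stack([f.astype(np.float64) for f in sampled_frames])
        self.mean = stack.mean(axis=0)
        # Variance starts from a chosen initial value (assumption here).
        self.variance = np.full(self.mean.shape, float(variance_init))
        self.threshold = threshold
        self.learn_rate = learn_rate

    def update(self, frame):
        # Returns a boolean foreground mask; only background pixels
        # update their per-pixel mean and variance.
        temp = frame.astype(np.float64) - self.mean
        fg = temp * temp / self.variance > self.threshold
        bg = ~fg
        self.mean[bg] += self.learn_rate * temp[bg]
        self.variance[bg] = ((1.0 - self.learn_rate) * self.variance[bg]
                             + self.learn_rate * temp[bg] ** 2)
        return fg
```

A pixel that stays near the model mean keeps pulling the mean and variance toward the current values; a pixel far from the mean is flagged as foreground and leaves the model untouched, which is what keeps passing crowds out of the background estimate.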

[0057] It can be seen that, for this design, the simplified Gaussian modeling method models the background well.

[0058] Considerations on how many frames to sample for Gaussian background modeling, and how to sample them:

[0059] 1. The sampling interval is chosen on the assumption that the scene does not change drastically. Given the camera frame rate, e.g. 25 frames per second, one frame is sampled every 4 seconds. If someone is walking in the background, this makes the distance the person covers between two consecutive samples fairly large, so that by the time of averaging, the person has already left the position occupied in the previous sample and the fitted background is more accurate.

[0060] 2. 100 frames are used to build the Gaussian model. Experiments showed that sampling more than 100 frames yields roughly the same fit; fitting too many frames increases the computation and the computation time, and the more frames are sampled, the longer the fitting takes and the more people gradually enter the platform, which actually degrades the fit. Sampling too few frames is not accurate enough (blurred traces of people remain), so 100 frames are used.

[0061] Parameter settings in the simplified Gaussian background modeling: threshold (threshold 1) is chosen as a compromise between the no-activity region and the crowd activity region; in this design it is set to 2*2.5*2.5. learn_rate is set to the commonly used 0.05.

[0062] 3. No-activity region calibration

[0063] As shown in Fig. 3, Fig. 3 is a calibration diagram of the no-activity region of the present invention. People cannot appear in some places, such as above the electronic clock and in the space above the pillars on both sides. Passenger-flow statistics over the video frames show that the no-activity region can be calibrated with three straight lines; the calibration coordinates are: x0 = 210; y0 = 1; x1 = 80; y1 = 150; x2 = 80; y2 = 210; x3 = 210; y3 = 352. Any other point in the image with coordinates x, y is judged to lie in the no-activity region if x < x1, or (x1-x0)*y + (x-x1)*y0 - (x-x0)*y1 >= 0, or (x3-x2)*y + (x-x3)*y2 - (x-x2)*y3 >= 0. The four coordinate points can also be entered manually, so the user can calibrate the no-activity region.
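The three line tests can be sketched directly from the formulas above. The default coordinates are the example calibration values from the text; the function names are illustrative:

```python
def side(px, py, qx, qy, x, y):
    # Signed-area test for the line through (px, py) and (qx, qy):
    # a non-negative value means (x, y) lies on one side of it (or on it).
    return (qx - px) * y + (x - qx) * py - (x - px) * qy

def in_no_activity_region(x, y,
                          x0=210, y0=1, x1=80, y1=150,
                          x2=80, y2=210, x3=210, y3=352):
    # A point is in the no-activity region if it is left of x1 or
    # beyond either of the two calibration lines.
    return (x < x1
            or side(x0, y0, x1, y1, x, y) >= 0
            or side(x2, y2, x3, y3, x, y) >= 0)
```

With these values, a point near the middle of the platform (e.g. (150, 200)) falls in the activity region, while points far left or above the first calibration line are excluded.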

[0064] 4. Approach to detecting erroneous frames

[0065] Black screen: a black screen is characterized by the entire screen being essentially uniform. After reading a crowd video frame and converting it to grayscale, the mean and variance of the whole image are computed; if the mean is less than threshold 1 and the variance is less than threshold 2, the frame is judged to be a black screen. To reduce computation, one frame per second can be sampled for this check.
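A minimal sketch of this check, assuming the grayscale region is passed in as an array (the function name is illustrative; the defaults th = 60 and zh = 100 follow the thresholds given in the text):

```python
import numpy as np

def is_black_screen(gray_region, mean_th=60.0, var_th=100.0):
    # Black screen: the region is both dark (low mean) and
    # nearly uniform (low variance).
    region = gray_region.astype(np.float64)
    return bool(region.mean() < mean_th and region.var() < var_th)
```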

[0066] Detecting snow and shift: since the no-activity region has calibrated coordinates, after reading a crowd video frame and converting it to grayscale, the pixels of the no-activity region of the grayscale frame are subtracted from the same region of the background, and the sum of squared pixel differences over that region is computed. If the value exceeds a set threshold 3, camera shift or snow is assumed. (Since no video containing these frame errors was available for experiments, the thresholds are currently set as: threshold 1 = 60, threshold 2 = 100, threshold 3 = 100.) Black screen, snow, and shift are all video-error alarms, and all can be judged within the no-activity region, so they can be judged jointly; this part only requires computation over the no-activity region, so the computational load is small.
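A minimal sketch of the sum-of-squared-differences check, with the same caveat that the function name is illustrative and the default ch = 100 is the tentative threshold from the text:

```python
import numpy as np

def is_shift_or_snow(gray_region, bg_region, ssd_th=100.0):
    # Compare the no-activity region of the current frame against the
    # same region of the background; a large residual suggests camera
    # shift or snow, since this region should contain no people.
    diff = gray_region.astype(np.float64) - bg_region.astype(np.float64)
    return bool((diff * diff).sum() > ssd_th)
```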

[0067] Finally, it should be noted that the above embodiments merely illustrate, and do not limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to embodiments, those of ordinary skill in the art should understand that modifications or equivalent substitutions of the technical solution of the present invention do not depart from the spirit and scope of the technical solution of the present invention, and should all be covered by the scope of the claims of the present invention.

Claims (6)

1. A video-based subway crowd density detection method, characterized in that the method comprises the following steps:
Step 1): mark the activity region and the no-activity region by entering four-point coordinates appropriate to subway video images of different subway scenes;
Step 2): using the no-activity region obtained in step 1), judge whether each video frame is an erroneous frame; if so, discard the erroneous frame; if not, go to step 3);
Step 3): sample frames from video containing no people and build a Gaussian background model;
Step 4): obtain the foreground by background subtraction between the Gaussian background image obtained in step 3) and video frames containing crowds;
Step 5): using the no-activity region obtained in step 1), label the foreground image obtained in step 4), setting the pixel values of the no-activity region to 0;
Step 6): binarize the image obtained in step 5) so that the background is black and people are white, and label the connected components of the binarized image;
Step 7): judge whether i <= N holds; if i <= N, go to step 8); if i > N, go to step 10); here N is the number of connected components, i is the index of the connected component, 0 < i <= N, and i is a positive integer;
Step 8): if the local crowd density condition for connected component i, local density(i) = Num_obj(i) / Num_mem, does not hold, the pixel count of connected component i is not counted into the overall crowd density; otherwise, go to step 9); here Num_obj(i) is the pixel count of the i-th connected component judged to be crowd, and Num_mem is the pixel count of that component's minimum bounding rectangle;
Step 9): count the pixel count of connected component i into the overall crowd density, then increment i by 1; repeat steps 7), 8) and 9) until i = N is reached;
Step 10): compute the overall crowd density of one frame according to equation (1):
overall crowd density of a frame = ( Σ_{i=1}^{N} Num_obj(i) ) / ( Height × Width − Num_label )
where Num_obj(i) is the pixel count of the i-th connected component judged to be crowd, Σ_{i=1}^{N} Num_obj(i) is the total pixel count of all connected components judged to be crowd, Height × Width is the total pixel count of a frame, and Num_label is the total pixel count of the no-activity region marked by the four-point coordinates.
2. The video-based subway crowd density detection method according to claim 1, characterized in that the erroneous frames in step 2) include: black screen, snow, and camera shift.
3. The video-based subway crowd density detection method according to claim 2, characterized in that the black-screen detection method comprises: after converting the video frame to grayscale, computing the mean and variance of the no-activity region marked by the four points; if the mean is less than a threshold th and the variance is less than a threshold zh, the frame is judged to be a black screen.
4. The video-based subway crowd density detection method according to claim 3, characterized in that the threshold th is set to 60 and the threshold zh is set to 100.
5. The video-based subway crowd density detection method according to claim 2, characterized in that the snow and shift detection method comprises: after converting the video frame to grayscale, subtracting the background pixel values of the no-activity region marked by the four points from the pixel values of the same region in the frame, and computing the sum of squared pixel differences over that region; if the sum exceeds a threshold ch, the frame is judged to show camera shift or snow.
6. The video-based subway crowd density detection method according to claim 5, characterized in that the threshold ch is set to 100.
CN2011101562708A 2011-06-10 2011-06-10 Video-based subway crowd density detection method CN102289805A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011101562708A CN102289805A (en) 2011-06-10 2011-06-10 Video-based subway crowd density detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011101562708A CN102289805A (en) 2011-06-10 2011-06-10 Video-based subway crowd density detection method

Publications (1)

Publication Number Publication Date
CN102289805A true CN102289805A (en) 2011-12-21

Family

ID=45336195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011101562708A CN102289805A (en) Video-based subway crowd density detection method

Country Status (1)

Country Link
CN (1) CN102289805A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102708565A (en) * 2012-05-07 2012-10-03 深圳市贝尔信智能系统有限公司 Foreground detection method, device and system
CN102750710A (en) * 2012-05-31 2012-10-24 信帧电子技术(北京)有限公司 Method and device for counting motion targets in images
CN103347156A (en) * 2013-06-24 2013-10-09 清华大学 Method for estimating number of wild animal populations based on infrared camera data
CN103489012A (en) * 2013-09-30 2014-01-01 深圳市捷顺科技实业股份有限公司 Crowd density detecting method and system based on support vector machine
CN104135910A (en) * 2012-02-20 2014-11-05 佳能株式会社 Image forming method and image forming apparatus
CN105574499A (en) * 2015-12-15 2016-05-11 东华大学 Method and system for detecting and counting number of people based on SOC
CN106407901A (en) * 2016-08-31 2017-02-15 重庆云库房物联科技有限公司 Infrared array-based induced abortion behavior analysis system and human body identification method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040091153A1 (en) * 2002-11-08 2004-05-13 Minolta Co., Ltd. Method for detecting object formed of regions from image
CN101325690A (en) * 2007-06-12 2008-12-17 上海正电科技发展有限公司 Method and system for detecting human flow analysis and crowd accumulation process of monitoring video flow
CN101188743A (en) * 2007-09-17 2008-05-28 深圳先进技术研究院 An intelligent digital system based on video and its processing method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WANG Erdan et al., "Crowd Density Estimation Method Based on Multi-scale Analysis and Fractals", Computer Engineering and Applications *
JIANG Xutuan et al., "Crowd Density Detection Based on Real-time Background Extraction", Computer Knowledge and Technology *
GUO Jun, "Design and Development of a Cable TV Video Image Fault Monitoring System", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104135910B (en) * 2012-02-20 2017-05-24 Image forming method and image forming apparatus
CN104135910A (en) * 2012-02-20 2014-11-05 佳能株式会社 Image forming method and image forming apparatus
CN102708565A (en) * 2012-05-07 2012-10-03 深圳市贝尔信智能系统有限公司 Foreground detection method, device and system
CN102750710A (en) * 2012-05-31 2012-10-24 信帧电子技术(北京)有限公司 Method and device for counting motion targets in images
CN103347156B (en) * 2013-06-24 2016-03-30 Method for estimating number of wild animal populations based on infrared camera data
CN103347156A (en) * 2013-06-24 2013-10-09 清华大学 Method for estimating number of wild animal populations based on infrared camera data
CN103489012A (en) * 2013-09-30 2014-01-01 深圳市捷顺科技实业股份有限公司 Crowd density detecting method and system based on support vector machine
CN103489012B (en) * 2013-09-30 2017-05-24 Crowd density detection method and system based on support vector machine
CN105574499A (en) * 2015-12-15 2016-05-11 东华大学 Method and system for detecting and counting number of people based on SOC
CN105574499B (en) * 2015-12-15 2019-08-20 东华大学 Method and system for detecting and counting number of people based on SOC
CN106407901A (en) * 2016-08-31 2017-02-15 重庆云库房物联科技有限公司 Infrared array-based induced abortion behavior analysis system and human body identification method

Similar Documents

Publication Publication Date Title
Gallup et al. Piecewise planar and non-planar stereo for urban scene reconstruction
Spinello et al. People detection in RGB-D data
CN103069434B (en) Method and system for multi-mode video event indexing
US7409076B2 (en) Methods and apparatus for automatically tracking moving entities entering and exiting a specified region
US20130169822A1 (en) Camera calibration using feature identification
CN1980322B (en) Object detecting system and object detecting method
CN103164706B Apparatus and method for counting objects based on video signal analysis
CN101277429B Method and system for fusing and displaying multi-channel video information in monitoring
Bayona et al. Comparative evaluation of stationary foreground object detection algorithms based on background subtraction techniques
CN101739686B (en) Moving object tracking method and system thereof
CN1228984C Video object segmentation method based on motion detection
Chen et al. A hierarchical model incorporating segmented regions and pixel descriptors for video background subtraction
CN101739551B (en) Method and system for identifying moving objects
CN101510358B Method and apparatus for real-time vehicle flow statistics based on video images
US8243987B2 (en) Object tracking using color histogram and object size
US8457360B2 (en) Detection of vehicles in an image
CN101344965A (en) Tracking system based on binocular camera shooting
CN101404086B (en) Target tracking method and device based on video
CN101324956A Method for tracking moving objects under occlusion based on mean shift
CN101029824B (en) Method and apparatus for positioning vehicle based on characteristics
CN101216941A Motion estimation method under drastic illumination variation based on corner matching and optical flow
CN101393603B Method for detecting and recognizing tunnel fire flames
CN103413444A (en) Traffic flow surveying and handling method based on unmanned aerial vehicle high-definition video
CN100544446C Real-time motion detection method for video surveillance
Guðmundsson et al. ToF imaging in smart room environments towards improved people tracking

Legal Events

Date Code Title Description
C06 Publication
C10 Entry into substantive examination
RJ01 Rejection of invention patent application after publication