CN103198300B - Parking event detection method based on double layers of backgrounds - Google Patents


Info

Publication number
CN103198300B
CN103198300B (application CN201310104633.2A; published as CN103198300A)
Authority
CN
China
Prior art keywords
background
model
target
parking
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310104633.2A
Other languages
Chinese (zh)
Other versions
CN103198300A (en)
Inventor
谢正光
李宏魁
胡建平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong University Technology Transfer Center Co ltd
Original Assignee
Nantong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong University filed Critical Nantong University
Priority to CN201611069411.1A priority Critical patent/CN106778540B/en
Priority to CN201310104633.2A priority patent/CN103198300B/en
Publication of CN103198300A publication Critical patent/CN103198300A/en
Application granted granted Critical
Publication of CN103198300B publication Critical patent/CN103198300B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a parking event detection method based on a double-layer background. The method mainly comprises the steps of double-layer background modeling, secondary-background replacement, parking detection, and state-table updating. The absolute pixel difference between corresponding points of the main background and the secondary background is used to judge whether a stopped object has appeared; detection is carried out on the stopped object's contour, and if the object is a vehicle, the parking state is recorded and updated. The secondary background is replaced by judging whether the foreground of the background model is empty. The method detects parking events well in real time, is simple, reduces the influence of the environment on the detection result through the double-layer background, and accurately records parking position and state through a parking-event state table. By setting the detection parameters and regions of interest, it can be used in different settings such as expressways, parking lots, and urban roads, with high accuracy and good real-time performance.

Description

A Parking Event Detection Method Based on a Double-Layer Background

Technical Field

The invention relates to the field of video detection, and in particular to a parking event detection method based on a double-layer background.

Background Art

With the rapid development of the national economy and the rapid increase in motor vehicles, China's traffic problems are becoming increasingly serious. Traffic jams and secondary accidents caused by incidents such as stopped vehicles, dropped objects, vehicles disabled by accidents, and landslides caused by harsh environments keep increasing. Such incidents are sudden and accidental, and once they occur they can cause great loss of life and property. In the past, parking-event detection relied mainly on manual monitoring and traffic-flow data collection, which consumed considerable manpower, material, and financial resources in traffic management. It is therefore particularly important and necessary to establish a video-based automatic parking-event detection system.

A variety of video-based parking event detection methods have been developed worldwide, chiefly methods based on virtual-frame pixel and grayscale statistics, on target speed, and on single-background block segmentation.

The virtual-frame pixel and grayscale-statistics method obtains moving and static pixels by background subtraction and judges whether a vehicle has stopped from changes of pixels or grayscale within a delimited region; although the algorithm is simple, it is vulnerable to interference such as external lighting conditions and requires a manually set virtual detection region, so its practicality is poor. The target-speed method must track vehicles in real time, calibrate the scene, and compute target speeds in real time; the algorithm is relatively complex and has a high false-alarm rate. The single-background block-segmentation method saves three different backgrounds and compares them pairwise to find suspicious blocks, which are then examined to decide whether a parking event has occurred; it must compute over image blocks, is unsuitable for roads with complex conditions, and since the three saved backgrounds come from the same background model its detection accuracy is not high. These methods lack an effective analysis of vehicles stopping or driving away, and their effectiveness and practicality for parking-event analysis no longer meet real-world requirements.

Summary of the Invention

The object of the present invention is to provide a parking event detection method based on a double-layer background that is less affected by the environment and detects parking accurately.

The technical solution of the present invention is as follows:

A parking event detection method based on a double-layer background, characterized by comprising the following steps:

(1) Establish two different background layers, a main background and a secondary background. The main background updates quickly and is sensitive to stopped targets, while the secondary background updates slowly and responds slowly to stopped targets; compare the difference between the two images.

(2) Take the absolute difference of corresponding pixels of the two background layers to obtain the stationary target, and binarize the difference image. Suppress shadows in HSI space to remove the shadow of the target's pixels, then apply a morphological closing to the binary image to eliminate discontinuous holes.

(3) According to the camera's focal length, height, and angle, weight the target pixels in the image: increase the weight of distant target pixels and decrease that of near ones. Compute the weighted sum of target pixels; when it reaches a threshold, increment the parking-event counter S by 1, and when S exceeds a threshold, save the binary image.

(4) Filter the binary image, segment the targets in the image and detect their contours, and set a detection sensitivity. For each target whose pixel count exceeds the sensitivity threshold, draw a bounding rectangle and judge from its aspect ratio whether the target is a vehicle. When it is, record the coordinates of the intersection of the rectangle's diagonals and store them in the parking-event state table.

(5) Process and analyze the pixels to decide whether the target is a vehicle. If it is, trigger a parking-event alarm and assign the current frame of the main background to the secondary background, then continue the difference comparison. When the secondary background contains no target, retain that frame as the pure background.

(6) When the foreground of the secondary background model is empty, store the current background image as the pure background. When a stopped target is detected, store the current frame of the main background and compare it with the pure background image; if the difference between the two frames is below a threshold, return to step (2), otherwise replace the secondary background with the current main-background frame and return to step (2). When a parking event is detected again, compare the target's centre-point coordinates to judge whether the target has driven away, and update the state table.

Compared with parking detection based on virtual-coil statistics, on target tracking, or on single-background block segmentation, the double-layer-background parking event detection method of the present invention is less affected by the environment, requires no region setup or image calibration, and uses a simpler and more feasible algorithm, which reduces the computational complexity of parking detection and improves its real-time performance. HSI shadow suppression and morphological filtering eliminate interference, making parking detection more accurate, while the state table accurately records the parking position and state.

Brief Description of the Drawings

The present invention is further described below with reference to the drawings and an embodiment.

Fig. 1 is a flowchart of the parking detection method based on a double-layer background.

Detailed Description

A parking event detection method based on a double-layer background is given with reference to the drawings and an embodiment. The method establishes a secondary background (a Gaussian mixture background model) and a main background (a RunningAvg background), takes the difference of the two background layers, and binarizes the difference image to obtain stationary or slow-moving targets. Statistics and contour recognition on the binary image detect whether there is a stopped vehicle or abandoned object; the current RunningAvg background is compared with the pure background image to judge whether the road is clear, the parking state table and the secondary background are updated, and the next round of parking-event detection proceeds. The specific implementation steps are as follows:

Step 1: Convert the input video image I_n to grayscale, obtaining the grayscale image I_gray, and build the main background and the secondary background from I_gray.

Establishment and update of the main background, the secondary background, and the pure background:

The main background model is a RunningAvg model, given by:

B_Avg(i,j) = ∂_avg · B_{n-1}(i,j) + (1 − ∂_avg) · I_n(i,j)

— B_Avg(i,j) is the RunningAvg background model;

— B_n(i,j) is the updated background value at frame n;

— B_{n-1}(i,j) is the background value at frame n−1;

— I_n(i,j) is the grayscale value of the current video frame;

— ∂_avg is the update rate.

The present invention improves ∂_avg as follows: ∂_avg = ∂_avg1 · M_n + ∂_avg2 · (1 − M_n)

— M_n = 0 if D_n(i,j) < T, and M_n = 1 if D_n(i,j) ≥ T; M_n is a status bit;

— D_n(i,j) = |I_n(i,j) − I_{n-1}(i,j)| is the residual image between adjacent frames;

— ∂_avg1 and ∂_avg2 are variable weighting parameters.
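The improved RunningAvg update above can be sketched per frame as follows. This is a minimal NumPy sketch; the values ∂_avg1 = 0.95, ∂_avg2 = 0.7 and the residual threshold T = 15 are illustrative, since the patent does not list them. Note the convention of the formula: a value of ∂_avg close to 1 makes the background absorb the current pixel more slowly.

```python
import numpy as np

def update_running_avg(b_prev, frame, frame_prev, a1=0.95, a2=0.7, t=15.0):
    """One step of the adaptive RunningAvg main background.

    b_prev     -- previous background B_{n-1}
    frame      -- current grayscale frame I_n
    frame_prev -- previous grayscale frame I_{n-1}
    a1, a2     -- the variable weighting parameters (illustrative values)
    t          -- residual threshold T for the status bit M_n
    """
    d = np.abs(frame.astype(np.float64) - frame_prev.astype(np.float64))
    m = (d >= t).astype(np.float64)      # status bit M_n, computed per pixel
    alpha = a1 * m + a2 * (1.0 - m)      # changed pixels keep the old background longer
    return alpha * b_prev + (1.0 - alpha) * frame
```

With these values, a pixel whose inter-frame residual exceeds T receives weight ∂_avg1, so transient motion is absorbed into the main background more slowly than static scenery.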

Establishment of the secondary background model: first, initialize several predefined Gaussian models, initialize their parameters, and compute the parameters to be used later. Then, for each pixel of each frame, check whether it matches one of the models; if so, assign it to that model and update the model with the new pixel value; if not, build a new Gaussian model from that pixel, initialize its parameters, and replace the least probable model in the original set. Finally, select the several most probable models as the background model, laying the groundwork for background target extraction.

The Gaussian mixture model gives p(x_N), the statistical probability of a pixel value, as:

p(x_N) = Σ_{j=1}^{K} w_j · η(x_N; θ_j)

— w_j is the weight of the j-th of the K Gaussian components, x_N is the input sample, and θ_j denotes the parameters of the j-th component;

— η(x; θ_k) is the normal distribution of the k-th Gaussian component, with expression:

η(x; θ_k) = η(x; μ_k, Σ_k) = (2π)^{−D/2} |Σ_k|^{−1/2} · exp( −(1/2) (x − μ_k)^T Σ_k^{−1} (x − μ_k) )

— μ_k is the mean;

— Σ_k = σ_k² I is the covariance;

The background pixels are determined by:

B_Gauss(i,j) = arg min_b ( Σ_{j=1}^{b} w_j > T )

ŵ_k^{N+1} = (1 − a) · ŵ_k^{N} + a · p̂(w_k | x_{N+1})

— B_Gauss denotes a background pixel;

— a is the learning rate;

— w_k is the initial weight and ŵ_k its expected value;

— T is the background threshold;

The Gaussian mixture model represents each pixel of the image with K Gaussian components. After each new frame the mixture is updated, and each pixel of the current image is matched against it: if the match succeeds, the pixel is judged a background point, otherwise a foreground point. The whole model is governed mainly by two parameters, the mean and the variance, and the learning mechanism chosen for them directly affects the model's stability, accuracy, and convergence. Since the background is modeled for moving-target extraction, the variance and mean of each Gaussian must be updated in real time. To improve the model's learning ability, the improved method uses different learning rates for the mean and the variance. To improve the detection of large, slow-moving targets in busy scenes, the concept of a weight mean is introduced: a background image is built and updated in real time, and pixels are then classified as foreground or background by combining the weights, the weight means, and the background image.
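The match-update-classify cycle described above can be sketched for a single pixel as follows. This is a simplified NumPy sketch: the matching rule |x − μ_k| < 2.5 σ_k, the single learning rate a shared by the mean and variance updates, and the replacement values (σ = 30, w = 0.05) are illustrative simplifications, not the patent's exact scheme.

```python
import numpy as np

def gmm_pixel_step(x, means, sigmas, weights, a=0.05, match_k=2.5, T=0.7):
    """One mixture-of-Gaussians step for a single pixel value x.

    means, sigmas, weights -- arrays of length K describing the mixture
    a       -- learning rate
    match_k -- x matches component k if |x - mu_k| < match_k * sigma_k
    T       -- weight-sum threshold selecting the background components
    Returns (is_background, means, sigmas, weights).
    """
    means = means.astype(float).copy()
    sigmas = sigmas.astype(float).copy()
    weights = weights.astype(float).copy()
    matched = np.abs(x - means) < match_k * sigmas
    if matched.any():
        k = int(np.argmax(matched))           # first matching component
        weights = (1.0 - a) * weights
        weights[k] += a
        means[k] = (1.0 - a) * means[k] + a * x
        sigmas[k] = np.sqrt((1.0 - a) * sigmas[k] ** 2 + a * (x - means[k]) ** 2)
    else:
        k = int(np.argmin(weights))           # replace the least probable component
        means[k], sigmas[k], weights[k] = x, 30.0, 0.05
        weights = weights / weights.sum()
    # Background = the highest-weight components whose cumulative weight exceeds T.
    order = np.argsort(weights)[::-1]
    cum = np.cumsum(weights[order])
    background = order[: int(np.searchsorted(cum, T)) + 1]
    return (matched.any() and k in background), means, sigmas, weights
```

A sample that matches one of the dominant components is classified as background; an unmatched sample seeds a new low-weight component and is classified as foreground.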

The pure background is defined as follows: when the foreground of the secondary background (the Gaussian mixture background model) is empty, the current image B_G_clear is saved as the pure background.

Step 2: Initialize the parking state table, marking whether the scene already contains a stopped vehicle.

Step 3: Take the absolute difference of corresponding pixels of the two background layers to obtain the stationary or slow-moving target image D(i,j) = |B_Avg(i,j) − B_Gauss(i,j)|, and binarize the difference image with threshold T_h. Suppress shadows in HSI space to remove the shadows of the target's pixels, then apply a morphological closing to eliminate discontinuous holes, yielding the image D_det(i,j).
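Step 3 (with the HSI shadow suppression omitted) can be sketched as follows. This NumPy-only sketch builds the 3×3 morphological closing from dilation and erosion over shifted views; the default threshold follows T_h = 25 from the embodiment.

```python
import numpy as np

def _dilate3(img):
    """3x3 binary dilation via shifted maxima (zero padding at the border)."""
    p = np.pad(img, 1)
    h, w = img.shape
    return np.max([p[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0)

def _erode3(img):
    """3x3 binary erosion (border padded with 1 so the frame edge is neutral)."""
    p = np.pad(img, 1, constant_values=1)
    h, w = img.shape
    return np.min([p[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0)

def static_target_mask(b_avg, b_gauss, th=25.0):
    """D(i,j) = |B_Avg - B_Gauss|, binarized at T_h, then morphologically closed."""
    d = np.abs(b_avg.astype(np.float64) - b_gauss.astype(np.float64))
    binary = (d > th).astype(np.uint8)
    return _erode3(_dilate3(binary))   # closing = dilation followed by erosion
```

The closing fills small holes inside a detected blob without enlarging it, which matches the "eliminate discontinuous holes" step of the text.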

Step 4: Weight the white pixels of the image D_det(i,j) according to the camera's focal length and angle. In this embodiment the image is divided roughly into five parts; from far to near, the weights are 2.0, 1.6, 1.2, 1.1, and 1. Sum the weighted pixels: when the sum exceeds the threshold T_p (T_p = 220 in this embodiment), the counter S is incremented by 1, otherwise S = 0. When S ≥ 90, the binary image D_obj is saved.
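The weighting and counting of Step 4 might look like the following sketch, which uses the embodiment's weights 2.0, 1.6, 1.2, 1.1, 1 and thresholds T_p = 220 and S ≥ 90. Splitting into five equal horizontal bands (top = far) is an assumption; the patent only says the image is divided roughly into five parts.

```python
import numpy as np

def weighted_parking_count(mask, s, weights=(2.0, 1.6, 1.2, 1.1, 1.0),
                           t_p=220.0, s_trigger=90):
    """Weight foreground pixels by horizontal band and update the counter S.

    mask -- binary image D_det (1 = candidate static-target pixel)
    s    -- current value of the parking counter S
    Returns (new_s, save_image): save_image is True when S reaches s_trigger,
    meaning the binary image D_obj should be saved.
    """
    h = mask.shape[0]
    bands = np.array_split(np.arange(h), len(weights))   # far-to-near row bands
    total = sum(w * mask[rows].sum() for w, rows in zip(weights, bands))
    s = s + 1 if total > t_p else 0                      # reset when below T_p
    return s, s >= s_trigger
```

The persistence counter ensures that a target must remain static over many consecutive frames before the parking check in Step 5 runs.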

Step 5: Filter the image D_obj, segment the targets in the image and detect their contours, and set the detection sensitivity T_p. For each target whose pixel count is at least T_p, determine the top-left and bottom-right edges of the target by contour detection and mark it with a box; non-vehicle targets are removed using the box's aspect ratio and the number of white pixels inside it.

With the contour's top-left corner at (x_1, y_1) and bottom-right corner at (x_2, y_2), the length is l = max(|x_1 − x_2|, |y_1 − y_2|) and the width is w = min(|x_1 − x_2|, |y_1 − y_2|). When the aspect ratio l/w lies between t_1 and t_2, the target is a vehicle, where t_1 and t_2 are chosen according to typical vehicle length-to-width ratios. When the target is a vehicle, trigger a parking-event alarm, record the coordinates of the intersection of the box's diagonals, and store them in the parking-event state table.
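The aspect-ratio test of Step 5 reduces to a few lines. The bounds t_1 = 1.2 and t_2 = 4.0 are illustrative, since the patent leaves t_1 and t_2 to be chosen from typical vehicle length-to-width ratios.

```python
def is_vehicle(x1, y1, x2, y2, t1=1.2, t2=4.0):
    """Classify a contour's bounding box as vehicle / non-vehicle.

    (x1, y1), (x2, y2) -- top-left and bottom-right corners of the box
    t1, t2             -- aspect-ratio bounds (illustrative values)
    Returns (vehicle, centre) where centre is the diagonal intersection,
    i.e. the midpoint of the two corners.
    """
    l = max(abs(x1 - x2), abs(y1 - y2))          # box length
    w = min(abs(x1 - x2), abs(y1 - y2))          # box width
    centre = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)  # diagonal intersection
    return (w > 0 and t1 <= l / w <= t2), centre
```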

Step 6: The intersection of the box's diagonals is (x_i1, y_i1), the midpoint of the two corners, and its coordinates are stored. When a stopped vehicle target is detected again, the diagonal-intersection coordinates (x_j1, y_j1) of the next batch of vehicle bounding boxes are obtained. When max(|x_i1 − x_j1|, |y_i1 − y_j1|) ≤ T_c, a vehicle is considered to have driven away: its coordinates are removed from the state table and the stopped-vehicle count is decreased by 1. When max(|x_i1 − x_j1|, |y_i1 − y_j1|) > T_c, a vehicle is considered to have driven in: its coordinates are added to the state table and the count is increased by 1. Here T_c is the offset threshold and w is the vehicle width.
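The state-table update of Step 6 can be sketched as follows. T_c = 20 is illustrative (the patent only says T_c is an offset threshold related to the vehicle width w). A re-detection at nearly the same coordinates is treated as the vehicle driving away, because after the background swap a departing vehicle reappears as a ghost difference at its old position.

```python
def update_state_table(table, centre, t_c=20.0):
    """Update the parking-event state table with a newly detected centre.

    table  -- list of stored centre coordinates of parked vehicles
    centre -- diagonal-intersection coordinates of the new detection
    Returns (table, count): the updated table and the stopped-vehicle count.
    """
    for stored in table:
        # Chebyshev distance: max(|x_i1 - x_j1|, |y_i1 - y_j1|)
        if max(abs(stored[0] - centre[0]), abs(stored[1] - centre[1])) <= t_c:
            table.remove(stored)          # vehicle drove away: count - 1
            return table, len(table)
    table.append(centre)                  # vehicle drove in: count + 1
    return table, len(table)
```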

Store the current RunningAvg background and compare it with the pure Gaussian background image B_G_clear. If the difference between the two frames is less than the threshold T_d, return to Step 3; if the difference is greater than or equal to T_d, replace the Gaussian background with this RunningAvg background and return to Step 3.

Main background parameter settings: the update-rate parameters ∂_avg1 and ∂_avg2.

Secondary background parameter settings: sum-of-weights threshold T = 0.7, background threshold 2.5, learning rate a, initial weight w_k = 0.05, initial standard deviation σ_k = 30.

In Step 3, the threshold T_h = 25; in Step 5, the detection sensitivity T_p = 200.

Claims (1)

1. A parking event detection method based on a double-layer background, characterized by comprising the following steps:
(1) establishing two different background layers, a main background and a secondary background, using the characteristic that the main background updates quickly and is sensitive to stopped targets while the secondary background updates slowly and responds slowly to stopped targets, and comparing the difference between the two images;
the main background model is a RunningAvg model, given by:
B_Avg(i,j) = ∂_avg · B_{n-1}(i,j) + (1 − ∂_avg) · I_n(i,j)
— B_Avg(i,j) is the RunningAvg background model;
— B_n(i,j) is the updated background value at frame n;
— B_{n-1}(i,j) is the background value at frame n−1;
— I_n(i,j) is the grayscale value of the current video frame;
— ∂_avg is the update rate;
∂_avg is improved as follows: ∂_avg = ∂_avg1 · M_n + ∂_avg2 · (1 − M_n)
— M_n = 0 if D_n(i,j) < T, and M_n = 1 if D_n(i,j) ≥ T; M_n is a status bit;
— D_n(i,j) = |I_n(i,j) − I_{n-1}(i,j)| is the residual image between adjacent frames;
— ∂_avg1 and ∂_avg2 are variable weighting parameters;
the secondary background model is established by first initializing several predefined Gaussian models, initializing their parameters, and computing the parameters to be used later; then each pixel of each frame is checked against the models: if it matches one, it is assigned to that model, which is updated with the new pixel value, and if it matches none, a new Gaussian model is built from that pixel with initialized parameters, replacing the least probable model in the original set; finally the several most probable models are selected as the background model, laying the groundwork for background target extraction;
the Gaussian mixture model gives p(x_N), the statistical probability of a pixel value, as:
p(x_N) = Σ_{j=1}^{K} w_j · η(x_N; θ_j)
— w_j is the weight of the j-th of the K Gaussian components, x_N is the input sample, and θ_j denotes the parameters of the j-th component;
— η(x; θ_k) is the normal distribution of the k-th Gaussian component, with expression:
η(x; θ_k) = η(x; μ_k, Σ_k) = (2π)^{−D/2} |Σ_k|^{−1/2} · exp( −(1/2) (x − μ_k)^T Σ_k^{−1} (x − μ_k) )
— μ_k is the mean;
— Σ_k = σ_k² I is the covariance;
the background pixels are determined by:
B_Gauss(i,j) = arg min_b ( Σ_{j=1}^{b} w_j > T )
ŵ_k^{N+1} = (1 − a) · ŵ_k^{N} + a · p̂(w_k | x_{N+1})
— B_Gauss denotes a background pixel;
— a is the learning rate;
— w_k is the initial weight and ŵ_k its expected value;
— T is the background threshold;
the Gaussian mixture model represents each pixel of the image with K Gaussian components; after each new frame the mixture is updated, and each pixel of the current image is matched against it: if the match succeeds, the pixel is judged a background point, otherwise a foreground point; the model is governed mainly by the mean and the variance, and the learning mechanism chosen for them directly affects the model's stability, accuracy, and convergence; since the background is modeled for moving-target extraction, the variance and mean of each Gaussian are updated in real time; to improve the model's learning ability, the improved method uses different learning rates for the mean and the variance; to improve the detection of large, slow-moving targets in busy scenes, the concept of a weight mean is introduced, a background image is built and updated in real time, and pixels are then classified as foreground or background by combining the weights, the weight means, and the background image;
(2) taking the absolute difference of corresponding pixels of the two background layers to obtain the stationary target, binarizing the difference image, suppressing shadows in HSI space to remove the shadow of the target's pixels, and applying a morphological closing to the binary image to eliminate discontinuous holes;
(3) weighting the target pixels in the image according to the camera's focal length, height, and angle, increasing the weight of distant target pixels and decreasing that of near ones, and computing the weighted target-pixel sum; when it reaches a threshold, the parking counter S is incremented by 1, and when S exceeds a threshold the binary image is saved;
(4) filtering the binary image, segmenting the targets in the image and detecting their contours, setting a detection sensitivity, drawing a bounding rectangle for each target whose pixel count exceeds the sensitivity threshold, judging from the aspect ratio whether the target is a vehicle, and, when it is, recording the coordinates of the intersection of the rectangle's diagonals and storing them in the parking state table;
(5) processing and analyzing the pixels to judge whether the target is a vehicle; if it is, triggering a parking alarm and assigning the current main-background frame to the secondary background, then continuing the difference comparison; when the secondary background contains no target, retaining that frame as the pure background;
(6) when the foreground of the secondary background model is empty, storing the current background image as the pure background; when a stopped target is detected, storing the current main-background frame and comparing it with the pure background image: if the difference between the two frames is below the threshold, returning to step (2), and if it is above the threshold, replacing the secondary background with the current main-background frame and returning to step (2); when a parking event is detected again, comparing the target centre-point coordinates, judging whether the target has driven away, and updating the state table.
CN201310104633.2A 2013-03-28 2013-03-28 Parking event detection method based on double layers of backgrounds Active CN103198300B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201611069411.1A CN106778540B (en) 2013-03-28 2013-03-28 Parking detection is accurately based on the parking event detecting method of background double layer
CN201310104633.2A CN103198300B (en) 2013-03-28 2013-03-28 Parking event detection method based on double layers of backgrounds

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310104633.2A CN103198300B (en) 2013-03-28 2013-03-28 Parking event detection method based on double layers of backgrounds

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201611069411.1A Division CN106778540B (en) 2013-03-28 2013-03-28 Parking detection is accurately based on the parking event detecting method of background double layer

Publications (2)

Publication Number Publication Date
CN103198300A CN103198300A (en) 2013-07-10
CN103198300B true CN103198300B (en) 2017-02-08

Family

ID=48720836

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201310104633.2A Active CN103198300B (en) 2013-03-28 2013-03-28 Parking event detection method based on double layers of backgrounds
CN201611069411.1A Active CN106778540B (en) 2013-03-28 2013-03-28 Parking detection is accurately based on the parking event detecting method of background double layer

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201611069411.1A Active CN106778540B (en) 2013-03-28 2013-03-28 Parking detection is accurately based on the parking event detecting method of background double layer

Country Status (1)

Country Link
CN (2) CN103198300B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10489662B2 (en) * 2016-07-27 2019-11-26 Ford Global Technologies, Llc Vehicle boundary detection
CN106791275B (en) * 2016-12-19 2019-09-27 中国科学院半导体研究所 A method and system for image event detection and marking
CN109101934A (en) * 2018-08-20 2018-12-28 广东数相智能科技有限公司 Model recognizing method, device and computer readable storage medium
CN109285341B (en) * 2018-10-31 2021-08-31 中电科新型智慧城市研究院有限公司 Urban road vehicle abnormal stop detection method based on real-time video
CN109741350B (en) * 2018-12-04 2020-10-30 江苏航天大为科技股份有限公司 Traffic video background extraction method based on morphological change and active point filling
CN112101279B (en) * 2020-09-24 2023-09-15 平安科技(深圳)有限公司 Target object abnormality detection method, target object abnormality detection device, electronic equipment and storage medium
CN113409587B (en) * 2021-06-16 2022-11-22 北京字跳网络技术有限公司 Abnormal vehicle detection method, device, equipment and storage medium
CN114724063B (en) * 2022-03-24 2023-02-24 华南理工大学 A Highway Traffic Incident Detection Method Based on Deep Learning
CN119132101B (en) * 2024-11-13 2025-02-07 杭州优橙科技有限公司 Intelligent parking space state recognition system and method based on video detection

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1350941A (en) * 2000-10-27 2002-05-29 新鼎系统股份有限公司 Method and device for moving vehicle image tracking
CN102244769A (en) * 2010-05-14 2011-11-16 鸿富锦精密工业(深圳)有限公司 Object and key person monitoring system and method thereof
CN102314591A (en) * 2010-07-09 2012-01-11 株式会社理光 Method and equipment for detecting static foreground object

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6985859B2 (en) * 2001-03-28 2006-01-10 Matsushita Electric Industrial Co., Ltd. Robust word-spotting system using an intelligibility criterion for reliable keyword detection under adverse and unknown noisy environments
KR100799333B1 (en) * 2003-12-26 2008-01-30 재단법인 포항산업과학연구원 Vehicle type discrimination device and method for parking control
CN101447082B (en) * 2008-12-05 2010-12-01 华中科技大学 A real-time detection method for moving objects
CN102096931B (en) * 2011-03-04 2013-01-09 中南大学 Moving target real-time detection method based on layering background modeling
CN102496281B (en) * 2011-12-16 2013-11-27 湖南工业大学 A vehicle red-light-running detection method combining tracking and a virtual coil
CN102819952B (en) * 2012-06-29 2014-04-16 浙江大学 Method for detecting illegal lane change of vehicle based on video detection technique
US20140126818A1 (en) * 2012-11-06 2014-05-08 Sony Corporation Method of occlusion-based background motion estimation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1350941A (en) * 2000-10-27 2002-05-29 新鼎系统股份有限公司 Method and device for moving vehicle image tracking
CN102244769A (en) * 2010-05-14 2011-11-16 鸿富锦精密工业(深圳)有限公司 Object and key person monitoring system and method thereof
CN102314591A (en) * 2010-07-09 2012-01-11 株式会社理光 Method and equipment for detecting static foreground object

Also Published As

Publication number Publication date
CN106778540B (en) 2019-06-28
CN103198300A (en) 2013-07-10
CN106778540A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN103198300B (en) Parking event detection method based on double layers of backgrounds
CN110084095B (en) Lane line detection method, lane line detection apparatus, and computer storage medium
CN110178167B (en) Video Recognition Method of Intersection Violation Based on Camera Cooperative Relay
CN108304798B (en) Street level order event video detection method based on deep learning and motion consistency
CN108647649B (en) A method for detecting abnormal behavior in video
CN107767400B (en) A moving target detection method for remote sensing image sequences based on hierarchical saliency analysis
CN111814621A (en) A multi-scale vehicle pedestrian detection method and device based on attention mechanism
CN105005989B (en) Vehicle target segmentation method under weak contrast
CN111626170B (en) Image recognition method for railway side slope falling stone intrusion detection
CN106778633B (en) Pedestrian identification method based on region segmentation
CN104183127A (en) Traffic surveillance video detection method and device
CN108230364A (en) Foreground object motion state analysis method based on neural network
EP2813973B1 (en) Method and system for processing video image
CN101770583B (en) Template matching method based on global features of scene
CN110619279A (en) Road traffic sign instance segmentation method based on tracking
CN107808524B (en) A UAV-based vehicle detection method at road intersections
CN107832762A (en) License plate location and recognition method based on multi-feature fusion
Singh et al. Vehicle detection and accident prediction in sand/dust storms
CN113516853A (en) Multi-lane traffic flow detection method for complex monitoring scene
CN112241693A (en) Illegal welding fire image identification method based on YOLOv3
CN109993134A (en) A vehicle detection method at road intersection based on HOG and SVM classifier
CN114519819A (en) Remote sensing image target detection method based on global context awareness
CN106407951A (en) Monocular vision-based nighttime front vehicle detection method
CN105893970A (en) Nighttime road vehicle detection method based on luminance variance characteristics
CN110443142A (en) Deep-learning vehicle counting method based on road surface extraction and segmentation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201015

Address after: 226019 No.205, building 6, Nantong University, No.9, Siyuan Road, Nantong City, Jiangsu Province

Patentee after: Center for technology transfer, Nantong University

Address before: 226019 No.9, Siyuan Road, Nantong City, Jiangsu Province

Patentee before: NANTONG University

CP03 Change of name, title or address

Address after: 226001 No.9, Siyuan Road, Chongchuan District, Nantong City, Jiangsu Province

Patentee after: Nantong University Technology Transfer Center Co.,Ltd.

Address before: 226019 No.205, building 6, Nantong University, No.9, Siyuan Road, Nantong City, Jiangsu Province

Patentee before: Center for technology transfer, Nantong University

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20130710

Assignee: Nantong qianrui Information Technology Co.,Ltd.

Assignor: Nantong University Technology Transfer Center Co.,Ltd.

Contract record no.: X2023980053321

Denomination of invention: A Parking Event Detection Method Based on Double Layer Background

Granted publication date: 20170208

License type: Common License

Record date: 20231221