CN105208398B - Method for obtaining a real-time background image of a road - Google Patents

Method for obtaining a real-time background image of a road

Info

Publication number
CN105208398B
Authority
CN
China
Prior art keywords
frame
cluster
road
grid
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510608645.8A
Other languages
Chinese (zh)
Other versions
CN105208398A (en)
Inventor
杨燕
潘鸿
李天瑞
王浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN201510608645.8A priority Critical patent/CN105208398B/en
Publication of CN105208398A publication Critical patent/CN105208398A/en
Application granted granted Critical
Publication of CN105208398B publication Critical patent/CN105208398B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for obtaining a real-time background image of a road. Grid clustering is used to obtain the road background image quickly: the road scene is first divided into many grid cells, H-component statistics are then computed for these cells, clusters of video frames are determined from the H-component curves, and finally a feature frame is extracted from them. The method has the advantages of low complexity, high efficiency, and easy implementation.

Description

Method for obtaining a real-time background image of a road

Technical Field

The invention belongs to the fields of digital image processing and clustering, and is particularly suited to extracting background images from road video. Modeling the road background image is an effective technique for video compression and transmission; the invention specifically concerns a new grid-clustering approach for extracting the feature frames that make up the background image.

Background Art

Recently, background modeling has played an increasingly important role in high-efficiency surveillance video coding. At the same time, many practical video coding applications impose specific requirements on background modeling, such as low storage cost and low computational complexity.

Existing background modeling methods can be roughly divided into two categories: parametric methods, such as Gaussian mixture models (GMM-1, GMM-2, GMM-3), and non-parametric methods, including Bayesian models, kernel density estimation, temporal median filtering, and mean shift.
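For context, a widely used parametric baseline of this kind is OpenCV's Gaussian-mixture background subtractor (MOG2). The minimal sketch below is purely illustrative and is not part of the invention; the input file name is a placeholder.

```python
# Illustrative only: a Gaussian-mixture (MOG2) background subtractor baseline.
import cv2

cap = cv2.VideoCapture("road.mp4")          # placeholder input video
mog2 = cv2.createBackgroundSubtractorMOG2(history=900, detectShadows=False)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog2.apply(frame)             # per-pixel foreground mask
    background = mog2.getBackgroundImage()  # current background estimate
cap.release()
```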

These methods involve heavy mathematical formulation, complex implementation, and low runtime efficiency, so there is a pressing need for new, simple, and efficient approaches to background modeling.

A search of existing patents and related technologies shows that existing methods and systems related to obtaining road background images include the following:

⑴ Low-complexity and high-efficiency background modeling for surveillance video coding, 2012 IEEE International Conference on Visual Communication and Image Processing, San Jose, USA, pp. 1-6, 11/2012

This work proposes a segment-and-weight based method (SWRA). Each pixel's values in the training frames are first divided into several time segments, and the corresponding mean and weight of each segment are computed. A weighted-averaging process is then applied to reduce the influence of foreground pixels and obtain the modeling result.

⑵ A Fuzzy Background Modeling Approach for Motion Detection in Dynamic Backgrounds, Multimedia and Signal Processing, volume 346 of Communications in Computer and Information Science, pp. 177-185

This work proposes a method that, within the Gaussian Mixture-2 model, uses a fuzzy-logic recursive adaptive filter to compute the update weights and ultimately obtain the road background image. The results show that the fuzzy approach has a clear advantage over traditional methods.

⑶ Difference of Gaussian Edge-Texture Based Background Modeling for Dynamic Traffic Conditions, Advances in Visual Computing, volume 5358 of Lecture Notes in Computer Science, pp. 406-417

This work proposes a method based on Gaussian edge texture for obtaining the road background and detecting vehicles. By establishing links between each pixel and its edge and non-edge neighbors, the method achieves strong learning performance and can detect and classify the road foreground well.

It can be seen that all of the above methods process the road image on the basis of Gaussian models. They involve complex formulas, are difficult to implement in code, and lack real-time performance, making them unsuitable for today's road monitoring applications, which demand strong real-time behavior.

No existing patent explicitly describes a clustering-based method for obtaining the road background image, so the grid-clustering method proposed here has good research significance and application value.

Summary of the Invention

In view of the shortcomings of the existing solutions stated above, the invention aims to provide an efficient and simple method that overcomes these drawbacks of the prior art.

To achieve the above object, the invention is based on the following considerations:

On a road carrying normal traffic, vehicles appear in nearly every frame. Using the concept of a grid, however, each video frame can be divided into small cells. For the image region covered by each cell, a color histogram of the H (hue) component of the HSV color space is computed, and the maximum H value of that cell over a period of time is plotted as a curve. When no car is present in the cell, the maximum-H curve is essentially stable. When a car passes through, especially one whose color differs markedly from the background, the maximum H value changes dramatically. The probability that a car has exactly the same color as the road, or that a series of cars of the same color pass through one after another, is essentially negligible.
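As an illustration of this idea, the sketch below computes the per-cell maximum-H curves described above. It is not taken from the patent text; it assumes OpenCV/NumPy, a 10x10 grid, and OpenCV's 0-179 hue range.

```python
# Sketch (assumption-based): per-cell maximum-H curves over a sequence of frames.
import cv2
import numpy as np

GRID_ROWS, GRID_COLS = 10, 10   # cells A-J x 1-10

def max_h_curves(video_path, n_frames=900):
    """Return an array of shape (n_frames, GRID_ROWS, GRID_COLS) holding the
    maximum hue (H) value inside each grid cell of each frame."""
    cap = cv2.VideoCapture(video_path)
    curves = []
    for _ in range(n_frames):
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        h = hsv[:, :, 0]                     # hue channel (0-179 in OpenCV)
        rows = np.array_split(h, GRID_ROWS, axis=0)
        grid_max = np.array([[cell.max() for cell in np.array_split(r, GRID_COLS, axis=1)]
                             for r in rows])
        curves.append(grid_max)
    cap.release()
    return np.asarray(curves)

# A flat curve for a cell suggests free road; sharp excursions suggest passing vehicles.
```

Plotting one cell's curve (for example with matplotlib) reproduces the kind of trace shown in Figs. 2 and 3.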

Specifically, the processing comprises the following steps:

A method for obtaining a real-time background image of a road uses grid clustering to obtain the road background image quickly: the road scene is first divided into many grid cells, H-component statistics are then computed for these cells, clusters of video frames are determined from the H-component curves, and a feature frame is finally extracted from them. The processing comprises the following steps (an illustrative code sketch follows the list):

⑴ Extract the 30-second road video into frames; at 30 frames per second, this gives 900 frames in total;

⑵ Divide each frame into a grid of 100 cells, indexed by a two-dimensional matrix with rows A-J and columns 1-10;

⑶ For each cell of each frame, compute the H component in HSV and record the maximum H value of the cell; plot the maximum H value of each cell over the 900 frames as a curve. Fluctuations of the curve indicate that the dominant color has shifted during that period, i.e. that a vehicle has passed through the region;

⑷ Extract the feature frame by clustering:

a. Starting from cell A1, cluster all points whose deviation lies within a threshold of 10 into one cluster; the largest such cluster is the large cluster corresponding to the road background;

b. Within the largest cluster obtained in step a, cluster all points with identical values into one cluster, find the cluster containing the most points, take its longest run of consecutive points, and extract the feature frame. The extraction model for the feature frame p_k is:

where p_k is the extracted feature frame, p_n is each frame in the largest cluster, n is the number of frames in the largest cluster, i is the index of the first frame of the largest cluster, and j is the index of its last frame.

(5) Replace the cell's original frame with the extracted feature frame, and return to step (4) to extract the feature frame of the next cell; once cell J10 has been processed, the real-time background image of the whole road is obtained;

(6) Return to step (1) to start the next round of computation.
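The sketch below illustrates the clustering of step ⑷ for a single grid cell, assuming the per-frame maximum-H values have already been computed (for example with the earlier max_h_curves sketch). Because the extraction model for p_k is not reproduced in the text, the sketch simply returns the middle frame of the longest run of identical values, which is one plausible reading of steps a and b; the value binning used to approximate the threshold-10 clustering is likewise an assumption.

```python
# Sketch (assumption-based): feature-frame selection for one grid cell.
import numpy as np

def feature_frame_index(max_h, threshold=10):
    """max_h: 1-D array of per-frame maximum H values for one cell.
    Returns the index of the chosen feature frame."""
    # Step a: approximate "all points within a deviation of 10" by binning
    # values into width-`threshold` bins and keeping the most populated bin,
    # taken to be the road-background cluster.
    bins = np.asarray(max_h).astype(int) // threshold
    background_bin = np.bincount(bins).argmax()
    in_background = np.where(bins == background_bin)[0]

    # Step b: within that cluster, group identical values and keep the most
    # frequent value.
    vals = np.asarray(max_h)[in_background].astype(int)
    best_val = np.bincount(vals).argmax()
    candidates = in_background[vals == best_val]

    # Longest run of consecutive frame indices holding that value.
    runs = np.split(candidates, np.where(np.diff(candidates) != 1)[0] + 1)
    longest = max(runs, key=len)
    i, j = longest[0], longest[-1]

    # Assumed reading of the extraction model: take the middle frame of the run.
    return (i + j) // 2
```

Under this reading, the returned index identifies the frame whose cell region replaces the original content in step (5).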

In practice, each frame need not be divided into exactly 100 cells; the number of cells can be chosen according to actual needs.

Aimed at the problem of modeling the road background image, the invention proposes a grid-clustering method for obtaining the road background image that is highly feasible, simple, and strongly real-time.

The accompanying drawings are described as follows:

Fig. 1 shows the grid division of the first frame of the video.

Fig. 2 is the H-component curve of region b1.

Fig. 3 is the H-component curve of region b5.

Fig. 4 is the picture of frame 1.

Fig. 5 is the picture of frame 900.

Fig. 6 is the synthesized road background image.

Fig. 7 is a flowchart of the method of the invention.

Detailed Description of the Embodiments

The invention is further described below with reference to the accompanying drawings.

Because of interference from sunlight, cloud cover, and foliage, an exactly stable curve cannot be obtained. We therefore find the points of the curve that vary within a threshold and group them into one cluster, which is taken to be the road-background cluster. Within that cluster we then look for the frame that shows steady-state variation and has the largest characteristic value, and extract it as the background image of that cell. The grid division is shown in Fig. 1. The procedure is as follows:

⑴ Extract the 30-second road video into frames; if the video is 30 frames per second, this gives 900 frames in total;

⑵ Divide each frame into a grid of 100 cells, indexed by a two-dimensional matrix with rows A-J and columns 1-10, as shown in Fig. 1;

⑶ For each cell of each frame, compute the H component in HSV and record the maximum H value of the cell. Plot the maximum H value of each cell over the 900 frames as a curve. Fluctuations of the curve indicate that the dominant color has shifted during that period and that a vehicle has passed through the region.

⑷ Extract the feature frame by clustering:

The extraction formula for the feature frame p_k is:

p: the selected feature frame;

i: the starting position selected in step (4)-b;

j: the ending position selected in step (4)-b.

a. Starting from cell A1, cluster all points whose deviation lies within a threshold of 10 into one cluster. This filters out shifts of the H value caused by swaying leaves, scattered sunlight, and changing cloud cover. Because the road in a given cell is free of vehicles most of the time, this largest cluster is taken to be the large cluster corresponding to the road background;

b. Continue processing the largest cluster obtained in step a: cluster all points with identical values into one cluster and find the cluster containing the most points. This identifies the frames that best represent the road background under the current conditions;

c. Within the cluster obtained in step b, take the longest run of consecutive points and extract the feature frame. This step further confirms that, at that moment, the region is in its most stable state with respect to external influences and the road is free of vehicles. This feature frame is taken to represent the real-time background image of that region of the road, so it can replace the original frame. Figs. 2 and 3 plot the variation of the maximum H value over the 30 seconds.

(5) Replace the original frame with the obtained feature frame, and return to step (4) to extract the feature frame of the next cell, until J10 has been processed;

(6) Return to step (1) to start the next round of computation; the overall flow is shown in Fig. 7. An end-to-end code sketch of this per-cell loop follows.
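The sketch below ties steps (1)-(6) together and assembles the full background image cell by cell, reusing the feature_frame_index helper sketched after the summary list. The 10x10 grid, the 900-frame window, and OpenCV-based decoding are carried-over assumptions rather than requirements of the method.

```python
# Sketch (assumption-based): assembling the road background from per-cell feature frames.
import cv2
import numpy as np

def build_background(video_path, n_frames=900, grid=(10, 10), threshold=10):
    """Return a background image assembled from per-cell feature frames."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    for _ in range(n_frames):                    # step (1): 30 s at 30 fps
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    frames = np.asarray(frames)                  # (T, H, W, 3), BGR

    rows, cols = grid                            # step (2): 10 x 10 cells
    cell_h = frames.shape[1] // rows
    cell_w = frames.shape[2] // cols

    # Step (3): hue channel of every frame (per-cell maxima are taken below).
    hue = np.asarray([cv2.cvtColor(f, cv2.COLOR_BGR2HSV)[:, :, 0] for f in frames])

    background = frames[0].copy()
    for r in range(rows):                        # steps (4)-(5): cells A1 ... J10
        for c in range(cols):
            y, x = r * cell_h, c * cell_w
            max_h = hue[:, y:y + cell_h, x:x + cell_w].reshape(len(frames), -1).max(axis=1)
            k = feature_frame_index(max_h, threshold)   # clustering sketch from step (4)
            background[y:y + cell_h, x:x + cell_w] = frames[k, y:y + cell_h, x:x + cell_w]
    return background
```

A caller might write, for example, bg = build_background("road.mp4") followed by cv2.imwrite("background.png", bg); re-running the function on each successive 30-second window corresponds to step (6).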

Comparing frame 1 with frame 900, the pictures show a considerable change in color depth, as illustrated in Figs. 4, 5, and 6. The composite image in Fig. 6 shows that, because of cloud movement and changing light, the different grid cells differ in color depth. The experiment supports the conclusion that real-time replacement of the road background image is of practical value for reducing network video traffic.

The above experiments and discussion show that the method has the following clear advantages:

1. The method is highly feasible and simple to implement;

2. The background image obtained by the method is strongly real-time;

3. The feature frames extracted by the method respond promptly to cloud movement, light changes, and other variations, and are highly robust to the swaying of leaves on both sides of the road.

Claims (1)

1. A method for obtaining a real-time background image of a road, characterized in that grid clustering is used to obtain the road background image quickly: the road is first divided into many grid cells, H-component statistics are then computed for these cells, clusters of video frames are determined from the H-component curves, and a feature frame is finally extracted from them, comprising the following processing steps:

⑴ extracting a 30-second road video into frames, the video being 30 frames per second, 900 frames in total;

⑵ dividing each frame into a grid of 100 cells, indexed by a two-dimensional matrix with rows A-J and columns 1-10;

⑶ computing, for each cell of each frame, the H component in HSV and recording the maximum H value of the cell; plotting the maximum H value of each cell over the 900 frames as a curve, fluctuations of the curve indicating that the dominant color has shifted within the 30 seconds and that a vehicle has passed through the measured region;

⑷ extracting the feature frame by clustering:

a. starting from cell A1, clustering all points whose deviation lies within a threshold of 10 into one cluster, the largest such cluster being the large cluster corresponding to the road background;

b. clustering all points with identical values in the largest cluster obtained in step a into one cluster, finding the cluster containing the most points, taking its longest run of consecutive points, and extracting the feature frame, the extraction model for the feature frame p_k being:

where p_k is the extracted feature frame, p_n is each frame in the largest cluster, n is the number of frames in the largest cluster, i is the index of the first frame of the largest cluster, and j is the index of its last frame;

(5) replacing the cell's original frame with the extracted feature frame and returning to step (4) to extract the feature frame of the next cell, until J10 has been processed, thereby obtaining the real-time background image of the whole road;

(6) returning to step (1) to start the next round of computation.
CN201510608645.8A 2015-09-22 2015-09-22 A kind of method for obtaining the real-time Background of road Expired - Fee Related CN105208398B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510608645.8A CN105208398B (en) 2015-09-22 2015-09-22 A kind of method for obtaining the real-time Background of road

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510608645.8A CN105208398B (en) 2015-09-22 2015-09-22 A kind of method for obtaining the real-time Background of road

Publications (2)

Publication Number Publication Date
CN105208398A CN105208398A (en) 2015-12-30
CN105208398B true CN105208398B (en) 2018-06-19

Family

ID=54955785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510608645.8A Expired - Fee Related CN105208398B (en) 2015-09-22 2015-09-22 A kind of method for obtaining the real-time Background of road

Country Status (1)

Country Link
CN (1) CN105208398B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101215987B1 * 2008-12-22 2012-12-28 Electronics and Telecommunications Research Institute (한국전자통신연구원) Apparatus for separating foreground from background and method thereof

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0993443A (en) * 1995-05-16 1997-04-04 Sanyo Electric Co Ltd Color-monochromatic image conversion method and edge position detection method for object to be inspected
CN101533515A (en) * 2009-04-13 2009-09-16 浙江大学 Background modeling method based on block facing video monitoring
CN101834981A (en) * 2010-05-04 2010-09-15 崔志明 Video background extracting method based on online cluster
CN102722720A (en) * 2012-05-25 2012-10-10 苏州大学 Video background extraction method based on hue-saturation-value (HSV) space on-line clustering
JP2015121901A (en) * 2013-12-20 2015-07-02 日本放送協会 Video area division device and video area division program
CN103985114A (en) * 2014-03-21 2014-08-13 南京大学 Surveillance video person foreground segmentation and classification method

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
A fuzzy background modeling approach for motion detection in dynamic backgrounds; Ying Ding et al.; Multimedia and Signal Processing; 20090918; pp. 177-185 *
Foreground object detection in complex scenes using cluster color; Chung Chi Lin et al.; 2014 8th International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing; 20140602; pp. 529-532 *
Low-complexity and high-efficiency background modeling for surveillance video coding; Xinguo Zhang et al.; Visual Communications and Image Processing, 2012 IEEE; 20121130; pp. 1-6 *
Research on video vehicle detection and tracking algorithms based on the HSV color space; 赵作升; China Master's Theses Full-text Database; 20090215; I138-478 *
Extraction of feature points for motion estimation based on a grid model; 杨坤; China Master's Theses Full-text Database; 20081115; I138-731 *
Road modeling using a color-space clustering method; 向宸薇; Journal of Image and Graphics; 20130816; Vol. 18, No. 8; pp. 0976-0981 *
An improved road background extraction and updating algorithm; 李洁 et al.; Video Applications and Engineering; 20130612; Vol. 37, No. 11; pp. 194-197 *

Also Published As

Publication number Publication date
CN105208398A (en) 2015-12-30

Similar Documents

Publication Publication Date Title
CN108122247B (en) A kind of video object detection method based on saliency and feature prior model
CN102682303B (en) Crowd exceptional event detection method based on LBP (Local Binary Pattern) weighted social force model
CN109409242B (en) A method for detecting black smoke vehicles based on recurrent convolutional neural network
CN102915544B (en) Video image motion target extracting method based on pattern detection and color segmentation
CN102073852B (en) Multiple vehicle segmentation method based on optimum threshold values and random labeling method for multiple vehicles
CN103020985B (en) A kind of video image conspicuousness detection method based on field-quantity analysis
WO2023207742A1 (en) Method and system for detecting anomalous traffic behavior
CN104077613A (en) Crowd density estimation method based on cascaded multilevel convolution neural network
CN110443761B (en) Single image rain removing method based on multi-scale aggregation characteristics
CN103853724B (en) multimedia data classification method and device
CN102903124A (en) Moving object detection method
CN103530893A (en) Foreground detection method in camera shake scene based on background subtraction and motion information
CN102147861A (en) Moving target detection method for carrying out Bayes judgment based on color-texture dual characteristic vectors
CN106447674A (en) Video background removing method
CN110826429A (en) Scenic spot video-based method and system for automatically monitoring travel emergency
CN102592138A (en) Object tracking method for intensive scene based on multi-module sparse projection
CN106156747B (en) The method of the monitor video extracting semantic objects of Behavior-based control feature
CN112419163B (en) Single image weak supervision defogging method based on priori knowledge and deep learning
CN110111267A (en) A Single Image Rain Removal Method Based on Optimization Algorithm Combined with Residual Network
CN114565973A (en) Motion recognition system, method and device and model training method and device
CN1266656C (en) Intelligent alarming treatment method of video frequency monitoring system
CN111444913A (en) License plate real-time detection method based on edge-guided sparse attention mechanism
CN110889360A (en) A method and system for crowd counting based on switched convolutional network
CN102314681A (en) Adaptive KF (keyframe) extraction method based on sub-lens segmentation
CN110503049B (en) A method for estimating the number of vehicles in satellite video based on generative adversarial network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180619

Termination date: 20210922