CN102903120A - Time-space condition information based moving object detection method - Google Patents


Info

Publication number
CN102903120A
Authority
CN
China
Prior art keywords
image
background
condition information
detection
image block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012102513544A
Other languages
Chinese (zh)
Inventor
包卫东
熊志辉
王斌
谭树人
刘煜
王炜
徐玮
陈立栋
张茂军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Vision Splend Photoelectric Technology Co ltd
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN2012102513544A priority Critical patent/CN102903120A/en
Publication of CN102903120A publication Critical patent/CN102903120A/en
Pending legal-status Critical Current


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a moving object detection method based on spatio-temporal conditional information. The method builds a spatio-temporal target detection model that accounts for the spatio-temporal saliency of human vision, computes the conditional probability that the observed image belongs to the spatio-temporal reference background, applies a negative-logarithm kernel to the conditional probability as a nonlinear transform to extract spatio-temporal conditional information, and, exploiting the local consistency of image features, takes a weighted sum of the conditional information over each neighborhood as the feature for target detection with a linear classifier. Color histograms are used to compute the conditional probability quickly, and image blocks replace individual pixels for modeling and detection, reducing algorithmic complexity and storage requirements; an image-block difference pre-detection mechanism further accelerates detection. The method has low algorithmic complexity, a small memory footprint, and high real-time performance; it effectively suppresses background disturbance and isolated noise, achieves real-time moving object detection on current computers, and is suitable for embedded smart-camera platforms.

Description

A Moving Object Detection Method Based on Spatio-Temporal Conditional Information

Technical field:

The invention relates to video moving object segmentation in computer vision, and in particular to moving object detection in video surveillance systems.

Background:

Video moving object detection is one of the fundamental problems in computer vision. It underpins higher-level applications such as object tracking, object recognition, human-machine interfaces, action recognition, and behavior understanding. It already plays an important role in applications such as video surveillance and video retrieval, and will play an even larger role in fields such as defense, transportation, security, culture, and entertainment.

Intelligent video surveillance systems free people from the heavy burden of manual video monitoring: they reduce manual intervention and operator workload, automatically discover moving objects in the monitored environment, automatically track and identify them, and automatically detect suspicious events in the monitored scene and extract information of interest. All of these intelligent analysis functions rely on video moving object detection, which separates moving objects from the background in a video so that the objects can be extracted. It is the foundational algorithm of intelligent video surveillance and the algorithmic basis for subsequent target tracking, recognition, and suspicious-event detection.

The mainstream moving object detection methods today are background subtraction and optical flow. The heavy computation of optical flow makes it hard to apply in practice. Background subtraction is currently the most common and most effective approach; its core idea is to describe the scene with a suitable model and to detect moving objects from changes relative to that model. Common background subtraction methods include the Gaussian Mixture Model (GMM), the non-parametric model (Kernel Density Estimation, KDE), and the Code Book model, which model pixel intensity in the temporal domain to detect moving objects. The challenge of moving object detection is to overcome changes in the natural environment (illumination changes, swaying leaves, rain and snow, water surface fluctuations, etc.) and the effects of imaging equipment (electronic noise, camera shake, etc.). Background subtraction methods based on temporal modeling detect and localize moving objects from temporal changes of image features (color, gradient, texture, edges, etc.). However, the image features at different pixel positions are not independent of one another, so temporal changes alone cannot handle background disturbance in the scene; even multimodal models such as the GMM have difficulty suppressing environmental noise. Methods based on image segmentation (e.g., random-field-based moving object segmentation) can suppress isolated noise, but they depend on the initial detection result, struggle to produce accurate segmentations when the initial detection is badly wrong, and have poor real-time performance. Methods based on spatio-temporal models fully exploit the spatio-temporal consistency of the image color distribution and model the spatio-temporal domain jointly for detection, performing well against background disturbance in dynamic scenes. However, because they must process large amounts of spatio-temporal data, these algorithms have high computational complexity, large memory requirements, and poor real-time performance, and because of isolated noise their detection results still require post-processing such as morphological filtering or image segmentation to obtain good results.

As video surveillance systems move from the analog era to the network era, cameras are also becoming intelligent, and more and more intelligent video processing algorithms, including moving object detection, need to be ported to smart cameras as embedded implementations. However, existing video moving object detection algorithms that can handle environmental noise in dynamic scenes have both high computational complexity and very large memory requirements, making them difficult to deploy on embedded smart-camera platforms. Targeting the practical needs of intelligent video surveillance, and the susceptibility of moving object detection in dynamic scenes to environmental noise, we propose a moving object detection method for dynamic scenes based on spatio-temporal conditional information. The method effectively suppresses environmental noise in dynamic scenes and reliably detects moving objects, and it uses an image-block strategy to accelerate detection, lowering algorithmic complexity, improving real-time performance, and reducing memory requirements, so that it not only achieves real-time detection of moving objects in dynamic scenes on current PC platforms but is also suitable for embedded smart-camera platforms.

Moving object detection is in essence a binary classification problem: with the background sequence as the reference condition, the pixels of the currently observed image are classified as foreground (also called the target in the present invention) or background. For reasons of algorithmic complexity, existing detection algorithms mostly use linear classifiers to classify image pixels and segment the foreground from the current image. In dynamic scenes (for example, rippling water or swaying leaves), however, the disturbed background (the fluctuating water surface, the swinging leaves) and the foreground are often not linearly separable. Taking the detection of floating objects on a rippling water surface (Fig. 1b) as an example, background differencing, the Gaussian mixture model, and the non-parametric model all exhibit, to some degree, this linear inseparability of background and foreground.

Background-difference moving object detection: subtract the reference background image from the input image to obtain a difference image, use it as the classification feature, and detect moving objects with a binarization operation (the simplest binary classifier). As shown in Fig. 1, differencing the current input image (Fig. 1b) against the reference background (Fig. 1a) yields the background difference image (Fig. 1c); the color histograms of the background region and the foreground region of this difference image are then computed separately, and the separability of these two histograms reflects the linear separability of background and foreground. The region histograms are obtained as follows: the target region in each frame is labeled manually in advance to obtain a moving-target mask template (Fig. 1d); the foreground difference histogram is then accumulated from the pixels inside the mask region, and the background difference histogram from the pixels outside it (Fig. 2, b0). In the same way, the difference-image histograms of the target and background regions over the entire video are obtained, as shown in Fig. 2, e0. Zooming in on the lower halves of b0 and e0 in Fig. 2 (c0 and f0) shows that the difference-image histograms of the target and background regions overlap over a wide range, i.e., background and foreground are poorly separable. Using the background difference image as the detection feature therefore makes linear classification difficult: for moving object detection in dynamic scenes, target and background are linearly inseparable under background-difference features.
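The background-difference baseline above can be sketched in a few lines. The 3x3 grayscale frames and the threshold value here are hypothetical, chosen only to illustrate the difference-then-binarize pipeline:

```python
def abs_diff(frame, background):
    """Per-pixel absolute difference between input frame and reference background."""
    return [[abs(f - b) for f, b in zip(fr, br)] for fr, br in zip(frame, background)]

def binarize(diff, threshold):
    """Simplest binary classifier: 1 = foreground, 0 = background."""
    return [[1 if d > threshold else 0 for d in row] for row in diff]

# Hypothetical 3x3 grayscale frames: a bright "object" appears at the center.
background = [[10, 10, 10],
              [10, 10, 10],
              [10, 10, 10]]
frame      = [[12,  9, 10],
              [10, 200, 11],
              [ 8, 10, 13]]

mask = binarize(abs_diff(frame, background), threshold=25)
print(mask)  # only the center pixel exceeds the threshold
```

This is the d0 branch of Fig. 2: a single fixed threshold on the raw difference, which is exactly what fails when background ripple produces differences comparable to those of the target.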

The Gaussian mixture model and the non-parametric model are two typical video moving object detection algorithms that model the image color probability distribution; both use the conditional probability that a pixel of the image under detection belongs to the background as the classification feature and then detect with a linear classifier. Since a non-parametric model can represent an arbitrary probability distribution, we take it as the example for examining the linear separability of foreground and background in background subtraction methods based on color probability distribution modeling. As shown in Fig. 2, the non-parametric estimate of the conditional probability that the current input image (Fig. 1b) belongs to the background b gives the feature image a1 in Fig. 2; following the procedure above, the histograms of the target and background regions of this feature image are obtained (Fig. 2, b1), as are those over the entire video (Fig. 2, e1). Zooming in on the lower halves of b1 and e1 (Fig. 2, c1, f1) shows that, compared with c0 and f0, the overlap between the target and background histograms is reduced and linear separability is improved, but the linear boundary between target and background is narrow, so the choice of segmentation threshold is easily disturbed by noise, harming the robustness of the algorithm.

Applying a nonlinear transform to the conditional probability p(x|b) of the non-parametric model yields the feature image shown in Fig. 2, a2; the corresponding target and background histograms (Fig. 2, b2, e2) are obtained in the same way. The enlarged views of their lower halves (Fig. 2, c2, f2) show that the linear boundary between target and background has widened: the nonlinear transform strengthens the linear separability of foreground and background.

The distribution of image features is locally consistent: image pixels are not isolated, and each is related to the pixels in its neighborhood. The image feature at the current pixel x is influenced by the features of its neighbors, so taking a weighted sum of the nonlinearly transformed image features over the neighborhood further suppresses isolated noise, widens the linear boundary between target and background (Fig. 2, b3, c3, e3, f3), improves classification robustness, and reduces classification error.

As shown in Fig. 2, d0 is the detection result of the background-difference algorithm, d1 that of the non-parametric model, d2 that of the nonlinearly transformed conditional probability, and d3 that of the neighborhood-weighted sum of the transformed feature image. The results show that nonlinearly transforming the conditional probability and summing it with weights over the neighborhood effectively suppresses background disturbance in dynamic scenes, reduces isolated-noise contamination, and yields good detection results. The present invention therefore adopts a nonlinear transform of the image color probability distribution to strengthen the linear separability of foreground and background in dynamic scenes and improve the accuracy of moving object detection.
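The transform-then-sum pipeline described above can be sketched as follows. The probability map, the uniform 3x3 averaging kernel, and the threshold are illustrative assumptions, not values from the patent; the sketch only shows how the negative-log transform plus neighborhood weighting suppresses an isolated noise pixel while a coherent foreground patch survives linear thresholding:

```python
import math

def conditional_information(p, eps=1e-6):
    """Nonlinear transform: I(x|b) = -log p(x|b)."""
    return -math.log(max(p, eps))

def neighborhood_weighted_sum(info, weights):
    """Weighted sum of conditional information over each 3x3 neighborhood
    (image borders skipped for brevity); exploits local feature consistency."""
    h, w = len(info), len(info[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(weights[di + 1][dj + 1] * info[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
    return out

# Hypothetical per-pixel probabilities p(x|b): one isolated noise pixel (0.05)
# and a coherent 2x2 foreground patch (0.01) on a stable background (0.9).
p_map = [[0.9, 0.9,  0.9,  0.9,  0.9],
         [0.9, 0.05, 0.9,  0.9,  0.9],
         [0.9, 0.9,  0.01, 0.01, 0.9],
         [0.9, 0.9,  0.01, 0.01, 0.9],
         [0.9, 0.9,  0.9,  0.9,  0.9]]
info = [[conditional_information(p) for p in row] for row in p_map]
weights = [[1 / 9] * 3 for _ in range(3)]  # uniform 3x3 averaging kernel
feat = neighborhood_weighted_sum(info, weights)

# Linear classification on the averaged feature: the coherent patch stays
# above the threshold while the isolated noise pixel is averaged away.
tau = 1.5
mask = [[1 if f > tau else 0 for f in row] for row in feat]
```

The isolated noise pixel has high conditional information on its own (-log 0.05 is about 3.0) but its neighborhood average falls below tau, while every pixel of the coherent patch keeps a high average, which is the effect shown in Fig. 2, d3.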

Summary of the invention:

Aimed at the problem that moving object detection in dynamic scenes, in computer vision applications and especially in intelligent video surveillance systems, is easily disturbed by environmental noise such as background perturbation and so produces false detections, the present invention proposes a moving object detection method for dynamic scenes based on spatio-temporal conditional information, which suppresses disturbed-background interference in dynamic scenes and detects moving objects accurately.

The solution proposed by the present invention is:

1. Construct a spatio-temporal model that accounts for visual spatio-temporal saliency; with this model, use non-parametric probability density estimation to estimate the conditional probability p(x|b) that a pixel x of the image under detection belongs to the reference background sequence b; apply a negative-logarithm kernel function to p(x|b) as a nonlinear transform, yielding the spatio-temporal conditional information I(x|b) of x; to account for the influence of neighboring pixels on x, take a weighted sum of the conditional information over the neighborhood of x, and use it as the feature with which a linear classifier separates target from background;

2. In the spatio-temporal model, use the color histogram over the reference background domain of pixel x in the current image as the reference background probability distribution when computing the conditional information;

3. Optimize the above method with an image-partitioning scheme, performing background modeling and detection on image blocks (Image Block, IB) instead of individual pixels;

4. Use the background color histogram of each image block as the background model, replacing the cached background image sequence of the spatio-temporal model and reducing data storage requirements;

5. Use the reference background color histogram of an image block as the reference background model shared by all pixels in that block; compute the conditional information and its weighted sum, and detect at the block level;

6. Use an image-block difference pre-detection mechanism to pre-select the blocks that have changed as candidate detection regions, reducing the amount of data processed by the conditional-information block detection;

7. When an image block is detected as background, update its reference background color histogram with the block's color histogram in the current frame, and randomly select one block from its neighborhood and update its model in the same way; when a block is detected as a target, do not update.
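Steps 3-7 above can be sketched as follows. The bin count, block size, thresholds, and the exact form of the neighbor update are assumptions for illustration; the sketch shows the shared per-block histogram model, the cheap block-difference pre-detection that gates the conditional-information test, and the background-only model update with a randomly chosen neighbor:

```python
import math
import random

BINS = 8   # color histogram bins per block (assumed value)
BLOCK = 4  # block side length in pixels (assumed value)

def block_histogram(block_pixels):
    """Normalized color histogram of one image block (grayscale 0-255)."""
    hist = [0.0] * BINS
    for v in block_pixels:
        hist[v * BINS // 256] += 1.0
    n = len(block_pixels)
    return [h / n for h in hist]

def block_information(block_pixels, bg_hist, eps=1e-6):
    """Sum of conditional information I(x|b) = -log p(x|b) over the block,
    with p(x|b) read from the block's shared reference background histogram."""
    return sum(-math.log(max(bg_hist[v * BINS // 256], eps))
               for v in block_pixels)

def detect_block(block_pixels, prev_pixels, bg_hist, diff_tau, info_tau):
    """Two-stage test: cheap block-difference pre-detection first, then the
    conditional-information test only on blocks that actually changed."""
    if sum(abs(a - b) for a, b in zip(block_pixels, prev_pixels)) < diff_tau:
        return 'background'  # unchanged block: skip the costlier test
    return 'target' if block_information(block_pixels, bg_hist) > info_tau else 'background'

def update_model(models, key, block_pixels, neighbors):
    """Background blocks only: replace the reference histogram with the current
    block histogram, and apply the same update to one randomly chosen neighbor
    (here refreshed with the same histogram for brevity). Target blocks are
    never updated."""
    cur = block_histogram(block_pixels)
    models[key] = cur
    models[random.choice(neighbors)] = cur

# Hypothetical flattened 4x4 blocks: dark background, then a bright block arrives.
bg_pixels  = [20] * (BLOCK * BLOCK)
new_pixels = [220] * (BLOCK * BLOCK)
models = {(0, 0): block_histogram(bg_pixels), (0, 1): block_histogram(bg_pixels)}

label = detect_block(new_pixels, bg_pixels, models[(0, 0)], diff_tau=100, info_tau=50.0)
```

A bright block falls into an empty histogram bin, so its summed conditional information is large and it is labeled a target; an unchanged block is rejected by the difference pre-test before any histogram lookup.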

The moving object detection method based on spatio-temporal conditional information for dynamic scenes proposed by the present invention has the following main advantages:

1. The negative-logarithm nonlinear transform of the conditional probability widens the linear classification boundary when a linear classifier is used for detection, strengthening the linear separability of target and background and improving the robustness of detection.

2. The negative-logarithm transform of the conditional probability has a clear physical meaning: it is the conditional information I(x|y), the uncertainty of the variable x conditioned on y. In video, conditioned on the reference background b, the conditional information I(x|b) of the current observation x measures how well the reference background determines the observation. In a dynamic scene, the reference background b fully determines the unchanged regions, partially determines the regions changed by background disturbance, and can hardly determine the regions changed by object motion. Conditional information therefore serves as a classification feature that linearly separates disturbed background from moving objects.

3. The weighted summation of the conditional information suppresses isolated noise, strengthens resistance to disturbed-background interference, further improves the linear separability of target and background, and reduces classification error.

4. The spatio-temporal model is built around visual spatio-temporal saliency, matching the characteristics of human visual perception and making the motion information of interest easy to extract.

5. Detecting image blocks instead of individual pixels lowers algorithmic complexity and memory requirements. The image-block difference pre-detection mechanism filters out unchanged image regions with a simple computation, reducing the workload of subsequent detection and speeding up the algorithm.

6. The model update method adapts effectively to illumination changes in the scene without absorbing target information into the reference background, avoiding the missed detections at the tail of slowly moving targets that sliding-window updates produce.

7. Overall, the present invention not only achieves effective detection of moving objects in dynamic scenes but also overcomes the drawbacks of existing dynamic-scene detection methods: high algorithmic complexity, poor real-time performance, large memory requirements, and difficulty of embedded implementation. It achieves real-time detection of moving objects in dynamic scenes on current computer platforms and is suitable for embedded smart-camera applications.

Brief description of the drawings:

Fig. 1 is a schematic diagram of the background-difference algorithm;

in which a is the background image;

b is the input image;

c is the difference image;

d is the target mask template; the dark central region is the foreground and the remaining light regions are the background.

Fig. 2 compares the linear separability of foreground and background across moving object detection algorithms;

in which a0-a3 are, respectively, the background-difference feature image, the conditional-probability-density feature image, the conditional-information feature image, and the weighted-conditional-information feature image;

b0-b3 are the feature distribution histograms of a0-a3, computed with reference to the target mask template d in Fig. 1;

c0-c3 are enlarged views of the bottom regions of b0-b3;

d0-d3 are the linear classification results of the feature images a0-a3;

e0-e3 are the foreground and background feature distribution histograms over the entire video for the background-difference, conditional-probability-density, conditional-information, and weighted-conditional-information feature images;

f0-f3 are enlarged views of the bottom regions of e0-e3.

Fig. 3 is the center-surround visual saliency model;

in which a is an image of floating objects on a water surface, where 1 is the central region and 2 is the surrounding reference domain;

b is the difference-of-Gaussians model;

c is the visual saliency map extracted from a under the difference-of-Gaussians model b; the brighter the image, the higher the saliency.

Fig. 4 is the visual-saliency spatio-temporal model;

in which 1 is the central domain, corresponding to the central region 1 shown in Fig. 3a;

2 is the surrounding reference domain, corresponding to the surrounding reference domain 2 shown in Fig. 3a;

3 is a pixel in the image;

4 is the background reference domain of pixel 3.

Fig. 5 is a schematic diagram of block-wise image detection;

in which 3 is a pixel in the image;

4 is the background reference domain of the image block, corresponding to 4 in Fig. 4;

5 is an image block;

6 is the entire image.

In the above drawings:

1 - central domain    2 - surrounding reference domain    3 - pixel    4 - background reference domain    5 - image block    6 - image

Detailed description:

The basic idea of the proposed moving object detection method for dynamic scenes based on spatio-temporal conditional information is as follows: construct a spatio-temporal model that accounts for visual spatio-temporal saliency; with this model, use non-parametric probability density estimation to estimate the conditional probability p(x|b) that a pixel x of the image under detection belongs to the reference background sequence b; apply a negative-logarithm kernel function to p(x|b) as a nonlinear transform to obtain the spatio-temporal conditional information I(x|b) of x; accounting for the influence of neighboring pixels, take a weighted sum of the conditional information over the neighborhood of x; and use it as the feature with which a linear classifier separates target from background, completing the detection. To lower algorithmic complexity, increase speed, and reduce storage requirements, we optimize this method with an image-partitioning strategy. The optimized method reaches 26 fps (frames per second) on 640*480 dynamic-scene video on a computer with a dual-core Intel Pentium Dual CPU E2180 at 2.0 GHz and 1 GB of RAM, meeting real-time application requirements.

We construct the spatio-temporal model around visual spatio-temporal saliency, modeling the reference background for estimating the reference background color distribution and for detecting targets in the input image. The visual saliency of the human visual system manifests as spatial saliency and temporal saliency. Spatial saliency is reflected in how the eye, when viewing an image, attends to high-saliency regions and ignores low-saliency regions. The receptive field of human retinal ganglion cells behaves as a center-surround model, the difference-of-Gaussians model (shown in Fig. 3b). Under this model, the more pronounced the difference between center and surround, the larger the receptive field's visual response and the higher the visual saliency of the corresponding image region. Concretely, as shown in Fig. 3, an image of floating objects on a water surface (Fig. 3a), passed through the difference-of-Gaussians model (Fig. 3b), yields the spatial saliency map in Fig. 3c. Because the plastic bottle floating on the water differs markedly from its surround (the water surface), that region produces a large response under the difference-of-Gaussians model and has high saliency, while the rippling water differs little from its surround (also water), producing a small response and low saliency. Temporal saliency is reflected in how the eye readily ignores periodically recurring changes (such as swaying leaves or a fluctuating water surface) while attending to novel (sudden) changes, such as a moving object against a disturbed background. A video is a sequence of images ordered in time, so both spatial saliency (each image has its own spatial saliency) and temporal saliency (changes of image content over time) are present in it. In video, frequently recurring changes have low visual saliency, while newly appearing changes have high visual saliency. Compared with a disturbed background (frequently recurring change), a moving object usually appears as a new change and has higher visual saliency. Accounting for the temporal and spatial saliency of human vision in video moving object detection therefore effectively filters out background disturbance and improves detection in dynamic scenes.

As shown in Figure 4, the neighborhood 1 of pixel 3 in the input image (CurImg) serves as the center region of the center-surround visual attention model, the surrounding region 2 corresponding to neighborhood 1 serves as the surround region, and the outer boundary of the surround region delimits the spatial range 4 of the reference background sequence. The N frames of the background sequence (BckSeq, marked as the hatched region 4 in Figure 4) serve as the spatio-temporal reference background of pixel 3, and the spatio-temporal condition information of pixel 3 is computed with respect to this reference.

Computing the spatio-temporal condition information requires the conditional probability p(x|b) that the pixel value x belongs to the reference background. We compute this conditional probability with the kernel density estimation (KDE) method, a non-parametric model whose general form is given in Formula 1.

p(x|S) = (1/|S|) Σ_{s∈S} K(s − x)        Formula 1

where K is a kernel function satisfying ∫K(x)dx = 1, K(x) = K(−x), ∫xK(x)dx = 0, and ∫xxᵀK(x)dx = I_|x| (the identity matrix of the dimension of x); x is the observed data, S is the reference data set, and |S| is the normalization factor, i.e. the number of data points contained in S. We use the kernel δ(s − x) instead of the commonly used Gaussian kernel for the kernel density estimation, as shown in Formula 2.

p(x|S) = (1/|S|) Σ_{s∈S} δ(s − x)        Formula 2

The δ(s − x) kernel can be evaluated quickly with a statistical histogram; therefore the conditional probability p(x|b) that pixel x belongs to the reference background b can be computed quickly from the color histogram of the reference background, as shown in Formula 3, where H is the reference background color histogram and H(x) is the value of pixel x in H. Normalizing H(x) yields the conditional probability p(x|b) that pixel x belongs to the background, where |H| is the normalization factor obtained by summing all bins of the histogram H.

p ( x | b ) = 1 | H | H ( x ) 公式3  p ( x | b ) = 1 | h | h ( x ) Formula 3

To avoid distortion of the probability density estimate caused by histogram aliasing, we smooth the histogram with a Gaussian convolution kernel g (see Formula 4, where H is the reference background color histogram) to improve the estimation accuracy.

H = H ∗ g        Formula 4
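Formulas 3 and 4 can be sketched as follows. This is a minimal illustration rather than the patented implementation: it assumes single-channel (grayscale) pixel values in 0–255, an arbitrary bin count of 32, and hypothetical function names.

```python
import numpy as np

def background_histogram(patches, bins=32):
    """Accumulate the reference background color histogram H over pixel
    values of the reference patches (grayscale 0..255 for simplicity)."""
    H = np.zeros(bins)
    for p in patches:
        idx = (np.asarray(p).ravel() * bins) // 256
        np.add.at(H, idx.astype(int), 1)
    return H

def smooth_histogram(H, sigma=1.0, radius=3):
    """Formula 4: H = H * g, smoothing with a Gaussian kernel to reduce
    aliasing distortion of the density estimate."""
    t = np.arange(-radius, radius + 1)
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    return np.convolve(H, g, mode="same")

def cond_prob(x, H, bins=32):
    """Formula 3: p(x|b) = H(x) / |H|, with |H| the sum over all bins."""
    return H[(x * bins) // 256] / H.sum()
```

In practice one histogram per color channel (or a joint color histogram) would be kept; the single-channel case shown here is only for clarity.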

Applying a nonlinear transform to the conditional probability p(x|b) that a pixel x of the detection image belongs to the background enlarges the linear classification margin between target and background in dynamic scenes and makes the algorithm more robust. Nonlinear transforms include exponential, trigonometric, and negative-logarithm transforms; the purpose of the transform is to enlarge the foreground/background classification margin. For c1 in Figure 3, the low-value interval on the left of the histogram needs to be stretched nonlinearly while the high-value interval on the right is compressed nonlinearly. The negative-logarithm transform has exactly this property of stretching the low-value interval and compressing the high-value interval, so we use a negative-logarithm kernel to transform the conditional probability p(x|b). In information theory, taking the negative logarithm of the conditional probability p(x|b) yields the conditional information I(x|b) of the variable x given b. Conditional information has a clear physical meaning: it expresses the uncertainty of x given the condition b.
In a video, with the reference background b as the condition, the conditional information I(x|b) of the current observation x measures the ability of the reference background to determine the observation x. In a dynamic scene, the reference background b can fully determine regions that have not changed, partially determine regions changed by background perturbation, and can hardly determine regions changed by target motion. Conditional information is therefore a very effective classification feature for detecting moving targets in dynamic scenes; with it, perturbed background and moving targets can be separated linearly.

Applying the negative-logarithm kernel to the conditional probability density p(x|b) yields a new image feature I(x|b), as shown in Formula 5. Image features are locally consistent: a pixel is not isolated but related to the pixels in its neighborhood. The feature of pixel x is influenced by the features of its neighbors. As shown in Formula 6, the condition information I(x_kl|b) of all pixels x_kl in the neighborhood of x is weighted and summed to form the condition information I′(x|b) of pixel x; a linear classifier (Formula 7) then separates foreground from background with a threshold τ (typically 5). In Formula 6, α_kl is the weight; one may choose uniform weights α_kl = 1/(BL×BL) (BL being the neighborhood width), Gaussian kernel weights, or the proportion of pixel x_kl in the neighborhood color distribution. In the present invention we use uniform weights.

I(x|b) = −log p(x|b)        Formula 5

I′(x|b) = −Σ_{k=1}^{BL} Σ_{l=1}^{BL} α_kl · log(p(x_kl|b))        Formula 6

x = 1 if I′(x|b) > τ, 0 otherwise        Formula 7
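The condition-information feature and the linear classifier of Formulas 5–7 can be sketched as follows. Uniform weights α_kl = 1/(BL·BL) reduce the weighted sum to a mean; a small ε guards the logarithm against empty histogram bins. The grayscale/32-bin setup and the function names are illustrative assumptions, not the patented code.

```python
import numpy as np

def condition_information(patch, H, bins=32, eps=1e-6):
    """Formulas 5-6: I'(x|b) = -sum_kl alpha_kl * log p(x_kl|b) over the
    BLxBL neighborhood; with uniform weights this is the mean of -log p."""
    patch = np.asarray(patch)
    idx = (patch * bins // 256).astype(int)
    p = H[idx] / H.sum()
    return -np.mean(np.log(p + eps))

def classify(patch, H, tau=5.0, bins=32):
    """Formula 7: foreground (1) if I'(x|b) > tau, else background (0)."""
    return 1 if condition_information(patch, H, bins) > tau else 0
```

A neighborhood drawn from well-represented background colors yields p(x|b) near 1 and condition information near 0 (background); a neighborhood of colors unseen in the reference background yields large condition information and is classified as foreground.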

The target detection method based on spatio-temporal condition information described above must compute a reference background color histogram for every pixel and a weighted sum of the condition information over every pixel neighborhood; its computational complexity is high and its real-time performance poor. Below, image blocks replace individual pixels as the unit of modeling and detection, to reduce algorithmic complexity and storage requirements and to increase speed.

A moving target in video exhibits local consistency in both the temporal and spatial domains, and an image block reflects this local consistency better than a single pixel. Modeling the reference background with image blocks and performing detection block by block therefore not only reduces algorithmic complexity, computation, and storage requirements, but also suppresses isolated noise better, without degrading detection accuracy.

Unlike the method above, which constructs a reference background domain for every pixel, we partition the image into blocks and use the background color distribution of a block as the reference distribution shared by all pixels inside it, reducing the number of color histograms and the complexity of the algorithm. As shown in Figure 5, all pixels inside image block 5 share one reference background domain 4; the condition information is computed from the shared background color distribution of that domain, and the condition information of all pixels inside block 5 is weighted and summed as the classification feature of the block. The block is then classified with the linear classifier, using the same detection rule as before (Formula 7).

In the spatio-temporal domain model above (Figure 4), N frames of the background sequence must be buffered and a new reference background color histogram computed at every detection. Buffering an N-frame image sequence requires considerable storage, which makes the method difficult to apply on embedded platforms with limited memory. The reference background color distribution does not change every frame, so recomputing the reference background color histogram from the N buffered frames at every detection is unnecessary. We compute, per image block, the color histogram of the N buffered frames and keep it as the background data of that block in place of the original N buffered frames; this reduces storage requirements, and updating the histogram suffices to adapt to scene illumination changes. At detection time, the reference background distribution of image block 5 is obtained by simply summing the color histograms inside the reference background region 4 corresponding to block 5.
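The summation of stored per-block histograms into the reference background distribution of a block can be sketched as follows; the dictionary layout, the neighborhood radius, and the function name are assumptions made for illustration.

```python
import numpy as np

def reference_distribution(Hs, m, n, radius=1):
    """Sum the stored per-block background histograms inside the reference
    background region of block (m, n) -- the block and its surround.
    Hs: dict mapping block position (m, n) -> histogram (numpy array)."""
    H = None
    for dm in range(-radius, radius + 1):
        for dn in range(-radius, radius + 1):
            h = Hs.get((m + dm, n + dn))
            if h is not None:  # blocks outside the image border are skipped
                H = h.copy() if H is None else H + h
    return H
```

Because only one histogram per block is stored instead of N frames of pixels, the memory footprint no longer grows with the length of the background sequence.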

In practical video surveillance, most regions of the image behave as a static scene most of the time, i.e. the image in such regions is stable and changes rarely: building facades, the ground, and other stationary objects. For the many static-scene regions of an image there is no need to apply the dynamic-scene detection method; the simplest background differencing suffices to determine whether such a region has changed, reducing the amount of data processed by the condition-information method and increasing the speed of the practical application.

Image differencing is the simplest way to detect changed image regions: differencing identifies the changed regions of the image in advance as candidate regions, which are then processed further with the condition-information method to decide whether each is perturbed background or a real moving target. Block differencing takes into account the spatio-temporal local consistency of target motion; compared with per-pixel differencing it suppresses isolated noise better and combines easily with the block condition-information detection. Block-difference pre-detection quickly identifies the changed blocks of the image as candidate detection regions, reducing the amount of data processed by the block condition-information detection. Block differencing computes the SAD (Sum of Absolute Differences) of each block: as shown in Formula 8, the sum of absolute differences between corresponding blocks of the input image and the background image is computed, where BL is the block width and m, n denote the position of the block in the image. Binarizing the block-difference result (Formula 9, with binarization threshold T) pre-detects the changed image regions as candidate detection regions.

SAD ( m , n ) = Σ x = 1 BL Σ y = 1 BL | I ( m × BL + x , n × BL + y ) - B ( m × BL + x , n × BL + y ) | 公式8  SAD ( m , no ) = Σ x = 1 BL Σ the y = 1 BL | I ( m × BL + x , no × BL + the y ) - B ( m × BL + x , no × BL + the y ) | Formula 8

IB = 1 if SAD > T, 0 otherwise        Formula 9
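The block-difference pre-detection of Formulas 8 and 9 can be sketched as follows, assuming a grayscale image whose dimensions are multiples of BL; the values of BL and T are illustrative, and the function name is an assumption.

```python
import numpy as np

def sad_predetect(cur, bck, BL=8, T=200):
    """Formulas 8-9: per-block sum of absolute differences between the input
    image cur and the reference background bck, binarized with threshold T.
    Returns a boolean map of candidate (changed) blocks."""
    h, w = cur.shape
    M, N = h // BL, w // BL
    candidates = np.zeros((M, N), dtype=bool)
    for m in range(M):
        for n in range(N):
            blk_c = cur[m * BL:(m + 1) * BL, n * BL:(n + 1) * BL].astype(int)
            blk_b = bck[m * BL:(m + 1) * BL, n * BL:(n + 1) * BL].astype(int)
            candidates[m, n] = np.abs(blk_c - blk_b).sum() > T
    return candidates
```

Only the blocks flagged here are passed to the more expensive condition-information detection, which is what makes the pre-detection step pay off in mostly static scenes.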

To adapt to scene illumination changes, the background model must be updated. The update has two parts: updating the block background color histograms used by the condition-information detection, and updating the reference background used by the block-difference pre-detection. For the color histograms, the update strategy is as follows: the histogram is updated (Formula 10) only when the current detection region is background; while the background color histogram of the current block is updated, the background color histograms of the blocks in its neighborhood are updated selectively. Specifically, with a certain probability, one block is chosen at random from the neighborhood of the current block, and the background color histogram of the chosen block is updated with the color histogram of the current block in the current frame, using the same rule (Formula 10). Updating the model only when the current region is background adapts to illumination changes without wrongly merging moving-target information into the background, and thus avoids the missed detections caused by sliding-window updates that blend target information into the reference background.
Selectively updating the neighborhood background while updating the current region overcomes the false detections that arise, when targets move in or out, from updating only background regions and never foreground regions: a region detected as foreground is updated through its neighborhood background, so after a period of accumulation the background of regions that targets have moved into or out of is fully updated.

Formula 10 forms the new reference background distribution H_mn as the weighted sum of the block background color histogram H_mn and the color histogram H_mnc of the block in the current frame, where β0 is the weight; the larger β0, the faster the update.

Hmn=Hmn*(1-β0)+Hmnc0                    公式10  H mn =H mn *(1-β 0 )+H mnc0 Formula 10

Unlike the histogram update in the condition-information method, the reference background of the block-difference pre-detection is updated with a strategy of fast background update and slow foreground update. Based on the final detection result, the update follows Formula 11: when a block is detected as foreground it is updated slowly at rate β1, and when it is detected as background it is updated quickly at rate β2, where β1 > β2. Since block differencing is only a pre-detection, to adapt better to environmental change we update not only the background but also the foreground here, only at a very slow rate. Practical tests show that this does not cause missed detections and adapts better to environmental change and to targets moving in and out.

b = x·(1 − β1) + b·β1 if x = F;  b = x·(1 − β2) + b·β2 otherwise        Formula 11
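Formula 11 can be sketched as follows. Note the roles of the weights: β1 > β2, so a foreground block retains more of the old background b (slow update), while a background block takes more of the new observation x (fast update). The concrete β values and the function name are illustrative.

```python
import numpy as np

def update_reference_background(b, x, is_foreground, beta1=0.99, beta2=0.90):
    """Formula 11: b = x*(1-beta1) + b*beta1 for foreground blocks (slow),
    b = x*(1-beta2) + b*beta2 for background blocks (fast), beta1 > beta2."""
    beta = beta1 if is_foreground else beta2
    return x * (1 - beta) + b * beta
```
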

The basic flow of the dynamic-scene moving-target detection method based on spatio-temporal condition information is as follows:

1. Model initialization

Acquire a background image as the reference background for the block-difference pre-detection. Partition the image into blocks and, on the reference background image, compute the color histogram of each block IB_mn as that block's initial background color histogram H_mn.

2. Block-difference pre-detection

Compute the sum of absolute differences of each block with Formula 8, and threshold the result with Formula 9 to pre-detect the changed blocks as candidate detection regions.

3. Secondary detection of candidate blocks

On the candidate detection regions obtained by the block-difference pre-detection of step 2, perform a secondary detection using condition information. First compute the reference background histogram H of each candidate block, then compute the condition information of all pixels in the block and form their weighted sum, and finally binarize it with Formula 7 to obtain the detection result image (BinImg).

4. Model update

Update the model according to the detection result (BinImg), including the reference background and the block color histograms. Update the reference background according to Formula 11. To update the block color histograms, first compute the color distribution histogram H_mnc of each block in the current frame, then update the block background color histograms according to the update rule described above.

Claims (8)

1. A moving-target detection method based on spatio-temporal condition information, characterized in that: the conditional probability p(x|b) that a pixel (3) of the detection image belongs to the reference background is transformed nonlinearly and used as the classification feature for moving-target detection, and a linear classifier separates foreground from background in dynamic scenes.
2. The moving-target detection method based on spatio-temporal condition information according to claim 1, characterized in that the conditional probability p(x|b) is transformed nonlinearly with the negative logarithm to obtain the condition information I(x|b) of the pixel (3) given the reference background, which serves as the image classification feature.
3. The moving-target detection method based on spatio-temporal condition information according to claim 2, characterized in that the condition information I(x|b) of the pixels in the neighborhood of pixel (3) is weighted to form the final classification feature of the detected pixel (3).
4. The moving-target detection method based on spatio-temporal condition information according to claim 1 or 3, characterized in that a center-surround spatio-temporal domain model is used to compute the conditional probability p(x|b) and, from it, the weighted condition information of the pixel (3). The spatio-temporal domain model constructs a center region (1) centered on the currently detected pixel (3) and determines the reference background from the surround region (2) corresponding to the center region (1): all N−1 frames of the background sequence images BckSeq (4) within the outer boundary of the surround region (2), together with the surround region 2 of the current detection image CurImg, serve as the reference background b for computing the conditional probability p(x|b). The center region (1) serves as the neighborhood of pixel (3) for computing the weighted condition information.
5. The moving-target detection method based on spatio-temporal condition information according to claim 4, characterized in that the image is partitioned into blocks and detection is accelerated by using image blocks (5) instead of single pixels (3); the weighted sum of the condition information of the pixels x inside a block serves as the feature of that block for detection and classification.
6. The moving-target detection method based on spatio-temporal condition information according to claim 5, characterized in that the reference background probability distribution is modeled directly with block color distribution histograms: the color distribution histogram H of the image blocks in the reference background sequence serves as the probability distribution p(b) of the reference background b, and the conditional probability p(x|b) is computed directly from the histogram H.
7. The moving-target detection method based on spatio-temporal condition information according to claim 5, characterized in that block-difference pre-detection is used to detect the changed blocks of the image in advance as candidates, which are then detected a second time using condition information.
8. The moving-target detection method based on spatio-temporal condition information according to claim 6, characterized in that, when an image block is detected as background, the reference background color histogram of that block is updated, and an image block in its neighborhood is chosen at random and its reference background color histogram is updated as well.
CN2012102513544A 2012-07-19 2012-07-19 Time-space condition information based moving object detection method Pending CN102903120A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012102513544A CN102903120A (en) 2012-07-19 2012-07-19 Time-space condition information based moving object detection method


Publications (1)

Publication Number Publication Date
CN102903120A true CN102903120A (en) 2013-01-30

Family

ID=47575333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012102513544A Pending CN102903120A (en) 2012-07-19 2012-07-19 Time-space condition information based moving object detection method

Country Status (1)

Country Link
CN (1) CN102903120A (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104168405A (en) * 2013-05-20 2014-11-26 聚晶半导体股份有限公司 Noise suppression method and image processing apparatus
CN104408742A (en) * 2014-10-29 2015-03-11 河海大学 Moving object detection method based on space-time frequency spectrum combined analysis
CN104616323A (en) * 2015-02-28 2015-05-13 苏州大学 Space-time significance detecting method based on slow characteristic analysis
TWI493160B (en) * 2013-05-13 2015-07-21 Global Fiberoptics Inc Method for measuring the color uniformity of a light spot and apparatus for measuring the same
CN105488812A (en) * 2015-11-24 2016-04-13 江南大学 Motion-feature-fused space-time significance detection method
CN105551014A (en) * 2015-11-27 2016-05-04 江南大学 Image sequence change detection method based on belief propagation algorithm with time-space joint information
CN105631898A (en) * 2015-12-28 2016-06-01 西北工业大学 Infrared motion object detection method based on spatio-temporal saliency fusion
CN106447656A (en) * 2016-09-22 2017-02-22 江苏赞奇科技股份有限公司 Rendering flawed image detection method based on image recognition
CN108133488A (en) * 2017-12-29 2018-06-08 安徽慧视金瞳科技有限公司 A kind of infrared image foreground detection method and equipment
CN109886132A (en) * 2019-01-25 2019-06-14 北京市遥感信息研究所 A method, device and system for detecting aircraft target in sea of clouds background
CN109961042A (en) * 2019-03-22 2019-07-02 中国人民解放军国防科技大学 A Smoke Detection Method Combining Deep Convolutional Neural Networks and Visual Change Graphs
CN111476815A (en) * 2020-04-03 2020-07-31 浙江大学 Moving object detection method based on color probability of moving area
CN112101148A (en) * 2020-08-28 2020-12-18 普联国际有限公司 A moving target detection method, device, storage medium and terminal device
CN113542588A (en) * 2021-05-28 2021-10-22 上海第二工业大学 An anti-interference electronic image stabilization method based on visual saliency
CN115200544A (en) * 2022-07-06 2022-10-18 中国电子科技集团公司第三十八研究所 Method and device for tracking target of maneuvering measurement and control station
CN115359085A (en) * 2022-08-10 2022-11-18 哈尔滨工业大学 A Dense Clutter Suppression Method Based on Spatial-Temporal Density Discrimination of Detection Points

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0737100A (en) * 1993-07-15 1995-02-07 Tokyo Electric Power Co Inc:The Moving object detection and determination device
CN101482923A (en) * 2009-01-19 2009-07-15 刘云 Human body target detection and sexuality recognition method in video monitoring
CN102254394A (en) * 2011-05-31 2011-11-23 西安工程大学 Antitheft monitoring method for poles and towers in power transmission line based on video difference analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YASER SHEIKH等: "Bayesian Modeling of Dynamic Scenes for Object Detection", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》, vol. 27, no. 11, 30 November 2005 (2005-11-30), pages 1778 - 1792, XP001512568, DOI: doi:10.1109/TPAMI.2005.213 *
SHAN Yong: "复杂条件下视频运动目标检测和跟踪" (Video Moving Object Detection and Tracking under Complex Conditions), 《中国博士学位论文全文数据库》 (China Doctoral Dissertations Full-text Database), no. 06, 15 December 2007 (2007-12-15), pages 17 - 18 *

CN113542588A (en) * 2021-05-28 2021-10-22 上海第二工业大学 An anti-interference electronic image stabilization method based on visual saliency
CN115200544A (en) * 2022-07-06 2022-10-18 中国电子科技集团公司第三十八研究所 Method and device for tracking target of maneuvering measurement and control station
CN115359085A (en) * 2022-08-10 2022-11-18 哈尔滨工业大学 A Dense Clutter Suppression Method Based on Spatial-Temporal Density Discrimination of Detection Points
CN115359085B (en) * 2022-08-10 2023-04-04 哈尔滨工业大学 Dense clutter suppression method based on detection point space-time density discrimination

Similar Documents

Publication Publication Date Title
CN102903120A (en) Time-space condition information based moving object detection method
CN108492319B (en) Moving target detection method based on deep full convolution neural network
CN103077539B (en) Motion target tracking method under a kind of complex background and obstruction conditions
CN103971386B (en) A kind of foreground detection method under dynamic background scene
CN111723773B (en) Method and device for detecting carryover, electronic equipment and readable storage medium
CN109685045B (en) Moving target video tracking method and system
Qu et al. A pedestrian detection method based on yolov3 model and image enhanced by retinex
CN103530893B (en) Based on the foreground detection method of background subtraction and movable information under camera shake scene
US20160019698A1 (en) Systems and methods for people counting in sequential images
CN108122247A (en) A kind of video object detection method based on saliency and feature prior model
CN110598558A (en) Crowd density estimation method, device, electronic equipment and medium
CN109544592B (en) Moving Object Detection Algorithm for Camera Movement
US20170337431A1 (en) Image processing apparatus and method and monitoring system
CN114241511B (en) Weak supervision pedestrian detection method, system, medium, equipment and processing terminal
CN108647649A (en) The detection method of abnormal behaviour in a kind of video
CN109741293A (en) Significant detection method and device
CN113409360A (en) High altitude parabolic detection method and device, equipment and computer storage medium
CN109934224A (en) Small target detecting method based on markov random file and visual contrast mechanism
CN110825900A (en) Training method of feature reconstruction layer, reconstruction method of image features and related device
Hu et al. A novel approach for crowd video monitoring of subway platforms
CN104809742A (en) Article safety detection method in complex scene
Nizar et al. Multi-object tracking and detection system based on feature detection of the intelligent transportation system
Singh et al. Motion detection for video surveillance
Wu et al. Overview of video-based vehicle detection technologies
Lian et al. A novel method on moving-objects detection based on background subtraction and three frames differencing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: SHANXI GREEN ELECTRO-OPTIC INDUSTRY TECHNOLOGY INS

Free format text: FORMER OWNER: DEFENSIVE SCIENTIFIC AND TECHNOLOGICAL UNIV., PLA

Effective date: 20130514

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 410073 CHANGSHA, HUNAN PROVINCE TO: 033300 LVLIANG, SHANXI PROVINCE

TA01 Transfer of patent application right

Effective date of registration: 20130514

Address after: No. 1 Beihe, Geduo Village, Lijiawan Township, Liulin County, Lvliang, Shanxi Province, 033300

Applicant after: SHANXI GREEN OPTOELECTRONIC INDUSTRY SCIENCE AND TECHNOLOGY RESEARCH INSTITUTE (CO., LTD.)

Address before: No. 47 Yanwachi Main Street, Kaifu District, Changsha, Hunan Province, 410073

Applicant before: National University of Defense Technology of People's Liberation Army of China

ASS Succession or assignment of patent right

Owner name: HUNAN VISIONSPLEND OPTOELECTRONIC TECHNOLOGY CO.,

Free format text: FORMER OWNER: SHANXI GREEN ELECTRO-OPTIC INDUSTRY TECHNOLOGY INSTITUTE (CO., LTD.)

Effective date: 20140110

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 033300 LVLIANG, SHANXI PROVINCE TO: 410073 CHANGSHA, HUNAN PROVINCE

TA01 Transfer of patent application right

Effective date of registration: 20140110

Address after: Floor 5, Block A, Di Shang Yong Building, No. 303 Sanyi Avenue, Kaifu District, Changsha, Hunan Province, 410073

Applicant after: HUNAN VISION SPLEND PHOTOELECTRIC TECHNOLOGY Co.,Ltd.

Address before: No. 1 Beihe, Geduo Village, Lijiawan Township, Liulin County, Lvliang, Shanxi Province, 033300

Applicant before: SHANXI GREEN OPTOELECTRONIC INDUSTRY SCIENCE AND TECHNOLOGY RESEARCH INSTITUTE (CO., LTD.)

C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130130